title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
What language could I use for fast execution of this database summarization task? | 1,467,898 | <p>So I wrote a Python program to handle a little data processing
task.</p>
<p>Here's a very brief specification in a made-up language of the computation I want:</p>
<pre><code>parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
</code></pre>
<p>That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.</p>
<p>I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
    aa, bb, cc = line.split()
    # We want the top 5 for each distinct value of aa. There are
    # hundreds of thousands of values of aa.
    bb = float(bb)
    if aa not in top_5: top_5[aa] = []
    current = top_5[aa]
    current.append((bb, cc))
    # Every once in a while, we drop the values that are not in
    # the top 5, to keep our memory footprint down, because some
    # values of aa have thousands of (bb, cc) pairs.
    if len(current) > 10:
        current.sort()
        current[:-5] = []
for aa in top_5:
    current = top_5[aa]
    current.sort()
    for bb, cc in current[-5:]:
        print aa, bb, cc
</code></pre>
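<p>The periodic sort-and-truncate trick above can also be replaced by a bounded min-heap per key, which avoids re-sorting entirely; a minimal sketch in modern Python (illustrative only, not part of the original program; tie-breaking on equal <code>bb</code> values may differ slightly from the script above):</p>

```python
import heapq

def top5(lines):
    """Keep the top five (bb, cc) pairs per aa, using a size-5 min-heap per key."""
    best = {}
    for line in lines:
        aa, bb, cc = line.split()
        item = (float(bb), cc)
        heap = best.setdefault(aa, [])
        if len(heap) < 5:
            heapq.heappush(heap, item)
        elif item > heap[0]:
            # The new item beats the smallest of the current five: swap it in.
            heapq.heappushpop(heap, item)
    return {aa: sorted(heap) for aa, heap in best.items()}
```

On the sample input below, this produces the same six output rows as the original script.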
<p>Here's some sample input data:</p>
<pre><code>3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
</code></pre>
<p>Here's the output I get from it:</p>
<pre><code>3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
</code></pre>
<p>There are seven values for <code>3</code>, and so we drop the <code>c</code> and <code>d</code> values
because their <code>bb</code> value puts them out of the top 5. Because <code>4</code> has
only one value, its “top 5” consists of just that one value.</p>
<p>This runs faster than doing the same queries in MySQL (at least, the
way we've found to do the queries) but I'm pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I'd like to
write it in a language that has a faster implementation.</p>
<p>But I'm not sure what language to choose.</p>
<p>I haven't been able to figure out how to express this as a single query in SQL, and
actually I'm really unimpressed with MySQL's ability even to merely
<code>select * from foo into outfile 'bar';</code> the input data.</p>
<p>C is an obvious choice, but things like <code>line.split()</code>, sorting a list
of 2-tuples, and making a hash table require writing some code that's
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.</p>
<p>C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL. </p>
<p>OCaml would be fine, but does it have an equivalent of <code>line.split()</code>,
and will I be sad about the performance of its map?</p>
<p>Common Lisp might work?</p>
<p>Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried <a href="http://hadoop.apache.org/pig/">Pig</a>?</p>
<p>(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)</p>
<p>(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)</p>
<p>Edit:</p>
<p>There was an <a href="http://groups.google.com/group/sbcl-devel/browse%5Fthread/thread/f70c47e9f22d158a/9349b7b72943d314">eerily similar question asked on sbcl-devel in 2007</a> (thanks, Rainer!), and here's an <code>awk</code> script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):</p>
<pre><code>BEGIN {
  for (i = 0; i &lt; 27000000; i++) {
    v = rand();
    k = int(rand() * 100);
    print k " " v " " i;
  }
  exit;
}
</code></pre>
| 9 | 2009-09-23T18:45:07Z | 1,467,922 | <p>I have a hard time believing that any script without any prior knowledge of the data (unlike MySQL, which has such info pre-loaded) would be faster than a SQL approach.</p>
<p>Aside from the time spent parsing the input, the script needs to keep re-sorting its per-key arrays, etc.</p>
<p>The following is a first guess at what should work decently fast in SQL, assuming an index (*) on the table's aa, bb, cc columns, in that order. (A possible alternative would be an "aa, bb DESC, cc" index.)</p>
<p>(*) This index could be clustered or not, without affecting the following query. The choice of clustering, and of needing a separate "aa, bb, cc" index, depends on the use case, on the size of the rows in the table, etc.</p>
<pre><code>SELECT T1.aa, T1.bb, T1.cc , COUNT(*)
FROM tblAbc T1
LEFT OUTER JOIN tblAbc T2 ON T1.aa = T2.aa AND
(T1.bb < T2.bb OR(T1.bb = T2.bb AND T1.cc < T2.cc))
GROUP BY T1.aa, T1.bb, T1.cc
HAVING COUNT(*) < 5 -- trick, remember COUNT(*) goes 1,1,2,3,...
ORDER BY T1.aa, T1.bb, T1.cc, COUNT(*) DESC
</code></pre>
<p>The idea is to get a count of how many records, within a given aa value, are smaller than self. There is a small trick, however: we need to use a LEFT OUTER join, lest we discard the record with the biggest bb value or the last one (which may happen to be one of the top 5). As a result of the left join, the COUNT(*) value counts 1, 1, 2, 3, 4 etc., and the HAVING test therefore is "&lt;5" to effectively pick the top 5.</p>
<p>To emulate the sample output of the OP, the ORDER BY uses DESC on the COUNT(), which could be removed to get a more traditional top-5 type of listing. Also, the COUNT() in the select list can be removed if so desired; this doesn't impact the logic of the query or the ability to sort properly.</p>
<p>Also note that this query is deterministic in its handling of ties, i.e., when a given set of records has the same value for bb (within an aa group); I think the Python program may produce slightly different outputs when the order of the input data is changed, because of its occasional truncating of the sorted per-key lists.</p>
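<p>The counting trick can be checked in miniature by running the same self-join against the OP's sample data in an in-memory SQLite database (SQLite here is merely a convenient stand-in for MySQL/MSSQL; the COUNT(*) column and the DESC ordering are dropped for brevity):</p>

```python
import sqlite3

rows = [("3", 1.5, "a"), ("3", 1.6, "b"), ("3", 0.8, "c"), ("3", 0.9, "d"),
        ("4", 1.2, "q"), ("3", 1.5, "e"), ("3", 1.8, "f"), ("3", 1.9, "g")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblAbc (aa TEXT, bb REAL, cc TEXT)")
conn.executemany("INSERT INTO tblAbc VALUES (?, ?, ?)", rows)

# COUNT(*) ranks each row 1, 1, 2, 3, 4, ... within its aa group, as described above,
# so HAVING COUNT(*) < 5 keeps exactly the top five per group.
top5 = conn.execute("""
    SELECT T1.aa, T1.bb, T1.cc
    FROM tblAbc T1
    LEFT OUTER JOIN tblAbc T2 ON T1.aa = T2.aa AND
         (T1.bb < T2.bb OR (T1.bb = T2.bb AND T1.cc < T2.cc))
    GROUP BY T1.aa, T1.bb, T1.cc
    HAVING COUNT(*) < 5
    ORDER BY T1.aa, T1.bb, T1.cc
""").fetchall()
# top5 matches the OP's six sample output rows
```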
<p><strong>Real solution: A SQL-based procedural approach</strong></p>
<p>The self-join approach described above demonstrates how declarative statements can be used to express the OP's requirement. However, this approach is naive in the sense that its performance is roughly bound to the sum of the squares of the record counts within each aa 'category' (not O(n^2), but roughly O((n/a)^2), where a is the number of different values for the aa column). In other words, it performs well with data such that, on average, the number of records associated with a given aa value doesn't exceed a few dozen. If the data is such that the aa column is not selective, the following approach is much -much!- better suited. It leverages SQL's efficient sorting framework, while implementing a simple algorithm that would be hard to express declaratively. This approach could further be improved, for datasets with a particularly huge number of records in each/most aa 'categories', by introducing a simple binary search for the next aa value, looking ahead (and sometimes back...) in the cursor. For cases where the number of aa 'categories' is low relative to the overall row count in tblAbc, see yet another approach, after this next one.</p>
<pre><code>DECLARE @aa AS VARCHAR(10), @bb AS INT, @cc AS VARCHAR(10)
DECLARE @curAa AS VARCHAR(10)
DECLARE @Ctr AS INT

DROP TABLE tblResults;
CREATE TABLE tblResults
(  aa VARCHAR(10),
   bb INT,
   cc VARCHAR(10)
);

DECLARE abcCursor CURSOR
  FOR SELECT aa, bb, cc
      FROM tblABC
      ORDER BY aa, bb DESC, cc
  FOR READ ONLY;

OPEN abcCursor;
SET @curAa = ''

FETCH NEXT FROM abcCursor INTO @aa, @bb, @cc;
WHILE @@FETCH_STATUS = 0
BEGIN
   IF @curAa &lt;&gt; @aa
   BEGIN
      SET @Ctr = 0
      SET @curAa = @aa
   END
   IF @Ctr &lt; 5
   BEGIN
      SET @Ctr = @Ctr + 1;
      INSERT tblResults VALUES(@aa, @bb, @cc);
   END
   FETCH NEXT FROM abcCursor INTO @aa, @bb, @cc;
END;

CLOSE abcCursor;
DEALLOCATE abcCursor;

SELECT * from tblResults
ORDER BY aa, bb, cc -- OR .. bb DESC ... for a more traditional order.
</code></pre>
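<p>Stripped of the T-SQL cursor machinery, the logic above is just a single pass over rows pre-sorted as in ORDER BY aa, bb DESC, cc, resetting a counter whenever aa changes; a minimal sketch of that scan (in Python, purely for illustration):</p>

```python
def top5_by_scan(rows):
    """One pass over (aa, bb, cc) rows, pre-sorted like ORDER BY aa, bb DESC, cc."""
    rows = sorted(rows, key=lambda r: (r[0], -r[1], r[2]))
    out, cur_aa, ctr = [], None, 0
    for aa, bb, cc in rows:
        if aa != cur_aa:   # new aa 'category': reset the counter
            cur_aa, ctr = aa, 0
        if ctr < 5:        # keep only the first five rows per category
            ctr += 1
            out.append((aa, bb, cc))
    return out
```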
<p><strong>Alternative to the above</strong> for cases when aa is very unselective, in other words, when we have relatively few aa 'categories'. The idea is to go through the list of distinct categories and to run a "LIMIT" (MySQL) / "TOP" (MSSQL) query for each of these values.
For reference purposes, the following ran in 63 seconds for a tblAbc of 61 million records divided among 45 aa values, on MSSQL 8.0, on a relatively old/weak host.</p>
<pre><code>DECLARE @aa AS VARCHAR(10)
DECLARE @aaCount INT

DROP TABLE tblResults;
CREATE TABLE tblResults
(  aa VARCHAR(10),
   bb INT,
   cc VARCHAR(10)
);

DECLARE aaCountCursor CURSOR
  FOR SELECT aa, COUNT(*)
      FROM tblABC
      GROUP BY aa
      ORDER BY aa
  FOR READ ONLY;

OPEN aaCountCursor;
FETCH NEXT FROM aaCountCursor INTO @aa, @aaCount
WHILE @@FETCH_STATUS = 0
BEGIN
   INSERT tblResults
      SELECT TOP 5 aa, bb, cc
      FROM tblAbc
      WHERE aa = @aa
      ORDER BY aa, bb DESC, cc
   FETCH NEXT FROM aaCountCursor INTO @aa, @aaCount;
END;
CLOSE aaCountCursor
DEALLOCATE aaCountCursor

SELECT * from tblResults
ORDER BY aa, bb, cc -- OR .. bb DESC ... for a more traditional order.
</code></pre>
</code></pre>
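<p>The same loop-over-distinct-categories idea can be shown in miniature with SQLite's LIMIT (again just a stand-in for the MSSQL TOP query above), run against the OP's sample data:</p>

```python
import sqlite3

rows = [("3", 1.5, "a"), ("3", 1.6, "b"), ("3", 0.8, "c"), ("3", 0.9, "d"),
        ("4", 1.2, "q"), ("3", 1.5, "e"), ("3", 1.8, "f"), ("3", 1.9, "g")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblAbc (aa TEXT, bb REAL, cc TEXT)")
conn.executemany("INSERT INTO tblAbc VALUES (?, ?, ?)", rows)

# One cheap LIMIT query per distinct aa 'category'.
results = []
for (aa,) in conn.execute("SELECT DISTINCT aa FROM tblAbc ORDER BY aa"):
    results.extend(conn.execute(
        "SELECT aa, bb, cc FROM tblAbc WHERE aa = ? "
        "ORDER BY bb DESC, cc LIMIT 5", (aa,)))
```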
<p><strong>On the question of needing an index or not</strong> (cf. the OP's remark).
When merely running a "SELECT * FROM myTable", a table scan is effectively the fastest approach; no need to bother with indexes. However, the main reason why SQL is typically better suited for this kind of thing (aside from being the repository where the data has been accumulating in the first place, whereas any external solution needs to account for the time to export the relevant data) is that it can rely on indexes to avoid scanning. Many general-purpose languages are far better suited to handle raw processing, but they are fighting an unfair battle with SQL because they need to rebuild any prior knowledge of the data which SQL has gathered in the course of its data collection / import phase. Since sorting is typically a time- and sometimes space-consuming task, SQL, despite its relatively slower processing power, often ends up ahead of alternative solutions.</p>
<p>Also, even without pre-built indexes, modern query optimizers may decide on a plan that involves the creation of a temporary index. And, because sorting is an intrinsic part of a DBMS, SQL servers are generally efficient in that area.</p>
<p><strong>So... Is SQL better?</strong></p>
<p>This said, if we are trying to compare SQL and other languages for pure ETL jobs, i.e. for dealing with <em>heaps</em> (unindexed tables) as input, performing various transformations and filtering, it is likely that multi-threadable utilities written in, say, C, leveraging efficient sorting libraries, would be faster. The determining question in deciding on a SQL vs. non-SQL approach is where the data is located and where it should eventually reside. If we merely need to convert a file to be supplied down "the chain", external programs are better suited. If we have or need the data in a SQL server, there are only rare cases that make it worthwhile to export and process it externally.</p>
| 9 | 2009-09-23T18:50:07Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first row above)
| 9 | 2009-09-23T18:45:07Z | 1,468,019 | <p>Isn't this just as simple as </p>
<pre><code> SELECT DISTINCT aa, bb, cc FROM tablename ORDER BY bb DESC LIMIT 5
</code></pre>
<p>?</p>
<p>Of course, it's hard to tell what would be fastest without testing it against the data. And if this is something you need to run very fast, it might make sense to optimize your database to make the query faster, rather than optimizing the query.</p>
<p>And, of course, if you need the flat file anyway, you might as well use that.</p>
| 0 | 2009-09-23T19:09:24Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first row above)
| 9 | 2009-09-23T18:45:07Z | 1,468,030 | <p>This is a sketch in Common Lisp</p>
<p>Note that for long files there is a penalty for using READ-LINE, because it conses a fresh string for each line. Then use one of the derivatives of READ-LINE that are floating around, which use a line buffer. Also, you might check whether you want the hash table to be case sensitive or not.</p>
<p><strong>second version</strong></p>
<p>Splitting the string is no longer needed, because we do it here. It is low level code, in the hope that some speed gains will be possible. It checks for one or more spaces as field delimiter and also tabs.</p>
<pre><code>(defun read-a-line (stream)
  (let ((line (read-line stream nil nil)))
    (flet ((delimiter-p (c)
             (or (char= c #\space) (char= c #\tab))))
      (when line
        (let* ((s0 (position-if #'delimiter-p line))
               (s1 (position-if-not #'delimiter-p line :start s0))
               (s2 (position-if #'delimiter-p line :start (1+ s1)))
               (s3 (position-if #'delimiter-p line :from-end t)))
          (values (subseq line 0 s0)
                  (list (read-from-string line nil nil :start s1 :end s2)
                        (subseq line (1+ s3)))))))))
</code></pre>
<p>The above function returns two values: the key and a list of the rest.</p>
<pre><code>(defun dbscan (top-5-table stream)
  "get triples from each line and put them in the hash table"
  (loop with aa = nil and bbcc = nil do
        (multiple-value-setq (aa bbcc) (read-a-line stream))
        while aa do
        (setf (gethash aa top-5-table)
              (let ((l (merge 'list (gethash aa top-5-table) (list bbcc)
                              #'> :key #'first)))
                (or (and (nth 5 l) (subseq l 0 5)) l)))))

(defun dbprint (table output)
  "print the hashtable contents"
  (maphash (lambda (aa value)
             (loop for (bb cc) in value
                   do (format output "~a ~a ~a~%" aa bb cc)))
           table))

(defun dbsum (input &optional (output *standard-output*))
  "scan and sum from a stream"
  (let ((top-5-table (make-hash-table :test #'equal)))
    (dbscan top-5-table input)
    (dbprint top-5-table output)))

(defun fsum (infile outfile)
  "scan and sum a file"
  (with-open-file (input infile :direction :input)
    (with-open-file (output outfile
                            :direction :output :if-exists :supersede)
      (dbsum input output))))
</code></pre>
<p><strong>some test data</strong></p>
<pre><code>(defun create-test-data (&key (file "/tmp/test.data") (n-lines 100000))
  (with-open-file (stream file :direction :output :if-exists :supersede)
    (loop repeat n-lines
          do (format stream "~a ~a ~a~%"
                     (random 1000) (random 100.0) (random 10000)))))
</code></pre>
<p>; (create-test-data)</p>
<pre><code>(defun test ()
  (time (fsum "/tmp/test.data" "/tmp/result.data")))
</code></pre>
<p><strong>third version, LispWorks</strong></p>
<p>Uses some SPLIT-STRING and PARSE-FLOAT functions, otherwise generic CL.</p>
<pre><code>(defun fsum (infile outfile)
  (let ((top-5-table (make-hash-table :size 50000000 :test #'equal)))
    (with-open-file (input infile :direction :input)
      (loop for line = (read-line input nil nil)
            while line do
            (destructuring-bind (aa bb cc) (split-string '(#\space #\tab) line)
              (setf bb (parse-float bb))
              (let ((v (gethash aa top-5-table)))
                (unless v
                  (setf (gethash aa top-5-table)
                        (setf v (make-array 6 :fill-pointer 0))))
                (vector-push (cons bb cc) v)
                (when (> (length v) 5)
                  (setf (fill-pointer (sort v #'> :key #'car)) 5))))))
    (with-open-file (output outfile :direction :output :if-exists :supersede)
      (maphash (lambda (aa value)
                 (loop for (bb . cc) across value do
                       (format output "~a ~f ~a~%" aa bb cc)))
               top-5-table))))
</code></pre>
| 3 | 2009-09-23T19:12:02Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first row above)
| 9 | 2009-09-23T18:45:07Z | 1,468,087 | <p>Picking the "top 5" would look something like this. Note that there's no sorting. Nor does any list in the top_5 dictionary ever grow beyond 5 elements.</p>
<pre><code>from collections import defaultdict
import sys

def keep_5(aList, aPair):
    minbb = min(bb for bb, cc in aList)
    bb, cc = aPair
    if bb &lt; minbb:
        return aList
    aList.append(aPair)
    min_i = 0
    for i in xrange(1, 6):
        if aList[i][0] &lt; aList[min_i][0]:
            min_i = i
    aList.pop(min_i)
    return aList

top_5 = defaultdict(list)
for row in sys.stdin:
    aa, bb, cc = row.split()
    bb = float(bb)
    if len(top_5[aa]) &lt; 5:
        top_5[aa].append((bb, cc))
    else:
        top_5[aa] = keep_5(top_5[aa], (bb, cc))
</code></pre>
| -1 | 2009-09-23T19:23:02Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | <p>So I wrote a Python program to handle a little data processing
task.</p>
<p>Here's a very brief specification in a made-up language of the computation I want:</p>
<pre><code>parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
</code></pre>
<p>That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.</p>
<p>I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
current.append((bb, cc))
# Every once in a while, we drop the values that are not in
# the top 5, to keep our memory footprint down, because some
# values of aa have thousands of (bb, cc) pairs.
if len(current) > 10:
current.sort()
current[:-5] = []
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
</code></pre>
<p>Hereâs some sample input data:</p>
<pre><code>3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
</code></pre>
<p>Hereâs the output I get from it:</p>
<pre><code>3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
</code></pre>
<p>There are seven values for <code>3</code>, and so we drop the <code>c</code> and <code>d</code> values
because their <code>bb</code> value puts them out of the top 5. Because <code>4</code> has
only one value, its âtop 5â consists of just that one value.</p>
<p>This runs faster than doing the same queries in MySQL (at least, the
way weâve found to do the queries) but Iâm pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So Iâd like to
write it in a language that has a faster implementation.</p>
<p>But Iâm not sure what language to choose.</p>
<p>I havenât been able to figure out how to express this as a single query in SQL, and
actually Iâm really unimpressed with MySQLâs ability even to merely
<code>select * from foo into outfile 'bar';</code> the input data.</p>
<p>C is an obvious choice, but things like <code>line.split()</code>, sorting a list
of 2-tuples, and making a hash table require writing some code thatâs
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.</p>
<p>C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL. </p>
<p>OCaml would be fine, but does it have an equivalent of <code>line.split()</code>,
and will I be sad about the performance of its map?</p>
<p>Common Lisp might work?</p>
<p>Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried <a href="http://hadoop.apache.org/pig/">Pig</a>?</p>
<p>(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)</p>
<p>(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)</p>
<p>Edit:</p>
<p>There was an <a href="http://groups.google.com/group/sbcl-devel/browse%5Fthread/thread/f70c47e9f22d158a/9349b7b72943d314">eerily similar question asked on sbcl-devel in 2007</a> (thanks, Rainer!), and here's an <code>awk</code> script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):</p>
<pre><code>BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
</code></pre>
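<p>For anyone without the real dataset, here is a rough Python equivalent of that awk generator (a sketch under the same assumptions: uniform keys 0–99 and uniform scores, so, like the awk version, it lacks the Zipfian key distribution of the real data):</p>

```python
import random

def generate(n, num_keys=100, seed=None):
    # Mirror the awk script: lines of "k v i" with k uniform in
    # [0, num_keys) and v uniform in [0, 1).
    rng = random.Random(seed)
    for i in range(n):
        yield "%d %s %d" % (rng.randrange(num_keys), rng.random(), i)

# Write ten sample lines (use 27000000 for a full-size test file):
for line in generate(10, seed=42):
    print(line)
```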
| 9 | 2009-09-23T18:45:07Z | 1,468,626 | <p>Here is a C++ solution. I didn't have a lot of data to test it with, however, so I don't know how fast it actually is.</p>
<p>[edit] Thanks to the test data provided by the awk script in this thread, I
managed to clean up and speed up the code a bit. I am not trying to find out the fastest possible version - the intent is to provide a reasonably fast version that isn't as ugly as people seem to think STL solutions can be. </p>
<p>This version should be about twice as fast as the first version (it goes through 27 million lines in about 35 seconds). GCC users, remember to
compile with <code>-O2</code>.</p>
<pre><code>#include <map>
#include <iostream>
#include <functional>
#include <utility>
#include <string>
int main() {
using namespace std;
typedef std::map<string, std::multimap<double, string> > Map;
Map m;
string aa, cc;
double bb;
std::cin.sync_with_stdio(false); // Dunno if this has any effect, but anyways.
while (std::cin >> aa >> bb >> cc)
{
if (m[aa].size() == 5)
{
Map::mapped_type::iterator iter = m[aa].begin();
if (bb < iter->first)
continue;
m[aa].erase(iter);
}
m[aa].insert(make_pair(bb, cc));
}
for (Map::const_iterator iter = m.begin(); iter != m.end(); ++iter)
for (Map::mapped_type::const_iterator iter2 = iter->second.begin();
iter2 != iter->second.end();
++iter2)
std::cout << iter->first << " " << iter2->first << " " << iter2->second <<
std::endl;
}
</code></pre>
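<p>For what it's worth, the keep-at-most-five idea in this answer maps directly onto a bounded min-heap in Python. The sketch below illustrates the same algorithm rather than competing on speed; <code>heap[0]</code> plays the role of <code>m[aa].begin()</code>, with the minor difference that ties on <code>bb</code> are broken by <code>cc</code> here.</p>

```python
import heapq
from collections import defaultdict

def top5_stream(lines):
    heaps = defaultdict(list)  # aa -> min-heap of at most five (bb, cc) pairs
    for line in lines:
        aa, bb, cc = line.split()
        heap = heaps[aa]
        item = (float(bb), cc)
        if len(heap) < 5:
            heapq.heappush(heap, item)
        elif item > heap[0]:
            # The new pair beats the smallest retained one: swap it in.
            heapq.heapreplace(heap, item)
    out = []
    for aa in sorted(heaps):
        for bb, cc in sorted(heaps[aa]):
            out.append("%s %s %s" % (aa, bb, cc))
    return out
```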
| 1 | 2009-09-23T21:32:51Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first occurrence above)
| 9 | 2009-09-23T18:45:07Z | 1,468,831 | <p>This took 45.7s on my machine with 27M rows of data that looked like this:</p>
<pre><code>42 0.49357 0
96 0.48075 1
27 0.640761 2
8 0.389128 3
75 0.395476 4
24 0.212069 5
80 0.121367 6
81 0.271959 7
91 0.18581 8
69 0.258922 9
</code></pre>
<p>Your script took 1m42 on this data, and the C++ example took 1m46 (compiled with <code>g++ t.cpp -o t</code>; I don't know anything about C++).</p>
<p>Java 6, not that it matters really. Output isn't perfect, but it's easy to fix.</p>
<pre><code>package top5;
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;
public class Main {
public static void main(String[] args) throws Exception {
long start = System.currentTimeMillis();
Map<String, Pair[]> top5map = new TreeMap<String, Pair[]>();
BufferedReader br = new BufferedReader(new FileReader("/tmp/file.dat"));
String line = br.readLine();
while(line != null) {
String parts[] = line.split(" ");
String key = parts[0];
double score = Double.valueOf(parts[1]);
String value = parts[2];
Pair[] pairs = top5map.get(key);
boolean insert = false;
Pair p = null;
if (pairs != null) {
insert = (score > pairs[pairs.length - 1].score) || pairs.length < 5;
} else {
insert = true;
}
if (insert) {
p = new Pair(score, value);
if (pairs == null) {
pairs = new Pair[1];
pairs[0] = new Pair(score, value);
} else {
if (pairs.length < 5) {
Pair[] newpairs = new Pair[pairs.length + 1];
System.arraycopy(pairs, 0, newpairs, 0, pairs.length);
pairs = newpairs;
}
int k = 0;
for(int i = pairs.length - 2; i >= 0; i--) {
if (pairs[i].score <= p.score) {
pairs[i + 1] = pairs[i];
} else {
k = i + 1;
break;
}
}
pairs[k] = p;
}
top5map.put(key, pairs);
}
line = br.readLine();
}
for(Map.Entry<String, Pair[]> e : top5map.entrySet()) {
System.out.print(e.getKey());
System.out.print(" ");
System.out.println(Arrays.toString(e.getValue()));
}
System.out.println(System.currentTimeMillis() - start);
}
static class Pair {
double score;
String value;
public Pair(double score, String value) {
this.score = score;
this.value = value;
}
public int compareTo(Object o) {
Pair p = (Pair) o;
return (int)Math.signum(score - p.score);
}
public String toString() {
return String.valueOf(score) + ", " + value;
}
}
}
</code></pre>
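<p>The insert-into-a-small-sorted-array trick above (shift elements, drop the worst) has a near-direct Python analogue using <code>bisect.insort</code>; this sketch is here to compare the algorithm, not the languages. The list is kept ascending, so the smallest retained pair sits at index 0:</p>

```python
import bisect

def top5_insort(lines):
    table = {}  # aa -> ascending list of at most five (bb, cc) pairs
    for line in lines:
        aa, bb, cc = line.split()
        pairs = table.setdefault(aa, [])
        item = (float(bb), cc)
        if len(pairs) < 5:
            bisect.insort(pairs, item)
        elif item > pairs[0]:
            bisect.insort(pairs, item)
            del pairs[0]  # evict the smallest to stay at five
    out = []
    for aa in sorted(table):
        for bb, cc in table[aa]:
            out.append("%s %s %s" % (aa, bb, cc))
    return out
```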
<p>AWK script to fake the data:</p>
<pre><code>BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
</code></pre>
| 3 | 2009-09-23T22:19:09Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first occurrence above)
| 9 | 2009-09-23T18:45:07Z | 1,469,267 | <p>Interestingly, the original Python solution is by far the cleanest <em>looking</em> (although the C++ example comes close). </p>
<p>How about using Pyrex or Psyco on your original code?</p>
| 1 | 2009-09-24T00:37:18Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first occurrence above)
| 9 | 2009-09-23T18:45:07Z | 1,469,525 | <p>Of <em>all</em> the programs in this thread that I've tested so far, <strong>the OCaml version is the fastest</strong> and also among the <strong>shortest</strong>. (Line-of-code-based measurements are a little fuzzy, but it's not <em>clearly longer</em> than the Python version or the C or C++ versions, and it <em>is</em> clearly faster.)</p>
<blockquote>
<p>Note: I figured out why my earlier runtimes were so nondeterministic! My CPU heatsink was clogged with dust and my CPU was overheating as a result. Now I am getting nice deterministic benchmark times. I think I've now redone all the timing measurements in this thread now that I have a reliable way to time things.</p>
</blockquote>
<p>Here are the timings for the different versions so far, running on a 27-million-row 630-megabyte input data file. I'm on Ubuntu Intrepid Ibex on a dual-core 1.6GHz Celeron, running a 32-bit version of the OS (the Ethernet driver was broken in the 64-bit version). I ran each program five times and report the range of times those five tries took. I'm using Python 2.5.2, OpenJDK 1.6.0.0, OCaml 3.10.2, GCC 4.3.2, SBCL 1.0.8.debian, and Octave 3.0.1.</p>
<ul>
<li>SquareCog's Pig version: not yet tested (because I can't just <code>apt-get install pig</code>), <em>7</em> lines of code.</li>
<li>mjv's pure SQL version: not yet tested, but I predict a runtime of several days; <em>7</em> lines of code.</li>
<li>ygrek's OCaml version: <strong>68.7 seconds</strong> ±0.9 in <em>15</em> lines of code.</li>
<li>My Python version: <strong>169 seconds</strong> ±4 or <strong>86 seconds</strong> ±2 with Psyco, in <em>16</em> lines of code.</li>
<li>abbot's heap-based Python version: <strong>177 seconds</strong> ±5 in <em>18</em> lines of code, or <strong>83 seconds</strong> ±5 with Psyco.</li>
<li>My C version below, composed with GNU <code>sort -n</code>: <strong>90 + 5.5 seconds</strong> (±3, ±0.1), but gives the wrong answer because of a deficiency in GNU <code>sort</code>, in <em>22</em> lines of code (including one line of shell.)</li>
<li>hrnt's C++ version: <strong>217 seconds</strong> ±3 in <em>25</em> lines of code.</li>
<li>mjv's alternative SQL-based procedural approach: not yet tested, <em>26</em> lines of code.</li>
<li>mjv's first SQL-based procedural approach: not yet tested, <em>29</em> lines of code.</li>
<li>peufeu's <a href="http://gist.github.com/194877" rel="nofollow" title="My modified version as a gist">Python version with Psyco</a>: <strong>181 seconds</strong> ±4, somewhere around <em>30</em> lines of code.</li>
<li>Rainer Joswig's Common Lisp version: <strong>478 seconds</strong> (only run once) in <em>42</em> lines of code.</li>
<li>abbot's <code>noop.py</code>, which intentionally gives incorrect results to establish a lower bound: not yet tested, <em>15</em> lines of code.</li>
<li>Will Hartung's Java version: <strong>96 seconds</strong> ±10 in, according to David A. Wheeler's SLOCCount, <em>74</em> lines of code.</li>
<li>Greg's Matlab version: doesn't work.</li>
<li>Schuyler Erle's suggestion of using Pyrex on one of the Python versions: not yet tried.</li>
</ul>
<p>I suspect abbot's version comes out relatively worse for me than for them because the real dataset has a highly nonuniform distribution: as I said, some <code>aa</code> values ("players") have thousands of lines, while others only have one.</p>
<p>About Psyco: I applied Psyco to my original code (and abbot's version) by putting it in a <code>main</code> function, which by itself cut the time down to about 140 seconds, and calling <code>psyco.full()</code> before calling <code>main()</code>. This added about four lines of code.</p>
<p>I can <strong>almost</strong> solve the problem using GNU <code>sort</code>, as follows:</p>
<pre><code>kragen@inexorable:~/devel$ time LANG=C sort -nr infile -o sorted
real 1m27.476s
user 0m59.472s
sys 0m8.549s
kragen@inexorable:~/devel$ time ./top5_sorted_c < sorted > outfile
real 0m5.515s
user 0m4.868s
sys 0m0.452s
</code></pre>
<p>Here <code>top5_sorted_c</code> is this short C program:</p>
<pre><code>#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
enum { linesize = 1024 };
char buf[linesize];
char key[linesize]; /* last key seen */
int main() {
int n = 0;
char *p;
while (fgets(buf, linesize, stdin)) {
for (p = buf; *p && !isspace(*p); p++) /* find end of key on this line */
;
if (p - buf != strlen(key) || 0 != memcmp(buf, key, p - buf))
n = 0; /* this is a new key */
n++;
if (n <= 5) /* copy up to five lines for each key */
if (fputs(buf, stdout) == EOF) abort();
if (n == 1) { /* save new key in `key` */
memcpy(key, buf, p - buf);
key[p-buf] = '\0';
}
}
return 0;
}
</code></pre>
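<p>The same first-five-lines-per-key filter over pre-sorted input is only a few lines of Python (a sketch; like the C version, it assumes the input has already been sorted so that each key's best lines come first):</p>

```python
def filter_top5(sorted_lines):
    out = []
    prev, n = None, 0
    for line in sorted_lines:
        key = line.split(None, 1)[0]  # the aa field
        if key != prev:
            prev, n = key, 0  # a new key starts here
        n += 1
        if n <= 5:  # copy up to five lines for each key
            out.append(line)
    return out
```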
<p>I first tried writing that program in C++ as follows, and I got runtimes which were substantially slower, at 33.6±2.3 seconds instead of 5.5±0.1 seconds:</p>
<pre><code>#include <map>
#include <iostream>
#include <string>
int main() {
using namespace std;
int n = 0;
string prev, aa, bb, cc;
while (cin >> aa >> bb >> cc) {
if (aa != prev) n = 0;
++n;
if (n <= 5) cout << aa << " " << bb << " " << cc << endl;
prev = aa;
}
return 0;
}
</code></pre>
<p>I did say <strong>almost</strong>. The problem is that <code>sort -n</code> does okay for most of the data, but it fails when it's trying to compare <code>0.33</code> with <code>3.78168e-05</code>. So to get this kind of performance and actually solve the problem, I need a better sort.</p>
<p>Anyway, I kind of feel like I'm whining, but the sort-and-filter approach is about 5× faster than the Python program, while the elegant STL program from hrnt is actually a little slower; there seems to be some kind of gross inefficiency in <code><iostream></code>. I don't know where the other 83% of the runtime is going in that little C++ version of the filter, but it isn't going anywhere useful, which makes me suspect I don't know where it's going in hrnt's <code>std::map</code> version either. Could that version be sped up 5× too? Because that would be pretty cool. Its working set <em>might</em> be bigger than my L2 cache, but as it happens it probably isn't.</p>
<p>Some investigation with callgrind says my filter program in C++ is executing 97% of its instructions inside of <code>operator >></code>. I can identify at least 10 function calls per input byte, and <code>cin.sync_with_stdio(false);</code> doesn't help. This probably means I could get hrnt's C program to run substantially faster by parsing input lines more efficiently.</p>
<p>Edit: kcachegrind claims that hrnt's program executes 62% of its instructions (on a small 157000-line input file) extracting <code>double</code>s from an <code>istream</code>. A substantial part of this is because the istreams library apparently executes about 13 function calls per input byte when trying to parse a <code>double</code>. Insane. Could I be misunderstanding kcachegrind's output?</p>
<p>Anyway, any other suggestions?</p>
| 2 | 2009-09-24T02:39:38Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | (question body identical to the first occurrence above)
| 9 | 2009-09-23T18:45:07Z | 1,469,787 | <p>Has anybody tried doing this problem with just awk, specifically 'mawk'? It should be faster than even Java and C++, according to this blog post: <a href="http://anyall.org/blog/2009/09/dont-mawk-awk-the-fastest-and-most-elegant-big-data-munging-language/" rel="nofollow">http://anyall.org/blog/2009/09/dont-mawk-awk-the-fastest-and-most-elegant-big-data-munging-language/</a></p>
<p>EDIT: Just wanted to clarify that the only claim being made in that blog post is that for a certain class of problems that are specifically suited to awk-style processing, the mawk virtual machine can beat 'vanilla' implementations in Java and C++. </p>
| 1 | 2009-09-24T04:34:01Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | <p>So I wrote a Python program to handle a little data processing
task.</p>
<p>Here's a very brief specification in a made-up language of the computation I want:</p>
<pre><code>parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
</code></pre>
<p>That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.</p>
<p>I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
current.append((bb, cc))
# Every once in a while, we drop the values that are not in
# the top 5, to keep our memory footprint down, because some
# values of aa have thousands of (bb, cc) pairs.
if len(current) > 10:
current.sort()
current[:-5] = []
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
</code></pre>
<p>Hereâs some sample input data:</p>
<pre><code>3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
</code></pre>
<p>Hereâs the output I get from it:</p>
<pre><code>3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
</code></pre>
<p>There are seven values for <code>3</code>, and so we drop the <code>c</code> and <code>d</code> values
because their <code>bb</code> value puts them out of the top 5. Because <code>4</code> has
only one value, its "top 5" consists of just that one value.</p>
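<p>(A sketch added for illustration, not from the original question: in Python, <code>heapq.nlargest</code> expresses the same top-5-per-key selection directly; it is shown here in Python 3 syntax against the sample data above. Unlike the original program it keeps every (bb, cc) pair per key in memory, so for the full 630 MB input you would still want to bound the per-key lists.)</p>
<pre><code>import heapq
from collections import defaultdict

sample = """3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g"""

groups = defaultdict(list)
for line in sample.splitlines():
    aa, bb, cc = line.split()
    groups[aa].append((float(bb), cc))

result = []
for aa in sorted(groups):
    # nlargest picks the top 5 (bb, cc) pairs per key; sorting them
    # ascending reproduces the output order shown above
    for bb, cc in sorted(heapq.nlargest(5, groups[aa])):
        result.append((aa, bb, cc))
        print(aa, bb, cc)
</code></pre>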
<p>This runs faster than doing the same queries in MySQL (at least, the
way we've found to do the queries), but I'm pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I'd like to
write it in a language that has a faster implementation.</p>
<p>But I'm not sure what language to choose.</p>
<p>I haven't been able to figure out how to express this as a single query in SQL, and
actually I'm really unimpressed with MySQL's ability even to merely
<code>select * from foo into outfile 'bar';</code> the input data.</p>
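<p>(An aside added for illustration, not from the thread: MySQL at the time lacked window functions, but any engine that has them can express the whole job as one query. The sketch below uses Python's bundled <code>sqlite3</code> module, which supports <code>ROW_NUMBER() OVER (PARTITION BY ...)</code> from SQLite 3.25 on; the table and column names are made up.)</p>
<pre><code>import sqlite3

rows = [("3", 1.5, "a"), ("3", 1.6, "b"), ("3", 0.8, "c"), ("3", 0.9, "d"),
        ("4", 1.2, "q"), ("3", 1.5, "e"), ("3", 1.8, "f"), ("3", 1.9, "g")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (aa TEXT, bb REAL, cc TEXT)")
con.executemany("INSERT INTO scores VALUES (?, ?, ?)", rows)

# Rank rows within each aa group by bb descending, keep the top 5.
top5 = con.execute("""
    SELECT aa, bb, cc FROM (
        SELECT aa, bb, cc,
               ROW_NUMBER() OVER (PARTITION BY aa ORDER BY bb DESC, cc DESC) AS rn
        FROM scores) AS ranked
    WHERE rn &lt;= 5
    ORDER BY aa, bb, cc
""").fetchall()

for aa, bb, cc in top5:
    print(aa, bb, cc)
</code></pre>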
<p>C is an obvious choice, but things like <code>line.split()</code>, sorting a list
of 2-tuples, and making a hash table require writing some code that's
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.</p>
<p>C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL. </p>
<p>OCaml would be fine, but does it have an equivalent of <code>line.split()</code>,
and will I be sad about the performance of its map?</p>
<p>Common Lisp might work?</p>
<p>Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried <a href="http://hadoop.apache.org/pig/">Pig</a>?</p>
<p>(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)</p>
<p>(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)</p>
<p>Edit:</p>
<p>There was an <a href="http://groups.google.com/group/sbcl-devel/browse%5Fthread/thread/f70c47e9f22d158a/9349b7b72943d314">eerily similar question asked on sbcl-devel in 2007</a> (thanks, Rainer!), and here's an <code>awk</code> script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):</p>
<pre><code>BEGIN {
    for (i = 0; i < 27000000; i++) {
        v = rand();
        k = int(rand() * 100);
        print k " " v " " i;
    }
    exit;
}
</code></pre>
| 9 | 2009-09-23T18:45:07Z | 1,469,955 | <p>The Pig version would go something like this (untested):</p>
<pre><code>Data = LOAD '/my/data' USING PigStorage() AS (aa:int, bb:float, cc:chararray);
grp = GROUP Data BY aa;
topK = FOREACH grp {
    sorted = ORDER Data BY bb DESC;
    lim = LIMIT sorted 5;
    GENERATE group AS aa, lim;
};
STORE topK INTO '/my/output' USING PigStorage();
</code></pre>
<p>Pig isn't optimized for performance; its goal is to enable processing of multi-terabyte datasets using parallel execution frameworks. It does have a local mode, so you can try it, but I doubt it will beat your script.</p>
| 0 | 2009-09-24T05:50:46Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 |
| 9 | 2009-09-23T18:45:07Z | 1,469,997 | <p>You could use smarter data structures and still use Python.
I ran your reference implementation and my Python implementation on my machine and even compared the outputs to be sure the results match.</p>
<p>This is yours:</p>
<pre><code>$ time python ./ref.py < data-large.txt > ref-large.txt
real 1m57.689s
user 1m56.104s
sys 0m0.573s
</code></pre>
<p>This is mine:</p>
<pre><code>$ time python ./my.py < data-large.txt > my-large.txt
real 1m35.132s
user 1m34.649s
sys 0m0.261s
$ diff my-large.txt ref-large.txt
$ echo $?
0
</code></pre>
<p>And this is the source:</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
import heapq
top_5 = {}
for line in sys.stdin:
    aa, bb, cc = line.split()
    # We want the top 5 for each distinct value of aa. There are
    # hundreds of thousands of values of aa.
    bb = float(bb)
    if aa not in top_5: top_5[aa] = []
    current = top_5[aa]
    if len(current) < 5:
        heapq.heappush(current, (bb, cc))
    else:
        if current[0] < (bb, cc):
            heapq.heapreplace(current, (bb, cc))
for aa in top_5:
    current = top_5[aa]
    while len(current) > 0:
        bb, cc = heapq.heappop(current)
        print aa, bb, cc
</code></pre>
<p><strong>Update:</strong> Know your limits.
I've also timed a noop version, to see the fastest a Python solution with code similar to the original could possibly be:</p>
<pre><code>$ time python noop.py < data-large.txt > noop-large.txt
real 1m20.143s
user 1m19.846s
sys 0m0.267s
</code></pre>
<p>And the noop.py itself:</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
import heapq
top_5 = {}
for line in sys.stdin:
    aa, bb, cc = line.split()
    bb = float(bb)
    if aa not in top_5: top_5[aa] = []
    current = top_5[aa]
    if len(current) < 5:
        current.append((bb, cc))
for aa in top_5:
    current = top_5[aa]
    current.sort()
    for bb, cc in current[-5:]:
        print aa, bb, cc
</code></pre>
| 6 | 2009-09-24T06:08:23Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 |
| 9 | 2009-09-23T18:45:07Z | 1,474,658 | <p>Since you asked about Matlab, here's how I did something like what you're asking for. I tried to do it without any for loops, but I do have one because I didn't care to take a long time with it. If you were worried about memory then you could pull data from the stream in chunks with fscanf rather than reading the entire buffer.</p>
<pre><code>fid = fopen('fakedata.txt','r');
tic
A = fscanf(fid,'%d %f %d\n');
A = reshape(A,3,length(A)/3)'; % Matlab reads the data into one long column
Names = unique(A(:,1));
for i = 1:length(Names)
    indices = find(A(:,1)==Names(i));        % Grab all instances of key i
    [Y,I] = sort(A(indices,2),1,'descend');  % Sort in descending order of 2nd record
    A(indices(I(1:min([5,length(indices(I))]))),:)  % Print the top five
end
toc
fclose(fid)
</code></pre>
| 1 | 2009-09-24T23:14:46Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 |
| 9 | 2009-09-23T18:45:07Z | 1,477,216 | <p>Pretty straightforward Caml (27 * 10^6 rows -- 27 sec, C++ by hrnt -- 29 sec)</p>
<pre><code>open Printf
open ExtLib

let (>>) x f = f x
let cmp x y = compare (fst x : float) (fst y)
let wsp = Str.regexp "[ \t]+"

let () =
  let all = Hashtbl.create 1024 in
  Std.input_lines stdin >> Enum.iter (fun line ->
    let [a;b;c] = Str.split wsp line in
    let b = float_of_string b in
    try
      match Hashtbl.find all a with
      | [] -> assert false
      | (bmin,_) as prev::tl -> if b > bmin then
        begin
          let m = List.sort ~cmp ((b,c)::tl) in
          Hashtbl.replace all a (if List.length tl < 4 then prev::m else m)
        end
    with Not_found -> Hashtbl.add all a [b,c]
  );
  all >> Hashtbl.iter (fun a -> List.iter (fun (b,c) -> printf "%s %f %s\n" a b c))
</code></pre>
| 2 | 2009-09-25T13:12:46Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 |
| 9 | 2009-09-23T18:45:07Z | 1,481,186 | <p>That was a nice lunch break challenge, he, he.</p>
<p>Top-N is a well-known database killer. As shown by the post above, there is no way to efficiently express it in common SQL.</p>
<p>As for the various implementations, you got to keep in mind that the slow part in this is not the sorting or the top-N, it's the parsing of text. Have you looked at the source code for glibc's strtod() lately ?</p>
<p>For instance, I get, using Python :</p>
<pre><code>Read data : 80.5 s
My TopN : 34.41 s
HeapTopN : 30.34 s
</code></pre>
<p>It is quite likely that you'll never get very fast timings, no matter what language you use, unless your data is in some format that is a lot faster to parse than text. For instance, loading the test data into postgres takes 70 s, and the majority of that is text parsing, too.</p>
<p>If the N in your topN is small, like 5, a C implementation of my algorithm below would probably be the fastest. If N can be larger, heaps are a much better option.</p>
<p>So, since your data is probably in a database, and your problem is getting at the data, not the actual processing, if you're really in need of a super fast TopN engine, what you should do is write a C module for your database of choice. Since postgres is fast at just about anything, I suggest using postgres; plus, it isn't difficult to write a C module for it.</p>
<p>Here's my Python code :</p>
<pre><code>import random, sys, time, heapq

ROWS = 27000000

def make_data( fname ):
    f = open( fname, "w" )
    r = random.Random()
    for i in xrange( 0, ROWS, 10000 ):
        for j in xrange( i,i+10000 ):
            f.write( "%d %f %d\n" % (r.randint(0,100), r.uniform(0,1000), j))
        print ("write: %d\r" % i),
        sys.stdout.flush()
    print

def read_data( fname ):
    for n, line in enumerate( open( fname ) ):
        r = line.strip().split()
        yield int(r[0]),float(r[1]),r[2]
        if not (n % 10000 ):
            print ("read: %d\r" % n),
            sys.stdout.flush()
    print

def topn( ntop, data ):
    ntop -= 1
    assert ntop > 0
    min_by_key = {}
    top_by_key = {}
    for key,value,label in data:
        tup = (value,label)
        if key not in top_by_key:
            # initialize
            top_by_key[key] = [ tup ]
        else:
            top = top_by_key[ key ]
            l = len( top )
            if l > ntop:
                # replace minimum value in top if it is lower than current value
                idx = min_by_key[ key ]
                if top[idx] < tup:
                    top[idx] = tup
                    min_by_key[ key ] = top.index( min( top ) )
            elif l < ntop:
                # fill until we have ntop entries
                top.append( tup )
            else:
                # we have ntop entries in list, we'll have ntop+1
                top.append( tup )
                # initialize minimum to keep
                min_by_key[ key ] = top.index( min( top ) )
    # finalize:
    return dict( (key, sorted( values, reverse=True )) for key,values in top_by_key.iteritems() )

def grouptopn( ntop, data ):
    top_by_key = {}
    for key,value,label in data:
        if key in top_by_key:
            top_by_key[ key ].append( (value,label) )
        else:
            top_by_key[ key ] = [ (value,label) ]
    return dict( (key, sorted( values, reverse=True )[:ntop]) for key,values in top_by_key.iteritems() )

def heaptopn( ntop, data ):
    top_by_key = {}
    for key,value,label in data:
        tup = (value,label)
        if key not in top_by_key:
            top_by_key[ key ] = [ tup ]
        else:
            top = top_by_key[ key ]
            if len(top) < ntop:
                heapq.heappush(top, tup)
            else:
                if top[0] < tup:
                    heapq.heapreplace(top, tup)
    return dict( (key, sorted( values, reverse=True )) for key,values in top_by_key.iteritems() )

def dummy( data ):
    for row in data:
        pass

make_data( "data.txt" )

t = time.clock()
dummy( read_data( "data.txt" ) )
t_read = time.clock() - t

t = time.clock()
top_result = topn( 5, read_data( "data.txt" ) )
t_topn = time.clock() - t

t = time.clock()
htop_result = heaptopn( 5, read_data( "data.txt" ) )
t_htopn = time.clock() - t

# correctness checking :
for key in top_result:
    print key, " : ", " ".join (("%f:%s"%(value,label)) for (value,label) in top_result[key])
    print key, " : ", " ".join (("%f:%s"%(value,label)) for (value,label) in htop_result[key])
    print

print "Read data :", t_read
print "TopN : ", t_topn - t_read
print "HeapTopN : ", t_htopn - t_read

for key in top_result:
    assert top_result[key] == htop_result[key]
</code></pre>
| 0 | 2009-09-26T13:19:37Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 |
| 9 | 2009-09-23T18:45:07Z | 1,484,752 | <p>Well, please grab a coffee and read the source code for strtod -- it's mindboggling, but needed, if you want the float -> text -> float round trip to give back the same float you started with... really.</p>
<p>Parsing integers is a lot faster (not so much in python, though, but in C, yes).</p>
<p>Anyway, putting the data in a Postgres table:</p>
<pre><code>SELECT count( key ) FROM topn;  -- topn holds the dataset from the program above
</code></pre>
<p>=> 7 s (so it takes 7 s to read the 27M records)</p>
<pre><code>CREATE INDEX topn_key_value ON topn( key, value );
</code></pre>
<p>191 s</p>
<pre><code>CREATE TEMPORARY TABLE topkeys AS SELECT key FROM topn GROUP BY key;
</code></pre>
<p>12 s</p>
<p>(You can use the index to get distinct values of 'key' faster too but it requires some light plpgsql hacking)</p>
<pre><code>CREATE TEMPORARY TABLE top AS SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1) AS r FROM topkeys a) foo;
</code></pre>
<p>Time: 15,310 ms</p>
<pre><code>INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 1) AS r FROM topkeys a) foo;
</code></pre>
<p>Time: 17,853 ms</p>
<pre><code>INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 2) AS r FROM topkeys a) foo;
</code></pre>
<p>Time: 13,983 ms</p>
<pre><code>INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 3) AS r FROM topkeys a) foo;
</code></pre>
<p>Time: 16,860 ms</p>
<pre><code>INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 4) AS r FROM topkeys a) foo;
</code></pre>
<p>Time: 17,651 ms</p>
<pre><code>INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 5) AS r FROM topkeys a) foo;
</code></pre>
<p>Time: 19,216 ms</p>
<pre><code>SELECT * FROM top ORDER BY key,value;
</code></pre>
<p>As you can see, computing the top-n is extremely fast (provided n is small), but creating the (mandatory) index is extremely slow because it involves a full sort.</p>
<p>Your best bet is to use a format that is fast to parse (either binary, or write a custom C aggregate for your database, which would be the best choice IMHO). The runtime in the C program shouldn't be more than 1s if python can do it in 1 s.</p>
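<p>To make the "fast to parse" suggestion concrete, here is a hedged Python sketch of the binary-record idea (the field layout is invented for illustration): fixed-size records avoid both line splitting and decimal float parsing.</p>

```python
import struct

# One record: 4-byte int key, 8-byte double score, 8-byte id field.
# "<" means little-endian with no padding, so every record is exactly 20 bytes.
record = struct.Struct("<id8s")

packed = record.pack(3, 1.5, b"a".ljust(8))
key, score, ident = record.unpack(packed)
```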
| 0 | 2009-09-27T23:21:10Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | <p>So I wrote a Python program to handle a little data processing
task.</p>
<p>Here's a very brief specification in a made-up language of the computation I want:</p>
<pre><code>parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
</code></pre>
<p>That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.</p>
<p>I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
current.append((bb, cc))
# Every once in a while, we drop the values that are not in
# the top 5, to keep our memory footprint down, because some
# values of aa have thousands of (bb, cc) pairs.
if len(current) > 10:
current.sort()
current[:-5] = []
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
</code></pre>
<p>Here's some sample input data:</p>
<pre><code>3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
</code></pre>
<p>Here's the output I get from it:</p>
<pre><code>3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
</code></pre>
<p>There are seven values for <code>3</code>, and so we drop the <code>c</code> and <code>d</code> values
because their <code>bb</code> value puts them out of the top 5. Because <code>4</code> has
only one value, its "top 5" consists of just that one value.</p>
<p>This runs faster than doing the same queries in MySQL (at least, the
way we've found to do the queries) but I'm pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I'd like to
write it in a language that has a faster implementation.</p>
<p>But I'm not sure what language to choose.</p>
<p>I haven't been able to figure out how to express this as a single query in SQL, and
actually I'm really unimpressed with MySQL's ability even to merely
<code>select * from foo into outfile 'bar';</code> the input data.</p>
<p>C is an obvious choice, but things like <code>line.split()</code>, sorting a list
of 2-tuples, and making a hash table require writing some code that's
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.</p>
<p>C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL. </p>
<p>OCaml would be fine, but does it have an equivalent of <code>line.split()</code>,
and will I be sad about the performance of its map?</p>
<p>Common Lisp might work?</p>
<p>Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried <a href="http://hadoop.apache.org/pig/">Pig</a>?</p>
<p>(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)</p>
<p>(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)</p>
<p>Edit:</p>
<p>There was an <a href="http://groups.google.com/group/sbcl-devel/browse%5Fthread/thread/f70c47e9f22d158a/9349b7b72943d314">eerily similar question asked on sbcl-devel in 2007</a> (thanks, Rainer!), and here's an <code>awk</code> script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):</p>
<pre><code>BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
</code></pre>
| 9 | 2009-09-23T18:45:07Z | 1,486,887 | <p>I love lunch break challenges. Here's a 1 hour implementation.</p>
<p>OK, when you don't want to do some extremely exotic crap like additions, nothing stops you from using a custom base-10 floating-point format whose only implemented operator is comparison, right? lol.</p>
<p>I had some fast-atoi code lying around from a previous project, so I just imported that.</p>
<p><a href="http://www.copypastecode.com/11541/" rel="nofollow">http://www.copypastecode.com/11541/</a></p>
<p>This C source code takes about 6.6 seconds to parse the 580MB of input text (27 million lines); half of that time is fgets, lol. Then it takes approximately 0.05 seconds to compute the top-n, but I don't know for sure, since the time it takes for the top-n is less than the timer noise.</p>
<p>You'll be the one to test it for correctness though XDDDDDDDDDDD</p>
<p>Interesting, huh?</p>
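<p>A rough Python sketch of that "comparison-only" idea (not the linked C code; it assumes nonnegative values with no sign, no exponent, and no leading zeros in the integer part):</p>

```python
def decimal_key(s):
    # Order decimal strings without ever parsing them into floats.
    # Assumes nonnegative, no exponent, no sign, no leading zeros
    # in the integer part (e.g. "1.5", "0.8", "12").
    int_part, _, frac_part = s.partition('.')
    # strip trailing zeros so "1.50" and "1.5" compare equal
    return (len(int_part), int_part, frac_part.rstrip('0'))
```

<p>A longer integer part always wins, equal-length integer parts compare lexicographically, and the fraction breaks ties.</p>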
| 0 | 2009-09-28T12:58:42Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | <p>So I wrote a Python program to handle a little data processing
task.</p>
<p>Here's a very brief specification in a made-up language of the computation I want:</p>
<pre><code>parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
</code></pre>
<p>That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.</p>
<p>I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
current.append((bb, cc))
# Every once in a while, we drop the values that are not in
# the top 5, to keep our memory footprint down, because some
# values of aa have thousands of (bb, cc) pairs.
if len(current) > 10:
current.sort()
current[:-5] = []
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
</code></pre>
<p>Here's some sample input data:</p>
<pre><code>3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
</code></pre>
<p>Here's the output I get from it:</p>
<pre><code>3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
</code></pre>
<p>There are seven values for <code>3</code>, and so we drop the <code>c</code> and <code>d</code> values
because their <code>bb</code> value puts them out of the top 5. Because <code>4</code> has
only one value, its "top 5" consists of just that one value.</p>
<p>This runs faster than doing the same queries in MySQL (at least, the
way we've found to do the queries) but I'm pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I'd like to
write it in a language that has a faster implementation.</p>
<p>But I'm not sure what language to choose.</p>
<p>I haven't been able to figure out how to express this as a single query in SQL, and
actually I'm really unimpressed with MySQL's ability even to merely
<code>select * from foo into outfile 'bar';</code> the input data.</p>
<p>C is an obvious choice, but things like <code>line.split()</code>, sorting a list
of 2-tuples, and making a hash table require writing some code that's
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.</p>
<p>C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL. </p>
<p>OCaml would be fine, but does it have an equivalent of <code>line.split()</code>,
and will I be sad about the performance of its map?</p>
<p>Common Lisp might work?</p>
<p>Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried <a href="http://hadoop.apache.org/pig/">Pig</a>?</p>
<p>(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)</p>
<p>(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)</p>
<p>Edit:</p>
<p>There was an <a href="http://groups.google.com/group/sbcl-devel/browse%5Fthread/thread/f70c47e9f22d158a/9349b7b72943d314">eerily similar question asked on sbcl-devel in 2007</a> (thanks, Rainer!), and here's an <code>awk</code> script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):</p>
<pre><code>BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
</code></pre>
| 9 | 2009-09-23T18:45:07Z | 1,497,938 | <p>Speaking of lower bounds on compute time:</p>
<p>Let's analyze my algo above:</p>
<pre><code>for each row (key,score,id) :
create or fetch a list of top scores for the row's key
if len( this list ) < N
append current
else if current score > minimum score in list
replace minimum of list with current row
update minimum of all lists if needed
</code></pre>
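<p>The pseudocode above translates almost line for line into Python; a heap makes "minimum score in list" an O(1) peek (my sketch, not the C implementation):</p>

```python
import heapq

def top_n(rows, n=5):
    # rows: iterable of (key, score, ident) tuples.
    # Keeps at most n entries per key; heap[0] is always the per-key minimum.
    tops = {}
    for key, score, ident in rows:
        heap = tops.setdefault(key, [])
        if len(heap) < n:
            heapq.heappush(heap, (score, ident))
        elif score > heap[0][0]:              # beats the current minimum
            heapq.heapreplace(heap, (score, ident))
    return {key: sorted(heap) for key, heap in tops.items()}
```

<p>On the question's sample input this yields exactly the expected output for keys 3 and 4.</p>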
<p>Let N be the N in top-N.<br>
Let R be the number of rows in your data set.<br>
Let K be the number of distinct keys.</p>
<p>What assumptions can we make?</p>
<p>R * sizeof(row) > RAM, or at least the data is big enough that we don't want to load it all into memory; instead we use a hash to group by key, and sort each bin. For the same reason we don't sort the whole thing.</p>
<p>Kragen likes hashtables, so K * sizeof(per-key state) << RAM; most probably it fits in L2/L3 cache.</p>
<p>Kragen is not sorting, so K*N << R, i.e. each key has many more than N entries.</p>
<p>(note: A << B means A is much smaller than B)</p>
<p>If the data has a random distribution, then after a small number of rows the majority of rows will be rejected by the per-key minimum check, at a cost of 1 comparison per row.</p>
<p>So the cost per row is 1 hash lookup + 1 comparison + epsilon * (list insertion + (N+1) comparisons for the minimum)</p>
<p>If the scores have a random distribution (say between 0 and 1) and the conditions above hold, both epsilons will be very small.</p>
<p>Experimental proof:</p>
<p>The 27 million rows dataset above produces 5933 insertions into the top-N lists. All other rows are rejected by a simple key lookup and comparison. epsilon = 0.0001</p>
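<p>The same effect is easy to reproduce on synthetic data; a scaled-down sketch (uniform random scores, as assumed above):</p>

```python
import heapq
import random

def count_insertions(rows, n=5, seed=0):
    # How many of `rows` random scores actually enter the running top-n?
    # Every other row costs exactly one comparison against the minimum.
    rng = random.Random(seed)
    heap, insertions = [], 0
    for _ in range(rows):
        score = rng.random()
        if len(heap) < n:
            heapq.heappush(heap, score)
            insertions += 1
        elif score > heap[0]:
            heapq.heapreplace(heap, score)
            insertions += 1
    return insertions
```

<p>With uniform scores, roughly n * ln(rows/n) insertions ever happen, so for 100,000 rows the count stays in the dozens.</p>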
<p>So roughly, the cost is 1 lookup + 1 comparison per row, which takes a few nanoseconds.</p>
<p>On current hardware, this is going to be negligible compared to the IO cost, and especially the parsing cost.</p>
| 1 | 2009-09-30T12:58:14Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
What language could I use for fast execution of this database summarization task? | 1,467,898 | <p>So I wrote a Python program to handle a little data processing
task.</p>
<p>Here's a very brief specification in a made-up language of the computation I want:</p>
<pre><code>parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
</code></pre>
<p>That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.</p>
<p>I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
current.append((bb, cc))
# Every once in a while, we drop the values that are not in
# the top 5, to keep our memory footprint down, because some
# values of aa have thousands of (bb, cc) pairs.
if len(current) > 10:
current.sort()
current[:-5] = []
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
</code></pre>
<p>Here's some sample input data:</p>
<pre><code>3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
</code></pre>
<p>Here's the output I get from it:</p>
<pre><code>3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
</code></pre>
<p>There are seven values for <code>3</code>, and so we drop the <code>c</code> and <code>d</code> values
because their <code>bb</code> value puts them out of the top 5. Because <code>4</code> has
only one value, its "top 5" consists of just that one value.</p>
<p>This runs faster than doing the same queries in MySQL (at least, the
way we've found to do the queries) but I'm pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I'd like to
write it in a language that has a faster implementation.</p>
<p>But I'm not sure what language to choose.</p>
<p>I haven't been able to figure out how to express this as a single query in SQL, and
actually I'm really unimpressed with MySQL's ability even to merely
<code>select * from foo into outfile 'bar';</code> the input data.</p>
<p>C is an obvious choice, but things like <code>line.split()</code>, sorting a list
of 2-tuples, and making a hash table require writing some code that's
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.</p>
<p>C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL. </p>
<p>OCaml would be fine, but does it have an equivalent of <code>line.split()</code>,
and will I be sad about the performance of its map?</p>
<p>Common Lisp might work?</p>
<p>Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried <a href="http://hadoop.apache.org/pig/">Pig</a>?</p>
<p>(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)</p>
<p>(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)</p>
<p>Edit:</p>
<p>There was an <a href="http://groups.google.com/group/sbcl-devel/browse%5Fthread/thread/f70c47e9f22d158a/9349b7b72943d314">eerily similar question asked on sbcl-devel in 2007</a> (thanks, Rainer!), and here's an <code>awk</code> script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):</p>
<pre><code>BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
</code></pre>
| 9 | 2009-09-23T18:45:07Z | 1,502,932 | <p>Here is one more OCaml version - targeted for speed - with custom parser on Streams. Too long, but parts of the parser are reusable. Thanks <strong>peufeu</strong> for triggering competition :)</p>
<p>Speed:</p>
<ul>
<li>simple ocaml - 27 sec</li>
<li>ocaml with Stream parser - 15 sec</li>
<li>c with manual parser - 5 sec</li>
</ul>
<p>Compile with:</p>
<pre><code>ocamlopt -pp camlp4o code.ml -o caml
</code></pre>
<p>Code:</p>
<pre><code>open Printf
let cmp x y = compare (fst x : float) (fst y)
let digit c = Char.code c - Char.code '0'
let rec parse f = parser
| [< a=int; _=spaces; b=float; _=spaces;
c=rest (Buffer.create 100); t >] -> f a b c; parse f t
| [< >] -> ()
and int = parser
| [< ''0'..'9' as c; t >] -> int_ (digit c) t
| [< ''-'; ''0'..'9' as c; t >] -> - (int_ (digit c) t)
and int_ n = parser
| [< ''0'..'9' as c; t >] -> int_ (n * 10 + digit c) t
| [< >] -> n
and float = parser
| [< n=int; t=frem; e=fexp >] -> (float_of_int n +. t) *. (10. ** e)
and frem = parser
| [< ''.'; r=frem_ 0.0 10. >] -> r
| [< >] -> 0.0
and frem_ f base = parser
| [< ''0'..'9' as c; t >] ->
frem_ (float_of_int (digit c) /. base +. f) (base *. 10.) t
| [< >] -> f
and fexp = parser
| [< ''e'; e=int >] -> float_of_int e
| [< >] -> 0.0
and spaces = parser
| [< '' '; t >] -> spaces t
| [< ''\t'; t >] -> spaces t
| [< >] -> ()
and crlf = parser
| [< ''\r'; t >] -> crlf t
| [< ''\n'; t >] -> crlf t
| [< >] -> ()
and rest b = parser
| [< ''\r'; _=crlf >] -> Buffer.contents b
| [< ''\n'; _=crlf >] -> Buffer.contents b
| [< 'c; t >] -> Buffer.add_char b c; rest b t
| [< >] -> Buffer.contents b
let () =
let all = Array.make 200 [] in
let each a b c =
assert (a >= 0 && a < 200);
match all.(a) with
| [] -> all.(a) <- [b,c]
| (bmin,_) as prev::tl -> if b > bmin then
begin
let m = List.sort cmp ((b,c)::tl) in
all.(a) <- if List.length tl < 4 then prev::m else m
end
in
parse each (Stream.of_channel stdin);
Array.iteri
(fun a -> List.iter (fun (b,c) -> printf "%i %f %s\n" a b c))
all
</code></pre>
| 3 | 2009-10-01T09:40:44Z | [
"python",
"sql",
"lisp",
"ocaml",
"apache-pig"
] |
How does this work? | 1,467,902 | <p>So I'm trying to comprehend the source file for csv2rec in matplotlib.mlab. It is used to take a csv file and parse the data into certain formats. So it may take a string '234' and convert it to an int, or take a date string and make it into Python datetimes.</p>
<pre><code>def get_converters(reader):
converters = None
for i, row in enumerate(reader):
if i==0:
converters = [mybool]*len(row)
if checkrows and i>checkrows:
break
#print i, len(names), len(row)
#print 'converters', zip(converters, row)
for j, (name, item) in enumerate(zip(names, row)):
func = converterd.get(j)
if func is None:
func = converterd.get(name)
if func is None:
#if not item.strip(): continue
func = converters[j]
if len(item.strip()):
func = get_func(name, item, func)
else:
# how should we handle custom converters and defaults?
func = with_default_value(func, None)
converters[j] = func
return converters
</code></pre>
<p>My issue with this function is 'converters.' It starts off as None. Then later, in 'func = converters[j]', j (I know) is just a number created through enumeration, so it is looking up the corresponding converters item indexed by j. But there is nothing in converters because it is None, right? Unless Python programs don't have to be read from top to bottom? In that case we get the func from the next two lines ("if len(item.st...", etc.) or from the 'else:' section. But I just assumed it would have to be read from top to bottom.</p>
<p>I don't know if any of the other things are important so I just included the whole function. converterd is a dictionary mapping I believe that the user can provide as a parameter to find a converter automatically. checkrows is just a number provided by the user as a parameter in the beginning to check for validity. It is by default None. I'm still kind of a beginner, so just fyi. =)</p>
<p>Thanks everyone. This site is so helpful!</p>
| 0 | 2009-09-23T18:45:52Z | 1,467,916 | <p>Unless I'm missing something, on the first iteration "i" is 0, so the following is executed:</p>
<pre><code>converters = [mybool]*len(row)
</code></pre>
<p>and that initializes "converters"</p>
| 1 | 2009-09-23T18:48:55Z | [
"python",
"function",
"matplotlib"
] |
How does this work? | 1,467,902 | <p>So I'm trying to comprehend the source file for csv2rec in matplotlib.mlab. It is used to take a csv file and parse the data into certain formats. So it may take a string '234' and convert it to an int, or take a date string and make it into Python datetimes.</p>
<pre><code>def get_converters(reader):
converters = None
for i, row in enumerate(reader):
if i==0:
converters = [mybool]*len(row)
if checkrows and i>checkrows:
break
#print i, len(names), len(row)
#print 'converters', zip(converters, row)
for j, (name, item) in enumerate(zip(names, row)):
func = converterd.get(j)
if func is None:
func = converterd.get(name)
if func is None:
#if not item.strip(): continue
func = converters[j]
if len(item.strip()):
func = get_func(name, item, func)
else:
# how should we handle custom converters and defaults?
func = with_default_value(func, None)
converters[j] = func
return converters
</code></pre>
<p>My issue with this function is 'converters.' It starts off as None. Then later, in 'func = converters[j]', j (I know) is just a number created through enumeration, so it is looking up the corresponding converters item indexed by j. But there is nothing in converters because it is None, right? Unless Python programs don't have to be read from top to bottom? In that case we get the func from the next two lines ("if len(item.st...", etc.) or from the 'else:' section. But I just assumed it would have to be read from top to bottom.</p>
<p>I don't know if any of the other things are important so I just included the whole function. converterd is a dictionary mapping I believe that the user can provide as a parameter to find a converter automatically. checkrows is just a number provided by the user as a parameter in the beginning to check for validity. It is by default None. I'm still kind of a beginner, so just fyi. =)</p>
<p>Thanks everyone. This site is so helpful!</p>
| 0 | 2009-09-23T18:45:52Z | 1,467,924 | <p>Converters gets set again at the beginning of the loop with</p>
<pre><code>if i==0:
converters = [mybool]*len(row)
</code></pre>
<p>So after that it's not None anymore.</p>
| 2 | 2009-09-23T18:51:02Z | [
"python",
"function",
"matplotlib"
] |
How does this work? | 1,467,902 | <p>So I'm trying to comprehend the source file for csv2rec in matplotlib.mlab. It is used to take a csv file and parse the data into certain formats. So it may take a string '234' and convert it to an int, or take a date string and make it into Python datetimes.</p>
<pre><code>def get_converters(reader):
converters = None
for i, row in enumerate(reader):
if i==0:
converters = [mybool]*len(row)
if checkrows and i>checkrows:
break
#print i, len(names), len(row)
#print 'converters', zip(converters, row)
for j, (name, item) in enumerate(zip(names, row)):
func = converterd.get(j)
if func is None:
func = converterd.get(name)
if func is None:
#if not item.strip(): continue
func = converters[j]
if len(item.strip()):
func = get_func(name, item, func)
else:
# how should we handle custom converters and defaults?
func = with_default_value(func, None)
converters[j] = func
return converters
</code></pre>
<p>My issue with this function is 'converters.' It starts off as None. Then later, in 'func = converters[j]', j (I know) is just a number created through enumeration, so it is looking up the corresponding converters item indexed by j. But there is nothing in converters because it is None, right? Unless Python programs don't have to be read from top to bottom? In that case we get the func from the next two lines ("if len(item.st...", etc.) or from the 'else:' section. But I just assumed it would have to be read from top to bottom.</p>
<p>I don't know if any of the other things are important so I just included the whole function. converterd is a dictionary mapping I believe that the user can provide as a parameter to find a converter automatically. checkrows is just a number provided by the user as a parameter in the beginning to check for validity. It is by default None. I'm still kind of a beginner, so just fyi. =)</p>
<p>Thanks everyone. This site is so helpful!</p>
| 0 | 2009-09-23T18:45:52Z | 1,468,122 | <p>First,</p>
<pre><code>converters = None
</code></pre>
<p>sets an initial value for <code>converters</code>. This way, if the iteration doesn't happen (because <code>readers</code> might be empty) then when the function returns <code>converters</code> it will exist and have the value <code>None</code>.</p>
<p>If the iteration over <code>readers</code> happens, then <code>converters</code> is immediately reset to a more meaningful value in the first pass through the iteration (when <code>i==0</code>):</p>
<pre><code>converters = [mybool]*len(row)
</code></pre>
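<p>A made-up miniature of the same control flow (not the actual matplotlib code) makes the point runnable: the <code>i == 0</code> branch always executes before any indexing of the list.</p>

```python
def column_widths(reader):
    # 'widths' stays None only until the first row is seen; the i == 0
    # branch runs before any widths[j] access, just like 'converters' above.
    widths = None
    for i, row in enumerate(reader):
        if i == 0:
            widths = [0] * len(row)
        for j, item in enumerate(row):
            widths[j] = max(widths[j], len(item))
    return widths
```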
| 1 | 2009-09-23T19:34:18Z | [
"python",
"function",
"matplotlib"
] |
python: how to get all members of an array except for ones that match a condition | 1,467,930 | <p>I'm trying to create an array of all .asm files I need to build <em>except</em> for one that is causing me trouble right now. Here's what I have, based on the Scons <a href="http://www.scons.org/doc/HTML/scons-user/a10760.html" rel="nofollow">"Handling Common Cases"</a> page:</p>
<pre><code>projfiles['buildasm'] = ['#build/' + os.path.splitext(x)[0] + '.asm'
                         for x in projfiles['a']]
</code></pre>
<p>(this maps paths of the form 'foo.a' to '#build/foo.asm')</p>
<p>I want to run this for each member of <code>projfiles['a']</code> <em>except</em> if a member of the array matches 'baz.a'. How can I do this?</p>
| 1 | 2009-09-23T18:51:54Z | 1,467,945 | <pre><code>projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a'] if x != 'baz.a']
</code></pre>
<p>or more generally:</p>
<pre><code>ignored_files = ['baz.a',
'foo.a',
'xyzzy.a',
]
projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a'] if x not in ignored_files]
</code></pre>
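<p>A self-contained check of the same filtering comprehension (file names invented for the demo); for a long ignore list, a set makes the membership test O(1):</p>

```python
import os

# invented stand-ins for the question's variables
projfiles = {'a': ['foo.a', 'bar.a', 'baz.a']}
ignored_files = {'baz.a'}   # set instead of list: O(1) lookups

projfiles['buildasm'] = ['#build/' + os.path.splitext(x)[0] + '.asm'
                         for x in projfiles['a'] if x not in ignored_files]
```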
| 7 | 2009-09-23T18:53:58Z | [
"python"
] |
python win32 extensions documentation | 1,468,099 | <p>I'm new to both python and the python win32 extensions available at <a href="http://python.net/crew/skippy/win32/">http://python.net/crew/skippy/win32/</a> but I can't find any documentation online or in the installation directories concerning what exactly the win32 extensions provide. Where is this information?</p>
| 13 | 2009-09-23T19:27:17Z | 1,468,133 | <p>You'll find documentation here:
<a href="http://docs.activestate.com/activepython/2.4/pywin32/PyWin32.HTML">http://docs.activestate.com/activepython/2.4/pywin32/PyWin32.HTML</a></p>
<p>(Note: most of the API docs are under 'modules' and 'objects'. Note that the documentation is very sparse here, but remember: since it's only a wrapper on top of the win32 API, the 'full' documentation is also on the MSDN website; Google should be helpful...)</p>
| 10 | 2009-09-23T19:35:59Z | [
"python",
"documentation",
"pywin32"
] |
python win32 extensions documentation | 1,468,099 | <p>I'm new to both python and the python win32 extensions available at <a href="http://python.net/crew/skippy/win32/">http://python.net/crew/skippy/win32/</a> but I can't find any documentation online or in the installation directories concerning what exactly the win32 extensions provide. Where is this information?</p>
| 13 | 2009-09-23T19:27:17Z | 1,468,190 | <p>In addition to ChristopheD's recommendations I also find that <a href="http://timgolden.me.uk/python/index.html" rel="nofollow">Tim Golden's Python Stuff</a> is very useful.</p>
<p>Good Luck,
Cutaway</p>
| 2 | 2009-09-23T19:49:02Z | [
"python",
"documentation",
"pywin32"
] |
python win32 extensions documentation | 1,468,099 | <p>I'm new to both python and the python win32 extensions available at <a href="http://python.net/crew/skippy/win32/">http://python.net/crew/skippy/win32/</a> but I can't find any documentation online or in the installation directories concerning what exactly the win32 extensions provide. Where is this information?</p>
| 13 | 2009-09-23T19:27:17Z | 1,468,275 | <p><a href="http://oreilly.com/catalog/9781565926219" rel="nofollow" title="Print ISBN: 978-1-56592-621-9 | ISBN 10: 1-56592-621-8">Python Programming On Win32</a> from O'Reilly is a great, if dated, book on the subject. I've read it and it is very good.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/1565926218" rel="nofollow" title="ISBN: 978-1-56592-621-9"><img src="http://covers.oreilly.com/images/9781565926219/cat.gif" alt="You can get it in Amazon too." /></a></p>
<p>It's not documentation, <em>per se</em>, but it's really useful as a good introduction to COM programming with Python, among other advanced stuff.</p>
| 0 | 2009-09-23T20:09:53Z | [
"python",
"documentation",
"pywin32"
] |
python win32 extensions documentation | 1,468,099 | <p>I'm new to both python and the python win32 extensions available at <a href="http://python.net/crew/skippy/win32/">http://python.net/crew/skippy/win32/</a> but I can't find any documentation online or in the installation directories concerning what exactly the win32 extensions provide. Where is this information?</p>
| 13 | 2009-09-23T19:27:17Z | 1,469,940 | <p>PyWin32 docs are included with <a href="http://www.activestate.com/activepython/" rel="nofollow">ActivePython</a> (which I highly recommend you to install). ChristopheD's link is for Python 2.4 which is an older version. For Python 2.6 version (which is the latest), <a href="http://downloads.activestate.com/ActivePython/etc/ActivePython26.chm" rel="nofollow">here is the CHM file</a> that contains PyWin32 docs. Note that this CHM file is also included with ActivePython itself.</p>
<p><img src="http://dl.getdropbox.com/u/87045/permalinks/apy26-pywin32.png" alt="alt text" /></p>
| 2 | 2009-09-24T05:45:44Z | [
"python",
"documentation",
"pywin32"
] |
Python CreateFile Cannot Find PhysicalMemory | 1,468,130 | <p>I am trying to access the Physical Memory of a Windows 2000 system (trying to do this without a memory dumping tool). My understanding is that I need to do this using the CreateFile function to create a handle. I have used an older version of <a href="http://www.msuiche.net/2008/06/14/capture-memory-under-win2k3-or-vista-with-win32dd/" rel="nofollow">win32dd</a> to help me through this. Other documentation on the web points me to using either "\Device\PhysicalMemory" or "\\.\PhysicalMemory". Unfortunately, I get the same error for each.</p>
<pre><code>Traceback (most recent call last):
File "testHandles.py", line 101, in (module)
File "testHandles.py", line 72, in createFileHandle
pywintypes.error: (3, 'CreateFile', 'The system cannot find the path specified.')
</code></pre>
<p>Actually, the error number returned is different for each run \\.\PhysicalMemory == 3 and \Device\PhysicalMemory == 2. Review of pywin32, win32file, createfile, pyhandle, and pywintypes did not produce information as to the different return values.</p>
<p>Here is my code. I am using py2exe to get this working on Windows 2000 (and yes it compiles successfully). I realize that I might also have a problem with DeviceIoControl but right now I am concentrating on CreateFile.</p>
<pre><code># testHandles.py
import ctypes
import socket
import struct
import sys
import win32file
import pywintypes
def createFileHandle():
outLoc = pywintypes.Unicode("C:\\Documents and Settings\\Administrator\\My Documents\\pymemdump_dotPM.dd")
handleLoc = pywintypes.Unicode("\\\\.\\PhysicalMemory")
#handleLoc = pywintypes.Unicode("\\Device\\PhysicalMemory")
placeHolder = 0
BytesReturned = 0
# Device = CreateFile(L"\\\\.\\win32dd", GENERIC_ALL, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
# CreateFile(fileName, desiredAccess , shareMode , attributes , creationDisposition , flagsAndAttributes , hTemplateFile )
#hMemHandle = win32file.CreateFile(handleLoc, GENERIC_ALL, SHARE_READ, None, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, None)
hMemHandle = win32file.CreateFile(handleLoc, win32file.GENERIC_READ, win32file.FILE_SHARE_READ, None, win32file.OPEN_EXISTING, win32file.FILE_ATTRIBUTE_NORMAL, None)
print "hMemHandle: %s" % hMemHandle
if (hMemHandle == NO_ERROR):
print "Could not build hMemHandle"
sys.exit()
# We send destination path to the driver.
#if (!DeviceIoControl(hMemHandle, 0x19880922, outLoc, (ULONG)(wcslen(outLoc) + 1) * sizeof(TCHAR), NULL, 0, &BytesReturned, NULL))
if (ctypes.windll.Kernel32.DeviceIoControl(hMemHandle, 0x19880922, outLoc, 5, NULL, 0, BytesReturned, NULL)):
print "Error: DeviceIoControl(), Cannot send IOCTL.\n"
else:
print "[win32dd] Physical memory dumped. You can now check %s.\n" % outLoc
# Dump memory
createFileHandle()
</code></pre>
<p>Thank you,
Cutaway</p>
| 0 | 2009-09-23T19:35:22Z | 1,468,792 | <p>I don't believe it's possible to access the physical memory object from user mode land in Windows. As your <a href="http://www.msuiche.net/2008/06/14/capture-memory-under-win2k3-or-vista-with-win32dd/" rel="nofollow">win32dd link</a> suggests, you will need to do it from kernel mode.</p>
| 0 | 2009-09-23T22:09:01Z | [
"python",
"memory",
"ctypes",
"createfile"
] |
Cross-platform help viewer with search functionality | 1,468,314 | <p>I am looking for a help viewer like Windows CHM that basically provides support for </p>
<ol>
<li>adding content in HTML format</li>
<li>define Table of Contents</li>
<li>decent search</li>
</ol>
<p>It should work on Windows, Mac and Linux. Bonus points for also having support for generating a "plain HTML/javascript" version that can be viewed in any browser (albeit without search support).</p>
<p>Language preference: Python</p>
| 0 | 2009-09-23T20:20:40Z | 1,468,448 | <p><a href="http://docs.wxwidgets.org/stable/wx%5Fwxhtmlhelpcontroller.html#wxhtmlhelpcontroller" rel="nofollow">wxHtmlHelpController</a>, which is part of <a href="http://www.wxwidgets.org/" rel="nofollow">wxWidgets</a>, is a cross-platform viewer for HtmlHelp.</p>
<p>I'm not sure how easy it is to use it from a non-wxWidgets program, but I think it can be done.</p>
| 2 | 2009-09-23T20:50:15Z | [
"python",
"documentation",
"cross-platform",
"chm"
] |
Cross-platform help viewer with search functionality | 1,468,314 | <p>I am looking for a help viewer like Windows CHM that basically provides support for </p>
<ol>
<li>adding content in HTML format</li>
<li>define Table of Contents</li>
<li>decent search</li>
</ol>
<p>It should work on Windows, Mac and Linux. Bonus points for also having support for generating a "plain HTML/javascript" version that can be viewed in any browser (albeit without search support).</p>
<p>Language preference: Python</p>
| 0 | 2009-09-23T20:20:40Z | 1,809,732 | <p>wxHtmlHelpController doesn't support any scripting within pages, nor does it support css.</p>
| 1 | 2009-11-27T16:49:08Z | [
"python",
"documentation",
"cross-platform",
"chm"
] |
How to contribute improvements to packages hosted on Cheeseshop ( pypi )? | 1,468,476 | <p>I've been using zc.buildout more and more and I'm encountering problems with some recipes that I have solutions to.</p>
<p>These packages generally fall into several categories:</p>
<ol>
<li>Package with no obvious links to a project site</li>
<li>Package with links to free hosted service like github or google code</li>
</ol>
<p>Setup #2 is better then #1, but not much better because for both of these situations, I would have to wait for the developer to apply these changes before i can use the updated package buildout.</p>
<p>What I've been doing up to this point is basically forking the package, giving it a different name and uploading it to pypi, but this is creating redundancy and I think only aggravating the problem. </p>
<p>One possible solution is to use a personal package index server where I would upload updated versions of the code until the developer updates his/her package. This is doable, but it adds additional work that I would prefer to avoid.</p>
<p>Is there a better way to do this?</p>
<p>Thank you</p>
| 2 | 2009-09-23T20:55:40Z | 1,468,603 | <p>Your "upload my personalized fork" solution sounds like a terrible idea. You should try <a href="http://pypi.python.org/pypi/collective.recipe.patch" rel="nofollow">http://pypi.python.org/pypi/collective.recipe.patch</a> which lets you automatically patch eggs. Try <a href="http://stackoverflow.com/questions/557462/how-do-i-use-easyinstall-and-buildout-when-pypi-is-down">setting up a local PyPi-compatible index</a>. I think you can also point <code>find-links =</code> at a directory (not just a <code>http://</code> url) containing your personal versions of those "almost good enough" packages. You can also try monkey patching the defective package, or take advantage of the Zope component model to override the necessary bits in a new package. Often the real authors are listed somewhere in the source code of a package, even if they decided not to put their names up on PyPi.</p>
<p>I've been trying to cut down on the number of custom versions of packages I use. Usually I work with customized packages as develop eggs by linking src/some.project to my checkout of that project's code. I don't have to build a new egg or reinstall every time I edit those packages.</p>
<p>A lot of Python packages used in buildouts are hosted in Plone's svn collective. It's relatively easy to get commit access to that repository.</p>
| 3 | 2009-09-23T21:27:52Z | [
"python",
"collaboration",
"buildout",
"pypi"
] |
How do you turn an unquoted Python function/lambda into AST? 2.6 | 1,468,634 | <p>This seems like it should be easy, but I can't find the answer anywhere - nor able to derive one myself. How do you turn an unquoted python function/lambda into an AST? </p>
<p>Here is what I'd like to be able to do.</p>
<pre><code>import ast
class Walker(ast.NodeVisitor):
pass
# ...
# note, this doesnt work as ast.parse wants a string
tree = ast.parse(lambda x,y: x+y)
Walker().visit(tree)
</code></pre>
| 5 | 2009-09-23T21:34:07Z | 1,468,652 | <p>Your lambda expression is a function, which has lots of information, but I don't think it still has source code associated with. I'm not sure you can get what you want.</p>
| 0 | 2009-09-23T21:37:45Z | [
"python",
"abstract-syntax-tree"
] |
How do you turn an unquoted Python function/lambda into AST? 2.6 | 1,468,634 | <p>This seems like it should be easy, but I can't find the answer anywhere - nor able to derive one myself. How do you turn an unquoted python function/lambda into an AST? </p>
<p>Here is what I'd like to be able to do.</p>
<pre><code>import ast
class Walker(ast.NodeVisitor):
pass
# ...
# note, this doesnt work as ast.parse wants a string
tree = ast.parse(lambda x,y: x+y)
Walker().visit(tree)
</code></pre>
| 5 | 2009-09-23T21:34:07Z | 1,468,770 | <p>In general, you can't. For example, <code>2 + 2</code> is an expression -- but if you pass it to any function or method, the argument being passed is just the number <code>4</code>, no way to recover what expression it was computed from. Function source code can sometimes be recovered (though not for a <code>lambda</code>), but "an unquoted Python expression" gets <strong>evaluated</strong> so what you get is just the object that's the expression's value.</p>
<p>What problem are you trying to solve? There may be other, viable approaches.</p>
<p><strong>Edit</strong>: tx to the OP for clarifying. There's no way to do it for <code>lambda</code> or some other corner cases, but as I mentioned, function source code can sometimes be recovered...:</p>
<pre><code>import ast
import inspect
def f():
return 23
tree = ast.parse(inspect.getsource(f))
print ast.dump(tree)
</code></pre>
<p><code>inspect.getsource</code> raises <code>IOError</code> if it can't get the source code for whatever object you're passing it. I suggest you wrap the parsing and getsource call into an auxiliary function that can accept a string (and just parses it) OR a function (and tries getsource on it, possibly giving better errors in the <code>IOError</code> case).</p>
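<p>A minimal version of that auxiliary function might look like this (my sketch, not part of the original answer; <code>to_ast</code> is a hypothetical name):</p>

```python
import ast
import inspect
import textwrap

def to_ast(obj):
    """Parse a source string, or recover and parse the source of a function."""
    if isinstance(obj, str):
        return ast.parse(obj)
    try:
        source = inspect.getsource(obj)
    except IOError:
        raise IOError("could not retrieve source for %r; "
                      "was it defined interactively or as a lambda?" % (obj,))
    # getsource keeps the original indentation, which ast.parse rejects
    # for methods and nested functions, so dedent first
    return ast.parse(textwrap.dedent(source))
```

<p>Given a function defined in a file, <code>to_ast(f)</code> returns the same kind of <code>ast.Module</code> that <code>ast.parse(source)</code> would, so a single <code>NodeVisitor</code> can handle both inputs.</p>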
| 9 | 2009-09-23T22:03:27Z | [
"python",
"abstract-syntax-tree"
] |
How do you turn an unquoted Python function/lambda into AST? 2.6 | 1,468,634 | <p>This seems like it should be easy, but I can't find the answer anywhere - nor able to derive one myself. How do you turn an unquoted python function/lambda into an AST? </p>
<p>Here is what I'd like to be able to do.</p>
<pre><code>import ast
class Walker(ast.NodeVisitor):
pass
# ...
# note, this doesnt work as ast.parse wants a string
tree = ast.parse(lambda x,y: x+y)
Walker().visit(tree)
</code></pre>
| 5 | 2009-09-23T21:34:07Z | 1,469,053 | <p>You can't generate AST from compiled bytecode. You need the source code.</p>
| 1 | 2009-09-23T23:33:28Z | [
"python",
"abstract-syntax-tree"
] |
How do you turn an unquoted Python function/lambda into AST? 2.6 | 1,468,634 | <p>This seems like it should be easy, but I can't find the answer anywhere - nor able to derive one myself. How do you turn an unquoted python function/lambda into an AST? </p>
<p>Here is what I'd like to be able to do.</p>
<pre><code>import ast
class Walker(ast.NodeVisitor):
pass
# ...
# note, this doesnt work as ast.parse wants a string
tree = ast.parse(lambda x,y: x+y)
Walker().visit(tree)
</code></pre>
| 5 | 2009-09-23T21:34:07Z | 1,470,481 | <p>If you only get access to the function/lambda you only have the compiled python bytecode. The exact Python AST can't be reconstructed from the bytecode because there is information loss in the compilation process. But you can analyze the bytecode and create AST's for that. There is one such analyzer in GeniuSQL. I also have a small proof of concept that analyzes bytecode and creates SQLAlchemy clauseelements from this.</p>
<p>The process I used for analyzing is the following:</p>
<ol>
<li>Split the code into a list of opcodes with potential arguments.</li>
<li>Find the basic blocks in the code by going through the opcodes, creating a basic block boundary after every jump and before every jump target</li>
<li>Create a control flow graph from the basic blocks.</li>
<li>Go through all the basic blocks with abstract interpretation tracking stack and variable assignments in SSA form.</li>
<li>To create the output expression just get the calculated SSA return value.</li>
</ol>
<p>I have pasted my <a href="http://dpaste.com/97578/">proof of concept</a> and <a href="http://dpaste.com/97579/">example code using it</a>. This is non-clean quickly hacked together code, but you're free to build on it if you like. Leave a note if you decide to make something useful from it.</p>
| 5 | 2009-09-24T08:52:11Z | [
"python",
"abstract-syntax-tree"
] |
How do you turn an unquoted Python function/lambda into AST? 2.6 | 1,468,634 | <p>This seems like it should be easy, but I can't find the answer anywhere - nor able to derive one myself. How do you turn an unquoted python function/lambda into an AST? </p>
<p>Here is what I'd like to be able to do.</p>
<pre><code>import ast
class Walker(ast.NodeVisitor):
pass
# ...
# note, this doesnt work as ast.parse wants a string
tree = ast.parse(lambda x,y: x+y)
Walker().visit(tree)
</code></pre>
| 5 | 2009-09-23T21:34:07Z | 12,204,014 | <p><a href="https://github.com/srossross/Meta" rel="nofollow">The Meta library</a> allows you to recover the source in many cases, with some exceptions such as comprehensions and lambdas.</p>
<pre><code>import meta, ast
source = '''
a = 1
b = 2
c = (a ** b)
'''
mod = ast.parse(source, '<nofile>', 'exec')
code = compile(mod, '<nofile>', 'exec')
mod2 = meta.decompile(code)
source2 = meta.dump_python_source(mod2)
assert source == source2
</code></pre>
| 4 | 2012-08-30T19:23:35Z | [
"python",
"abstract-syntax-tree"
] |
What networking libraries/frameworks exist for Python? | 1,468,780 | <p>I was wondering what good networking libraries/frameworks there are for Python.</p>
<p>Please provide a link to the standard API documentation for the library, and perhaps a link to a decent tutorial to get started with it.</p>
<p>A comment or two about its advantages/disadvantages would be nice as well.</p>
| 1 | 2009-09-23T22:06:21Z | 1,468,812 | <p><a href="http://twistedmatrix.com/" rel="nofollow">Twisted</a> is the most complete, and complex, of all Python networking frameworks.</p>
<p>It's well-established and very complete, but it has a steep learning curve.</p>
<p><a href="http://twistedmatrix.com/trac/wiki/Documentation" rel="nofollow">Documentation here</a>; <a href="http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions" rel="nofollow">FAQ here</a>.</p>
| 3 | 2009-09-23T22:12:26Z | [
"python",
"networking",
"frameworks"
] |
What networking libraries/frameworks exist for Python? | 1,468,780 | <p>I was wondering what good networking libraries/frameworks there are for Python.</p>
<p>Please provide a link to the standard API documentation for the library, and perhaps a link to a decent tutorial to get started with it.</p>
<p>A comment or two about its advantages/disadvantages would be nice as well.</p>
| 1 | 2009-09-23T22:06:21Z | 1,468,815 | <p>Consider the <a href="http://twistedmatrix.com/trac/" rel="nofollow">Twisted</a> framework. The advantage:</p>
<ul>
<li>solid reactor implementation</li>
<li>support for almost all network protocols found in the wild</li>
<li>well documented</li>
</ul>
<p>Disadvantages:</p>
<ul>
<li>it's <em>huge</em></li>
<li>the asynchronous APIs need some time to get used to (but once you are familiar, things are actually pretty usable)</li>
</ul>
<p>CPython itself ships with a tiny <a href="http://docs.python.org/library/asyncore.html" rel="nofollow">reactor/socket package</a>. Never used it myself, though.</p>
| 4 | 2009-09-23T22:12:45Z | [
"python",
"networking",
"frameworks"
] |
What networking libraries/frameworks exist for Python? | 1,468,780 | <p>I was wondering what good networking libraries/frameworks there are for Python.</p>
<p>Please provide a link to the standard API documentation for the library, and perhaps a link to a decent tutorial to get started with it.</p>
<p>A comment or two about its advantages/disadvantages would be nice as well.</p>
| 1 | 2009-09-23T22:06:21Z | 1,468,816 | <p>The standard library has <a href="http://docs.python.org/library/asyncore.html">asyncore</a> which is good for very simple stuff as well as the <a href="http://docs.python.org/library/socketserver.html">SocketServer</a> stuff if you'd prefer something that does threads. There's also <a href="http://twistedmatrix.com/">Twisted</a> but the barrier of entry to that is a bit high if you're not used to event-driven IO. If you're after web frameworks, <a href="http://cherrypy.org/">CherryPy</a> is a good start or there's <a href="http://www.djangoproject.com/">Django</a> and <a href="http://turbogears.org/">TurboGears</a> if you're looking for something more full-featured.</p>
| 6 | 2009-09-23T22:14:07Z | [
"python",
"networking",
"frameworks"
] |
What networking libraries/frameworks exist for Python? | 1,468,780 | <p>I was wondering what good networking libraries/frameworks there are for Python.</p>
<p>Please provide a link to the standard API documentation for the library, and perhaps a link to a decent tutorial to get started with it.</p>
<p>A comment or two about its advantages/disadvantages would be nice as well.</p>
| 1 | 2009-09-23T22:06:21Z | 1,469,904 | <p>In case you want to build/manipulate your own packets there is Scapy too :)</p>
<p>The usage is pretty straightforward: it lets you do whatever you want with the packets, and it's multi-platform.</p>
<p>Project Page: <a href="http://www.secdev.org/projects/scapy/" rel="nofollow">http://www.secdev.org/projects/scapy/</a></p>
<p>Docs: <a href="http://www.secdev.org/projects/scapy/doc/" rel="nofollow">http://www.secdev.org/projects/scapy/doc/</a></p>
<p>Example: <a href="http://www.secdev.org/projects/scapy/demo.html" rel="nofollow">http://www.secdev.org/projects/scapy/demo.html</a></p>
| 1 | 2009-09-24T05:35:36Z | [
"python",
"networking",
"frameworks"
] |
Python canvas suggestions | 1,469,286 | <p>I'm looking for a very simple canvas for python. What I really need is the ability to draw lines and circles, move them around / get rid of them, and scroll the canvas (so, I'm ideally drawing on an infinite canvas, and just scrolling it around). Ideally, the code would look like:</p>
<pre><code>c = Canvas()
l1 = c.line((x0, y0), (x1, y1))
l2 = c.line((x2, y2), (x3, y3))
c1 = c.circle(((x0 + x1 + x2 + x3)/4, (y0 + y1 + y2 + y3)/4), 10)
c1.delete()
l1.move(5, 10)
c.scroll(5, 5)
</code></pre>
<p>That's just some dream code, I'm fine with some minimal boilerplate, but I really don't need anything fancy, probably the only feature I would really like would be the ability to embed in some GUI that looks good on Windows (that rules out Tkinter) and is not extremely heavyweight (that might rule out GTK/Cairo).</p>
<p>This is in Python 2.6. I'd be happy to give any other information</p>
| 1 | 2009-09-24T00:47:16Z | 1,469,299 | <p>I've used PyGame with a good deal of success for such things:</p>
<p><a href="http://www.pygame.org/" rel="nofollow">http://www.pygame.org/</a></p>
| 0 | 2009-09-24T00:50:28Z | [
"python",
"canvas"
] |
Python canvas suggestions | 1,469,286 | <p>I'm looking for a very simple canvas for python. What I really need is the ability to draw lines and circles, move them around / get rid of them, and scroll the canvas (so, I'm ideally drawing on an infinite canvas, and just scrolling it around). Ideally, the code would look like:</p>
<pre><code>c = Canvas()
l1 = c.line((x0, y0), (x1, y1))
l2 = c.line((x2, y2), (x3, y3))
c1 = c.circle(((x0 + x1 + x2 + x3)/4, (y0 + y1 + y2 + y3)/4), 10)
c1.delete()
l1.move(5, 10)
c.scroll(5, 5)
</code></pre>
<p>That's just some dream code, I'm fine with some minimal boilerplate, but I really don't need anything fancy, probably the only feature I would really like would be the ability to embed in some GUI that looks good on Windows (that rules out Tkinter) and is not extremely heavyweight (that might rule out GTK/Cairo).</p>
<p>This is in Python 2.6. I'd be happy to give any other information</p>
| 1 | 2009-09-24T00:47:16Z | 1,469,305 | <p>You may want to look at this question:
<a href="http://stackoverflow.com/questions/67000/fast-pixel-precision-2d-drawing-api-for-graphics-app">http://stackoverflow.com/questions/67000/fast-pixel-precision-2d-drawing-api-for-graphics-app</a></p>
<p>The first suggestion is for pyglet, <a href="http://code.google.com/p/pyglet/" rel="nofollow">http://code.google.com/p/pyglet/</a></p>
| 0 | 2009-09-24T00:53:11Z | [
"python",
"canvas"
] |
Python canvas suggestions | 1,469,286 | <p>I'm looking for a very simple canvas for python. What I really need is the ability to draw lines and circles, move them around / get rid of them, and scroll the canvas (so, I'm ideally drawing on an infinite canvas, and just scrolling it around). Ideally, the code would look like:</p>
<pre><code>c = Canvas()
l1 = c.line((x0, y0), (x1, y1))
l2 = c.line((x2, y2), (x3, y3))
c1 = c.circle(((x0 + x1 + x2 + x3)/4, (y0 + y1 + y2 + y3)/4), 10)
c1.delete()
l1.move(5, 10)
c.scroll(5, 5)
</code></pre>
<p>That's just some dream code, I'm fine with some minimal boilerplate, but I really don't need anything fancy, probably the only feature I would really like would be the ability to embed in some GUI that looks good on Windows (that rules out Tkinter) and is not extremely heavyweight (that might rule out GTK/Cairo).</p>
<p>This is in Python 2.6. I'd be happy to give any other information</p>
| 1 | 2009-09-24T00:47:16Z | 1,616,803 | <p>I ended up using WxPython with the built-in FloatCanvas. I wouldn't really advise it for anyone else, though; it depends on NumPy, which is a very big installation, and is almost completely undocumented (reading the sources was a big part of the app I programmed). It is, however, very nice and does a lot for you.</p>
<p>WxWiki: <a href="http://wiki.wxpython.org/FloatCanvas" rel="nofollow">http://wiki.wxpython.org/FloatCanvas</a></p>
<p>Docs: <a href="http://www.wxpython.org/docs/api/wx.lib.floatcanvas-module.html" rel="nofollow">http://www.wxpython.org/docs/api/wx.lib.floatcanvas-module.html</a></p>
<p>Devel: <a href="http://trac.paulmcnett.com/floatcanvas" rel="nofollow">http://trac.paulmcnett.com/floatcanvas</a></p>
| 1 | 2009-10-24T03:05:48Z | [
"python",
"canvas"
] |
Why can't I import the 'math' library when embedding python in c? | 1,469,370 | <p>I'm using the example in python's 2.6 docs to begin a foray into embedding some python in C. The <a href="http://docs.python.org/extending/embedding.html#pure-embedding" rel="nofollow">example C-code</a> does not allow me to execute the following 1 line script:</p>
<pre><code>import math
</code></pre>
<p>Using line:</p>
<pre><code>./tmp.exe tmp foo bar
</code></pre>
<p>it complains</p>
<pre><code>Traceback (most recent call last):
File "/home/rbroger1/scripts/tmp.py", line 1, in <module>
import math
ImportError: [...]/python/2.6.2/lib/python2.6/lib-dynload/math.so: undefined symbol: PyInt_FromLong
</code></pre>
<p>When I do <code>nm</code> on my generated binary (tmp.exe) it shows </p>
<pre><code>0000000000420d30 T PyInt_FromLong
</code></pre>
<p>The function seems to be defined, so why can't the shared object find the function?</p>
| 2 | 2009-09-24T01:20:33Z | 1,469,445 | <p>I'm using Python 2.6, and I successfully compiled and ran that same example code that you listed, without changing anything in the source. </p>
<pre>
$ gcc python.c -I/usr/include/python2.6/ /usr/lib/libpython2.6.so
$ ./a.out random randint 1 100
Result of call: 39
$ ./a.out random randint 1 100
Result of call: 57
</pre>
<p>I specifically chose the <code>random</code> module because it does have <code>from math import log,...</code> so it is certainly importing the <code>math</code> module as well.</p>
<p>Your issue is probably due to how you're linking; see <a href="http://objectmix.com/python/311970-weird-embedding-problem.html" rel="nofollow">this forum post</a> for a similar issue someone else had. I can't find the links again, but it seems like there are some common issues when trying to link against Python's static library then importing modules that require a dynamic library.</p>
| 2 | 2009-09-24T01:53:31Z | [
"c++",
"python",
"c"
] |
Probability exercise returning different result than expected | 1,469,421 | <p>As an exercise I'm writing a program to calculate the odds of rolling 5 dice with the same number. The idea is to get the result via simulation as opposed to simple math though. My program is this:</p>
<pre><code># rollFive.py
from random import *
def main():
n = input("Please enter the number of sims to run: ")
hits = simNRolls(n)
hits = float(hits)
n = float(n)
prob = hits/n
print "The odds of rolling 5 of the same number are", prob
def simNRolls(n):
hits = 0
for i in range(n):
hits = hits + diceRoll()
return hits
def diceRoll():
firstDie = randrange(1,7,1)
for i in range(4):
nextDie = randrange(1,7,1)
if nextDie!=firstDie:
success = 0
break
else:
success = 1
return success
</code></pre>
<p>The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5).</p>
<p>Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations?</p>
| 2 | 2009-09-24T01:46:46Z | 1,469,443 | <p>The probability of getting a particular number five times is (1/6)^5, but the probability of getting any five numbers the same is (1/6)^4.</p>
<p>There are two ways to see this.</p>
<p>First, the probability of getting all 1's, for example, is (1/6)^5 since there is only one way out of six to get a 1. Multiply that by five dice, and you get (1/6)^5. But, since there are six possible numbers to get the same, then there are six ways to succeed, which is 6((1/6)^5) or (1/6)^4.</p>
<p>Looked at another way, it doesn't matter what the first roll gives, so we exclude it. Then we have to match that number with the four remaining rolls, the probability of which is (1/6)^4.</p>
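<p>A quick numeric check of the two derivations above (my addition; the values match the answer's reasoning):</p>

```python
# probability that all five dice show one chosen number
p_specific = (1.0 / 6.0) ** 5
# six possible numbers could be the one that repeats
p_any = 6 * p_specific
# ...which equals matching the other four dice to the first roll
assert abs(p_any - (1.0 / 6.0) ** 4) < 1e-15
print(round(p_any, 7))  # prints 0.0007716
```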
| 6 | 2009-09-24T01:52:56Z | [
"python",
"probability"
] |
Probability exercise returning different result than expected | 1,469,421 | <p>As an exercise I'm writing a program to calculate the odds of rolling 5 dice with the same number. The idea is to get the result via simulation as opposed to simple math though. My program is this:</p>
<pre><code># rollFive.py
from random import *
def main():
n = input("Please enter the number of sims to run: ")
hits = simNRolls(n)
hits = float(hits)
n = float(n)
prob = hits/n
print "The odds of rolling 5 of the same number are", prob
def simNRolls(n):
hits = 0
for i in range(n):
hits = hits + diceRoll()
return hits
def diceRoll():
firstDie = randrange(1,7,1)
for i in range(4):
nextDie = randrange(1,7,1)
if nextDie!=firstDie:
success = 0
break
else:
success = 1
return success
</code></pre>
<p>The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5).</p>
<p>Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations?</p>
| 2 | 2009-09-24T01:46:46Z | 1,469,446 | <p>Your math is wrong. The probability of getting five dice with the same number is <code>6*(1/6)^5 = 0.0007716</code>.</p>
| 1 | 2009-09-24T01:53:33Z | [
"python",
"probability"
] |
Probability exercise returning different result than expected | 1,469,421 | <p>As an exercise I'm writing a program to calculate the odds of rolling 5 dice with the same number. The idea is to get the result via simulation as opposed to simple math though. My program is this:</p>
<pre><code># rollFive.py
from random import *
def main():
n = input("Please enter the number of sims to run: ")
hits = simNRolls(n)
hits = float(hits)
n = float(n)
prob = hits/n
print "The odds of rolling 5 of the same number are", prob
def simNRolls(n):
hits = 0
for i in range(n):
hits = hits + diceRoll()
return hits
def diceRoll():
firstDie = randrange(1,7,1)
for i in range(4):
nextDie = randrange(1,7,1)
if nextDie!=firstDie:
success = 0
break
else:
success = 1
return success
</code></pre>
<p>The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5).</p>
<p>Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations?</p>
| 2 | 2009-09-24T01:46:46Z | 1,469,458 | <p>I think your expected probability is wrong, as you've stated the problem. (1/6)^5 is the probability of rolling some <em>specific</em> number 5 times in a row; (1/6)^4 is the probability of rolling <em>any</em> number 5 times in a row (because the first roll is always "successful" -- that is, the first roll will always result in some number).</p>
<pre><code>>>> (1.0/6.0)**4
0.00077160493827160479
</code></pre>
<p>Compare to running your program with 1 million iterations:</p>
<pre><code>[me@host:~] python roll5.py
Please enter the number of sims to run: 1000000
The odds of rolling 5 of the same number are 0.000755
</code></pre>
| 0 | 2009-09-24T01:57:29Z | [
"python",
"probability"
] |
Probability exercise returning different result than expected | 1,469,421 | <p>As an exercise I'm writing a program to calculate the odds of rolling 5 dice with the same number. The idea is to get the result via simulation as opposed to simple math though. My program is this:</p>
<pre><code># rollFive.py
from random import *
def main():
n = input("Please enter the number of sims to run: ")
hits = simNRolls(n)
hits = float(hits)
n = float(n)
prob = hits/n
print "The odds of rolling 5 of the same number are", prob
def simNRolls(n):
hits = 0
for i in range(n):
hits = hits + diceRoll()
return hits
def diceRoll():
firstDie = randrange(1,7,1)
for i in range(4):
nextDie = randrange(1,7,1)
if nextDie!=firstDie:
success = 0
break
else:
success = 1
return success
</code></pre>
<p>The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5).</p>
<p>Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations?</p>
| 2 | 2009-09-24T01:46:46Z | 1,471,073 | <p>Very simply, there are <code>6 ** 5</code> possible outcomes from rolling 5 dice, and only 6 of those outcomes are successful, so the answer is <code>6.0 / 6 ** 5</code></p>
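<p>That analytic value agrees with a fixed-seed rerun of the questioner's simulation (my sketch; <code>roll_five_same</code> is a hypothetical name, not from the question):</p>

```python
import random

def roll_five_same(trials, seed=42):
    """Estimate the probability that five dice all show the same number."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        first = rng.randrange(1, 7)
        # the remaining four dice must all match the first
        if all(rng.randrange(1, 7) == first for _ in range(4)):
            hits += 1
    return hits / float(trials)

analytic = 6.0 / 6 ** 5   # about 0.0007716
estimate = roll_five_same(200000)
```

<p>With 200,000 trials the estimate lands within a few standard errors of 0.0007716, consistent with the 0.0006–0.0008 range the questioner observed.</p>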
| 1 | 2009-09-24T11:25:15Z | [
"python",
"probability"
] |
Why am I receiving a low level socket error when using the Fabric python library? | 1,469,431 | <p>When I run the command:</p>
<pre><code>fab -H localhost host_type
</code></pre>
<p>I receive the following error:</p>
<pre><code>[localhost] Executing task 'host_type'
[localhost] run: uname -s
Fatal error: Low level socket error connecting to host localhost: Connection refused
Aborting.
</code></pre>
<p>Any thoughts as to why? Thanks.</p>
<h3>Fabfile.py</h3>
<pre><code>from fabric.api import run
def host_type():
run('uname -s')
</code></pre>
<h3>Configuration</h3>
<ul>
<li>Fabric 1.0a0 (installed from the <a href="http://github.com/cumulusware/fabric/commit/b8e1b6ac3f43787375bb77c1a64d7f9c000e7511">most recent Github commit---b8e1b6a</a>)</li>
<li>Paramiko 1.7.4</li>
<li>PyCrypto 2.0.1</li>
<li>Virtualenv ver 1.3.3</li>
<li>Python 2.6.2+ (release26-maint:74924, Sep 18 2009, 16:03:18)</li>
<li>Mac OS X 10.6.1</li>
</ul>
| 16 | 2009-09-24T01:51:21Z | 1,469,466 | <p>The important part isn't the "low level error" part of the message - the important part is the "Connection refused" part. You'll get a "connection refused" message when trying to connect to a closed port.</p>
<p>The most likely scenario is that you are not running an ssh server on your machine at the time that Fabric is running. If you do</p>
<pre><code>ssh localhost
</code></pre>
<p>you'll probably get a message similar to</p>
<pre><code>ssh: connect to host localhost: Connection refused
</code></pre>
<p>So you'll have to go out and set up an SSH server on your computer before you can proceed with Fabric from there.</p>
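If you'd rather check from Python first, here is a small sketch that tests whether anything is accepting TCP connections on the SSH port (standard library only; <code>is_port_open</code> is just an illustrative name):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()

# Before running fab: if this prints False, sshd isn't listening.
# print(is_port_open('localhost', 22))
```

On Mac OS X you can start sshd by enabling Remote Login in the Sharing pane of System Preferences.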
| 24 | 2009-09-24T02:01:12Z | [
"python"
] |
Why am I receiving a low level socket error when using the Fabric python library? | 1,469,431 | <p>When I run the command:</p>
<pre><code>fab -H localhost host_type
</code></pre>
<p>I receive the following error:</p>
<pre><code>[localhost] Executing task 'host_type'
[localhost] run: uname -s
Fatal error: Low level socket error connecting to host localhost: Connection refused
Aborting.
</code></pre>
<p>Any thoughts as to why? Thanks.</p>
<h3>Fabfile.py</h3>
<pre><code>from fabric.api import run
def host_type():
run('uname -s')
</code></pre>
<h3>Configuration</h3>
<ul>
<li>Fabric 1.0a0 (installed from the <a href="http://github.com/cumulusware/fabric/commit/b8e1b6ac3f43787375bb77c1a64d7f9c000e7511">most recent Github commit---b8e1b6a</a>)</li>
<li>Paramiko 1.7.4</li>
<li>PyCrypto 2.0.1</li>
<li>Virtualenv ver 1.3.3</li>
<li>Python 2.6.2+ (release26-maint:74924, Sep 18 2009, 16:03:18)</li>
<li>Mac OS X 10.6.1</li>
</ul>
| 16 | 2009-09-24T01:51:21Z | 11,511,884 | <p>I had the same problem, but the reason was different: While I could easily log in to the server via SSH (default port 22), fabric tried to connect on a closed port 9090.</p>
<p>Eventually I realized that I had defined <code>env.port = 9090</code> in my old fabfile for some WSGI server setup. That had never been a problem before, but I had updated my Python installation a few weeks earlier, and Fabric now uses <code>env.port</code> for its SSH connection.
I just renamed that setting, and all is well again.</p>
| 2 | 2012-07-16T20:17:32Z | [
"python"
] |
Why am I receiving a low level socket error when using the Fabric python library? | 1,469,431 | <p>When I run the command:</p>
<pre><code>fab -H localhost host_type
</code></pre>
<p>I receive the following error:</p>
<pre><code>[localhost] Executing task 'host_type'
[localhost] run: uname -s
Fatal error: Low level socket error connecting to host localhost: Connection refused
Aborting.
</code></pre>
<p>Any thoughts as to why? Thanks.</p>
<h3>Fabfile.py</h3>
<pre><code>from fabric.api import run
def host_type():
run('uname -s')
</code></pre>
<h3>Configuration</h3>
<ul>
<li>Fabric 1.0a0 (installed from the <a href="http://github.com/cumulusware/fabric/commit/b8e1b6ac3f43787375bb77c1a64d7f9c000e7511">most recent Github commit---b8e1b6a</a>)</li>
<li>Paramiko 1.7.4</li>
<li>PyCrypto 2.0.1</li>
<li>Virtualenv ver 1.3.3</li>
<li>Python 2.6.2+ (release26-maint:74924, Sep 18 2009, 16:03:18)</li>
<li>Mac OS X 10.6.1</li>
</ul>
| 16 | 2009-09-24T01:51:21Z | 35,858,644 | <pre><code>env.roledefs = {
'role1': env.hosts[0:5],
'role2':[env.hosts[5],]
}
</code></pre>
<p>I encountered the same error when the "role" value <strong>IS NOT A LIST</strong>. For example, the above code works but the following doesn't:</p>
<pre><code> env.roledefs = {
'role1': env.hosts[0:5],
'role2':env.hosts[5],
}
</code></pre>
| 1 | 2016-03-08T03:54:50Z | [
"python"
] |
Why am I receiving a low level socket error when using the Fabric python library? | 1,469,431 | <p>When I run the command:</p>
<pre><code>fab -H localhost host_type
</code></pre>
<p>I receive the following error:</p>
<pre><code>[localhost] Executing task 'host_type'
[localhost] run: uname -s
Fatal error: Low level socket error connecting to host localhost: Connection refused
Aborting.
</code></pre>
<p>Any thoughts as to why? Thanks.</p>
<h3>Fabfile.py</h3>
<pre><code>from fabric.api import run
def host_type():
run('uname -s')
</code></pre>
<h3>Configuration</h3>
<ul>
<li>Fabric 1.0a0 (installed from the <a href="http://github.com/cumulusware/fabric/commit/b8e1b6ac3f43787375bb77c1a64d7f9c000e7511">most recent Github commit---b8e1b6a</a>)</li>
<li>Paramiko 1.7.4</li>
<li>PyCrypto 2.0.1</li>
<li>Virtualenv ver 1.3.3</li>
<li>Python 2.6.2+ (release26-maint:74924, Sep 18 2009, 16:03:18)</li>
<li>Mac OS X 10.6.1</li>
</ul>
| 16 | 2009-09-24T01:51:21Z | 37,236,582 | <p>This can also happen in OS X 10.11.4 and Fabric 1.10.1, in the case where you are ssh'ing to a VM using Vagrant, which does port forwarding from localhost. In this case, localhost was resolving to the IPv6 <code>::1</code> address (not due to <code>/etc/hosts</code> file), and giving this error.</p>
<p>The fix was to force use of IPv4 by using the <code>127.0.0.1</code> address in the fabfile instead of the hostname. Using a hostname in <code>/etc/hosts</code> with this address didn't work.</p>
<p>You might also want to try these <a href="https://github.com/fabric/fabric/issues/575#issuecomment-5079964" rel="nofollow">useful tips for debugging connection issues in Fabric</a>.</p>
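To see which address family your machine picks for 'localhost', here is a short stdlib sketch (<code>resolved_addresses</code> is an illustrative helper, not part of Fabric):

```python
import socket

def resolved_addresses(host):
    """Return the addresses `host` resolves to, in the order clients try them."""
    infos = socket.getaddrinfo(host, None, 0, socket.SOCK_STREAM)
    return [sockaddr[0] for _family, _type, _proto, _canon, sockaddr in infos]

# If '::1' is listed before '127.0.0.1', a client that does not fall back
# to IPv4 will miss a forwarded port that is bound only on 127.0.0.1.
# print(resolved_addresses('localhost'))
```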
| 2 | 2016-05-15T09:24:39Z | [
"python"
] |
Making a facade in Python 2.5 | 1,469,591 | <p>I want to have a Python class that acts as a wrapper for another Python class.</p>
<p>Something like this</p>
<pre><code>class xxx:
name = property( fset=lambda self, v: setattr(self.inner, 'name', v), fget=lambda self: getattr(self.inner, 'name' ))
    def setWrapper( self, obj ):
self.inner = obj
</code></pre>
<p>So when someone says xxx().x = 'hello' I want it to set the value on xxx().inner.whatever and not xxx().x.</p>
<p>Is this possible? I have been trying lambdas, but to no avail. I'm using Python 2.5</p>
<p><strong>Edit:</strong>
Now my wife isn't rushing me to bed, so I can flesh out the code a bit more. I kind of had an idea about what you guys have below, but from the <a href="http://www.python.org/download/releases/2.2.3/descrintro/#property" rel="nofollow">docs</a> it seemed that you should avoid overriding <code>__setattr__/__getattr__</code> and use property instead. If it is not possible to do this via the property function, then I will use <code>__setattr__/__getattr__</code>. Thanks</p>
| 1 | 2009-09-24T03:07:02Z | 1,469,599 | <p>This is where you use the <code>__setattr__</code> and <code>__getattr__</code> methods as documented <a href="http://docs.python.org/reference/datamodel.html#customizing-attribute-access" rel="nofollow">here</a>.</p>
<p>In short, if you do this:</p>
<pre><code>class Wrapper(object):
def __init__(self, wrapped):
object.__setattr__(self, 'inner', wrapped)
def __getattr__(self, attr):
return getattr(self.inner, attr)
def __setattr__(self, attr, value):
setattr(self.inner, attr, value)
</code></pre>
<p>It should do what you're after. I'd highly recommend reading the docs linked to above.</p>
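As a quick sanity check, here is that wrapper in action against a throwaway inner class (names here are purely illustrative):

```python
class Wrapper(object):
    def __init__(self, wrapped):
        # Bypass our own __setattr__ so 'inner' lands on the wrapper itself.
        object.__setattr__(self, 'inner', wrapped)
    def __getattr__(self, attr):
        # Only called when normal lookup on the wrapper fails.
        return getattr(self.inner, attr)
    def __setattr__(self, attr, value):
        # Called for EVERY assignment, so everything is delegated to inner.
        setattr(self.inner, attr, value)

class Inner(object):
    pass

obj = Inner()
facade = Wrapper(obj)
facade.name = 'hello'  # lands on obj, not on the facade
```

After this runs, <code>obj.name</code> is <code>'hello'</code>, and reading <code>facade.name</code> goes back through <code>__getattr__</code> to the same value.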
<p>EDIT: As pointed out, the original version fails due to <code>__setattr__</code> being called unconditionally. I've updated this example. Note that this version is reliant on new-style class behaviour (as indicated by subclassing from object). For old-style I think you'd need to mess with <code>__dict__</code> as follows:</p>
<pre><code>class OldStyleWrapper:
def __init__(self, wrapped):
self.__dict__['inner'] = wrapped
def __getattr__(self, attr):
return getattr(self.inner, attr)
def __setattr__(self, attr, value):
setattr(self.inner, attr, value)
</code></pre>
| 6 | 2009-09-24T03:10:53Z | [
"python"
] |
Making a facade in Python 2.5 | 1,469,591 | <p>I want to have a Python class that acts as a wrapper for another Python class.</p>
<p>Something like this</p>
<pre><code>class xxx:
name = property( fset=lambda self, v: setattr(self.inner, 'name', v), fget=lambda self: getattr(self.inner, 'name' ))
    def setWrapper( self, obj ):
self.inner = obj
</code></pre>
<p>So when someone says xxx().x = 'hello' I want it to set the value on xxx().inner.whatever and not xxx().x.</p>
<p>Is this possible? I have been trying lambdas, but to no avail. I'm using Python 2.5</p>
<p><strong>Edit:</strong>
Now my wife isn't rushing me to bed, so I can flesh out the code a bit more. I kind of had an idea about what you guys have below, but from the <a href="http://www.python.org/download/releases/2.2.3/descrintro/#property" rel="nofollow">docs</a> it seemed that you should avoid overriding <code>__setattr__/__getattr__</code> and use property instead. If it is not possible to do this via the property function, then I will use <code>__setattr__/__getattr__</code>. Thanks</p>
| 1 | 2009-09-24T03:07:02Z | 1,469,619 | <p>As another answer said, <code>__getattr__</code> and <code>__setattr__</code> are the key, <strong>but</strong> you need care when using the latter...:</p>
<pre><code>class Wrapper(object):
def __init__(self, wrapped):
object.__setattr__(self, 'inner', wrapped)
def __getattr__(self, n):
return getattr(self.inner, n)
def __setattr__(self, n, value):
setattr(self.inner, n, value)
</code></pre>
<p>You don't need precautions for <code>self.inner</code> <strong>access</strong>, because <code>__getattr__</code> gets called only for attributes that aren't otherwise present; but <code>__setattr__</code> gets called for EVERY attribute, so when you actually need to <strong>set</strong> <code>self.inner</code> (or any other attribute), you need to explicitly bypass <code>__setattr__</code> (here I'm using <code>object</code> for the purpose, so I also <em>inherit</em> from <code>object</code> -- <strong>highly</strong> advisable anyway, in Python 2.*, otherwise you'd be making an "old-style class" [[the kind which AT LAST disappeared in Python 3]], which you really don't want to...;-).</p>
| 5 | 2009-09-24T03:19:09Z | [
"python"
] |
Making a facade in Python 2.5 | 1,469,591 | <p>I want to have a Python class that acts as a wrapper for another Python class.</p>
<p>Something like this</p>
<pre><code>class xxx:
name = property( fset=lambda self, v: setattr(self.inner, 'name', v), fget=lambda self: getattr(self.inner, 'name' ))
    def setWrapper( self, obj ):
self.inner = obj
</code></pre>
<p>So when someone says xxx().x = 'hello' I want it to set the value on xxx().inner.whatever and not xxx().x.</p>
<p>Is this possible? I have been trying lambdas, but to no avail. I'm using Python 2.5</p>
<p><strong>Edit:</strong>
Now my wife isn't rushing me to bed, so I can flesh out the code a bit more. I kind of had an idea about what you guys have below, but from the <a href="http://www.python.org/download/releases/2.2.3/descrintro/#property" rel="nofollow">docs</a> it seemed that you should avoid overriding <code>__setattr__/__getattr__</code> and use property instead. If it is not possible to do this via the property function, then I will use <code>__setattr__/__getattr__</code>. Thanks</p>
| 1 | 2009-09-24T03:07:02Z | 1,469,753 | <p>I believe both Alex and Ben's answers should edit their setattr call as follows:</p>
<pre><code>setattr(self.inner, n, value)
</code></pre>
| 0 | 2009-09-24T04:19:04Z | [
"python"
] |
Django: ImportError: cannot import name Count | 1,469,614 | <p>I just pulled from my github and tried to setup my application on my Ubuntu (I originally ran my app on a Mac at home).</p>
<p>I re-created the database and reconfigured the settings.py -- also updated the template locations, etc.</p>
<p>However, when I run the server with "python manage.py runserver", I get an error that says:</p>
<pre><code>ImportError: cannot import name Count
</code></pre>
<p>I imported the Count in my views.py to use the annotate():</p>
<pre><code>from django.shortcuts import render_to_response
from django.http import Http404, HttpResponse, HttpResponseRedirect
from django.db.models import Count
from mysite.blog.models import Blog
from mysite.blog.models import Comment
from mysite.blog.forms import CommentForm
def index(request):
#below, I used annotate()
blog_posts = Blog.objects.all().annotate(Count('comment')).order_by('-pub_date')[:5]
return render_to_response('blog/index.html',
{'blog_posts': blog_posts})
</code></pre>
<p>Why is it not working?</p>
<p>Also, if I remove the "import Count" line, the error goes away and my app functions like normal.</p>
<p>Thanks,
Wenbert</p>
<p><strong>UPDATE:</strong></p>
<p>my models.py looks like this:</p>
<pre><code>from django.db import models
class Blog(models.Model):
author = models.CharField(max_length=200)
title = models.CharField(max_length=200)
content = models.TextField()
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.content
def was_published_today(self):
return self.pub_date.date() == datetime.date.today()
class Comment(models.Model):
blog = models.ForeignKey(Blog)
author = models.CharField(max_length=200)
comment = models.TextField()
url = models.URLField()
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.comment
</code></pre>
<p><strong>UPDATE 2</strong></p>
<p>My urls.py looks like this:</p>
<pre><code>from django.conf.urls.defaults import *
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
(r'^admin/(.*)', admin.site.root),
(r'^blog/$','mysite.blog.views.index'),
(r'^display_meta/$','mysite.blog.views.display_meta'),
(r'^blog/post/(?P<blog_id>\d+)/$','mysite.blog.views.post'),
)
</code></pre>
| 3 | 2009-09-24T03:18:20Z | 1,469,658 | <p>I've updated my Django, and it turns out that your import statement is correct: aggregation support (including <code>Count</code>) was only added in Django 1.1, when the module structure changed a bit. Are you sure your Django is the latest version?</p>
| 1 | 2009-09-24T03:41:08Z | [
"python",
"django"
] |
Django: ImportError: cannot import name Count | 1,469,614 | <p>I just pulled from my github and tried to setup my application on my Ubuntu (I originally ran my app on a Mac at home).</p>
<p>I re-created the database and reconfigured the settings.py -- also updated the template locations, etc.</p>
<p>However, when I run the server with "python manage.py runserver", I get an error that says:</p>
<pre><code>ImportError: cannot import name Count
</code></pre>
<p>I imported the Count in my views.py to use the annotate():</p>
<pre><code>from django.shortcuts import render_to_response
from django.http import Http404, HttpResponse, HttpResponseRedirect
from django.db.models import Count
from mysite.blog.models import Blog
from mysite.blog.models import Comment
from mysite.blog.forms import CommentForm
def index(request):
#below, I used annotate()
blog_posts = Blog.objects.all().annotate(Count('comment')).order_by('-pub_date')[:5]
return render_to_response('blog/index.html',
{'blog_posts': blog_posts})
</code></pre>
<p>Why is it not working?</p>
<p>Also, if I remove the "import Count" line, the error goes away and my app functions like normal.</p>
<p>Thanks,
Wenbert</p>
<p><strong>UPDATE:</strong></p>
<p>my models.py looks like this:</p>
<pre><code>from django.db import models
class Blog(models.Model):
author = models.CharField(max_length=200)
title = models.CharField(max_length=200)
content = models.TextField()
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.content
def was_published_today(self):
return self.pub_date.date() == datetime.date.today()
class Comment(models.Model):
blog = models.ForeignKey(Blog)
author = models.CharField(max_length=200)
comment = models.TextField()
url = models.URLField()
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.comment
</code></pre>
<p><strong>UPDATE 2</strong></p>
<p>My urls.py looks like this:</p>
<pre><code>from django.conf.urls.defaults import *
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
(r'^admin/(.*)', admin.site.root),
(r'^blog/$','mysite.blog.views.index'),
(r'^display_meta/$','mysite.blog.views.display_meta'),
(r'^blog/post/(?P<blog_id>\d+)/$','mysite.blog.views.post'),
)
</code></pre>
| 3 | 2009-09-24T03:18:20Z | 1,469,712 | <p>This sounds like you're not using Django 1.1. Double check by opening up the Django shell and running</p>
<pre><code>import django
print django.VERSION
</code></pre>
<p>You should see something like <code>(1, 1, 0, 'final', 0)</code> if you're using 1.1</p>
| 1 | 2009-09-24T04:04:49Z | [
"python",
"django"
] |
Networkx node traversal | 1,469,653 | <p>Using Python's Networkx library, I created an undirected graph to represent a relationship network between various people. A snippet of my code is below:</p>
<pre><code>import networkx as nx
def creategraph(filepath):
G=nx.Graph()
#All the various nodes and edges are added in this stretch of code.
return G
</code></pre>
<p>From what I understand, each node is basically a dictionary. The problem that this presents to me is that I want to perform a different kind of Random Walk algorithm. Now before you jump on me and tell me to use one of the standard functions of the Networkx library, I want to point out that it is a custom algorithm. Suppose I run the creategraph function, and the G object is returned and stored in another object (let's call it X). I want to start off at a node called 'Bob.' Bob is connected to Alice and Joe. Now, I want to reassign Y to point to either Alice or Joe at random (with the data I'm dealing with, a given node could have hundreds of edges leaving it). How do I go about doing this? Also, how do I deal with unicode entries in a given node's dict (like how Alice and Joe are listed below)? </p>
<pre><code>X = creategraph("filename")
Y=X['Bob']
print Y
>> {u'Alice': {}, u'Joe': {}}
</code></pre>
| 0 | 2009-09-24T03:40:25Z | 1,469,670 | <p>The <a href="http://docs.python.org/library/random.html#random.choice" rel="nofollow">choice</a> function in the <a href="http://docs.python.org/library/random.html" rel="nofollow">random</a> module could help with the selection process. You don't really need to worry about the distinction between unicode and string unless you're trying to write them out somewhere, since unicode characters aren't always translatable into the ASCII charset that Python 2 defaults to.</p>
<p>The way you'd use random.choice would be something along the lines of:</p>
<pre><code>Y = Y[random.choice(Y.keys())]
</code></pre>
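Building on that, the custom walk itself can be a few lines. This sketch works with a networkx Graph (where <code>G[node]</code> is the neighbor dict) and equally with a plain dict-of-dicts:

```python
import random

def random_walk(G, start, steps):
    """Hop `steps` times from `start`, picking a uniform random neighbor each time."""
    path = [start]
    current = start
    for _ in range(steps):
        neighbors = list(G[current])  # iterating an adjacency mapping yields neighbor keys
        if not neighbors:             # dead end: no edges leave this node
            break
        current = random.choice(neighbors)
        path.append(current)
    return path
```

random.choice happily returns unicode keys like u'Alice'; they only need encoding if you later write them to an ASCII-only stream.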
| 4 | 2009-09-24T03:45:57Z | [
"python",
"graph",
"nodes",
"networkx"
] |
Understanding Python profile output | 1,469,679 | <p>I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent.</p>
<p>Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". </p>
<p>I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated.</p>
<p>Profile output:</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 116.168 116.168 <string>:1(<module>)
1 0.001 0.001 116.168 116.168 {execfile}
1 0.003 0.003 116.167 116.167 foo.py:1(<module>)
1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown)
1 0.000 0.000 116.109 116.109 plugins.py:148(load)
1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile)
100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot)
100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot)
316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates)
417310/417273 0.111 0.000 0.111 0.000 {len}
200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects}
99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects}
100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects}
1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass)
3 0.016 0.005 0.029 0.010 {__import__}
1 0.022 0.022 0.025 0.025 ballots.py:1(<module>)
1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>)
7 0.000 0.000 0.003 0.000 re.py:227(_compile)
</code></pre>
<p>Code:</p>
<pre><code> def appendBallot(self, ballot, ballotID=None):
"Append a ballot to this Ballots object."
# String representation of ballot for determining whether ballot is unique
ballotString = str(list(ballot))
# Ballot as the appropriate array to conserve memory
ballot = self.newBallot(ballot)
# Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
# Check to see if we have seen this ballot before
if self.uniqueBallotsLookup.has_key(ballotString):
i = self.uniqueBallotsLookup[ballotString]
self.uniqueBallotIDs[i].add(ballotID)
else:
i = len(self.uniqueBallots)
self.uniqueBallotsLookup[ballotString] = i
self.uniqueBallots.append(ballot)
self.uniqueBallotIDs.append(set([ballotID]))
self.ballotOrder.append(i)
</code></pre>
| 6 | 2009-09-24T03:50:00Z | 1,469,697 | <p>Yeah I came across that same problem as well.</p>
<p>The only way I know to work around this is to wrap your large function into several smaller function calls. This will allow the profiler to take into account each of the smaller function calls.</p>
<p>Interestingly enough, the process of doing this (for me, anyway) made it obvious where the inefficiencies were, so I didn't even have to run the profiler.</p>
| 5 | 2009-09-24T03:58:55Z | [
"python",
"profiling",
"profile"
] |
Understanding Python profile output | 1,469,679 | <p>I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent.</p>
<p>Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". </p>
<p>I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated.</p>
<p>Profile output:</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 116.168 116.168 <string>:1(<module>)
1 0.001 0.001 116.168 116.168 {execfile}
1 0.003 0.003 116.167 116.167 foo.py:1(<module>)
1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown)
1 0.000 0.000 116.109 116.109 plugins.py:148(load)
1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile)
100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot)
100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot)
316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates)
417310/417273 0.111 0.000 0.111 0.000 {len}
200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects}
99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects}
100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects}
1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass)
3 0.016 0.005 0.029 0.010 {__import__}
1 0.022 0.022 0.025 0.025 ballots.py:1(<module>)
1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>)
7 0.000 0.000 0.003 0.000 re.py:227(_compile)
</code></pre>
<p>Code:</p>
<pre><code> def appendBallot(self, ballot, ballotID=None):
"Append a ballot to this Ballots object."
# String representation of ballot for determining whether ballot is unique
ballotString = str(list(ballot))
# Ballot as the appropriate array to conserve memory
ballot = self.newBallot(ballot)
# Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
# Check to see if we have seen this ballot before
if self.uniqueBallotsLookup.has_key(ballotString):
i = self.uniqueBallotsLookup[ballotString]
self.uniqueBallotIDs[i].add(ballotID)
else:
i = len(self.uniqueBallots)
self.uniqueBallotsLookup[ballotString] = i
self.uniqueBallots.append(ballot)
self.uniqueBallotIDs.append(set([ballotID]))
self.ballotOrder.append(i)
</code></pre>
| 6 | 2009-09-24T03:50:00Z | 1,469,884 | <p>I've had a look at your code, and it looks like you make a lot of function calls and attribute lookups as part of your 'checking', or looking ahead before leaping. You also have a lot of code dedicated to tracking the same condition, i.e. many bits of code concerned with creating 'unique' IDs.</p>
<p>Instead of trying to assign some kind of unique string to each ballot, couldn't you just use the ballotID (an integer)?</p>
<p>Now you could have a dictionary (uniqueBallotIDs) mapping each ballotID to the actual ballot object.</p>
<p>The process might be something like this:</p>
<pre><code>def appendBallot(self, ballot, ballotID=None):
if ballotID is None:
ballotID = self._getuniqueid() # maybe just has a counter? up to you.
# check to see if we have seen this ballot before.
    if not self._isunique(ballotID):
        pass # code for non-unique ballot ids.
    else:
        pass # code for unique ballot ids.
self.ballotOrder.append(i)
</code></pre>
<p>You might be able to handle some of your worries about the dictionary missing a given key
by using a defaultdict (from the collections module). <a href="http://docs.python.org/library/collections.html" rel="nofollow">collection docs</a></p>
<p><strong>Edit</strong> for completeness I will include a sample usage of the defaultdict:</p>
<pre><code>>>> from collections import defaultdict
>>> ballotIDmap = defaultdict(list)
>>> ballotID, ballot = 1, object() # some nominal ballotID and object.
>>> # I will now try to save my ballotID.
>>> ballotIDmap[ballotID].append(ballot)
>>> ballotIDmap.items()
[(1, [<object object at 0x009BB950>])]
</code></pre>
| 5 | 2009-09-24T05:25:49Z | [
"python",
"profiling",
"profile"
] |
Understanding Python profile output | 1,469,679 | <p>I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent.</p>
<p>Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". </p>
<p>I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated.</p>
<p>Profile output:</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 116.168 116.168 <string>:1(<module>)
1 0.001 0.001 116.168 116.168 {execfile}
1 0.003 0.003 116.167 116.167 foo.py:1(<module>)
1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown)
1 0.000 0.000 116.109 116.109 plugins.py:148(load)
1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile)
100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot)
100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot)
316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates)
417310/417273 0.111 0.000 0.111 0.000 {len}
200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects}
99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects}
100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects}
1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass)
3 0.016 0.005 0.029 0.010 {__import__}
1 0.022 0.022 0.025 0.025 ballots.py:1(<module>)
1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>)
7 0.000 0.000 0.003 0.000 re.py:227(_compile)
</code></pre>
<p>Code:</p>
<pre><code> def appendBallot(self, ballot, ballotID=None):
"Append a ballot to this Ballots object."
# String representation of ballot for determining whether ballot is unique
ballotString = str(list(ballot))
# Ballot as the appropriate array to conserve memory
ballot = self.newBallot(ballot)
# Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
# Check to see if we have seen this ballot before
if self.uniqueBallotsLookup.has_key(ballotString):
i = self.uniqueBallotsLookup[ballotString]
self.uniqueBallotIDs[i].add(ballotID)
else:
i = len(self.uniqueBallots)
self.uniqueBallotsLookup[ballotString] = i
self.uniqueBallots.append(ballot)
self.uniqueBallotIDs.append(set([ballotID]))
self.ballotOrder.append(i)
</code></pre>
| 6 | 2009-09-24T03:50:00Z | 1,470,014 | <p>I have used <a href="http://mg.pov.lt/blog/profiling.html" rel="nofollow" title="Profiling decorator">this decorator</a> in my code, and it helped me with my pyparsing tuning work.</p>
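In the same spirit, here is a minimal version of such a decorator built on the stdlib's cProfile (a sketch, not the exact decorator from the link; shown in modern Python):

```python
import cProfile
import functools
import io
import pstats

def profiled(func):
    """Print a cProfile report for each call to `func`, then return its result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        prof = cProfile.Profile()
        result = prof.runcall(func, *args, **kwargs)
        stream = io.StringIO()
        pstats.Stats(prof, stream=stream).sort_stats('cumulative').print_stats(10)
        print(stream.getvalue())  # top 10 entries by cumulative time
        return result
    return wrapper

@profiled
def work(n):
    return sum(i * i for i in range(n))
```

Dropping @profiled onto a suspect function gives you a per-call breakdown without touching any call sites.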
| 4 | 2009-09-24T06:12:26Z | [
"python",
"profiling",
"profile"
] |
Understanding Python profile output | 1,469,679 | <p>I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent.</p>
<p>Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". </p>
<p>I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated.</p>
<p>Profile output:</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 116.168 116.168 <string>:1(<module>)
1 0.001 0.001 116.168 116.168 {execfile}
1 0.003 0.003 116.167 116.167 foo.py:1(<module>)
1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown)
1 0.000 0.000 116.109 116.109 plugins.py:148(load)
1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile)
100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot)
100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot)
316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates)
417310/417273 0.111 0.000 0.111 0.000 {len}
200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects}
99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects}
100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects}
1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass)
3 0.016 0.005 0.029 0.010 {__import__}
1 0.022 0.022 0.025 0.025 ballots.py:1(<module>)
1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>)
7 0.000 0.000 0.003 0.000 re.py:227(_compile)
</code></pre>
<p>Code:</p>
<pre><code> def appendBallot(self, ballot, ballotID=None):
"Append a ballot to this Ballots object."
# String representation of ballot for determining whether ballot is unique
ballotString = str(list(ballot))
# Ballot as the appropriate array to conserve memory
ballot = self.newBallot(ballot)
# Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
# Check to see if we have seen this ballot before
if self.uniqueBallotsLookup.has_key(ballotString):
i = self.uniqueBallotsLookup[ballotString]
self.uniqueBallotIDs[i].add(ballotID)
else:
i = len(self.uniqueBallots)
self.uniqueBallotsLookup[ballotString] = i
self.uniqueBallots.append(ballot)
self.uniqueBallotIDs.append(set([ballotID]))
self.ballotOrder.append(i)
</code></pre>
| 6 | 2009-09-24T03:50:00Z | 1,470,325 | <p>I'll support Fragsworth by saying that you'll want to split up your function into smaller ones.</p>
<p>Having said that, you are reading the output correctly: the tottime is the one to watch. </p>
<p>Now for where your slowdown is likely to be:</p>
<p>Since there seem to be 100000 calls to appendBallot, and there aren't any obvious loops, I'd suggest the time is going into your assert, because you are executing:</p>
<pre><code>assert(ballotID not in self.ballotIDs)
</code></pre>
<p>This will actually act as a loop. Thus, the first time you call this function, it will iterate through a (probably empty) list, and fail the assertion if the value is found. The 100000th time, it will iterate through the entire list.</p>
<p>And there is actually a possible bug here: if a ballot is deleted, then the next ballot added would have the same id as the last added one (unless that were the one deleted). I think you would be better off using a simple counter. That way you can just increment it each time you add a ballot. Alternatively, you could use a UUID to get unique ids.</p>
<p>Alternatively, if you are looking at some level of persistence, use an ORM, and get it to do the ID generation, and unique checking for you.</p>
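<p>For illustration, here is a minimal sketch of the counter idea (hypothetical names, not the OP's actual class): auto-assigned IDs come from a monotonic counter, so they are unique by construction and the membership scan disappears from the hot path.</p>

```python
import itertools

class Ballots(object):
    def __init__(self):
        # Monotonic counter: every auto-assigned ID is unique by
        # construction, so the O(n) "assert id not in list" scan goes away.
        self._next_id = itertools.count()
        self.ballotIDs = []

    def append_ballot(self, ballot, ballot_id=None):
        if ballot_id is None:
            ballot_id = next(self._next_id)
        self.ballotIDs.append(ballot_id)
        return ballot_id
```

<p>Each call is then O(1) no matter how many ballots have already been added.</p>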
| 3 | 2009-09-24T08:01:11Z | [
"python",
"profiling",
"profile"
] |
Understanding Python profile output | 1,469,679 | <p>I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent.</p>
<p>Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". </p>
<p>I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated.</p>
<p>Profile output:</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 116.168 116.168 <string>:1(<module>)
1 0.001 0.001 116.168 116.168 {execfile}
1 0.003 0.003 116.167 116.167 foo.py:1(<module>)
1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown)
1 0.000 0.000 116.109 116.109 plugins.py:148(load)
1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile)
100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot)
100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot)
316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates)
417310/417273 0.111 0.000 0.111 0.000 {len}
200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects}
99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects}
100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects}
1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass)
3 0.016 0.005 0.029 0.010 {__import__}
1 0.022 0.022 0.025 0.025 ballots.py:1(<module>)
1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>)
7 0.000 0.000 0.003 0.000 re.py:227(_compile)
</code></pre>
<p>Code:</p>
<pre><code> def appendBallot(self, ballot, ballotID=None):
"Append a ballot to this Ballots object."
# String representation of ballot for determining whether ballot is unique
ballotString = str(list(ballot))
# Ballot as the appropriate array to conserve memory
ballot = self.newBallot(ballot)
# Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
# Check to see if we have seen this ballot before
if self.uniqueBallotsLookup.has_key(ballotString):
i = self.uniqueBallotsLookup[ballotString]
self.uniqueBallotIDs[i].add(ballotID)
else:
i = len(self.uniqueBallots)
self.uniqueBallotsLookup[ballotString] = i
self.uniqueBallots.append(ballot)
self.uniqueBallotIDs.append(set([ballotID]))
self.ballotOrder.append(i)
</code></pre>
| 6 | 2009-09-24T03:50:00Z | 1,471,017 | <p>You have two problems in this little slice of code:</p>
<pre><code># Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
</code></pre>
<p>Firstly it appears that self.ballotIDs is a list, so the assert statement will cause quadratic behaviour. As you didn't give any documentation at all for your data structures, it's not possible to be prescriptive, but if the order of appearance doesn't matter, you could use a set instead of a list.</p>
<p>Secondly, the logic (in the absence of documentation on what a ballotID is all about, and what a not-None ballotID arg means) seems seriously bugged:</p>
<pre><code>obj.appendBallot(ballota, 2) # self.ballotIDs -> [2]
obj.appendBallot(ballotb) # self.ballotIDs -> [2, 1]
obj.appendBallot(ballotc) # wants to add 2 but triggers assertion
</code></pre>
<p>Other comments:</p>
<p>Instead of <code>adict.has_key(key)</code>, use <code>key in adict</code> -- it's faster and looks better.</p>
<p>You may like to consider reviewing your data structures ... they appear to be slightly baroque; there may be a fair bit of CPU time involved in building them.</p>
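<p>A small illustration of both suggestions (hypothetical, simplified names): keep the IDs in a set so the membership test is O(1) on average instead of an O(n) list scan, and write <code>key in adict</code> rather than <code>adict.has_key(key)</code>.</p>

```python
ballot_ids = set()  # a set instead of a list

def add_id(ballot_id):
    # Average O(1) membership test; with a list this would scan every element.
    assert ballot_id not in ballot_ids
    ballot_ids.add(ballot_id)

add_id(7)

lookup = {"[1, 2, 3]": 0}
key = "[1, 2, 3]"
if key in lookup:  # preferred over lookup.has_key(key)
    index = lookup[key]
```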
| 2 | 2009-09-24T11:12:31Z | [
"python",
"profiling",
"profile"
] |
Understanding Python profile output | 1,469,679 | <p>I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent.</p>
<p>Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". </p>
<p>I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated.</p>
<p>Profile output:</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 116.168 116.168 <string>:1(<module>)
1 0.001 0.001 116.168 116.168 {execfile}
1 0.003 0.003 116.167 116.167 foo.py:1(<module>)
1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown)
1 0.000 0.000 116.109 116.109 plugins.py:148(load)
1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile)
100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot)
100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot)
316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates)
417310/417273 0.111 0.000 0.111 0.000 {len}
200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects}
99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects}
100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects}
1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses)
1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass)
3 0.016 0.005 0.029 0.010 {__import__}
1 0.022 0.022 0.025 0.025 ballots.py:1(<module>)
1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>)
7 0.000 0.000 0.003 0.000 re.py:227(_compile)
</code></pre>
<p>Code:</p>
<pre><code> def appendBallot(self, ballot, ballotID=None):
"Append a ballot to this Ballots object."
# String representation of ballot for determining whether ballot is unique
ballotString = str(list(ballot))
# Ballot as the appropriate array to conserve memory
ballot = self.newBallot(ballot)
# Assign a ballot ID if one has not been given
if ballotID is None:
ballotID = len(self.ballotIDs)
assert(ballotID not in self.ballotIDs)
self.ballotIDs.append(ballotID)
# Check to see if we have seen this ballot before
if self.uniqueBallotsLookup.has_key(ballotString):
i = self.uniqueBallotsLookup[ballotString]
self.uniqueBallotIDs[i].add(ballotID)
else:
i = len(self.uniqueBallots)
self.uniqueBallotsLookup[ballotString] = i
self.uniqueBallots.append(ballot)
self.uniqueBallotIDs.append(set([ballotID]))
self.ballotOrder.append(i)
</code></pre>
| 6 | 2009-09-24T03:50:00Z | 1,472,429 | <p>Profilers can be like that. The method I use is <a href="http://stackoverflow.com/questions/375913/what-can-i-use-to-profile-c-code-in-linux/378024#378024">this</a>. It gets right to the heart of the problem in no time.</p>
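<p>As a rough illustration of the underlying idea (a hand-rolled stack sampler in modern Python, not the linked answer's exact procedure): pause the program at arbitrary moments and look at the stack; the lines that appear in most samples are where the time is going.</p>

```python
import sys
import threading
import time
import traceback

def start_sampler(samples, interval=0.05, count=5):
    """Snapshot the main thread's stack `count` times from a helper thread."""
    main_id = threading.main_thread().ident
    def run():
        for _ in range(count):
            time.sleep(interval)
            # Grab the main thread's current frame and format its stack.
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                samples.append("".join(traceback.format_stack(frame)))
    t = threading.Thread(target=run)
    t.daemon = True
    t.start()
    return t
```

<p>Start the sampler, run the slow code in the main thread, then inspect <code>samples</code>: the stack lines that recur point at the hot spot.</p>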
| 3 | 2009-09-24T15:26:50Z | [
"python",
"profiling",
"profile"
] |
Better webserver performance for Python Django: Apache mod_wsgi or Lighttpd fastcgi | 1,469,770 | <p>I am currently running a high-traffic python/django website using Apache and mod_wsgi. I'm hoping that there's a faster webserver configuration out there, and I've heard a fair number of recommendations for lighttpd and fastcgi. Is this setup faster than apache+mod_wsgi for serving dynamic django pages (I'm already convinced that lighttpd can serve static files better)? The benchmarks online are either poorly conducted or inconclusive, so I'm looking for some personal anecdotes. What architectural benefits does lighttpd + fastcgi provide? I understand that lighttpd uses epoll, and that a fastcgi process will be multithreaded. Also, having two separate processes, one for lighttpd and one for the python interpreter, will be largely beneficial.</p>
<p>I am aware of tornado and its ability to handle thousands of file descriptors with much fewer threads using epoll and callbacks. However, I'd prefer to stick with django for now.</p>
<p>Thanks,</p>
<p>Ken</p>
| 2 | 2009-09-24T04:26:57Z | 1,469,813 | <p>I don't have thorough benchmarks, but I'm personally convinced that, just like lighttpd can outperform apache on simpler tasks, <code>mod_wsgi</code> gives apache the laurel when it comes to serving Python web apps. (<a href="http://nginx.net/" rel="nofollow"><code>nginx</code></a> with <em>its own</em> <code>mod_wsgi</code> seems to perform even better than apache, but, hey, you didn't ask about <em>that</em>!-).</p>
| 1 | 2009-09-24T04:44:46Z | [
"python",
"django",
"apache",
"fastcgi",
"lighttpd"
] |
Better webserver performance for Python Django: Apache mod_wsgi or Lighttpd fastcgi | 1,469,770 | <p>I am currently running a high-traffic python/django website using Apache and mod_wsgi. I'm hoping that there's a faster webserver configuration out there, and I've heard a fair number of recommendations for lighttpd and fastcgi. Is this setup faster than apache+mod_wsgi for serving dynamic django pages (I'm already convinced that lighttpd can serve static files better)? The benchmarks online are either poorly conducted or inconclusive, so I'm looking for some personal anecdotes. What architectural benefits does lighttpd + fastcgi provide? I understand that lighttpd uses epoll, and that a fastcgi process will be multithreaded. Also, having two separate processes, one for lighttpd and one for the python interpreter, will be largely beneficial.</p>
<p>I am aware of tornado and its ability to handle thousands of file descriptors with much fewer threads using epoll and callbacks. However, I'd prefer to stick with django for now.</p>
<p>Thanks,</p>
<p>Ken</p>
| 2 | 2009-09-24T04:26:57Z | 1,470,382 | <p>I suggest <a href="http://nginx.net/" rel="nofollow">nginx</a> with <a href="http://bitbucket.org/barbuza/superfcgi" rel="nofollow">superfcgi</a> for web sites with high load. nginx is very fast for static files. superfcgi uses multiple processes with multiple threads that shows high stability for python applications in spite of GIL, just set number of processes to number of CPU cores at your server.</p>
| 4 | 2009-09-24T08:18:35Z | [
"python",
"django",
"apache",
"fastcgi",
"lighttpd"
] |
Better webserver performance for Python Django: Apache mod_wsgi or Lighttpd fastcgi | 1,469,770 | <p>I am currently running a high-traffic python/django website using Apache and mod_wsgi. I'm hoping that there's a faster webserver configuration out there, and I've heard a fair number of recommendations for lighttpd and fastcgi. Is this setup faster than apache+mod_wsgi for serving dynamic django pages (I'm already convinced that lighttpd can serve static files better)? The benchmarks online are either poorly conducted or inconclusive, so I'm looking for some personal anecdotes. What architectural benefits does lighttpd + fastcgi provide? I understand that lighttpd uses epoll, and that a fastcgi process will be multithreaded. Also, having two separate processes, one for lighttpd and one for the python interpreter, will be largely beneficial.</p>
<p>I am aware of tornado and its ability to handle thousands of file descriptors with much fewer threads using epoll and callbacks. However, I'd prefer to stick with django for now.</p>
<p>Thanks,</p>
<p>Ken</p>
| 2 | 2009-09-24T04:26:57Z | 1,470,459 | <p>Doesn't answer you question, but do you already use caching for your site? Like memcached? This might give you a better performance gain than going through the mess of switching webservers.</p>
| 1 | 2009-09-24T08:47:14Z | [
"python",
"django",
"apache",
"fastcgi",
"lighttpd"
] |
Better webserver performance for Python Django: Apache mod_wsgi or Lighttpd fastcgi | 1,469,770 | <p>I am currently running a high-traffic python/django website using Apache and mod_wsgi. I'm hoping that there's a faster webserver configuration out there, and I've heard a fair number of recommendations for lighttpd and fastcgi. Is this setup faster than apache+mod_wsgi for serving dynamic django pages (I'm already convinced that lighttpd can serve static files better)? The benchmarks online are either poorly conducted or inconclusive, so I'm looking for some personal anecdotes. What architectural benefits does lighttpd + fastcgi provide? I understand that lighttpd uses epoll, and that a fastcgi process will be multithreaded. Also, having two separate processes, one for lighttpd and one for the python interpreter, will be largely beneficial.</p>
<p>I am aware of tornado and its ability to handle thousands of file descriptors with much fewer threads using epoll and callbacks. However, I'd prefer to stick with django for now.</p>
<p>Thanks,</p>
<p>Ken</p>
| 2 | 2009-09-24T04:26:57Z | 9,142,698 | <p>You can try fcgid (<a href="https://github.com/chenyf/fcgid" rel="nofollow">https://github.com/chenyf/fcgid</a>), a C++ FastCGI server.</p>
| 0 | 2012-02-04T17:11:07Z | [
"python",
"django",
"apache",
"fastcgi",
"lighttpd"
] |
PGP/GnuPG to encrypt | 1,469,798 | <p>I need to use PGP/GnuPG to encrypt. Can you suggest which Python packages to use
for PGP encryption, i.e. where PGP is used on the other side to decrypt?</p>
| 2 | 2009-09-24T04:37:35Z | 1,469,809 | <p>You could try <a href="http://code.google.com/p/python-gnupg/" rel="nofollow">python-gnupg</a>. Encryption is covered in the docs <a href="http://www.red-dove.com/python%5Fgnupg/index.html#encryption-and-decryption" rel="nofollow">here</a>.</p>
| 0 | 2009-09-24T04:43:39Z | [
"python",
"gnupg"
] |
Eclipse+PyDev+GAE memcache error | 1,469,860 | <p>I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to <a href="http://code.google.com/appengine/articles/eclipse.html">this tutorial</a>.</p>
<p>Everything was working until I started to use memcache. PyDev reports the errors and I don't know how to fix them:</p>
<p><img src="http://www.freeimagehosting.net/uploads/fc176c0957.png" alt="alt text" /></p>
<p>Error: Undefined variable from import: get</p>
<p>How do I fix this?
Sure, it is only a PyDev checker problem; the code is correct and runs on GAE.</p>
<p>UPDATE:</p>
<ol>
<li>I'm using PyDev 1.5.0 but experienced the same with 1.4.8.</li>
<li>My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):
<ul>
<li><code>C:\Program Files\Google\google_appengine</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\django</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\webob</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\yaml\lib</code></li>
</ul></li>
</ol>
<p>UPDATE 2:</p>
<p>I took a look at <code>C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py</code> and found <code>get()</code> is not declared as a <code>memcache</code> module-level function. They use the following trick to do that (I hadn't heard of such a possibility):</p>
<pre><code>_CLIENT = None
def setup_client(client_obj):
"""Sets the Client object instance to use for all module-level methods.
Use this method if you want to have customer persistent_id() or
persistent_load() functions associated with your client.
Args:
client_obj: Instance of the memcache.Client object.
"""
global _CLIENT
var_dict = globals()
_CLIENT = client_obj
var_dict['set_servers'] = _CLIENT.set_servers
var_dict['disconnect_all'] = _CLIENT.disconnect_all
var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
var_dict['debuglog'] = _CLIENT.debuglog
var_dict['get'] = _CLIENT.get
var_dict['get_multi'] = _CLIENT.get_multi
var_dict['set'] = _CLIENT.set
var_dict['set_multi'] = _CLIENT.set_multi
var_dict['add'] = _CLIENT.add
var_dict['add_multi'] = _CLIENT.add_multi
var_dict['replace'] = _CLIENT.replace
var_dict['replace_multi'] = _CLIENT.replace_multi
var_dict['delete'] = _CLIENT.delete
var_dict['delete_multi'] = _CLIENT.delete_multi
var_dict['incr'] = _CLIENT.incr
var_dict['decr'] = _CLIENT.decr
var_dict['flush_all'] = _CLIENT.flush_all
var_dict['get_stats'] = _CLIENT.get_stats
setup_client(Client())
</code></pre>
<p>Hmm... Any idea how to force PyDev to recognize that?</p>
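<p>The pattern itself is plain Python: <code>globals()</code> returns the module's namespace dict, and assigning bound methods into it creates module-level functions at runtime, which is exactly why static analysis cannot see them. A stripped-down stand-in (not GAE's real <code>Client</code>) showing the same trick:</p>

```python
class Client(object):
    """Stand-in for memcache.Client, just to demonstrate the trick."""
    def __init__(self):
        self._store = {}
    def set(self, key, value):
        self._store[key] = value
        return True
    def get(self, key):
        return self._store.get(key)

def setup_client(client_obj):
    # Injecting bound methods into the module namespace makes
    # module-level get()/set() work even though no `def get` exists.
    var_dict = globals()
    var_dict['set'] = client_obj.set
    var_dict['get'] = client_obj.get

setup_client(Client())
```

<p>After this runs, <code>get</code> and <code>set</code> are module-level names bound to the client's methods, which is what the real memcache module does for its whole API.</p>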
| 18 | 2009-09-24T05:12:28Z | 1,469,935 | <p>What version of PyDev are you using? A recent one (1.5) or the old one referred to by the Google tutorial?<br />
See <a href="http://sourceforge.net/projects/pydev/forums/forum/293649/topic/3388818" rel="nofollow">this thread</a>.</p>
<p>There is a similar <a href="http://root.cern.ch/root/roottalk/roottalk09/0064.html" rel="nofollow">issue with PyROOT</a></p>
<blockquote>
<p>Since PyDEV plugin does not read <code>$HOME/.pystartup</code>, touching functions/ classes is not a solution. Because it analyze the syntax and structures of python modules to be imported not on-the-fly but when I set the <code>PYTHONPATH</code> from Eclipse's preference panel. </p>
</blockquote>
<p>So does your <code>PYTHONPATH</code> reference the Google library?</p>
<p><hr /></p>
<p>There might be an issue with code completion in 1.5 which could force you to disable code analysis: <a href="http://sourceforge.net/tracker/index.php?func=detail&aid=2855598&group%5Fid=85796&atid=577329" rel="nofollow">Pydev 1.5.0 code anlaysis breaks code pyqt4 code completion - ID: 2855598</a></p>
<blockquote>
<p>have <code>pyqt 4.5.4</code> installed.<br />
Initially I had <code>pydev 1.4.8</code> the open source version installed and code completion worked fine.<br />
After updating to pydev 1.5.0, pyqt code completion stopped functioning.<br />
After disabling the Pydev code analysis in <strong><code>"eclipse preferences -> pydev -> editor -> code analysis -> do code analysis?"</code></strong>, code completion began working again for PyQt
classes etc.</p>
</blockquote>
| 2 | 2009-09-24T05:44:39Z | [
"python",
"eclipse",
"google-app-engine",
"pydev"
] |
Eclipse+PyDev+GAE memcache error | 1,469,860 | <p>I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to <a href="http://code.google.com/appengine/articles/eclipse.html">this tutorial</a>.</p>
<p>Everything was working until I started to use memcache. PyDev reports the errors and I don't know how to fix them:</p>
<p><img src="http://www.freeimagehosting.net/uploads/fc176c0957.png" alt="alt text" /></p>
<p>Error: Undefined variable from import: get</p>
<p>How do I fix this?
Sure, it is only a PyDev checker problem; the code is correct and runs on GAE.</p>
<p>UPDATE:</p>
<ol>
<li>I'm using PyDev 1.5.0 but experienced the same with 1.4.8.</li>
<li>My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):
<ul>
<li><code>C:\Program Files\Google\google_appengine</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\django</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\webob</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\yaml\lib</code></li>
</ul></li>
</ol>
<p>UPDATE 2:</p>
<p>I took a look at <code>C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py</code> and found <code>get()</code> is not declared as a <code>memcache</code> module-level function. They use the following trick to do that (I hadn't heard of such a possibility):</p>
<pre><code>_CLIENT = None
def setup_client(client_obj):
"""Sets the Client object instance to use for all module-level methods.
Use this method if you want to have customer persistent_id() or
persistent_load() functions associated with your client.
Args:
client_obj: Instance of the memcache.Client object.
"""
global _CLIENT
var_dict = globals()
_CLIENT = client_obj
var_dict['set_servers'] = _CLIENT.set_servers
var_dict['disconnect_all'] = _CLIENT.disconnect_all
var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
var_dict['debuglog'] = _CLIENT.debuglog
var_dict['get'] = _CLIENT.get
var_dict['get_multi'] = _CLIENT.get_multi
var_dict['set'] = _CLIENT.set
var_dict['set_multi'] = _CLIENT.set_multi
var_dict['add'] = _CLIENT.add
var_dict['add_multi'] = _CLIENT.add_multi
var_dict['replace'] = _CLIENT.replace
var_dict['replace_multi'] = _CLIENT.replace_multi
var_dict['delete'] = _CLIENT.delete
var_dict['delete_multi'] = _CLIENT.delete_multi
var_dict['incr'] = _CLIENT.incr
var_dict['decr'] = _CLIENT.decr
var_dict['flush_all'] = _CLIENT.flush_all
var_dict['get_stats'] = _CLIENT.get_stats
setup_client(Client())
</code></pre>
<p>Hmm... Any idea how to force PyDev to recognize that?</p>
| 18 | 2009-09-24T05:12:28Z | 2,930,265 | <p>I'm a bit late to the party, but you can add the following comment in all of your files that use memcache to selectively switch off pydev analysis:</p>
<p><code>#@PydevCodeAnalysisIgnore</code></p>
| 4 | 2010-05-28T15:25:20Z | [
"python",
"eclipse",
"google-app-engine",
"pydev"
] |
Eclipse+PyDev+GAE memcache error | 1,469,860 | <p>I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to <a href="http://code.google.com/appengine/articles/eclipse.html">this tutorial</a>.</p>
<p>Everything was working until I started to use memcache. PyDev reports the errors and I don't know how to fix them:</p>
<p><img src="http://www.freeimagehosting.net/uploads/fc176c0957.png" alt="alt text" /></p>
<p>Error: Undefined variable from import: get</p>
<p>How do I fix this?
Sure, it is only a PyDev checker problem; the code is correct and runs on GAE.</p>
<p>UPDATE:</p>
<ol>
<li>I'm using PyDev 1.5.0 but experienced the same with 1.4.8.</li>
<li>My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):
<ul>
<li><code>C:\Program Files\Google\google_appengine</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\django</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\webob</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\yaml\lib</code></li>
</ul></li>
</ol>
<p>UPDATE 2:</p>
<p>I took a look at <code>C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py</code> and found <code>get()</code> is not declared as a <code>memcache</code> module-level function. They use the following trick to do that (I hadn't heard of such a possibility):</p>
<pre><code>_CLIENT = None
def setup_client(client_obj):
"""Sets the Client object instance to use for all module-level methods.
Use this method if you want to have customer persistent_id() or
persistent_load() functions associated with your client.
Args:
client_obj: Instance of the memcache.Client object.
"""
global _CLIENT
var_dict = globals()
_CLIENT = client_obj
var_dict['set_servers'] = _CLIENT.set_servers
var_dict['disconnect_all'] = _CLIENT.disconnect_all
var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
var_dict['debuglog'] = _CLIENT.debuglog
var_dict['get'] = _CLIENT.get
var_dict['get_multi'] = _CLIENT.get_multi
var_dict['set'] = _CLIENT.set
var_dict['set_multi'] = _CLIENT.set_multi
var_dict['add'] = _CLIENT.add
var_dict['add_multi'] = _CLIENT.add_multi
var_dict['replace'] = _CLIENT.replace
var_dict['replace_multi'] = _CLIENT.replace_multi
var_dict['delete'] = _CLIENT.delete
var_dict['delete_multi'] = _CLIENT.delete_multi
var_dict['incr'] = _CLIENT.incr
var_dict['decr'] = _CLIENT.decr
var_dict['flush_all'] = _CLIENT.flush_all
var_dict['get_stats'] = _CLIENT.get_stats
setup_client(Client())
</code></pre>
<p>Hmm... Any idea how to force PyDev to recognize that?</p>
| 18 | 2009-09-24T05:12:28Z | 3,126,195 | <p>If you don't want to turn off all code analysis for your project/module, then just turn it off for that line. <a href="http://stackoverflow.com/questions/2112715/how-do-i-fix-pydev-undefined-variable-from-import-errors/2248987#2248987">This answer</a> explains that you can hit Ctrl+1 to bring up quick fix and insert <code>#@UndefinedVariable</code> at the end of the line.</p>
| 4 | 2010-06-27T02:42:40Z | [
"python",
"eclipse",
"google-app-engine",
"pydev"
] |
Eclipse+PyDev+GAE memcache error | 1,469,860 | <p>I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to <a href="http://code.google.com/appengine/articles/eclipse.html">this tutorial</a>.</p>
<p>Everything was working until I started to use memcache. PyDev reports the errors and I don't know how to fix them:</p>
<p><img src="http://www.freeimagehosting.net/uploads/fc176c0957.png" alt="alt text" /></p>
<p>Error: Undefined variable from import: get</p>
<p>How do I fix this?
Sure, it is only a PyDev checker problem; the code is correct and runs on GAE.</p>
<p>UPDATE:</p>
<ol>
<li>I'm using PyDev 1.5.0 but experienced the same with 1.4.8.</li>
<li>My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):
<ul>
<li><code>C:\Program Files\Google\google_appengine</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\django</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\webob</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\yaml\lib</code></li>
</ul></li>
</ol>
<p>UPDATE 2:</p>
<p>I took a look at <code>C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py</code> and found <code>get()</code> is not declared as a <code>memcache</code> module-level function. They use the following trick to do that (I hadn't heard of such a possibility):</p>
<pre><code>_CLIENT = None
def setup_client(client_obj):
"""Sets the Client object instance to use for all module-level methods.
Use this method if you want to have customer persistent_id() or
persistent_load() functions associated with your client.
Args:
client_obj: Instance of the memcache.Client object.
"""
global _CLIENT
var_dict = globals()
_CLIENT = client_obj
var_dict['set_servers'] = _CLIENT.set_servers
var_dict['disconnect_all'] = _CLIENT.disconnect_all
var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
var_dict['debuglog'] = _CLIENT.debuglog
var_dict['get'] = _CLIENT.get
var_dict['get_multi'] = _CLIENT.get_multi
var_dict['set'] = _CLIENT.set
var_dict['set_multi'] = _CLIENT.set_multi
var_dict['add'] = _CLIENT.add
var_dict['add_multi'] = _CLIENT.add_multi
var_dict['replace'] = _CLIENT.replace
var_dict['replace_multi'] = _CLIENT.replace_multi
var_dict['delete'] = _CLIENT.delete
var_dict['delete_multi'] = _CLIENT.delete_multi
var_dict['incr'] = _CLIENT.incr
var_dict['decr'] = _CLIENT.decr
var_dict['flush_all'] = _CLIENT.flush_all
var_dict['get_stats'] = _CLIENT.get_stats
setup_client(Client())
</code></pre>
<p>Hmm... Any idea how to force PyDev to recognize that?</p>
| 18 | 2009-09-24T05:12:28Z | 4,262,347 | <p>There is a cleaner solution: Try adding GAE's memcache to your forced builtins.</p>
<p>In your PyDev->Interpreter-Python->ForcedBuiltins window, add the "google.appengine.api.memcache" entry and apply.</p>
<p>Double-click on the memcache errors to check them back, they disappear!</p>
<p>Please make sure that the system PYTHONPATH includes the Google App Engine install directory.</p>
| 25 | 2010-11-24T00:09:11Z | [
"python",
"eclipse",
"google-app-engine",
"pydev"
] |
Eclipse+PyDev+GAE memcache error | 1,469,860 | <p>I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to <a href="http://code.google.com/appengine/articles/eclipse.html">this tutorial</a>.</p>
<p>Everything was working until I started to use memcache. PyDev reports the errors and I don't know how to fix them:</p>
<p><img src="http://www.freeimagehosting.net/uploads/fc176c0957.png" alt="alt text" /></p>
<p>Error: Undefined variable from import: get</p>
<p>How do I fix this?
Sure, it is only a PyDev checker problem; the code is correct and runs on GAE.</p>
<p>UPDATE:</p>
<ol>
<li>I'm using PyDev 1.5.0 but experienced the same with 1.4.8.</li>
<li>My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):
<ul>
<li><code>C:\Program Files\Google\google_appengine</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\django</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\webob</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\yaml\lib</code></li>
</ul></li>
</ol>
<p>UPDATE 2:</p>
<p>I took a look at <code>C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py</code> and found <code>get()</code> is not declared as a <code>memcache</code> module-level function. They use the following trick to do that (I hadn't heard of such a possibility):</p>
<pre><code>_CLIENT = None
def setup_client(client_obj):
"""Sets the Client object instance to use for all module-level methods.
Use this method if you want to have customer persistent_id() or
persistent_load() functions associated with your client.
Args:
client_obj: Instance of the memcache.Client object.
"""
global _CLIENT
var_dict = globals()
_CLIENT = client_obj
var_dict['set_servers'] = _CLIENT.set_servers
var_dict['disconnect_all'] = _CLIENT.disconnect_all
var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
var_dict['debuglog'] = _CLIENT.debuglog
var_dict['get'] = _CLIENT.get
var_dict['get_multi'] = _CLIENT.get_multi
var_dict['set'] = _CLIENT.set
var_dict['set_multi'] = _CLIENT.set_multi
var_dict['add'] = _CLIENT.add
var_dict['add_multi'] = _CLIENT.add_multi
var_dict['replace'] = _CLIENT.replace
var_dict['replace_multi'] = _CLIENT.replace_multi
var_dict['delete'] = _CLIENT.delete
var_dict['delete_multi'] = _CLIENT.delete_multi
var_dict['incr'] = _CLIENT.incr
var_dict['decr'] = _CLIENT.decr
var_dict['flush_all'] = _CLIENT.flush_all
var_dict['get_stats'] = _CLIENT.get_stats
setup_client(Client())
</code></pre>
<p>Hmm... Any idea how to force PyDev to recognize that?</p>
| 18 | 2009-09-24T05:12:28Z | 15,501,478 | <p>This worked for me and it's different from the solutions above.
Pretty straightforward:
<a href="http://blog.kicaj.com/fixing-pydev-memcache-unresolved-import/" rel="nofollow">http://blog.kicaj.com/fixing-pydev-memcache-unresolved-import/</a></p>
<p>It just says to add the google_appengine folder to the PyDev Python interpreter's libraries.</p>
| 3 | 2013-03-19T13:56:33Z | [
"python",
"eclipse",
"google-app-engine",
"pydev"
] |
Eclipse+PyDev+GAE memcache error | 1,469,860 | <p>I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to <a href="http://code.google.com/appengine/articles/eclipse.html">this tutorial</a>.</p>
<p>Everything was working until I started to use memcache. PyDev reports the errors and I don't know how to fix them:</p>
<p><img src="http://www.freeimagehosting.net/uploads/fc176c0957.png" alt="alt text" /></p>
<p>Error: Undefined variable from import: get</p>
<p>How to fix this?
Sure, it is only PyDev checker problem. Code is correct and run on GAE.</p>
<p>UPDATE:</p>
<ol>
<li>I'm using PyDev 1.5.0 but experienced the same with 1.4.8.</li>
<li>My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):
<ul>
<li><code>C:\Program Files\Google\google_appengine</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\django</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\webob</code></li>
<li><code>C:\Program Files\Google\google_appengine\lib\yaml\lib</code></li>
</ul></li>
</ol>
<p>UPDATE 2:</p>
<p>I took a look at <code>C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py</code> and found that <code>get()</code> is not declared as a <code>memcache</code> module function. They use the following trick to do that (I hadn't heard of such a possibility):</p>
<pre><code>_CLIENT = None

def setup_client(client_obj):
  """Sets the Client object instance to use for all module-level methods.

  Use this method if you want to have customer persistent_id() or
  persistent_load() functions associated with your client.

  Args:
    client_obj: Instance of the memcache.Client object.
  """
  global _CLIENT
  var_dict = globals()
  _CLIENT = client_obj
  var_dict['set_servers'] = _CLIENT.set_servers
  var_dict['disconnect_all'] = _CLIENT.disconnect_all
  var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
  var_dict['debuglog'] = _CLIENT.debuglog
  var_dict['get'] = _CLIENT.get
  var_dict['get_multi'] = _CLIENT.get_multi
  var_dict['set'] = _CLIENT.set
  var_dict['set_multi'] = _CLIENT.set_multi
  var_dict['add'] = _CLIENT.add
  var_dict['add_multi'] = _CLIENT.add_multi
  var_dict['replace'] = _CLIENT.replace
  var_dict['replace_multi'] = _CLIENT.replace_multi
  var_dict['delete'] = _CLIENT.delete
  var_dict['delete_multi'] = _CLIENT.delete_multi
  var_dict['incr'] = _CLIENT.incr
  var_dict['decr'] = _CLIENT.decr
  var_dict['flush_all'] = _CLIENT.flush_all
  var_dict['get_stats'] = _CLIENT.get_stats

setup_client(Client())
</code></pre>
<p>Hmm... Any idea how to force PyDev to recognize that?</p>
| 18 | 2009-09-24T05:12:28Z | 20,630,869 | <p>I fixed that by adding these few lines at the top of my file:</p>
<pre><code>from google.appengine.api import memcache

# work-around for Eclipse+PyDev+GAE memcache error
if not hasattr(memcache, 'set'):
    Client = None
    memcache.setup_client(Client)
    memcache = Client
</code></pre>
<p>You can comment it out in the production version. </p>
<p>It's only there to keep Eclipse happy and to let code completion work.</p>
| 0 | 2013-12-17T09:50:53Z | [
"python",
"eclipse",
"google-app-engine",
"pydev"
] |
Python ctypes: copying Structure's contents | 1,470,343 | <p>I want to mimic a piece of C code in Python with ctypes, the code is something like:</p>
<pre><code>typedef struct {
    int x;
    int y;
} point;

void copy_point(point *a, point *b) {
    *a = *b;
}
</code></pre>
<p>in ctypes it's not possible to do the following:</p>
<pre><code>from ctypes import *

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

def copy_point(a, b):
    a.contents = b.contents

p0 = pointer(Point())
p1 = pointer(Point())
copy_point(p0, p1)
</code></pre>
<p>as <code>contents</code> is still a Python ctypes Structure object, which is itself managed as a reference.</p>
<p>An obvious workaround would be to manually copy each field (that is represented as immutable python int's), but that doesn't scale with more complex structures. Also, it would need to be done recursively for fields that are not basic, but structured types.</p>
<p>My other option is to use <code>memmove</code> and copy the objects as if they were buffers, but that seems very error prone (as Python is dynamically typed it would be too easy to use it with objects of distinct type and size, leading to memory corruption or segmentation faults)...</p>
<p>Any suggestions?</p>
<p><strong>Edit</strong>:</p>
<p>I could also use a fresh new copy of the structure, so maybe this could be useful:</p>
<pre><code>import copy
p0 = Point()
p1 = copy.deepcopy(p0) #or just a shallow copy for this example
</code></pre>
<p>but I don't know if there might be some kind of bizarre behaviour when copying ctypes proxies as if they were regular Python objects...</p>
| 5 | 2009-09-24T08:07:40Z | 1,470,375 | <p>I'm now also thinking about defining a method like:</p>
<pre><code>def safe_copy(dst, src):
    if type(src) != type(dst) or not isinstance(src, Structure):
        raise Exception("wrong types")
    memmove(addressof(dst), addressof(src), sizeof(src))
</code></pre>
<p>But there might still be nicer options out there...</p>
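<p>For example, a quick sanity check of this idea (the <code>Size</code> struct here is just a made-up second type, used only to trigger the mismatch):</p>

```python
from ctypes import Structure, c_int, memmove, addressof, sizeof

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

class Size(Structure):          # a second, incompatible type for the test
    _fields_ = [("w", c_int), ("h", c_int)]

def safe_copy(dst, src):
    # refuse to copy between different Structure types
    if type(src) != type(dst) or not isinstance(src, Structure):
        raise Exception("wrong types")
    memmove(addressof(dst), addressof(src), sizeof(src))

a = Point(1, 2)
b = Point(7, 8)
safe_copy(a, b)                 # a now holds b's field values
assert (a.x, a.y) == (7, 8)

mismatch_rejected = False
try:
    safe_copy(a, Size(3, 4))    # copying between different types is refused
except Exception:
    mismatch_rejected = True
assert mismatch_rejected
```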
| 0 | 2009-09-24T08:17:34Z | [
"pointers",
"copy",
"python",
"ctypes"
] |
Python ctypes: copying Structure's contents | 1,470,343 | <p>I want to mimic a piece of C code in Python with ctypes, the code is something like:</p>
<pre><code>typedef struct {
    int x;
    int y;
} point;

void copy_point(point *a, point *b) {
    *a = *b;
}
</code></pre>
<p>in ctypes it's not possible to do the following:</p>
<pre><code>from ctypes import *

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

def copy_point(a, b):
    a.contents = b.contents

p0 = pointer(Point())
p1 = pointer(Point())
copy_point(p0, p1)
</code></pre>
<p>as <code>contents</code> is still a Python ctypes Structure object, which is itself managed as a reference.</p>
<p>An obvious workaround would be to manually copy each field (that is represented as immutable python int's), but that doesn't scale with more complex structures. Also, it would need to be done recursively for fields that are not basic, but structured types.</p>
<p>My other option is to use <code>memmove</code> and copy the objects as if they were buffers, but that seems very error prone (as Python is dynamically typed it would be too easy to use it with objects of distinct type and size, leading to memory corruption or segmentation faults)...</p>
<p>Any suggestions?</p>
<p><strong>Edit</strong>:</p>
<p>I could also use a fresh new copy of the structure, so maybe this could be useful:</p>
<pre><code>import copy
p0 = Point()
p1 = copy.deepcopy(p0) #or just a shallow copy for this example
</code></pre>
<p>but I don't know if there might be some kind of bizarre behaviour when copying ctypes proxies as if they were regular Python objects...</p>
| 5 | 2009-09-24T08:07:40Z | 1,470,387 | <p>Pointer operations, as a rule, are not very memory safe. I would create wrapper classes for each struct data type you are interested in and let them handle the pointer copy operations, pretty much like you are doing here. There are lambda and map functions which you can use recursively as syntactic sugar.</p>
| 0 | 2009-09-24T08:20:33Z | [
"pointers",
"copy",
"python",
"ctypes"
] |
Python ctypes: copying Structure's contents | 1,470,343 | <p>I want to mimic a piece of C code in Python with ctypes, the code is something like:</p>
<pre><code>typedef struct {
    int x;
    int y;
} point;

void copy_point(point *a, point *b) {
    *a = *b;
}
</code></pre>
<p>in ctypes it's not possible to do the following:</p>
<pre><code>from ctypes import *

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

def copy_point(a, b):
    a.contents = b.contents

p0 = pointer(Point())
p1 = pointer(Point())
copy_point(p0, p1)
</code></pre>
<p>as <code>contents</code> is still a Python ctypes Structure object, which is itself managed as a reference.</p>
<p>An obvious workaround would be to manually copy each field (that is represented as immutable python int's), but that doesn't scale with more complex structures. Also, it would need to be done recursively for fields that are not basic, but structured types.</p>
<p>My other option is to use <code>memmove</code> and copy the objects as if they were buffers, but that seems very error prone (as Python is dynamically typed it would be too easy to use it with objects of distinct type and size, leading to memory corruption or segmentation faults)...</p>
<p>Any suggestions?</p>
<p><strong>Edit</strong>:</p>
<p>I could also use a fresh new copy of the structure, so maybe this could be useful:</p>
<pre><code>import copy
p0 = Point()
p1 = copy.deepcopy(p0) #or just a shallow copy for this example
</code></pre>
<p>but I don't know if there might be some kind of bizarre behaviour when copying ctypes proxies as if they were regular Python objects...</p>
| 5 | 2009-09-24T08:07:40Z | 1,470,554 | <p>You can use sequence assignment to copy the pointed-to objects (rather than assigning to <code>p.contents</code>, which changes the pointer value):</p>
<pre><code>def copy(dst, src):
    """Copies the contents of src to dst"""
    pointer(dst)[0] = src

# alternately
def new_copy(src):
    """Returns a new ctypes object which is a bitwise copy of an existing one"""
    dst = type(src)()
    pointer(dst)[0] = src
    return dst

# or if using pointers
def ptr_copy(dst_ptr, src_ptr):
    dst_ptr[0] = src_ptr[0]
</code></pre>
<p><code>ctypes</code> will do type checking for you (which isn't fool-proof, but it's better than nothing).</p>
<p>Example of use, with verification that it does in fact work ;):</p>
<pre><code>>>> o1 = Point(1, 1)
>>> o2 = Point(2, 2)
>>> print (o1.x, o1.y, addressof(o1)), (o2.x, o2.y, addressof(o2))
(1, 1, 6474004) (2, 2, 6473524)
>>> copy(o2, o1)
>>> print (o1.x, o1.y, addressof(o1)), (o2.x, o2.y, addressof(o2))
(1, 1, 6474004) (1, 1, 6473524)
>>> o1, o2 = Point(1, 1), Point(2, 2)
>>> print (o1.x, o1.y, addressof(o1)), (o2.x, o2.y, addressof(o2))
(1, 1, 6473844) (2, 2, 6473684)
>>> p1, p2 = pointer(o1), pointer(o2)
>>> addressof(p1.contents), addressof(p2.contents)
(6473844, 6473684)
>>> ptr_copy(p1, p2)
>>> print (o1.x, o1.y, addressof(o1)), (o2.x, o2.y, addressof(o2))
(2, 2, 6473844) (2, 2, 6473684)
>>> addressof(p1.contents), addressof(p2.contents)
(6473844, 6473684)
</code></pre>
| 4 | 2009-09-24T09:10:35Z | [
"pointers",
"copy",
"python",
"ctypes"
] |
Python ctypes: copying Structure's contents | 1,470,343 | <p>I want to mimic a piece of C code in Python with ctypes, the code is something like:</p>
<pre><code>typedef struct {
    int x;
    int y;
} point;

void copy_point(point *a, point *b) {
    *a = *b;
}
</code></pre>
<p>in ctypes it's not possible to do the following:</p>
<pre><code>from ctypes import *

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

def copy_point(a, b):
    a.contents = b.contents

p0 = pointer(Point())
p1 = pointer(Point())
copy_point(p0, p1)
</code></pre>
<p>as <code>contents</code> is still a Python ctypes Structure object, which is itself managed as a reference.</p>
<p>An obvious workaround would be to manually copy each field (that is represented as immutable python int's), but that doesn't scale with more complex structures. Also, it would need to be done recursively for fields that are not basic, but structured types.</p>
<p>My other option is to use <code>memmove</code> and copy the objects as if they were buffers, but that seems very error prone (as Python is dynamically typed it would be too easy to use it with objects of distinct type and size, leading to memory corruption or segmentation faults)...</p>
<p>Any suggestions?</p>
<p><strong>Edit</strong>:</p>
<p>I could also use a fresh new copy of the structure, so maybe this could be useful:</p>
<pre><code>import copy
p0 = Point()
p1 = copy.deepcopy(p0) #or just a shallow copy for this example
</code></pre>
<p>but I don't know if there might be some kind of bizarre behaviour when copying ctypes proxies as if they were regular Python objects...</p>
| 5 | 2009-09-24T08:07:40Z | 1,471,079 | <p><code>memmove</code> is the correct operation here. By setting the <code>argtypes</code> of your CopyPoint function, you can easily enforce type-safety.</p>
<pre><code>from ctypes import *

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

    def __str__(self):
        return "<Point: x=%d, y=%d, addr=%ld>" % (self.x, self.y, addressof(self))

def CopyPoint(a, b):
    memmove(a, b, sizeof(Point))
CopyPoint.argtypes = [POINTER(Point), POINTER(Point)]

pt0 = Point(x=0, y=10)
pt1 = Point(x=5, y=7)

print pt0, pt1
CopyPoint(byref(pt0), byref(pt1))
print pt0, pt1

try:
    CopyPoint(byref(pt0), Point(x=2, y=3))
except ArgumentError as e:
    print "Could not copy!", e
</code></pre>
<p>outputs:</p>
<pre><code>$ python ct.py
<Point: x=0, y=10, addr=3083711192> <Point: x=5, y=7, addr=3083711120>
<Point: x=5, y=7, addr=3083711192> <Point: x=5, y=7, addr=3083711120>
Could not copy! argument 2: <type 'exceptions.TypeError'>: wrong type
</code></pre>
<p>Note that you could easily make a factory to generate this kind of function at run-time based on a specific type, if you need to generalize:</p>
<pre><code>def CopierFactory(typ):
    def f(a, b):
        memmove(a, b, sizeof(typ))
    f.argtypes = [POINTER(typ), POINTER(typ)]
    return f

copy_point = CopierFactory(Point)

a = Point(x=1, y=2)
b = Point(x=-1, y=-1)
print a, b
copy_point(byref(a), byref(b))
print a, b
</code></pre>
<p>output:</p>
<pre><code><Point: x=1, y=2, addr=3085088024> <Point: x=-1, y=-1, addr=3085087952>
<Point: x=-1, y=-1, addr=3085088024> <Point: x=-1, y=-1, addr=3085087952>
</code></pre>
| 5 | 2009-09-24T11:26:51Z | [
"pointers",
"copy",
"python",
"ctypes"
] |
Python ctypes: copying Structure's contents | 1,470,343 | <p>I want to mimic a piece of C code in Python with ctypes, the code is something like:</p>
<pre><code>typedef struct {
    int x;
    int y;
} point;

void copy_point(point *a, point *b) {
    *a = *b;
}
</code></pre>
<p>in ctypes it's not possible to do the following:</p>
<pre><code>from ctypes import *

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

def copy_point(a, b):
    a.contents = b.contents

p0 = pointer(Point())
p1 = pointer(Point())
copy_point(p0, p1)
</code></pre>
<p>as <code>contents</code> is still a Python ctypes Structure object, which is itself managed as a reference.</p>
<p>An obvious workaround would be to manually copy each field (that is represented as immutable python int's), but that doesn't scale with more complex structures. Also, it would need to be done recursively for fields that are not basic, but structured types.</p>
<p>My other option is to use <code>memmove</code> and copy the objects as if they were buffers, but that seems very error prone (as Python is dynamically typed it would be too easy to use it with objects of distinct type and size, leading to memory corruption or segmentation faults)...</p>
<p>Any suggestions?</p>
<p><strong>Edit</strong>:</p>
<p>I could also use a fresh new copy of the structure, so maybe this could be useful:</p>
<pre><code>import copy
p0 = Point()
p1 = copy.deepcopy(p0) #or just a shallow copy for this example
</code></pre>
<p>but I don't know if there might be some kind of bizarre behaviour when copying ctypes proxies as if they were regular Python objects...</p>
| 5 | 2009-09-24T08:07:40Z | 39,343,746 | <p>In Python 3.x, your code runs correctly, as shown below:</p>
<pre><code>>>> from ctypes import *
>>> class Point(Structure):
...     _fields_ = [("x", c_int), ("y", c_int)]
>>> def copy_point(a, b):
...     a.contents = b.contents
>>> p0 = pointer(Point())
>>> p1 = pointer(Point(1,2))
>>> p0.contents.x
0
>>> copy_point(p0,p1)
>>> p0.contents.x
1
</code></pre>
| 0 | 2016-09-06T08:07:15Z | [
"pointers",
"copy",
"python",
"ctypes"
] |
Why is the WindowsError while deleting the temporary file? | 1,470,350 | <ol>
<li>I have created a temporary file. </li>
<li>Added some data to the created file. </li>
<li>Saved it and am now trying to delete it. </li>
</ol>
<p>But I am getting a <code>WindowsError</code>. I have closed the file after editing it. How do I check which other process is accessing the file? </p>
<pre><code>C:\Documents and Settings\Administrator>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile
>>> __, filename = tempfile.mkstemp()
>>> print filename
c:\docume~1\admini~1\locals~1\temp\tmpm5clkb
>>> fptr = open(filename, "wb")
>>> fptr.write("Hello World!")
>>> fptr.close()
>>> import os
>>> os.remove(filename)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
WindowsError: [Error 32] The process cannot access the file because it is being used by
another process: 'c:\\docume~1\\admini~1\\locals~1\\temp\\tmpm5clkb'
</code></pre>
| 3 | 2009-09-24T08:10:28Z | 1,470,370 | <p>I believe you need to release the fptr to close the file cleanly. Try setting fptr to None.</p>
| 0 | 2009-09-24T08:16:06Z | [
"python",
"temporary-files"
] |
Why is the WindowsError while deleting the temporary file? | 1,470,350 | <ol>
<li>I have created a temporary file. </li>
<li>Added some data to the created file. </li>
<li>Saved it and am now trying to delete it. </li>
</ol>
<p>But I am getting a <code>WindowsError</code>. I have closed the file after editing it. How do I check which other process is accessing the file? </p>
<pre><code>C:\Documents and Settings\Administrator>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile
>>> __, filename = tempfile.mkstemp()
>>> print filename
c:\docume~1\admini~1\locals~1\temp\tmpm5clkb
>>> fptr = open(filename, "wb")
>>> fptr.write("Hello World!")
>>> fptr.close()
>>> import os
>>> os.remove(filename)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
WindowsError: [Error 32] The process cannot access the file because it is being used by
another process: 'c:\\docume~1\\admini~1\\locals~1\\temp\\tmpm5clkb'
</code></pre>
| 3 | 2009-09-24T08:10:28Z | 1,470,388 | <p>The file is still open. Do this:</p>
<pre><code>fh, filename = tempfile.mkstemp()
...
os.close(fh)
os.remove(filename)
</code></pre>
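<p>Filled out into a complete round trip, it looks like this (writing through the OS-level handle with <code>os.write</code> is just one option; you could also wrap the handle with <code>os.fdopen</code>):</p>

```python
import os
import tempfile

fh, filename = tempfile.mkstemp()
os.write(fh, b"Hello World!")   # write through the OS-level handle directly
os.close(fh)                    # release the handle that mkstemp opened
os.remove(filename)             # no WindowsError: nothing holds the file open
assert not os.path.exists(filename)
```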
| 6 | 2009-09-24T08:20:54Z | [
"python",
"temporary-files"
] |
Why is the WindowsError while deleting the temporary file? | 1,470,350 | <ol>
<li>I have created a temporary file. </li>
<li>Added some data to the created file. </li>
<li>Saved it and am now trying to delete it. </li>
</ol>
<p>But I am getting a <code>WindowsError</code>. I have closed the file after editing it. How do I check which other process is accessing the file? </p>
<pre><code>C:\Documents and Settings\Administrator>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile
>>> __, filename = tempfile.mkstemp()
>>> print filename
c:\docume~1\admini~1\locals~1\temp\tmpm5clkb
>>> fptr = open(filename, "wb")
>>> fptr.write("Hello World!")
>>> fptr.close()
>>> import os
>>> os.remove(filename)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
WindowsError: [Error 32] The process cannot access the file because it is being used by
another process: 'c:\\docume~1\\admini~1\\locals~1\\temp\\tmpm5clkb'
</code></pre>
| 3 | 2009-09-24T08:10:28Z | 1,472,979 | <p>From the <a href="http://docs.python.org/library/tempfile.html#tempfile.mkstemp">documentation</a>:</p>
<blockquote>
<p>mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. New in version 2.3. </p>
</blockquote>
<p>So, <code>mkstemp</code> returns both the OS file handle to <em>and</em> the filename of the temporary file. When you re-open the temp file, the original returned file handle is still open (no-one stops you from opening twice or more the same file in your program).</p>
<p>If you want to operate on that OS file handle as a python file object, you can:</p>
<pre><code>>>> __, filename = tempfile.mkstemp()
>>> fptr = os.fdopen(__)
</code></pre>
<p>and then continue with your normal code.</p>
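<p>For instance, a complete sketch of this approach (binary mode is assumed here to match the question's <code>open(filename, "wb")</code>):</p>

```python
import os
import tempfile

fd, filename = tempfile.mkstemp()
fptr = os.fdopen(fd, "wb")      # wrap the existing handle instead of open(filename)
fptr.write(b"Hello World!")
fptr.close()                    # also closes the underlying OS handle
os.remove(filename)             # succeeds: no second handle was ever opened
assert not os.path.exists(filename)
```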
| 9 | 2009-09-24T17:02:49Z | [
"python",
"temporary-files"
] |
What is going on with this code from the Google App Engine tutorial | 1,470,405 | <pre><code>import cgi

from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import db


class Greeting(db.Model):
    author = db.UserProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)


class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('<html><body>')

        greetings = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")

        for greeting in greetings:
            if greeting.author:
                self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname())
            else:
                self.response.out.write('An anonymous person wrote:')
            self.response.out.write('<blockquote>%s</blockquote>' %
                                    cgi.escape(greeting.content))

        # Write the submission form and the footer of the page
        self.response.out.write("""
          <form action="/sign" method="post">
            <div><textarea name="content" rows="3" cols="60"></textarea></div>
            <div><input type="submit" value="Sign Guestbook"></div>
          </form>
        </body>
      </html>""")


class Guestbook(webapp.RequestHandler):
    def post(self):
        greeting = Greeting()
        if users.get_current_user():
            greeting.author = users.get_current_user()
        greeting.content = self.request.get('content')
        greeting.put()
        self.redirect('/')


application = webapp.WSGIApplication(
    [('/', MainPage),
     ('/sign', Guestbook)],
    debug=True)


def main():
    run_wsgi_app(application)


if __name__ == "__main__":
    main()
</code></pre>
<p>I am new to Python and a bit confused looking at this Google App Engine tutorial code. In the Greeting class, content = db.StringProperty(multiline=True), but in the Guestbook class, "content" in the greeting object is then set to greeting.content = self.request.get('content').</p>
<p>I don't understand how the "content" variable is being set in the Greeting class as well as the Guestbook class, yet seemingly hold the value and properties of both statements.</p>
| 1 | 2009-09-24T08:25:43Z | 1,470,433 | <p>The first piece of code is a model definition:</p>
<pre><code>class Greeting(db.Model):
    content = db.StringProperty(multiline=True)
</code></pre>
<p>It says that there is a model <code>Greeting</code> that has a <code>StringProperty</code> with the name <code>content</code>.</p>
<p>In the second piece of code, you create an instance of the <code>Greeting</code> model and assign a value to its <code>content</code> property</p>
<pre><code>greeting = Greeting()
greeting.content = self.request.get('content')
</code></pre>
<p>Edit: to answer your question in the comment: this is basic object-oriented programming (or OOP) with a little bit of Python's special sauce (descriptors and metaclasses). If you're new to OOP, read <a href="http://www.voidspace.org.uk/python/articles/OOP.shtml" rel="nofollow">this article</a> to get a little more familiar with the concept (this is a complex subject, there are whole libraries on OOP, so don't expect to understand everything after reading one article). You don't really have to know descriptors or metaclasses, but they can come in handy sometimes. Here's a <a href="http://martyalchin.com/2007/nov/23/python-descriptors-part-1-of-2/" rel="nofollow">good introduction</a> to descriptors.</p>
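<p>To make that concrete, here is a stripped-down descriptor sketch. This is illustrative only, not how <code>db.StringProperty</code> is actually implemented; App Engine uses a metaclass to wire up each property's name, which is faked here by passing the name in explicitly:</p>

```python
class StringProperty(object):
    """Minimal data descriptor, roughly in the spirit of db.StringProperty."""

    def __init__(self, name, multiline=False):
        self.name = name            # GAE derives this via a metaclass instead
        self.multiline = multiline  # per-property configuration lives on the class

    def __get__(self, instance, owner):
        if instance is None:
            return self             # accessed on the class: return the descriptor
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value  # per-instance value lives here


class Greeting(object):
    content = StringProperty('content', multiline=True)


g = Greeting()
g.content = 'Hello'                 # goes through StringProperty.__set__
assert g.content == 'Hello'                 # instance access yields the value
assert Greeting.content.multiline is True   # class access yields the descriptor
```

So the single name <code>content</code> carries the property configuration at class level and the stored value at instance level, which is exactly the duality the question asks about.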
| 2 | 2009-09-24T08:35:18Z | [
"python",
"google-app-engine"
] |
What is going on with this code from the Google App Engine tutorial | 1,470,405 | <pre><code>import cgi

from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import db


class Greeting(db.Model):
    author = db.UserProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)


class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('<html><body>')

        greetings = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")

        for greeting in greetings:
            if greeting.author:
                self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname())
            else:
                self.response.out.write('An anonymous person wrote:')
            self.response.out.write('<blockquote>%s</blockquote>' %
                                    cgi.escape(greeting.content))

        # Write the submission form and the footer of the page
        self.response.out.write("""
          <form action="/sign" method="post">
            <div><textarea name="content" rows="3" cols="60"></textarea></div>
            <div><input type="submit" value="Sign Guestbook"></div>
          </form>
        </body>
      </html>""")


class Guestbook(webapp.RequestHandler):
    def post(self):
        greeting = Greeting()
        if users.get_current_user():
            greeting.author = users.get_current_user()
        greeting.content = self.request.get('content')
        greeting.put()
        self.redirect('/')


application = webapp.WSGIApplication(
    [('/', MainPage),
     ('/sign', Guestbook)],
    debug=True)


def main():
    run_wsgi_app(application)


if __name__ == "__main__":
    main()
</code></pre>
<p>I am new to Python and a bit confused looking at this Google App Engine tutorial code. In the Greeting class, content = db.StringProperty(multiline=True), but in the Guestbook class, "content" in the greeting object is then set to greeting.content = self.request.get('content').</p>
<p>I don't understand how the "content" variable is being set in the Greeting class as well as the Guestbook class, yet seemingly hold the value and properties of both statements.</p>
| 1 | 2009-09-24T08:25:43Z | 1,470,455 | <p><a href="http://stackoverflow.com/users/45691/piquadrat">piquadrat</a>'s answer is good. You can read more about App Engine models <a href="http://code.google.com/appengine/docs/python/datastore/modelclass.html" rel="nofollow">here</a>.</p>
| 0 | 2009-09-24T08:46:13Z | [
"python",
"google-app-engine"
] |
What is going on with this code from the Google App Engine tutorial | 1,470,405 | <pre><code>import cgi

from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import db


class Greeting(db.Model):
    author = db.UserProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)


class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('<html><body>')

        greetings = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")

        for greeting in greetings:
            if greeting.author:
                self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname())
            else:
                self.response.out.write('An anonymous person wrote:')
            self.response.out.write('<blockquote>%s</blockquote>' %
                                    cgi.escape(greeting.content))

        # Write the submission form and the footer of the page
        self.response.out.write("""
          <form action="/sign" method="post">
            <div><textarea name="content" rows="3" cols="60"></textarea></div>
            <div><input type="submit" value="Sign Guestbook"></div>
          </form>
        </body>
      </html>""")


class Guestbook(webapp.RequestHandler):
    def post(self):
        greeting = Greeting()
        if users.get_current_user():
            greeting.author = users.get_current_user()
        greeting.content = self.request.get('content')
        greeting.put()
        self.redirect('/')


application = webapp.WSGIApplication(
    [('/', MainPage),
     ('/sign', Guestbook)],
    debug=True)


def main():
    run_wsgi_app(application)


if __name__ == "__main__":
    main()
</code></pre>
<p>I am new to Python and a bit confused looking at this Google App Engine tutorial code. In the Greeting class, content = db.StringProperty(multiline=True), but in the Guestbook class, "content" in the greeting object is then set to greeting.content = self.request.get('content').</p>
<p>I don't understand how the "content" variable is being set in the Greeting class as well as the Guestbook class, yet seemingly hold the value and properties of both statements.</p>
| 1 | 2009-09-24T08:25:43Z | 1,470,480 | <pre><code>class Greeting(db.Model):
    author = db.UserProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)
</code></pre>
<p>This code instructs the ORM (object relational mapper) to create a table in the database with the fields "author", "content" and "date". Notice how class Greeting is inherited from db.Model: It's a model for a table to be created in the database.</p>
<pre><code>class Guestbook(webapp.RequestHandler):
    def post(self):
        greeting = Greeting()
        if users.get_current_user():
            greeting.author = users.get_current_user()
        greeting.content = self.request.get('content')
        greeting.put()
        self.redirect('/')
</code></pre>
<p>Guestbook is a request handler (notice which class it's inherited from). The post() method of a request handler is called on the event of a POST request. There can be several other methods in this class as well to handle different kinds of requests. Now notice what the post method does: It instantiates the Greeting class- we now have an instance, greeting object. Next, the "author" and "content" of the greeting object are set from the request information. Finally, greeting.put() writes to the database. Additionally, note that "date" is also set automatically to the date/time of writing the object to the database.</p>
| 2 | 2009-09-24T08:51:51Z | [
"python",
"google-app-engine"
] |
Create new List object in python | 1,470,446 | <p>I am a beginner in Python. I want to create a new list object in Python.</p>
<p><strong>My Code:</strong></p>
<pre><code>recordList = []
mappedDictionay = {}
sectionGroupName = None

for record in recordCols:
    item = record
    print item
    if not sectionGroupName == record[0]:
        sectionGroupName = record[0]
        del recordList[0:] # Here I want to create new list object for recordList
        recordList.append(item)
        mappedDictionay[sectionGroupName] = recordList
    else:
        recordList.append(tempItem)
</code></pre>
| 3 | 2009-09-24T08:42:50Z | 1,470,460 | <p>It's not that easy to understand your question, especially since your code lost its formatting, but you can create new list objects quite easily. The following assigns a new list object to the variable recordList:</p>
<pre><code>recordList = list()
</code></pre>
<p>You could also use</p>
<pre><code>recordList = []
</code></pre>
<p>[] and list() are equivalent in this case.</p>
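<p>A quick demonstration of that equivalence (and of the fact that each call still produces a distinct object):</p>

```python
a = list()
b = []
assert a == b                        # equal in value: both are empty lists
assert type(a) is list and type(b) is list
assert a is not b                    # but two distinct list objects
```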
| 9 | 2009-09-24T08:47:29Z | [
"python",
"list"
] |
Create new List object in python | 1,470,446 | <p>I am a beginner in Python. I want to create a new list object in Python.</p>
<p><strong>My Code:</strong></p>
<pre><code>recordList = []
mappedDictionay = {}
sectionGroupName = None

for record in recordCols:
    item = record
    print item
    if not sectionGroupName == record[0]:
        sectionGroupName = record[0]
        del recordList[0:] # Here I want to create new list object for recordList
        recordList.append(item)
        mappedDictionay[sectionGroupName] = recordList
    else:
        recordList.append(tempItem)
</code></pre>
| 3 | 2009-09-24T08:42:50Z | 1,470,463 | <p>Python is <a href="http://en.wikipedia.org/wiki/Garbage%5Fcollection%5F%28computer%5Fscience%29" rel="nofollow">garbage-collected</a>. Just do</p>
<pre><code>recordList = []
</code></pre>
<p>and you'll have a new empty list.</p>
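<p>One subtlety worth noting for the question's code: <code>del recordList[0:]</code> empties the existing list object in place, which also empties it inside any dictionary that refers to it, whereas rebinding with <code>recordList = []</code> leaves the old object untouched. A small sketch:</p>

```python
mapped = {}
recordList = ['a']
mapped['g1'] = recordList        # the dict holds a reference to the same list

del recordList[0:]               # clears the SAME object the dict refers to
assert mapped['g1'] == []

recordList = ['b']               # rebinds the name to a brand-new list
assert mapped['g1'] == []        # the dict still holds the old (cleared) list
assert mapped['g1'] is not recordList
```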
| 3 | 2009-09-24T08:47:50Z | [
"python",
"list"
] |
Create new List object in python | 1,470,446 | <p>I am a beginner in Python. I want to create a new list object in Python.</p>
<p><strong>My Code:</strong></p>
<pre><code>recordList = []
mappedDictionay = {}
sectionGroupName = None

for record in recordCols:
    item = record
    print item
    if not sectionGroupName == record[0]:
        sectionGroupName = record[0]
        del recordList[0:] # Here I want to create new list object for recordList
        recordList.append(item)
        mappedDictionay[sectionGroupName] = recordList
    else:
        recordList.append(tempItem)
</code></pre>
| 3 | 2009-09-24T08:42:50Z | 1,470,650 | <p>Don't use <code>del</code>. Period. It's an "advanced" thing.</p>
<pre><code>from collections import defaultdict

mappedDictionay = defaultdict(list)  # mappedDictionary is a poor name

sectionGroupName = None
for record in recordCols:
    mappedDictionay[record[0]].append(record)
</code></pre>
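<p>With some made-up sample rows, the grouping works out like this:</p>

```python
from collections import defaultdict

# hypothetical rows in the shape the question implies: (group key, score, tag)
recordCols = [('3', 1.5, 'a'), ('3', 1.6, 'b'), ('4', 1.2, 'q')]

mapped = defaultdict(list)
for record in recordCols:
    mapped[record[0]].append(record)   # every key starts as a fresh empty list

assert mapped['3'] == [('3', 1.5, 'a'), ('3', 1.6, 'b')]
assert mapped['4'] == [('4', 1.2, 'q')]
```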
| 4 | 2009-09-24T09:32:25Z | [
"python",
"list"
] |
Create new List object in python | 1,470,446 | <p>I am a beginner in Python. I want to create a new list object in Python.</p>
<p><strong>My Code:</strong></p>
<pre><code>recordList = []
mappedDictionay = {}
sectionGroupName = None

for record in recordCols:
    item = record
    print item
    if not sectionGroupName == record[0]:
        sectionGroupName = record[0]
        del recordList[0:] # Here I want to create new list object for recordList
        recordList.append(item)
        mappedDictionay[sectionGroupName] = recordList
    else:
        recordList.append(tempItem)
</code></pre>
| 3 | 2009-09-24T08:42:50Z | 1,470,723 | <p>You can use <code>list()</code> or simply <code>[]</code> to create a new list.
However, I think what you are trying to achieve can be solved simply by using <code>groupby</code>:</p>
<pre><code>from itertools import groupby
mappedIterator = groupby(recordCols, lambda x: x[0])
</code></pre>
<p>or</p>
<pre><code>from itertools import groupby
from operator import itemgetter
mappedIterator = groupby(recordCols, itemgetter(0))
</code></pre>
<p>if you prefer.</p>
<p>The <code>groupby</code> function will return an iterator rather than a dictionary, where each item is of the form <code>(category, sub-iterator-over-items-in-that-category)</code>. Note that <code>groupby</code> only groups <em>consecutive</em> items, so <code>recordCols</code> must already be sorted by its first element for this to work.</p>
<p>If you really want to convert it into a dictionary like you have it in your code, you can run the following afterwards:</p>
<pre><code>mappedDictionary = dict(( (x[0], list(x[1])) for x in mappedIterator ))
</code></pre>
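A runnable sketch (Python 3 syntax, made-up sample rows standing in for `recordCols`) of the whole pipeline, including the dictionary conversion:

```python
from itertools import groupby
from operator import itemgetter

# Made-up rows; groupby only groups *consecutive* items,
# so the data must already be ordered by the grouping key.
record_cols = [
    ("sectionA", "row1"),
    ("sectionA", "row2"),
    ("sectionB", "row3"),
]

mapped = dict((key, list(rows)) for key, rows in groupby(record_cols, itemgetter(0)))
print(mapped)
```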
| 0 | 2009-09-24T09:52:27Z | [
"python",
"list"
] |
Create new List object in python | 1,470,446 | <p>I am a beginner in Python. I want to create a new list object in Python.</p>
<p><strong>My Code:</strong></p>
<pre><code>recordList = []
mappedDictionay = {}
sectionGroupName = None
for record in recordCols:
    item = record
    print item
    if not sectionGroupName == record[0]:
        sectionGroupName = record[0]
        del recordList[0:]  # Here I want to create new list object for recordList
        recordList.append(item)
        mappedDictionay[sectionGroupName] = recordList
    else:
        recordList.append(tempItem)
</code></pre>
| 3 | 2009-09-24T08:42:50Z | 36,386,003 | <p>You want to create a new list?<br>
It's easy: for eg, you wanna make a list of days, write the following: </p>
<p><code>new_list=['mon','tue','wed','thurs','fri',sat','sun']</code> </p>
<p>Here the name of the list is <code>new_list</code> and it contains the name of different week days.</p>
<p><strong>Note</strong>: never forget use [ ] brackets to make a list, using ( ) will create a tuple, a very different object and { } will create a dictionary object which will raise a error instead.</p>
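To see the difference between the bracket types concretely (Python 3 syntax, made-up values):

```python
days_list = ['mon', 'tue']      # [ ] builds a list
days_tuple = ('mon', 'tue')     # ( ) builds a tuple
days_set = {'mon', 'tue'}       # { } with bare values builds a set
days_dict = {'mon': 1}          # { } with key: value pairs builds a dict

print(type(days_list).__name__, type(days_tuple).__name__,
      type(days_set).__name__, type(days_dict).__name__)
```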
| 0 | 2016-04-03T13:31:24Z | [
"python",
"list"
] |
How to check which part of app is consuming CPU? | 1,470,453 | <p>I have a wxPython app which has many worker threads, idle event cycles, and other such event-handling code that can consume CPU; right now, when the app is not being interacted with, it consumes about 8-10% CPU.</p>
<p>Question:</p>
<p>Is there a tool which can tell which part/threads of my app is consuming most CPU? If there are no such generic tools, I am willing to know the approaches you usually take to tackle such scenarios? e.g. disabling part of app, trace etc</p>
<p><strong>Edit:</strong> Maybe my question's language is ambiguous. I do not want to know which function or code block in my code takes up the most resources; for that I can use a profiler.
What I want to know is: when I run my app and see CPU usage at 8-10%, is there a way to know which parts or threads of my app are using up that 10% CPU?
Basically, at that instant I want to know which part(s) of the code are running.</p>
| 0 | 2009-09-24T08:45:56Z | 1,470,508 | <p>If all your threads have unique start methods you could use the <a href="http://docs.python.org/library/profile.html" rel="nofollow">profiler that comes with Python</a>.</p>
<p>If you're on a Mac you should check out the Instruments app. You could also use <a href="http://en.wikipedia.org/wiki/DTrace" rel="nofollow">dtrace</a> for Linux.</p>
| 1 | 2009-09-24T09:00:50Z | [
"python",
"wxpython",
"cpu-usage"
] |
How to check which part of app is consuming CPU? | 1,470,453 | <p>I have a wxPython app which has many worker threads, idle event cycles, and other such event-handling code that can consume CPU; right now, when the app is not being interacted with, it consumes about 8-10% CPU.</p>
<p>Question:</p>
<p>Is there a tool which can tell which part/threads of my app is consuming most CPU? If there are no such generic tools, I am willing to know the approaches you usually take to tackle such scenarios? e.g. disabling part of app, trace etc</p>
<p><strong>Edit:</strong> Maybe my question's language is ambiguous. I do not want to know which function or code block in my code takes up the most resources; for that I can use a profiler.
What I want to know is: when I run my app and see CPU usage at 8-10%, is there a way to know which parts or threads of my app are using up that 10% CPU?
Basically, at that instant I want to know which part(s) of the code are running.</p>
| 0 | 2009-09-24T08:45:56Z | 1,470,720 | <p>This isn't very practical at a language-agnostic level. Take away the language and all you have left is a load of machine code instructions sprinkled around with system calls. You could use strace on Linux or ProcessExplorer on Windows to try and guess what is going on from those system calls, but it would make far more sense to just use a profiler. If you do have access to the language, then there are a variety of things you could do (extra logging, random pausing in the debugger) but in that situation the profiler is still your best tool.</p>
| 0 | 2009-09-24T09:51:32Z | [
"python",
"wxpython",
"cpu-usage"
] |
How to check which part of app is consuming CPU? | 1,470,453 | <p>I have a wxPython app which has many worker threads, idle event cycles, and other such event-handling code that can consume CPU; right now, when the app is not being interacted with, it consumes about 8-10% CPU.</p>
<p>Question:</p>
<p>Is there a tool which can tell which part/threads of my app is consuming most CPU? If there are no such generic tools, I am willing to know the approaches you usually take to tackle such scenarios? e.g. disabling part of app, trace etc</p>
<p><strong>Edit:</strong> Maybe my question's language is ambiguous. I do not want to know which function or code block in my code takes up the most resources; for that I can use a profiler.
What I want to know is: when I run my app and see CPU usage at 8-10%, is there a way to know which parts or threads of my app are using up that 10% CPU?
Basically, at that instant I want to know which part(s) of the code are running.</p>
| 0 | 2009-09-24T08:45:56Z | 1,470,876 | <p>I am able to solve my problem by writing a modifed version of python <a href="http://docs.python.org/library/trace.html" rel="nofollow">trace</a> module , which can be enabled disabled, basically modify <code>Trace</code> class something like this</p>
<pre><code>import sys
import trace

class MyTrace(trace.Trace):
    def __init__(self, *args, **kwargs):
        trace.Trace.__init__(self, *args, **kwargs)
        self.enabled = False

    def localtrace_trace_and_count(self, *args, **kwargs):
        if not self.enabled:
            return None
        return trace.Trace.localtrace_trace_and_count(self, *args, **kwargs)

tracer = MyTrace(ignoredirs=[sys.prefix, sys.exec_prefix])

def main():
    a = 1
    tracer.enabled = True
    a = 2
    tracer.enabled = False
    a = 3

# run the new command using the given tracer
tracer.run('main()')
</code></pre>
<p>Output:</p>
<pre><code> --- modulename: untitled-2, funcname: main
untitled-2.py(19): a = 2
untitled-2.py(20): tracer.enabled = False
</code></pre>
<p>Enabling it at the critical points helps me to trace line by line which code statements are executing most.</p>
| 0 | 2009-09-24T10:38:28Z | [
"python",
"wxpython",
"cpu-usage"
] |
How to check which part of app is consuming CPU? | 1,470,453 | <p>I have a wxPython app which has many worker threads, idle event cycles, and other such event-handling code that can consume CPU; right now, when the app is not being interacted with, it consumes about 8-10% CPU.</p>
<p>Question:</p>
<p>Is there a tool which can tell which part/threads of my app is consuming most CPU? If there are no such generic tools, I am willing to know the approaches you usually take to tackle such scenarios? e.g. disabling part of app, trace etc</p>
<p><strong>Edit:</strong> Maybe my question's language is ambiguous. I do not want to know which function or code block in my code takes up the most resources; for that I can use a profiler.
What I want to know is: when I run my app and see CPU usage at 8-10%, is there a way to know which parts or threads of my app are using up that 10% CPU?
Basically, at that instant I want to know which part(s) of the code are running.</p>
| 0 | 2009-09-24T08:45:56Z | 1,608,455 | <p>On Windows XP and higher, <a href="http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx" rel="nofollow">Process Explorer</a> will show all processes, and you can view the properties of each process and see open threads. It shows thread ID, start time, state, kernel time, user time, and more.</p>
| 0 | 2009-10-22T16:35:43Z | [
"python",
"wxpython",
"cpu-usage"
] |
How can I tell the Django ORM to reverse the order of query results? | 1,470,676 | <p>In my quest to understand queries against Django models, I've been trying to get the last 3 added valid Avatar models with a query like:</p>
<pre><code>newUserAv = Avatar.objects.filter(valid=True).order_by("date")[:3]
</code></pre>
<p>However, this instead gives me the first three avatars added ordered by date. I'm sure this is simple, but I've had trouble finding it in the Django docs: how do I select the <em>last</em> three avatar objects instead of the first three?</p>
| 26 | 2009-09-24T09:39:09Z | 1,470,680 | <p>Put a hyphen before the field name.</p>
<pre><code>.order_by('-date')
</code></pre>
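A plain-Python analogy (not the ORM itself, and with hypothetical data) of what `order_by('-date')[:3]` does: sort descending on the key, then take the first three.

```python
# Hypothetical rows standing in for Avatar objects.
avatars = [
    {"id": 1, "date": "2009-01-01"},
    {"id": 2, "date": "2009-03-01"},
    {"id": 3, "date": "2009-02-01"},
    {"id": 4, "date": "2009-04-01"},
]

newest_three = sorted(avatars, key=lambda a: a["date"], reverse=True)[:3]
print([a["id"] for a in newest_three])  # [4, 2, 3]
```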
| 60 | 2009-09-24T09:40:23Z | [
"python",
"django"
] |
How can I tell the Django ORM to reverse the order of query results? | 1,470,676 | <p>In my quest to understand queries against Django models, I've been trying to get the last 3 added valid Avatar models with a query like:</p>
<pre><code>newUserAv = Avatar.objects.filter(valid=True).order_by("date")[:3]
</code></pre>
<p>However, this instead gives me the first three avatars added ordered by date. I'm sure this is simple, but I've had trouble finding it in the Django docs: how do I select the <em>last</em> three avatar objects instead of the first three?</p>
| 26 | 2009-09-24T09:39:09Z | 1,470,683 | <p>Reverse order with <code>.reverse()</code> </p>
<pre><code>newUserAv = Avatar.objects.filter(valid=True).order_by("date").reverse()[:3]
</code></pre>
| -2 | 2009-09-24T09:41:48Z | [
"python",
"django"
] |
Django: disallow can_delete on GenericStackedInline | 1,470,811 | <p>I've built this model which contains a generic foreign key:</p>
<pre><code>class MyModel(models.Model):
    content_type = models.ForeignKey(ContentType, verbose_name=_('content type'))
    object_id = models.PositiveIntegerField(_('object id'))
    content_object = generic.GenericForeignKey('content_type', 'object_id')
</code></pre>
<p>Next I've made a generic stacked inline to put it in any ModelAdmin class:</p>
<pre><code>class MyModelStackedInline(generic.GenericStackedInline):
    model = MyModel
    formset = generic.generic_inlineformset_factory(MyModel, can_delete=False)
    extra = 0

class SomeOtherModelAdmin(admin.ModelAdmin):
    inlines = [MyModelStackedInline]
</code></pre>
<p>However, despite the <code>can_delete=False</code> arg passed to <code>generic_inlineformset_factory</code>, I always see a <code>Delete</code> checkbox in my admin change_form.</p>
<p>Here is an example: <a href="http://img8.imageshack.us/img8/3323/screenshotbe.png">http://img8.imageshack.us/img8/3323/screenshotbe.png</a></p>
<p>Do you know how to remove this checkbox ?</p>
<p>Thank you :)</p>
| 15 | 2009-09-24T10:16:36Z | 1,474,066 | <p>Update 2016: as per Stan's answer below, modern versions of django let you set <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#django.contrib.admin.InlineModelAdmin.can_delete" rel="nofollow"><code>can_delete = True</code></a> on the <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/contenttypes/#django.contrib.contenttypes.admin.GenericStackedInline" rel="nofollow"><code>GenericStackedInline</code></a> subclass, as it inherits from <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#django.contrib.admin.InlineModelAdmin.can_delete" rel="nofollow"><code>InlineModelAdmin</code></a></p>
<hr>
<p>I've run into this before - for some reason passing <code>can_delete</code> as an argument doesn't work, but setting it in the formset's <code>__init__</code> method does. Try this:</p>
<pre><code>class MyInlineFormset(generic.generic_inlineformset_factory(MyModel)):
    def __init__(self, *args, **kwargs):
        super(MyInlineFormset, self).__init__(*args, **kwargs)
        self.can_delete = False
</code></pre>
<p>then in your admin inline class:</p>
<pre><code>class MyModelStackedInline(generic.GenericStackedInline):
    model = MyModel
    formset = MyInlineFormset
    extra = 0
</code></pre>
| 7 | 2009-09-24T20:48:45Z | [
"python",
"django",
"django-admin",
"generics",
"formset"
] |
Django: disallow can_delete on GenericStackedInline | 1,470,811 | <p>I've built this model which contains a generic foreign key:</p>
<pre><code>class MyModel(models.Model):
    content_type = models.ForeignKey(ContentType, verbose_name=_('content type'))
    object_id = models.PositiveIntegerField(_('object id'))
    content_object = generic.GenericForeignKey('content_type', 'object_id')
</code></pre>
<p>Next I've made a generic stacked inline to put it in any ModelAdmin class:</p>
<pre><code>class MyModelStackedInline(generic.GenericStackedInline):
    model = MyModel
    formset = generic.generic_inlineformset_factory(MyModel, can_delete=False)
    extra = 0

class SomeOtherModelAdmin(admin.ModelAdmin):
    inlines = [MyModelStackedInline]
</code></pre>
<p>However, despite the <code>can_delete=False</code> arg passed to <code>generic_inlineformset_factory</code>, I always see a <code>Delete</code> checkbox in my admin change_form.</p>
<p>Here is an example: <a href="http://img8.imageshack.us/img8/3323/screenshotbe.png">http://img8.imageshack.us/img8/3323/screenshotbe.png</a></p>
<p>Do you know how to remove this checkbox ?</p>
<p>Thank you :)</p>
| 15 | 2009-09-24T10:16:36Z | 4,895,968 | <p>Maybe It is a post '09 feature, but you can specify that without overriding the <code>__init__()</code> method :</p>
<pre><code>class StupidCarOptionsInline(admin.StackedInline):
    model = models.StupidOption
    form = StupidCarOptionAdminForm
    extra = 0
    can_delete = False
</code></pre>
| 19 | 2011-02-04T08:33:29Z | [
"python",
"django",
"django-admin",
"generics",
"formset"
] |
Twisted network client with multiprocessing workers? | 1,470,850 | <p>So, I've got an application that uses Twisted + Stomper as a STOMP client which farms out work to a multiprocessing.Pool of workers.</p>
<p>This appears to work ok when I just use a python script to fire this up, which (simplified) looks something like this:</p>
<pre><code># stompclient.py
logging.config.fileConfig(config_path)
logger = logging.getLogger(__name__)
# Add observer to make Twisted log via python
twisted.python.log.PythonLoggingObserver().start()
# initialize the process pool. (child processes get forked off immediately)
pool = multiprocessing.Pool(processes=processes)
StompClientFactory.username = username
StompClientFactory.password = password
StompClientFactory.destination = destination
reactor.connectTCP(host, port, StompClientFactory())
reactor.run()
</code></pre>
<p>As this gets packaged for deployment, I thought I would take advantage of the twistd script and run this from a tac file.</p>
<p>Here's my very-similar-looking tac file:</p>
<pre><code># stompclient.tac
logging.config.fileConfig(config_path)
logger = logging.getLogger(__name__)
# Add observer to make Twisted log via python
twisted.python.log.PythonLoggingObserver().start()
# initialize the process pool. (child processes get forked off immediately)
pool = multiprocessing.Pool(processes=processes)
StompClientFactory.username = username
StompClientFactory.password = password
StompClientFactory.destination = destination
application = service.Application('myapp')
service = internet.TCPClient(host, port, StompClientFactory())
service.setServiceParent(application)
</code></pre>
<p>For the sake of illustration, I have collapsed or changed a few details; hopefully they were not the essence of the problem. For example, my app has a plugin system, the pool is initialized by a separate method, and then work is delegated to the pool using pool.apply_async() passing one of my plugin's process() methods.</p>
<p>So, if I run the script (stompclient.py), everything works as expected.</p>
<p>It also appears to work OK if I run twistd in non-daemon mode (-n):</p>
<pre><code>twistd -noy stompclient.tac
</code></pre>
<p>however, it does <em>not</em> work when I run in daemon mode:</p>
<pre><code>twistd -oy stompclient.tac
</code></pre>
<p>The application appears to start up OK, but when it attempts to fork off work, it just hangs. By "hangs", I mean that it appears that the child process is never asked to do anything and the parent (that called pool.apply_async()) just sits there waiting for the response to return.</p>
<p>I'm sure that I'm doing something stupid with Twisted + multiprocessing, but I'm really hoping that someone can explain to me the flaw in my approach.</p>
<p>Thanks in advance!</p>
| 7 | 2009-09-24T10:28:19Z | 1,471,108 | <p>A possible idea for you...</p>
<p>When running in daemon mode twistd will close stdin, stdout and stderr. Does something that your clients do read or write to these?</p>
| 0 | 2009-09-24T11:32:44Z | [
"python",
"twisted",
"multiprocessing"
] |
Twisted network client with multiprocessing workers? | 1,470,850 | <p>So, I've got an application that uses Twisted + Stomper as a STOMP client which farms out work to a multiprocessing.Pool of workers.</p>
<p>This appears to work ok when I just use a python script to fire this up, which (simplified) looks something like this:</p>
<pre><code># stompclient.py
logging.config.fileConfig(config_path)
logger = logging.getLogger(__name__)
# Add observer to make Twisted log via python
twisted.python.log.PythonLoggingObserver().start()
# initialize the process pool. (child processes get forked off immediately)
pool = multiprocessing.Pool(processes=processes)
StompClientFactory.username = username
StompClientFactory.password = password
StompClientFactory.destination = destination
reactor.connectTCP(host, port, StompClientFactory())
reactor.run()
</code></pre>
<p>As this gets packaged for deployment, I thought I would take advantage of the twistd script and run this from a tac file.</p>
<p>Here's my very-similar-looking tac file:</p>
<pre><code># stompclient.tac
logging.config.fileConfig(config_path)
logger = logging.getLogger(__name__)
# Add observer to make Twisted log via python
twisted.python.log.PythonLoggingObserver().start()
# initialize the process pool. (child processes get forked off immediately)
pool = multiprocessing.Pool(processes=processes)
StompClientFactory.username = username
StompClientFactory.password = password
StompClientFactory.destination = destination
application = service.Application('myapp')
service = internet.TCPClient(host, port, StompClientFactory())
service.setServiceParent(application)
</code></pre>
<p>For the sake of illustration, I have collapsed or changed a few details; hopefully they were not the essence of the problem. For example, my app has a plugin system, the pool is initialized by a separate method, and then work is delegated to the pool using pool.apply_async() passing one of my plugin's process() methods.</p>
<p>So, if I run the script (stompclient.py), everything works as expected.</p>
<p>It also appears to work OK if I run twistd in non-daemon mode (-n):</p>
<pre><code>twistd -noy stompclient.tac
</code></pre>
<p>however, it does <em>not</em> work when I run in daemon mode:</p>
<pre><code>twistd -oy stompclient.tac
</code></pre>
<p>The application appears to start up OK, but when it attempts to fork off work, it just hangs. By "hangs", I mean that it appears that the child process is never asked to do anything and the parent (that called pool.apply_async()) just sits there waiting for the response to return.</p>
<p>I'm sure that I'm doing something stupid with Twisted + multiprocessing, but I'm really hoping that someone can explain to me the flaw in my approach.</p>
<p>Thanks in advance!</p>
| 7 | 2009-09-24T10:28:19Z | 1,472,271 | <p>Since the difference between your working invocation and your non-working invocation is only the "-n" option, it seems most likely that the problem is caused by the daemonization process (which "-n" prevents from happening).</p>
<p>On POSIX, one of the steps involved in daemonization is forking and having the parent exit. Among other things, this has the consequence of having your code run in a different process than the one in which the .tac file was evaluated. This also re-arranges the child/parent relationship of processes which were started in the .tac file - as your pool of multiprocessing processes were.</p>
<p>The multiprocessing pool's processes start off with a parent of the twistd process you start. However, when that process exits as part of daemonization, their parent becomes the system init process. This may cause some problems, although probably not the hanging problem you described. There are probably other similarly low-level implementation details which normally allow the multiprocessing module to work but which are disrupted by the daemonization process.</p>
<p>Fortunately, avoiding this strange interaction should be straightforward. Twisted's service APIs allow you to run code after daemonization has completed. If you use these APIs, then you can delay the initialization of the multiprocessing module's process pool until after daemonization and hopefully avoid the problem. Here's an example of what that might look like:</p>
<pre><code>from twisted.application.service import Service
class MultiprocessingService(Service):
    def startService(self):
        self.pool = multiprocessing.Pool(processes=processes)

MultiprocessingService().setServiceParent(application)
</code></pre>
<p>Now, separately, you may also run into problems relating to clean up of the multiprocessing module's child processes, or possibly issues with processes created with Twisted's process creation API, reactor.spawnProcess. This is because part of dealing with child processes correctly generally involves handling the SIGCHLD signal. Twisted and multiprocessing aren't going to be cooperating in this regard, though, so one of them is going to get notified of all children exiting and the other will never be notified. If you don't use Twisted's API for creating child processes at all, then this may be okay for you - but you might want to check to make sure any signal handler the multiprocessing module tries to install actually "wins" and doesn't get replaced by Twisted's own handler.</p>
| 12 | 2009-09-24T14:58:46Z | [
"python",
"twisted",
"multiprocessing"
] |