content: stringlengths 85 to 101k
title: stringlengths 0 to 150
question: stringlengths 15 to 48k
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: stringlengths 35 to 137
Q: Writing Sudoku Solver wih Python Here is my Sudoku Solver written in python language, When I run this program there seems to be a problem with in Update function and Solve function. No matter how much time I look over and move the codes around, I seem to have no luck Can anyone Help me? import copy def display (A): if A: for i in range (9): for j in range (9): if type (A[i][j]) == type ([]): print A[i][j][0], else: print A[i][j] print print else: print A def has_conflict(A): for i in range(9): for j in range(9): for (x,y) in get_neighbors(i,j): if len(A[i][j])==1 and A[i][j]==A[x][y]: return True return False def get_neighbors(x,y): neighbors = [] for i in range(3): for j in range(3): a = 3*(x / 3)+i b = 3*(y / 3)+j if (x,y) != (a,b): neighbors += [(a,b)] for i in range(9): if (x,y) != (x,i) and (x,i) not in neighbors: neighbors += [(x,i)] if (x,y) != (i,y) and (i,y) not in neighbors: neighbors += [(i,y)] return neighbors def update(A,x,y,value): B = copy.deepcopy(A) B[x][y] = [value] for (i,j) in get_neighbors(x,y): if B[i][j] == B[x][y]: if len(B[i][j]) > 1: B[i][j].remove(value) else: return [] if has_conflict(B) == True: return [] else: return B def solve(A): for x in range (9): for y in range(9): if len(A[x][y]) == 1: return A[x][y] if len(A[x][y]) > 1: lst = update(A,x,y,A[x][y]) if len(lst[x][y]) > 1: solve(lst) if lst == []: return [] if len(lst[x][y]) == 1: return lst else: return A[x][y] A=[] infile = open('puzzle1.txt','r') for i in range(9): A += [[]] for j in range(9): num = int(infile.read(2)) if num: A[i] += [[num]] else: A[i] += [[1,2,3,4,5,6,7,8,9]] for i in range(9): for j in range(9): if len(A[i][j])==1: A = update(A, i, j, A[i][j][0]) if A == []: break if A==[]: break if A<>[]: A = solve(A) display(A) Here are some puzzles: Puzzle 1 0 0 0 2 6 0 7 0 1 6 8 0 0 7 0 0 9 0 1 9 0 0 0 4 5 0 0 8 2 0 1 0 0 0 4 0 0 0 4 6 0 2 9 0 0 0 5 0 0 0 3 0 2 8 0 0 9 3 0 0 0 7 4 0 4 0 0 5 0 0 3 6 7 0 3 0 1 8 0 0 0 Puzzle 2 1 0 0 4 8 9 0 0 6 7 3 0 0 0 0 0 4 0 0 0 0 0 0 1 2 9 5 0 0 7 1 2 0 6 0 0 5 0 0 7 0 3 0 0 8 0 0 6 0 9 5 7 0 0 9 1 4 6 0 0 0 0 0 0 2 0 0 0 0 0 3 7 8 0 0 5 1 2 0 0 4 Puzzle 3 0 2 0 6 0 8 0 0 0 5 8 0 0 0 9 7 0 0 0 0 0 0 4 0 0 0 0 3 7 0 0 0 0 5 0 0 6 0 0 0 0 0 0 0 4 0 0 8 0 0 0 0 1 3 0 0 0 0 2 0 0 0 0 0 0 9 8 0 0 0 3 6 0 0 0 3 0 6 0 9 0 A: If you want to stabilize your code, then write small test cases for each function which make sure that they work correctly. In your case, run a puzzle, and determine which field is wrong. Then guess which function might produce the wrong output. Call it with the input to see what it really does. Repeat for every bug you find. [EDIT] The module unittest is your friend. A: I would avoid things like "move the codes around". This is called "Programming by Coincidence" (see The Pragmatic Programmer). Programming like this won't make you a better programmer. Instead, you should take out a paper and pencil, and start thinking how things should work. Then, read the code and carefully write what it actually does. Only when you understand, go back to the computer terminal. A: I'd like to help you in a way that you can write the actual code, so here is some explanation, and some pseudo-code. Those sudoku solvers that don't mimic human deduction logic are bruteforce-based. Basically, you'll need a backtrack algorithm. You have your has_conflict method, which checks whether the candidate is ok at first look. Then you write the backtrack algorithm like this: Solve(s): Pick a candidate. Does it have a conflict? If yes, go back, and pick another one. 
No more empty cells? Then cool, return True. Have you run out of candidates? Then it cant be solved, return False. At this point, it seems ok. So call Solve(s) again, lets see how it works out with the new candidate. If Solve returned false, then after all it was a bad candidate. Go back to picking another one. If Solve returned True, then you solved the sudoku! The main idea here is that if your guess was wrong, despite not having a conflict at first look, then a confliction will reveal itself somewhere deeper in the call tree. The original sudokus have only one solution. You can extend this method to different solutions for sudokus that have them by trying any candidates despite the return value of Solve (but that will be very slow with this approach). There's a nice trick to find out if a sudoku has more than one solutions. First try the candidates in natural order in every call of solve. Then try them backwards. Then do these two steps again, but this time run the algorithm from the last cell of the sudoku, stepping backwards. If these four solutions are identical, then it has only one solution. Unfortunately I don't have a formal proof, but it seemed to work all the time. I tried to prove it, but I'm not that great with graphs. A: Solving sudoku need some bruteforcing method, I dont see those in your codes. I also tried to do before, but finally I saw norvig's codes, its just working perfect. and ended up with learning his codes finally.
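The backtracking recipe described in the answers can be written quite compactly. The sketch below is not a repair of the original update/solve functions; it assumes the grid is a 9x9 list of lists with 0 for empty cells and that the puzzle files are whitespace-separated like the samples shown, so adapt the names and I/O as needed.

def candidates(grid, row, col):
    # Values not already used in the row, column or 3x3 box of (row, col).
    used = set(grid[row]) | {grid[r][col] for r in range(9)}
    br, bc = 3 * (row // 3), 3 * (col // 3)
    used |= {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def solve(grid):
    # Fill grid in place; return True once a complete solution is found.
    for row in range(9):
        for col in range(9):
            if grid[row][col] == 0:            # first empty cell
                for value in candidates(grid, row, col):
                    grid[row][col] = value     # guess
                    if solve(grid):            # recurse on the rest of the grid
                        return True
                    grid[row][col] = 0         # undo the guess (backtrack)
                return False                   # ran out of candidates
    return True                                # no empty cells left: solved

with open('puzzle1.txt') as infile:
    grid = [[int(tok) for tok in line.split()] for line in infile if line.split()]

if solve(grid):
    for row in grid:
        print(' '.join(str(v) for v in row))

A separate has_conflict pass is not needed here: candidates only ever offers values that are still legal, and a wrong guess reveals itself when some later cell has no candidates left.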
Writing Sudoku Solver with Python
Here is my Sudoku Solver written in python language, When I run this program there seems to be a problem with in Update function and Solve function. No matter how much time I look over and move the codes around, I seem to have no luck Can anyone Help me? import copy def display (A): if A: for i in range (9): for j in range (9): if type (A[i][j]) == type ([]): print A[i][j][0], else: print A[i][j] print print else: print A def has_conflict(A): for i in range(9): for j in range(9): for (x,y) in get_neighbors(i,j): if len(A[i][j])==1 and A[i][j]==A[x][y]: return True return False def get_neighbors(x,y): neighbors = [] for i in range(3): for j in range(3): a = 3*(x / 3)+i b = 3*(y / 3)+j if (x,y) != (a,b): neighbors += [(a,b)] for i in range(9): if (x,y) != (x,i) and (x,i) not in neighbors: neighbors += [(x,i)] if (x,y) != (i,y) and (i,y) not in neighbors: neighbors += [(i,y)] return neighbors def update(A,x,y,value): B = copy.deepcopy(A) B[x][y] = [value] for (i,j) in get_neighbors(x,y): if B[i][j] == B[x][y]: if len(B[i][j]) > 1: B[i][j].remove(value) else: return [] if has_conflict(B) == True: return [] else: return B def solve(A): for x in range (9): for y in range(9): if len(A[x][y]) == 1: return A[x][y] if len(A[x][y]) > 1: lst = update(A,x,y,A[x][y]) if len(lst[x][y]) > 1: solve(lst) if lst == []: return [] if len(lst[x][y]) == 1: return lst else: return A[x][y] A=[] infile = open('puzzle1.txt','r') for i in range(9): A += [[]] for j in range(9): num = int(infile.read(2)) if num: A[i] += [[num]] else: A[i] += [[1,2,3,4,5,6,7,8,9]] for i in range(9): for j in range(9): if len(A[i][j])==1: A = update(A, i, j, A[i][j][0]) if A == []: break if A==[]: break if A<>[]: A = solve(A) display(A) Here are some puzzles: Puzzle 1 0 0 0 2 6 0 7 0 1 6 8 0 0 7 0 0 9 0 1 9 0 0 0 4 5 0 0 8 2 0 1 0 0 0 4 0 0 0 4 6 0 2 9 0 0 0 5 0 0 0 3 0 2 8 0 0 9 3 0 0 0 7 4 0 4 0 0 5 0 0 3 6 7 0 3 0 1 8 0 0 0 Puzzle 2 1 0 0 4 8 9 0 0 6 7 3 0 0 0 0 0 4 0 0 0 0 0 0 1 2 9 5 0 0 7 1 2 0 6 0 0 5 0 0 7 0 3 0 0 8 0 0 6 0 9 5 7 0 0 9 1 4 6 0 0 0 0 0 0 2 0 0 0 0 0 3 7 8 0 0 5 1 2 0 0 4 Puzzle 3 0 2 0 6 0 8 0 0 0 5 8 0 0 0 9 7 0 0 0 0 0 0 4 0 0 0 0 3 7 0 0 0 0 5 0 0 6 0 0 0 0 0 0 0 4 0 0 8 0 0 0 0 1 3 0 0 0 0 2 0 0 0 0 0 0 9 8 0 0 0 3 6 0 0 0 3 0 6 0 9 0
[ "If you want to stabilize your code, then write small test cases for each function which make sure that they work correctly.\nIn your case, run a puzzle, and determine which field is wrong. Then guess which function might produce the wrong output. Call it with the input to see what it really does. Repeat for every bug you find.\n[EDIT] The module unittest is your friend.\n", "I would avoid things like \"move the codes around\". This is called \"Programming by Coincidence\" (see The Pragmatic Programmer). Programming like this won't make you a better programmer.\nInstead, you should take out a paper and pencil, and start thinking how things should work. Then, read the code and carefully write what it actually does. Only when you understand, go back to the computer terminal.\n", "I'd like to help you in a way that you can write the actual code, so here is some explanation, and some pseudo-code.\nThose sudoku solvers that don't mimic human deduction logic are bruteforce-based. Basically, you'll need a backtrack algorithm. You have your has_conflict method, which checks whether the candidate is ok at first look. Then you write the backtrack algorithm like this: \nSolve(s):\n Pick a candidate.\n Does it have a conflict? If yes, go back, and pick another one.\n No more empty cells? Then cool, return True.\n Have you run out of candidates? Then it cant be solved, return False.\n\n At this point, it seems ok. So call Solve(s) again, lets see how it works \n out with the new candidate.\n If Solve returned false, then after all it was a bad candidate. Go\n back to picking another one.\n If Solve returned True, then you solved the sudoku!\n\nThe main idea here is that if your guess was wrong, despite not having a conflict at first look, then a confliction will reveal itself somewhere deeper in the call tree.\nThe original sudokus have only one solution. You can extend this method to different solutions for sudokus that have them by trying any candidates despite the return value of Solve (but that will be very slow with this approach).\nThere's a nice trick to find out if a sudoku has more than one solutions. First try the candidates in natural order in every call of solve. Then try them backwards. Then do these two steps again, but this time run the algorithm from the last cell of the sudoku, stepping backwards. If these four solutions are identical, then it has only one solution. Unfortunately I don't have a formal proof, but it seemed to work all the time. I tried to prove it, but I'm not that great with graphs.\n", "Solving sudoku need some bruteforcing method, I dont see those in your codes. \nI also tried to do before, but finally I saw norvig's codes, its just working perfect.\nand ended up with learning his codes finally.\n" ]
[ 3, 3, 2, 0 ]
[]
[]
[ "python", "sudoku" ]
stackoverflow_0001781795_python_sudoku.txt
Q: app engine: string to datetime? i have string date = "11/28/2009" hour = "23" minutes = "59" seconds = "00" how can i convert to datetime object and store it in datastore? A: I apologize if this isn't what you want, but at least for the first part of the question you could probably do it like so? >>> import datetime >>> datetime.datetime.strptime(date + ' ' + hour + ':' + minutes + ':' + seconds, '%m/%d/%Y %H:%M:%S') datetime.datetime(2009, 11, 28, 23, 59)
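For the second half of the question (actually storing the parsed value in the datastore), a sketch along these lines should work with the google.appengine.ext.db API that was current at the time; the Event model and its happens_at field are made-up names for illustration.

import datetime

from google.appengine.ext import db

class Event(db.Model):
    # Hypothetical model; any DateTimeProperty field behaves the same way.
    happens_at = db.DateTimeProperty()

date, hour, minutes, seconds = "11/28/2009", "23", "59", "00"
when = datetime.datetime.strptime(
    "%s %s:%s:%s" % (date, hour, minutes, seconds), "%m/%d/%Y %H:%M:%S")

event = Event(happens_at=when)
event.put()  # writes the entity, including the datetime, to the datastore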
app engine: string to datetime?
I have the strings date = "11/28/2009", hour = "23", minutes = "59", seconds = "00". How can I convert them to a datetime object and store it in the datastore?
[ "I apologize if this isn't what you want, but at least for the first part of the question you could probably do it like so?\n>>> import datetime\n>>> datetime.datetime.strptime(date + ' ' + hour + ':' + minutes + ':' + seconds, '%m/%d/%Y %H:%M:%S')\ndatetime.datetime(2009, 11, 28, 23, 59)\n\n" ]
[ 11 ]
[]
[]
[ "datetime", "python" ]
stackoverflow_0001782255_datetime_python.txt
Q: How do I order this list in Python? [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')] I want to order this alphabetically, by "PRP, VBD, PRP, and VBP" It's not the traditional sort, right? A: Use itemgetter: >>> a = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')] >>> import operator >>> a.sort(key = operator.itemgetter(1)) >>> a [(u'.', '.'), (u'we', 'PRP'), (u'you', 'PRP'), (u'saw', 'VBD'), (u'bruh', 'VBP')] A: The sort method takes a key argument to extract a comparison key from each argument, i.e. key is a function which transforms the list item into the value you wish to sort on. In this case it's very easy to use a lambda to extract the second item from each tuple: >>> mylist = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP') , (u'.', '.')] >>> mylist.sort(key = lambda x: x[1]) >>> mylist [(u'.', '.'), (u'we', 'PRP'), (u'you', 'PRP'), (u'saw', 'VBD'), (u'bruh', 'VBP')] A: You can pass a comparison function in the sort method. Example: l = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')] l.sort(lambda x, y: cmp(x[1], y[1])) # compare tuple's 2nd elements Output: >>> l [(u'.', '.'), (u'we', 'PRP'), (u'you', 'PRP'), (u'saw', 'VBD'), (u'bruh', 'VBP')] >>> A: Or use the Built-in Function sorted (similar to Nick_D's answer): ls = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')] sorted(ls, cmp=lambda t1, t2: cmp(t1[1], t2[1]))
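One caveat worth adding to the answers: the cmp-based variants only work on Python 2; Python 3 removed both the cmp argument to sort/sorted and the cmp built-in, so the key-based form is the one that keeps working. A small sketch:

from operator import itemgetter

pairs = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'),
         (u'bruh', 'VBP'), (u'.', '.')]

# sorted() returns a new list and leaves the original untouched
by_tag = sorted(pairs, key=itemgetter(1))

# add reverse=True for descending order by tag
by_tag_desc = sorted(pairs, key=itemgetter(1), reverse=True)

print(by_tag)
# [(u'.', '.'), (u'we', 'PRP'), (u'you', 'PRP'), (u'saw', 'VBD'), (u'bruh', 'VBP')]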
How do I order this list in Python?
[(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')] I want to order this alphabetically, by "PRP, VBD, PRP, and VBP" It's not the traditional sort, right?
[ "Use itemgetter:\n>>> a = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')]\n>>> import operator\n>>> a.sort(key = operator.itemgetter(1))\n>>> a\n[(u'.', '.'), (u'we', 'PRP'), (u'you', 'PRP'), (u'saw', 'VBD'), (u'bruh', 'VBP')]\n\n", "The sort method takes a key argument to extract a comparison key from each argument, i.e. key is a function which transforms the list item into the value you wish to sort on.\nIn this case it's very easy to use a lambda to extract the second item from each tuple:\n>>> mylist = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP')\n, (u'.', '.')]\n>>> mylist.sort(key = lambda x: x[1])\n>>> mylist\n[(u'.', '.'), (u'we', 'PRP'), (u'you', 'PRP'), (u'saw', 'VBD'), (u'bruh', 'VBP')]\n\n", "You can pass a comparison function in the sort method. \nExample:\nl = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')]\nl.sort(lambda x, y: cmp(x[1], y[1])) # compare tuple's 2nd elements\n\nOutput:\n>>> l\n[(u'.', '.'),\n (u'we', 'PRP'),\n (u'you', 'PRP'),\n (u'saw', 'VBD'),\n (u'bruh', 'VBP')]\n>>> \n\n", "Or use the Built-in Function sorted (similar to Nick_D's answer):\nls = [(u'we', 'PRP'), (u'saw', 'VBD'), (u'you', 'PRP'), (u'bruh', 'VBP'), (u'.', '.')]\n\n\nsorted(ls, cmp=lambda t1, t2: cmp(t1[1], t2[1]))\n\n" ]
[ 15, 7, 1, 0 ]
[]
[]
[ "list", "python", "sorting" ]
stackoverflow_0001782253_list_python_sorting.txt
Q: How to change firefox proxy from webdriver? how can I access Firefox proxy settings from Python Webdriver and change them to make Firefox use modified proxy settings without needing to restart it? A: I don't think that is possible from the outside of Firefox. Have a look at FoxyProxy. It allows you to define a proxy per URL pattern.
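If configuring the proxy once at browser start-up is acceptable (rather than changing it on an already running instance, which the answer says is not possible), Selenium's FirefoxProfile preferences can do it. This is a sketch for the older Python Selenium bindings (Selenium 4 moved profile handling onto Options), and the proxy host and port are assumed values.

from selenium import webdriver

PROXY_HOST, PROXY_PORT = "localhost", 8080   # assumed proxy address

profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)            # 1 = manual proxy configuration
profile.set_preference("network.proxy.http", PROXY_HOST)
profile.set_preference("network.proxy.http_port", PROXY_PORT)
profile.set_preference("network.proxy.ssl", PROXY_HOST)
profile.set_preference("network.proxy.ssl_port", PROXY_PORT)
profile.update_preferences()

driver = webdriver.Firefox(firefox_profile=profile)        # starts a fresh browser with the proxy set
driver.get("http://example.com/")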
How to change the Firefox proxy from WebDriver?
How can I access the Firefox proxy settings from the Python WebDriver bindings and change them so that Firefox uses the modified proxy settings without needing to restart?
[ "I don't think that is possible from the outside of Firefox. Have a look at FoxyProxy. It allows you to define a proxy per URL pattern.\n" ]
[ 0 ]
[]
[]
[ "firefox", "proxy", "python", "webdriver" ]
stackoverflow_0001782375_firefox_proxy_python_webdriver.txt
Q: Convert Perl script to Python: dedupe 2 files based on hash keys I am new to Python and would like to know if someone would kindly convert an example of a fairly simple Perl script to Python? The script takes 2 files and outputs only unique lines from the second file by comparing hash keys. It also outputs duplicate lines to a file. I have found that this method of deduping is extremely fast with Perl, and would like to see how Python compares. #! /usr/bin/perl ## Compare file1 and file2 and output only the unique lines from file2. ## Opening file1.txt and store the data in a hash. open my $file1, '<', "file1.txt" or die $!; while ( <$file1> ) { my $name = $_; $file1hash{$name}=$_; } ## Opening file2.txt and store the data in a hash. open my $file2, '<', "file2.txt" or die $!; while ( <$file2> ) { $name = $_; $file2hash{$name}=$_; } open my $dfh, '>', "duplicate.txt"; ## Compare the keys and remove the duplicate one in the file2 hash foreach ( keys %file1hash ) { if ( exists ( $file2hash{$_} )) { print $dfh $file2hash{$_}; delete $file2hash{$_}; } } open my $ofh, '>', "file2_clean.txt"; print $ofh values(%file2hash) ; I have tested both perl and python scripts on 2 files of over 1 million lines and total time was less than 6 seconds. For the business purpose this served, the performance is outstanding! I modified the script Kriss offered and I am very happy with both results: 1) The performance of the script and 2) the ease with which I modified the script to be more flexible: #!/usr/bin/env python import os filename1 = raw_input("What is the first file name to compare? ") filename2 = raw_input("What is the second file name to compare? ") file1set = set([line for line in file(filename1)]) file2set = set([line for line in file(filename2)]) for name, results in [ (os.path.abspath(os.getcwd()) + "/duplicate.txt", file1set.intersection(file2set)), (os.path.abspath(os.getcwd()) + "/" + filename2 + "_clean.txt", file2set.difference(file1set))]: with file(name, 'w') as fh: for line in results: fh.write(line) A: You can use sets in Python if you don't care about order: file1=set(open("file1").readlines()) file2=set(open("file2").readlines()) intersection = file1 & file2 #common lines non_intersection = file2 - file1 #uncommon lines (in file2 but not file1) for items in intersection: print items for nitems in non_intersection: print nitems Other methods include using difflib, filecmp libraries. The other way, only using list comparison. # lines in file2 common with file1 data1=map(str.rstrip,open("file1").readlines()) for line in open("file2"): line=line.rstrip() if line in data1: print line # lines in file2 not in file1, use "not" data1=map(str.rstrip,open("file1").readlines()) for line in open("file2"): line=line.rstrip() if not line in data1: print line A: Yet another variant (merely syntaxic changes from other proposals, there is also more than one way to do it using python). file1set = set([line for line in file("file1.txt")]) file2set = set([line for line in file("file2.txt")]) for name, results in [ ("duplicate.txt", file1set.intersection(file2set)), ("file2_clean.txt", file2set.difference(file1set))]: with file(name, 'w') as fh: for line in results: fh.write(line) Side note: we should also contribute another perl version, the one proposed in not very perlish... below is the perl equivalent of my python version. Does not looks much like the initial one. What I want to point out is that in proposed answers issue is as much algorithmic and language independant that perl vs python. 
use strict; open my $file1, '<', "file1.txt" or die $!; my %file1hash = map { $_ => 1 } <$file1>; open my $file2, '<', "file2.txt" or die $!; my %file2hash = map { $_ => 1 } <$file2>; for (["duplicate.txt", [grep $file1hash{$_}, keys(%file2hash)]], ["file2_clean.txt", [grep !$file1hash{$_}, keys(%file2hash)]]){ my ($name, $results) = @$_; open my $fh, ">$name" or die $!; print $fh @$results; } A: Here's a slightly different solution that's a little more memory friendly, should the files be very large. This only creates a set for the original file (as there doesn't seem to be a need to have all of file2 in memory at once): with open("file1.txt", "r") as file1: file1set = set(line.rstrip() for line in file1) with open("file2.txt", "r") as file2: with open("duplicate.txt", "w") as dfh: with open("file2_clean.txt", "w") as ofh: for line in file2: if line.rstrip() in file1set: dfh.write(line) # duplicate line else: ofh.write(line) # not duplicate Note, if you want to include trailing whitespace and the end-of-line characters in the comparisons, you can replace the second line.rstrip() with just line and simplify the second line to: file1set = set(file1) Also, as of Python 3.1, the with statement allows multiple items so the three with statements could be combined into one.
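Tying the last answer's closing remark together with its streaming approach, here is what the single combined with statement looks like (Python 2.7 or 3.1+); only file1 is held in memory, and the file names are the same assumed ones from the question.

with open("file1.txt") as file1, \
     open("file2.txt") as file2, \
     open("duplicate.txt", "w") as dfh, \
     open("file2_clean.txt", "w") as ofh:
    seen = set(line.rstrip() for line in file1)
    for line in file2:
        if line.rstrip() in seen:
            dfh.write(line)   # line from file2 also present in file1
        else:
            ofh.write(line)   # line unique to file2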
Convert Perl script to Python: dedupe 2 files based on hash keys
I am new to Python and would like to know if someone would kindly convert an example of a fairly simple Perl script to Python? The script takes 2 files and outputs only unique lines from the second file by comparing hash keys. It also outputs duplicate lines to a file. I have found that this method of deduping is extremely fast with Perl, and would like to see how Python compares. #! /usr/bin/perl ## Compare file1 and file2 and output only the unique lines from file2. ## Opening file1.txt and store the data in a hash. open my $file1, '<', "file1.txt" or die $!; while ( <$file1> ) { my $name = $_; $file1hash{$name}=$_; } ## Opening file2.txt and store the data in a hash. open my $file2, '<', "file2.txt" or die $!; while ( <$file2> ) { $name = $_; $file2hash{$name}=$_; } open my $dfh, '>', "duplicate.txt"; ## Compare the keys and remove the duplicate one in the file2 hash foreach ( keys %file1hash ) { if ( exists ( $file2hash{$_} )) { print $dfh $file2hash{$_}; delete $file2hash{$_}; } } open my $ofh, '>', "file2_clean.txt"; print $ofh values(%file2hash) ; I have tested both perl and python scripts on 2 files of over 1 million lines and total time was less than 6 seconds. For the business purpose this served, the performance is outstanding! I modified the script Kriss offered and I am very happy with both results: 1) The performance of the script and 2) the ease with which I modified the script to be more flexible: #!/usr/bin/env python import os filename1 = raw_input("What is the first file name to compare? ") filename2 = raw_input("What is the second file name to compare? ") file1set = set([line for line in file(filename1)]) file2set = set([line for line in file(filename2)]) for name, results in [ (os.path.abspath(os.getcwd()) + "/duplicate.txt", file1set.intersection(file2set)), (os.path.abspath(os.getcwd()) + "/" + filename2 + "_clean.txt", file2set.difference(file1set))]: with file(name, 'w') as fh: for line in results: fh.write(line)
[ "You can use sets in Python if you don't care about order:\nfile1=set(open(\"file1\").readlines())\nfile2=set(open(\"file2\").readlines())\nintersection = file1 & file2 #common lines\nnon_intersection = file2 - file1 #uncommon lines (in file2 but not file1)\nfor items in intersection:\n print items\nfor nitems in non_intersection:\n print nitems\n\nOther methods include using difflib, filecmp libraries.\nThe other way, only using list comparison.\n# lines in file2 common with file1\ndata1=map(str.rstrip,open(\"file1\").readlines())\nfor line in open(\"file2\"):\n line=line.rstrip()\n if line in data1:\n print line\n\n# lines in file2 not in file1, use \"not\"\ndata1=map(str.rstrip,open(\"file1\").readlines())\nfor line in open(\"file2\"):\n line=line.rstrip()\n if not line in data1:\n print line\n\n", "Yet another variant (merely syntaxic changes from other proposals, there is also more than one way to do it using python).\nfile1set = set([line for line in file(\"file1.txt\")])\nfile2set = set([line for line in file(\"file2.txt\")])\n\nfor name, results in [\n (\"duplicate.txt\", file1set.intersection(file2set)),\n (\"file2_clean.txt\", file2set.difference(file1set))]:\n with file(name, 'w') as fh:\n for line in results:\n fh.write(line)\n\nSide note: we should also contribute another perl version, the one proposed in not very perlish... below is the perl equivalent of my python version. Does not looks much like the initial one. What I want to point out is that in proposed answers issue is as much algorithmic and language independant that perl vs python. \nuse strict;\n\nopen my $file1, '<', \"file1.txt\" or die $!;\nmy %file1hash = map { $_ => 1 } <$file1>;\n\nopen my $file2, '<', \"file2.txt\" or die $!;\nmy %file2hash = map { $_ => 1 } <$file2>;\n\nfor ([\"duplicate.txt\", [grep $file1hash{$_}, keys(%file2hash)]],\n [\"file2_clean.txt\", [grep !$file1hash{$_}, keys(%file2hash)]]){\n my ($name, $results) = @$_;\n open my $fh, \">$name\" or die $!;\n print $fh @$results;\n}\n\n", "Here's a slightly different solution that's a little more memory friendly, should the files be very large. This only creates a set for the original file (as there doesn't seem to be a need to have all of file2 in memory at once):\nwith open(\"file1.txt\", \"r\") as file1:\n file1set = set(line.rstrip() for line in file1)\n\nwith open(\"file2.txt\", \"r\") as file2:\n with open(\"duplicate.txt\", \"w\") as dfh:\n with open(\"file2_clean.txt\", \"w\") as ofh:\n for line in file2:\n if line.rstrip() in file1set:\n dfh.write(line) # duplicate line\n else:\n ofh.write(line) # not duplicate\n\nNote, if you want to include trailing whitespace and the end-of-line characters in the comparisons, you can replace the second line.rstrip() with just line and simplify the second line to:\n file1set = set(file1)\n\nAlso, as of Python 3.1, the with statement allows multiple items so the three with statements could be combined into one.\n" ]
[ 7, 4, 3 ]
[]
[]
[ "hash", "perl", "python" ]
stackoverflow_0001782033_hash_perl_python.txt
Q: Python programs coexisting on Windows I'm looking for a way to let multiple Python programs coexist on the same Windows machine. Here's the problem: suppose program A needs Python 2.5, B needs 2.6, C needs 3, and each of them needs its own version of Qt, Wx or whatever other modules or whatever. Trying to install all these dependencies on the same machine will break things, e.g. you can install different versions of Python side-by-side but only one of them can have the .py file association, so if you give that to Python 2.5 then B and C won't work, etc. The ideal state of affairs would be if program A could live in C:\A along with its own Python interpreter, Qt/Wx/MySQL driver/whatever and never touch anything outside that directory, ditto for B and C. Is there any way to accomplish this, other than going the full virtual box route? edit: I tried the batch file solution, but it doesn't work. That is, it works on simple test scripts but e.g. OpenRPG fails at some point in its loading process if its required version of Python doesn't own the file association. A: VirtualEnv. virtualenv is a tool to create isolated Python environments. The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.4/site-packages (or whatever your platform's standard location is), it's easy to end up in a situation where you unintentionally upgrade an application that shouldn't be upgraded. See previous answer here. The other tool you should look at is pip which is great for installing particular versions of a library into a virtual environment. If you need to run v 1.0 of a library in python v 2.x for one application and 1.1 of the same library in python v 2.x, for example, you will need virtualenv plus a means of installing a particular version in that environment. Virtualenv + pip is your best choice. A: Use batch files to run scripts, write in notepad for example: c:\python26\python.exe C:\Script_B\B.py and save it as runB.bat (or anything .bat). It will run with interpreter in c:\python26\python.exe file specified after a whitespace. A: One solution would be to craft a batch file that invokes the correct interpreter for a given application. THis way, you can install additional interpreters in separate folders. Probably not perfect but it works. A: Have you considered compiling them to EXEs? Once you do that, all you have to do is call the EXE, for which the machine does not require python to be installed. All the required modules etc are packaged with the distribution when you compile. A: write a python script that mimics the way unix shells handle scirpts -- look at the first line and see if it matches #!(name-of-shell). Then have your python script exec that interpreter and feed it the rest of its arguments. Then, associate .py with your script. A: It looks like the best solution is a batch file that sets the file association before running the appropriate version of Python, as mentioned in the comments to one of the answers here: how to run both python 2.6 and 3.0 on the same windows XP box?
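The shebang-mimicking answer can be made concrete with a small launcher script: associate .py with this launcher once, and let it read the first line of the target script to decide which interpreter gets the file. This is only a sketch; the default interpreter path and the "#!" convention for Windows scripts are assumptions you would adapt per machine.

import subprocess
import sys

DEFAULT_INTERPRETER = r"C:\Python26\python.exe"   # assumed fallback interpreter

def pick_interpreter(script_path):
    # Read a '#! <path-to-python>' first line, falling back to the default.
    with open(script_path) as f:
        first = f.readline().strip()
    if first.startswith("#!"):
        return first[2:].strip()
    return DEFAULT_INTERPRETER

def main():
    script = sys.argv[1]
    interpreter = pick_interpreter(script)
    # Hand the script (plus any remaining arguments) to the chosen interpreter.
    sys.exit(subprocess.call([interpreter, script] + sys.argv[2:]))

if __name__ == "__main__":
    main()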
Python programs coexisting on Windows
I'm looking for a way to let multiple Python programs coexist on the same Windows machine. Here's the problem: suppose program A needs Python 2.5, B needs 2.6, C needs 3, and each of them needs its own version of Qt, Wx or whatever other modules or whatever. Trying to install all these dependencies on the same machine will break things, e.g. you can install different versions of Python side-by-side but only one of them can have the .py file association, so if you give that to Python 2.5 then B and C won't work, etc. The ideal state of affairs would be if program A could live in C:\A along with its own Python interpreter, Qt/Wx/MySQL driver/whatever and never touch anything outside that directory, ditto for B and C. Is there any way to accomplish this, other than going the full virtual box route? edit: I tried the batch file solution, but it doesn't work. That is, it works on simple test scripts but e.g. OpenRPG fails at some point in its loading process if its required version of Python doesn't own the file association.
[ "VirtualEnv. \n\nvirtualenv is a tool to create\n isolated Python environments.\nThe basic problem being addressed is\n one of dependencies and versions, and\n indirectly permissions. Imagine you\n have an application that needs version\n 1 of LibFoo, but another application\n requires version 2. How can you use\n both these applications? If you\n install everything into\n /usr/lib/python2.4/site-packages (or\n whatever your platform's standard\n location is), it's easy to end up in a\n situation where you unintentionally\n upgrade an application that shouldn't\n be upgraded.\n\nSee previous answer here.\nThe other tool you should look at is pip which is great for installing particular versions of a library into a virtual environment. If you need to run v 1.0 of a library in python v 2.x for one application and 1.1 of the same library in python v 2.x, for example, you will need virtualenv plus a means of installing a particular version in that environment. Virtualenv + pip is your best choice.\n", "Use batch files to run scripts, write in notepad for example:\nc:\\python26\\python.exe C:\\Script_B\\B.py\nand save it as runB.bat (or anything .bat). It will run with interpreter in c:\\python26\\python.exe file specified after a whitespace.\n", "One solution would be to craft a batch file that invokes the correct interpreter for a given application. THis way, you can install additional interpreters in separate folders.\nProbably not perfect but it works.\n", "Have you considered compiling them to EXEs? Once you do that, all you have to do is call the EXE, for which the machine does not require python to be installed. All the required modules etc are packaged with the distribution when you compile.\n", "write a python script that mimics the way unix shells handle scirpts -- look at the first line and see if it matches #!(name-of-shell). Then have your python script exec that interpreter and feed it the rest of its arguments.\nThen, associate .py with your script. \n", "It looks like the best solution is a batch file that sets the file association before running the appropriate version of Python, as mentioned in the comments to one of the answers here: how to run both python 2.6 and 3.0 on the same windows XP box?\n" ]
[ 7, 2, 1, 1, 0, 0 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0001779630_python_windows.txt
Q: Why don't these two math functions return the same result? I'm trying to use fancy indexing instead of looping to speed up a function in Numpy. To the best of my knowledge, I've implemented the fancy indexing version correctly. The problem is that the two functions (loop and fancy-indexed) do not return the same result. I'm not sure why. It's worth pointing out that the functions do return the same result if a smaller array is used (e.g., 20 x 20 x 20). Below I've included everything necessary to reproduce the error. If the functions do return the same result, then the line find_maxdiff(data) - find_maxdiff_fancy(data) should return an array full of zeroes. from numpy import * def rms(data, axis=0): return sqrt(mean(data ** 2, axis)) def find_maxdiff(data): samples, channels, epochs = shape(data) window_size = 50 maxdiff = zeros(epochs) for epoch in xrange(epochs): signal = rms(data[:, :, epoch], axis=1) for t in xrange(window_size, alen(signal) - window_size): amp_a = mean(signal[t-window_size:t], axis=0) amp_b = mean(signal[t:t+window_size], axis=0) the_diff = abs(amp_b - amp_a) if the_diff > maxdiff[epoch]: maxdiff[epoch] = the_diff return maxdiff def find_maxdiff_fancy(data): samples, channels, epochs = shape(data) window_size = 50 maxdiff = zeros(epochs) signal = rms(data, axis=1) for t in xrange(window_size, alen(signal) - window_size): amp_a = mean(signal[t-window_size:t], axis=0) amp_b = mean(signal[t:t+window_size], axis=0) the_diff = abs(amp_b - amp_a) maxdiff[the_diff > maxdiff] = the_diff return maxdiff data = random.random((600, 20, 100)) find_maxdiff(data) - find_maxdiff_fancy(data) data = random.random((20, 20, 20)) find_maxdiff(data) - find_maxdiff_fancy(data) A: The problem is this line: maxdiff[the_diff > maxdiff] = the_diff The left side selects only some elements of maxdiff, but the right side contains all elements of the_diff. This should work instead: replaceElements = the_diff > maxdiff maxdiff[replaceElements] = the_diff[replaceElements] or simply: maxdiff = maximum(maxdiff, the_diff) As for why 20x20x20 size seems to work: This is because your window size is too large, so nothing gets executed. A: First, in fancy your signal is now 2D if I understand correctly - so I think it would be clearer to index it explicitly (eg amp_a = mean(signal[t-window_size:t,:], axis=0). Similarly with alen(signal) - this should just be samples in both cases so I think it would be clearer to use that. It is wrong whenever you are actually doing something in the t loop - when samples < window_lenght as in the 20x20x20 example, that loop never gets executed. As soon as that loop is executed more than once (ie samples > 2 *window_length+1) then the errors come. Not sure why though - they do look equivalent to me.
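Folding the accepted fix back into the vectorised function, a corrected version might look like the sketch below; it uses numpy.maximum to keep a running element-wise maximum per epoch and is meant as an illustration of the fix rather than a tested drop-in.

import numpy as np

def rms(data, axis=0):
    return np.sqrt(np.mean(data ** 2, axis))

def find_maxdiff_fancy(data):
    samples, channels, epochs = data.shape
    window_size = 50
    maxdiff = np.zeros(epochs)
    signal = rms(data, axis=1)                 # shape: (samples, epochs)
    for t in range(window_size, samples - window_size):
        amp_a = np.mean(signal[t - window_size:t], axis=0)   # one value per epoch
        amp_b = np.mean(signal[t:t + window_size], axis=0)
        the_diff = np.abs(amp_b - amp_a)
        maxdiff = np.maximum(maxdiff, the_diff)              # element-wise running maximum
    return maxdiff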
Why don't these two math functions return the same result?
I'm trying to use fancy indexing instead of looping to speed up a function in Numpy. To the best of my knowledge, I've implemented the fancy indexing version correctly. The problem is that the two functions (loop and fancy-indexed) do not return the same result. I'm not sure why. It's worth pointing out that the functions do return the same result if a smaller array is used (e.g., 20 x 20 x 20). Below I've included everything necessary to reproduce the error. If the functions do return the same result, then the line find_maxdiff(data) - find_maxdiff_fancy(data) should return an array full of zeroes. from numpy import * def rms(data, axis=0): return sqrt(mean(data ** 2, axis)) def find_maxdiff(data): samples, channels, epochs = shape(data) window_size = 50 maxdiff = zeros(epochs) for epoch in xrange(epochs): signal = rms(data[:, :, epoch], axis=1) for t in xrange(window_size, alen(signal) - window_size): amp_a = mean(signal[t-window_size:t], axis=0) amp_b = mean(signal[t:t+window_size], axis=0) the_diff = abs(amp_b - amp_a) if the_diff > maxdiff[epoch]: maxdiff[epoch] = the_diff return maxdiff def find_maxdiff_fancy(data): samples, channels, epochs = shape(data) window_size = 50 maxdiff = zeros(epochs) signal = rms(data, axis=1) for t in xrange(window_size, alen(signal) - window_size): amp_a = mean(signal[t-window_size:t], axis=0) amp_b = mean(signal[t:t+window_size], axis=0) the_diff = abs(amp_b - amp_a) maxdiff[the_diff > maxdiff] = the_diff return maxdiff data = random.random((600, 20, 100)) find_maxdiff(data) - find_maxdiff_fancy(data) data = random.random((20, 20, 20)) find_maxdiff(data) - find_maxdiff_fancy(data)
[ "The problem is this line:\nmaxdiff[the_diff > maxdiff] = the_diff\n\nThe left side selects only some elements of maxdiff, but the right side contains all elements of the_diff. This should work instead:\nreplaceElements = the_diff > maxdiff\nmaxdiff[replaceElements] = the_diff[replaceElements]\n\nor simply:\nmaxdiff = maximum(maxdiff, the_diff)\n\nAs for why 20x20x20 size seems to work: This is because your window size is too large, so nothing gets executed.\n", "First, in fancy your signal is now 2D if I understand correctly - so I think it would be clearer to index it explicitly (eg amp_a = mean(signal[t-window_size:t,:], axis=0). Similarly with alen(signal) - this should just be samples in both cases so I think it would be clearer to use that.\nIt is wrong whenever you are actually doing something in the t loop - when samples < window_lenght as in the 20x20x20 example, that loop never gets executed. As soon as that loop is executed more than once (ie samples > 2 *window_length+1) then the errors come. Not sure why though - they do look equivalent to me.\n" ]
[ 3, 0 ]
[]
[]
[ "numpy", "python", "scipy" ]
stackoverflow_0001782114_numpy_python_scipy.txt
Q: Configure MySQL to work with Django Just installed Django (with easy_install) and created a project, but can't get mysql to work. python manage.py syncdb throws this error: ..... File "/Library/Python/2.6/site-packages/Django-1.1.1-py2.6.egg/django/db/backends/mysql/base.py", line 13, in <module> raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb I have MAMP installed and path to MySQL is probably: /Applications/MAMP/Library/bin/mysql Thanks! A: you need Python library for MySQL access, MySQLdb: http://sourceforge.net/projects/mysql-python/ A: The MySQL egg requires a compiler from the dev tools (download XCode from the apple developers site) and a MySQL installation. If you have installed those, you have set the PATH to include mysql_config. export PATH=$PATH:/usr/local/mysql-5.1.39-osx10.5-x86_64/bin/ Check the path, as this is the installation on my machine! After that, you should be able to build the egg with easy_install. Good google terms are MySQLdb OS X Snow Leopard.
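Once MySQLdb imports cleanly, the Django 1.1-era database settings would look roughly like this. The socket path and the root/root credentials are assumptions based on a default MAMP install, so check where your MAMP actually puts mysql.sock (MAMP's MySQL also listens on TCP port 8889 by default if you prefer that).

# settings.py (Django 1.1 style database settings)
DATABASE_ENGINE = 'mysql'
DATABASE_NAME = 'myproject'       # assumed database name
DATABASE_USER = 'root'            # MAMP's default user
DATABASE_PASSWORD = 'root'        # MAMP's default password
DATABASE_HOST = '/Applications/MAMP/tmp/mysql/mysql.sock'   # assumed socket path; a leading '/' means "use this socket"
DATABASE_PORT = ''                # left empty when connecting over the socket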
Configure MySQL to work with Django
Just installed Django (with easy_install) and created a project, but can't get mysql to work. python manage.py syncdb throws this error: ..... File "/Library/Python/2.6/site-packages/Django-1.1.1-py2.6.egg/django/db/backends/mysql/base.py", line 13, in <module> raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb I have MAMP installed and path to MySQL is probably: /Applications/MAMP/Library/bin/mysql Thanks!
[ "you need Python library for MySQL access, MySQLdb:\nhttp://sourceforge.net/projects/mysql-python/\n", "The MySQL egg requires a compiler from the dev tools (download XCode from the apple developers site) and a MySQL installation.\nIf you have installed those, you have set the PATH to include mysql_config.\nexport PATH=$PATH:/usr/local/mysql-5.1.39-osx10.5-x86_64/bin/\n\nCheck the path, as this is the installation on my machine!\nAfter that, you should be able to build the egg with easy_install.\nGood google terms are MySQLdb OS X Snow Leopard.\n" ]
[ 3, 0 ]
[]
[]
[ "configuration", "django", "installation", "mysql", "python" ]
stackoverflow_0001781618_configuration_django_installation_mysql_python.txt
Q: WSGI byte ranges serving I'm looking into supporting HTTP/1.1 Byte serving in WSGI server/application for: resuming partial downloads multi-part downloads better streaming WSGI PEP 333 mentions that WSGI server may implement handling of byte serving (from RFC 2616 section 14.35.2 defines Accept-Range/Range/Content-Range response/request/response headers) and application should implement it if announces the capability: A server may transmit byte ranges of the application's response if requested by the client, and the application doesn't natively support byte ranges. Again, however, the application should perform this function on its own if desired. I've performed some Googling but found little information upon which of the available WSGI servers/middleware/applications implement Byte-Ranges? Does anyone has an experience in the field and can hint me place to dig further? EDIT: Can anyone comment, how I can enhance the question to be able to find an answer? A: I think webob may do the trick, see the end of the file example for a range request implementation which efficiently seeks into the file being served. A: You just need to use WebOb and create the response as Response(conditional_request=True) or subclass the WebOb Response object making conditional_request=True the default. When conditional_request=True and the request asked for a range, WebOb's Response.app_iter_range wraps the complete response to return only the requested range. The WebOb file serving example shows how you would implement your own app_iter_range for cases where it is practical to get a range of bytes without generating the whole response.
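To make the WebOb suggestion concrete: in the WebOb releases I'm aware of the keyword is spelled conditional_response, and with it enabled the Range/If-Range handling is done for you. A minimal sketch (the file path is made up, and reading the whole body into memory is only for brevity):

from webob import Response

def application(environ, start_response):
    with open('/path/to/big.file', 'rb') as f:    # made-up path
        body = f.read()
    res = Response(
        body=body,
        content_type='application/octet-stream',
        conditional_response=True,                # enables Range / conditional handling
    )
    res.accept_ranges = 'bytes'
    return res(environ, start_response)

For large files you would instead follow the file-serving example mentioned above and pass an app_iter whose app_iter_range method seeks into the file, so only the requested bytes are ever read.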
WSGI byte ranges serving
I'm looking into supporting HTTP/1.1 byte serving in a WSGI server/application, for: resuming partial downloads, multi-part downloads, and better streaming. WSGI PEP 333 mentions that a WSGI server may implement handling of byte serving (RFC 2616 section 14.35.2 defines the Accept-Ranges/Range/Content-Range response/request/response headers) and that the application should implement it itself if it announces the capability: A server may transmit byte ranges of the application's response if requested by the client, and the application doesn't natively support byte ranges. Again, however, the application should perform this function on its own if desired. I've done some Googling but found little information on which of the available WSGI servers/middleware/applications implement byte ranges. Does anyone have experience in this area who can point me to places to dig further? EDIT: Can anyone comment on how I can improve the question so it can be answered?
[ "I think webob may do the trick, see the end of the file example for a range request implementation which efficiently seeks into the file being served.\n", "You just need to use WebOb and create the response as Response(conditional_request=True) or subclass the WebOb Response object making conditional_request=True the default.\nWhen conditional_request=True and the request asked for a range, WebOb's Response.app_iter_range wraps the complete response to return only the requested range.\nThe WebOb file serving example shows how you would implement your own app_iter_range for cases where it is practical to get a range of bytes without generating the whole response.\n" ]
[ 3, 0 ]
[]
[]
[ "http", "http_headers", "middleware", "python", "wsgi" ]
stackoverflow_0000579426_http_http_headers_middleware_python_wsgi.txt
Q: What is the most efficent way to implement concurrency in Python? In a cluster environment using Python what is the least expensive way to develop a concurrent application or what is the pro / con of the various options? A: I would say Google app engine is the easiest option since you don't have to worry about all this things. But if it's a big project, it might not be the cheapest solution. Again, it really depends on the size of your project and what you really want to do. You need to give more details if you want to get a precise answer. There is also a few packages available on the python package index.
What is the most efficient way to implement concurrency in Python?
In a cluster environment using Python, what is the least expensive way to develop a concurrent application, and what are the pros and cons of the various options?
[ "I would say Google app engine is the easiest option since you don't have to worry about all this things. But if it's a big project, it might not be the cheapest solution. \nAgain, it really depends on the size of your project and what you really want to do.\nYou need to give more details if you want to get a precise answer.\nThere is also a few packages available on the python package index.\n" ]
[ 1 ]
[]
[]
[ "concurrency", "multithreading", "process", "python" ]
stackoverflow_0001783714_concurrency_multithreading_process_python.txt
Q: String preallocation in numpy.arrays >>> import numpy as np >>> a = np.array(['zero', 'one', 'two', 'three']) >>> a[1] = 'thirteen' >>> print a ['zero' 'thirt' 'two' 'three'] >>> As you can see, the second element has been truncated to the maximum number of characters in the original array. Is it possible to workaround this problem? A: If you don't know the maximum length element, then you can use dtype=object >>> import numpy as np >>> a = np.array(['zero', 'one', 'two', 'three'], dtype=object) >>> a[1] = 'thirteen' >>> print a ['zero' 'thirteen' 'two' 'three'] >>> A: Use the dtype argument in numpy.array, e.g.: >>> import numpy as np >>> a = np.array(['zero', 'one', 'two', 'three'], dtype='S8') >>> a[1] = 'thirteen' >>> print(a) ['zero' 'thirteen' 'two' 'three']
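One detail worth keeping in mind with the fixed-width dtype approach from the answers: 'thirteen' happens to be exactly 8 characters, so 'S8' just fits, and anything longer would be silently truncated again. A quick sketch of checking the width an array actually got and sizing it generously (the unicode 'U' dtype shown is what Python 3 strings use; the question's Python 2 strings map to 'S'):

import numpy as np

a = np.array(['zero', 'one', 'two', 'three'])
print(a.dtype)    # width equals the longest initial element, e.g. <U5 (or |S5 on Python 2)

b = np.array(['zero', 'one', 'two', 'three'], dtype='U16')   # room for 16 characters
b[1] = 'thirteen'
print(b)          # ['zero' 'thirteen' 'two' 'three']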
String preallocation in numpy.arrays
>>> import numpy as np >>> a = np.array(['zero', 'one', 'two', 'three']) >>> a[1] = 'thirteen' >>> print a ['zero' 'thirt' 'two' 'three'] >>> As you can see, the second element has been truncated to the maximum number of characters in the original array. Is it possible to workaround this problem?
[ "If you don't know the maximum length element, then you can use dtype=object\n>>> import numpy as np\n>>> a = np.array(['zero', 'one', 'two', 'three'], dtype=object)\n>>> a[1] = 'thirteen'\n>>> print a\n['zero' 'thirteen' 'two' 'three']\n>>>\n\n", "Use the dtype argument in numpy.array, e.g.:\n>>> import numpy as np\n>>> a = np.array(['zero', 'one', 'two', 'three'], dtype='S8')\n>>> a[1] = 'thirteen'\n>>> print(a)\n['zero' 'thirteen' 'two' 'three']\n\n" ]
[ 6, 2 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001783369_numpy_python.txt
Q: How do I unit test Django views on the Google App Engine? I am struggling to run unit tests using the Django Client class on the Google App Engine. I downloaded GAEUnit (v2.0a for Django) and I am trying to use that as my testing framework (maybe I should rather be using something else?) I copy all the GAEUnit files into my project root as instructed, and I modify my app.yaml file. Currently app.yaml looks as follows: application: myapp version: 1 runtime: python api_version: 1 handlers: - url: /static static_dir: static - url: /.* script: django_bootstrap.py - url: /test.* script: gaeunit.py I also modified settings.py to add gaeunit as an application... (snippet from settings.py) INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.contenttypes', 'gaeunit', ) My unit test class resides in the 'test' folder and looks as follows (very simple): import unittest class Test(unittest.TestCase): def testName(self): self.assertTrue(False) if __name__ == "__main__": #import sys;sys.argv = ['', 'Test.testName'] unittest.main() However, when I try to run my application by navigating to http://localhost:8080, it fails with the following error: ViewDoesNotExist at / Could not import gaeunit.gaeunit. Error was: No module named gaeunit gaeunit.py does definitely exist in the folder. What am I doing wrong? A: I managed to figure out what was wrong here. I made two mistakes: In app.yaml, url: /test.* had to be before url:/.* (otherwise the /test URL would be matched to /.* before getting to the /test.* handler) Beware of copying all the files from the GAEUnit package into your project root! The GAEUnit folder contains a urls.py file that will overwrite your one if you are not paying attention. This was happening in my case. I simply restored my original urls.py. When working with the django test Client, and if you intend to access the response.template or response.context properties, you need to ensure that you call django.test.utils.setup_test_environment(), otherwise template and context will be Nothing. I call this in my test module, right at the bottom. Result: working unit tests!
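For the last point in the answer (calling setup_test_environment so that response.template and response.context are populated), a test module might look roughly like this; the '/' URL and the view it hits are assumptions.

import unittest

from django.test.client import Client
from django.test.utils import setup_test_environment

# Without this, response.template and response.context stay None.
setup_test_environment()

class ViewTest(unittest.TestCase):
    def test_front_page(self):
        client = Client()
        response = client.get('/')     # assumed URL
        self.assertEqual(response.status_code, 200)
        self.assertTrue(response.context is not None)

if __name__ == "__main__":
    unittest.main()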
How do I unit test Django views on the Google App Engine?
I am struggling to run unit tests using the Django Client class on the Google App Engine. I downloaded GAEUnit (v2.0a for Django) and I am trying to use that as my testing framework (maybe I should rather be using something else?) I copy all the GAEUnit files into my project root as instructed, and I modify my app.yaml file. Currently app.yaml looks as follows: application: myapp version: 1 runtime: python api_version: 1 handlers: - url: /static static_dir: static - url: /.* script: django_bootstrap.py - url: /test.* script: gaeunit.py I also modified settings.py to add gaeunit as an application... (snippet from settings.py) INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.contenttypes', 'gaeunit', ) My unit test class resides in the 'test' folder and looks as follows (very simple): import unittest class Test(unittest.TestCase): def testName(self): self.assertTrue(False) if __name__ == "__main__": #import sys;sys.argv = ['', 'Test.testName'] unittest.main() However, when I try to run my application by navigating to http://localhost:8080, it fails with the following error: ViewDoesNotExist at / Could not import gaeunit.gaeunit. Error was: No module named gaeunit gaeunit.py does definitely exist in the folder. What am I doing wrong?
[ "I managed to figure out what was wrong here. I made two mistakes:\n\nIn app.yaml, url: /test.* had to be before url:/.* (otherwise the /test URL would be matched to /.* before getting to the /test.* handler)\nBeware of copying all the files from the GAEUnit package into your project root! The GAEUnit folder contains a urls.py file that will overwrite your one if you are not paying attention. This was happening in my case. I simply restored my original urls.py. \nWhen working with the django test Client, and if you intend to access the response.template or response.context properties, you need to ensure that you call django.test.utils.setup_test_environment(), otherwise template and context will be Nothing. I call this in my test module, right at the bottom.\n\nResult: working unit tests!\n" ]
[ 0 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001784076_django_google_app_engine_python.txt
Q: Django passing a model instance to slightly different model I'm writing an django app for a project where everybody can change articles but the changes that users commit have to be viewed by someone before they go online. So you see it is a bit like the system used by wikipedia. class Content(models.Model): tp = models.DateTimeField(auto_now_add=True) topic = models.CharField(max_length=60) content = models.TextField() slug = models.SlugField(max_length=80) class ChangeSet(Content): content = models.ForeignKey('Content') those are my models. ChangeSet just inherits the Content and it has a ForeignKey to the original content. my question is how do I save my ChangeSet? def content(request, content_slug): content = get_object_or_404(Content, slug=content_slug) if request.method == 'POST': new_content = ContentModelForm(request.POST, instance=content) new_content = new_content.save(commit=False) changeset = ChangeSet(content=content) can I somehow pass the ChangeSet the content instance? Does Django recognize that those two models are the same except for the fk? Or do I have to manually add every field like: changeset.topic = new_content.topic Edit #1 It doesn't look like it's a big deal to just write 'changeset.topic = new_content.topic' but I shortened my real Content model so you guys won't have to read all the stuff that is irrelevant for solving this problem. Edit #2 To generalize the question a bit more. What is the best way saving Changesets? Making a new model for the changeset like I did or should I just add a ForeignKey with a reference to itself to my Content model? A: The way you have your models coded, I don't think it's going to work like you're expecting. In this case ChangeSet inherits from Content. The way Django implements this is by create a OneToOneField that connects ChangeSet with Content. This means 2 things for your application: Having the ForeignKey is pointless as that's like having an FK to yourself (and there's already a OneToOne behind the scenes anyways) ChangeSet is always going to point to the most recent instance of Content. There's nothing in this model setup that is going to save a copy of changes. Probably the best method I've seen to achieve this (used by django-reversion) is to take Content, serialized it, then save the Content Id and Content Type to a model. You can than access it like ChangeSet.original.{tp/topic/etc.}. Have a look at it's model code here: models.py. The equivalent to your ChangeSet would be the Version model.
Django passing a model instance to slightly different model
I'm writing an django app for a project where everybody can change articles but the changes that users commit have to be viewed by someone before they go online. So you see it is a bit like the system used by wikipedia. class Content(models.Model): tp = models.DateTimeField(auto_now_add=True) topic = models.CharField(max_length=60) content = models.TextField() slug = models.SlugField(max_length=80) class ChangeSet(Content): content = models.ForeignKey('Content') those are my models. ChangeSet just inherits the Content and it has a ForeignKey to the original content. my question is how do I save my ChangeSet? def content(request, content_slug): content = get_object_or_404(Content, slug=content_slug) if request.method == 'POST': new_content = ContentModelForm(request.POST, instance=content) new_content = new_content.save(commit=False) changeset = ChangeSet(content=content) can I somehow pass the ChangeSet the content instance? Does Django recognize that those two models are the same except for the fk? Or do I have to manually add every field like: changeset.topic = new_content.topic Edit #1 It doesn't look like it's a big deal to just write 'changeset.topic = new_content.topic' but I shortened my real Content model so you guys won't have to read all the stuff that is irrelevant for solving this problem. Edit #2 To generalize the question a bit more. What is the best way saving Changesets? Making a new model for the changeset like I did or should I just add a ForeignKey with a reference to itself to my Content model?
[ "The way you have your models coded, I don't think it's going to work like you're expecting. In this case ChangeSet inherits from Content. The way Django implements this is by create a OneToOneField that connects ChangeSet with Content. This means 2 things for your application:\n\nHaving the ForeignKey is pointless as that's like having an FK to yourself (and there's already a OneToOne behind the scenes anyways)\nChangeSet is always going to point to the most recent instance of Content. There's nothing in this model setup that is going to save a copy of changes.\n\nProbably the best method I've seen to achieve this (used by django-reversion) is to take Content, serialized it, then save the Content Id and Content Type to a model. You can than access it like ChangeSet.original.{tp/topic/etc.}.\nHave a look at it's model code here: models.py. The equivalent to your ChangeSet would be the Version model.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001782815_django_python.txt
Q: Most useful list-comprehension construction? What Python's user-made list-comprehension construction is the most useful? I have created the following two quantifiers, which I use to do different verification operations: def every(f, L): return not (False in [f(x) for x in L]) def some(f, L): return True in [f(x) for x in L] an optimized versions (requres Python 2.5+) was proposed below: def every(f, L): return all(f(x) for x in L) def some(f, L): return any(f(x) for x in L) So, how it works? """For all x in [1,4,9] there exists such y from [1,2,3] that x = y**2""" answer = every([1,4,9], lambda x: some([1,2,3], lambda y: y**2 == x)) Using such operations, you can easily do smart verifications, like: """There exists at least one bot in a room which has a life below 30%""" answer = some(bots_in_this_room, lambda x: x.life < 0.3) and so on, you can answer even very complicated questions using such quantifiers. Of course, there is no infinite lists in Python (hey, it's not Haskell :) ), but Python's lists comprehensions are very practical. Do you have your own favourite lists-comprehension constructions? PS: I wonder, why most people tend not to answer questions but criticize presented examples? The question is about favourite list-comprehension construction actually. A: anyand all are part of standard Python from 2.5. There's no need to make your own versions of these. Also the official version of any and all short-circuit the evaluation if possible, giving a performance improvement. Your versions always iterate over the entire list. If you want a version that accepts a predicate, use something like this that leverages the existing any and all functions: def anyWithPredicate(predicate, l): return any(predicate(x) for x in l) def allWithPredicate(predicate, l): return all(predicate(x) for x in l) I don't particularly see the need for these functions though, as it doesn't really save much typing. Also, hiding existing standard Python functions with your own functions that have the same name but different behaviour is a bad practice. A: There aren't all that many cases where a list comprehension (LC for short) will be substantially more useful than the equivalent generator expression (GE for short, i.e., using round parentheses instead of square brackets, to generate one item at a time rather than "all in bulk at the start"). Sometimes you can get a little extra speed by "investing" the extra memory to hold the list all at once, depending on vagaries of optimization and garbage collection on one or another version of Python, but that hardly amounts to substantial extra usefulness of LC vs GE. Essentially, to get substantial extra use out of the LC as compared to the GE, you need use cases which intrinsically require "more than one pass" on the sequence. In such cases, a GE would require you to generate the sequence once per pass, while, with an LC, you can generate the sequence once, then perform multiple passes on it (paying the generation cost only once). Multiple generation may also be problematic if the GE / LC are based on an underlying iterator that's not trivially restartable (e.g., a "file" that's actually a Unix pipe). For example, say you are reading a non-empty open text file f which has a bunch of (textual representations of) numbers separated by whitespace (including newlines here and there, empty lines, etc). 
You could transform it into a sequence of numbers with either a GE: G = (float(s) for line in f for s in line.split()) or a LC: L = [float(s) for line in f for s in line.split()] Which one is better? Depends on what you're doing with it (i.e, the use case!). If all you want is, say, the sum, sum(G) and sum(L) will do just as well. If you want the average, sum(L)/len(L) is fine for the list, but won't work for the generator -- given the difficulty in "restarting f", to avoid an intermediate list you'll have to do something like: tot = 0.0 for i, x in enumerate(G): tot += x return tot/(i+1) nowhere as snappy, fast, concise and elegant as return sum(L)/len(L). Remember that sorted(G) does return a list (inevitably), so L.sort() (which is in-place) is the rough equivalent in this case -- sorted(L) would be supererogatory (as now you have two lists). So when sorting is needed a generator may often be preferred simply due to conciseness. All in all, since L is identically equivalent to list(G), it's hard to get very excited about the ability to express it via punctuation (square brackets instead of round parentheses) instead of a single, short, pronounceable and obvious word like list;-). And that's all a LC is -- punctuation-based syntax shortcut for list(some_genexp)...! A: This solution shadows builtins which is generally a bad idea. However the usage feels fairly pythonic, and it preserves the original functionality. Note there are several ways to potentially optimize this based on testing, including, moving the imports out into the module level and changing f's default into None and testing for it instead of using a default lambda as I did. def any(l, f=lambda x: x): from __builtin__ import any as _any return _any(f(x) for x in l) def all(l, f=lambda x: x): from __builtin__ import all as _all return _all(f(x) for x in l) Just putting that out there for consideration and to see what people think of doing something so potentially dirty. A: For your information, the documentation for module itertools in python 3.x lists some pretty nice generator functions.
Most useful list-comprehension construction?
What Python's user-made list-comprehension construction is the most useful? I have created the following two quantifiers, which I use to do different verification operations: def every(f, L): return not (False in [f(x) for x in L]) def some(f, L): return True in [f(x) for x in L] an optimized versions (requres Python 2.5+) was proposed below: def every(f, L): return all(f(x) for x in L) def some(f, L): return any(f(x) for x in L) So, how it works? """For all x in [1,4,9] there exists such y from [1,2,3] that x = y**2""" answer = every([1,4,9], lambda x: some([1,2,3], lambda y: y**2 == x)) Using such operations, you can easily do smart verifications, like: """There exists at least one bot in a room which has a life below 30%""" answer = some(bots_in_this_room, lambda x: x.life < 0.3) and so on, you can answer even very complicated questions using such quantifiers. Of course, there is no infinite lists in Python (hey, it's not Haskell :) ), but Python's lists comprehensions are very practical. Do you have your own favourite lists-comprehension constructions? PS: I wonder, why most people tend not to answer questions but criticize presented examples? The question is about favourite list-comprehension construction actually.
[ "anyand all are part of standard Python from 2.5. There's no need to make your own versions of these. Also the official version of any and all short-circuit the evaluation if possible, giving a performance improvement. Your versions always iterate over the entire list.\nIf you want a version that accepts a predicate, use something like this that leverages the existing any and all functions:\ndef anyWithPredicate(predicate, l): return any(predicate(x) for x in l) \ndef allWithPredicate(predicate, l): return all(predicate(x) for x in l) \n\nI don't particularly see the need for these functions though, as it doesn't really save much typing.\nAlso, hiding existing standard Python functions with your own functions that have the same name but different behaviour is a bad practice.\n", "There aren't all that many cases where a list comprehension (LC for short) will be substantially more useful than the equivalent generator expression (GE for short, i.e., using round parentheses instead of square brackets, to generate one item at a time rather than \"all in bulk at the start\").\nSometimes you can get a little extra speed by \"investing\" the extra memory to hold the list all at once, depending on vagaries of optimization and garbage collection on one or another version of Python, but that hardly amounts to substantial extra usefulness of LC vs GE.\nEssentially, to get substantial extra use out of the LC as compared to the GE, you need use cases which intrinsically require \"more than one pass\" on the sequence. In such cases, a GE would require you to generate the sequence once per pass, while, with an LC, you can generate the sequence once, then perform multiple passes on it (paying the generation cost only once). Multiple generation may also be problematic if the GE / LC are based on an underlying iterator that's not trivially restartable (e.g., a \"file\" that's actually a Unix pipe).\nFor example, say you are reading a non-empty open text file f which has a bunch of (textual representations of) numbers separated by whitespace (including newlines here and there, empty lines, etc). You could transform it into a sequence of numbers with either a GE:\nG = (float(s) for line in f for s in line.split())\n\nor a LC:\nL = [float(s) for line in f for s in line.split()]\n\nWhich one is better? Depends on what you're doing with it (i.e, the use case!). If all you want is, say, the sum, sum(G) and sum(L) will do just as well. If you want the average, sum(L)/len(L) is fine for the list, but won't work for the generator -- given the difficulty in \"restarting f\", to avoid an intermediate list you'll have to do something like:\ntot = 0.0\nfor i, x in enumerate(G): tot += x\nreturn tot/(i+1)\n\nnowhere as snappy, fast, concise and elegant as return sum(L)/len(L).\nRemember that sorted(G) does return a list (inevitably), so L.sort() (which is in-place) is the rough equivalent in this case -- sorted(L) would be supererogatory (as now you have two lists). So when sorting is needed a generator may often be preferred simply due to conciseness.\nAll in all, since L is identically equivalent to list(G), it's hard to get very excited about the ability to express it via punctuation (square brackets instead of round parentheses) instead of a single, short, pronounceable and obvious word like list;-). And that's all a LC is -- punctuation-based syntax shortcut for list(some_genexp)...!\n", "This solution shadows builtins which is generally a bad idea. 
However the usage feels fairly pythonic, and it preserves the original functionality. \nNote there are several ways to potentially optimize this based on testing, including, moving the imports out into the module level and changing f's default into None and testing for it instead of using a default lambda as I did.\ndef any(l, f=lambda x: x):\n from __builtin__ import any as _any\n return _any(f(x) for x in l)\n\ndef all(l, f=lambda x: x):\n from __builtin__ import all as _all\n return _all(f(x) for x in l)\n\nJust putting that out there for consideration and to see what people think of doing something so potentially dirty.\n", "For your information, the documentation for module itertools in python 3.x lists some pretty nice generator functions.\n" ]
[ 13, 5, 4, 2 ]
[]
[]
[ "list", "list_comprehension", "python" ]
stackoverflow_0001783974_list_list_comprehension_python.txt
Q: Python - Strip all drive letters from csv file and replace with Z: Here is the code example. Basically output.csv needs to remove any drive letter A:-Y: and replace it with Z: I tried to do this with a list (not complete yet) but it generates the error: TypeError: expected a character buffer object #!/usr/bin/python import os.path import os import shutil import csv import re # Create the videos directory in the current directory # If the directory exists ignore it. # # Moves all files with the .wmv extenstion to the # videos folder for file structure # #Crawl the videos directory then change to videos directory # create the videos.csv file in the videos directory # replace any drive letter A:-Y: with Z: def createCSV(): directory = "videos" if not os.path.isdir("." + directory + "/"): os.mkdir("./" + directory + "/") for file in os.listdir("./"): if os.path.splitext(file)[1] == ".wmv": shutil.move(file, os.path.join("videos", file)) listDirectory = os.listdir("videos") os.chdir(directory) f = open("videos.csv", "w") f.writelines(os.path.join(os.getcwd(), f + '\n') for f in listDirectory) f = open('videos.csv', 'r') w = open('output.csv', 'w') f_cont = f.readlines() for line in f_cont: regex = re.compile("\b[GHI]:") re.sub(regex, "Z:", line) w.write(line) f.close() createCSV() EDIT: I think my flow/logic is wrong, the output.csv file that gets created still G: in the .csv it was not renamed to Z:\ from the re.sub line. A: It seems like the problem is in the loop at the bottom of your code. The string's replace method doesn't receive a list as its first arguments, but another string. You need to loop through your removeDrives list and call line.remove with every item in that list. A: I can see you use some pythonic snippets, with smart uses of path.join and a commented code. This can get even better, let's rewrite a few things so we can solve your drive letters issue, and gain a more pythonic code on the way : #!/usr/bin/env python # -*- coding= UTF-8 -*- # Firstly, modules can be documented using docstring, so drop the comments """ Create the videos directory in the current directory If the directory exists ignore it. Moves all files with the .wmv extension to the videos folder for file structure Crawl the videos directory then change to videos directory create the videos.csv file in the videos directory create output.csv replace any drive letter A:-Y: with Z: """ # not useful to import os and os.path as the second is contain in the first one import os import shutil import csv # import glob, it will be handy import glob import ntpath # this is to split the drive # don't really need to use a function # Here, don't bother checking if the directory exists # and you don't need add any slash either directory = "videos" ext = "*.wmv" try : os.mkdir(directory) except OSError : pass listDirectory = [] # creating a buffer so no need to list the dir twice for file in glob.glob(ext): # much easier this way, isn't it ? shutil.move(file, os.path.join(directory, file)) # good catch for shutil :-) listDirectory.append(file) os.chdir(directory) # you've smartly imported the csv module, so let's use it ! 
f = open("videos.csv", "w") vid_csv = csv.writer(f) w = open('output.csv', 'w') out_csv = csv.writer(w) # let's do everything in one loop for file in listDirectory : file_path = os.path.abspath(file) # Python includes functions to deal with drive letters :-D # I use ntpath because I am under linux but you can use # normal os.path functions on windows with the same names file_path_with_new_letter = ntpath.join("Z:", ntpath.splitdrive(file_path)[1]) # let's write the csv, using tuples vid_csv.writerow((file_path, )) out_csv.writerow((file_path_with_new_letter, )) A: You could use for driveletter in removedrives: line = line.replace(driveletter, 'Z:') thereby iterating over your list and replacing one of the possible drive letters after the other. As abyx wrote, replace expects a string, not a list, so you need this extra step. Or use a regular expression like import re regex = re.compile(r"\b[FGH]:") re.sub(regex, "Z:", line) Additional bonus: Regex can check that it's really a drive letter and not, for example, a part of something bigger like OH: hydrogen group. Apart from that, I suggest you use os.path's own path manipulation functions instead of trying to implement them yourself. And of course, if you do anything further with the CSV file, take a look at the csv module. A commentator above has already mentioned that you should close all the files you've opened. Or use with with statement: with open("videos.csv", "w") as f: do_stuff()
Python - Strip all drive letters from csv file and replace with Z:
Here is the code example. Basically output.csv needs to remove any drive letter A:-Y: and replace it with Z: I tried to do this with a list (not complete yet) but it generates the error: TypeError: expected a character buffer object #!/usr/bin/python import os.path import os import shutil import csv import re # Create the videos directory in the current directory # If the directory exists ignore it. # # Moves all files with the .wmv extenstion to the # videos folder for file structure # #Crawl the videos directory then change to videos directory # create the videos.csv file in the videos directory # replace any drive letter A:-Y: with Z: def createCSV(): directory = "videos" if not os.path.isdir("." + directory + "/"): os.mkdir("./" + directory + "/") for file in os.listdir("./"): if os.path.splitext(file)[1] == ".wmv": shutil.move(file, os.path.join("videos", file)) listDirectory = os.listdir("videos") os.chdir(directory) f = open("videos.csv", "w") f.writelines(os.path.join(os.getcwd(), f + '\n') for f in listDirectory) f = open('videos.csv', 'r') w = open('output.csv', 'w') f_cont = f.readlines() for line in f_cont: regex = re.compile("\b[GHI]:") re.sub(regex, "Z:", line) w.write(line) f.close() createCSV() EDIT: I think my flow/logic is wrong, the output.csv file that gets created still G: in the .csv it was not renamed to Z:\ from the re.sub line.
[ "It seems like the problem is in the loop at the bottom of your code. The string's replace method doesn't receive a list as its first arguments, but another string. You need to loop through your removeDrives list and call line.remove with every item in that list.\n", "I can see you use some pythonic snippets, with smart uses of path.join and a commented code. This can get even better, let's rewrite a few things so we can solve your drive letters issue, and gain a more pythonic code on the way :\n#!/usr/bin/env python\n# -*- coding= UTF-8 -*-\n\n# Firstly, modules can be documented using docstring, so drop the comments\n\"\"\"\n Create the videos directory in the current directory\n If the directory exists ignore it.\n\n Moves all files with the .wmv extension to the\n videos folder for file structure\n\n Crawl the videos directory then change to videos directory\n create the videos.csv file in the videos directory\n create output.csv replace any drive letter A:-Y: with Z:\n\"\"\"\n\n# not useful to import os and os.path as the second is contain in the first one\nimport os\nimport shutil\nimport csv\n# import glob, it will be handy\nimport glob\nimport ntpath # this is to split the drive\n\n# don't really need to use a function \n\n# Here, don't bother checking if the directory exists\n# and you don't need add any slash either\ndirectory = \"videos\"\next = \"*.wmv\"\ntry :\n os.mkdir(directory)\nexcept OSError :\n pass\n\nlistDirectory = [] # creating a buffer so no need to list the dir twice\n\nfor file in glob.glob(ext): # much easier this way, isn't it ?\n shutil.move(file, os.path.join(directory, file)) # good catch for shutil :-)\n listDirectory.append(file)\n\nos.chdir(directory)\n\n# you've smartly imported the csv module, so let's use it !\nf = open(\"videos.csv\", \"w\")\nvid_csv = csv.writer(f)\nw = open('output.csv', 'w')\nout_csv = csv.writer(w)\n\n# let's do everything in one loop\nfor file in listDirectory :\n file_path = os.path.abspath(file)\n # Python includes functions to deal with drive letters :-D\n # I use ntpath because I am under linux but you can use \n # normal os.path functions on windows with the same names\n file_path_with_new_letter = ntpath.join(\"Z:\", ntpath.splitdrive(file_path)[1])\n # let's write the csv, using tuples\n vid_csv.writerow((file_path, ))\n out_csv.writerow((file_path_with_new_letter, ))\n\n", "You could use\nfor driveletter in removedrives:\n line = line.replace(driveletter, 'Z:')\n\nthereby iterating over your list and replacing one of the possible drive letters after the other. As abyx wrote, replace expects a string, not a list, so you need this extra step. \nOr use a regular expression like\nimport re\nregex = re.compile(r\"\\b[FGH]:\")\nre.sub(regex, \"Z:\", line)\n\nAdditional bonus: Regex can check that it's really a drive letter and not, for example, a part of something bigger like OH: hydrogen group.\nApart from that, I suggest you use os.path's own path manipulation functions instead of trying to implement them yourself.\nAnd of course, if you do anything further with the CSV file, take a look at the csv module.\nA commentator above has already mentioned that you should close all the files you've opened. Or use with with statement:\nwith open(\"videos.csv\", \"w\") as f:\n do_stuff()\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001783994_python.txt
Q: Python: For each list element apply a function across the list Given [1,2,3,4,5], how can I do something like 1/1, 1/2, 1/3,1/4,1/5, ...., 3/1,3/2,3/3,3/4,3/5,.... 5/1,5/2,5/3,5/4,5/5 I would like to store all the results, find the minimum, and return the two numbers used to find the minimum. So in the case I've described above I would like to return (1,5). So basically I would like to do something like for each element i in the list map some function across all elements in the list, taking i and j as parameters store the result in a master list, find the minimum value in the master list, and return the arguments i, jused to calculate this minimum value. In my real problem I have a list objects/coordinates, and the function I am using takes two coordinates and calculates the euclidean distance. I'm trying to find minimum euclidean distance between any two points but I don't need a fancy algorithm. A: You can do this using list comprehensions and min() (Python 3.0 code): >>> nums = [1,2,3,4,5] >>> [(x,y) for x in nums for y in nums] [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)] >>> min(_, key=lambda pair: pair[0]/pair[1]) (1, 5) Note that to run this on Python 2.5 you'll need to either make one of the arguments a float, or do from __future__ import division so that 1/5 correctly equals 0.2 instead of 0. A: If I'm correct in thinking that you want to find the minimum value of a function for all possible pairs of 2 elements from a list... l = [1,2,3,4,5] def f(i,j): return i+j # Prints min value of f(i,j) along with i and j print min( (f(i,j),i,j) for i in l for j in l) A: Some readable python: def JoeCalimar(l): masterList = [] for i in l: for j in l: masterList.append(1.*i/j) pos = masterList.index(min(masterList)) a = pos/len(masterList) b = pos%len(masterList) return (l[a],l[b]) Let me know if something is not clear. A: If you don't mind importing the numpy package, it has a lot of convenient functionality built in. It's likely to be much more efficient to use their data structures than lists of lists, etc. from __future__ import division import numpy data = numpy.asarray([1,2,3,4,5]) dists = data.reshape((1,5)) / data.reshape((5,1)) print dists which = dists.argmin() (r,c) = (which // 5, which % 5) # assumes C ordering # pick whichever is most appropriate for you... minval = dists[r,c] minval = dists.min() minval = dists.ravel()[which] A: Doing it the mathy way... nums = [1, 2, 3, 4, 5] min_combo = (min(nums), max(nums)) Unless, of course, you have negatives in there. In that case, this won't work because you actually want the min and max absolute values - the numerator should be close to zero, and the denominator far from it, in either direction. And double negatives would break it. A: If working with Python ≥2.6 (including 3.x), you can: from __future__ import division import operator, itertools def getmin(alist): return min( (operator.div(*pair), pair) for pair in itertools.product(alist, repeat=2) )[1] getmin([1, 2, 3, 4, 5]) EDIT: Now that I think of it and if I remember my mathematics correctly, this should also give the answer assuming that all numbers are non-negative: def getmin(alist): return min(alist), max(alist) A: >>> nums = [1, 2, 3, 4, 5] >>> min(map((lambda t: ((float(t[0])/t[1]), t)), ((x, y) for x in nums for y in nums)))[1] (1, 5)
Python: For each list element apply a function across the list
Given [1,2,3,4,5], how can I do something like 1/1, 1/2, 1/3,1/4,1/5, ...., 3/1,3/2,3/3,3/4,3/5,.... 5/1,5/2,5/3,5/4,5/5 I would like to store all the results, find the minimum, and return the two numbers used to find the minimum. So in the case I've described above I would like to return (1,5). So basically I would like to do something like for each element i in the list map some function across all elements in the list, taking i and j as parameters store the result in a master list, find the minimum value in the master list, and return the arguments i, jused to calculate this minimum value. In my real problem I have a list objects/coordinates, and the function I am using takes two coordinates and calculates the euclidean distance. I'm trying to find minimum euclidean distance between any two points but I don't need a fancy algorithm.
[ "You can do this using list comprehensions and min() (Python 3.0 code):\n>>> nums = [1,2,3,4,5]\n>>> [(x,y) for x in nums for y in nums]\n[(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)]\n>>> min(_, key=lambda pair: pair[0]/pair[1])\n(1, 5)\n\nNote that to run this on Python 2.5 you'll need to either make one of the arguments a float, or do from __future__ import division so that 1/5 correctly equals 0.2 instead of 0.\n", "If I'm correct in thinking that you want to find the minimum value of a function for all possible pairs of 2 elements from a list...\nl = [1,2,3,4,5]\n\ndef f(i,j):\n return i+j \n\n# Prints min value of f(i,j) along with i and j\nprint min( (f(i,j),i,j) for i in l for j in l)\n\n", "Some readable python:\ndef JoeCalimar(l):\n masterList = []\n for i in l:\n for j in l:\n masterList.append(1.*i/j)\n pos = masterList.index(min(masterList))\n a = pos/len(masterList)\n b = pos%len(masterList)\n return (l[a],l[b])\n\nLet me know if something is not clear.\n", "If you don't mind importing the numpy package, it has a lot of convenient functionality built in. It's likely to be much more efficient to use their data structures than lists of lists, etc.\nfrom __future__ import division\n\nimport numpy\n\ndata = numpy.asarray([1,2,3,4,5])\ndists = data.reshape((1,5)) / data.reshape((5,1))\n\nprint dists\n\nwhich = dists.argmin()\n(r,c) = (which // 5, which % 5) # assumes C ordering\n\n# pick whichever is most appropriate for you...\nminval = dists[r,c]\nminval = dists.min()\nminval = dists.ravel()[which]\n\n", "Doing it the mathy way...\nnums = [1, 2, 3, 4, 5]\nmin_combo = (min(nums), max(nums))\n\nUnless, of course, you have negatives in there. In that case, this won't work because you actually want the min and max absolute values - the numerator should be close to zero, and the denominator far from it, in either direction. And double negatives would break it.\n", "If working with Python ≥2.6 (including 3.x), you can:\nfrom __future__ import division\nimport operator, itertools\n\ndef getmin(alist):\n return min(\n (operator.div(*pair), pair)\n for pair in itertools.product(alist, repeat=2)\n )[1]\n\ngetmin([1, 2, 3, 4, 5])\n\nEDIT: Now that I think of it and if I remember my mathematics correctly, this should also give the answer assuming that all numbers are non-negative:\ndef getmin(alist):\n return min(alist), max(alist)\n\n", ">>> nums = [1, 2, 3, 4, 5] \n>>> min(map((lambda t: ((float(t[0])/t[1]), t)), ((x, y) for x in nums for y in nums)))[1]\n(1, 5)\n\n" ]
[ 42, 10, 3, 3, 1, 1, 0 ]
[]
[]
[ "algorithm", "list", "list_comprehension", "python" ]
stackoverflow_0000493367_algorithm_list_list_comprehension_python.txt
Q: wxPython. Create a panel with four static sized boxes I'm trying to create a panel, with four boxes containing some data. These four boxes should have a predefined static size. What I have so far is four boxes that is overlapping to some extent. Any ideas? Code: import wx class MyFrame(wx.Frame): def __init__(self, *args, **kwargs): wx.Frame.__init__(self, *args, **kwargs) self.pl = wx.Panel(self) self.SetSize((500, 350)) sb = wx.StaticBox(self.pl, -1, 'BOX0', size=(180, 150)) sat = wx.CheckBox(self.pl, -1, 'Satellite') gsm = wx.CheckBox(self.pl, -1, 'GSM') wlan = wx.CheckBox(self.pl, -1, 'WLAN') sb2 = wx.StaticBox(self.pl, -1, 'BOX1', size=(180, 150)) nm2 = wx.StaticText(self.pl, -1, 'default1') sb3 = wx.StaticBox(self.pl, -1, 'BOX2', size=(180, 150)) nm3 = wx.StaticText(self.pl, -1, 'default2') sb4 = wx.StaticBox(self.pl, -1, 'BOX3', size=(180, 150)) nm4 = wx.StaticText(self.pl, -1, 'default3') box = wx.StaticBoxSizer(sb, wx.VERTICAL) box.Add(sat, 0, wx.ALL, 5) box.Add(gsm, 0, wx.ALL, 5) box.Add(wlan, 0, wx.ALL, 5) box2 = wx.StaticBoxSizer(sb2) box2.Add(nm2, 0, wx.ALL, 5) box3 = wx.StaticBoxSizer(sb3) box3.Add(nm3, 0, wx.ALL, 5) box4 = wx.StaticBoxSizer(sb4) box4.Add(nm4, 0, wx.ALL, 5) gs = wx.BoxSizer(wx.HORIZONTAL) gs.Add(box) gs.Add(box2) gss = wx.BoxSizer(wx.HORIZONTAL) gss.Add(box3) gss.Add(box4) gt = wx.BoxSizer(wx.VERTICAL) gt.Add(gs) gt.Add(gss) self.pl.SetSizer(gt) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, '08_gridsizer.py') frame.Show() self.SetTopWindow(frame) return 1 if __name__ == "__main__": app = MyApp(0) app.MainLoop() A: I'll just answer my own question. The solution is to add a wx.Sizer.SetMinSize() to each wx.StaticBoxSizer() like this. sb = wx.StaticBox(self.pl, -1, 'BOX0') sat = wx.CheckBox(self.pl, -1, 'Satellite') gsm = wx.CheckBox(self.pl, -1, 'GSM') wlan = wx.CheckBox(self.pl, -1, 'WLAN') box = wx.StaticBoxSizer(sb, wx.VERTICAL) box.SetMinSize((180, 150)) box.Add(sat, 0, wx.ALL, 5) box.Add(gsm, 0, wx.ALL, 5) box.Add(wlan, 0, wx.ALL, 5) And I removed the size argument in wx.StaticText()
wxPython. Create a panel with four static sized boxes
I'm trying to create a panel, with four boxes containing some data. These four boxes should have a predefined static size. What I have so far is four boxes that is overlapping to some extent. Any ideas? Code: import wx class MyFrame(wx.Frame): def __init__(self, *args, **kwargs): wx.Frame.__init__(self, *args, **kwargs) self.pl = wx.Panel(self) self.SetSize((500, 350)) sb = wx.StaticBox(self.pl, -1, 'BOX0', size=(180, 150)) sat = wx.CheckBox(self.pl, -1, 'Satellite') gsm = wx.CheckBox(self.pl, -1, 'GSM') wlan = wx.CheckBox(self.pl, -1, 'WLAN') sb2 = wx.StaticBox(self.pl, -1, 'BOX1', size=(180, 150)) nm2 = wx.StaticText(self.pl, -1, 'default1') sb3 = wx.StaticBox(self.pl, -1, 'BOX2', size=(180, 150)) nm3 = wx.StaticText(self.pl, -1, 'default2') sb4 = wx.StaticBox(self.pl, -1, 'BOX3', size=(180, 150)) nm4 = wx.StaticText(self.pl, -1, 'default3') box = wx.StaticBoxSizer(sb, wx.VERTICAL) box.Add(sat, 0, wx.ALL, 5) box.Add(gsm, 0, wx.ALL, 5) box.Add(wlan, 0, wx.ALL, 5) box2 = wx.StaticBoxSizer(sb2) box2.Add(nm2, 0, wx.ALL, 5) box3 = wx.StaticBoxSizer(sb3) box3.Add(nm3, 0, wx.ALL, 5) box4 = wx.StaticBoxSizer(sb4) box4.Add(nm4, 0, wx.ALL, 5) gs = wx.BoxSizer(wx.HORIZONTAL) gs.Add(box) gs.Add(box2) gss = wx.BoxSizer(wx.HORIZONTAL) gss.Add(box3) gss.Add(box4) gt = wx.BoxSizer(wx.VERTICAL) gt.Add(gs) gt.Add(gss) self.pl.SetSizer(gt) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, '08_gridsizer.py') frame.Show() self.SetTopWindow(frame) return 1 if __name__ == "__main__": app = MyApp(0) app.MainLoop()
[ "I'll just answer my own question.\nThe solution is to add a wx.Sizer.SetMinSize() to each wx.StaticBoxSizer() like this.\nsb = wx.StaticBox(self.pl, -1, 'BOX0')\nsat = wx.CheckBox(self.pl, -1, 'Satellite')\ngsm = wx.CheckBox(self.pl, -1, 'GSM')\nwlan = wx.CheckBox(self.pl, -1, 'WLAN')\n\nbox = wx.StaticBoxSizer(sb, wx.VERTICAL)\nbox.SetMinSize((180, 150))\nbox.Add(sat, 0, wx.ALL, 5)\nbox.Add(gsm, 0, wx.ALL, 5)\nbox.Add(wlan, 0, wx.ALL, 5)\n\nAnd I removed the size argument in wx.StaticText()\n" ]
[ 1 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0001784026_python_wxpython.txt
Q: Python for web scripting I'm just starting out with Python and have practiced so far in the IDLE interface. Now I'd like to configure Python with MAMP so I can start creating really basic webapps — using Python inside HTML, or well, vice-versa. (I'm assuming HTML is allowed in Python, just like PHP? If not, are there any modules/template engines for that?) What modules do I need to install to run .py from my localhost? Googling a bit, it seems there're various methods — mod_python, FastCGI etc.. which one should I use and how to install it with MAMP Pro 1.8.2? Many thanks A: I think probably the easiest way for you to get started is to work with something like Django. It's a top-to-bottom web development stack which provides you with everything you need to develop and run a backend server. Things can be very simple in that world, no need to mess around with mod_python or FastCGI unless you really have the need. It's also nice because it conforms to WSGI, which is a Python standard which allows you to plug together unrelated bits of reusable code to add specific functionality to your web app when needed (say for example on-the-fly gzip compression, or OpenID authentication). Once you have outgrown the default Django stack, or want to change something specific you can go down this road if you want. Those are a few pointers to get you started. You could also look at other alternative frameworks such as TurboGears or paste if you wanted but Django is a great way to get something up and running quickly. Anyway, I'm sure you'll enjoy the experience: WSGI makes it a real joy knocking up web apps with the wealth of Python code you'll find on the web. [edit: you may find it helpful to browse some of the may Django related questions here on stack-overflow if you run into problems] A: You asked whether HTML is allowed within Python, which indicates that you still think too much in PHP terms about it. Contrary to PHP, Python was not designed to create dynamic web-pages. Instead, it was designed as a stand-alone, general-purpose programming language. Therefore you will not be able to put HTML into Python. There are some templating libraries which allow you to go the other way around, somewhat, but that's a completely different issue. With things like Django or TurboGears or all the other web-frameworks, you essentially set up a small, stand-alone web-server (which comes bundled with the framework so you don't have to do anything), tell the server which function should handle what URL and then write those functions. In the simplest case, each URL you specify has its own function. That 'handler function' (or 'view function' in Django terminology) receives a request object in which interesting info about the just-received request is contained. It then does whatever processing is required (a DB query for example). Finally, it produces some output, which is returned to the client. A typical way to get the output is to have some data passed to a template where it is rendered together with some HTML. So, the HTML is separated in a template (in the typical case) and is not in the Python code. About Python 3: I think you will find that the vast majority of all Python development going on in the world is still with Python 2.*. As others have pointed out here, Python 3 is just coming out, most of the good stuff is not available for it yet, and you shouldn't be bothered about that. My advise: Grab yourself Python 2.6 and Django 1.1 and dive in. It's fun. A: Django is definitely not the easiest way. 
check out pylons. http://pylonshq.com/ also check sqlalchemy for sql related stuff. Very cool library. On the other hand, you can always start with something very simple like mako for templating. http://www.makotemplates.org/
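As a small, hedged illustration of the "one view function per URL plus a template" flow described in the second answer (Django 1.x style; the view name, template name and URL pattern below are invented for the example):

from django.shortcuts import render_to_response

def event_list(request):
    # do whatever processing is required, e.g. a database query
    events = [{"name": "PyCon"}, {"name": "EuroPython"}]
    # hand the data to a template, where it is rendered together with HTML
    return render_to_response("events.html", {"events": events})

# urls.py would then map a URL to this function, for example:
# urlpatterns = patterns('', (r'^events/$', 'myapp.views.event_list'))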
Python for web scripting
I'm just starting out with Python and have practiced so far in the IDLE interface. Now I'd like to configure Python with MAMP so I can start creating really basic webapps — using Python inside HTML, or well, vice-versa. (I'm assuming HTML is allowed in Python, just like PHP? If not, are there any modules/template engines for that?) What modules do I need to install to run .py from my localhost? Googling a bit, it seems there're various methods — mod_python, FastCGI etc.. which one should I use and how to install it with MAMP Pro 1.8.2? Many thanks
[ "I think probably the easiest way for you to get started is to work with something like Django. It's a top-to-bottom web development stack which provides you with everything you need to develop and run a backend server. Things can be very simple in that world, no need to mess around with mod_python or FastCGI unless you really have the need. \nIt's also nice because it conforms to WSGI, which is a Python standard which allows you to plug together unrelated bits of reusable code to add specific functionality to your web app when needed (say for example on-the-fly gzip compression, or OpenID authentication). Once you have outgrown the default Django stack, or want to change something specific you can go down this road if you want.\nThose are a few pointers to get you started. You could also look at other alternative frameworks such as TurboGears or paste if you wanted but Django is a great way to get something up and running quickly. Anyway, I'm sure you'll enjoy the experience: WSGI makes it a real joy knocking up web apps with the wealth of Python code you'll find on the web.\n[edit: you may find it helpful to browse some of the may Django related questions here on stack-overflow if you run into problems]\n", "You asked whether HTML is allowed within Python, which indicates that you still think too much in PHP terms about it. Contrary to PHP, Python was not designed to create dynamic web-pages. Instead, it was designed as a stand-alone, general-purpose programming language. Therefore you will not be able to put HTML into Python. There are some templating libraries which allow you to go the other way around, somewhat, but that's a completely different issue.\nWith things like Django or TurboGears or all the other web-frameworks, you essentially set up a small, stand-alone web-server (which comes bundled with the framework so you don't have to do anything), tell the server which function should handle what URL and then write those functions. In the simplest case, each URL you specify has its own function.\nThat 'handler function' (or 'view function' in Django terminology) receives a request object in which interesting info about the just-received request is contained. It then does whatever processing is required (a DB query for example). Finally, it produces some output, which is returned to the client. A typical way to get the output is to have some data passed to a template where it is rendered together with some HTML.\nSo, the HTML is separated in a template (in the typical case) and is not in the Python code.\nAbout Python 3: I think you will find that the vast majority of all Python development going on in the world is still with Python 2.*. As others have pointed out here, Python 3 is just coming out, most of the good stuff is not available for it yet, and you shouldn't be bothered about that.\nMy advise: Grab yourself Python 2.6 and Django 1.1 and dive in. It's fun.\n", "Django is definitely not the easiest way.\ncheck out pylons. http://pylonshq.com/\nalso check sqlalchemy for sql related stuff. Very cool library.\nOn the other hand, you can always start with something very simple like mako for templating. http://www.makotemplates.org/\n" ]
[ 2, 2, 0 ]
[]
[]
[ "fastcgi", "html", "python", "template_engine", "web_applications" ]
stackoverflow_0001781431_fastcgi_html_python_template_engine_web_applications.txt
Q: Designing a multi-process spider in Python I'm working on a multi-process spider in Python. It should start scraping one page for links and work from there. Specifically, the top-level page contains a list of categories, the second-level pages events in those categories, and the final, third-level pages participants in the events. I can't predict how many categories, events or participants there'll be. I'm at a bit of a loss as to how best to design such a spider, and in particular, how to know when it's finished crawling (it's expected to keep going till it has discovered and retrieved every relevant page). Ideally, the first scrape would be synchronous, and everything else async to maximise parallel parsing and adding to the DB, but I'm stuck on how to figure out when the crawling is finished. How would you suggest I structure the spider, in terms of parallel processes and particularly the above problem? A: You might want to look into Scrapy, an asynchronous (based on Twisted) web-scraper. It looks like for your task, the XPath description for the spider would be pretty easy to define! Good luck! (If you really want to do it yourself, maybe consider having small sqlite db that keeps track of whether each page has been hit or not... or if it's reasonable size, just do it in memory... Twisted in general might be your friend for hit.) A: I presume you are putting items to visit in a queue, exhausting the queue with workers, and the workers find new items to visit and add them to the queue. It's finished when all the workers are idle, and the queue of items to visit is empty. When the workers take advantage of the queue's task_done() method, The main thread can join() the queue to block until it's empty.
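A minimal sketch of the queue-based termination idea from the second answer, using only the standard library; fetch_links() is a made-up placeholder for the real page-fetching and parsing code, and the worker count is arbitrary:

import threading
from Queue import Queue   # named 'queue' in Python 3

def fetch_links(url):
    # placeholder: download url, parse it, return the child URLs found there
    return []

to_visit = Queue()
to_visit.put("http://example.com/categories")

def worker():
    while True:
        url = to_visit.get()
        try:
            for link in fetch_links(url):
                to_visit.put(link)       # workers discover and enqueue new work
        finally:
            to_visit.task_done()         # mark this page as fully processed

for _ in range(4):
    t = threading.Thread(target=worker)
    t.setDaemon(True)                    # daemon threads let the program exit afterwards
    t.start()

to_visit.join()    # blocks until every queued page has been processed
print "crawl finished"

A real spider would also keep a "seen" set so pages are not visited twice, and the same task_done()/join() pattern is available on multiprocessing.JoinableQueue if separate processes are preferred over threads.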
Designing a multi-process spider in Python
I'm working on a multi-process spider in Python. It should start scraping one page for links and work from there. Specifically, the top-level page contains a list of categories, the second-level pages events in those categories, and the final, third-level pages participants in the events. I can't predict how many categories, events or participants there'll be. I'm at a bit of a loss as to how best to design such a spider, and in particular, how to know when it's finished crawling (it's expected to keep going till it has discovered and retrieved every relevant page). Ideally, the first scrape would be synchronous, and everything else async to maximise parallel parsing and adding to the DB, but I'm stuck on how to figure out when the crawling is finished. How would you suggest I structure the spider, in terms of parallel processes and particularly the above problem?
[ "You might want to look into Scrapy, an asynchronous (based on Twisted) web-scraper. It looks like for your task, the XPath description for the spider would be pretty easy to define! \nGood luck!\n(If you really want to do it yourself, maybe consider having small sqlite db that keeps track of whether each page has been hit or not... or if it's reasonable size, just do it in memory... Twisted in general might be your friend for hit.)\n", "I presume you are putting items to visit in a queue, exhausting the queue with workers, and the workers find new items to visit and add them to the queue. \nIt's finished when all the workers are idle, and the queue of items to visit is empty. \nWhen the workers take advantage of the queue's task_done() method, The main thread can join() the queue to block until it's empty. \n" ]
[ 2, 1 ]
[]
[]
[ "multithreading", "python", "web_crawler" ]
stackoverflow_0001784632_multithreading_python_web_crawler.txt
Q: Speed of many regular expressions in python I'm writing a python program that deals with a fair amount of strings/files. My problem is that I'm going to be presented with a fairly short piece of text, and I'm going to need to search it for instances of a fairly broad range of words/phrases. I'm thinking I'll need to compile regular expressions as a way of matching these words/phrases in the text. My concern, however, is that this will take a lot of time. My question is how fast is the process of repeatedly compiling regular expressions, and then searching through a small body of text to find matches? Would I be better off using some string method? Edit: So, I guess an example of my question would be: How expensive would it be to compile and search with one regular expression versus say, iterating 'if "word" in string' say, 5 times? A: You should try to compile all your regexps into a single one using the | operator. That way, the regexp engine will do most of the optimizations for you. Use the grouping operator () to determine which regexp matched. A: If speed is of the essence, you are better off running some tests before you decide how to code your production application. First of all, you said that you are searching for words which suggests that you may be able to do this using split() to break up the string on whitespace. And then use simple string comparisons to do your search. Definitely do compile your regular expressions and do a timing test comparing that with the plain string functions. Check the documentation for the string class for a full list. A: If you like to know how does it fast during compiling regex patterns, you need to benchmark it. Here is how I do that. Its compile 1 Million time each patterns. import time,re def taken(f): def wrap(*arg): t1,r,t2=time.time(),f(*arg),time.time() print t2-t1,"s taken" return r return wrap @taken def regex_compile_test(x): for i in range(1000000): re.compile(x) print "for",x, #sample tests regex_compile_test("a") regex_compile_test("[a-z]") regex_compile_test("[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}") Its took around 5 min for each patterns in my computer. for a 4.88999986649 s taken for [a-z] 4.70300006866 s taken for [A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4} 4.78200006485 s taken The real Bottleneck is not in compiling patterns, its in extracting text like re.findall, replacing re.sub. If you use that against Several MB texts, Its quite slow. If your text is fixed, use normal str.find, its faster than regex. Actually, If you give your text samples, and your regex patterns samples, we could give you better idea, there is many many great regex, and python guys out there. Hope this help, sorry If my answer couldn't help you. A: Your requirement appears to be searching a text for the first occurrence of any one of a collection of strings. Presumably you then wish to restart the search to find the next occurrence, and so on until the searched string is exhausted. Only plain old string comparison is involved. The classic algorithm for this task is Aho-Corasick for which there is a Python extension (written in C). This should beat the socks off any alternative that's using the re module. A: When you compile the regexp, it is converted into a state machine representation. Provided the regexp is efficiently expressed, it should still be very fast to match. Compiling the regexp can be expensive though, so you will want to do that up front, and as infrequently as possible. 
Ultimately though, only you can answer if it is fast enough for your requirements. There are other string searching approaches, such as the Boyer-Moore algorithm. But I'd wager the complexity of searching for multiple separate strings is much higher than a regexp that can switch off each successive character. A: This is a question that can readily be answered by just trying it. >>> import re >>> import timeit >>> find = ['foo', 'bar', 'baz'] >>> pattern = re.compile("|".join(find)) >>> with open('c:\\temp\\words.txt', 'r') as f: words = f.readlines() >>> len(words) 235882 >>> timeit.timeit('r = filter(lambda w: any(s for s in find if w.find(s) >= 0), words)', 'from __main__ import find, words', number=30) 18.404569854548527 >>> timeit.timeit('r = filter(lambda w: any(s for s in find if s in w), words)', 'from __main__ import find, words', number=30) 10.953313759150944 >>> timeit.timeit('r = filter(lambda w: pattern.search(w), words)', 'from __main__ import pattern, words', number=30) 6.8793022576891758 It looks like you can reasonably expect regular expressions to be faster than using find or in. Though if I were you I'd repeat this test with a case that was more like your real data. A: If you're just searching for a particular substring, use str.find() instead. A: Depending on what you're doing it might be better to use a tokenizer and loop through the tokens to find matches. However, when it comes to short pieces of text regexes have incredibly good performance. Personally I remember only coming into problems when text sizes became ridiculous like 100k words or something like that. Furthermore, if you are worried about the speed of actual regex compilation rather than matching, you might benefit from creating a daemon that compiles all the regexes then goes through all the pieces of text in a big loop or runs as a service. This way you will only have to compile the regexes once. A: in general case, you can use "in" keyword for line in open("file"): if "word" in line: print line.rstrip() regex is usually not needed when you use Python :)
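A hedged sketch of the "compile everything into one regexp with |" suggestion from the first answer; the sample phrases and group names below are invented for illustration:

import re

phrases = {"word": r"\bfoo\b", "number": r"\d+", "email": r"[\w.]+@[\w.]+"}
combined = re.compile("|".join("(?P<%s>%s)" % (name, pattern)
                               for name, pattern in phrases.items()))

text = "reach foo at bar@example.com after 5"
for match in combined.finditer(text):
    # lastgroup names the alternative that actually matched
    print match.lastgroup, "->", match.group()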
Speed of many regular expressions in python
I'm writing a python program that deals with a fair amount of strings/files. My problem is that I'm going to be presented with a fairly short piece of text, and I'm going to need to search it for instances of a fairly broad range of words/phrases. I'm thinking I'll need to compile regular expressions as a way of matching these words/phrases in the text. My concern, however, is that this will take a lot of time. My question is how fast is the process of repeatedly compiling regular expressions, and then searching through a small body of text to find matches? Would I be better off using some string method? Edit: So, I guess an example of my question would be: How expensive would it be to compile and search with one regular expression versus say, iterating 'if "word" in string' say, 5 times?
[ "You should try to compile all your regexps into a single one using the | operator. That way, the regexp engine will do most of the optimizations for you. Use the grouping operator () to determine which regexp matched.\n", "If speed is of the essence, you are better off running some tests before you decide how to code your production application.\nFirst of all, you said that you are searching for words which suggests that you may be able to do this using split() to break up the string on whitespace. And then use simple string comparisons to do your search.\nDefinitely do compile your regular expressions and do a timing test comparing that with the plain string functions. Check the documentation for the string class for a full list.\n", "If you like to know how does it fast during compiling regex patterns, you need to benchmark it.\nHere is how I do that. Its compile 1 Million time each patterns.\nimport time,re\n\ndef taken(f):\n def wrap(*arg):\n t1,r,t2=time.time(),f(*arg),time.time()\n print t2-t1,\"s taken\"\n return r\n return wrap\n\n@taken\ndef regex_compile_test(x):\n for i in range(1000000):\n re.compile(x)\n print \"for\",x,\n\n#sample tests\nregex_compile_test(\"a\")\nregex_compile_test(\"[a-z]\")\nregex_compile_test(\"[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}\")\n\nIts took around 5 min for each patterns in my computer. \nfor a 4.88999986649 s taken\nfor [a-z] 4.70300006866 s taken\nfor [A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4} 4.78200006485 s taken\n\nThe real Bottleneck is not in compiling patterns, its in extracting text like re.findall, replacing re.sub. If you use that against Several MB texts, Its quite slow.\nIf your text is fixed, use normal str.find, its faster than regex.\nActually, If you give your text samples, and your regex patterns samples, we could give you better idea, there is many many great regex, and python guys out there.\nHope this help, sorry If my answer couldn't help you.\n", "Your requirement appears to be searching a text for the first occurrence of any one of a collection of strings. Presumably you then wish to restart the search to find the next occurrence, and so on until the searched string is exhausted. Only plain old string comparison is involved.\nThe classic algorithm for this task is Aho-Corasick for which there is a Python extension (written in C). This should beat the socks off any alternative that's using the re module.\n", "When you compile the regexp, it is converted into a state machine representation. Provided the regexp is efficiently expressed, it should still be very fast to match. Compiling the regexp can be expensive though, so you will want to do that up front, and as infrequently as possible. Ultimately though, only you can answer if it is fast enough for your requirements.\nThere are other string searching approaches, such as the Boyer-Moore algorithm. 
But I'd wager the complexity of searching for multiple separate strings is much higher than a regexp that can switch off each successive character.\n", "This is a question that can readily be answered by just trying it.\n>>> import re\n>>> import timeit\n>>> find = ['foo', 'bar', 'baz']\n>>> pattern = re.compile(\"|\".join(find))\n>>> with open('c:\\\\temp\\\\words.txt', 'r') as f:\n words = f.readlines()\n\n>>> len(words)\n235882\n>>> timeit.timeit('r = filter(lambda w: any(s for s in find if w.find(s) >= 0), words)', 'from __main__ import find, words', number=30)\n18.404569854548527\n>>> timeit.timeit('r = filter(lambda w: any(s for s in find if s in w), words)', 'from __main__ import find, words', number=30)\n10.953313759150944\n>>> timeit.timeit('r = filter(lambda w: pattern.search(w), words)', 'from __main__ import pattern, words', number=30)\n6.8793022576891758\n\nIt looks like you can reasonably expect regular expressions to be faster than using find or in. Though if I were you I'd repeat this test with a case that was more like your real data.\n", "If you're just searching for a particular substring, use str.find() instead.\n", "Depending on what you're doing it might be better to use a tokenizer and loop through the tokens to find matches.\nHowever, when it comes to short pieces of text regexes have incredibly good performance. Personally I remember only coming into problems when text sizes became ridiculous like 100k words or something like that.\nFurthermore, if you are worried about the speed of actual regex compilation rather than matching, you might benefit from creating a daemon that compiles all the regexes then goes through all the pieces of text in a big loop or runs as a service. This way you will only have to compile the regexes once.\n", "in general case, you can use \"in\" keyword\nfor line in open(\"file\"):\n if \"word\" in line:\n print line.rstrip()\n\nregex is usually not needed when you use Python :)\n" ]
[ 6, 5, 3, 3, 2, 2, 0, 0, 0 ]
[]
[]
[ "performance", "python", "regex" ]
stackoverflow_0001782586_performance_python_regex.txt
Q: term by term division in python (division termino a termino en python ) hello all, I need to define a function that divides two matrices term by term (element-wise), or in the worst case two lists of lists, and stores the result in a third matrix. Thanks for any response A: Unless I'm misunderstanding, this is where numpy can be put to good use: >>> from numpy import * >>> a = array([[1,2,3],[4,5,6],[7,8,9]]) >>> b = array([[0.5] * 3, [0.5] * 3, [0.5] * 3]) >>> a / b array([[ 2., 4., 6.], [ 8., 10., 12.], [ 14., 16., 18.]]) This works for multiplication too. And indeed, as noted by Mark, scalar division (and multiplication) is also possible: >>> a / 10.0 array([[ 0.1, 0.2, 0.3], [ 0.4, 0.5, 0.6], [ 0.7, 0.8, 0.9]]) >>> a * 10 array([[10, 20, 30], [40, 50, 60], [70, 80, 90]]) Edit: to be complete, for lists of lists you could do the following: >>> a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] >>> b = [[0.5] * 3, [0.5] * 3, [0.5] * 3] >>> def mat_div(a, b): ... return [[n / d for n, d in zip(ra, rb)] for ra, rb in zip(a, b)] ... >>> mat_div(a, b) [[2.0, 4.0, 6.0], [8.0, 10.0, 12.0], [14.0, 16.0, 18.0]]
term by term division in python (division termino a termino en python )
hello all, I need to define a function that divides two matrices term by term (element-wise), or in the worst case two lists of lists, and stores the result in a third matrix. Thanks for any response
[ "Unless I'm misunderstanding, this is where numpy can be put to good use:\n>>> from numpy import *\n>>> a = array([[1,2,3],[4,5,6],[7,8,9]])\n>>> b = array([[0.5] * 3, [0.5] * 3, [0.5] * 3])\n>>> a / b\narray([[ 2., 4., 6.],\n [ 8., 10., 12.],\n [ 14., 16., 18.]])\n\nThis works for multiplication too. And indeed, as noted by Mark, scalar division (and multiplication) is also possible:\n>>> a / 10.0\narray([[ 0.1, 0.2, 0.3],\n [ 0.4, 0.5, 0.6],\n [ 0.7, 0.8, 0.9]])\n>>> a * 10\narray([[10, 20, 30],\n [40, 50, 60],\n [70, 80, 90]])\n\n\nEdit: to be complete, for lists of lists you could do the following:\n>>> a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n>>> b = [[0.5] * 3, [0.5] * 3, [0.5] * 3]\n>>> def mat_div(a, b): \n... return [[n / d for n, d in zip(ra, rb)] for ra, rb in zip(a, b)] \n... \n>>> mat_div(a, b)\n[[2.0, 4.0, 6.0], [8.0, 10.0, 12.0], [14.0, 16.0, 18.0]]\n\n" ]
[ 8 ]
[]
[]
[ "python" ]
stackoverflow_0001785005_python.txt
Q: Tuples in Dicts Is it possible in python to add a tuple as a value in a dictionary? And if it is,how can we add a new value, then? And how can we remove and change it? A: >>> a = {'tuple': (23, 32)} >>> a {'tuple': (23, 32)} >>> a['tuple'] = (42, 24) >>> a {'tuple': (42, 24)} >>> del a['tuple'] >>> a {} if you meant to use tuples as keys you could do: >>> b = {(23, 32): 'tuple as key'} >>> b {(23, 32): 'tuple as key'} >>> b[23, 32] = 42 >>> b {(23, 32): 42} Generally speaking there is nothing specific about tuples being in dictionary, they keep behaving as tuples. A: Since tuples are immutable, you cannot add a value to the tuple. What you can do, is construct a new tuple from the current tuple and an extra value. The += operator does this for you, provided the left argument is a variable (or in this case a dictionary value): >>> t = {'k': (1, 2)} >>> t['k'] += (3,) >>> t {'k': (1, 2, 3)} Regardless, if you plan on altering the tuple value, perhaps it's better to store lists? Those are mutable. Edit: Since you updated your question†, observe the following: >>> d = {42: ('name', 'date')} >>> d[42][0] = 'name2' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignment This happens because, as stated before, tuples are immutable. You cannot change them. If you want to change them, then in fact you'll have to create a new one. Thus: >>> d[42] = ('name2', d[42][2]) >>> d {42: ('name2', 'date')} As a side note, you may want to use namedtuples. They work just like regular tuples, but allow you to refer to elements within the tuple by name: >>> from collections import namedtuple >>> Person = namedtuple('Person', 'name date') >>> t = {42: Person('name', 'date')} >>> t[42] = Person('name2', t[42].date) >>> t {42: Person(name='name2', date='date')}   †: Next time please edit your actual question. Do not post an answer containing only further questions. This is not a forum. A: You can't change a tuple itself. You have to replace it by a different tuple. When you use a list, you could also add values to it (changing the list itself) without need to replace it: >> a = {'list': (23, 32)} >> a {'list': [23, 32]} >> a['list'].append(99) >> a {'list': [23, 32, 99]} In most cases, lists can be used as replacement for tuples (since as much I know they support all tuple functions -- this is duck typing, man!) A: t1=('name','date') t2=('x','y') # "Number" is a String key! d1={"Number":t1} # Update the value of "Number" d1["Number"] = t2 # Use a tuple as key, and another tuple as value d1[t1] = t2 # Obtain values (getters) # Can throw a KeyError if "Number" not a key foo = d1["Number"] # Does not throw a key error, t1 is the value if "Number" is not in the dict d1.get("Number", t1) # t3 now is the same as t1 t3 = d1[ ('name', 'date') ] You updated your question again. Please take a look at Python dict docs. Python documentation is one of it's strong points! And play with the interpreter (python)on the command line! But let's continue. initially key 0 d[0] = ('name', datetime.now()) id known d1 = d[0] del d[0] name changed tmp = d1 d1 = ( newname, tmp1 ) And please consider using a class Person(object): personIdCounter = 1 def __init__(self): self.id = Person.personIdCounter Person.personIdCounter += 1 self.name self.date then persons = {} person = Person() persons[person.id] = person person.name = "something" persons[1].name = "something else" That looks better than a tuple and models your data better.
Tuples in Dicts
Is it possible in Python to add a tuple as a value in a dictionary? And if it is, how can we add a new value, then? And how can we remove and change it?
[ ">>> a = {'tuple': (23, 32)}\n>>> a\n{'tuple': (23, 32)}\n>>> a['tuple'] = (42, 24)\n>>> a\n{'tuple': (42, 24)}\n>>> del a['tuple']\n>>> a\n{}\n\nif you meant to use tuples as keys you could do:\n>>> b = {(23, 32): 'tuple as key'}\n>>> b\n{(23, 32): 'tuple as key'}\n>>> b[23, 32] = 42\n>>> b\n{(23, 32): 42}\n\nGenerally speaking there is nothing specific about tuples being in dictionary, they keep behaving as tuples.\n", "Since tuples are immutable, you cannot add a value to the tuple. What you can do, is construct a new tuple from the current tuple and an extra value. The += operator does this for you, provided the left argument is a variable (or in this case a dictionary value):\n>>> t = {'k': (1, 2)}\n>>> t['k'] += (3,)\n>>> t\n{'k': (1, 2, 3)}\n\nRegardless, if you plan on altering the tuple value, perhaps it's better to store lists? Those are mutable.\nEdit: Since you updated your question†, observe the following:\n>>> d = {42: ('name', 'date')}\n>>> d[42][0] = 'name2'\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'tuple' object does not support item assignment\n\nThis happens because, as stated before, tuples are immutable. You cannot change them. If you want to change them, then in fact you'll have to create a new one. Thus:\n>>> d[42] = ('name2', d[42][2])\n>>> d\n{42: ('name2', 'date')}\n\nAs a side note, you may want to use namedtuples. They work just like regular tuples, but allow you to refer to elements within the tuple by name:\n>>> from collections import namedtuple\n>>> Person = namedtuple('Person', 'name date')\n>>> t = {42: Person('name', 'date')}\n>>> t[42] = Person('name2', t[42].date)\n>>> t\n{42: Person(name='name2', date='date')}\n\n\n  †: Next time please edit your actual question. Do not post an answer containing only further questions. This is not a forum.\n", "You can't change a tuple itself. You have to replace it by a different tuple.\nWhen you use a list, you could also add values to it (changing the list itself) without need to replace it:\n>> a = {'list': (23, 32)}\n>> a\n{'list': [23, 32]}\n>> a['list'].append(99)\n>> a\n{'list': [23, 32, 99]}\n\nIn most cases, lists can be used as replacement for tuples (since as much I know they support all tuple functions -- this is duck typing, man!)\n", "t1=('name','date')\nt2=('x','y')\n\n# \"Number\" is a String key!\nd1={\"Number\":t1}\n# Update the value of \"Number\"\nd1[\"Number\"] = t2\n\n# Use a tuple as key, and another tuple as value\nd1[t1] = t2\n\n# Obtain values (getters)\n\n# Can throw a KeyError if \"Number\" not a key\nfoo = d1[\"Number\"]\n\n# Does not throw a key error, t1 is the value if \"Number\" is not in the dict\nd1.get(\"Number\", t1)\n\n# t3 now is the same as t1\nt3 = d1[ ('name', 'date') ]\n\nYou updated your question again. Please take a look at Python dict docs. Python documentation is one of it's strong points! And play with the interpreter (python)on the command line! But let's continue.\n\ninitially key 0\nd[0] = ('name', datetime.now())\nid known\nd1 = d[0]\ndel d[0]\nname changed\ntmp = d1\nd1 = ( newname, tmp1 )\n\nAnd please consider using a \nclass Person(object):\n personIdCounter = 1\n def __init__(self):\n self.id = Person.personIdCounter\n Person.personIdCounter += 1\n self.name\n self.date\n\nthen \npersons = {}\nperson = Person()\npersons[person.id] = person\nperson.name = \"something\"\n\npersons[1].name = \"something else\"\n\nThat looks better than a tuple and models your data better.\n" ]
[ 29, 7, 2, 1 ]
[]
[]
[ "dictionary", "python", "tuples" ]
stackoverflow_0001784973_dictionary_python_tuples.txt
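A minimal sketch pulling the answers above together: replacing, extending, and deleting a tuple value, plus the namedtuple variant. The dictionary keys and field names are invented for illustration only.

from collections import namedtuple

d = {'point': (1, 2)}
d['point'] = (3, 4)              # tuples are immutable, so bind a new one
d['point'] += (5,)               # builds (3, 4, 5) and rebinds the key
del d['point']                   # removing the entry works as for any value

Person = namedtuple('Person', 'name date')
people = {42: Person('name', 'date')}
people[42] = people[42]._replace(name='name2')   # new tuple with one field changed
print(people[42])                # Person(name='name2', date='date')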
Q: Managing *args variance in calls to functions Have a method with the following signature: def foo(self, bar, *uks): return other_method(..., uks) Normally this is called as: instance.foo(1234, a, b, c, d) However in some cases I need to do something like this: p = [a, b, c, d] instance.foo(1234, p) At the receiving end this does not work, because other_method sees *args being made up of a single list object instead of simply a [a, b, c, d] list construct. If I type the method as: def foo(self, bar, uks = []): return other_method(..., uks) It works, but then I'm forced to do this every time: instance.foo(1234, [a, b, c, d]) It's not a huge deal I guess, but I just want to know if I'm missing some more pythonic way of doing this? Thanks! A: Python supports unpacking of argument lists to handle exactly this situation. The two following calls are equivalent: Regular call: instance.foo(1234, a, b, c, d) Argument list expansion: p = [a, b, c, d] instance.foo(1234, *p) A: p = [a, b, c, d] instance.foo(1234, *p) The *p form is the crucial part here -- it means "expand sequence p into separate positional arguments". A: I think the answers you have here are correct. Here's a fully fleshed out example: class MyObject(object): def foo(self, bar, *uks): return self.other_method(1, uks) def other_method(self, x, uks): print "uks is %r" % (uks,) # sample data... a, b, c, d = 'a', 'b', 'c', 'd' instance = MyObject() print "Called as separate arguments:" instance.foo(1234, a, b, c, d) print "Called as a list:" p = [a, b, c, d] instance.foo(1234, *p) When run, this prints: Called as separate arguments: uks is ('a', 'b', 'c', 'd') Called as a list: uks is ('a', 'b', 'c', 'd') You said on Alex's answer that you got ([a, b, c, d],), but I don't see how.
Managing *args variance in calls to functions
I have a method with the following signature: def foo(self, bar, *uks): return other_method(..., uks) Normally this is called as: instance.foo(1234, a, b, c, d) However, in some cases I need to do something like this: p = [a, b, c, d] instance.foo(1234, p) At the receiving end this does not work, because other_method sees *args being made up of a single list object instead of simply an [a, b, c, d] list construct. If I type the method as: def foo(self, bar, uks = []): return other_method(..., uks) it works, but then I'm forced to do this every time: instance.foo(1234, [a, b, c, d]) It's not a huge deal, I guess, but I just want to know if I'm missing some more Pythonic way of doing this. Thanks!
[ "Python supports unpacking of argument lists to handle exactly this situation. The two following calls are equivalent:\nRegular call:\ninstance.foo(1234, a, b, c, d)\n\nArgument list expansion:\np = [a, b, c, d]\ninstance.foo(1234, *p)\n\n", "p = [a, b, c, d]\ninstance.foo(1234, *p)\n\nThe *p form is the crucial part here -- it means \"expand sequence p into separate positional arguments\".\n", "I think the answers you have here are correct. Here's a fully fleshed out example:\nclass MyObject(object):\n def foo(self, bar, *uks):\n return self.other_method(1, uks)\n\n def other_method(self, x, uks):\n print \"uks is %r\" % (uks,)\n\n# sample data...\na, b, c, d = 'a', 'b', 'c', 'd'\n\ninstance = MyObject()\n\nprint \"Called as separate arguments:\"\ninstance.foo(1234, a, b, c, d)\n\nprint \"Called as a list:\"\np = [a, b, c, d]\ninstance.foo(1234, *p)\n\nWhen run, this prints:\nCalled as separate arguments:\nuks is ('a', 'b', 'c', 'd')\nCalled as a list:\nuks is ('a', 'b', 'c', 'd')\n\nYou said on Alex's answer that you got ([a, b, c, d],), but I don't see how.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001784971_python.txt
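To make the suggestion above concrete, here is a small self-contained sketch (the class and variable names are illustrative only) showing that the two call styles reach other_method identically:

class Thing(object):
    def foo(self, bar, *uks):
        return self.other_method(bar, uks)

    def other_method(self, bar, uks):
        return list(uks)

t = Thing()
a, b, c, d = 'a', 'b', 'c', 'd'

print(t.foo(1234, a, b, c, d))   # ['a', 'b', 'c', 'd']

p = [a, b, c, d]
print(t.foo(1234, *p))           # ['a', 'b', 'c', 'd'], since *p unpacks the list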
Q: Function not being called in Python, why? and how can I solve it? I am currently working on python/django site, at the moment I have a template that looks like this {% extends "shopbase.html" %} {% block pageid %}products{% endblock %} {% block right-content %} <img src="{{MEDIA_URL}}/local/images/assets/products.png" alt="Neal and Wolf News" class="position"/> <div class="products"> <form method="post" action="{% url category category.slug %}"> {% for product in category.products.all %} <div class="{% cycle 'clear' '' '' %}"> <img src="{{MEDIA_URL}}{{download.mini.thumbnail}}" alt="{{product.name}}" class="thumbnail"/> <h3><a href="{% url shop.views.product category.slug product.slug %}">{{ product.product_type_name }}</a></h3> <p class="strap">{{ product.product_sub_name }}</p> <p>{{ product.strap }}</p> <ul class="clear"> <li class="price"><b>&pound;{{product.price}}</b></li> <li class="quantity"> <select name="quantity_{{product.id}}"> <option label="1" value="1">1</option> <option label="2" value="2">2</option> <option label="3" value="3">3</option> <option label="4" value="4">4</option> <option label="5" value="5">5</option> <option label="6" value="6">6</option> <option label="7" value="7">7</option> <option label="8" value="8">8</option> <option label="9" value="9">9</option> </select> </li> <li><b><a href="details">Details &gt;</a></b></li> <li><input type="submit" name="add_to_basket_{{product.id}}" value="Add to Basket &gt;"/></a></li> </ul> </div> {% endfor %} </form> </div> {% endblock %} and in the model I have a class that looks like this def get_thumbnail(self, dimension, on=RES_X, use=USE_BOTH): """ Generate a thumbnail image for this Store Object. The thumbnail will be of size 'dimension' on the axis specified by the on parameter (which is deduced from calling x_or_y). If the use parameter is set to USE_BOTH then the thumbnail will take priority in the generation order. In other words, the function checks for the existance of an uploaded thumbnail. If one exists, it will use that, otherwise the image field is used instead. Specifying either USE_IMAGE or USE_THUMBNAIL here will force the generation to a particular asset. The image is saved out as a jpeg with the quality set to 90. The path relative to the MEDIA_ROOT is then returned. 
""" from PIL import Image import os if self.thumbnail and use != StoreObject.USE_IMAGE: source_path = str(self.thumbnail) elif self.image and use != StoreObject.USE_THUMBNAIL: source_path = str(self.image) else: return "" try: against = StoreObject.x_or_y(on) except KeyError: return "" target_path = os.path.join(StoreObject.GENERATED_THUMB_LOCATION, "%s-%d%s.png" % (self.slug, dimension, against)) savepath = os.path.join(settings.MEDIA_ROOT, target_path) loadpath = os.path.join(settings.MEDIA_ROOT, source_path) if os.path.exists(savepath) and os.path.getmtime(savepath) > os.path.getmtime(loadpath): return target_path try: img = Image.open(loadpath) except IOError: return "" aspect = float(img.size[0]) / float(img.size[1]) if against == StoreObject.RES_X: height = dimension / aspect width = dimension elif against == StoreObject.RES_Y: width = dimension * aspect height = dimension img = img.resize((width,height), Image.ANTIALIAS) img.convert('RGBA').save(savepath, "PNG") return target_path def mini_thumbnail(self): """ Generate and return the path to, a thumbnail that is 50px high """ return self.get_thumbnail(50, "x") def preview_image(self): """ Generate and return the path to, a thumbnail that is 300px wide, build from the image field """ return self.get_thumbnail(300, use=StoreObject.USE_IMAGE) def preview_thumbnail(self): """ Generate and return the path to, a thumbnail that is 300px wide, build from the thumbnail field """ return self.get_thumbnail(300, use=StoreObject.USE_THUMBNAIL) Now as you can see I am trying to pull out of the database some information on a certain item, and show an image of that item in a thumbnail which generated from the thumbnail function above (in the template I using product.mini_thumbnail however there seems to be know image? however if I print product.image with my MEDIA_URL then I get the fullsized image. Can anyone suggest what might be happening? A: As with many things in Django... someone has already solved this problem and come up with a clean, flexible solution for generating/storing thumbnails of different sizes etc. I'd strongly suggest looking at the sorl-thumbnail project. Docs here: http://thumbnail.sorl.net/docs/ Downloads here: http://code.google.com/p/sorl-thumbnail/downloads/list A: Your template has {{download.mini.thumbnail}} instead of {{download.mini_thumbnail}}
Function not being called in Python, why? and how can I solve it?
I am currently working on python/django site, at the moment I have a template that looks like this {% extends "shopbase.html" %} {% block pageid %}products{% endblock %} {% block right-content %} <img src="{{MEDIA_URL}}/local/images/assets/products.png" alt="Neal and Wolf News" class="position"/> <div class="products"> <form method="post" action="{% url category category.slug %}"> {% for product in category.products.all %} <div class="{% cycle 'clear' '' '' %}"> <img src="{{MEDIA_URL}}{{download.mini.thumbnail}}" alt="{{product.name}}" class="thumbnail"/> <h3><a href="{% url shop.views.product category.slug product.slug %}">{{ product.product_type_name }}</a></h3> <p class="strap">{{ product.product_sub_name }}</p> <p>{{ product.strap }}</p> <ul class="clear"> <li class="price"><b>&pound;{{product.price}}</b></li> <li class="quantity"> <select name="quantity_{{product.id}}"> <option label="1" value="1">1</option> <option label="2" value="2">2</option> <option label="3" value="3">3</option> <option label="4" value="4">4</option> <option label="5" value="5">5</option> <option label="6" value="6">6</option> <option label="7" value="7">7</option> <option label="8" value="8">8</option> <option label="9" value="9">9</option> </select> </li> <li><b><a href="details">Details &gt;</a></b></li> <li><input type="submit" name="add_to_basket_{{product.id}}" value="Add to Basket &gt;"/></a></li> </ul> </div> {% endfor %} </form> </div> {% endblock %} and in the model I have a class that looks like this def get_thumbnail(self, dimension, on=RES_X, use=USE_BOTH): """ Generate a thumbnail image for this Store Object. The thumbnail will be of size 'dimension' on the axis specified by the on parameter (which is deduced from calling x_or_y). If the use parameter is set to USE_BOTH then the thumbnail will take priority in the generation order. In other words, the function checks for the existance of an uploaded thumbnail. If one exists, it will use that, otherwise the image field is used instead. Specifying either USE_IMAGE or USE_THUMBNAIL here will force the generation to a particular asset. The image is saved out as a jpeg with the quality set to 90. The path relative to the MEDIA_ROOT is then returned. 
""" from PIL import Image import os if self.thumbnail and use != StoreObject.USE_IMAGE: source_path = str(self.thumbnail) elif self.image and use != StoreObject.USE_THUMBNAIL: source_path = str(self.image) else: return "" try: against = StoreObject.x_or_y(on) except KeyError: return "" target_path = os.path.join(StoreObject.GENERATED_THUMB_LOCATION, "%s-%d%s.png" % (self.slug, dimension, against)) savepath = os.path.join(settings.MEDIA_ROOT, target_path) loadpath = os.path.join(settings.MEDIA_ROOT, source_path) if os.path.exists(savepath) and os.path.getmtime(savepath) > os.path.getmtime(loadpath): return target_path try: img = Image.open(loadpath) except IOError: return "" aspect = float(img.size[0]) / float(img.size[1]) if against == StoreObject.RES_X: height = dimension / aspect width = dimension elif against == StoreObject.RES_Y: width = dimension * aspect height = dimension img = img.resize((width,height), Image.ANTIALIAS) img.convert('RGBA').save(savepath, "PNG") return target_path def mini_thumbnail(self): """ Generate and return the path to, a thumbnail that is 50px high """ return self.get_thumbnail(50, "x") def preview_image(self): """ Generate and return the path to, a thumbnail that is 300px wide, build from the image field """ return self.get_thumbnail(300, use=StoreObject.USE_IMAGE) def preview_thumbnail(self): """ Generate and return the path to, a thumbnail that is 300px wide, build from the thumbnail field """ return self.get_thumbnail(300, use=StoreObject.USE_THUMBNAIL) Now as you can see I am trying to pull out of the database some information on a certain item, and show an image of that item in a thumbnail which generated from the thumbnail function above (in the template I using product.mini_thumbnail however there seems to be know image? however if I print product.image with my MEDIA_URL then I get the fullsized image. Can anyone suggest what might be happening?
[ "As with many things in Django... someone has already solved this problem and come up with a clean, flexible solution for generating/storing thumbnails of different sizes etc. I'd strongly suggest looking at the sorl-thumbnail project.\nDocs here: http://thumbnail.sorl.net/docs/\nDownloads here: http://code.google.com/p/sorl-thumbnail/downloads/list\n", "Your template has {{download.mini.thumbnail}} instead of {{download.mini_thumbnail}}\n" ]
[ 5, 2 ]
[]
[]
[ "django", "python", "python_imaging_library" ]
stackoverflow_0001784197_django_python_python_imaging_library.txt
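The second answer above pins the bug on the template variable: the template reads download.mini.thumbnail where it should read product.mini_thumbnail. As a rough, framework-free sketch of why the corrected name works (Django's template variable resolution calls zero-argument methods for you), consider the following; the FakeProduct class, the path it returns, and the resolve helper are all invented for illustration and only loosely mimic the real engine:

# corrected template usage (assuming product comes from the view context):
#   <img src="{{ MEDIA_URL }}{{ product.mini_thumbnail }}" alt="{{ product.name }}" class="thumbnail"/>

class FakeProduct(object):
    def mini_thumbnail(self):
        return 'generated_thumbs/widget-50x.png'

def resolve(obj, dotted):
    # very loose imitation of how the template engine walks "a.b.c"
    value = obj
    for part in dotted.split('.'):
        value = getattr(value, part)
        if callable(value):
            value = value()      # zero-argument methods are called automatically
    return value

print(resolve(FakeProduct(), 'mini_thumbnail'))   # generated_thumbs/widget-50x.png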
Q: Filtering mousePressEvent with installEventFilter I am having problem filtering the "mousePressEvent" with installEventFilter MyTestxEdit is a widget that holds QTextEdit I want that all the events of QTextEdit will be handle by MyTestxEdit I have used the installEventFilter This Trick works well for events like keyPressEvent but doesn't handle the mousePressEvent what am i doing wrong? import sys from PyQt4.QtGui import QApplication, QErrorMessage from KdeQt.KQApplication import KQApplication from KdeQt.KQMainWindow import KQMainWindow from PyQt4.QtCore import * from PyQt4.QtGui import * import thread class MyTestxEdit1(QTextEdit): def __init__(self,parent): QTextEdit.__init__(self) self.setMouseTracking(True) class MyTestxEdit(QWidget): def __init__(self): QWidget.__init__(self) self.__qTextEdit=MyTestxEdit1(self) self.__qHBoxLayout=QHBoxLayout() self.setLayout(self.__qHBoxLayout) self.__qHBoxLayout.addWidget(self.__qTextEdit) self.__qTextEdit.installEventFilter(self) def eventFilter(self,target,event): print "eventFilter "+str(event.type()) if(event.type()==QEvent.MouseButtonPress): print "Mouse was presssed "+str(event.type()) self.mousePressEvent(event) return True return False if __name__ == '__main__': app = KQApplication(sys.argv,[]) mainWindow = KQMainWindow()#loc, splash, pluginFile, noopen, restartArgs) s = QSize(800, 600) mainWindow.resize(s) testxEdit=MyTestxEdit() mainWindow.setCentralWidget(testxEdit) mainWindow.show() res = app.exec_() sys.exit(res) A: Try to install the filter on the QTextEdit's viewport instead of the QTextEdit itself... I don't know python but something like: self.__qTextEdit.viewport().installEventFilter(self) I hope it helps! You should do something like: MyClassFrm::MyClassFrm() { ... // Get your TextEdit from the UI here , or create your TextEdit here.... // Install the filter pMyTextEdit->viewport()->installEventFilter(this); ... } ... bool MyClassFrm::eventFilter(QObject* pObject, QEvent* pEvent) { if (pEvent->type() == QEvent::MousePressEvent) { qDebug() << "Mouse pressed !!"; // standard event processing return QObject::eventFilter(pObject, pEvent); } } You should be able to make it work, I just tested in a my application, it works... I'm sure you're close!
Filtering mousePressEvent with installEventFilter
I am having a problem filtering the "mousePressEvent" with installEventFilter. MyTestxEdit is a widget that holds a QTextEdit, and I want all the events of the QTextEdit to be handled by MyTestxEdit, so I have used installEventFilter. This trick works well for events like keyPressEvent but doesn't handle the mousePressEvent. What am I doing wrong? import sys from PyQt4.QtGui import QApplication, QErrorMessage from KdeQt.KQApplication import KQApplication from KdeQt.KQMainWindow import KQMainWindow from PyQt4.QtCore import * from PyQt4.QtGui import * import thread class MyTestxEdit1(QTextEdit): def __init__(self,parent): QTextEdit.__init__(self) self.setMouseTracking(True) class MyTestxEdit(QWidget): def __init__(self): QWidget.__init__(self) self.__qTextEdit=MyTestxEdit1(self) self.__qHBoxLayout=QHBoxLayout() self.setLayout(self.__qHBoxLayout) self.__qHBoxLayout.addWidget(self.__qTextEdit) self.__qTextEdit.installEventFilter(self) def eventFilter(self,target,event): print "eventFilter "+str(event.type()) if(event.type()==QEvent.MouseButtonPress): print "Mouse was presssed "+str(event.type()) self.mousePressEvent(event) return True return False if __name__ == '__main__': app = KQApplication(sys.argv,[]) mainWindow = KQMainWindow()#loc, splash, pluginFile, noopen, restartArgs) s = QSize(800, 600) mainWindow.resize(s) testxEdit=MyTestxEdit() mainWindow.setCentralWidget(testxEdit) mainWindow.show() res = app.exec_() sys.exit(res)
[ "Try to install the filter on the QTextEdit's viewport instead of the QTextEdit itself...\nI don't know python but something like:\nself.__qTextEdit.viewport().installEventFilter(self)\n\nI hope it helps!\nYou should do something like:\nMyClassFrm::MyClassFrm()\n{\n ...\n // Get your TextEdit from the UI here , or create your TextEdit here....\n // Install the filter\n pMyTextEdit->viewport()->installEventFilter(this);\n ...\n}\n\n...\n\nbool MyClassFrm::eventFilter(QObject* pObject, QEvent* pEvent)\n{\n if (pEvent->type() == QEvent::MousePressEvent) \n {\n qDebug() << \"Mouse pressed !!\";\n // standard event processing\n return QObject::eventFilter(pObject, pEvent);\n }\n}\n\nYou should be able to make it work, I just tested in a my application, it works... I'm sure you're close!\n" ]
[ 6 ]
[]
[]
[ "python", "qt" ]
stackoverflow_0001785251_python_qt.txt
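Here is a trimmed-down PyQt4 sketch of the fix suggested above: installing the filter on the QTextEdit's viewport() rather than on the widget itself. The KdeQt wrappers from the question are dropped and plain Qt classes are used instead, so treat this as an approximation of the original setup rather than a drop-in replacement.

import sys
from PyQt4.QtCore import QEvent
from PyQt4.QtGui import QApplication, QWidget, QTextEdit, QHBoxLayout

class MyTextEdit(QWidget):
    def __init__(self):
        QWidget.__init__(self)
        self._edit = QTextEdit(self)
        layout = QHBoxLayout(self)
        layout.addWidget(self._edit)
        # mouse presses are delivered to the viewport, not to the QTextEdit itself
        self._edit.viewport().installEventFilter(self)

    def eventFilter(self, target, event):
        if event.type() == QEvent.MouseButtonPress:
            print("mouse pressed inside the text edit")
        return QWidget.eventFilter(self, target, event)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    w = MyTextEdit()
    w.show()
    sys.exit(app.exec_())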
Q: Piping Cygwin into a Python program As a i'm new to the whole Piping thing and Python I had recently encountered a problem trying to pipe Cygwin's stdin & stdout into a python program usin Python's subprocess moudle. for example I took a simple program: cygwin = subprocess.Popen('PathToCygwin',shell=False,stdin=subprocess.PIPE,stdout=subprocess.PIPE) cygwin.stdin.write('ssh') After that I'm getting this error: cygwin.stdin.write('ssh') IOError: [Errno 22] Invalid argument What am I doing wrong? A: Well, your literal code won't work. You are passing a string with the value 'PathToCygwin' and that isn't going to do anything. I assume that you are passing a better string than that, but you didn't show us what. I think your PathToCygwin is probably the problem. If you don't get the path right, it won't work. Here is my test code. I ran this under the Cygwin version of Python, so I used a Cygwin-style path: instead of r"C:\cygwin\bin\bash.exe" I used a /cygdrive/c path: >>> cpath = "/cygdrive/c/cygwin/bin/bash.exe" >>> cygwin = subprocess.Popen(cpath,shell=False,stdin=subprocess.PIPE,stdout=su bprocess.PIPE) >>> cygwin.stdin.write("ssh") >>> Again, I ran this in the Cygwin-compiled version of Python. If you are using the Windows native version of Python, you will probably need to use a C: path. If you still are having trouble, would you please tell us exactly which version of Python you are using, and show us the actual path code you are using?
Piping Cygwin into a Python program
As I'm new to the whole piping thing and Python, I recently encountered a problem trying to pipe Cygwin's stdin & stdout into a Python program using Python's subprocess module. For example, I took a simple program: cygwin = subprocess.Popen('PathToCygwin',shell=False,stdin=subprocess.PIPE,stdout=subprocess.PIPE) cygwin.stdin.write('ssh') After that I'm getting this error: cygwin.stdin.write('ssh') IOError: [Errno 22] Invalid argument What am I doing wrong?
[ "Well, your literal code won't work. You are passing a string with the value 'PathToCygwin' and that isn't going to do anything. I assume that you are passing a better string than that, but you didn't show us what.\nI think your PathToCygwin is probably the problem. If you don't get the path right, it won't work.\nHere is my test code. I ran this under the Cygwin version of Python, so I used a Cygwin-style path: instead of r\"C:\\cygwin\\bin\\bash.exe\" I used a /cygdrive/c path:\n>>> cpath = \"/cygdrive/c/cygwin/bin/bash.exe\"\n>>> cygwin = subprocess.Popen(cpath,shell=False,stdin=subprocess.PIPE,stdout=su\nbprocess.PIPE)\n>>> cygwin.stdin.write(\"ssh\")\n>>>\n\nAgain, I ran this in the Cygwin-compiled version of Python. If you are using the Windows native version of Python, you will probably need to use a C: path.\nIf you still are having trouble, would you please tell us exactly which version of Python you are using, and show us the actual path code you are using?\n" ]
[ 2 ]
[]
[]
[ "cygwin", "pipe", "python", "subprocess" ]
stackoverflow_0001785265_cygwin_pipe_python_subprocess.txt
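Building on the answer above, here is a minimal Python 2 sketch. The bash path is an assumption about a default Cygwin install (adjust it to yours), and communicate() is used so the pipe is written and closed in one step, since bash reads whole, newline-terminated lines from stdin.

import subprocess

bash_path = r"C:\cygwin\bin\bash.exe"    # hypothetical location, adjust to your install
proc = subprocess.Popen([bash_path, "--login"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate("ssh -V\n")  # note the trailing newline
print(out)
print(err)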
Q: Convert \r text to \n so readlines() works as intended In Python, you can read a file and load its lines into a list by using f = open('file.txt','r') lines = f.readlines() Each individual line is delimited by \n but if the contents of a line have \r then it is not treated as a new line. I need to convert all \r to \n and get the correct list lines. If I do .split('\r') inside the lines I'll get lists inside the list. I thought about opening a file, replace all \r to \n, closing the file and reading it in again and then use the readlines() but this seems wasteful. How should I implement this? A: f = open('file.txt','rU') This opens the file with Python's universal newline support and \r is treated as an end-of-line. A: If it's a concern, open in binary format and convert with this code: from __future__ import with_statement with open(filename, "rb") as f: s = f.read().replace('\r\n', '\n').replace('\r', '\n') lines = s.split('\n')
Convert \r text to \n so readlines() works as intended
In Python, you can read a file and load its lines into a list by using f = open('file.txt','r') lines = f.readlines() Each individual line is delimited by \n but if the contents of a line have \r then it is not treated as a new line. I need to convert all \r to \n and get the correct list lines. If I do .split('\r') inside the lines I'll get lists inside the list. I thought about opening a file, replace all \r to \n, closing the file and reading it in again and then use the readlines() but this seems wasteful. How should I implement this?
[ "f = open('file.txt','rU')\n\nThis opens the file with Python's universal newline support and \\r is treated as an end-of-line.\n", "If it's a concern, open in binary format and convert with this code:\nfrom __future__ import with_statement\n\nwith open(filename, \"rb\") as f:\n s = f.read().replace('\\r\\n', '\\n').replace('\\r', '\\n')\n lines = s.split('\\n')\n\n" ]
[ 43, 4 ]
[]
[]
[ "python", "readline" ]
stackoverflow_0001785233_python_readline.txt
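Both answers above can be condensed into a short sketch (Python 2 string handling, to match the question); splitlines() is a third option that treats \r, \n and \r\n alike:

# universal-newline mode: the file object itself translates \r for you
f = open('file.txt', 'rU')
lines = f.readlines()
f.close()

# or read raw bytes and normalise by hand
f = open('file.txt', 'rb')
data = f.read()
f.close()
lines = data.replace('\r\n', '\n').replace('\r', '\n').split('\n')

# or let splitlines() handle every line-ending style in one call
lines = data.splitlines()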
Q: Python 3 - pull down a file object from a web server over a proxy (no-auth) I have a very simple problem and I am absolutely amazed that I haven't seen anything on this specifically. I am attempting to follow best practices for copying a file that is hosted on a webserver going through a proxy server (which does not require auth) using python3. i have done similar things using python 2.5 but I am really coming up short here. I am trying to make this into a function that i can reuse for future scripts on this network. any assistance that can be provided would be greatly appreciated. I have the feeling that my issue lies within attempting to use urllib.request or http.client without any clear doc on how to incorporate the use of a proxy (without auth). I've been looking here and pulling out my hair... http://docs.python.org/3.1/library/urllib.request.html#urllib.request.ProxyHandler http://docs.python.org/3.1/library/http.client.html http://diveintopython3.org/http-web-services.html even this stackoverflow article: Proxy with urllib2 but in python3 urllib2 is deprecated... A: here is an function to retrieve a file through an http proxy: import urllib.request def retrieve( url, filename ): proxy = urllib.request.ProxyHandler( {'http': '127.0.0.1'} ) opener = urllib.request.build_opener( proxy ) remote = opener.open( url ) local = open( filename, 'wb' ) data = remote.read(100) while data: local.write(data) data = remote.read(100) local.close() remote.close() (error handling is left as an exercise to the reader...) you can eventually save the opener object for later use, in case you need to retrieve multiple files. the content is written as-is into the file, but it may need to be decoded if a fancy encoding has been used.
Python 3 - pull down a file object from a web server over a proxy (no-auth)
I have a very simple problem and I am absolutely amazed that I haven't seen anything on this specifically. I am attempting to follow best practices for copying a file that is hosted on a webserver, going through a proxy server (which does not require auth), using Python 3. I have done similar things using Python 2.5 but I am really coming up short here. I am trying to make this into a function that I can reuse for future scripts on this network. Any assistance that can be provided would be greatly appreciated. I have the feeling that my issue lies in attempting to use urllib.request or http.client without any clear doc on how to incorporate the use of a proxy (without auth). I've been looking here and pulling out my hair... http://docs.python.org/3.1/library/urllib.request.html#urllib.request.ProxyHandler http://docs.python.org/3.1/library/http.client.html http://diveintopython3.org/http-web-services.html even this Stack Overflow article: Proxy with urllib2 but in Python 3 urllib2 is deprecated...
[ "here is an function to retrieve a file through an http proxy:\nimport urllib.request\n\ndef retrieve( url, filename ):\n proxy = urllib.request.ProxyHandler( {'http': '127.0.0.1'} )\n opener = urllib.request.build_opener( proxy )\n remote = opener.open( url )\n local = open( filename, 'wb' )\n data = remote.read(100)\n while data:\n local.write(data)\n data = remote.read(100)\n local.close()\n remote.close()\n\n(error handling is left as an exercise to the reader...)\nyou can eventually save the opener object for later use, in case you need to retrieve multiple files. the content is written as-is into the file, but it may need to be decoded if a fancy encoding has been used.\n" ]
[ 1 ]
[]
[]
[ "http", "proxy", "python", "tunnel" ]
stackoverflow_0001784483_http_proxy_python_tunnel.txt
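A slightly more compact Python 3 variant of the answer above; the proxy address is a placeholder for whatever your network uses, and shutil.copyfileobj replaces the manual read/write loop:

import shutil
import urllib.request

def retrieve_via_proxy(url, filename, proxy='http://proxy.example.com:8080'):
    # proxy address above is an assumption, substitute your own (no auth needed)
    handler = urllib.request.ProxyHandler({'http': proxy})
    opener = urllib.request.build_opener(handler)
    remote = opener.open(url)
    try:
        with open(filename, 'wb') as local:
            shutil.copyfileobj(remote, local)
    finally:
        remote.close()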
Q: Where in a virtualenv does the custom code go? What sort of directory structure should one follow when using virtualenv? For instance, if I were building a WSGI application and created a virtualenv called foobar I would start with a directory structure like: /foobar /bin {activate, activate.py, easy_install, python} /include {python2.6/...} /lib {python2.6/...} Once this environment is created, where would one place their own: python files? static files (images/etc)? "custom" packages, such as those available online but not found in the cheese-shop? in relation to the virtualenv directories? (Assume I already know where the virtualenv directories themselves should go.) A: virtualenv provides a python interpreter instance, not an application instance. You wouldn't normally create your application files within the directories containing a system's default Python, likewise there's no requirement to locate your application within a virtualenv directory. For example, you might have a project where you have multiple applications using the same virtualenv. Or, you may be testing an application with a virtualenv that will later be deployed with a system Python. Or, you may be packaging up a standalone app where it might make sense to have the virtualenv directory located somewhere within the app directory itself. So, in general, I don't think there is one right answer to the question. And, a good thing about virtualenv is that it supports many different use cases: there doesn't need to be one right way. A: If you only have a few projects every so often, nothing stops you from creating a new virtualenv for each one, and putting your packages right inside: /foobar /bin {activate, activate.py, easy_install, python} /include {python2.6/...} /lib {python2.6/...} /mypackage1 __init__.py /mypackage2 __init__.py The advantage of this approach is that you can always be sure to find find the activate script that belongs to the project inside. $ cd /foobar $ source bin/activate $ python >>> import mypackage1 >>> If you decide to be a bit more organized, you should consider putting all your virtualenvs into one folder, and name each of them after the project you are working on. /virtualenvs /foobar /bin {activate, activate.py, easy_install, python} /include {python2.6/...} /lib {python2.6/...} /foobar /mypackage1 __init__.py /mypackage2 __init__.py This way you can always start over with a new virtualenv when things go wrong, and your project files stay safe. Another advantage is that several of your projects can use the same virtualenv, so you don't have to do the same installation over and over if you have a lot of dependencies. $ cd /foobar $ source ../virtualenvs/foobar/bin/activate $ python >>> import mypackage2 >>> For users that regularly have to set up and tear down virtualenvs it would make sense to look at virtualenvwrapper. http://pypi.python.org/pypi/virtualenvwrapper With virtualenvwrapper you can * create and delete virtual environments * organize virtual environments in a central place * easily switch between environments You no more have to worry about where your virtualenvs are when working on the projects "foo" and "bar": /foo /mypackage1 __init__.py /bar /mypackage2 __init__.py This is how you start working on project "foo": $ cd foo $ workon bar foo $ workon foo (foo)$ python >>> import mypackage1 >>> Then switching to project "bar" is as simple as this: $ cd ../bar $ workon bar (bar)$ python >>> import mypackage2 >>> Pretty neat, isn't it? 
A: Because virtualenvs are not relocatable, in my opinion it is bad practice to place your project files inside a virtualenv directory. The virtualenv itself is a generated development/deployment artifact (sort of like a .pyc file), not part of the project; it should be easy to blow it away and recreate it anytime, or create a new one on a new deploy host, etc. Many people in fact use virtualenvwrapper, which removes the actual virtualenvs from your awareness almost completely, placing them all side-by-side in $HOME/.virtualenvs by default. A: If you give your project a setup.py, pip can import it from version control directly. Do something like this: $ virtualenv --no-site-packages myproject $ . myproject/bin/activate $ easy_install pip $ pip install -e hg+http://bitbucket.org/owner/myproject#egg=proj The -e will put the project in myproject/src, but link it to myproject/lib/pythonX.X/site-packages/, so any changes you make will get picked up immediately in modules that import it from your local site-packages. The #egg bit tells pip what name you want to give to the egg package it creates for you. If you don't use --no-site-packages, be careful to specify that you want pip to install into the virtualenv with the -E option
Where in a virtualenv does the custom code go?
What sort of directory structure should one follow when using virtualenv? For instance, if I were building a WSGI application and created a virtualenv called foobar I would start with a directory structure like: /foobar /bin {activate, activate.py, easy_install, python} /include {python2.6/...} /lib {python2.6/...} Once this environment is created, where would one place their own: python files? static files (images/etc)? "custom" packages, such as those available online but not found in the cheese-shop? in relation to the virtualenv directories? (Assume I already know where the virtualenv directories themselves should go.)
[ "virtualenv provides a python interpreter instance, not an application instance. You wouldn't normally create your application files within the directories containing a system's default Python, likewise there's no requirement to locate your application within a virtualenv directory. \nFor example, you might have a project where you have multiple applications using the same virtualenv. Or, you may be testing an application with a virtualenv that will later be deployed with a system Python. Or, you may be packaging up a standalone app where it might make sense to have the virtualenv directory located somewhere within the app directory itself. \nSo, in general, I don't think there is one right answer to the question. And, a good thing about virtualenv is that it supports many different use cases: there doesn't need to be one right way.\n", "If you only have a few projects every so often, nothing stops you from creating a new virtualenv for each one, and putting your packages right inside:\n/foobar\n /bin\n {activate, activate.py, easy_install, python}\n /include\n {python2.6/...}\n /lib\n {python2.6/...}\n /mypackage1\n __init__.py\n /mypackage2\n __init__.py\n\nThe advantage of this approach is that you can always be sure to find find the activate script that belongs to the project inside.\n$ cd /foobar\n$ source bin/activate\n$ python \n>>> import mypackage1\n>>>\n\nIf you decide to be a bit more organized, you should consider putting all your virtualenvs into one folder, and name each of them after the project you are working on.\n /virtualenvs\n /foobar\n /bin\n {activate, activate.py, easy_install, python}\n /include\n {python2.6/...}\n /lib\n {python2.6/...}\n /foobar\n /mypackage1\n __init__.py\n /mypackage2\n __init__.py\n\nThis way you can always start over with a new virtualenv when things go wrong, and your project files stay safe.\nAnother advantage is that several of your projects can use the same virtualenv, so you don't have to do the same installation over and over if you have a lot of dependencies.\n$ cd /foobar\n$ source ../virtualenvs/foobar/bin/activate\n$ python \n>>> import mypackage2\n>>>\n\nFor users that regularly have to set up and tear down virtualenvs it would make sense to look at virtualenvwrapper.\nhttp://pypi.python.org/pypi/virtualenvwrapper\n\nWith virtualenvwrapper you can\n* create and delete virtual environments\n\n* organize virtual environments in a central place\n\n* easily switch between environments\n\nYou no more have to worry about where your virtualenvs are when working on the projects \"foo\" and \"bar\":\n /foo\n /mypackage1\n __init__.py\n /bar\n /mypackage2\n __init__.py\n\nThis is how you start working on project \"foo\":\n$ cd foo\n$ workon\nbar\nfoo\n$ workon foo\n(foo)$ python\n>>> import mypackage1\n>>>\n\nThen switching to project \"bar\" is as simple as this:\n$ cd ../bar\n$ workon bar\n(bar)$ python\n>>> import mypackage2\n>>>\n\nPretty neat, isn't it? \n", "Because virtualenvs are not relocatable, in my opinion it is bad practice to place your project files inside a virtualenv directory. 
The virtualenv itself is a generated development/deployment artifact (sort of like a .pyc file), not part of the project; it should be easy to blow it away and recreate it anytime, or create a new one on a new deploy host, etc.\nMany people in fact use virtualenvwrapper, which removes the actual virtualenvs from your awareness almost completely, placing them all side-by-side in $HOME/.virtualenvs by default.\n", "If you give your project a setup.py, pip can import it from version control directly. \nDo something like this:\n$ virtualenv --no-site-packages myproject\n$ . myproject/bin/activate\n$ easy_install pip\n$ pip install -e hg+http://bitbucket.org/owner/myproject#egg=proj\n\nThe -e will put the project in myproject/src, but link it to myproject/lib/pythonX.X/site-packages/, so any changes you make will get picked up immediately in modules that import it from your local site-packages. The #egg bit tells pip what name you want to give to the egg package it creates for you.\nIf you don't use --no-site-packages, be careful to specify that you want pip to install into the virtualenv with the -E option\n" ]
[ 98, 63, 32, 3 ]
[]
[]
[ "project", "python", "virtualenv" ]
stackoverflow_0001783146_project_python_virtualenv.txt
Q: Nested Lambdas in Python I'm a beginning python programmer, and I'd like someone to clarify the following behavior. I have the following code: env = lambda id: -1 def add(id, val, myenv): return lambda x: val if x == id else myenv(id) test_env = add("a", 1, env) test_env_2 = add("b", 2, test_env) When I look up "a" in test_env, it functions correctly, but when I look it up in test_env_2 it seems to have been clobbered by "b". At least, "b" is all I can retrieve from test_env_2. So, I have already read Scope of python lambda functions and their parameters et al and understand that closures operate on references rather than values, but I believe this is not the exact same case since I am using string literals. Can someone explain to me what is going on under the hood here? (And yes, I know that Python isn't intended as a functional language. This is just research.) A: I think you just confused myenv(id) with myenv(x). Change it and you'll get the desired output.
Nested Lambdas in Python
I'm a beginning python programmer, and I'd like someone to clarify the following behavior. I have the following code: env = lambda id: -1 def add(id, val, myenv): return lambda x: val if x == id else myenv(id) test_env = add("a", 1, env) test_env_2 = add("b", 2, test_env) When I look up "a" in test_env, it functions correctly, but when I look it up in test_env_2 it seems to have been clobbered by "b". At least, "b" is all I can retrieve from test_env_2. So, I have already read Scope of python lambda functions and their parameters et al and understand that closures operate on references rather than values, but I believe this is not the exact same case since I am using string literals. Can someone explain to me what is going on under the hood here? (And yes, I know that Python isn't intended as a functional language. This is just research.)
[ "I think you just confused myenv(id) with myenv(x). Change it and you'll get the desired output.\n" ]
[ 5 ]
[]
[]
[ "closures", "functional_programming", "lambda", "nested", "python" ]
stackoverflow_0001785826_closures_functional_programming_lambda_nested_python.txt
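Spelling out the one-character fix from the answer above: recurse with the lookup key x rather than the captured id.

env = lambda key: -1

def add(id, val, myenv):
    return lambda x: val if x == id else myenv(x)   # was myenv(id)

test_env = add("a", 1, env)
test_env_2 = add("b", 2, test_env)

print(test_env_2("a"))   # 1
print(test_env_2("b"))   # 2
print(test_env_2("c"))   # -1 (falls through to the base environment)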
Q: Efficient way of calling set of functions in Python I have a set of functions: functions=set(...) All the functions need one parameter x. What is the most efficient way in python of doing something similar to: for function in functions: function(x) A: The code you give, for function in functions: function(x) ...does not appear to do anything with the result of calling function(x). If that is indeed so, meaning that these functions are called for their side-effects, then there is no more pythonic alternative. Just leave your code as it is.† The point to take home here, specifically, is                                Avoid functions with side-effects in list-comprehensions. As for efficiency: I expect that using anything else instead of your simple loop will not improve runtime. When in doubt, use timeit. For example, the following tests seem to indicate that a regular for-loop is faster than a list-comprehension. (I would be reluctant to draw any general conclusions from this test, thought): >>> timeit.Timer('[f(20) for f in functions]', 'functions = [lambda n: i * n for i in range(100)]').repeat() [44.727972984313965, 44.752119779586792, 44.577917814254761] >>> timeit.Timer('for f in functions: f(20)', 'functions = [lambda n: i * n for i in range(100)]').repeat() [40.320928812026978, 40.491761207580566, 40.303879022598267] But again, even if these tests would have indicated that list-comprehensions are faster, the point remains that you should not use them when side-effects are involved, for readability's sake.   †: Well, I'd write for f in functions, so that the difference beteen function and functions is more pronounced. But that's not what this question is about. A: If you need the output, a list comprehension would work. [func(x) for func in functions] A: I'm somewhat doubtful of how much of an impact this will have on the total running time of your program, but I guess you could do something like this: [func(x) for func in functions] The downside is that you will create a new list that you immediatly toss away, but it should be slightly faster than just the for-loop. In any case, make sure you profile your code to confirm that this really is a bottleneck that you need to take care of. A: Edit: I redid the test using timeit My new test code: import timeit def func(i): return i; a = b = c = d = e = f = func functions = [a, b, c, d, e, f] timer = timeit.Timer("[f(2) for f in functions]", "from __main__ import functions") print (timer.repeat()) timer = timeit.Timer("map(lambda f: f(2), functions)", "from __main__ import functions") print (timer.repeat()) timer = timeit.Timer("for f in functions: f(2)", "from __main__ import functions") print (timer.repeat()) Here is the results from this timing. testing list comprehension [1.7169530391693115, 1.7683839797973633, 1.7840299606323242] testing map(f, l) [2.5285000801086426, 2.5957231521606445, 2.6551258563995361] testing plain loop [1.1665718555450439, 1.1711149215698242, 1.1652190685272217] My original, time.time() based timings are pretty much inline with this testing, plain for loops seem to be the most efficient.
Efficient way of calling set of functions in Python
I have a set of functions: functions=set(...) All the functions need one parameter x. What is the most efficient way in python of doing something similar to: for function in functions: function(x)
[ "The code you give,\nfor function in functions:\n function(x)\n\n...does not appear to do anything with the result of calling function(x). If that is indeed so, meaning that these functions are called for their side-effects, then there is no more pythonic alternative. Just leave your code as it is.† The point to take home here, specifically, is\n\n                              \nAvoid functions with side-effects in list-comprehensions.\nAs for efficiency: I expect that using anything else instead of your simple loop will not improve runtime. When in doubt, use timeit. For example, the following tests seem to indicate that a regular for-loop is faster than a list-comprehension. (I would be reluctant to draw any general conclusions from this test, thought):\n>>> timeit.Timer('[f(20) for f in functions]', 'functions = [lambda n: i * n for i in range(100)]').repeat()\n[44.727972984313965, 44.752119779586792, 44.577917814254761]\n>>> timeit.Timer('for f in functions: f(20)', 'functions = [lambda n: i * n for i in range(100)]').repeat()\n[40.320928812026978, 40.491761207580566, 40.303879022598267]\n\nBut again, even if these tests would have indicated that list-comprehensions are faster, the point remains that you should not use them when side-effects are involved, for readability's sake.\n\n  †: Well, I'd write for f in functions, so that the difference beteen function and functions is more pronounced. But that's not what this question is about.\n", "If you need the output, a list comprehension would work.\n[func(x) for func in functions]\n\n", "I'm somewhat doubtful of how much of an impact this will have on the total running time of your program, but I guess you could do something like this:\n[func(x) for func in functions]\n\nThe downside is that you will create a new list that you immediatly toss away, but it should be slightly faster than just the for-loop.\nIn any case, make sure you profile your code to confirm that this really is a bottleneck that you need to take care of.\n", "Edit: I redid the test using timeit\nMy new test code:\nimport timeit\n\ndef func(i):\n return i;\n\na = b = c = d = e = f = func\n\nfunctions = [a, b, c, d, e, f]\n\ntimer = timeit.Timer(\"[f(2) for f in functions]\", \"from __main__ import functions\")\nprint (timer.repeat())\n\ntimer = timeit.Timer(\"map(lambda f: f(2), functions)\", \"from __main__ import functions\")\nprint (timer.repeat())\n\ntimer = timeit.Timer(\"for f in functions: f(2)\", \"from __main__ import functions\")\nprint (timer.repeat())\n\nHere is the results from this timing.\ntesting list comprehension\n[1.7169530391693115, 1.7683839797973633, 1.7840299606323242]\n\ntesting map(f, l)\n[2.5285000801086426, 2.5957231521606445, 2.6551258563995361] \n\ntesting plain loop\n[1.1665718555450439, 1.1711149215698242, 1.1652190685272217]\n\nMy original, time.time() based timings are pretty much inline with this testing, plain for loops seem to be the most efficient. \n" ]
[ 7, 1, 0, 0 ]
[]
[]
[ "iteration", "python" ]
stackoverflow_0001785867_iteration_python.txt
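Summarising the advice above in runnable form: a plain loop when the results are discarded, a list comprehension when they are kept, and timeit if you want numbers for your own machine. The callables and counts below are arbitrary examples.

import timeit

functions = {abs, float, str}          # any set of one-argument callables
x = -3

for f in functions:                    # call for side effects, discard results
    f(x)

results = [f(x) for f in functions]    # call and keep the results

setup = "fs = [abs, float, str]"
print(timeit.timeit("for f in fs: f(-3)", setup, number=100000))
print(timeit.timeit("[f(-3) for f in fs]", setup, number=100000))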
Q: Creation of PyTuple in C++ module crashes Having some trouble with this code. Trying to return a tuple of tuples (coordinates) from a C++ module Im writing. It looks right to me, the dirty list contains two Coords so len is 2, the x and y values of the items in the list are 0,0 and 0,1 respectively. First time Im attempting this so I might very well have misunderstood the docs or something. Any hints? PyObject* getDirty() { int len = dirty.size(); PyObject* tuple = PyTuple_New(len); int count = 0; for (std::list<Coord>::iterator i = dirty.begin(); i != dirty.end(); ++i) { PyTuple_SET_ITEM(tuple, count, PyTuple_Pack(2, (*i).x, (*i).y)); ++count; } return tuple; } Edit: Oh, forgot to mention, the actual crash is on the PyTuple_Set_ITEM line. A: The arguments to PyTuple_Pack, after the first one, must be PyObject pointers. You might want instead Py_BuildValue("(ii)", (*i).x, (*i).y) ...assuming the coordinates are actually of type int.
Creation of PyTuple in C++ module crashes
I'm having some trouble with this code. I'm trying to return a tuple of tuples (coordinates) from a C++ module I'm writing. It looks right to me: the dirty list contains two Coords, so len is 2, and the x and y values of the items in the list are 0,0 and 0,1 respectively. This is the first time I'm attempting this, so I might very well have misunderstood the docs or something. Any hints? PyObject* getDirty() { int len = dirty.size(); PyObject* tuple = PyTuple_New(len); int count = 0; for (std::list<Coord>::iterator i = dirty.begin(); i != dirty.end(); ++i) { PyTuple_SET_ITEM(tuple, count, PyTuple_Pack(2, (*i).x, (*i).y)); ++count; } return tuple; } Edit: Oh, forgot to mention, the actual crash is on the PyTuple_SET_ITEM line.
[ "The arguments to PyTuple_Pack, after the first one, must be PyObject pointers.\nYou might want instead\nPy_BuildValue(\"(ii)\", (*i).x, (*i).y)\n\n...assuming the coordinates are actually of type int.\n" ]
[ 1 ]
[]
[]
[ "python", "python_c_api" ]
stackoverflow_0001786070_python_python_c_api.txt
Q: How to install EasyGUI on Mac OS X 10.6 (Snow Leopard)? I would like to install EasyGUI on Mac OS X 10.6, but am running into trouble. Has anyone successfully done this? If so, what explicit set of steps did you follow? Thank you. A: It's hard to know what running into trouble means but, nonetheless, something like this seems to work on 10.6: mkdir test_easygui cd test_easygui curl http://easygui.sourceforge.net/current_version/easygui_v0.93.tar.gz | tar xz /usr/bin/python2.6 easygui.py EDIT: Unfortunately, the EasyGui download does not include a setup.py that would allow you to easily install it in the normal Python manner. To be a good citizen in the Python World, it should. Fortunately, it is easy to supply one. In the download directory (test_easygui in the example above), create the following text file and name it setup.py: from distutils.core import setup setup(name='easygui', version='0.93', py_modules=['easygui'], ) Then run the following command: sudo /usr/bin/python2.6 setup.py install Now you should be able to run the demo from any directory by: /usr/bin/python2.6 -m easygui and you should be able to just import easygui in your own python modules. By the way, this process should work on any supported python platform (not just OS X) and with any supported python interpreter you have installed: just substitute the proper path for /usr/bin/python2.6. (If you don't have any extra pythons installed on OS X 10.6, typing just python should be sufficient.)
How to install EasyGUI on Mac OS X 10.6 (Snow Leopard)?
I would like to install EasyGUI on Mac OS X 10.6, but am running into trouble. Has anyone successfully done this? If so, what explicit set of steps did you follow? Thank you.
[ "It's hard to know what running into trouble means but, nonetheless, something like this seems to work on 10.6:\nmkdir test_easygui\ncd test_easygui\ncurl http://easygui.sourceforge.net/current_version/easygui_v0.93.tar.gz | tar xz\n/usr/bin/python2.6 easygui.py\n\nEDIT: \nUnfortunately, the EasyGui download does not include a setup.py that would allow you to easily install it in the normal Python manner. To be a good citizen in the Python World, it should. Fortunately, it is easy to supply one. In the download directory (test_easygui in the example above), create the following text file and name it setup.py:\nfrom distutils.core import setup\nsetup(name='easygui',\n version='0.93',\n py_modules=['easygui'],\n )\n\nThen run the following command:\nsudo /usr/bin/python2.6 setup.py install\n\nNow you should be able to run the demo from any directory by:\n/usr/bin/python2.6 -m easygui\n\nand you should be able to just import easygui in your own python modules. By the way, this process should work on any supported python platform (not just OS X) and with any supported python interpreter you have installed: just substitute the proper path for /usr/bin/python2.6. (If you don't have any extra pythons installed on OS X 10.6, typing just python should be sufficient.)\n" ]
[ 2 ]
[]
[]
[ "easygui", "macos", "python" ]
stackoverflow_0001785987_easygui_macos_python.txt
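Once the module is importable (either straight from the download directory or via the hand-written setup.py above), a short smoke test confirms the install; it assumes Tk is available, as it is with Apple's bundled Python on 10.6, and the messages and button labels are arbitrary:

import easygui

easygui.msgbox("EasyGUI is working on this Mac", title="smoke test")
choice = easygui.buttonbox("Pick one", choices=["left", "middle", "right"])
print(choice)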
Q: How do I copy wsgi.input if I want to process POST data more than once? In WSGI, post data is consumed by reading the file-like object environ['wsgi.input']. If a second element in the stack also wants to read post data it may hang the program by reading when there's nothing more to read. How should I copy the POST data so it can be processed multiple times? A: You could try putting a file-like replica of the stream back in the environment: from cStringIO import StringIO length = int(environ.get('CONTENT_LENGTH', '0')) body = StringIO(environ['wsgi.input'].read(length)) environ['wsgi.input'] = body Needing to do this is a bit of a smell, though. Ideally only one piece of code should be parsing the query string and post body, and delivering the results to other components. A: Go have a look at WebOb package. It provides functionality that allows one to designate that wsgi.input should be made seekable. This has the effect of allowing you to rewind the input stream such that content can be replayed through different handler. Even if you don't use WebOb, the way it does this should be instructive as would trust Ian to have done this in an appropriate way. For search results in documentation go here. A: If you're gonna read it in one fell swoop, you could always read it in, create a CStringIO file-like object of the stuff you've read and then assign it back, like this: import cStringIO import copy lines = [] for line in environ['wsgi.input']: lines.append(line) newlines = copy.copy(lines) environ['wsgi.input'] = cStringIO.StringIO(''.join(newlines)) There's most likely a more efficient way to do this, but I in general find wsgi's post stuff pretty brittle if you want to do anything non-trivial (like read post data muptiple times)...
How do I copy wsgi.input if I want to process POST data more than once?
In WSGI, post data is consumed by reading the file-like object environ['wsgi.input']. If a second element in the stack also wants to read post data it may hang the program by reading when there's nothing more to read. How should I copy the POST data so it can be processed multiple times?
[ "You could try putting a file-like replica of the stream back in the environment:\nfrom cStringIO import StringIO\n\nlength = int(environ.get('CONTENT_LENGTH', '0'))\nbody = StringIO(environ['wsgi.input'].read(length))\nenviron['wsgi.input'] = body\n\nNeeding to do this is a bit of a smell, though. Ideally only one piece of code should be parsing the query string and post body, and delivering the results to other components.\n", "Go have a look at WebOb package. It provides functionality that allows one to designate that wsgi.input should be made seekable. This has the effect of allowing you to rewind the input stream such that content can be replayed through different handler. Even if you don't use WebOb, the way it does this should be instructive as would trust Ian to have done this in an appropriate way. For search results in documentation go here.\n", "If you're gonna read it in one fell swoop, you could always read it in, create a CStringIO file-like object of the stuff you've read and then assign it back, like this:\nimport cStringIO\nimport copy\nlines = []\nfor line in environ['wsgi.input']:\n lines.append(line)\nnewlines = copy.copy(lines)\nenviron['wsgi.input'] = cStringIO.StringIO(''.join(newlines))\n\nThere's most likely a more efficient way to do this, but I in general find wsgi's post stuff pretty brittle if you want to do anything non-trivial (like read post data muptiple times)...\n" ]
[ 12, 8, 1 ]
[]
[]
[ "python", "wsgi" ]
stackoverflow_0001783383_python_wsgi.txt
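A minimal sketch of the buffering approach described in the answers above: a piece of WSGI middleware reads the body once, replaces environ['wsgi.input'] with an in-memory copy, and rewinds it after the wrapped application has run, so a later consumer can read the same POST data again. The middleware name is illustrative, not something from the original answers.

from cStringIO import StringIO   # io.BytesIO on Python 3

class ReplayableBodyMiddleware(object):
    """Buffer wsgi.input so that POST data can be read more than once."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = StringIO(environ['wsgi.input'].read(length))
        environ['wsgi.input'] = body
        try:
            return self.app(environ, start_response)
        finally:
            body.seek(0)   # rewind so the next reader starts from the beginning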
Q: GAE - How to live with no joins? Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing? A: If you look at how the SQL solution you provided will be executed, it will go basically like this: Fetch a list of friends for the current user For each user in the list, start an index scan over recent posts Merge-join all the scans from step 2, stopping when you've retrieved enough entries You can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them. You're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: Fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends. A: This topic is covered in a Google io talk: http://code.google.com/events/io/sessions/BuildingScalableComplexApps.html Basically the Google team suggest using list properties and what they call relational index entities, an example application can be found here: http://pubsub-test.appspot.com/ A: "Load user, loop through the list of friends and load their latest blog posts." That's all a join is -- nested loops. Some kinds of joins are loops with lookups. Most lookups are just loops; some are hashes. "Finally merge all the blog posts to find the latest 10 blog entries" That's a ORDER BY with a LIMIT. That's what the database is doing for you. I'm not sure what's not scalable about this; it's what a database does anyway. A: Here is an example in python gleamed from http://pubsub-test.appspot.com/: Anyone have one for java? Thanks. from google.appengine.ext import webapp from google.appengine.ext import db class Message(db.Model): body = db.TextProperty(required=True) sender = db.StringProperty(required=True) receiver_id = db.ListProperty(int) class SlimMessage(db.Model): body = db.TextProperty(required=True) sender = db.StringProperty(required=True) class MessageIndex(db.Model): receiver_id = db.ListProperty(int) class MainHandler(webapp.RequestHandler): def get(self): receiver_id = int(self.request.get('receiver_id', '1')) key_only = self.request.get('key_only').lower() == 'on' if receiver_id: if key_only: keys = db.GqlQuery( 'SELECT __key__ FROM MessageIndex WHERE receiver_id = :1', receiver_id).fetch(10) messages.extend(db.get([k.parent() for k in keys])) else: messages.extend(Message.gql('WHERE receiver_id = :1', receiver_id).fetch(10))
GAE - How to live with no joins?
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in SQL land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions I can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries. In a blog post, have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. I'm sure others have hit this problem, but I've searched, watched Google I/O videos, read others' code ... What am I missing?
[ "If you look at how the SQL solution you provided will be executed, it will go basically like this:\n\nFetch a list of friends for the current user\nFor each user in the list, start an index scan over recent posts\nMerge-join all the scans from step 2, stopping when you've retrieved enough entries\n\nYou can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them.\nYou're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: Fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends.\n", "This topic is covered in a Google io talk:\nhttp://code.google.com/events/io/sessions/BuildingScalableComplexApps.html\nBasically the Google team suggest using list properties and what they call relational index entities, an example application can be found here: http://pubsub-test.appspot.com/\n", "\"Load user, loop through the list of friends and load their latest blog posts.\"\nThat's all a join is -- nested loops. Some kinds of joins are loops with lookups. Most lookups are just loops; some are hashes.\n\"Finally merge all the blog posts to find the latest 10 blog entries\"\nThat's a ORDER BY with a LIMIT. That's what the database is doing for you.\nI'm not sure what's not scalable about this; it's what a database does anyway.\n", "Here is an example in python gleamed from http://pubsub-test.appspot.com/:\nAnyone have one for java? Thanks.\nfrom google.appengine.ext import webapp\n\nfrom google.appengine.ext import db\n\nclass Message(db.Model):\n body = db.TextProperty(required=True)\n sender = db.StringProperty(required=True)\n receiver_id = db.ListProperty(int)\n\nclass SlimMessage(db.Model):\n body = db.TextProperty(required=True)\n sender = db.StringProperty(required=True)\n\nclass MessageIndex(db.Model): \n receiver_id = db.ListProperty(int)\n\nclass MainHandler(webapp.RequestHandler):\n\n def get(self):\n receiver_id = int(self.request.get('receiver_id', '1'))\n key_only = self.request.get('key_only').lower() == 'on'\n if receiver_id:\n if key_only:\n keys = db.GqlQuery(\n 'SELECT __key__ FROM MessageIndex WHERE receiver_id = :1',\n receiver_id).fetch(10)\n messages.extend(db.get([k.parent() for k in keys]))\n else:\n messages.extend(Message.gql('WHERE receiver_id = :1',\n receiver_id).fetch(10))\n\n" ]
[ 13, 7, 1, 0 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "join", "python" ]
stackoverflow_0000445827_google_app_engine_google_cloud_datastore_join_python.txt
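The merge-join idea from the first answer can be sketched as below. The BlogPost model, the user.friends list of keys, and the property names are assumptions made for illustration; a plain client-side sort stands in for a true streaming merge to keep the sketch short, and the cost still grows with the number of friends, exactly as the answer warns.

def latest_friend_posts(user, limit=10):
    """One datastore query per friend, merged client-side by date."""
    posts = []
    for friend_key in user.friends:          # assumed: a list of User keys
        q = (BlogPost.all()
                     .filter('writer =', friend_key)
                     .order('-date'))
        posts.extend(q.fetch(limit))         # each friend can contribute at most `limit`
    posts.sort(key=lambda p: p.date, reverse=True)
    return posts[:limit]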
Q: Python Vs Ruby On Rails : on Size I am planning to do a small web application that will be distributed as a single installable. I have plans to develop this application in either Python/Django or Ruby On Rails. (I am a Java/C++ programmer, hence both these languages are new to me). My main concern is about the size and simplicity of final installable (say setup.exe). I want it to be small in size and also should be able to pack all required components in it. Which one among Python/Django and Ruby On Rails is suitable for me? A: I personally prefer Python/django. Size is small given u have necessary things installed. A: With disk space at the current price, size shouldn't matter. Give both a try and figure out which will be easier for you to learn and maintain. Despite the fact that people believe that when you know one language, you know all, that's only true as long as you write code on the "hello world" level. A: One option with Ruby on Rails is to go with a JRuby deployment which would allow you to pack it all into a single .war file. This would require the person deploying the web application to have a java web application server (Jetty is probably the smallest and easiest to bundle). With Rails, you are generally going to have to install Ruby and any required ruby gems. The Ruby install is going to be machine specific- different for Windows/Linux. Everything else should be easily scripted. If you go with an Apache Passenger (mod_ruby) solution, you will need to get that installed as well. In reality, I haven't run into many server applications with simple, compact installs. A: I just used heroku to deploy a blog written in Rails, and it was a fantastically easy experience. If you're interested in simplicity, it's probably the most simple deploy I've ever experienced. A: I don't think you can get them both. I'm sorry to say this but you have to choose which one is more important to you. Django application is smaller in size because many things is already provided out of the box, but deployment is not as easy. On the other hand, RoR apps deployment is easier (both Ruby MRI or JRuby) but the application's size is naturally larger given you have to install other gems and Ruby On Rails plugins. A: If you are experienced with Java and concerned about deploying Django and Rails apps, I'd recommend you give JRuby a try. This will give you several benefits from a Java-perpective: You can call Java-classes and components from your Ruby/Rails app You can use a familiar IDE such as Netbeans You can package and deploy our entire Rails app as a single WAR-file with all dependencies included A: With the cheeseshop, any python application can be made installable with a single command. I'm a big fan of Django, but it will require you to hook into an external webserver, as the built in server is for development only. You might look for something that has a more robust builtin web server if you want something you can just plunk down and start running. Twisted might meet your needs, though there's a bit more of a learning curve on that. I'm not sure how other python or ruby apps stand up on this front.
Python Vs Ruby On Rails : on Size
I am planning to do a small web application that will be distributed as a single installable. I have plans to develop this application in either Python/Django or Ruby on Rails. (I am a Java/C++ programmer, hence both these languages are new to me.) My main concern is the size and simplicity of the final installable (say setup.exe). I want it to be small in size and able to pack all required components into it. Which one of Python/Django and Ruby on Rails is suitable for me?
[ "I personally prefer Python/django. Size is small given u have necessary things installed. \n", "With disk space at the current price, size shouldn't matter. Give both a try and figure out which will be easier for you to learn and maintain. Despite the fact that people believe that when you know one language, you know all, that's only true as long as you write code on the \"hello world\" level.\n", "One option with Ruby on Rails is to go with a JRuby deployment which would allow you to pack it all into a single .war file. This would require the person deploying the web application to have a java web application server (Jetty is probably the smallest and easiest to bundle).\nWith Rails, you are generally going to have to install Ruby and any required ruby gems. The Ruby install is going to be machine specific- different for Windows/Linux. Everything else should be easily scripted. If you go with an Apache Passenger (mod_ruby) solution, you will need to get that installed as well. \nIn reality, I haven't run into many server applications with simple, compact installs.\n", "I just used heroku to deploy a blog written in Rails, and it was a fantastically easy experience. If you're interested in simplicity, it's probably the most simple deploy I've ever experienced.\n", "I don't think you can get them both. I'm sorry to say this but you have to choose which one is more important to you. \n\nDjango application is smaller in size because many things is already provided out of the box, but deployment is not as easy.\nOn the other hand, RoR apps deployment is easier (both Ruby MRI or JRuby) but the application's size is naturally larger given you have to install other gems and Ruby On Rails plugins.\n\n", "If you are experienced with Java and concerned about deploying Django and Rails apps, I'd recommend you give JRuby a try. This will give you several benefits from a Java-perpective:\n\nYou can call Java-classes and components from your Ruby/Rails app\nYou can use a familiar IDE such as Netbeans\nYou can package and deploy our entire Rails app as a single WAR-file with all dependencies included\n\n", "With the cheeseshop, any python application can be made installable with a single command. I'm a big fan of Django, but it will require you to hook into an external webserver, as the built in server is for development only. You might look for something that has a more robust builtin web server if you want something you can just plunk down and start running. Twisted might meet your needs, though there's a bit more of a learning curve on that. I'm not sure how other python or ruby apps stand up on this front.\n" ]
[ 4, 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "django", "python", "ruby_on_rails" ]
stackoverflow_0001783431_django_python_ruby_on_rails.txt
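None of the answers show what the "single installable" end of the Python route actually looks like, so here is a minimal sketch using py2exe; py2exe is my own example choice, not a tool recommended in the thread, and an installer builder such as Inno Setup or NSIS would still be needed to turn the resulting dist/ folder into a setup.exe.

# setup.py -- run with:  python setup.py py2exe
from distutils.core import setup
import py2exe   # importing py2exe registers its distutils command

setup(
    console=['run_app.py'],                    # entry-point script (hypothetical name)
    options={'py2exe': {'bundle_files': 1}},   # fold dependencies into fewer files
)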
Q: Parsing XML in Python using ElementTree example I'm having a hard time finding a good, basic example of how to parse XML in python using Element Tree. From what I can find, this appears to be the easiest library to use for parsing XML. Here is a sample of the XML I'm working with: <timeSeriesResponse> <queryInfo> <locationParam>01474500</locationParam> <variableParam>99988</variableParam> <timeParam> <beginDateTime>2009-09-24T15:15:55.271</beginDateTime> <endDateTime>2009-11-23T15:15:55.271</endDateTime> </timeParam> </queryInfo> <timeSeries name="NWIS Time Series Instantaneous Values"> <values count="2876"> <value dateTime="2009-09-24T15:30:00.000-04:00" qualifiers="P">550</value> <value dateTime="2009-09-24T16:00:00.000-04:00" qualifiers="P">419</value> <value dateTime="2009-09-24T16:30:00.000-04:00" qualifiers="P">370</value> ..... </values> </timeSeries> </timeSeriesResponse> I am able to do what I need, using a hard-coded method. But I need my code to be a bit more dynamic. Here is what worked: tree = ET.parse(sample.xml) doc = tree.getroot() timeseries = doc[1] values = timeseries[2] print child.attrib['dateTime'], child.text #prints 2009-09-24T15:30:00.000-04:00, 550 Here are a couple of things I've tried, none of them worked, reporting that they couldn't find timeSeries (or anything else I tried): tree = ET.parse(sample.xml) tree.find('timeSeries') tree = ET.parse(sample.xml) doc = tree.getroot() doc.find('timeSeries') Basically, I want to load the xml file, search for the timeSeries tag, and iterate through the value tags, returning the dateTime and the value of the tag itself; everything I'm doing in the above example, but not hard coding the sections of xml I'm interested in. Can anyone point me to some examples, or give me some suggestions on how to work through this? Thanks for all the help. Using both of the below suggestions worked on the sample file I provided, however, they didn't work on the full file. Here is the error I get from the real file when I use Ed Carrel's method: (<type 'exceptions.AttributeError'>, AttributeError("'NoneType' object has no attribute 'attrib'",), <traceback object at 0x011EFB70>) I figured there was something in the real file it didn't like, so I incremently removed things until it worked. Here are the lines that I changed: originally: <timeSeriesResponse xsi:schemaLocation="a URL I removed" xmlns="a URL I removed" xmlns:xsi="a URL I removed"> changed to: <timeSeriesResponse> originally: <sourceInfo xsi:type="SiteInfoType"> changed to: <sourceInfo> originally: <geogLocation xsi:type="LatLonPointType" srs="EPSG:4326"> changed to: <geogLocation> Removing the attributes that have 'xsi:...' fixed the problem. Is the 'xsi:...' not valid XML? It will be hard for me to remove these programmatically. Any suggested work arounds? Here is the full XML file: http://www.sendspace.com/file/lofcpt When I originally asked this question, I was unaware of namespaces in XML. Now that I know what's going on, I don't have to remove the "xsi" attributes, which are the namespace declarations. I just include them in my xpath searches. See this page for more info on namespaces in lxml. 
A: So I have ElementTree 1.2.6 on my box now, and ran the following code against the XML chunk you posted: import elementtree.ElementTree as ET tree = ET.parse("test.xml") doc = tree.getroot() thingy = doc.find('timeSeries') print thingy.attrib and got the following back: {'name': 'NWIS Time Series Instantaneous Values'} It appears to have found the timeSeries element without needing to use numerical indices. What would be useful now is knowing what you mean when you say "it doesn't work." Since it works for me given the same input, it is unlikely that ElementTree is broken in some obvious way. Update your question with any error messages, backtraces, or anything you can provide to help us help you. A: If I understand your question correctly: for elem in doc.findall('timeSeries/values/value'): print elem.get('dateTime'), elem.text or if you prefer (and if there is only one occurrence of timeSeries/values: values = doc.find('timeSeries/values') for value in values: print value.get('dateTime'), elem.text The findall() method returns a list of all matching elements, whereas find() returns only the first matching element. The first example loops over all the found elements, the second loops over the child elements of the values element, in this case leading to the same result. I don't see where the problem with not finding timeSeries comes from however. Maybe you just forgot the getroot() call? (note that you don't really need it because you can work from the elementtree itself too, if you change the path expression to for example /timeSeriesResponse/timeSeries/values or //timeSeries/values)
Parsing XML in Python using ElementTree example
I'm having a hard time finding a good, basic example of how to parse XML in python using Element Tree. From what I can find, this appears to be the easiest library to use for parsing XML. Here is a sample of the XML I'm working with: <timeSeriesResponse> <queryInfo> <locationParam>01474500</locationParam> <variableParam>99988</variableParam> <timeParam> <beginDateTime>2009-09-24T15:15:55.271</beginDateTime> <endDateTime>2009-11-23T15:15:55.271</endDateTime> </timeParam> </queryInfo> <timeSeries name="NWIS Time Series Instantaneous Values"> <values count="2876"> <value dateTime="2009-09-24T15:30:00.000-04:00" qualifiers="P">550</value> <value dateTime="2009-09-24T16:00:00.000-04:00" qualifiers="P">419</value> <value dateTime="2009-09-24T16:30:00.000-04:00" qualifiers="P">370</value> ..... </values> </timeSeries> </timeSeriesResponse> I am able to do what I need, using a hard-coded method. But I need my code to be a bit more dynamic. Here is what worked: tree = ET.parse(sample.xml) doc = tree.getroot() timeseries = doc[1] values = timeseries[2] print child.attrib['dateTime'], child.text #prints 2009-09-24T15:30:00.000-04:00, 550 Here are a couple of things I've tried, none of them worked, reporting that they couldn't find timeSeries (or anything else I tried): tree = ET.parse(sample.xml) tree.find('timeSeries') tree = ET.parse(sample.xml) doc = tree.getroot() doc.find('timeSeries') Basically, I want to load the xml file, search for the timeSeries tag, and iterate through the value tags, returning the dateTime and the value of the tag itself; everything I'm doing in the above example, but not hard coding the sections of xml I'm interested in. Can anyone point me to some examples, or give me some suggestions on how to work through this? Thanks for all the help. Using both of the below suggestions worked on the sample file I provided, however, they didn't work on the full file. Here is the error I get from the real file when I use Ed Carrel's method: (<type 'exceptions.AttributeError'>, AttributeError("'NoneType' object has no attribute 'attrib'",), <traceback object at 0x011EFB70>) I figured there was something in the real file it didn't like, so I incremently removed things until it worked. Here are the lines that I changed: originally: <timeSeriesResponse xsi:schemaLocation="a URL I removed" xmlns="a URL I removed" xmlns:xsi="a URL I removed"> changed to: <timeSeriesResponse> originally: <sourceInfo xsi:type="SiteInfoType"> changed to: <sourceInfo> originally: <geogLocation xsi:type="LatLonPointType" srs="EPSG:4326"> changed to: <geogLocation> Removing the attributes that have 'xsi:...' fixed the problem. Is the 'xsi:...' not valid XML? It will be hard for me to remove these programmatically. Any suggested work arounds? Here is the full XML file: http://www.sendspace.com/file/lofcpt When I originally asked this question, I was unaware of namespaces in XML. Now that I know what's going on, I don't have to remove the "xsi" attributes, which are the namespace declarations. I just include them in my xpath searches. See this page for more info on namespaces in lxml.
[ "So I have ElementTree 1.2.6 on my box now, and ran the following code against the XML chunk you posted: \nimport elementtree.ElementTree as ET\n\ntree = ET.parse(\"test.xml\")\ndoc = tree.getroot()\nthingy = doc.find('timeSeries')\n\nprint thingy.attrib\n\nand got the following back:\n{'name': 'NWIS Time Series Instantaneous Values'}\n\nIt appears to have found the timeSeries element without needing to use numerical indices.\nWhat would be useful now is knowing what you mean when you say \"it doesn't work.\" Since it works for me given the same input, it is unlikely that ElementTree is broken in some obvious way. Update your question with any error messages, backtraces, or anything you can provide to help us help you.\n", "If I understand your question correctly:\nfor elem in doc.findall('timeSeries/values/value'):\n print elem.get('dateTime'), elem.text\n\nor if you prefer (and if there is only one occurrence of timeSeries/values:\nvalues = doc.find('timeSeries/values')\nfor value in values:\n print value.get('dateTime'), elem.text\n\nThe findall() method returns a list of all matching elements, whereas find() returns only the first matching element. The first example loops over all the found elements, the second loops over the child elements of the values element, in this case leading to the same result.\nI don't see where the problem with not finding timeSeries comes from however. Maybe you just forgot the getroot() call? (note that you don't really need it because you can work from the elementtree itself too, if you change the path expression to for example /timeSeriesResponse/timeSeries/values or //timeSeries/values) \n" ]
[ 49, 22 ]
[]
[]
[ "elementtree", "python", "xml" ]
stackoverflow_0001786476_elementtree_python_xml.txt
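Since the root cause in the question turned out to be XML namespaces, a short namespace-qualified search is sketched below; the namespace URI is a placeholder, because the real one was removed from the question, and the rest follows the answers above.

import elementtree.ElementTree as ET   # or xml.etree.ElementTree in newer Pythons

# Placeholder for the default xmlns URI that was stripped from the question.
NS = '{http://example.com/removed-namespace}'

tree = ET.parse('sample.xml')
doc = tree.getroot()

# With a default namespace, every element name in the path must be qualified.
for value in doc.findall(NS + 'timeSeries/' + NS + 'values/' + NS + 'value'):
    print value.get('dateTime'), value.text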
Q: Why doesn't this division work in Python? Consider: >>> numerator = 29 >>> denom = 1009 >>> print str(float(numerator/denom)) 0.0 How do I make it return a decimal? A: Until version 3, Python's division operator, /, behaved like C's division operator when presented with two integer arguments: it returns an integer result that's truncated down when there would be a fractional part. See: PEP 238 >>> n = 29 >>> d = 1009 >>> print str(float(n)/d) 0.0287413280476 In Python 2 (and maybe earlier) you could use: >>> from __future__ import division >>> n/d 0.028741328047571853 A: In Python 2.x, division works like it does in C-like languages: if both arguments are integers, the result is truncated to an integer, so 29/1009 is 0. 0 as a float is 0.0. To fix it, cast to a float before dividing: print str(float(numerator)/denominator) In Python 3.x, the division acts more naturally, so you'll get the correct mathematical result (within floating-point error). A: In your evaluation you are casting the result, you need to instead cast the operands. A: print str(float(numerator)/float(denom))
Why doesn't this division work in Python?
Consider: >>> numerator = 29 >>> denom = 1009 >>> print str(float(numerator/denom)) 0.0 How do I make it return a decimal?
[ "\nUntil version 3, Python's division operator, /, behaved like C's division operator when presented with two integer arguments: it returns an integer result that's truncated down when there would be a fractional part. See: PEP 238\n\n>>> n = 29\n>>> d = 1009\n>>> print str(float(n)/d)\n0.0287413280476\n\nIn Python 2 (and maybe earlier) you could use:\n>>> from __future__ import division\n>>> n/d\n0.028741328047571853\n\n", "In Python 2.x, division works like it does in C-like languages: if both arguments are integers, the result is truncated to an integer, so 29/1009 is 0. 0 as a float is 0.0. To fix it, cast to a float before dividing:\nprint str(float(numerator)/denominator)\n\nIn Python 3.x, the division acts more naturally, so you'll get the correct mathematical result (within floating-point error).\n", "In your evaluation you are casting the result, you need to instead cast the operands. \n", "print str(float(numerator)/float(denom))\n\n" ]
[ 30, 8, 1, 0 ]
[]
[]
[ "numbers", "python" ]
stackoverflow_0001787249_numbers_python.txt
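A compact, runnable illustration of the fixes mentioned in the answers, as they behave on Python 2:

from __future__ import division   # must be the first statement in the module

numerator, denom = 29, 1009

print numerator // denom            # 0               -- floor division, the old '/' behaviour
print float(numerator) / denom      # 0.0287413280476 -- cast one operand
print numerator / denom             # 0.0287413280476 -- true division via the __future__ import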
Q: Python - Simple algorithmic task on lists (standard question for a job-interview) There are 2 input lists L and M, for example: L = ['a', 'ab', 'bba'] M = ['baa', 'aa', 'bb'] How to obtain 2 non-empty output lists U and V such that: ''.join(U) == ''.join(V)) is True, and every element of U is in L, and every element of V is in M? For example, one possible solution for the two input lists above is: U=['bba', 'ab', 'bba', 'a'] V=['bb', 'aa', 'bb', 'baa'] because 'bbaabbbaa' == 'bbaabbbaa' is True and every element of ['bba', 'ab', 'bba', 'a'] is in ['a', 'ab', 'bba'] and every element of ['bb', 'aa', 'bb', 'baa'] is in ['baa', 'aa', 'bb'] 1) Create an algorithm which will find at least one solution (U and V). 2) Can it be solved in O(n) where n=len(L+M) ? :wq A: What are you looking for -- all (the countable infinity of) possible solutions? The "shortest" (by some measure) non-empty solution, or the set of equal-shortest ones, or...? Because, if any solution will do, setting U and V both to [] meets all the stated conditions, and is O(1) to boot;-). Edit: ok, so, joking apart, here's a nicely symmetrical solution to print the first ten of the countably-infinite non-empty solutions: import itertools as it import collections L = ['a', 'ab', 'bba'] M = ['baa', 'aa', 'bb'] def cmbs(L=L, M=M): Ucans = collections.defaultdict(list) Vcans = collections.defaultdict(list) sides = (L, Vcans, Ucans), (M, Ucans, Vcans) for i in it.count(1): for k, (G, Ocans, Tcans) in enumerate(sides): for u in it.product(G, repeat=i): j = ''.join(u) if j in Ocans: for samp in Ocans[j]: result = samp, u yield result[1-k], result[k] Tcans[j].append(u) if __name__ == '__main__': for x, y in it.islice(cmbs(), 10): print x, y, ''.join(x), ''.join(y) which emits ('a', 'a') ('aa',) aa aa ('bba', 'a') ('bb', 'aa') bbaa bbaa ('a', 'a', 'a', 'a') ('aa', 'aa') aaaa aaaa ('a', 'a', 'bba', 'a') ('aa', 'bb', 'aa') aabbaa aabbaa ('a', 'ab', 'a', 'a') ('aa', 'baa') aabaa aabaa ('a', 'ab', 'bba', 'a') ('aa', 'bb', 'baa') aabbbaa aabbbaa ('bba', 'a', 'a', 'a') ('bb', 'aa', 'aa') bbaaaa bbaaaa ('bba', 'ab', 'a', 'a') ('bb', 'aa', 'baa') bbaabaa bbaabaa ('bba', 'ab', 'bba', 'a') ('bb', 'aa', 'bb', 'baa') bbaabbbaa bbaabbbaa ('bba', 'a', 'bba', 'a') ('bb', 'aa', 'bb', 'aa') bbaabbaa bbaabbaa I'm not sure what's meant by O(N) in the context of a problem with countably infinite solutions -- what's N supposed to be here?!-) Edit 2: changed to use (default)dicts of lists to ensure it finds all solutions even when the same joined-string can be made in > 1 ways from one of the input collections (a condition which did not occur in the sample input, so the sample output is unaffected); for example, if L is ['a', 'aa'] clearly any joined string with > 1 a can be made in multiple ways -- the current solution will emit all of those multiple ways when such a joined string matches one made for M, while the previous one just emitted one of them. A: This is my try! I think it finds all the solutions. 
from itertools import product m = 5 # top limit of elements in output lists sumsets = lambda s1, s2: s1 | s2 for u in reduce(sumsets, [set(product(L, repeat=i)) for i in range(1, m+1)]): for v in reduce(sumsets, [set(product(M, repeat=i)) for i in range(1, m+1)]): if ''.join(u) == ''.join(v): print u, v Output: U, V ('a', 'a', 'a', 'a') ('aa', 'aa') ('a', 'a') ('aa',) ('a', 'a', 'bba', 'a') ('aa', 'bb', 'aa') ('bba', 'a', 'a', 'a') ('bb', 'aa', 'aa') ('bba', 'a') ('bb', 'aa') ('bba', 'ab', 'a', 'a') ('bb', 'aa', 'baa') ('a', 'ab', 'a', 'a') ('aa', 'baa') ('a', 'ab', 'bba', 'a') ('aa', 'bb', 'baa') ('bba', 'ab', 'bba', 'a') ('bb', 'aa', 'bb', 'baa') ('bba', 'a', 'bba', 'a') ('bb', 'aa', 'bb', 'aa') A: This is known as the Post Correspondence Problem and it's undecidable as others have said.
Python - Simple algorithmic task on lists (standard question for a job-interview)
There are 2 input lists L and M, for example: L = ['a', 'ab', 'bba'] M = ['baa', 'aa', 'bb'] How to obtain 2 non-empty output lists U and V such that: ''.join(U) == ''.join(V) is True, and every element of U is in L, and every element of V is in M? For example, one possible solution for the two input lists above is: U=['bba', 'ab', 'bba', 'a'] V=['bb', 'aa', 'bb', 'baa'] because 'bbaabbbaa' == 'bbaabbbaa' is True and every element of ['bba', 'ab', 'bba', 'a'] is in ['a', 'ab', 'bba'] and every element of ['bb', 'aa', 'bb', 'baa'] is in ['baa', 'aa', 'bb'] 1) Create an algorithm which will find at least one solution (U and V). 2) Can it be solved in O(n) where n=len(L+M)?
[ "What are you looking for -- all (the countable infinity of) possible solutions? The \"shortest\" (by some measure) non-empty solution, or the set of equal-shortest ones, or...?\nBecause, if any solution will do, setting U and V both to [] meets all the stated conditions, and is O(1) to boot;-).\nEdit: ok, so, joking apart, here's a nicely symmetrical solution to print the first ten of the countably-infinite non-empty solutions:\nimport itertools as it\nimport collections\n\nL = ['a', 'ab', 'bba']\nM = ['baa', 'aa', 'bb']\n\ndef cmbs(L=L, M=M):\n Ucans = collections.defaultdict(list)\n Vcans = collections.defaultdict(list)\n sides = (L, Vcans, Ucans), (M, Ucans, Vcans)\n for i in it.count(1):\n for k, (G, Ocans, Tcans) in enumerate(sides):\n for u in it.product(G, repeat=i):\n j = ''.join(u)\n if j in Ocans:\n for samp in Ocans[j]:\n result = samp, u\n yield result[1-k], result[k]\n Tcans[j].append(u)\n\nif __name__ == '__main__':\n for x, y in it.islice(cmbs(), 10):\n print x, y, ''.join(x), ''.join(y)\n\nwhich emits\n('a', 'a') ('aa',) aa aa\n('bba', 'a') ('bb', 'aa') bbaa bbaa\n('a', 'a', 'a', 'a') ('aa', 'aa') aaaa aaaa\n('a', 'a', 'bba', 'a') ('aa', 'bb', 'aa') aabbaa aabbaa\n('a', 'ab', 'a', 'a') ('aa', 'baa') aabaa aabaa\n('a', 'ab', 'bba', 'a') ('aa', 'bb', 'baa') aabbbaa aabbbaa\n('bba', 'a', 'a', 'a') ('bb', 'aa', 'aa') bbaaaa bbaaaa\n('bba', 'ab', 'a', 'a') ('bb', 'aa', 'baa') bbaabaa bbaabaa\n('bba', 'ab', 'bba', 'a') ('bb', 'aa', 'bb', 'baa') bbaabbbaa bbaabbbaa\n('bba', 'a', 'bba', 'a') ('bb', 'aa', 'bb', 'aa') bbaabbaa bbaabbaa\n\nI'm not sure what's meant by O(N) in the context of a problem with countably infinite solutions -- what's N supposed to be here?!-)\nEdit 2: changed to use (default)dicts of lists to ensure it finds all solutions even when the same joined-string can be made in > 1 ways from one of the input collections (a condition which did not occur in the sample input, so the sample output is unaffected); for example, if L is ['a', 'aa'] clearly any joined string with > 1 a can be made in multiple ways -- the current solution will emit all of those multiple ways when such a joined string matches one made for M, while the previous one just emitted one of them.\n", "This is my try! I think it finds all the solutions.\nfrom itertools import product \nm = 5 # top limit of elements in output lists\nsumsets = lambda s1, s2: s1 | s2\n\nfor u in reduce(sumsets, [set(product(L, repeat=i)) for i in range(1, m+1)]):\n for v in reduce(sumsets, [set(product(M, repeat=i)) for i in range(1, m+1)]):\n if ''.join(u) == ''.join(v):\n print u, v \n\nOutput: U, V\n('a', 'a', 'a', 'a') ('aa', 'aa')\n('a', 'a') ('aa',)\n('a', 'a', 'bba', 'a') ('aa', 'bb', 'aa')\n('bba', 'a', 'a', 'a') ('bb', 'aa', 'aa')\n('bba', 'a') ('bb', 'aa')\n('bba', 'ab', 'a', 'a') ('bb', 'aa', 'baa')\n('a', 'ab', 'a', 'a') ('aa', 'baa')\n('a', 'ab', 'bba', 'a') ('aa', 'bb', 'baa')\n('bba', 'ab', 'bba', 'a') ('bb', 'aa', 'bb', 'baa')\n('bba', 'a', 'bba', 'a') ('bb', 'aa', 'bb', 'aa')\n\n", "This is known as the Post Correspondence Problem and it's undecidable as others have said.\n" ]
[ 9, 4, 1 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001786504_algorithm_python.txt
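For completeness, here is an added sketch (not code from the answers) of a bounded breadth-first search that returns one shortest solution when it exists within the bound. It rests on the observation that while building U and V word by word, one joined string must always be a prefix of the other; the search state is just that unmatched suffix. It assumes every word in L and M is non-empty, and it says nothing about part 2 of the question.

from collections import deque

def find_match(L, M, max_overhang=30):
    """Return (U, V) with ''.join(U) == ''.join(V), or None within the bound."""
    start = ('', 'U')                    # (unmatched suffix, which join is ahead)
    queue = deque([(start, (), ())])
    seen = {start}
    while queue:
        (over, ahead), U, V = queue.popleft()
        if ahead == 'U':                 # join(U) == join(V) + over  ->  extend V
            words, tag = M, 'V'
        else:                            # join(V) == join(U) + over  ->  extend U
            words, tag = L, 'U'
        for w in words:
            if w.startswith(over):       # the shorter side catches up or overtakes
                state = (w[len(over):], tag)
            elif over.startswith(w):     # the shorter side is still behind
                state = (over[len(w):], ahead)
            else:
                continue                 # mismatch: dead end
            newU = U + (w,) if tag == 'U' else U
            newV = V + (w,) if tag == 'V' else V
            if state[0] == '':           # both joins are now equal and non-empty
                return list(newU), list(newV)
            if len(state[0]) <= max_overhang and state not in seen:
                seen.add(state)
                queue.append((state, newU, newV))
    return None

print find_match(['a', 'ab', 'bba'], ['baa', 'aa', 'bb'])   # (['a', 'a'], ['aa'])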
Q: How to execute Javascript from Python on Windows? how can I execute Javascript from Python on Windows? I want to get python-spidermonkey functionality. Just like this: >>> class Foo: ... def hello(self): ... print "Hello, Javascript world!" >>> cx.bind_class(Foo, bind_constructor=True) >>> cx.eval_script("var f = new Foo(); f.hello();") Hello, Javascript world! I can't use python-spidermonkey, because it doesn't work in windows A: How about pyv8: http://code.google.com/p/pyv8/ A: You could call SpiderMonkey.
How to execute Javascript from Python on Windows?
how can I execute Javascript from Python on Windows? I want to get python-spidermonkey functionality. Just like this: >>> class Foo: ... def hello(self): ... print "Hello, Javascript world!" >>> cx.bind_class(Foo, bind_constructor=True) >>> cx.eval_script("var f = new Foo(); f.hello();") Hello, Javascript world! I can't use python-spidermonkey, because it doesn't work in windows
[ "How about pyv8: http://code.google.com/p/pyv8/\n", "You could call SpiderMonkey.\n" ]
[ 4, 1 ]
[]
[]
[ "javascript", "python" ]
stackoverflow_0001764674_javascript_python.txt
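If a binding such as PyV8 or python-spidermonkey cannot be installed, shelling out to a stand-alone JavaScript interpreter is another option. This sketch only covers evaluating a script and capturing its output, not binding Python classes into JavaScript like the cx.bind_class example in the question, and it assumes some js.exe (a SpiderMonkey or Node shell, for instance) is on the PATH.

import subprocess

def run_js(source, interpreter='js'):
    """Run a JavaScript snippet in an external interpreter and return its stdout."""
    # -e works for the SpiderMonkey and Node shells; other shells may differ.
    proc = subprocess.Popen([interpreter, '-e', source],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err)
    return out

print run_js("print('Hello, JavaScript world!');")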
Q: How different are the semantics between Python and JavaScript? Both these languages seem extremely similar to me. Although Python supports actual classes instead of being prototype-based, in Python classes are not all that different from functions that generate objects containing values and functions, just as you'd do in JavaScript. On the other hand, JavaScript only supports floating-point numbers and strings as built-in data types. These seem like fairly shallow differences to me, so these things aside, what are some more important differences between them? A: Classical inheritance in Python, Prototypal inheritance in ECMAScript ECMAScript is a braces and semicolons language while Python is white-space and indent/block based No var keyword in Python, implicit globals in ECMAScript, both are lexically scoped Closures in Python 2.5 and lower ( re: Alex Martelli's comment ) are somewhat "limited" because the bindings are read-only, you can't access private variables like you could in ECMAScript There's no undefined in Python, exceptions are thrown Immutable list arrays in Python ( tuples ) No switch statement in Python but instead you're encouraged to use a dictionary in that manner, sometimes its convenient assigning properties to lambdas and executing them ECMAScript 3 does not have a yield statement, nor let expressions/statements, nor array comprehensions - however these are included in Mozilla's JS which is non-standard raise vs throw, except vs catch ( Python, JS ) Native Unicode strings in ECMAScript keyword operators such as and, is, and not are used in Python Python doesn't support counters such as i++ Python's for loop is "smart" so you don't need to use a counter for enumerating through lists, nor do you run into prototypal properties inherited from Object.prototype You don't have to use the new operator in Python to create objects Python is duck-typed I stole a good bit of info from http://hg.toolness.com/python-for-js-programmers/raw-file/tip/PythonForJsProgrammers.html A: Typing: Javascript and Python are both dynamically typed, whereas javascript is weakly, python strongly typed. A: In python, "self" is explicitly passed to a member function, and is not a special keyword or anything. In javascript, "this" is dynamically scoped. you can fiddle with the scope of a member function by calling apply() on it. A: I'll add a few I haven't seen mentioned yet: JavaScript supports object-literal notation. Python doesn't exactly work the same way, but Python dictionaries are similar to JavaScript associative arrays. JavaScript objects/arrays support that cool feature where you don't need to quote (single-word) strings when creating new objects: var foo = { bar: "baz" }; Accessing associative array keys in JavaScript can be done using dot notation, in addition to brace notation. That is, these are the same: foo.bar; //returns "baz" foo["bar"]; // returns "baz" Python's anonymous function (lambda) syntax is not as flexible as JavaScript's anonymous functions. Python has, like, a standard library and stuff. (And yes, I know about Rhino et al., but the libraries they give you are not standard. There's no standardized way to read a file in JavaScript... that I know of.) You can run JavaScript in a browser. Python... not so much. ;) A: Being a JavaScript developer and done some Python stuff (thanks to Google App Engine) I would say that the two major differences between JavaScript and Python would be Formatting. 
JavaScript doesn't care about the looks of your code (think of all the code minimizers and what the resulting looks like) Unicode support. JavaScript is all the way unicode, GAE's Python 2.5 not so much (having Latin 1 as the default character set). So having the need to support non-latin characters can be a real PITA if your'e not sure what you are doing. A: In Python, whitespace is part of the language. In Javascript, braces define code blocks and spaces are ignored. Furthermore, Python has bindings for the Java API, .net, and other cool fancy libraries. Javascript is pretty limited in the library department when compared to Python, but it has some neat windowing libraries and such.
How different are the semantics between Python and JavaScript?
Both these languages seem extremely similar to me. Although Python supports actual classes instead of being prototype-based, in Python classes are not all that different from functions that generate objects containing values and functions, just as you'd do in JavaScript. On the other hand, JavaScript only supports floating-point numbers and strings as built-in data types. These seem like fairly shallow differences to me, so these things aside, what are some more important differences between them?
[ "\nClassical inheritance in Python, Prototypal inheritance in ECMAScript\nECMAScript is a braces and semicolons language while Python is white-space and indent/block based\nNo var keyword in Python, implicit globals in ECMAScript, both are lexically scoped\nClosures in Python 2.5 and lower ( re: Alex Martelli's comment ) are somewhat \"limited\" because the bindings are read-only, you can't access private variables like you could in ECMAScript \nThere's no undefined in Python, exceptions are thrown\nImmutable list arrays in Python ( tuples )\nNo switch statement in Python but instead you're encouraged to use a dictionary in that manner, sometimes its convenient assigning properties to lambdas and executing them\nECMAScript 3 does not have a yield statement, nor let expressions/statements, nor array comprehensions - however these are included in Mozilla's JS which is non-standard\nraise vs throw, except vs catch ( Python, JS )\nNative Unicode strings in ECMAScript\nkeyword operators such as and, is, and not are used in Python\nPython doesn't support counters such as i++\nPython's for loop is \"smart\" so you don't need to use a counter for enumerating through lists, nor do you run into prototypal properties inherited from Object.prototype \nYou don't have to use the new operator in Python to create objects\nPython is duck-typed\n\nI stole a good bit of info from http://hg.toolness.com/python-for-js-programmers/raw-file/tip/PythonForJsProgrammers.html\n", "Typing: Javascript and Python are both dynamically typed, whereas javascript is weakly, python strongly typed.\n", "In python, \"self\" is explicitly passed to a member function, and is not a special keyword or anything.\nIn javascript, \"this\" is dynamically scoped. you can fiddle with the scope of a member function by calling apply() on it.\n", "I'll add a few I haven't seen mentioned yet:\n\nJavaScript supports object-literal notation. Python doesn't exactly work the same way, but Python dictionaries are similar to JavaScript associative arrays. \nJavaScript objects/arrays support that cool feature where you don't need to quote (single-word) strings when creating new objects:\nvar foo = { bar: \"baz\" };\nAccessing associative array keys in JavaScript can be done using dot notation, in addition to brace notation. That is, these are the same:\nfoo.bar; //returns \"baz\"\nfoo[\"bar\"]; // returns \"baz\"\nPython's anonymous function (lambda) syntax is not as flexible as JavaScript's anonymous functions.\nPython has, like, a standard library and stuff. (And yes, I know about Rhino et al., but the libraries they give you are not standard. There's no standardized way to read a file in JavaScript... that I know of.)\nYou can run JavaScript in a browser. Python... not so much. ;)\n\n", "Being a JavaScript developer and done some Python stuff (thanks to Google App Engine) I would say that the two major differences between JavaScript and Python would be\n\nFormatting. JavaScript doesn't care about the looks of your code (think of all the code minimizers and what the resulting looks like)\nUnicode support. JavaScript is all the way unicode, GAE's Python 2.5 not so much (having Latin 1 as the default character set). So having the need to support non-latin characters can be a real PITA if your'e not sure what you are doing.\n\n", "In Python, whitespace is part of the language. In Javascript, braces define code blocks and spaces are ignored. Furthermore, Python has bindings for the Java API, .net, and other cool fancy libraries. 
Javascript is pretty limited in the library department when compared to Python, but it has some neat windowing libraries and such.\n" ]
[ 40, 6, 5, 5, 2, 1 ]
[]
[]
[ "javascript", "python", "semantics" ]
stackoverflow_0001786522_javascript_python_semantics.txt
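Two of the points in the list above, the missing switch statement and the counter-free for loop, in concrete form; both snippets are illustrative additions, not taken from the answers.

# A dictionary dispatch plays the role of switch
handlers = {
    'start': lambda: 'starting',
    'stop':  lambda: 'stopping',
}
print handlers.get('start', lambda: 'unknown')()

# enumerate() replaces a hand-rolled counter (there is no i++ in Python)
for i, name in enumerate(['alice', 'bob', 'charlie']):
    print i, name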
Q: C lib with Python bindings where both want to render I'm sketching on some fluid dynamics in Python. After a while, I'm looking for a bit more speed, so I rewrote the actual logic in C and put up some Python bindings (using SWIG). My problem now is that I don't how to render it in a good way. The logic is run pixel by pixel so pixels are what I want to track and render. Python gives my a TypeError if I try to make a function in the C lib that accepts a SDL_Surface*, I was probably a bit naive to think that PyGame mapped that easily directly to SDL. Python also seems unsure what to do if I make the C libs "init" return an SDL_Surface*. What is a good way to do this? It wouldn't be a problem if I would just render everything in the C lib. But I want to put on some GUI there (using Python). The C lib already keeps track of which pixels are "dirty". Should I expose that list and let Python loop through it, call a function for every dirty pixel? Seems bad, since those kind of huge loops are the exact reason I wanted to rewrite parts of the app in C. And before anyone suggests it, boost.python is a bit heavy to install right now (since I'm on Windows), so I'll just stick to SWIG for the moment (unless anyone has a clever way to install "just" boost.python?). I'm hoping for a silver bullet here. How to make a C lib, running SDL, share a render target with Python, running PyGame? A: Have you tried something like the following to get SDL_Surface* from python object? PySurfaceObject *obj; SDL_Surface *surf; if (!PyArg_ParseTuple(args, 'O!', &PySurface_Type, &obj) { return NULL; # or other action for error } surf = PySurface_AsSurface(obj);
C lib with Python bindings where both want to render
I'm sketching out some fluid dynamics in Python. After a while, I was looking for a bit more speed, so I rewrote the actual logic in C and put up some Python bindings (using SWIG). My problem now is that I don't know how to render it in a good way. The logic is run pixel by pixel, so pixels are what I want to track and render. Python gives me a TypeError if I try to make a function in the C lib that accepts an SDL_Surface*; I was probably a bit naive to think that PyGame mapped that easily directly to SDL. Python also seems unsure what to do if I make the C lib's "init" return an SDL_Surface*. What is a good way to do this? It wouldn't be a problem if I just rendered everything in the C lib. But I want to put a GUI on there (using Python). The C lib already keeps track of which pixels are "dirty". Should I expose that list and let Python loop through it, calling a function for every dirty pixel? Seems bad, since those kinds of huge loops are the exact reason I wanted to rewrite parts of the app in C. And before anyone suggests it, boost.python is a bit heavy to install right now (since I'm on Windows), so I'll just stick to SWIG for the moment (unless anyone has a clever way to install "just" boost.python?). I'm hoping for a silver bullet here. How do I make a C lib, running SDL, share a render target with Python, running PyGame?
[ "Have you tried something like the following to get SDL_Surface* from python object?\nPySurfaceObject *obj;\nSDL_Surface *surf;\nif (!PyArg_ParseTuple(args, 'O!', &PySurface_Type, &obj) {\n return NULL; # or other action for error\n}\nsurf = PySurface_AsSurface(obj);\n\n" ]
[ 0 ]
[]
[]
[ "c", "pygame", "python", "sdl", "swig" ]
stackoverflow_0001785604_c_pygame_python_sdl_swig.txt
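One way to sidestep passing an SDL_Surface* through SWIG, as an alternative to the PySurface_AsSurface route in the answer, is to hand the C library a raw pixel buffer instead. The sketch below leans on pygame.surfarray (which needs NumPy) and ctypes; the library name, the fluid_step entry point and its signature are made up for illustration, and it assumes a 32-bit surface with no row padding.

import ctypes
import pygame
import pygame.surfarray as surfarray

pygame.init()
lib = ctypes.CDLL('./libfluid.so')          # fluid.dll on Windows

surface = pygame.display.set_mode((320, 240))
pixels = surfarray.pixels2d(surface)        # NumPy view sharing the surface's memory

# Hypothetical C entry point: fluid_step(width, height, uint32* pixels)
lib.fluid_step(ctypes.c_int(pixels.shape[0]),
               ctypes.c_int(pixels.shape[1]),
               pixels.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)))

del pixels                                  # drop the view to unlock the surface
pygame.display.flip()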
Q: Build a GQL query (for Google App Engine) that has a condition on ReferenceProperty Say I have the following model: class Schedule(db.Model): tripCode = db.StringProperty(required=True) station = db.ReferenceProperty(Station, required=True) arrivalTime = db.TimeProperty(required=True) departureTime = db.TimeProperty(required=True) And let's say I have a Station object stored in the var foo. How do I assemble a GQL query that returns all Schedule objects with a reference to the Station object referenced by foo? This is my best (albeit incorrect) attempt to form such a query: myQuery = "SELECT * FROM Schedule where station = " + str(foo.key()) Once again foo is a Station object A: You shouldn't be inserting user data into a GQL string using string substitution. GQL supports parameter substitution, so you can do this: db.GqlQuery("SELECT * FROM Schedule WHERE station = $1", foo.key()) or, using the Query interface: Schedule.all().filter("station =", foo.key()) A: An even easier thing to do is to change the model definition by adding the 'collection_name' field to the ReferenceProperty: station = db.ReferenceProperty(Station, required=True, collection_name="schedules") Then you can just do: foo.schedules whenever you want to get all the stations' schedules.
Build a GQL query (for Google App Engine) that has a condition on ReferenceProperty
Say I have the following model: class Schedule(db.Model): tripCode = db.StringProperty(required=True) station = db.ReferenceProperty(Station, required=True) arrivalTime = db.TimeProperty(required=True) departureTime = db.TimeProperty(required=True) And let's say I have a Station object stored in the var foo. How do I assemble a GQL query that returns all Schedule objects with a reference to the Station object referenced by foo? This is my best (albeit incorrect) attempt to form such a query: myQuery = "SELECT * FROM Schedule where station = " + str(foo.key()) Once again foo is a Station object
[ "You shouldn't be inserting user data into a GQL string using string substitution. GQL supports parameter substitution, so you can do this:\ndb.GqlQuery(\"SELECT * FROM Schedule WHERE station = $1\", foo.key())\n\nor, using the Query interface:\nSchedule.all().filter(\"station =\", foo.key())\n\n", "An even easier thing to do is to change the model definition by adding the 'collection_name' field to the ReferenceProperty: \n\nstation = db.ReferenceProperty(Station, required=True, collection_name=\"schedules\")\n\nThen you can just do: \n\nfoo.schedules \n\nwhenever you want to get all the stations' schedules. \n" ]
[ 10, 7 ]
[]
[]
[ "google_app_engine", "gql", "python" ]
stackoverflow_0000852055_google_app_engine_gql_python.txt
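A small usage sketch tying the two answers together: with collection_name set, the reverse reference foo.schedules is itself a Query, so it can be ordered and fetched just like the explicit filter. The fetch size is arbitrary, and the equality-plus-order combination may need a composite index on station and arrivalTime.

# Explicit query, filtering on the reference
schedules = Schedule.all().filter('station =', foo).order('arrivalTime').fetch(20)

# Same thing through the back-reference created by collection_name="schedules"
schedules = foo.schedules.order('arrivalTime').fetch(20)

for s in schedules:
    print s.tripCode, s.arrivalTime, s.departureTime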
Q: Catching update errors on MySQLdb I have a function that updates a MySQL table from a CSV file. The MySQL table contains the client account number -- this is what I use to compare with the CSV file. At some point, some of the queries will fail because the account number being compared from the CSV file has not been added yet. How do I get the records from the CSV file that failed during the update process? I wanted to store these records in a separate file and then re-read the file at a later time until all records have been successfully updated. Below is the function that updates the DB. def updateDatabase(records, options): """Update database""" import re # Regular expression library import MySQLdb # establish DB connection try: db = MySQLdb.connect(host="localhost", user="root", passwd="", db="demo") except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) sys.exit (1) # create cursor cursor = db.cursor() # tell MySQLdb to turn off auto-commit db.autocommit(False) # inform the user that this could take a while if len(records) > 499: print 'This process can take a while.' print 'Updating the database now...' # this is the actual loop maxrecords = len(records) for record in records: account_no, ag_1to15, ag_16to30, ag_31to60, ag_61to90, ag_91to120, beyond_120, total, status, credit_limit = record if re.match('1000', account_no): query = """UPDATE sys_accountscf SET cf_581 = %s, cf_583 = %s, cf_574 = %s, cf_575 = %s, cf_576 = %s, cf_577 = %s, cf_579 = %s, cf_585 = '%s', cf_558 = %s WHERE cf_538 = %s""" else: query = """UPDATE sys_accountscf SET cf_580 = %s, cf_582 = %s, cf_568 = %s, cf_569 = %s, cf_571 = %s, cf_572 = %s, cf_578 = %s, cf_584 = '%s', cf_555 = %s WHERE cf_535 = %s""" cursor.execute(query % (ag_1to15, ag_16to30, ag_31to60, ag_61to90, ag_91to120, beyond_120, total, status, credit_limit, account_no)) # commit all changes and close database connection try: db.commit() except: db.rollback() cursor.close() db.close() A: An update query returns the number of rows affected. Checking the Cursor.rowcount after you made am execute will give that number. If it is not 1, that that update row failed.
Catching update errors on MySQLdb
I have a function that updates a MySQL table from a CSV file. The MySQL table contains the client account number -- this is what I use to compare with the CSV file. At some point, some of the queries will fail because the account number being compared from the CSV file has not been added yet. How do I get the records from the CSV file that failed during the update process? I wanted to store these records in a separate file and then re-read the file at a later time until all records have been successfully updated. Below is the function that updates the DB. def updateDatabase(records, options): """Update database""" import re # Regular expression library import MySQLdb # establish DB connection try: db = MySQLdb.connect(host="localhost", user="root", passwd="", db="demo") except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) sys.exit (1) # create cursor cursor = db.cursor() # tell MySQLdb to turn off auto-commit db.autocommit(False) # inform the user that this could take a while if len(records) > 499: print 'This process can take a while.' print 'Updating the database now...' # this is the actual loop maxrecords = len(records) for record in records: account_no, ag_1to15, ag_16to30, ag_31to60, ag_61to90, ag_91to120, beyond_120, total, status, credit_limit = record if re.match('1000', account_no): query = """UPDATE sys_accountscf SET cf_581 = %s, cf_583 = %s, cf_574 = %s, cf_575 = %s, cf_576 = %s, cf_577 = %s, cf_579 = %s, cf_585 = '%s', cf_558 = %s WHERE cf_538 = %s""" else: query = """UPDATE sys_accountscf SET cf_580 = %s, cf_582 = %s, cf_568 = %s, cf_569 = %s, cf_571 = %s, cf_572 = %s, cf_578 = %s, cf_584 = '%s', cf_555 = %s WHERE cf_535 = %s""" cursor.execute(query % (ag_1to15, ag_16to30, ag_31to60, ag_61to90, ag_91to120, beyond_120, total, status, credit_limit, account_no)) # commit all changes and close database connection try: db.commit() except: db.rollback() cursor.close() db.close()
[ "An update query returns the number of rows affected.\nChecking the Cursor.rowcount after you made am execute will give that number. If it is not 1, that that update row failed.\n" ]
[ 1 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001788000_mysql_python.txt
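Building on the rowcount suggestion above, the update loop can collect the records whose UPDATE matched no row and write them to a CSV for a later retry. Only the bookkeeping around the existing cursor.execute call is new; values_for() below is a stand-in for the tuple that the original loop already builds.

import csv

failed = []                                       # records whose account was not found yet

for record in records:
    # ... choose `query` exactly as in the function above ...
    cursor.execute(query % values_for(record))    # existing update call
    if cursor.rowcount == 0:                      # the UPDATE matched no row
        failed.append(record)

if failed:
    out = open('failed_records.csv', 'wb')        # use 'w', newline='' on Python 3
    csv.writer(out).writerows(failed)
    out.close()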
Q: Using contexts in rdflib I am having trouble finding a clear, sensible example of usage of context with rdflib. ConjunctiveGraph does not accept contexts, and Graph is deprecated. How am I supposed to create and operate on different contexts within the same global ConjunctiveGraph ? A: Yes. This is the code import rdflib from rdflib.Graph import Graph conj=rdflib.ConjunctiveGraph() NS=rdflib.Namespace("http://example.com/#") NS_CTX=rdflib.Namespace("http://example.com/context/#") alice=NS.alice bob=NS.bob charlie=NS.charlie pizza=NS.pizza meat=NS.meat chocolate=NS.chocolate loves=NS.loves hates=NS.hates likes=NS.likes dislikes=NS.dislikes love_ctx=Graph(conj.store, NS_CTX.love) food_ctx=Graph(conj.store, NS_CTX.food) love_ctx.add( (alice, loves, bob) ) love_ctx.add( (alice, loves, charlie) ) love_ctx.add( (bob, hates, charlie) ) love_ctx.add( (charlie, loves, bob) ) food_ctx.add( (alice, likes, chocolate) ) food_ctx.add( (alice, likes, meat) ) food_ctx.add( (alice, dislikes, pizza) ) print "Full context" for t in conj: print t print "" print "Contexts" for c in conj.contexts(): print c print "love context" for t in love_ctx: print t print "food context" for t in food_ctx: print t And this is the output Full context (rdflib.URIRef('http://example.com/#bob'), rdflib.URIRef('http://example.com/#hates'), rdflib.URIRef('http://example.com/#charlie')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#chocolate')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#meat')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#dislikes'), rdflib.URIRef('http://example.com/#pizza')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#charlie')) (rdflib.URIRef('http://example.com/#charlie'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob')) Contexts <http://example.com/context/#food> a rdfg:Graph;rdflib:storage [a rdflib:Store;rdfs:label 'IOMemory']. <http://example.com/context/#love> a rdfg:Graph;rdflib:storage [a rdflib:Store;rdfs:label 'IOMemory']. love context (rdflib.URIRef('http://example.com/#bob'), rdflib.URIRef('http://example.com/#hates'), rdflib.URIRef('http://example.com/#charlie')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#charlie')) (rdflib.URIRef('http://example.com/#charlie'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob')) food context (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#chocolate')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#meat')) (rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#dislikes'), rdflib.URIRef('http://example.com/#pizza'))
Using contexts in rdflib
I am having trouble finding a clear, sensible example of usage of context with rdflib. ConjunctiveGraph does not accept contexts, and Graph is deprecated. How am I supposed to create and operate on different contexts within the same global ConjunctiveGraph ?
[ "Yes. This is the code\nimport rdflib\nfrom rdflib.Graph import Graph\n\nconj=rdflib.ConjunctiveGraph()\n\nNS=rdflib.Namespace(\"http://example.com/#\")\nNS_CTX=rdflib.Namespace(\"http://example.com/context/#\")\n\nalice=NS.alice\nbob=NS.bob\ncharlie=NS.charlie\n\npizza=NS.pizza\nmeat=NS.meat\nchocolate=NS.chocolate\n\nloves=NS.loves\nhates=NS.hates\nlikes=NS.likes\ndislikes=NS.dislikes\n\nlove_ctx=Graph(conj.store, NS_CTX.love)\nfood_ctx=Graph(conj.store, NS_CTX.food)\n\nlove_ctx.add( (alice, loves, bob) )\nlove_ctx.add( (alice, loves, charlie) )\nlove_ctx.add( (bob, hates, charlie) )\nlove_ctx.add( (charlie, loves, bob) )\n\nfood_ctx.add( (alice, likes, chocolate) )\nfood_ctx.add( (alice, likes, meat) )\nfood_ctx.add( (alice, dislikes, pizza) )\n\nprint \"Full context\"\nfor t in conj:\n print t\n\nprint \"\"\nprint \"Contexts\"\nfor c in conj.contexts():\n print c\n\nprint \"love context\"\nfor t in love_ctx:\n print t\n\nprint \"food context\"\nfor t in food_ctx:\n print t\n\nAnd this is the output\nFull context\n(rdflib.URIRef('http://example.com/#bob'), rdflib.URIRef('http://example.com/#hates'), rdflib.URIRef('http://example.com/#charlie'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#chocolate'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#meat'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#dislikes'), rdflib.URIRef('http://example.com/#pizza'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#charlie'))\n(rdflib.URIRef('http://example.com/#charlie'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob'))\n\nContexts\n<http://example.com/context/#food> a rdfg:Graph;rdflib:storage [a rdflib:Store;rdfs:label 'IOMemory'].\n<http://example.com/context/#love> a rdfg:Graph;rdflib:storage [a rdflib:Store;rdfs:label 'IOMemory'].\nlove context\n(rdflib.URIRef('http://example.com/#bob'), rdflib.URIRef('http://example.com/#hates'), rdflib.URIRef('http://example.com/#charlie'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#charlie'))\n(rdflib.URIRef('http://example.com/#charlie'), rdflib.URIRef('http://example.com/#loves'), rdflib.URIRef('http://example.com/#bob'))\nfood context\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#chocolate'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#likes'), rdflib.URIRef('http://example.com/#meat'))\n(rdflib.URIRef('http://example.com/#alice'), rdflib.URIRef('http://example.com/#dislikes'), rdflib.URIRef('http://example.com/#pizza'))\n\n" ]
[ 14 ]
[]
[]
[ "python", "rdf", "rdflib" ]
stackoverflow_0001788063_python_rdf_rdflib.txt
Q: Python and .NET integration I'm currently looking at python because I really like the text parsing capabilities and the nltk library, but traditionally I am a .Net/C# programmer. I don't think IronPython is an integration point for me because I am using NLTK and presumably would need a port of that library to the CLR. I've looked a little at Python for .NET and was wondering if this was a good place to start. Is there a way to marshal a python class into C#? Also, is this solution still being used? Better yet, has anyone done this? One thing I am considering is just using a persistence medium as a go-between (parse in Python, store in MongoDB, and run site in .NET). A: NLTK is pure-python and thus can be made to run on IronPython easily. A search turned up this ticket - all one has to do is install a couple of extra Python libraries that don't come by default with IronPython. This is probably the easiest way for you to integrate. Otherwise, you'll have to either run Python as a subprocess, which sounds complex, or run Python as a server that answers your requests. This is probably the most scalable, though complex, approach. If you go this way, consider Twisted to simplify the server code. But do try IronPython first... A: I don't know why you have a problem with IronPython. you can still use any and all nltk calls there. To answer your question about porting a Python class into C#: try compiling your python code into an EXE. This creates a DLL with all your python classes in it. This is something that has been around for a while and it has worked like a charm for me in the past A: Just an Idea How about running Python behind as a server, and connect it from .NET with socket? Since NLTK loading take time and better load it in advance anyway.
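One way to make the "run Python as a server" suggestion above concrete is a small line-oriented socket service that loads NLTK once and answers tokenization requests; a C# client could then talk to it over System.Net.Sockets. This is only a sketch under assumptions of my own (the port, the one-line-per-request protocol and the word_tokenize call are illustrative, not from the original answers):

import SocketServer          # Python 2 stdlib; named socketserver in Python 3
import nltk

class TokenizeHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # read one line of text from the client and send back its tokens
        line = self.rfile.readline().strip()
        tokens = nltk.word_tokenize(line)
        self.wfile.write(' '.join(tokens) + '\n')

if __name__ == '__main__':
    # NLTK is imported once at startup, so the per-request cost stays low
    server = SocketServer.TCPServer(('localhost', 9999), TokenizeHandler)
    server.serve_forever()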
Python and .NET integration
I'm currently looking at python because I really like the text parsing capabilities and the nltk library, but traditionally I am a .Net/C# programmer. I don't think IronPython is an integration point for me because I am using NLTK and presumably would need a port of that library to the CLR. I've looked a little at Python for .NET and was wondering if this was a good place to start. Is there a way to marshal a python class into C#? Also, is this solution still being used? Better yet, has anyone done this? One thing I am considering is just using a persistence medium as a go-between (parse in Python, store in MongoDB, and run site in .NET).
[ "NLTK is pure-python and thus can be made to run on IronPython easily. A search turned up this ticket - all one has to do is install a couple of extra Python libraries that don't come by default with IronPython.\nThis is probably the easiest way for you to integrate. Otherwise, you'll have to either run Python as a subprocess, which sounds complex, or run Python as a server that answers your requests. This is probably the most scalable, though complex, approach. If you go this way, consider Twisted to simplify the server code.\nBut do try IronPython first...\n", "I don't know why you have a problem with IronPython. you can still use any and all nltk calls there.\nTo answer your question about porting a Python class into C#: try compiling your python code into an EXE. This creates a DLL with all your python classes in it. This is something that has been around for a while and it has worked like a charm for me in the past\n", "Just an Idea\nHow about running Python behind as a server, and connect it from .NET with socket?\nSince NLTK loading take time and better load it in advance anyway.\n" ]
[ 9, 6, 2 ]
[]
[]
[ ".net", "nlp", "nltk", "python", "python.net" ]
stackoverflow_0001787755_.net_nlp_nltk_python_python.net.txt
Q: How to read data from file and display in QEditText box in QT i would like to read a line of data from text file and display that data in Text Edit box A: It's quite simple, actually: import sys from PyQt4.QtCore import * from PyQt4.QtGui import * FILENAME = 'textedit_example.py' class Form(QDialog): def __init__(self, parent=None): super(Form, self).__init__(parent) self.edit = QTextEdit() layout = QVBoxLayout() layout.addWidget(self.edit) self.setLayout(layout) self.edit.setText("No file found") with open(FILENAME) as f: self.edit.setText(f.readline()) app = QApplication(sys.argv) form = Form() form.show() app.exec_() Some notes: Save it as 'textedit_example.py' and run. You'll see the first line of the source in the text box (import sys) It requires Python 2.6 and latest PyQt4 to run A: You may interest to look at this tutorial Simple Text Viewer Example
How to read data from file and display in QEditText box in QT
I would like to read a line of data from a text file and display that data in a Text Edit box
[ "It's quite simple, actually:\nimport sys\nfrom PyQt4.QtCore import *\nfrom PyQt4.QtGui import *\n\n\nFILENAME = 'textedit_example.py'\n\n\nclass Form(QDialog):\n def __init__(self, parent=None):\n super(Form, self).__init__(parent)\n self.edit = QTextEdit()\n layout = QVBoxLayout()\n layout.addWidget(self.edit)\n self.setLayout(layout)\n\n self.edit.setText(\"No file found\")\n\n with open(FILENAME) as f:\n self.edit.setText(f.readline())\n\n\napp = QApplication(sys.argv)\nform = Form()\nform.show()\napp.exec_()\n\nSome notes:\n\nSave it as 'textedit_example.py' and run. You'll see the first line of the source in the text box (import sys)\nIt requires Python 2.6 and latest PyQt4 to run\n\n", "You may interest to look at this tutorial \nSimple Text Viewer Example\n" ]
[ 2, 0 ]
[]
[]
[ "c++", "linux", "python", "qt4", "ubuntu" ]
stackoverflow_0001788062_c++_linux_python_qt4_ubuntu.txt
Q: Django: Serving admin media files I am trying to serve static files from another domain (sub domain of current domain). To serve all media files I used this settings: MEDIA_URL = 'http://media.bud-inform.co.ua/' So when in template I used {{ MEDIA_URL }} it was replace with the setting above. Now I am trying to serve admin media files from the same subdomain, I changed the settings this way: ADMIN_MEDIA_PREFIX = 'http://media.bud-inform.co.ua/admin_media/', and expected that all calls to media from my admin site will be made to this url.... But actually it didn't work this way, I still see paths to CSS made as following: http://bud-inform.co.ua/media/css/login.css Could you suggest how to serve admin media files correctly A: MEDIA_URL and ADMIN_MEDIA_PREFIX are two different things. One is the location of your media files, while the other is the location of the django admin system's media files. You have to make sure that the ADMIN_MEDIA_PREFIX points to somewhere where you're actually serving the admin media. Django doesn't handle that step for you. The django admin media is at django/contrib/admin/media/. Copy or symlink that directory somewhere publicly visible and set ADMIN_MEDIA_PREFIX to reflect where you put it.
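Two notes on the answer above. First, if the trailing comma shown after the question's ADMIN_MEDIA_PREFIX value is actually in settings.py (rather than just punctuation in the sentence), it makes that setting a one-element tuple instead of a string, which by itself would break the prefix. Second, a minimal sketch of a working setup; the filesystem paths are assumptions for illustration:

# settings.py
MEDIA_URL = 'http://media.bud-inform.co.ua/'
ADMIN_MEDIA_PREFIX = 'http://media.bud-inform.co.ua/admin_media/'   # no trailing comma

# On the host serving media.bud-inform.co.ua, the admin media directory
# shipped with Django must be exposed under /admin_media/, for example:
#   ln -s /usr/lib/python2.5/site-packages/django/contrib/admin/media \
#         /path/to/media_root/admin_media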
Django: Serving admin media files
I am trying to serve static files from another domain (a subdomain of the current domain). To serve all media files I used this setting: MEDIA_URL = 'http://media.bud-inform.co.ua/' So when I used {{ MEDIA_URL }} in a template it was replaced with the setting above. Now I am trying to serve admin media files from the same subdomain, so I changed the settings this way: ADMIN_MEDIA_PREFIX = 'http://media.bud-inform.co.ua/admin_media/', and expected that all calls to media from my admin site would be made to this URL.... But it didn't work this way; I still see paths to CSS like the following: http://bud-inform.co.ua/media/css/login.css Could you suggest how to serve admin media files correctly?
[ "MEDIA_URL and ADMIN_MEDIA_PREFIX are two different things. One is the location of your media files, while the other is the location of the django admin system's media files.\nYou have to make sure that the ADMIN_MEDIA_PREFIX points to somewhere where you're actually serving the admin media. Django doesn't handle that step for you.\nThe django admin media is at django/contrib/admin/media/. Copy or symlink that directory somewhere publicly visible and set ADMIN_MEDIA_PREFIX to reflect where you put it.\n" ]
[ 2 ]
[]
[]
[ "django", "django_admin", "python" ]
stackoverflow_0001788274_django_django_admin_python.txt
Q: How to determine if data is valid tar file without a file? My upload form expects a tar file and I want to check whether the uploaded data is valid. The tarfile module supports is_tarfile(), but expects a filename - I don't want to waste resources writing the file to disk just to check if it is valid. Is there a way to check the data is a valid tar file without writing to disk, using standard Python libraries? A: The tar file format is here on Wikipedia. I suspect your best bet would be to check that the header checksum for the first file is valid. You may also want to check the file name for sanity but that may not be reliable, depending on the file names that have been stored in there. Duplicating the relevant information here: Offset Size Description 0 100 File name 100 8 File mode 108 8 Owner's numeric user ID 116 8 Group's numeric user ID 124 12 File size in bytes 136 12 Last modification time in numeric Unix time format 148 8 Checksum for header block 156 1 Link indicator (file type) 157 100 Name of linked file The checksum is calculated by taking the sum of the unsigned byte values of the header block with the eight checksum bytes taken to be ASCII spaces (decimal value 32). It is stored as a six digit octal number with leading zeroes followed by a null and then a space. Various implementations do not adhere to this, so relying on the first white space trimmed six digits for checksum yields better compatibility. In addition, some historic tar implementations treated bytes as signed. Readers must calculate the checksum both ways, and treat it as good if either the signed or unsigned sum matches the included checksum. There is also the UStar format (also detailed in that link) but, since it's an extension to the old tar format, the method detailed above should still work. UStar is generally for just storing extra information about each file. Alternatively, since Python is open source, you could see how is_tarfile works and adapt it to check your stream rather than a file. The source code is available here under Python-3.1.1/Lib/tarfile.py but it's not for the faint of heart :-) A: The class TarFile accepts a fileobj object. I guess you can pass any partial download entity you get from your web framework. __init__(self, name=None, mode='r', fileobj=None) Adding to paxdiablo post: tar is a very difficult and complex file format, despite its apparent simplicity. You can check basic constraint, but if you have to support all the possible existing tar dialects you are going to waste a lot of time. Most of its complexity comes from the following issues: absence of a real standard until a de-facto standard existed (UStar/pax) holes in the specification leaving vendors grey areas where each one implemented their own solution vendors saying "our tar is better, and it will take over t3h world" limitations, and workarounds for these limitations (e.g. filename length) Also, there format has no upfront header, so the only way to check if the whole archive is sane is to scan the file completely, catch each record, and validate each one. A: The open method of tarfile takes a file-like object in its fileObj argument. This can be a StringIO instance A: Say your uploaded data is contained in string data. from tarfile import TarFile, TarError from StringIO import StringIO sio = StringIO(data) try: tf = TarFile(fileobj=sio) # process the file.... except TarError: print "Not a tar file" There are additional complexities such as handling different tar file formats and compression. 
More info is available in the tarfile documentation.
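A minimal sketch of the header-checksum check described in the first answer, assuming the uploaded bytes are already in memory as a byte string named data; it validates only the first 512-byte header and skips the signed-byte variant mentioned above:

def looks_like_tar(data):
    block = data[:512]
    if len(block) < 512:
        return False
    # stored checksum: six octal digits, NUL, space (with some variation)
    try:
        stored = int(block[148:156].split('\0')[0].strip() or '0', 8)
    except ValueError:
        return False
    # recompute with the checksum field itself counted as eight spaces
    computed = (sum(ord(c) for c in block[:148])
                + 8 * ord(' ')
                + sum(ord(c) for c in block[156:]))
    return computed == stored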
How to determine if data is valid tar file without a file?
My upload form expects a tar file and I want to check whether the uploaded data is valid. The tarfile module supports is_tarfile(), but expects a filename - I don't want to waste resources writing the file to disk just to check if it is valid. Is there a way to check the data is a valid tar file without writing to disk, using standard Python libraries?
[ "The tar file format is here on Wikipedia.\nI suspect your best bet would be to check that the header checksum for the first file is valid. You may also want to check the file name for sanity but that may not be reliable, depending on the file names that have been stored in there.\nDuplicating the relevant information here:\nOffset Size Description\n 0 100 File name\n 100 8 File mode\n 108 8 Owner's numeric user ID\n 116 8 Group's numeric user ID\n 124 12 File size in bytes\n 136 12 Last modification time in numeric Unix time format\n 148 8 Checksum for header block\n 156 1 Link indicator (file type)\n 157 100 Name of linked file\n\n\nThe checksum is calculated by taking the sum of the unsigned byte values of the header block with the eight checksum bytes taken to be ASCII spaces (decimal value 32).\nIt is stored as a six digit octal number with leading zeroes followed by a null and then a space.\nVarious implementations do not adhere to this, so relying on the first white space trimmed six digits for checksum yields better compatibility. In addition, some historic tar implementations treated bytes as signed.\nReaders must calculate the checksum both ways, and treat it as good if either the signed or unsigned sum matches the included checksum.\n\nThere is also the UStar format (also detailed in that link) but, since it's an extension to the old tar format, the method detailed above should still work. UStar is generally for just storing extra information about each file.\nAlternatively, since Python is open source, you could see how is_tarfile works and adapt it to check your stream rather than a file. The source code is available here under Python-3.1.1/Lib/tarfile.py but it's not for the faint of heart :-)\n", "The class TarFile accepts a fileobj object. I guess you can pass any partial download entity you get from your web framework.\n__init__(self, name=None, mode='r', fileobj=None)\n\nAdding to paxdiablo post: tar is a very difficult and complex file format, despite its apparent simplicity. You can check basic constraint, but if you have to support all the possible existing tar dialects you are going to waste a lot of time. Most of its complexity comes from the following issues:\n\nabsence of a real standard until a de-facto standard existed (UStar/pax)\nholes in the specification leaving vendors grey areas where each one implemented their own solution\nvendors saying \"our tar is better, and it will take over t3h world\"\nlimitations, and workarounds for these limitations (e.g. filename length)\n\nAlso, there format has no upfront header, so the only way to check if the whole archive is sane is to scan the file completely, catch each record, and validate each one. \n", "The open method of tarfile takes a file-like object in its fileObj argument. This can be a StringIO instance\n", "Say your uploaded data is contained in string data.\nfrom tarfile import TarFile, TarError\nfrom StringIO import StringIO\n\nsio = StringIO(data)\ntry:\n tf = TarFile(fileobj=sio)\n # process the file....\nexcept TarError:\n print \"Not a tar file\"\n\nThere are additional complexities such as handling different tar file formats and compression. More info is available in the tarfile documentation.\n" ]
[ 5, 3, 3, 3 ]
[]
[]
[ "python", "tar", "tarfile" ]
stackoverflow_0001788236_python_tar_tarfile.txt
Q: How do I remove something form a list, plus string matching? [(',', 52), ('news', 15), ('.', 11), ('bbc', 8), ('and', 8), ('the', 8), (':', 6), ('music', 5), ('-', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)] Suppose I have this list, with tuples inside. How do I go through the list and remove elements that don't have alphabetical characters in them? So that it becomes this: [('news', 15), ('bbc', 8), ('and', 8), ('the', 8), ('music', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)] A: the_list = [(a, b) for a, b in the_list if a.isalpha()] A: Easiest should be a list comprehension with a regular expression: import re lst = [...] lst = [t for t in lst if re.search(r'\w', t[0])] A: @OP, just go through the list items one by one, and check the first element of each item. This is just our simple and basic thought process. No need to think too deeply about being pythonic or not, or using fanciful list comprehensions etc.. keep everything simple. l = [(',', 52), ('news', 15), ('.', 11), ('bbc', 8), ('and', 8), ('the', 8), (':', 6), ('music', 5), ('-', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)] for item in l: if item[0].isalpha(): print item output $ ./python.py ('news', 15) ('bbc', 8) ('and', 8) ('the', 8) ('music', 5) ('blog', 4) ('world', 4) ('asia', 4) ('international', 4) ('on', 4) ('itunes', 4) ('online', 4) ('digital', 3) A: This uses string.ascii_letters, but SilentGhost's solution is to be preferred. >>> from string import ascii_letters >>> [(a, b) for a, b in l if all(c in ascii_letters for c in a)] [('news', 15), ('bbc', 8), ('and', 8), ('the', 8), ('music', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)] A: you could use built-in filter function too, Its dedicated to that purpose actually. filter(lambda x:x[0].isalpha(),LIST) The result is like this [('news', 15), ('bbc', 8), ('and', 8), ('the', 8), ('music', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)]
How do I remove something from a list, plus string matching?
[(',', 52), ('news', 15), ('.', 11), ('bbc', 8), ('and', 8), ('the', 8), (':', 6), ('music', 5), ('-', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)] Suppose I have this list, with tuples inside. How do I go through the list and remove elements that don't have alphabetical characters in them? So that it becomes this: [('news', 15), ('bbc', 8), ('and', 8), ('the', 8), ('music', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)]
[ "the_list = [(a, b) for a, b in the_list if a.isalpha()]\n\n", "Easiest should be a list comprehension with a regular expression:\nimport re\n\nlst = [...]\nlst = [t for t in lst if re.search(r'\\w', t[0])]\n\n", "@OP, just go through the list items one by one, and check the first element of each item. This is just our simple and basic thought process. No need to think too deeply about being pythonic or not, or using fanciful list comprehensions etc.. keep everything simple.\nl = [(',', 52),\n ('news', 15),\n ('.', 11),\n ('bbc', 8),\n ('and', 8),\n ('the', 8),\n (':', 6),\n ('music', 5),\n ('-', 5),\n ('blog', 4),\n ('world', 4),\n ('asia', 4),\n ('international', 4),\n ('on', 4),\n ('itunes', 4),\n ('online', 4),\n ('digital', 3)]\n\nfor item in l:\n if item[0].isalpha():\n print item\n\noutput\n$ ./python.py\n('news', 15)\n('bbc', 8)\n('and', 8)\n('the', 8)\n('music', 5)\n('blog', 4)\n('world', 4)\n('asia', 4)\n('international', 4)\n('on', 4)\n('itunes', 4)\n('online', 4)\n('digital', 3)\n\n", "This uses string.ascii_letters, but SilentGhost's solution is to be preferred.\n>>> from string import ascii_letters\n>>> [(a, b) for a, b in l if all(c in ascii_letters for c in a)]\n[('news', 15), ('bbc', 8), ('and', 8), ('the', 8), ('music', 5), ('blog', 4), ('world', 4), ('asia', 4), ('international', 4), ('on', 4), ('itunes', 4), ('online', 4), ('digital', 3)]\n\n", "you could use built-in filter function too, Its dedicated to that purpose actually.\nfilter(lambda x:x[0].isalpha(),LIST)\n\nThe result is like this\n[('news', 15), \n('bbc', 8), \n('and', 8), \n('the', 8), \n('music', 5), \n('blog', 4), \n('world', 4), \n('asia', 4),\n('international', 4), \n('on', 4), \n('itunes', 4), \n('online', 4), \n('digital', 3)]\n\n" ]
[ 10, 3, 1, 0, 0 ]
[]
[]
[ "list", "python", "regex" ]
stackoverflow_0001788710_list_python_regex.txt
Q: python inheritance and __init__ functions I came across the folloqing type of code when looking for some pyQt examples : class DisplayPage(QWizardPage): def __init__(self, *args): apply(QWizardPage.__init__, (self, ) + args) What does *args mean ? What is the purpose of using apply for this type of code ? A: *args means that __init__ takes any number of positional arguments, all of which will be stored in the list args. For more on that, see What does *args and **kwargs mean? This piece of code uses the deprecated apply function. Nowadays you would write this in one of three ways: QWizardPage.__init__(self, *args) super(DisplayPage, self).__init__(*args) super().__init__(*args) The first line is a literal translation of what apply does (don't use it in this case, unless QWizardPage is not a new-style class). The second uses super as defined in PEP 367. The third uses super as defined in PEP 3135 (works only in Python 3.x). A: DisplayPage inherits from QWizardPage. Its constructor accepts a variable amount of arguments (which is what *args means), and passes them all to the constructor of its parent, QWizardPage It's better to say: super(DisplayPage, self).__init__(*args) A: "Variable length argument lists": http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/ Basically, it's just saying, take all the arguments that were passed to DisplayPage's __init__ method and pass them to QWizardPage's __init__ method. A: In a parameter list (definition of a function) *args is Python's way of representing "variable arguments" (called "varargs" in C and C like languages). In an argument list (a call to a function) *args has the complementary meaning ... it "applies" the function to the value of the variable as if they'd been unpacked and "pasted" into the function's call. This distinction between "parameters" and "arguments" is one that's often not elucidated. A parameter is a slot into which arguments are placed. Arguments are supplied to a function call. Parameters are the names by which arguments can be referred from within the scope of the function. So if I define a function: def foo(x, *a): print "First arg: ", x print "Other args: ", ' '.join([str(x) for x in a]) I can call it thus: foo(1, 2, 3, 4) ... and my code will see 1 as "x" (the argument is an object reference to the integer 1, bound to the parameter named "x") and the list [2,3,4] as a (the argument will be an object reference to a three item list and bound to the function's parameter named "a"). If I bind the following tuple: bar = (1, 2, 3, 4) ... and call foo() thus: foo(*bar) ... it will be a call that's identical to my previous example. "bar" will be unpacked, and passed to foo() as a sequence of 4 arguments. This particular function would bind 1 to the first parameter and pack any number of other arguments into the a parameter. However I could call some other function: geewhiz(*bar) ... and it would be passed four arguments just as I described for foo(). (If geewhiz() was written to take only 3 arguments then Python will raise a TypeError for calling a function with the wrong number of arguments ... exactly as it would if you called geewhiz(1,2,3,4). In general Python's support for defining functions taking defaulted arguments, variable numbers of arguments, and keyword arguments is more flexible than any other scripting language I've ever seen. However all that power and flexibility can be a bit confusing. In Python3 they've also added some wrinkles to tuple packing assignments. 
Tuple packing assignments look like: a, b = 1, 2 ... and also show up frequently in code like: for key, val in my_dict.items(): ... Each of the items is being returned by the .items() method as a tuple, and being packed into the key, val tuple. (Tuples in Python don't require enclosing parentheses. The , is the tuple-token). Now in Python3 it's possible to do something like this: a, *b = 1, 2, 3, 4 ... which, as you might guess, binds the first element to "a" and the rest are packed into another tuple which is bound to "b." While this really isn't related to *args in function parameter lists I mention it because they are conceptually and syntactically similar.
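Putting the first answer together with the original snippet, the class would normally be written without apply at all (a sketch assuming PyQt4's QWizardPage, which is a new-style class):

from PyQt4.QtGui import QWizardPage

class DisplayPage(QWizardPage):
    def __init__(self, *args):
        # forward whatever positional arguments were supplied straight to
        # the QWizardPage constructor; no apply() needed
        super(DisplayPage, self).__init__(*args)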
python inheritance and __init__ functions
I came across the folloqing type of code when looking for some pyQt examples : class DisplayPage(QWizardPage): def __init__(self, *args): apply(QWizardPage.__init__, (self, ) + args) What does *args mean ? What is the purpose of using apply for this type of code ?
[ "*args means that __init__ takes any number of positional arguments, all of which will be stored in the list args. For more on that, see What does *args and **kwargs mean?\nThis piece of code uses the deprecated apply function. Nowadays you would write this in one of three ways:\n QWizardPage.__init__(self, *args)\n super(DisplayPage, self).__init__(*args)\n super().__init__(*args)\n\nThe first line is a literal translation of what apply does (don't use it in this case, unless QWizardPage is not a new-style class). The second uses super as defined in PEP 367. The third uses super as defined in PEP 3135 (works only in Python 3.x).\n", "DisplayPage inherits from QWizardPage. Its constructor accepts a variable amount of arguments (which is what *args means), and passes them all to the constructor of its parent, QWizardPage\nIt's better to say:\nsuper(DisplayPage, self).__init__(*args)\n\n", "\"Variable length argument lists\": http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/\nBasically, it's just saying, take all the arguments that were passed to DisplayPage's __init__ method and pass them to QWizardPage's __init__ method.\n", "In a parameter list (definition of a function) *args is Python's way of representing \"variable arguments\" (called \"varargs\" in C and C like languages). In an argument list (a call to a function) *args has the complementary meaning ... it \"applies\" the function to the value of the variable as if they'd been unpacked and \"pasted\" into the function's call.\nThis distinction between \"parameters\" and \"arguments\" is one that's often not elucidated. A parameter is a slot into which arguments are placed. Arguments are supplied to a function call. Parameters are the names by which arguments can be referred from within the scope of the function.\nSo if I define a function:\ndef foo(x, *a):\n print \"First arg: \", x\n print \"Other args: \", ' '.join([str(x) for x in a])\n\nI can call it thus:\nfoo(1, 2, 3, 4)\n\n... and my code will see 1 as \"x\" (the argument is an object reference to the integer 1, bound to the parameter named \"x\") and the list [2,3,4] as a (the argument will be an object reference to a three item list and bound to the function's parameter named \"a\").\nIf I bind the following tuple:\nbar = (1, 2, 3, 4)\n\n... and call foo() thus:\nfoo(*bar)\n\n... it will be a call that's identical to my previous example. \"bar\" will be unpacked, and passed to foo() as a sequence of 4 arguments. This particular function would bind 1 to the first parameter and pack any number of other arguments into the a parameter. However I could call some other function:\ngeewhiz(*bar)\n\n... and it would be passed four arguments just as I described for foo(). (If geewhiz() was written to take only 3 arguments then Python will raise a TypeError for calling a function with the wrong number of arguments ... exactly as it would if you called geewhiz(1,2,3,4).\nIn general Python's support for defining functions taking defaulted arguments, variable numbers of arguments, and keyword arguments is more flexible than any other scripting language I've ever seen. However all that power and flexibility can be a bit confusing.\nIn Python3 they've also added some wrinkles to tuple packing assignments. Tuple packing assignments look like:\na, b = 1, 2\n\n... and also show up frequently in code like:\nfor key, val in my_dict.items():\n ...\n\nEach of the items is being returned by the .items() method as a tuple, and being packed into the key, val tuple. 
(Tuples in Python don't require enclosing parentheses. The , is the tuple-token).\nNow in Python3 it's possible to do something like this:\na, *b = 1, 2, 3, 4\n\n... which, as you might guess, binds the first element to \"a\" and the rest are packed into another tuple which is bound to \"b.\"\nWhile this really isn't related to *args in function parameter lists I mention it because they are conceptually and syntactically similar.\n" ]
[ 10, 3, 1, 0 ]
[]
[]
[ "pyqt", "python" ]
stackoverflow_0001788842_pyqt_python.txt
Q: Redirecting CGI error output from STDERR to a file (python AND perl) I'm moving a website to Hostmonster and asked where the server log is located so I can automatically scan it for CGI errors. I was told, "We're sorry, but we do not have cgi errors go to any files that you have access to." For organizational reasons I'm stuck with Hostmonster and this awful policy, so as a workaround I thought maybe I'd modify the CGI scripts to redirect STDERR to a custom log file. I have a lot of scripts (269) so I need an easy way in both Python and Perl to redirect STDERR to a custom log file. Something that accounts for file locking either explicitly or implicitly would be great, since a shared CGI error log file could theoretically be written to by more than one script at once if more than one script fails at the same time. (I want to use a shared error log so I can email its contents to myself nightly and then archive or delete it.) I know I may have to modify each file (grrr), that's why I'm looking for something elegant that will only be a few lines of code. Thanks. A: For Perl, just close and re-open STDERR to point to a file of your choice. close STDERR; open STDERR, '>>', '/path/to/your/log.txt' or die "Couldn't redirect STDERR: $!"; warn "this will go to log.txt"; Alternatively, you could look into a filehandle multiplexer like File::Tee. A: Python: cgitb. At the top of your script, before other imports: import cgitb cgitb.enable(False, '/home/me/www/myapp/logs/errors') (‘errors’ being a directory the web server user has write-access to.) A: In Perl try CGI::Carp BEGIN { use CGI::Carp qw(carpout); use diagnostics; open(LOG, ">errors.txt"); carpout(LOG); close(LOG); } use CGI::Carp qw(fatalsToBrowser); A: python: import sys sys.stderr = open('file_path_with_write_permission/filename', 'a') A: The solution I finally went with was similar to the following, near the top of all my scripts: Perl: open(STDERR,">>","/path/to/my/cgi-error.log") or die "Could not redirect STDERR: $OS_ERROR"; Python: sys.stderr = open("/path/to/my/cgi-error.log", "a") Apparently in Perl you don't need to close the STDERR handle before reopening it. Normally I would close it anyway as a best practice, but as I said in the question, I have 269 scripts and I'm trying to minimize the changes. (Plus it seems more Perlish to just re-open the open filehandle, as awful as that sounds.) In case anyone else has something similar in the future, here's what I'm going to do for updating all my scripts at once: Perl: find . -type f -name "*.pl" -exec perl -pi.bak -e 's%/usr/bin/perl%/usr/bin/perl\nopen(STDERR,">>","/path/to/my/cgi-error.log")\n or die "Could not redirect STDERR: \$OS_ERROR";%' {} \; Python: find . -type f -name "*.py" -exec perl -pi.bak -e 's%^(import os, sys.*)%$1\nsys.stderr = open("/path/to/my/cgi-error.log", "a")%' {} \; The reason I'm posting these commands is that it took me quite a lot of syntactical massaging to get those commands to work (e.g., changing Couldn't to Could not, changing #!/usr/bin/perl to just /usr/bin/perl so the shell wouldn't interpret ! as a history character, using $OS_ERROR instead of $!, etc.) Thanks to everyone who commented. Since no one answered for both Perl and Python I couldn't really "accept" any of the given answers, but I did give votes to the ones which led me in the right direction. Thanks again! A: Python has the sys.stderr module that you might want to look into. >>>help(sys.__stderr__.read) Help on built-in function read: read(...) 
read([size]) -> read at most size bytes, returned as a string. If the size argument is negative or omitted, read until EOF is reached. Notice that when in non-blocking mode, less data than what was requested may be returned, even if no size parameter was given. You can store the output of this in a string and write that string to a file. Hope this helps A: In my Perl CGI programs, I usually have BEGIN { open(STDERR,'>>','stderr.log'); } right after shebang line and "use strict;use warnings;". If you want, you may append $0 to file name. But this will not solve multiple programs problem, as several copies of one programs may be run simultaneously. I usually just have several output files, for every program group.
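None of the answers touch the locking requirement from the question. For short appends on a local Unix filesystem an append-mode handle is usually good enough, but an explicit lock can be taken around each write; this is a sketch for the Python scripts only, and the log path is illustrative:

import sys, fcntl

class LockedLog(object):
    # minimal file-like object: exclusive flock around every write
    def __init__(self, path):
        self._f = open(path, 'a')
    def write(self, text):
        fcntl.flock(self._f, fcntl.LOCK_EX)
        try:
            self._f.write(text)
            self._f.flush()
        finally:
            fcntl.flock(self._f, fcntl.LOCK_UN)

sys.stderr = LockedLog('/path/to/my/cgi-error.log')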
Redirecting CGI error output from STDERR to a file (python AND perl)
I'm moving a website to Hostmonster and asked where the server log is located so I can automatically scan it for CGI errors. I was told, "We're sorry, but we do not have cgi errors go to any files that you have access to." For organizational reasons I'm stuck with Hostmonster and this awful policy, so as a workaround I thought maybe I'd modify the CGI scripts to redirect STDERR to a custom log file. I have a lot of scripts (269) so I need an easy way in both Python and Perl to redirect STDERR to a custom log file. Something that accounts for file locking either explicitly or implicitly would be great, since a shared CGI error log file could theoretically be written to by more than one script at once if more than one script fails at the same time. (I want to use a shared error log so I can email its contents to myself nightly and then archive or delete it.) I know I may have to modify each file (grrr), that's why I'm looking for something elegant that will only be a few lines of code. Thanks.
[ "For Perl, just close and re-open STDERR to point to a file of your choice.\nclose STDERR;\nopen STDERR, '>>', '/path/to/your/log.txt' \n or die \"Couldn't redirect STDERR: $!\";\n\nwarn \"this will go to log.txt\";\n\nAlternatively, you could look into a filehandle multiplexer like File::Tee.\n", "Python: cgitb. At the top of your script, before other imports:\nimport cgitb\ncgitb.enable(False, '/home/me/www/myapp/logs/errors')\n\n(‘errors’ being a directory the web server user has write-access to.)\n", "In Perl try CGI::Carp\nBEGIN { \nuse CGI::Carp qw(carpout); \nuse diagnostics;\nopen(LOG, \">errors.txt\"); \ncarpout(LOG);\nclose(LOG);\n}\n\nuse CGI::Carp qw(fatalsToBrowser);\n\n", "python:\nimport sys\n\nsys.stderr = open('file_path_with_write_permission/filename', 'a')\n\n", "The solution I finally went with was similar to the following, near the top of all my scripts:\nPerl:\nopen(STDERR,\">>\",\"/path/to/my/cgi-error.log\")\n or die \"Could not redirect STDERR: $OS_ERROR\";\n\nPython:\nsys.stderr = open(\"/path/to/my/cgi-error.log\", \"a\")\n\nApparently in Perl you don't need to close the STDERR handle before reopening it.\nNormally I would close it anyway as a best practice, but as I said in the question, I have 269 scripts and I'm trying to minimize the changes. (Plus it seems more Perlish to just re-open the open filehandle, as awful as that sounds.)\nIn case anyone else has something similar in the future, here's what I'm going to do for updating all my scripts at once:\nPerl:\nfind . -type f -name \"*.pl\" -exec perl -pi.bak -e 's%/usr/bin/perl%/usr/bin/perl\\nopen(STDERR,\">>\",\"/path/to/my/cgi-error.log\")\\n or die \"Could not redirect STDERR: \\$OS_ERROR\";%' {} \\;\n\nPython:\nfind . -type f -name \"*.py\" -exec perl -pi.bak -e 's%^(import os, sys.*)%$1\\nsys.stderr = open(\"/path/to/my/cgi-error.log\", \"a\")%' {} \\;\n\nThe reason I'm posting these commands is that it took me quite a lot of syntactical massaging to get those commands to work (e.g., changing Couldn't to Could not, changing #!/usr/bin/perl to just /usr/bin/perl so the shell wouldn't interpret ! as a history character, using $OS_ERROR instead of $!, etc.)\nThanks to everyone who commented. Since no one answered for both Perl and Python I couldn't really \"accept\" any of the given answers, but I did give votes to the ones which led me in the right direction. Thanks again!\n", "Python has the sys.stderr module that you might want to look into.\n>>>help(sys.__stderr__.read)\nHelp on built-in function read:\n\nread(...)\n read([size]) -> read at most size bytes, returned as a string.\n\n If the size argument is negative or omitted, read until EOF is reached.\n Notice that when in non-blocking mode, less data than what was requested\n may be returned, even if no size parameter was given.\n\nYou can store the output of this in a string and write that string to a file.\nHope this helps\n", "In my Perl CGI programs, I usually have\nBEGIN {\n open(STDERR,'>>','stderr.log');\n}\n\nright after shebang line and \"use strict;use warnings;\". If you want, you may append $0 to file name. But this will not solve multiple programs problem, as several copies of one programs may be run simultaneously. I usually just have several output files, for every program group.\n" ]
[ 4, 3, 3, 2, 2, 0, 0 ]
[]
[]
[ "cgi", "hostmonster", "perl", "python", "stderr" ]
stackoverflow_0001781436_cgi_hostmonster_perl_python_stderr.txt
Q: appengine: how to use validator in Class:Property? described in http://code.google.com/intl/en/appengine/docs/python/datastore/propertyclass.html#Property but there is no example code. i code sth like: class Model(db.Model): email = db.EmailProperty(validator=clean_email) def clean_email(self,value): if ... A: class Model(db.Model): def clean_email(value): if ... email = db.EmailProperty(validator=clean_email) use a argument. and the argument itself is the value of email in this case. A: You need to either define the method before the property, as joetsuihk demonstrates, or define it as a function, outside the class. I would recommend the latter, as there's no reason for the validator to be associated with the class.
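A slightly fuller sketch of the shape the answers describe, showing how a validator signals a bad value; the domain check itself is invented for illustration:

from google.appengine.ext import db

def clean_email(value):
    # a property validator receives just the value; raise to reject it
    if not value or not value.endswith('@example.com'):
        raise db.BadValueError('email must be an @example.com address')

class Model(db.Model):
    email = db.EmailProperty(validator=clean_email)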
appengine: how to use validator in Class:Property?
It is described in http://code.google.com/intl/en/appengine/docs/python/datastore/propertyclass.html#Property but there is no example code. I wrote code something like: class Model(db.Model): email = db.EmailProperty(validator=clean_email) def clean_email(self,value): if ...
[ "class Model(db.Model):\n\n def clean_email(value):\n if ...\n\n email = db.EmailProperty(validator=clean_email)\n\nuse a argument. and the argument itself is the value of email in this case.\n", "You need to either define the method before the property, as joetsuihk demonstrates, or define it as a function, outside the class. I would recommend the latter, as there's no reason for the validator to be associated with the class.\n" ]
[ 2, 1 ]
[]
[]
[ "google_app_engine", "python", "validation" ]
stackoverflow_0001788119_google_app_engine_python_validation.txt
Q: How to get the next token (int, float or string) from a file in Python? Is there some way to just get the next token from a file in Python, as for example the Scanner class does in Java? File file = new File("something"); Scanner myinput = new Scanner(file); double a = myinput.nextDouble(); String s = myinput.next(); I'd like to ignore whitespaces, tabs, newlines and just get the next int/float/word from the file. I know I could read the lines and build something like Scanner myself, but I'd like to know if there isn't already something that I could use. I've searched around but could only find line-oriented methods. Thank you! A: Check out the shlex-module in the standard library: http://docs.python.org/library/shlex.html import shlex import StringIO # use in place of files list(shlex.shlex(StringIO.StringIO('Some tokens. 123, 45.67 "A string with whitespace"'))) It does not handle floats the way you seem to want. Maybe you can extend or modify it. A: I don't think there is really something around that sophisticated. But you can take a look at the following options use re.split(pattern, string) and get what you want by providing regex's There is somewhere a Scanner class in the re module (but I don't think they developed it further) You could also consider using tokenize + StringIO Or as you yourself mentioned: Build one yourself, donate it do community and get famous ;) A: if your file is *.ini alike text files, you could use ConfigParser module There is few examples out there. http://docs.python.org/library/configparser.html and pyparsing will do that for other purpose, I think. I havn't use pyparsing before, so I have no clue right now. http://pyparsing.wikispaces.com/ A: Probably you can take a look at PLY
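As a concrete counterpart to the suggestions above, a small generator over whitespace-separated tokens covers the simple int/float/word case from the question (the function name here is made up):

def scan(fileobj):
    # yield tokens split on any whitespace, converting each one to
    # int or float when possible, otherwise leaving it as a string
    for line in fileobj:
        for tok in line.split():
            for convert in (int, float):
                try:
                    yield convert(tok)
                    break
                except ValueError:
                    pass
            else:
                yield tok

# usage sketch:
#   tokens = scan(open('something'))
#   a = tokens.next()        # next(tokens) on Python 3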
How to get the next token (int, float or string) from a file in Python?
Is there some way to just get the next token from a file in Python, as for example the Scanner class does in Java? File file = new File("something"); Scanner myinput = new Scanner(file); double a = myinput.nextDouble(); String s = myinput.next(); I'd like to ignore whitespaces, tabs, newlines and just get the next int/float/word from the file. I know I could read the lines and build something like Scanner myself, but I'd like to know if there isn't already something that I could use. I've searched around but could only find line-oriented methods. Thank you!
[ "Check out the shlex-module in the standard library: http://docs.python.org/library/shlex.html\nimport shlex\nimport StringIO # use in place of files\n\nlist(shlex.shlex(StringIO.StringIO('Some tokens. 123, 45.67 \"A string with whitespace\"')))\n\nIt does not handle floats the way you seem to want. Maybe you can extend or modify it.\n", "I don't think there is really something around that sophisticated.\nBut you can take a look at the following options\n\nuse re.split(pattern, string) and get what you want by providing regex's\nThere is somewhere a Scanner class in the re module (but I don't think they developed it further)\nYou could also consider using tokenize + StringIO\nOr as you yourself mentioned: Build one yourself, donate it do community and get famous ;)\n\n", "if your file is *.ini alike text files, you could use ConfigParser module\nThere is few examples out there.\nhttp://docs.python.org/library/configparser.html\nand pyparsing will do that for other purpose, I think. \nI havn't use pyparsing before, so I have no clue right now.\nhttp://pyparsing.wikispaces.com/\n", "Probably you can take a look at PLY\n" ]
[ 10, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001789114_python.txt
Q: I am trying to determine if a string is a Question. How can I analyze the "?" symbol (python) This is a question: "Where is the car?" This is NOT a question: "Check this out: http://domain.com/?q=test" How do I write a function to analyze a string so that we know for sure it is a question and not part of a URL? A: This regex finds question marks following a word character, and followed by either whitespace or the end of the string/line. Not perfect, but should catch most cases... \w\?[$\s] Edit (lack of caffeine strikes...): That should have been: \w\?(\s|$) In the original, $ is interpreted as a literal character. (Thanks Gumbo) A: If question mark is always there, you could check like if question.strip().endswith("?") and "://" not in question: # do something ? If you really want to parse real sentence, you may need nltk, I am not sure for that case. p.s this is just an sample if the text is fixed, nobody can parse real English grammar with regex. A: Essentially what others say is correct. There should be no whitespace before the ?. If the question is entered by a user, things get more ambiguous however. In that case a proper parser using a context free grammar may yield better results. Even with questions not having a question mark at the end. But it may not recognize all questions. Covering all possible structure variations, inflections and whatnot is not straight-forward. But, if you are certain that the questions always end with a question mark, you could do something as simple as if question_text.strip().endswith("?"): print `question_text`, "is a question" Or: import re p = re.compile( r"\w+\?\s*" ) if p.search( question_text ): print `question_text`, "contains a question" Not tested, but should work for most cases. A: You can for example check if the question mark is immediately followed by a non-space, non-line break character. But I guess that a more safe way would be to strip any possible URL from the string before searching the question mark on it. A: The question mark will not have white space either side or a line break/end-of-string after it, if it is in a url A: A probably not very robust approach that you might be able to get some traction with would be to look for "question words" in strings that end with question marks. In English, most question sentences or clauses (i.e. following a comma) start with "who", "what", "where", "when", "how", "why", "can", "may", "will", "won't, "does", "doesn't", etc. You could probably build up a pretty good heuristic this way that might work better than a regex (or could be incorporated into one or more regexes).
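Combining the endswith check and the regex from the answers into one helper (purely illustrative, and it inherits the caveat above that real English grammar cannot be captured this way):

import re

_MARK = re.compile(r'\w\?(\s|$)')

def is_question(text):
    text = text.strip()
    # a '?' inside a URL is followed by the query string, so also require
    # the sentence to actually end with the question mark
    return text.endswith('?') and bool(_MARK.search(text))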
I am trying to determine if a string is a Question. How can I analyze the "?" symbol (python)
This is a question: "Where is the car?" This is NOT a question: "Check this out: http://domain.com/?q=test" How do I write a function to analyze a string so that we know for sure it is a question and not part of a URL?
[ "This regex finds question marks following a word character, and followed by either whitespace or the end of the string/line. Not perfect, but should catch most cases...\n\\w\\?[$\\s]\n\nEdit (lack of caffeine strikes...):\nThat should have been:\n\\w\\?(\\s|$)\n\nIn the original, $ is interpreted as a literal character. (Thanks Gumbo)\n", "If question mark is always there, you could check like\nif question.strip().endswith(\"?\") and \"://\" not in question:\n # do something ?\n\nIf you really want to parse real sentence, you may need nltk, I am not sure for that case.\np.s this is just an sample if the text is fixed, nobody can parse real English grammar with regex.\n", "Essentially what others say is correct. There should be no whitespace before the ?. If the question is entered by a user, things get more ambiguous however.\nIn that case a proper parser using a context free grammar may yield better results. Even with questions not having a question mark at the end. But it may not recognize all questions. Covering all possible structure variations, inflections and whatnot is not straight-forward.\nBut, if you are certain that the questions always end with a question mark, you could do something as simple as\nif question_text.strip().endswith(\"?\"):\n print `question_text`, \"is a question\"\n\nOr:\nimport re\np = re.compile( r\"\\w+\\?\\s*\" )\nif p.search( question_text ):\n print `question_text`, \"contains a question\"\n\nNot tested, but should work for most cases.\n", "You can for example check if the question mark is immediately followed by a non-space, non-line break character. But I guess that a more safe way would be to strip any possible URL from the string before searching the question mark on it.\n", "The question mark will not have white space either side or a line break/end-of-string after it, if it is in a url\n", "A probably not very robust approach that you might be able to get some traction with would be to look for \"question words\" in strings that end with question marks. In English, most question sentences or clauses (i.e. following a comma) start with \"who\", \"what\", \"where\", \"when\", \"how\", \"why\", \"can\", \"may\", \"will\", \"won't, \"does\", \"doesn't\", etc. You could probably build up a pretty good heuristic this way that might work better than a regex (or could be incorporated into one or more regexes).\n" ]
[ 3, 2, 2, 1, 0, 0 ]
[]
[]
[ "python", "regex", "string", "url" ]
stackoverflow_0001789009_python_regex_string_url.txt
Q: Just Curious about Python+Numpy to Realtime Gesture Recognition i 'm just finish labs meeting with my advisor, previous code is written in matlab and it run offline mode not realtime mode, so i decide to convert to python+numpy (in offline version) but after labs meeting, my advisor raise issue about speed of realtime recognition, so i have doubt about speed of python+numpy to do this project. or better in c? my project is about using electronic glove (2x sensors) to get realtime data and do data processing, recognition process A: NumPy is very fast if you follow some basic rules. You should avoid Python loops, using the operators provided by NumPy instead whenever you can. This and this should be a good starting points. After reading through that, why don't you write some simple code in both Matlab and NumPy and compare the performance? If it performs well in NumPy, it should be enough to convince your advisor, especially if the code is representative of the actual algorithms you are using in your project. Note: you should also see that your algorithm really is suited for realtime recognition. A: I think the answer depends on three things: how well you code in Matlab, how well you code in Python/Numpy, and your algorithm. Both Matlab and Python can be fast for number crunching if you're diligent about vectorizing everything and using library calls. If your Matlab code is already very good I would be surprised if you saw much performance benefit moving to Numpy unless there's some specific idiom you can use to your advantage. You might not even see a large benefit moving to C. I this case your effort would likely be better spent tuning your algorithm. If your Matlab code isn't so good you could 1) write better Matlab code, 2) rewrite in good Numpy code, or 3) rewrite in C. A: You might look at OpenCV, which has Python libs ctypes-opencv and opencv-cython; I haven't used these myself. Ideally you want to combine a fast-running C inner loop with a flexible Python/Numpy play-with-algorithms. Bytheway google "opencv gesture recognition" → 6680 hits.
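To make the "avoid Python loops" advice from the first answer concrete for the kind of per-frame processing a two-sensor glove stream needs (the filter itself is a made-up example, not from the question):

import numpy as np

frame = np.random.rand(2, 1000)          # pretend window of samples, 2 sensors

# slow: element-by-element work in Python loops
smoothed_loop = [[0.5 * x for x in row] for row in frame]

# fast: the same operations expressed on whole arrays run in compiled code
smoothed = 0.5 * frame
energy = (frame ** 2).sum(axis=1)        # one value per sensor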
Just Curious about Python+Numpy to Realtime Gesture Recognition
I just finished a lab meeting with my advisor. The previous code is written in Matlab and runs in offline mode, not realtime mode, so I decided to convert it to Python+NumPy (still as an offline version). At the lab meeting my advisor raised the issue of the speed of realtime recognition, so I have doubts about whether Python+NumPy is fast enough for this project, or whether it would be better in C. My project is about using an electronic glove (2x sensors) to get realtime data and do data processing and the recognition process.
[ "NumPy is very fast if you follow some basic rules. You should avoid Python loops, using the operators provided by NumPy instead whenever you can. This and this should be a good starting points.\nAfter reading through that, why don't you write some simple code in both Matlab and NumPy and compare the performance? If it performs well in NumPy, it should be enough to convince your advisor, especially if the code is representative of the actual algorithms you are using in your project.\nNote: you should also see that your algorithm really is suited for realtime recognition.\n", "I think the answer depends on three things: how well you code in Matlab, how well you code in Python/Numpy, and your algorithm. Both Matlab and Python can be fast for number crunching if you're diligent about vectorizing everything and using library calls.\nIf your Matlab code is already very good I would be surprised if you saw much performance benefit moving to Numpy unless there's some specific idiom you can use to your advantage. You might not even see a large benefit moving to C. I this case your effort would likely be better spent tuning your algorithm.\nIf your Matlab code isn't so good you could 1) write better Matlab code, 2) rewrite in good Numpy code, or 3) rewrite in C.\n", "You might look at OpenCV, which has Python libs\nctypes-opencv\nand opencv-cython;\nI haven't used these myself.\nIdeally you want to combine a fast-running C inner loop\nwith a flexible Python/Numpy play-with-algorithms.\nBytheway google \"opencv gesture recognition\" → 6680 hits.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "c", "gesture_recognition", "numpy", "python" ]
stackoverflow_0001727950_c_gesture_recognition_numpy_python.txt
Q: Where does Python's pydoc help function get its content? I have a lot of callable objects and they all have the __doc__ string correctly filled out, but running help on them produces the help for their class instead of help based on __doc__. I want to change it so that running help on them produces customized help that looks essentially like what I would get if they were actual functions instead of instances of a class that implements __call__. In code, I'd like to make the output of this: class myCallable: def __init__(self, doc): self.__doc__ = doc def __call__(self): # do some stuff pass myFunc = myCallable("some doco text") help(myFunc) Look more like the output of this: def myFunc(): "some doco text" # do some stuff pass help(myFunc) A: The help function (implemented in the pydoc module) isn't prepared to find per-instance docstrings. I took a quick look through the module to see if there was a way to provide explicit help, but there doesn't seem to be. It uses the inspect module to determine what kind of thing it is, and your myFunc doesn't look like a function, it looks like an instance. So pydoc prints help about the instance's class instead. It'd be nice if similar to __doc__ you could add a __help__ attribute, but there's no support for that. I hesitate to suggest it, but your best bet might be to define a new help function: old_help = help def help(thing): if hasattr(thing, '__help__'): print thing.__help__ else: old_help(thing) and then put a __help__ attribute on your instances: class myCallable: def __init__(self, doc): self.__doc__ = doc self.__help__ = doc A: I'm not very clear about what your question is exactly. My understanding is that you have a class and a function defined in it and you want to know where Python gets the help text for that function from. Python gets the help text from the doc strings provided in that class/method. If you have a class "A" and a method "f" in that class and there are docstrings in the function "f", then the following terminal dump should help clear your question: >>> class A: def __init__(self): self.c = 0 # some class variable def f(self, x): """this is the documentation/help text for the function "f" """ return x+1 >>> help(A.f) Help on method f in module __main__: f(self, x) unbound __main__.A method this is the documentation/help text for the function "f" >>> A.f.__doc__ 'this is the documentation/help text for the function "f" ' Hope this helps
Where does Python's pydoc help function get its content?
I have a lot of callable objects and they all have the __doc__ string correctly filled out, but running help on them produces the help for their class instead of help based on __doc__. I want to change it so that running help on them produces customized help that looks essentially like what I would get if they were actual functions instead of instances of a class that implements __call__. In code, I'd like to make the output of this: class myCallable: def __init__(self, doc): self.__doc__ = doc def __call__(self): # do some stuff pass myFunc = myCallable("some doco text") help(myFunc) Look more like the output of this: def myFunc(): "some doco text" # do some stuff pass help(myFunc)
[ "The help function (implemented in the pydoc module) isn't prepared to find per-instance docstrings. I took a quick look through the module to see if there was a way to provide explicit help, but there doesn't seem to be. It uses the inspect module to determine what kind of thing it is, and your myFunc doesn't look like a function, it looks like an instance. So pydoc prints help about the instance's class instead.\nIt'd be nice if similar to __doc__ you could add a __help__ attribute, but there's no support for that. \nI hesitate to suggest it, but your best bet might be to define a new help function:\nold_help = help\ndef help(thing):\n if hasattr(thing, '__help__'):\n print thing.__help__\n else:\n old_help(thing)\n\nand then put a __help__ attribute on your instances:\nclass myCallable:\n def __init__(self, doc):\n self.__doc__ = doc\n self.__help__ = doc\n\n", "I'm not very clear about what your question is exactly. My understanding is that you have a class and a function defined in it and you want to know where Python gets the help text for that function from.\nPython gets the help text from the doc strings provided in that class/method.\nIf you have a class \"A\" and a method \"f\" in that class and there are docstrings in the function \"f\", then the following terminal dump should help clear your question:\n>>> class A:\n def __init__(self):\n self.c = 0 # some class variable\n def f(self, x):\n \"\"\"this is the documentation/help text for the function \"f\" \"\"\"\n return x+1\n\n>>> help(A.f)\nHelp on method f in module __main__:\n\nf(self, x) unbound __main__.A method\nthis is the documentation/help text for the function \"f\" \n\n>>> A.f.__doc__\n'this is the documentation/help text for the function \"f\" '\n\nHope this helps\n" ]
[ 5, 2 ]
[]
[]
[ "introspection", "pydoc", "python" ]
stackoverflow_0001787056_introspection_pydoc_python.txt
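A possible alternative for the pydoc entry above (an untested sketch, not taken from the original thread): since help(instance) documents the instance's class, you can give each instance its own one-off subclass whose __doc__ holds the per-instance docstring.

class myCallable(object):
    def __new__(cls, doc):
        # build a one-off subclass so help() finds the per-instance docstring
        sub = type(cls.__name__, (cls,), {'__doc__': doc})
        return object.__new__(sub)

    def __call__(self):
        # do some stuff
        pass

myFunc = myCallable("some doco text")
help(myFunc)   # now reports "some doco text" as the docstring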
Q: Reference to Part of List - Python If I have a list in python, how can I create a reference to part of the list? For example: myList = ["*", "*", "*", "*", "*", "*", "*", "*", "*"] listPart = myList[0:7:3] #This makes a new list, which is not what I want myList[0] = "1" listPart[0] "1" Is this possible and if so how would I code it? Cheers, Joe A: You can write a list view type. Here is something I have written as experiment, it is by no means guaranteed to be complete or bug-free class listview (object): def __init__(self, data, start, end): self.data = data self.start, self.end = start, end def __repr__(self): return "<%s %s>" % (type(self).__name__, list(self)) def __len__(self): return self.end - self.start def __getitem__(self, idx): if isinstance(idx, slice): return [self[i] for i in xrange(*idx.indices(len(self)))] if idx >= len(self): raise IndexError idx %= len(self) return self.data[self.start+idx] def __setitem__(self, idx, val): if isinstance(idx, slice): start, stop, stride = idx.indices(len(self)) for i, v in zip(xrange(start, stop, stride), val): self[i] = v return if idx >= len(self): raise IndexError(idx) idx %= len(self) self.data[self.start+idx] = val L = range(10) s = listview(L, 2, 5) print L print s print len(s) s[:] = range(3) print s[:] print L Output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] <listview [2, 3, 4]> 3 [0, 1, 2] [0, 1, 0, 1, 2, 5, 6, 7, 8, 9] You may assign to indices in the listview, and it will reflect on the underlying list. However,it does not make sense to define append or similar actions on the listview. It may also break if the underlying list changes in length. A: Use a slice object or an islice iterator? http://docs.python.org/library/functions.html#slice A: There's nothing in python that really does what you want. Basically you want to write some sort of proxy object. A: I think that it is impossible. It would lead to many possible errors, for example: what when you append the list which is reference to part of a bigger list? Shall next element in big list be replaced, or inserted? As far as I know, silce is internal mechanism for getting elements of the list. They do not create new list object, referencing to parts of older list object. Islice just iterates over the elements given by slice, it is also not the reference, but the actual object - changing it doesn't affect original list. Or am I mistaken? As in comment, and that solution contributes really to Mr. Bastien, you can do: sliceobject = slice(0,7,3) for i in xrange(sliceobject.start, sliceobject.stop, sliceobject.step) myList[i] = whatever That way you can access each specified element of your list by reference.
Reference to Part of List - Python
If I have a list in python, how can I create a reference to part of the list? For example: myList = ["*", "*", "*", "*", "*", "*", "*", "*", "*"] listPart = myList[0:7:3] #This makes a new list, which is not what I want myList[0] = "1" listPart[0] "1" Is this possible and if so how would I code it? Cheers, Joe
[ "You can write a list view type. Here is something I have written as experiment, it is by no means guaranteed to be complete or bug-free\nclass listview (object):\n def __init__(self, data, start, end):\n self.data = data\n self.start, self.end = start, end\n def __repr__(self):\n return \"<%s %s>\" % (type(self).__name__, list(self))\n def __len__(self):\n return self.end - self.start\n def __getitem__(self, idx):\n if isinstance(idx, slice):\n return [self[i] for i in xrange(*idx.indices(len(self)))]\n if idx >= len(self):\n raise IndexError\n idx %= len(self)\n return self.data[self.start+idx]\n def __setitem__(self, idx, val):\n if isinstance(idx, slice):\n start, stop, stride = idx.indices(len(self))\n for i, v in zip(xrange(start, stop, stride), val):\n self[i] = v\n return\n if idx >= len(self):\n raise IndexError(idx)\n idx %= len(self)\n self.data[self.start+idx] = val\n\n\nL = range(10)\n\ns = listview(L, 2, 5)\n\nprint L\nprint s\nprint len(s)\ns[:] = range(3)\nprint s[:]\nprint L\n\nOutput:\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n<listview [2, 3, 4]>\n3\n[0, 1, 2]\n[0, 1, 0, 1, 2, 5, 6, 7, 8, 9]\n\nYou may assign to indices in the listview, and it will reflect on the underlying list. However,it does not make sense to define append or similar actions on the listview. It may also break if the underlying list changes in length.\n", "Use a slice object or an islice iterator?\nhttp://docs.python.org/library/functions.html#slice\n", "There's nothing in python that really does what you want. Basically you want to write some sort of proxy object.\n", "I think that it is impossible. It would lead to many possible errors, for example: what when you append the list which is reference to part of a bigger list? Shall next element in big list be replaced, or inserted?\nAs far as I know, silce is internal mechanism for getting elements of the list. They do not create new list object, referencing to parts of older list object. Islice just iterates over the elements given by slice, it is also not the reference, but the actual object - changing it doesn't affect original list. Or am I mistaken?\nAs in comment, and that solution contributes really to Mr. Bastien, you can do:\nsliceobject = slice(0,7,3)\nfor i in xrange(sliceobject.start, sliceobject.stop, sliceobject.step)\n myList[i] = whatever\n\nThat way you can access each specified element of your list by reference.\n" ]
[ 5, 4, 3, 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001788608_list_python.txt
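A small addendum to the slice-object suggestion in the entry above (a sketch, assuming Python 2.x): slice.indices() clamps the bounds to the list length, which avoids surprises with open-ended or negative slices.

my_list = ["*"] * 9
part = slice(0, 7, 3)

for i in xrange(*part.indices(len(my_list))):
    my_list[i] = "1"

print my_list   # ['1', '*', '*', '1', '*', '*', '1', '*', '*']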
Q: How to parse *.py file with python? I'd like to parse Python source in order to try making a basic source code converter from Python to Go. What module should I use? Should I proceed or not? If I should proceed, how? A: Have a look at the language services packages, particularly the ast. My guess is that if you don't already have a solid grasp of both parsing as well as code generation techniques, this is going to be a difficult project to undertake. good luck! A: The Boo Solution Are you trying to make a python-like language, that can compiles into Go? This seems most sensible, as you will want to do Go-specific things (to take advantage of Go features). Look at pyparsing. It includes an example of a complete python parser, but you probably don't want to do that. You want to incrementally build your converter / translator, so you want to incrementally build the parser, otherwise you might choke on the AST. OK, you could parse everything and just ignore the stuff you don't understand, but that's not great behavior from a compiler. You could start with parsing basic arithmetic. The Pyrex Solution This is similar to the Boo solution, but much harder. Get the Boo solution working first. Then learn to generate wrapper code, so your Go and python parts can work together. The PyPy Solution A complete Python-Go compiler? Good luck. You'll need it. A: As for the 'should I go ahead or better not' question: why do you want to do this in the first place? If it's a purely learning exercise, then you don't don't need to ask us whether it's worthwhile. You want to learn, so go right ahead. If it's meant to be a practical tool, then my suggestion is to not do it. An industrial-strength tool to perform such conversions might be useful but I would guess that you're not going to go that far. With that in mind it's probably more fruitful to rewrite the Python code in Go manually. That assumes there is any real benefit to compiling to Go; current testing suggests that you get better performance and similar code structure from using Stackless Python. A: There's a good list of parsers rounded-up by Ned Batchelder which might help.
How to parse *.py file with python?
I'd like to parse Python source in order to try making a basic source code converter from Python to Go. What module should I use? Should I proceed or not? If I should proceed, how?
[ "Have a look at the language services packages, particularly the ast.\nMy guess is that if you don't already have a solid grasp of both parsing as well as code generation techniques, this is going to be a difficult project to undertake.\ngood luck!\n", "The Boo Solution\nAre you trying to make a python-like language, that can compiles into Go? This seems most sensible, as you will want to do Go-specific things (to take advantage of Go features). \nLook at pyparsing. It includes an example of a complete python parser, but you probably don't want to do that.\nYou want to incrementally build your converter / translator, so you want to incrementally build the parser, otherwise you might choke on the AST. OK, you could parse everything and just ignore the stuff you don't understand, but that's not great behavior from a compiler. \nYou could start with parsing basic arithmetic.\nThe Pyrex Solution\nThis is similar to the Boo solution, but much harder. Get the Boo solution working first. Then learn to generate wrapper code, so your Go and python parts can work together.\nThe PyPy Solution\nA complete Python-Go compiler? Good luck. You'll need it.\n", "As for the 'should I go ahead or better not' question: why do you want to do this in the first place?\nIf it's a purely learning exercise, then you don't don't need to ask us whether it's worthwhile. You want to learn, so go right ahead.\nIf it's meant to be a practical tool, then my suggestion is to not do it. An industrial-strength tool to perform such conversions might be useful but I would guess that you're not going to go that far. With that in mind it's probably more fruitful to rewrite the Python code in Go manually.\nThat assumes there is any real benefit to compiling to Go; current testing suggests that you get better performance and similar code structure from using Stackless Python.\n", "There's a good list of parsers rounded-up by Ned Batchelder which might help.\n" ]
[ 14, 3, 3, 2 ]
[]
[]
[ "code_conversion", "go", "python" ]
stackoverflow_0001789301_code_conversion_go_python.txt
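To make the ast suggestion in the entry above concrete, a minimal sketch (Python 2.6+; the file name is just an example):

import ast

source = open('example.py').read()
tree = ast.parse(source, filename='example.py')

# list every function definition found in the module
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print node.name, 'at line', node.lineno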
Q: Adding even values to new list Python If I have a list and I want to create a list with only even values of the original list, how would I do that? I originally have: list1 = [1,2,3,4,5] list2 = [] for v in list1: if v % 2 == 0: list2 += v print list2 A: List comprehension is the way to go: list1 = [1,2,3,4,5] list2 = [i for i in list1 if i%2 == 0] print list2 # => [2, 4] A: If you want to extend an existing list2 (not necessarily initially empty): list2.extend(v for v in list1 if v % 2 == 0) If there's no "initial value" for list2, and you just want to build it from scratch: list2 = [v for v in list1 if v % 2 == 0] You'll notice that the inner part is identical -- you can use that part in different ways (inside brackets to create a new list, as argument to .extend to extend an existing list, and so forth). A: You can use list comprehensions: list2 = [x for x in list1 if x % 2 == 0] A: list2 = [x for x in list1 if x%2 == 0] A: list2 = [i for i in list1 if not i%2] A: I find this to be the most readable solution: list2 = filter(lambda x: x % 2 == 0: list1) or if you have to use this function multiple times: is_even = lambda x: x % 2 == 0 list2 = filter(is_even, list1)
Adding even values to new list Python
If I have a list and I want to create a list with only even values of the original list, how would I do that? I originally have: list1 = [1,2,3,4,5] list2 = [] for v in list1: if v % 2 == 0: list2 += v print list2
[ "List comprehension is the way to go:\nlist1 = [1,2,3,4,5]\nlist2 = [i for i in list1 if i%2 == 0]\nprint list2 # => [2, 4]\n\n", "If you want to extend an existing list2 (not necessarily initially empty):\nlist2.extend(v for v in list1 if v % 2 == 0)\n\nIf there's no \"initial value\" for list2, and you just want to build it from scratch:\nlist2 = [v for v in list1 if v % 2 == 0]\n\nYou'll notice that the inner part is identical -- you can use that part in different ways (inside brackets to create a new list, as argument to .extend to extend an existing list, and so forth).\n", "You can use list comprehensions:\nlist2 = [x for x in list1 if x % 2 == 0]\n\n", "list2 = [x for x in list1 if x%2 == 0]\n\n", "list2 = [i for i in list1 if not i%2]\n\n", "I find this to be the most readable solution:\nlist2 = filter(lambda x: x % 2 == 0: list1)\n\nor if you have to use this function multiple times:\nis_even = lambda x: x % 2 == 0\n\nlist2 = filter(is_even, list1)\n\n" ]
[ 7, 3, 3, 1, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001780904_list_python.txt
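One detail none of the answers above spells out: the original loop fails because += on a list expects an iterable, not an int. If you prefer to keep the explicit loop instead of a list comprehension, append works (a trivial sketch):

list1 = [1, 2, 3, 4, 5]
list2 = []
for v in list1:
    if v % 2 == 0:
        list2.append(v)
print list2   # [2, 4]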
Q: Python Webdriver doesn't wait until the page is downloaded in Firefox when used with proxy when I set the Firefox proxy with python webdriver, it doesn't wait until the page is fully downloaded, this doesn't happen when I don't set one. How can I change this behavior? Or how can I check that the page download is over? A: The simplest thing to do is to poll the page looking for an element you know will be present once the download is complete. The Java webdriver bindings offer a "Wait" class for just this purpose, though there isn't (yet) an analogue for this in the python bindings.
Python Webdriver doesn't wait until the page is downloaded in Firefox when used with proxy
When I set the Firefox proxy with Python WebDriver, it doesn't wait until the page is fully downloaded; this doesn't happen when I don't set one. How can I change this behavior? Or how can I check that the page download is over?
[ "The simplest thing to do is to poll the page looking for an element you know will be present once the download is complete. The Java webdriver bindings offer a \"Wait\" class for just this purpose, though there isn't (yet) an analogue for this in the python bindings.\n" ]
[ 1 ]
[]
[]
[ "firefox", "proxy", "python", "webdriver" ]
stackoverflow_0001785607_firefox_proxy_python_webdriver.txt
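A generic polling helper along the lines of the answer above (a sketch only — it assumes you can express "page is ready" as any callable that returns a truthy value, for example a lookup of an element you know appears once loading finishes):

import time

def wait_for(condition, timeout=10, poll=0.5):
    # condition: a callable returning a truthy value once the page is ready
    end = time.time() + timeout
    while time.time() < end:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise RuntimeError("page did not finish loading within %s seconds" % timeout)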
Q: Facebook Connect help According to the Facebook API documentation, most of the work is handled through javascript. That means that all the processing is done, and then the front end checks if the user is connected to Facebook/authorized. right? My question is: Suppose a user goes to my site for the first time ever. He clicks on "facebook connect". The javascript verifies him as authentic, and it "redirects" to another page on my server. From then on, how do I know that the user is actually authenticated to my website, since everything is done on frontend? I think this is correct, but aren't there some security issues..: -After user clicks Login, Facebook redirects to a page on my site. AND they also create a cookie with a specific "Facebook ID" that is retrieved only from this user. My backened will "read" the cookie and grab that ID...and then associate it to my userID. If that is correct...then it doesn't make sense. What if people steal other people's "facebook ID" and then forge the cookie? And then my backend sees the cookie and thinks it's the real user...? Am I confused? If I am confused, please help me re-organize and tell me how it's like. A: Facebook Connect uses a clever (or insane, depending on your point of view) hack to achieve cross-site communication between your site and Facebook's authentication system from within the browser. The way it works is as follows: Your site includes a very simple static HTML file, known as the cross-domain communications channel. This file is called xd_receiver.htm in the FB docs, but it can be named anything you like. Your site's login page includes a reference to the Javascript library hosted on Facebook's server. When a user logs in via the "Connect" button, it calls a function in Facebook's JS API which pops up a login dialog. This login box has an invisible iframe in which the cross-domain communications file is loaded. The user fills out the form and submits it, posting the form to Facebook. Facebook checks the login. If it's successful, it communicates this to your site. Here's where that cross-domain stuff comes in: Because of cross-domain security policies, Facebook's login window can not inspect the DOM tree for documents hosted on your server. But the login window can update the src element of any iframe within it, and this is used to communicate with the cross-domain communications file hosted on your page. When the cross-domain communications file receives a communication indicating that the login was successful, it uses Javascript to set some cookies containing the user's ID and session. Since this file lives on your server, those cookies have your domain and your backend can receive them. Any further communication in Facebook's direction can be accomplished by inserting another nested iframe in the other iframe -- this second-level iframe lives on Facebook's server instead of yours. The cookies are secure (in theory) because the data is signed with the secret key that Facebook generated for you when you signed up for the developer program. The JS library uses your public key (the "API key") to validate the cookies. Theoretically, Facebook's Javascript library handles this all automatically once you've set everything up. In practice, I've found it doesn't always work exactly smoothly. For a more detailed explanation of the mechanics of cross-domain communication using iframes, see this article from MSDN. A: Please someone correct me if I'm wrong - as I am also trying to figure all this stuff out myself. 
My understanding with the security of the cookies is that there is also a cookie which is a special signature cookie. This cookie is created by combining the data of the other cookies, adding your application secret that only you and FB know, and the result MD5-Hashed. You can then test this hash server-side, which could not easily be duplicated by a hacker, to make sure the data can be trusted as coming from FB. A more charming explaination can be found here - scroll about halfway down the page. A: Same issues here, and I think Scott is closer to the solution. Also Im using "http://developers.facebook.com/docs/?u=facebook.jslib-alpha.FB.init" there open source js framework. So things are a little different. For me, via the opensource js framework, facebook provides and sets a session on my site with a signature. So what I am thinking is to recreate that signature on my side. - if they both match then the user is who he says he is. So basically if a user wanted to save something to my database, grab the session signature set up by facebook and recreate that signature with php and validate it against the one facebook gave me? if($_SESSION['facebookSignature'] == reGeneratedSignature){ // save to database }else{ // go away I don't trust you } But how do you regenerate that signature? preferably without making more calls to Facebook?
Facebook Connect help
According to the Facebook API documentation, most of the work is handled through javascript. That means that all the processing is done, and then the front end checks if the user is connected to Facebook/authorized. Right? My question is: Suppose a user goes to my site for the first time ever. He clicks on "facebook connect". The javascript verifies him as authentic, and it "redirects" to another page on my server. From then on, how do I know that the user is actually authenticated to my website, since everything is done on the frontend? I think this is correct, but aren't there some security issues..: -After user clicks Login, Facebook redirects to a page on my site. AND they also create a cookie with a specific "Facebook ID" that is retrieved only from this user. My backend will "read" the cookie and grab that ID...and then associate it to my userID. If that is correct...then it doesn't make sense. What if people steal other people's "facebook ID" and then forge the cookie? And then my backend sees the cookie and thinks it's the real user...? Am I confused? If I am confused, please help me re-organize and tell me what it's like.
[ "Facebook Connect uses a clever (or insane, depending on your point of view) hack to achieve cross-site communication between your site and Facebook's authentication system from within the browser.\nThe way it works is as follows:\n\nYour site includes a very simple static HTML file, known as the cross-domain communications channel. This file is called xd_receiver.htm in the FB docs, but it can be named anything you like.\nYour site's login page includes a reference to the Javascript library hosted on Facebook's server.\nWhen a user logs in via the \"Connect\" button, it calls a function in Facebook's JS API which pops up a login dialog. This login box has an invisible iframe in which the cross-domain communications file is loaded.\nThe user fills out the form and submits it, posting the form to Facebook.\nFacebook checks the login. If it's successful, it communicates this to your site. Here's where that cross-domain stuff comes in:\n\n\nBecause of cross-domain security policies, Facebook's login window can not inspect the DOM tree for documents hosted on your server. But the login window can update the src element of any iframe within it, and this is used to communicate with the cross-domain communications file hosted on your page.\nWhen the cross-domain communications file receives a communication indicating that the login was successful, it uses Javascript to set some cookies containing the user's ID and session. Since this file lives on your server, those cookies have your domain and your backend can receive them.\n\nAny further communication in Facebook's direction can be accomplished by inserting another nested iframe in the other iframe -- this second-level iframe lives on Facebook's server instead of yours.\n\nThe cookies are secure (in theory) because the data is signed with the secret key that Facebook generated for you when you signed up for the developer program. The JS library uses your public key (the \"API key\") to validate the cookies.\nTheoretically, Facebook's Javascript library handles this all automatically once you've set everything up. In practice, I've found it doesn't always work exactly smoothly.\nFor a more detailed explanation of the mechanics of cross-domain communication using iframes, see this article from MSDN.\n", "Please someone correct me if I'm wrong - as I am also trying to figure all this stuff out myself. My understanding with the security of the cookies is that there is also a cookie which is a special signature cookie. This cookie is created by combining the data of the other cookies, adding your application secret that only you and FB know, and the result MD5-Hashed. You can then test this hash server-side, which could not easily be duplicated by a hacker, to make sure the data can be trusted as coming from FB.\nA more charming explaination can be found here - scroll about halfway down the page. \n", "Same issues here, and I think Scott is closer to the solution.\nAlso Im using \"http://developers.facebook.com/docs/?u=facebook.jslib-alpha.FB.init\" there open source js framework. So things are a little different.\nFor me, via the opensource js framework, facebook provides and sets a session on my site with a signature. So what I am thinking is to recreate that signature on my side. 
- if they both match then the user is who he says he is.\nSo basically if a user wanted to save something to my database, grab the session signature set up by facebook and recreate that signature with php and validate it against the one facebook gave me?\nif($_SESSION['facebookSignature'] == reGeneratedSignature){\n // save to database\n}else{\n // go away I don't trust you\n}\n\nBut how do you regenerate that signature? preferably without making more calls to Facebook? \n" ]
[ 6, 0, 0 ]
[]
[]
[ "facebook", "javascript", "python" ]
stackoverflow_0001580504_facebook_javascript_python.txt
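Regarding the open question at the end of the entry above ("how do you regenerate that signature?"): the classic Facebook scheme, as far as I recall it (treat this as an unverified sketch and check it against the current Facebook documentation), is to concatenate the sorted key=value pairs, append your application secret, and MD5 the result.

import hashlib

def generate_sig(params, app_secret):
    # params: dict of the fb_sig_* values with the 'fb_sig_' prefix stripped
    payload = ''.join('%s=%s' % (k, params[k]) for k in sorted(params))
    return hashlib.md5(payload + app_secret).hexdigest()

# the request is trusted if generate_sig(params, secret) matches the signature Facebook sent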
Q: Creating a tree from a list of tuples I seem to be blind at the moment, so I need to ask here. I want to sort a list of tuples which look like that (id, parent_id, value) So that it is a representation of the tree as a flattend list of list of tree nodes. For example the input (1, None, '...') (3, 2', '...') (2, 1, '...') (4, 1, '...') (5, 2, '...') (6, None, '...') Should sorted like that afterwards (1, None, '...') (2, 1, '...') (3, 2', '...') (5, 2, '...') (4, 1, '...') (6, None, '...') Any hint would be highly appreciated. Thanks in advance. A: Python sorts tuples from left to right, so if you arrange your tuples so the first sort key is the first item and so forth, it'll be reasonably efficient. The mapping from a list of tuples to a tree is not clear from what you're describing. Please draw it out, or explain it more thoroughly. For example, your example appears to be: (source: sabi.net) If you've got two nodes with no parent, that is more like a forest than a tree. What are you trying to represent with the tree? What does "sorted" mean in this context? A: I'm not sure I've quite follows what you are exactly trying to do, but if you have a forest as a list of nodes, can't you just read it and build the tree structure, then write it out as a bread-first traversal of all the trees? Any particular reason to avoid this? A: Oliver, if I understand correctly, I think you can either (a) retrieve all tuples from your database into a dictionary or list, and then construct the tree, OR (b) use an ORDER BY clause when you retrieve the tuples so that they are in the order in which you will add them to the tree. If changes to the tree may be made in your application and then propagated to the database, I would opt for a, but if changes are always made as database inserts, updates or deletes, and apart from these your tree is read-only, then option b should be faster and require less resources. Roland
Creating a tree from a list of tuples
I seem to be blind at the moment, so I need to ask here. I want to sort a list of tuples which look like this (id, parent_id, value) So that it is a representation of the tree as a flattened list of tree nodes. For example the input (1, None, '...') (3, 2', '...') (2, 1, '...') (4, 1, '...') (5, 2, '...') (6, None, '...') Should be sorted like this afterwards (1, None, '...') (2, 1, '...') (3, 2', '...') (5, 2, '...') (4, 1, '...') (6, None, '...') Any hint would be highly appreciated. Thanks in advance.
[ "Python sorts tuples from left to right, so if you arrange your tuples so the first sort key is the first item and so forth, it'll be reasonably efficient.\nThe mapping from a list of tuples to a tree is not clear from what you're describing. Please draw it out, or explain it more thoroughly. For example, your example appears to be:\n\n(source: sabi.net) \nIf you've got two nodes with no parent, that is more like a forest than a tree. What are you trying to represent with the tree? What does \"sorted\" mean in this context?\n", "I'm not sure I've quite follows what you are exactly trying to do, but if you have a forest as a list of nodes, can't you just read it and build the tree structure, then write it out as a bread-first traversal of all the trees? Any particular reason to avoid this?\n", "Oliver, if I understand correctly, I think you can either\n(a) retrieve all tuples from your database into a dictionary or list, and then construct the tree, \nOR\n(b) use an ORDER BY clause when you retrieve the tuples so that they are in the order in which you will add them to the tree. \nIf changes to the tree may be made in your application and then propagated to the database, I would opt for a, but if changes are always made as database inserts, updates or deletes, and apart from these your tree is read-only, then option b should be faster and require less resources.\nRoland\n" ]
[ 4, 1, 0 ]
[]
[]
[ "python", "sorting", "tree" ]
stackoverflow_0000783217_python_sorting_tree.txt
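For completeness, a small sketch of the "read the tuples, build the structure, write it out depth-first" idea from the answers above (untested; it assumes parent_id is None for root nodes, as in the example, and reads the stray quote in "(3, 2', ...)" as a typo for 2):

def flatten_tree(rows):
    children = {}
    for row in rows:
        children.setdefault(row[1], []).append(row)

    ordered = []
    def visit(parent_id):
        for row in sorted(children.get(parent_id, [])):
            ordered.append(row)
            visit(row[0])          # recurse into this node's children
    visit(None)
    return ordered

rows = [(1, None, '...'), (3, 2, '...'), (2, 1, '...'),
        (4, 1, '...'), (5, 2, '...'), (6, None, '...')]
print flatten_tree(rows)
# [(1, None, '...'), (2, 1, '...'), (3, 2, '...'), (5, 2, '...'), (4, 1, '...'), (6, None, '...')]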
Q: Adding Version Control / Numbering (?) to Python Project With my Java projects at present, I have full version control by declaring it as a Maven project. However I now have a Python project that I'm about to tag 0.2.0 which has no version control. Therefore should I come accross this code at a later date, I won't no what version it is. How do I add version control to a Python project, in the same way Maven does it for Java? A: First, maven is a build tool and has nothing to do with version control. You don't need a build tool with Python -- there's nothing to "build". Some folks like to create .egg files for distribution. It's as close to a "build" as you get with Python. This is a simple setup.py file. You can use SVN keyword replacement in your source like this. Remember to enable keyword replacement for the modules that will have this. __version__ = "$Revision$" That will assure that the version or revision strings are forced into your source by SVN. You should also include version keywords in your setup.py file. A: Create a distutils setup.py file. This is the Python equivalent to maven pom.xml, it looks something like this: from distutils.core import setup setup(name='foo', version='1.0', py_modules=['foo'], ) If you want dependency management like maven, take a look at setuptools. A: Ants's answer is correct, but I would like to add that your modules can define a __version__ variable, according to PEP 8, which can be populated manually or via Subversion or CVS, e.g. if you have a module thingy, with a file thingy/__init__.py: ___version___ = '0.2.0' You can then import this version in setup.py: from distutils.core import setup import thingy setup(name='thingy', version=thingy.__version__, py_modules=['thingy'], )
Adding Version Control / Numbering (?) to Python Project
With my Java projects at present, I have full version control by declaring it as a Maven project. However, I now have a Python project that I'm about to tag 0.2.0 which has no version control. Therefore, should I come across this code at a later date, I won't know what version it is. How do I add version control to a Python project, in the same way Maven does it for Java?
[ "First, maven is a build tool and has nothing to do with version control. You don't need a build tool with Python -- there's nothing to \"build\". \nSome folks like to create .egg files for distribution. It's as close to a \"build\" as you get with Python. This is a simple setup.py file.\nYou can use SVN keyword replacement in your source like this. Remember to enable keyword replacement for the modules that will have this.\n__version__ = \"$Revision$\"\n\nThat will assure that the version or revision strings are forced into your source by SVN.\nYou should also include version keywords in your setup.py file.\n", "Create a distutils setup.py file. This is the Python equivalent to maven pom.xml, it looks something like this:\nfrom distutils.core import setup\nsetup(name='foo',\n version='1.0',\n py_modules=['foo'],\n )\n\nIf you want dependency management like maven, take a look at setuptools.\n", "Ants's answer is correct, but I would like to add that your modules can define a __version__ variable, according to PEP 8, which can be populated manually or via Subversion or CVS, e.g. if you have a module thingy, with a file thingy/__init__.py:\n___version___ = '0.2.0'\n\nYou can then import this version in setup.py:\nfrom distutils.core import setup\nimport thingy\nsetup(name='thingy',\n version=thingy.__version__,\n py_modules=['thingy'],\n )\n\n" ]
[ 5, 3, 2 ]
[]
[]
[ "python", "version_control" ]
stackoverflow_0001790235_python_version_control.txt
Q: Small "embeddable" database that can also be synced over the network? I am looking for a small database that can be "embedded" into my Python application without running a separate server, as one can do with SQLite or Metakit. I don't need an SQL database, in fact storing free-form data like Python dictionaries or JSON is preferable. The other requirement is that to be able to run an instance of the database on a server, and have instances of my application (clients) sync the database with the server (two-way), similar to what CouchDB replication can do. Is there a database that will do this? A: From what you describe, it sounds like you could get by using pickle and FTP. A: If you don't need an SQL database, what's wrong with CouchDB? You can spawn a local process to serve the DB, and you could easily write a server wrapper to allow only access from your app. I'm not sure about the access story, but I believe the latest Ubuntu uses CouchDB for synchronizeable user-level data. A: Seems like the perfect job for CouchDB: 2 way sync is incredibly easy, schema-less JSON documents are the native format. If you're using python, couchdb-python is a great way to work with CouchDB. A: Do you need clients to work offline and then resync when they reconnect to the network? I don't know if MongoDB can handle the offline client scenario, but if the client is online all the time, MongoDB might be a good solution too. It has pretty goode python support. Still a separate process, but perhaps easier to get running on Windows than CouchDB. A: BerkeleyDB might be another option to check out, and it's lightweight enough. easy_install bsddb3 if you need a Python interface. A: HSQLDB does this, but unfortunately it's Java rather than Python. Firebird SQL might be closer to what you want, since it does seem to have a Python interface.
Small "embeddable" database that can also be synced over the network?
I am looking for a small database that can be "embedded" into my Python application without running a separate server, as one can do with SQLite or Metakit. I don't need an SQL database; in fact, storing free-form data like Python dictionaries or JSON is preferable. The other requirement is to be able to run an instance of the database on a server, and have instances of my application (clients) sync the database with the server (two-way), similar to what CouchDB replication can do. Is there a database that will do this?
[ "From what you describe, it sounds like you could get by using pickle and FTP.\n", "If you don't need an SQL database, what's wrong with CouchDB? You can spawn a local process to serve the DB, and you could easily write a server wrapper to allow only access from your app. I'm not sure about the access story, but I believe the latest Ubuntu uses CouchDB for synchronizeable user-level data.\n", "Seems like the perfect job for CouchDB: 2 way sync is incredibly easy, schema-less JSON documents are the native format. If you're using python, couchdb-python is a great way to work with CouchDB.\n", "Do you need clients to work offline and then resync when they reconnect to the network? I don't know if MongoDB can handle the offline client scenario, but if the client is online all the time, MongoDB might be a good solution too. It has pretty goode python support. Still a separate process, but perhaps easier to get running on Windows than CouchDB.\n", "BerkeleyDB might be another option to check out, and it's lightweight enough. easy_install bsddb3 if you need a Python interface.\n", "HSQLDB does this, but unfortunately it's Java rather than Python.\nFirebird SQL might be closer to what you want, since it does seem to have a Python interface.\n" ]
[ 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "couchdb", "database", "python", "sqlite" ]
stackoverflow_0001779287_couchdb_database_python_sqlite.txt
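To illustrate the CouchDB suggestions above, two-way replication from couchdb-python looks roughly like this (a sketch from memory — the database name and remote URL are invented, and the exact call signatures should be checked against the couchdb-python docs):

import couchdb

server = couchdb.Server('http://localhost:5984/')
if 'appdata' not in server:
    server.create('appdata')

# push local changes to the central server, then pull its changes back
server.replicate('appdata', 'http://central.example.com:5984/appdata')
server.replicate('http://central.example.com:5984/appdata', 'appdata')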
Q: Running average in Python Is there a pythonic way to build up a list that contains a running average of some function? After reading a fun little piece about Martians, black boxes, and the Cauchy distribution, I thought it would be fun to calculate a running average of the Cauchy distribution myself: import math import random def cauchy(location, scale): p = 0.0 while p == 0.0: p = random.random() return location + scale*math.tan(math.pi*(p - 0.5)) # is this next block of code a good way to populate running_avg? sum = 0 count = 0 max = 10 running_avg = [] while count < max: num = cauchy(3,1) sum += num count += 1 running_avg.append(sum/count) print running_avg # or do something else with it, besides printing I think that this approach works, but I'm curious if there might be a more elegant approach to building up that running_avg list than using loops and counters (e.g. list comprehensions). There are some related questions, but they address more complicated problems (small window size, exponential weighting) or aren't specific to Python: calculate exponential moving average in python How to efficiently calculate a running standard deviation? Calculating the Moving Average of a List A: You could write a generator: def running_average(): sum = 0 count = 0 while True: sum += cauchy(3,1) count += 1 yield sum/count Or, given a generator for Cauchy numbers and a utility function for a running sum generator, you can have a neat generator expression: # Cauchy numbers generator def cauchy_numbers(): while True: yield cauchy(3,1) # running sum utility function def running_sum(iterable): sum = 0 for x in iterable: sum += x yield sum # Running averages generator expression (** the neat part **) running_avgs = (sum/(i+1) for (i,sum) in enumerate(running_sum(cauchy_numbers()))) # goes on forever for avg in running_avgs: print avg # alternatively, take just the first 10 import itertools for avg in itertools.islice(running_avgs, 10): print avg A: You could use coroutines. They are similar to generators, but allows you to send in values. Coroutines was added in Python 2.5, so this won't work in versions before that. def running_average(): sum = 0.0 count = 0 value = yield(float('nan')) while True: sum += value count += 1 value = yield(sum/count) ravg = running_average() next(ravg) # advance the corutine to the first yield for i in xrange(10): avg = ravg.send(cauchy(3,1)) print 'Running average: %.6f' % (avg,) As a list comprehension: ravg = running_average() next(ravg) ravg_list = [ravg.send(cauchy(3,1)) for i in xrange(10)] Edits: Using the next() function instead of the it.next() method. This is so it also will work with Python 3. The next() function has also been back-ported to Python 2.6+. In Python 2.5, you can either replace the calls with it.next(), or define a next function yourself. (Thanks Adam Parkin) A: I've got two possible solutions here for you. Both are just generic running average functions that work on any list of numbers. (could be made to work with any iterable) Generator based: nums = [cauchy(3,1) for x in xrange(10)] def running_avg(numbers): for count in xrange(1, len(nums)+1): yield sum(numbers[:count])/count print list(running_avg(nums)) List Comprehension based (really the same code as the earlier): nums = [cauchy(3,1) for x in xrange(10)] print [sum(nums[:count])/count for count in xrange(1, len(nums)+1)] Generator-compatabile Generator based: Edit: This one I just tested to see if I could make my solution compatible with generators easily and what it's performance would be. 
This is what I came up with. def running_avg(numbers): sum = 0 for count, number in enumerate(numbers): sum += number yield sum/(count+1) See the performance stats below, well worth it. Performance characteristics: Edit: I also decided to test Orip's interesting use of multiple generators to see the impact on performance. Using timeit and the following (1,000,000 iterations 3 times): print "Generator based:", ', '.join(str(x) for x in Timer('list(running_avg(nums))', 'from __main__ import nums, running_avg').repeat()) print "LC based:", ', '.join(str(x) for x in Timer('[sum(nums[:count])/count for count in xrange(1, len(nums)+1)]', 'from __main__ import nums').repeat()) print "Orip's:", ', '.join(str(x) for x in Timer('list(itertools.islice(running_avgs, 10))', 'from __main__ import itertools, running_avgs').repeat()) print "Generator-compatabile Generator based:", ', '.join(str(x) for x in Timer('list(running_avg(nums))', 'from __main__ import nums, running_avg').repeat()) I get the following results: Generator based: 17.653908968, 17.8027219772, 18.0342400074 LC based: 14.3925321102, 14.4613749981, 14.4277560711 Orip's: 30.8035550117, 30.3142540455, 30.5146529675 Generator-compatabile Generator based: 3.55352187157, 3.54164409637, 3.59098005295 See comments for code: Orip's genEx based: 4.31488609314, 4.29926609993, 4.30518198013 Results are in seconds, and show the LC new generator-compatible generator method to be consistently faster, your results may vary though. I expect the massive difference between my original generator and the new one is the fact that the sum isn't calculated on the fly.
Running average in Python
Is there a pythonic way to build up a list that contains a running average of some function? After reading a fun little piece about Martians, black boxes, and the Cauchy distribution, I thought it would be fun to calculate a running average of the Cauchy distribution myself: import math import random def cauchy(location, scale): p = 0.0 while p == 0.0: p = random.random() return location + scale*math.tan(math.pi*(p - 0.5)) # is this next block of code a good way to populate running_avg? sum = 0 count = 0 max = 10 running_avg = [] while count < max: num = cauchy(3,1) sum += num count += 1 running_avg.append(sum/count) print running_avg # or do something else with it, besides printing I think that this approach works, but I'm curious if there might be a more elegant approach to building up that running_avg list than using loops and counters (e.g. list comprehensions). There are some related questions, but they address more complicated problems (small window size, exponential weighting) or aren't specific to Python: calculate exponential moving average in python How to efficiently calculate a running standard deviation? Calculating the Moving Average of a List
[ "You could write a generator:\ndef running_average():\n sum = 0\n count = 0\n while True:\n sum += cauchy(3,1)\n count += 1\n yield sum/count\n\nOr, given a generator for Cauchy numbers and a utility function for a running sum generator, you can have a neat generator expression:\n# Cauchy numbers generator\ndef cauchy_numbers():\n while True:\n yield cauchy(3,1)\n\n# running sum utility function\ndef running_sum(iterable):\n sum = 0\n for x in iterable:\n sum += x\n yield sum\n\n# Running averages generator expression (** the neat part **)\nrunning_avgs = (sum/(i+1) for (i,sum) in enumerate(running_sum(cauchy_numbers())))\n\n# goes on forever\nfor avg in running_avgs:\n print avg\n\n# alternatively, take just the first 10\nimport itertools\nfor avg in itertools.islice(running_avgs, 10):\n print avg\n\n", "You could use coroutines. They are similar to generators, but allows you to send in values. Coroutines was added in Python 2.5, so this won't work in versions before that.\ndef running_average():\n sum = 0.0\n count = 0\n value = yield(float('nan'))\n while True:\n sum += value\n count += 1\n value = yield(sum/count)\n\nravg = running_average()\nnext(ravg) # advance the corutine to the first yield\n\nfor i in xrange(10):\n avg = ravg.send(cauchy(3,1))\n print 'Running average: %.6f' % (avg,)\n\nAs a list comprehension:\nravg = running_average()\nnext(ravg)\nravg_list = [ravg.send(cauchy(3,1)) for i in xrange(10)]\n\nEdits:\n\nUsing the next() function instead of the it.next() method. This is so it also will work with Python 3. The next() function has also been back-ported to Python 2.6+.\nIn Python 2.5, you can either replace the calls with it.next(), or define a next function yourself.\n(Thanks Adam Parkin)\n\n", "I've got two possible solutions here for you. Both are just generic running average functions that work on any list of numbers. (could be made to work with any iterable)\nGenerator based:\nnums = [cauchy(3,1) for x in xrange(10)]\n\ndef running_avg(numbers):\n for count in xrange(1, len(nums)+1):\n yield sum(numbers[:count])/count\n\nprint list(running_avg(nums))\n\nList Comprehension based (really the same code as the earlier):\nnums = [cauchy(3,1) for x in xrange(10)]\n\nprint [sum(nums[:count])/count for count in xrange(1, len(nums)+1)]\n\nGenerator-compatabile Generator based:\nEdit: This one I just tested to see if I could make my solution compatible with generators easily and what it's performance would be. 
This is what I came up with.\ndef running_avg(numbers):\n sum = 0\n for count, number in enumerate(numbers):\n sum += number\n yield sum/(count+1)\n\nSee the performance stats below, well worth it.\nPerformance characteristics:\nEdit: I also decided to test Orip's interesting use of multiple generators to see the impact on performance.\nUsing timeit and the following (1,000,000 iterations 3 times):\nprint \"Generator based:\", ', '.join(str(x) for x in Timer('list(running_avg(nums))', 'from __main__ import nums, running_avg').repeat())\nprint \"LC based:\", ', '.join(str(x) for x in Timer('[sum(nums[:count])/count for count in xrange(1, len(nums)+1)]', 'from __main__ import nums').repeat())\nprint \"Orip's:\", ', '.join(str(x) for x in Timer('list(itertools.islice(running_avgs, 10))', 'from __main__ import itertools, running_avgs').repeat())\n\nprint \"Generator-compatabile Generator based:\", ', '.join(str(x) for x in Timer('list(running_avg(nums))', 'from __main__ import nums, running_avg').repeat())\n\nI get the following results:\nGenerator based: 17.653908968, 17.8027219772, 18.0342400074\nLC based: 14.3925321102, 14.4613749981, 14.4277560711\nOrip's: 30.8035550117, 30.3142540455, 30.5146529675\n\nGenerator-compatabile Generator based: 3.55352187157, 3.54164409637, 3.59098005295\n\nSee comments for code:\nOrip's genEx based: 4.31488609314, 4.29926609993, 4.30518198013 \n\nResults are in seconds, and show the LC new generator-compatible generator method to be consistently faster, your results may vary though. I expect the massive difference between my original generator and the new one is the fact that the sum isn't calculated on the fly. \n" ]
[ 15, 6, 4 ]
[]
[]
[ "list_comprehension", "moving_average", "python" ]
stackoverflow_0001790550_list_comprehension_moving_average_python.txt
Q: Compact Class DSL in python I want to have compact class based python DSLs in the following form: class MyClass(Static): z = 3 def _init_(cls, x=0): cls._x = x def set_x(cls, x): cls._x = x def print_x_plus_z(cls): print cls._x + cls.z @property def x(cls): return cls._x class MyOtherClass(MyClass): z = 6 def _init_(cls): MyClass._init_(cls, x=3) I don't want to write MyClass() and MyOtherClass() afterwards. Just want to get this working with only class definitions. MyClass.print_x_plus_z() c = MyOtherClass c.z = 5 c.print_x_plus_z() assert MyOtherClass.z == 5, "instances don't share the same values!" I used metaclasses and managed to get _init_, print_x and subclassing working properly, but properties don't work. Could anyone suggest better alternative? I'm using Python 2.4+ A: To give a class (as opposed to its instances) a property, you need to have that property object as an attribute of the class's metaclass (so you'll probably need to make a custom metaclass to avoid inflicting that property upon other classes with the same metaclass). Similarly for special methods such as __init__ -- if they're on the class they'd affect the instances (which you don't want to make) -- to have them affect the class, you need to have them on the (custom) metaclass. What are you trying to accomplish by programming everything "one metalevel up", i.e., never-instantiated class with custom metaclass rather than normal instances of a normal class? It just seems a slight amount of extra work for no returns;-).
Compact Class DSL in python
I want to have compact class based python DSLs in the following form: class MyClass(Static): z = 3 def _init_(cls, x=0): cls._x = x def set_x(cls, x): cls._x = x def print_x_plus_z(cls): print cls._x + cls.z @property def x(cls): return cls._x class MyOtherClass(MyClass): z = 6 def _init_(cls): MyClass._init_(cls, x=3) I don't want to write MyClass() and MyOtherClass() afterwards. Just want to get this working with only class definitions. MyClass.print_x_plus_z() c = MyOtherClass c.z = 5 c.print_x_plus_z() assert MyOtherClass.z == 5, "instances don't share the same values!" I used metaclasses and managed to get _init_, print_x and subclassing working properly, but properties don't work. Could anyone suggest better alternative? I'm using Python 2.4+
[ "To give a class (as opposed to its instances) a property, you need to have that property object as an attribute of the class's metaclass (so you'll probably need to make a custom metaclass to avoid inflicting that property upon other classes with the same metaclass). Similarly for special methods such as __init__ -- if they're on the class they'd affect the instances (which you don't want to make) -- to have them affect the class, you need to have them on the (custom) metaclass. What are you trying to accomplish by programming everything \"one metalevel up\", i.e., never-instantiated class with custom metaclass rather than normal instances of a normal class? It just seems a slight amount of extra work for no returns;-).\n" ]
[ 2 ]
[]
[]
[ "class", "dsl", "properties", "python", "singleton" ]
stackoverflow_0001790856_class_dsl_properties_python_singleton.txt
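A minimal sketch of the metaclass route described in the answer above (untested, Python 2.x): putting the property and the "methods" on a custom metaclass makes them act on the class itself, so the class never needs to be instantiated.

class Static(type):
    def __init__(cls, name, bases, namespace):
        super(Static, cls).__init__(name, bases, namespace)
        cls._x = 0

    def set_x(cls, x):
        cls._x = x

    @property
    def x(cls):
        return cls._x

class MyClass(object):
    __metaclass__ = Static
    z = 3

MyClass.set_x(5)
print MyClass.x + MyClass.z    # prints 8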
Q: Django, is possible to run two different versions? I have a server on which I have two sites built with Django and Python, one site is major site is build with an older version of django, the other with the newer release, I have upgraded to the new release and major aspects of my other site have broken, is it possible to tell the site to use a different version in say the python path? in the virtualhost? I am desperate for help! Some more info, it is on a linux and server users mod python, here is what I am trying with the vitrualhost <Location "/"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE website.settings SetEnv PYTHON_EGG_CACHE /var/cache/pyeggcache SetEnv PYTHONPATH "sys.path + ['usr/lib/python2.6/site-packages/django2']" PythonDebug On PythonPath "['/var/www/website_live/src'] + ['/var/www/webiste_live/src/website'] + sys.path" </Location> I have replaced the website name with 'website' my seperate version of django lives at /usr/lib/python2.6/site-packages/django2 A: When you have more than one site on a server, you should consider using something like virtualenv. Using that you can setup different virtual environments and place site specific packages and such in there, instead of messing up your site-packages folder. It also makes development a lot easier as you easily can setup these environments on your local with specific versions of whatever you use. This quickly becomes very handy if you use other apps, and this is something that Pinax uses very heavily. The easiest way to handle packages and versions is simply to create a requirements file. A: Yes, you could. I have blogged about this at length over here. A: Of course it is, but it will require a bit of nestling. It depends mainly on which server you use. The key point is the $PYTHONPATH. This variable stores, where Python looks for modules to embed. If you use import django.conf it really looks through all dirs in $PYTHONPATH and searches for a folder called django. So the key is to manipulate $PYTHONPATH depending on where the request goes to. If you happen to use mod_python and Apache, it could look like this: <VirtualHost *:80> DocumentRoot "/var/htdocs/old_django_project" ServerName old-django PythonPath "sys.path + ['/var/software/old_django']" </VirtualHost> <VirtualHost *:80> DocumentRoot "/var/htdocs/new_django_project" ServerName new-django PythonPath "sys.path + ['/var/software/new_django']" </VirtualHost> Then, visiting http://old-django/ brings you to the old django instance, and the new-django likewise. A: It is possible, but in my experience of an environment that used different versions of both Django and Python it tends to end up getting messy, especially if you have more than one developer working on the projects. Each developer then needs to maintain two versions of Django and remember which features they can and cannot use. A: I use Wigwam for this sort of thing. It's a heavy-handed approach — there's a separate build of each of Apahce, Python, Django, etc. — but it works quite well.
Django, is it possible to run two different versions?
I have a server on which I have two sites built with Django and Python; one site, the major site, is built with an older version of Django, the other with the newer release. I have upgraded to the new release and major aspects of my other site have broken. Is it possible to tell the site to use a different version in, say, the python path? In the virtualhost? I am desperate for help! Some more info: it is on Linux and the server uses mod_python. Here is what I am trying with the virtualhost <Location "/"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE website.settings SetEnv PYTHON_EGG_CACHE /var/cache/pyeggcache SetEnv PYTHONPATH "sys.path + ['usr/lib/python2.6/site-packages/django2']" PythonDebug On PythonPath "['/var/www/website_live/src'] + ['/var/www/webiste_live/src/website'] + sys.path" </Location> I have replaced the website name with 'website'; my separate version of Django lives at /usr/lib/python2.6/site-packages/django2
[ "When you have more than one site on a server, you should consider using something like virtualenv.\nUsing that you can setup different virtual environments and place site specific packages and such in there, instead of messing up your site-packages folder. It also makes development a lot easier as you easily can setup these environments on your local with specific versions of whatever you use.\nThis quickly becomes very handy if you use other apps, and this is something that Pinax uses very heavily. The easiest way to handle packages and versions is simply to create a requirements file.\n", "Yes, you could. I have blogged about this at length over here.\n", "Of course it is, but it will require a bit of nestling. It depends mainly on which server you use.\nThe key point is the $PYTHONPATH. This variable stores, where Python looks for modules to embed. If you use\nimport django.conf\n\nit really looks through all dirs in $PYTHONPATH and searches for a folder called django.\nSo the key is to manipulate $PYTHONPATH depending on where the request goes to. If you happen to use mod_python and Apache, it could look like this:\n<VirtualHost *:80>\n DocumentRoot \"/var/htdocs/old_django_project\"\n ServerName old-django\n PythonPath \"sys.path + ['/var/software/old_django']\"\n</VirtualHost>\n\n<VirtualHost *:80>\n DocumentRoot \"/var/htdocs/new_django_project\"\n ServerName new-django\n PythonPath \"sys.path + ['/var/software/new_django']\"\n</VirtualHost>\n\nThen, visiting http://old-django/ brings you to the old django instance, and the new-django likewise.\n", "It is possible, but in my experience of an environment that used different versions of both Django and Python it tends to end up getting messy, especially if you have more than one developer working on the projects. Each developer then needs to maintain two versions of Django and remember which features they can and cannot use.\n", "I use Wigwam for this sort of thing. It's a heavy-handed approach — there's a separate build of each of Apahce, Python, Django, etc. — but it works quite well.\n" ]
[ 7, 5, 1, 0, 0 ]
[]
[]
[ "django", "hosting", "python", "shared_hosting" ]
stackoverflow_0001789285_django_hosting_python_shared_hosting.txt
Q: Python: Invalid Syntax with test data using Pyparser Using pyparser, I am trying to create a very simple parser for the S-Expression language. I have written a very small grammar. Here is my code: from pyparsing import * alphaword = Word(alphas) integer = Word(nums) sexp = Forward() LPAREN = Suppress("(") RPAREN = Suppress(")") sexp << ( alphaword | integer | ( LPAREN + ZeroOrMore(sexp) + RPAREN) tests = """\ red 100 ( red 100 blue ) ( green ( ( 1 2 ) mauve ) plaid () ) """.splitlines() for t in tests: print t print sexp.parseString(t) print While looking at examples of this code it seems that everything is fine, however when running i get a syntax error for this line tests = """\ ^ I don't understand it. I would be grateful for any help A: parentheses on a previous line are not closed. sexp << ( alphaword | integer | ( LPAREN + ZeroOrMore(sexp) + RPAREN) Needs more )'s
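The fix is just the missing closing parenthesis on the grammar line; Python only reports the SyntaxError at the tests = line because it is still looking for that parenthesis when it gets there. The corrected line from the question would read:

sexp << ( alphaword | integer | ( LPAREN + ZeroOrMore(sexp) + RPAREN ) )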
Python: Invalid Syntax with test data using Pyparser
Using pyparser, I am trying to create a very simple parser for the S-Expression language. I have written a very small grammar. Here is my code: from pyparsing import * alphaword = Word(alphas) integer = Word(nums) sexp = Forward() LPAREN = Suppress("(") RPAREN = Suppress(")") sexp << ( alphaword | integer | ( LPAREN + ZeroOrMore(sexp) + RPAREN) tests = """\ red 100 ( red 100 blue ) ( green ( ( 1 2 ) mauve ) plaid () ) """.splitlines() for t in tests: print t print sexp.parseString(t) print While looking at examples of this code it seems that everything is fine, however when running i get a syntax error for this line tests = """\ ^ I don't understand it. I would be grateful for any help
[ "parentheses on a previous line are not closed.\nsexp << ( alphaword | integer | ( LPAREN + ZeroOrMore(sexp) + RPAREN)\n\nNeeds more )'s\n" ]
[ 4 ]
[]
[]
[ "pyparsing", "python", "syntax", "syntax_error" ]
stackoverflow_0001791269_pyparsing_python_syntax_syntax_error.txt
Q: Django ease of building a RESTful interface I'm looking for an excuse to learn Django for a new project that has come up. Typically I like to build RESTful server-side interfaces where a URL maps to resources that spits out data in some platform independent context, such as XML or JSON. This is rather straightforward to do without the use of frameworks, but some of them such as Ruby on Rails conveniently allow you to easily spit back XML to a client based on the type of URL you pass it, based on your existing model code. My question is, does something like Django have support for this? I've googled and found some 'RESTful' 3rd party code that can go on top of Django. Not sure if I'm too keen on that. If not Django, any other Python framework that's already built with this in mind so I do not have to reinvent the wheel as I already have in languages like PHP? A: This is probably pretty easy to do. URL mappings are easy to construct, for example: urlpatterns = patterns('books.views', (r'^books/$', 'index'), (r'^books/(\d+)/$', 'get')) Django supports model serialization, so it's easy to turn models into XML: from django.core import serializers from models import Book data = serializers.serialize("xml", Book.objects.all()) Combine the two with decorators and you can build fast, quick handlers: from django.http import HttpResponse from django.shortcuts import get_object_or_404 def xml_view(func): def wrapper(*args, **kwargs): result = func(*args, **kwargs) return HttpResponse(serializers.serialize("xml", result), mimetype="text/xml") return wrapper @xml_view def index(request): return Books.objects.all() @xml_view def get(request, id): return get_object_or_404(Book, pk=id) A: (I had to edit out the most obvious links.) +1 for piston - (link above). I had used apibuilder (Washington Times open source) in the past, but Piston works easier for me. The most difficult thing for me is in figuring out my URL structures for the API, and to help with the regular expressions. I've also used surlex which makes that chore much easier. Example, using this model for Group (from a timetable system we're working on): class Group(models.Model): """ Tree-like structure that holds groups that may have other groups as leaves. For example ``st01gp01`` is part of ``stage1``. This allows subgroups to work. The name is ``parents``, i.e.:: >>> stage1group01 = Group.objects.get(unique_name = 'St 1 Gp01') >>> stage1group01 >>> <Group: St 1 Gp01> # get the parents... >>> stage1group01.parents.all() >>> [<Group: Stage 1>] ``symmetrical`` on ``subgroup`` is needed to allow the 'parents' attribute to be 'visible'. 
""" subgroup = models.ManyToManyField("Group", related_name = "parents", symmetrical= False, blank=True) unique_name = models.CharField(max_length=255) name = models.CharField(max_length=255) academic_year = models.CharField(max_length=255) dept_id = models.CharField(max_length=255) class Meta: db_table = u'timetable_group' def __unicode__(self): return "%s" % self.name And this urls.py fragment (note that surlex allows regular expression macros to be set up easily): from surlex.dj import surl from surlex import register_macro from piston.resource import Resource from api.handlers import GroupHandler group_handler = Resource(GroupHandler) # add another macro to our 'surl' function # this picks up our module definitions register_macro('t', r'[\w\W ,-]+') urlpatterns = patterns('', # group handler # all groups url(r'^groups/$', group_handler), surl(r'^group/<id:#>/$', group_handler), surl(r'^group/<name:t>/$', group_handler),) Then this handler will look after JSON output (by default) and can also do XML and YAML. class GroupHandler(BaseHandler): """ Entry point for Group model """ allowed_methods = ('GET', ) model = Group fields = ('id', 'unique_name', 'name', 'dept_id', 'academic_year', 'subgroup') def read(self, request, id=None, name=None): base = Group.objects if id: print self.__class__, 'ID' try: return base.get(id=id) except ObjectDoesNotExist: return rc.NOT_FOUND except MultipleObjectsReturned: # Should never happen, since we're using a primary key. return rc.BAD_REQUEST else: if name: print self.__class__, 'Name' return base.filter(unique_name = name).all() else: print self.__class__, 'NO ID' return base.all() As you can see, most of the handler code is in figuring out what parameters are being passed in urlpatterns. Some example URLs are api/groups/, api/group/3301/ and api/group/st1gp01/ - all of which will output JSON. A: Take a look at Piston, it's a mini-framework for Django for creating RESTful APIs. A recent blog post by Eric Holscher provides some more insight on the PROs of using Piston: Large Problems in Django, Mostly Solved: APIs A: It can respond with any kind of data. JSON/XML/PDF/pictures/CSV... Django itself comes with a set of serializers. Edit I just had a look at at Piston — looks promising. Best feature: Stays out of your way. :) A: Regarding your comment about not liking 3rd party code - that's too bad because the pluggable apps are one of django's greatest features. Like others answered, piston will do most of the work for you. A: A little over a year ago, I wrote a REST web service in Django for a large Seattle company that does streaming media on the Internet. Django was excellent for the purpose. As "a paid nerd" observed, the Django URL config is wonderful: you can set up your URLs just the way you want them, and have it serve up the appropriate objects. The one thing I didn't like: the Django ORM has absolutely no support for binary BLOBs. If you want to serve up photos or something, you will need to keep them in a file system, and not in a database. Because we were using multiple servers, I had to choose between writing my own BLOB support or finding some replication framework that would keep all the servers up to date with the latest binary data. (I chose to write my own BLOB support. It wasn't very hard, so I was actually annoyed that the Django guys didn't do that work. There should be one, and preferably only one, obvious way to do something.) I really like the Django ORM. It makes the database part really easy; you don't need to know any SQL. 
(I don't like SQL and I do like Python, so it's a double win.) The "admin interface", which you get for free, gives you a great way to look through your data, and to poke data in during testing and development. I recommend Django without reservation.
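The xml_view decorator from the first answer can be pointed at JSON the same way, since django.core.serializers also ships a json serializer; a sketch reusing the Book model named above (note that serialize() wants an iterable, so the single object is wrapped in a list):

from django.core import serializers
from django.http import HttpResponse
from django.shortcuts import get_object_or_404
from models import Book

def json_view(func):
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        return HttpResponse(serializers.serialize("json", result),
                            mimetype="application/json")
    return wrapper

@json_view
def index(request):
    return Book.objects.all()

@json_view
def get(request, id):
    return [get_object_or_404(Book, pk=id)]   # wrapped so serialize() can iterate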
Django ease of building a RESTful interface
I'm looking for an excuse to learn Django for a new project that has come up. Typically I like to build RESTful server-side interfaces where a URL maps to resources that spits out data in some platform independent context, such as XML or JSON. This is rather straightforward to do without the use of frameworks, but some of them such as Ruby on Rails conveniently allow you to easily spit back XML to a client based on the type of URL you pass it, based on your existing model code. My question is, does something like Django have support for this? I've googled and found some 'RESTful' 3rd party code that can go on top of Django. Not sure if I'm too keen on that. If not Django, any other Python framework that's already built with this in mind so I do not have to reinvent the wheel as I already have in languages like PHP?
[ "This is probably pretty easy to do.\nURL mappings are easy to construct, for example:\nurlpatterns = patterns('books.views',\n (r'^books/$', 'index'),\n (r'^books/(\\d+)/$', 'get'))\n\nDjango supports model serialization, so it's easy to turn models into XML:\nfrom django.core import serializers\nfrom models import Book\n\ndata = serializers.serialize(\"xml\", Book.objects.all())\n\nCombine the two with decorators and you can build fast, quick handlers:\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404\n\ndef xml_view(func):\n def wrapper(*args, **kwargs):\n result = func(*args, **kwargs)\n return HttpResponse(serializers.serialize(\"xml\", result),\n mimetype=\"text/xml\")\n return wrapper\n\n@xml_view\ndef index(request):\n return Books.objects.all()\n\n@xml_view\ndef get(request, id):\n return get_object_or_404(Book, pk=id)\n\n", "(I had to edit out the most obvious links.)\n+1 for piston - (link above). I had used apibuilder (Washington Times open source) in the past, but Piston works easier for me. The most difficult thing for me is in figuring out my URL structures for the API, and to help with the regular expressions. I've also used surlex which makes that chore much easier.\nExample, using this model for Group (from a timetable system we're working on):\nclass Group(models.Model):\n \"\"\"\n Tree-like structure that holds groups that may have other groups as leaves. \n For example ``st01gp01`` is part of ``stage1``.\n This allows subgroups to work. The name is ``parents``, i.e.::\n\n >>> stage1group01 = Group.objects.get(unique_name = 'St 1 Gp01')\n >>> stage1group01\n >>> <Group: St 1 Gp01>\n # get the parents...\n >>> stage1group01.parents.all()\n >>> [<Group: Stage 1>]\n\n ``symmetrical`` on ``subgroup`` is needed to allow the 'parents' attribute to be 'visible'.\n \"\"\"\n subgroup = models.ManyToManyField(\"Group\", related_name = \"parents\", symmetrical= False, blank=True)\n unique_name = models.CharField(max_length=255)\n name = models.CharField(max_length=255)\n academic_year = models.CharField(max_length=255)\n dept_id = models.CharField(max_length=255)\n class Meta:\n db_table = u'timetable_group'\n def __unicode__(self):\n return \"%s\" % self.name\n\nAnd this urls.py fragment (note that surlex allows regular expression macros to be set up easily):\nfrom surlex.dj import surl\nfrom surlex import register_macro\nfrom piston.resource import Resource\nfrom api.handlers import GroupHandler\ngroup_handler = Resource(GroupHandler)\n\n# add another macro to our 'surl' function\n# this picks up our module definitions\nregister_macro('t', r'[\\w\\W ,-]+')\n\nurlpatterns = patterns('',\n# group handler\n# all groups\nurl(r'^groups/$', group_handler),\nsurl(r'^group/<id:#>/$', group_handler),\nsurl(r'^group/<name:t>/$', group_handler),)\n\nThen this handler will look after JSON output (by default) and can also do XML and YAML.\nclass GroupHandler(BaseHandler):\n \"\"\"\n Entry point for Group model\n \"\"\"\n\n allowed_methods = ('GET', )\n model = Group\n fields = ('id', 'unique_name', 'name', 'dept_id', 'academic_year', 'subgroup')\n\n def read(self, request, id=None, name=None):\n base = Group.objects\n if id:\n print self.__class__, 'ID'\n try:\n return base.get(id=id)\n except ObjectDoesNotExist:\n return rc.NOT_FOUND\n except MultipleObjectsReturned: # Should never happen, since we're using a primary key.\n return rc.BAD_REQUEST\n else:\n if name:\n print self.__class__, 'Name'\n return base.filter(unique_name = name).all()\n else:\n 
print self.__class__, 'NO ID'\n return base.all()\n\nAs you can see, most of the handler code is in figuring out what parameters are being passed in urlpatterns.\nSome example URLs are api/groups/, api/group/3301/ and api/group/st1gp01/ - all of which will output JSON.\n", "Take a look at Piston, it's a mini-framework for Django for creating RESTful APIs.\nA recent blog post by Eric Holscher provides some more insight on the PROs of using Piston: Large Problems in Django, Mostly Solved: APIs\n", "It can respond with any kind of data. JSON/XML/PDF/pictures/CSV... \nDjango itself comes with a set of serializers.\nEdit\nI just had a look at at Piston — looks promising. Best feature: \n\nStays out of your way.\n\n:)\n", "Regarding your comment about not liking 3rd party code - that's too bad because the pluggable apps are one of django's greatest features. Like others answered, piston will do most of the work for you. \n", "A little over a year ago, I wrote a REST web service in Django for a large Seattle company that does streaming media on the Internet.\nDjango was excellent for the purpose. As \"a paid nerd\" observed, the Django URL config is wonderful: you can set up your URLs just the way you want them, and have it serve up the appropriate objects.\nThe one thing I didn't like: the Django ORM has absolutely no support for binary BLOBs. If you want to serve up photos or something, you will need to keep them in a file system, and not in a database. Because we were using multiple servers, I had to choose between writing my own BLOB support or finding some replication framework that would keep all the servers up to date with the latest binary data. (I chose to write my own BLOB support. It wasn't very hard, so I was actually annoyed that the Django guys didn't do that work. There should be one, and preferably only one, obvious way to do something.)\nI really like the Django ORM. It makes the database part really easy; you don't need to know any SQL. (I don't like SQL and I do like Python, so it's a double win.) The \"admin interface\", which you get for free, gives you a great way to look through your data, and to poke data in during testing and development.\nI recommend Django without reservation.\n" ]
[ 15, 4, 3, 2, 2, 1 ]
[]
[]
[ "django", "python", "rest" ]
stackoverflow_0001732452_django_python_rest.txt
Q: Genshi: Nested for loops I need to generate a HTML using a Genshi template. The Html is, basicaly a very long html with tables. The data comes in a simple CSV, so, i read it with python, i put it into a list[] and then i call the template and send the variable (the list) Actually i solved it by doing something like this in the template: <html> <?python> for i in t: for e in tp[i]: print "<SOME_HTML_TAGS>" </?> </html> But, the idea is to use the Genshi funcions (such as loops, etc) I read the manual, and I see that a simple for is done like this: <li py:for="fruit in fruits"> I like ${fruit}s </li> But, how do i do a loop inside a loop (nested for loops)??? A: <table> <tr py:for="i in t"> <td py:for="e in tp[i]"> ${e}s </td> </tr> </table>
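A small end-to-end sketch of the accepted idea, with the CSV already read into a plain list of rows (tp is just a list of lists here, so t can simply be the row indices):

from genshi.template import MarkupTemplate

template_source = """<table xmlns:py="http://genshi.edgewall.org/">
  <tr py:for="i in t">
    <td py:for="e in tp[i]">${e}</td>
  </tr>
</table>"""

tp = [['a', 'b', 'c'], ['d', 'e', 'f']]   # e.g. rows parsed from the CSV
t = range(len(tp))                        # outer loop walks the row indices

tmpl = MarkupTemplate(template_source)
print tmpl.generate(t=t, tp=tp).render('html')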
Genshi: Nested for loops
I need to generate a HTML using a Genshi template. The Html is, basicaly a very long html with tables. The data comes in a simple CSV, so, i read it with python, i put it into a list[] and then i call the template and send the variable (the list) Actually i solved it by doing something like this in the template: <html> <?python> for i in t: for e in tp[i]: print "<SOME_HTML_TAGS>" </?> </html> But, the idea is to use the Genshi funcions (such as loops, etc) I read the manual, and I see that a simple for is done like this: <li py:for="fruit in fruits"> I like ${fruit}s </li> But, how do i do a loop inside a loop (nested for loops)???
[ "<table>\n<tr py:for=\"i in t\"> \n<td py:for=\"e in tp[i]\">\n${e}s\n</td>\n</tr>\n</table>\n\n" ]
[ 2 ]
[]
[]
[ "csv", "genshi", "python" ]
stackoverflow_0001791252_csv_genshi_python.txt
Q: How do I build the 32-bit pypy JIT in 64-bit Linux? Pypy's JIT will compile on 64-bit Linux ever since it grew 64-bit support, but what if I wanted to compile a 32-bit version? How should I cross-compile a 32-bit JITting pypy on that machine? A: You could try compiling it in a chroot.
How do I build the 32-bit pypy JIT in 64-bit Linux?
Pypy's JIT will compile on 64-bit Linux ever since it grew 64-bit support, but what if I wanted to compile a 32-bit version? How should I cross-compile a 32-bit JITting pypy on that machine?
[ "You could try compiling it in a chroot.\n" ]
[ 2 ]
[]
[]
[ "pypy", "python" ]
stackoverflow_0001785428_pypy_python.txt
Q: Creating container relationship in declarative SQLAlchemy My Python / SQLAlchemy application manages a set of nodes, all derived from a base class Node. I'm using SQLAlchemy's polymorphism features to manage the nodes in a SQLite3 table. Here's the definition of the base Node class: class Node(db.Base): __tablename__ = 'nodes' id = Column(Integer, primary_key=True) node_type = Column(String(40)) title = Column(UnicodeText) __mapper_args__ = {'polymorphic_on': node_type} and, as an example, one of the derived classes, NoteNode: class NoteNode(Node): __mapper_args__ = {'polymorphic_identity': 'note'} __tablename__ = 'nodes_note' id = Column(None,ForeignKey('nodes.id'),primary_key=True) content_type = Column(String) content = Column(UnicodeText) Now I need a new kind of node, ListNode, that is an ordered container of zero or more Nodes. When I load a ListNode, I want it to have its ID and title (from the base Node class) along with a collection of its contained (child) nodes. A Node may appear in more than one ListNode, so it's not a proper hierarchy. I would create them along these lines: note1 = NoteNode(title=u"Note 1", content_type="text/text", content=u"I am note #1") session.add(note1) note2 = NoteNode(title=u"Note 2", content_type="text/text", content=u"I am note #2") session.add(note2) list1 = ListNode(title=u"My List") list1.items = [note1,note2] session.add(list1) The list of children should only consist of Node objects -- that is, all I need is their base class stuff. They shouldn't be fully realized into the specialized classes (so I don't get the whole graph at once, among other reasons). I started along the following lines, cobbling together bits and pieces I found in various places without a complete understanding of what was going on, so this may not make much sense: class ListNode(Node): __mapper_args__ = {'polymorphic_identity': 'list', 'inherit_condition':id==Node.id} __tablename__ = 'nodes_list_contents' id = Column(None, ForeignKey('nodes.id'), primary_key=True) item_id = Column(None, ForeignKey('nodes.id'), primary_key=True) items = relation(Node, primaryjoin="Node.id==ListNode.item_id") This approach fails in several ways: it doesn't appear to allow an empty ListNode, and setting the items attribute to a list results in SQLAlchemy complaining that 'list' object has no attribute '_sa_instance_state'. Not surprisingly, hours of random mutations on this theme haven't given any good results, I have limited experience in SQLAlchemy but really want to get this working soon. I'd very much appreciate any advice or direction you can offer. Thanks in advance! 
A: You need an additional table for many-to-many relation: nodes_list_nodes = Table( 'nodes_list_nodes', metadata, Column('parent_id', None, ForeignKey('nodes_list.id'), nullable=False), Column('child_id', None, ForeignKey(Node.id), nullable=False), PrimaryKeyConstraint('parent_id', 'child_id'), ) class ListNode(Node): __mapper_args__ = {'polymorphic_identity': 'list'} __tablename__ = 'nodes_list' id = Column(None, ForeignKey('nodes.id'), primary_key=True) items = relation(Node, secondary=nodes_list_nodes) Update: below is an example for ordered list using association_proxy: from sqlalchemy.orm.collections import InstrumentedList from sqlalchemy.ext.associationproxy import association_proxy class ListNodeAssociation(Base): __tablename__ = 'nodes_list_nodes' parent_id = Column(None, ForeignKey('nodes_list.id'), primary_key=True) child_id = Column(None, ForeignKey(Node.id), primary_key=True) order = Column(Integer, nullable=False, default=0) child = relation(Node) __table_args__ = ( PrimaryKeyConstraint('parent_id', 'child_id'), {}, ) class OrderedList(InstrumentedList): def append(self, item): if self: item.order = self[-1].order+1 else: item.order = 1 InstrumentedList.append(self, item) class ListNode(Node): __mapper_args__ = {'polymorphic_identity': 'list'} __tablename__ = 'nodes_list' id = Column(None, ForeignKey('nodes.id'), primary_key=True) _items = relation(ListNodeAssociation, order_by=ListNodeAssociation.order, collection_class=OrderedList, cascade='all, delete-orphan') items = association_proxy( '_items', 'child', creator=lambda item: ListNodeAssociation(child=item))
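A hypothetical usage sketch of the ordered variant above, assuming the NoteNode class from the question and an already-configured session; the association proxy accepts plain Node objects and the OrderedList fills in the order column:

note1 = NoteNode(title=u"Note 1", content_type="text/text", content=u"first")
note2 = NoteNode(title=u"Note 2", content_type="text/text", content=u"second")

todo = ListNode(title=u"My List")
todo.items.append(note1)     # proxy wraps each Node in a ListNodeAssociation
todo.items.append(note2)

session.add(todo)
session.commit()

for child in todo.items:     # children come back ordered by the order column
    print child.title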
Creating container relationship in declarative SQLAlchemy
My Python / SQLAlchemy application manages a set of nodes, all derived from a base class Node. I'm using SQLAlchemy's polymorphism features to manage the nodes in a SQLite3 table. Here's the definition of the base Node class: class Node(db.Base): __tablename__ = 'nodes' id = Column(Integer, primary_key=True) node_type = Column(String(40)) title = Column(UnicodeText) __mapper_args__ = {'polymorphic_on': node_type} and, as an example, one of the derived classes, NoteNode: class NoteNode(Node): __mapper_args__ = {'polymorphic_identity': 'note'} __tablename__ = 'nodes_note' id = Column(None,ForeignKey('nodes.id'),primary_key=True) content_type = Column(String) content = Column(UnicodeText) Now I need a new kind of node, ListNode, that is an ordered container of zero or more Nodes. When I load a ListNode, I want it to have its ID and title (from the base Node class) along with a collection of its contained (child) nodes. A Node may appear in more than one ListNode, so it's not a proper hierarchy. I would create them along these lines: note1 = NoteNode(title=u"Note 1", content_type="text/text", content=u"I am note #1") session.add(note1) note2 = NoteNode(title=u"Note 2", content_type="text/text", content=u"I am note #2") session.add(note2) list1 = ListNode(title=u"My List") list1.items = [note1,note2] session.add(list1) The list of children should only consist of Node objects -- that is, all I need is their base class stuff. They shouldn't be fully realized into the specialized classes (so I don't get the whole graph at once, among other reasons). I started along the following lines, cobbling together bits and pieces I found in various places without a complete understanding of what was going on, so this may not make much sense: class ListNode(Node): __mapper_args__ = {'polymorphic_identity': 'list', 'inherit_condition':id==Node.id} __tablename__ = 'nodes_list_contents' id = Column(None, ForeignKey('nodes.id'), primary_key=True) item_id = Column(None, ForeignKey('nodes.id'), primary_key=True) items = relation(Node, primaryjoin="Node.id==ListNode.item_id") This approach fails in several ways: it doesn't appear to allow an empty ListNode, and setting the items attribute to a list results in SQLAlchemy complaining that 'list' object has no attribute '_sa_instance_state'. Not surprisingly, hours of random mutations on this theme haven't given any good results, I have limited experience in SQLAlchemy but really want to get this working soon. I'd very much appreciate any advice or direction you can offer. Thanks in advance!
[ "You need an additional table for many-to-many relation:\nnodes_list_nodes = Table(\n 'nodes_list_nodes', metadata,\n Column('parent_id', None, ForeignKey('nodes_list.id'), nullable=False),\n Column('child_id', None, ForeignKey(Node.id), nullable=False),\n PrimaryKeyConstraint('parent_id', 'child_id'),\n)\n\nclass ListNode(Node):\n __mapper_args__ = {'polymorphic_identity': 'list'}\n __tablename__ = 'nodes_list'\n id = Column(None, ForeignKey('nodes.id'), primary_key=True)\n items = relation(Node, secondary=nodes_list_nodes)\n\nUpdate: below is an example for ordered list using association_proxy:\nfrom sqlalchemy.orm.collections import InstrumentedList\nfrom sqlalchemy.ext.associationproxy import association_proxy\n\n\nclass ListNodeAssociation(Base):\n __tablename__ = 'nodes_list_nodes'\n parent_id = Column(None, ForeignKey('nodes_list.id'), primary_key=True)\n child_id = Column(None, ForeignKey(Node.id), primary_key=True)\n order = Column(Integer, nullable=False, default=0)\n child = relation(Node)\n __table_args__ = (\n PrimaryKeyConstraint('parent_id', 'child_id'),\n {},\n )\n\n\nclass OrderedList(InstrumentedList):\n\n def append(self, item):\n if self:\n item.order = self[-1].order+1\n else:\n item.order = 1\n InstrumentedList.append(self, item)\n\n\nclass ListNode(Node):\n __mapper_args__ = {'polymorphic_identity': 'list'}\n __tablename__ = 'nodes_list'\n id = Column(None, ForeignKey('nodes.id'), primary_key=True)\n _items = relation(ListNodeAssociation,\n order_by=ListNodeAssociation.order,\n collection_class=OrderedList,\n cascade='all, delete-orphan')\n items = association_proxy(\n '_items', 'child',\n creator=lambda item: ListNodeAssociation(child=item))\n\n" ]
[ 5 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001791713_python_sqlalchemy.txt
Q: Python Class with integer emulation Given is the following example: class Foo(object): def __init__(self, value=0): self.value=value def __int__(self): return self.value I want to have a class Foo, which acts as an integer (or float). So I want to do the following things: f=Foo(3) print int(f)+5 # is working print f+5 # TypeError: unsupported operand type(s) for +: 'Foo' and 'int' The first statement print int(f)+5 is working, cause there are two integers. The second one is failing, because I have to implement __add__ to do this operation with my class. So to implement the integer behaviour, I have to implement all the integer emulating methods. How could I get around this. I tried to inherit from int, but this attempt was not successful. Update Inheriting from int fails, if you want to use a __init__: class Foo(int): def __init__(self, some_argument=None, value=0): self.value=value # do some stuff def __int__(self): return int(self.value) If you then call: f=Foo(some_argument=3) you get: TypeError: 'some_argument' is an invalid keyword argument for this function Tested with Python 2.5 and 2.6 A: In Python 2.4+ inheriting from int works: class MyInt(int):pass f=MyInt(3) assert f + 5 == 8 A: You need to override __new__, not __init__: class Foo(int): def __new__(cls, some_argument=None, value=0): i = int.__new__(cls, value) i._some_argument = some_argument return i def print_some_argument(self): print self._some_argument Now your class work as expected: >>> f = Foo(some_argument="I am a customized int", value=10) >>> f 10 >>> f + 8 18 >>> f * 0.25 2.5 >>> f.print_some_argument() I am a customized int More information about overriding new can be found in Unifying types and classes in Python 2.2. A: Try to use an up-to-date version of python. Your code works in 2.6.1.
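If inheriting from int is not wanted (say, because the value should stay mutable), a hedged alternative sketch is to implement the numeric hooks directly; only addition is shown, but the same pattern extends to the other operators:

class Foo(object):
    def __init__(self, value=0):
        self.value = value
    def __int__(self):
        return self.value
    def __add__(self, other):
        return self.value + int(other)   # Foo(3) + 5
    __radd__ = __add__                   # 5 + Foo(3)

f = Foo(3)
assert f + 5 == 8
assert 5 + f == 8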
Python Class with integer emulation
Given is the following example: class Foo(object): def __init__(self, value=0): self.value=value def __int__(self): return self.value I want to have a class Foo, which acts as an integer (or float). So I want to do the following things: f=Foo(3) print int(f)+5 # is working print f+5 # TypeError: unsupported operand type(s) for +: 'Foo' and 'int' The first statement print int(f)+5 is working, cause there are two integers. The second one is failing, because I have to implement __add__ to do this operation with my class. So to implement the integer behaviour, I have to implement all the integer emulating methods. How could I get around this. I tried to inherit from int, but this attempt was not successful. Update Inheriting from int fails, if you want to use a __init__: class Foo(int): def __init__(self, some_argument=None, value=0): self.value=value # do some stuff def __int__(self): return int(self.value) If you then call: f=Foo(some_argument=3) you get: TypeError: 'some_argument' is an invalid keyword argument for this function Tested with Python 2.5 and 2.6
[ "In Python 2.4+ inheriting from int works:\nclass MyInt(int):pass\nf=MyInt(3)\nassert f + 5 == 8\n\n", "You need to override __new__, not __init__:\nclass Foo(int):\n def __new__(cls, some_argument=None, value=0):\n i = int.__new__(cls, value)\n i._some_argument = some_argument\n return i\n\n def print_some_argument(self):\n print self._some_argument\n\nNow your class work as expected:\n>>> f = Foo(some_argument=\"I am a customized int\", value=10)\n>>> f\n10\n>>> f + 8\n18\n>>> f * 0.25\n2.5\n>>> f.print_some_argument()\nI am a customized int\n\nMore information about overriding new can be found in Unifying types and classes in Python 2.2.\n", "Try to use an up-to-date version of python. Your code works in 2.6.1.\n" ]
[ 7, 6, 2 ]
[]
[]
[ "emulation", "floating_point", "integer", "python" ]
stackoverflow_0001638229_emulation_floating_point_integer_python.txt
Q: Python: dynamic class generation: overwrite members I have a python class hierarchy, that I want to extend at runtime. Furthermore every class in this hierarchy has a static attribute 'dict', that I want to overwrite in every subclass. Simplyfied it looks like this: 'dict' is a protected (public but with leading underscore) member class A(object): _dict = {} @classmethod def getdict(cls): return cls._dict @classmethod def setval(cls, name, val): cls._dict[name] = val @classmethod def addchild(cls, name): return type(name, (cls, ), { '_dict' : {} }) B = A.addchild('B') A.setval(1, 5) print A.getdict() # prints: {1: 5} # like expected print B.getdict() # prints: {} # like expected This works just like expected. The question now is: why doesnt it work anymore if I declare the attribute private: Now the same thing with 'dict' beeing a private member class C(object): __dict = {} @classmethod def getdict(cls): return cls.__dict @classmethod def setval(cls, name, val): cls.__dict[name] = val @classmethod def addchild(cls, name): return type(name, (cls, ), { '__dict' : {} }) D = C.addchild('D') C.setval(1, 5) print C.getdict() # prints: {1: 5} # like expected print D.getdict() # prints: {1: 5} # why!? Suddenly D, the subclass of C, has the same values in 'dict' as its superclass!? Could anyone be so kind and explain to me, what the reason for this is? Thanks in advance! A: phild, as you know, when you prefix an attribute name with double-underscore __, the python interpreter automagically changes (mangles) attribute name from __attribute to _CLS__attribute, where CLS is the class name. However, when you say return type(name, (cls, ), { '__dict' : {} }) the keys in the dictionary { '__dict' : {} } do not get mangled. __dict remains the same. Thus D ends up with both D._C__dict and D.__dict: (Pdb) dir(D) ['_C__dict', '__class__', '__delattr__', '__dict', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'addchild', 'getdict', 'setval'] D._C__dict refers to C's class attribute. So when you run C.setval(1, 5) you are changing D._C__dict as well as C._C__dict. They are one and the same. A: Here is a chapter in documentation about "private" attributes. And I commented you class definition to make it more clear: class C(object): __dict = {} # This creates C.__dict__['_C__dict'] @classmethod def getdict(cls): return cls.__dict # Uses cls.__dict__['_C__dict'] @classmethod def setval(cls, name, val): cls.__dict[name] = val # Uses cls.__dict__['_C__dict'] @classmethod def addchild(cls, name): return type(name, (cls, ), { '__dict' : {} }) # Creates child.__dict__['__dict'] I.e. all childs have their own __dict attribute, but only one from base class is used. A: The Java or C++ concepts of "protected" and "private" do not apply. The naming convention Python does a little, but not what you're imagining. The __name does some name mangling, making it hard to access because the name is obscured. Your _dict and __dict are simply class-level attributes that are simply shared by all instances of the classes.
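The mangling can be checked directly; a minimal sketch showing that the class body turns __dict into _C__dict while a key handed to type() is stored untouched:

class C(object):
    __dict = {}                           # stored as _C__dict (mangled)

D = type('D', (C,), {'__dict': {}})       # dictionary keys are not mangled

print '_C__dict' in C.__dict__   # True
print '__dict' in D.__dict__     # True  - the literal, unmangled key
print '_C__dict' in D.__dict__   # False - D only inherits C's mangled attribute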
Python: dynamic class generation: overwrite members
I have a python class hierarchy, that I want to extend at runtime. Furthermore every class in this hierarchy has a static attribute 'dict', that I want to overwrite in every subclass. Simplyfied it looks like this: 'dict' is a protected (public but with leading underscore) member class A(object): _dict = {} @classmethod def getdict(cls): return cls._dict @classmethod def setval(cls, name, val): cls._dict[name] = val @classmethod def addchild(cls, name): return type(name, (cls, ), { '_dict' : {} }) B = A.addchild('B') A.setval(1, 5) print A.getdict() # prints: {1: 5} # like expected print B.getdict() # prints: {} # like expected This works just like expected. The question now is: why doesnt it work anymore if I declare the attribute private: Now the same thing with 'dict' beeing a private member class C(object): __dict = {} @classmethod def getdict(cls): return cls.__dict @classmethod def setval(cls, name, val): cls.__dict[name] = val @classmethod def addchild(cls, name): return type(name, (cls, ), { '__dict' : {} }) D = C.addchild('D') C.setval(1, 5) print C.getdict() # prints: {1: 5} # like expected print D.getdict() # prints: {1: 5} # why!? Suddenly D, the subclass of C, has the same values in 'dict' as its superclass!? Could anyone be so kind and explain to me, what the reason for this is? Thanks in advance!
[ "phild, as you know, when you prefix an attribute name with double-underscore __, the python interpreter automagically changes (mangles) attribute name from __attribute to _CLS__attribute, where CLS is the class name.\nHowever, when you say\nreturn type(name, (cls, ), { '__dict' : {} })\nthe keys in the dictionary { '__dict' : {} } do not get mangled. __dict remains the same.\nThus D ends up with both D._C__dict and D.__dict: \n(Pdb) dir(D)\n['_C__dict', '__class__', '__delattr__', '__dict', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'addchild', 'getdict', 'setval']\n\nD._C__dict refers to C's class attribute. So when you run\nC.setval(1, 5)\nyou are changing D._C__dict as well as C._C__dict. They are one and the same.\n", "Here is a chapter in documentation about \"private\" attributes. And I commented you class definition to make it more clear:\nclass C(object):\n __dict = {} # This creates C.__dict__['_C__dict']\n\n @classmethod\n def getdict(cls):\n return cls.__dict # Uses cls.__dict__['_C__dict'] \n\n @classmethod\n def setval(cls, name, val):\n cls.__dict[name] = val # Uses cls.__dict__['_C__dict'] \n\n @classmethod\n def addchild(cls, name):\n return type(name, (cls, ), { '__dict' : {} }) # Creates child.__dict__['__dict']\n\nI.e. all childs have their own __dict attribute, but only one from base class is used.\n", "The Java or C++ concepts of \"protected\" and \"private\" do not apply. The naming convention Python does a little, but not what you're imagining.\nThe __name does some name mangling, making it hard to access because the name is obscured.\nYour _dict and __dict are simply class-level attributes that are simply shared by all instances of the classes.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "dynamic_class_creation", "inheritance", "name_mangling", "python" ]
stackoverflow_0001792104_dynamic_class_creation_inheritance_name_mangling_python.txt
Q: Looping through files in a folder I'm fairly new when it comes to programming, and have started out learning python. What I want to do is to recolour sprites for a game, and I am given the original colours, followed by what they are to be turned into. Each sprite has between 20 and 60 angles, so looping through each one in the folder for each colour is probably the way to go for me. My code goes thusly; import media import sys import os.path original_colors = str(raw_input('Please enter the original RGB component, separated ONLY by a single space: ')) new_colors = str(raw_input('Please insert the new RGB component, separated ONLY by a single space: ')) original_list = original_colors.split(' ') new_list = new_colors.split(' ') folder = 'C:\Users\Spriting\blue' if original_colors == 'quit' or new_colors == 'quit': sys.exit(0) else: while 1: for filename in os.listdir (folder): for pix in filename: if (media.get_red(pix) == int(original_list[0])) and (media.get_green(pix) == int(original_list[1])) and \ (media.get_blue(pix) == int(original_list[2])): media.set_red(pix, new_list[0]) media.set_green(pix, new_list[1]) media.set_blue(pix, new_list[2]) media.save(pic) But I keep getting an error on the pathname, and on pix being a string value (They're all pictures) Any help appreciated. A: os.listdir() returns a list of file names. Thus, filename is a string. You need to open the file before iterating on it, I guess. Also, be careful with backslashes in strings. They are mostly used for special escape sequences, so you need to escape them by doubling them. You could use the constant os.sep to be more portable, or even use os.path.join() : folder = os.path.join('C:\\', 'Users', 'Sprinting', 'blue') A: for pix in filename: iterates over the letters of the filename. So that's certainly not what you want. You'll probably want to replace that line by: with open(filename) as current_file: for pix in current_file: (assuming Python 2.6) and indent the rest of the loop accordingly. However, I'm not sure that the new for loop does what you want unless by pix you mean a line of text in the current file. If the files are binary picture files, you'll first need to read their contents correcty - not enough info in your post to guess what's right here. A: The path is wrong because the backslashes need to be doubled up - backslash is an escape for special characters. os.listdir does not return open files, it returns filenames. You need to open the file using the filename.
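Putting the two answers together, a sketch of the directory loop; the folder path is the one from the question (assumed to exist), and since os.listdir() returns bare names the full path is rebuilt before opening:

import os

folder = os.path.join('C:\\', 'Users', 'Spriting', 'blue')

for filename in os.listdir(folder):
    path = os.path.join(folder, filename)
    if not os.path.isfile(path):
        continue                  # skip sub-directories
    f = open(path, 'rb')          # open the file; iterating over the name string
    try:                          # would only walk its characters
        data = f.read()
    finally:
        f.close()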
Looping through files in a folder
I'm fairly new when it comes to programming, and have started out learning python. What I want to do is to recolour sprites for a game, and I am given the original colours, followed by what they are to be turned into. Each sprite has between 20 and 60 angles, so looping through each one in the folder for each colour is probably the way to go for me. My code goes thusly; import media import sys import os.path original_colors = str(raw_input('Please enter the original RGB component, separated ONLY by a single space: ')) new_colors = str(raw_input('Please insert the new RGB component, separated ONLY by a single space: ')) original_list = original_colors.split(' ') new_list = new_colors.split(' ') folder = 'C:\Users\Spriting\blue' if original_colors == 'quit' or new_colors == 'quit': sys.exit(0) else: while 1: for filename in os.listdir (folder): for pix in filename: if (media.get_red(pix) == int(original_list[0])) and (media.get_green(pix) == int(original_list[1])) and \ (media.get_blue(pix) == int(original_list[2])): media.set_red(pix, new_list[0]) media.set_green(pix, new_list[1]) media.set_blue(pix, new_list[2]) media.save(pic) But I keep getting an error on the pathname, and on pix being a string value (They're all pictures) Any help appreciated.
[ "os.listdir() returns a list of file names. Thus, filename is a string. You need to open the file before iterating on it, I guess.\nAlso, be careful with backslashes in strings. They are mostly used for special escape sequences, so you need to escape them by doubling them. You could use the constant os.sep to be more portable, or even use os.path.join() :\nfolder = os.path.join('C:\\\\', 'Users', 'Sprinting', 'blue')\n\n", "for pix in filename:\n\niterates over the letters of the filename. So that's certainly not what you want. You'll probably want to replace that line by:\nwith open(filename) as current_file:\n for pix in current_file:\n\n(assuming Python 2.6) and indent the rest of the loop accordingly. \nHowever, I'm not sure that the new for loop does what you want unless by pix you mean a line of text in the current file. If the files are binary picture files, you'll first need to read their contents correcty - not enough info in your post to guess what's right here. \n", "The path is wrong because the backslashes need to be doubled up - backslash is an escape for special characters.\nos.listdir does not return open files, it returns filenames. You need to open the file using the filename.\n" ]
[ 34, 10, 3 ]
[]
[]
[ "file", "loops", "python" ]
stackoverflow_0001792312_file_loops_python.txt
Q: Updating value in binary file with Python I'm trying to figure out how to update the data in a binary file using Python. I'm already comfortable reading and writing complete files using "array", but I'm having trouble with in place editing. Here's what I've tried: my_file.seek(100) my_array = array.array('B') my_array.append(0) my_array.tofile(my_file) Essentially, I want to change the value of the byte at position 100. The above code does update the value, but then truncates the rest of the file. I want to be able to change the value at position 100, without modifying anything else in the file. Note that I'm editing multi-gigabyte files, so I don't want to read the entire thing into memory, update memory, and then write back out to disk. A: According to the documentation of open(), you should open the file in 'rb+' mode to avoid the truncating behavior. A: Are you opening the file in 'r+b' mode?
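A minimal sketch of the in-place edit with the mode fix from the answers ('data.bin' is a placeholder file name):

import array

my_file = open('data.bin', 'r+b')        # 'r+b' updates in place, no truncation
my_file.seek(100)                        # jump to byte offset 100
array.array('B', [0]).tofile(my_file)    # overwrite exactly one byte
my_file.close()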
Updating value in binary file with Python
I'm trying to figure out how to update the data in a binary file using Python. I'm already comfortable reading and writing complete files using "array", but I'm having trouble with in place editing. Here's what I've tried: my_file.seek(100) my_array = array.array('B') my_array.append(0) my_array.tofile(my_file) Essentially, I want to change the value of the byte at position 100. The above code does update the value, but then truncates the rest of the file. I want to be able to change the value at position 100, without modifying anything else in the file. Note that I'm editing multi-gigabyte files, so I don't want to read the entire thing into memory, update memory, and then write back out to disk.
[ "According to the documentation of open(), you should open the file in 'rb+' mode to avoid the truncating behavior.\n", "Are you opening the file in 'r+b' mode?\n" ]
[ 5, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001792701_python.txt
Q: marking duplicates in a csv file I'm stumped with a problem illustrated in the sample below: "ID","NAME","PHONE","REF","DISCARD" 1,"JOHN",12345,, 2,"PETER",6232,, 3,"JON",12345,, 4,"PETERSON",6232,, 5,"ALEX",7854,, 6,"JON",12345,, I want to detect duplicates in column "PHONE", and mark the subsequent duplicates using the column "REF", with a value pointing to the "ID" of the first item and the value "Yes" for the "DISCARD" column "ID","NAME","PHONE","REF","DISCARD" 1,"JOHN",12345,1, 2,"PETER",6232,2, 3,"JON",12345,1,"Yes" 4,"PETERSON",6232,2,"Yes" 5,"ALEX",7854,, 6,"JON",12345,1,"Yes" So, how do I go about it? I tried this code but my logic wasn't right, of course. import csv myfile = open("C:\Users\Eduardo\Documents\TEST2.csv", "rb") myfile1 = open("C:\Users\Eduardo\Documents\TEST2.csv", "rb") dest = csv.writer(open("C:\Users\Eduardo\Documents\TESTFIXED.csv", "wb"), dialect="excel") reader = csv.reader(myfile) verum = list(reader) verum.sort(key=lambda x: x[2]) for i, row in enumerate(verum): if row[2] == verum[i][2]: verum[i][3] = row[0] print verum Your direction and help would be much appreciated. A: The only thing you have to keep in memory while this is running is a map of phone numbers to their IDs. map = {} with open(r'c:\temp\input.csv', 'r') as fin: reader = csv.reader(fin) with open(r'c:\temp\output.csv', 'w') as fout: writer = csv.writer(fout) # omit this if the file has no header row writer.writerow(next(reader)) for row in reader: (id, name, phone, ref, discard) = row if map.has_key(phone): ref = map[phone] discard = "YES" else: map[phone] = id writer.writerow((id, name, phone, ref, discard)) A: Sounds like homework. Since this is a CSV file (and thus changing the record size is next to impossible) you are best off loading the whole file into memory and manipulating it there before writing it out to a new file. Create a list of strings which is the original lines of the file. Then create a map, insert into the the phone number (the key) and the value (the id). Before the insert you look for the number if it already exists, you update the line containing the duplicate phone number. If it isn't already in the map, you insert the (phone, id) pair. A: from operator import itemgetter from itertools import groupby import csv verum = csv.reader(open('data.csv','rb')) verum.sort(key=itemgetter(2,0)) def grouper( verum ): for key, grp in groupby(verum,itemgetter(2)): # key = phone number, grp = records with that number first = grp.next() # first item gets its id written into the 4th column yield [first[0],first[1],first[2],first[0],''] #or list(itemgetter(0,1,2,0,4)(first)) for x in grp: # all others get the first items id as ref yield [x[0],x[1],x[2], first[0], "Yes"] for line in sorted(grouper(verum), key=itemgetter(0)): print line Outputs: ['1', 'JOHN', '12345', '1', ''] ['2', 'PETER', '6232', '2', ''] ['3', 'JON', '12345', '1', 'Yes'] ['4', 'PETERSON', '6232', '2', 'Yes'] ['5', 'ALEX', '7854', '5', ''] ['6', 'JON', '12345', '1', 'Yes'] Writing the data back is left to the reader ;-) A: I know one thing. I know you don't have to read the entire file into memory to accomplish this. import csv myfile = "C:\Users\Eduardo\Documents\TEST2.csv" dest = csv.writer(open("C:\Users\Eduardo\Documents\TESTFIXED.csv", "wb"), dialect="excel") phonedict = {} for row in cvs.reader(open(myfile, "r")): # setdefault sets the value to the second argument if it hasn't been set, and then # returns what the value in the dictionary is. 
firstid = phonedict.setdefault(row[2], row[0]) row[3] = firstid if firstid is not row[0]: row[4] = "Yes" dest.writerow(row) A: I work with large 40k plus record csv files, the easiest way to get rid of dupes it with Access. 1. Create new database, 2, Tables tab Get external Data 3. Save Table. 4. Queries tab New find dupe wizard ( Match on phone field, show all fields and count) 5. Save Query ( export has .txt but name dupes.txt ) 6. Import Query result as new table, do not import field with dupe count.. 7. Query Find unmatched (match by phone field, show all fields in result. save query then Export has .txt but name unique.txt) 8. Import unique file in to existing table ( dupes ) 9.You can now save and export again into what ever files type you need and not have any dupes
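A cleaned-up sketch of the dictionary approach from the code answers, assuming the column order and header row shown in the question; it spells the module csv (not cvs) and compares the IDs with == rather than is:

import csv

reader = csv.reader(open(r'C:\Users\Eduardo\Documents\TEST2.csv', 'rb'))
writer = csv.writer(open(r'C:\Users\Eduardo\Documents\TESTFIXED.csv', 'wb'),
                    dialect='excel')

writer.writerow(reader.next())           # pass the header row through
first_seen = {}                          # phone -> id of its first occurrence
for row in reader:
    first_id = first_seen.setdefault(row[2], row[0])
    row[3] = first_id                    # REF always points at the first id
    if first_id != row[0]:
        row[4] = 'Yes'                   # mark later duplicates for discarding
    writer.writerow(row)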
marking duplicates in a csv file
I'm stumped with a problem illustrated in the sample below: "ID","NAME","PHONE","REF","DISCARD" 1,"JOHN",12345,, 2,"PETER",6232,, 3,"JON",12345,, 4,"PETERSON",6232,, 5,"ALEX",7854,, 6,"JON",12345,, I want to detect duplicates in column "PHONE", and mark the subsequent duplicates using the column "REF", with a value pointing to the "ID" of the first item and the value "Yes" for the "DISCARD" column "ID","NAME","PHONE","REF","DISCARD" 1,"JOHN",12345,1, 2,"PETER",6232,2, 3,"JON",12345,1,"Yes" 4,"PETERSON",6232,2,"Yes" 5,"ALEX",7854,, 6,"JON",12345,1,"Yes" So, how do I go about it? I tried this code but my logic wasn't right, of course. import csv myfile = open("C:\Users\Eduardo\Documents\TEST2.csv", "rb") myfile1 = open("C:\Users\Eduardo\Documents\TEST2.csv", "rb") dest = csv.writer(open("C:\Users\Eduardo\Documents\TESTFIXED.csv", "wb"), dialect="excel") reader = csv.reader(myfile) verum = list(reader) verum.sort(key=lambda x: x[2]) for i, row in enumerate(verum): if row[2] == verum[i][2]: verum[i][3] = row[0] print verum Your direction and help would be much appreciated.
[ "The only thing you have to keep in memory while this is running is a map of phone numbers to their IDs.\nmap = {}\nwith open(r'c:\\temp\\input.csv', 'r') as fin:\n reader = csv.reader(fin)\n with open(r'c:\\temp\\output.csv', 'w') as fout:\n writer = csv.writer(fout)\n # omit this if the file has no header row\n writer.writerow(next(reader))\n for row in reader:\n (id, name, phone, ref, discard) = row\n if map.has_key(phone):\n ref = map[phone]\n discard = \"YES\"\n else:\n map[phone] = id\n writer.writerow((id, name, phone, ref, discard))\n\n", "Sounds like homework. Since this is a CSV file (and thus changing the record size is next to impossible) you are best off loading the whole file into memory and manipulating it there before writing it out to a new file. Create a list of strings which is the original lines of the file. Then create a map, insert into the the phone number (the key) and the value (the id). Before the insert you look for the number if it already exists, you update the line containing the duplicate phone number. If it isn't already in the map, you insert the (phone, id) pair.\n", "from operator import itemgetter\nfrom itertools import groupby\n\nimport csv\nverum = csv.reader(open('data.csv','rb'))\n\nverum.sort(key=itemgetter(2,0))\ndef grouper( verum ):\n for key, grp in groupby(verum,itemgetter(2)):\n # key = phone number, grp = records with that number\n first = grp.next()\n # first item gets its id written into the 4th column\n yield [first[0],first[1],first[2],first[0],''] #or list(itemgetter(0,1,2,0,4)(first)) \n for x in grp:\n # all others get the first items id as ref\n yield [x[0],x[1],x[2], first[0], \"Yes\"]\n\nfor line in sorted(grouper(verum), key=itemgetter(0)):\n print line\n\nOutputs:\n['1', 'JOHN', '12345', '1', '']\n['2', 'PETER', '6232', '2', '']\n['3', 'JON', '12345', '1', 'Yes']\n['4', 'PETERSON', '6232', '2', 'Yes']\n['5', 'ALEX', '7854', '5', '']\n['6', 'JON', '12345', '1', 'Yes']\n\nWriting the data back is left to the reader ;-)\n", "I know one thing. I know you don't have to read the entire file into memory to accomplish this.\nimport csv\nmyfile = \"C:\\Users\\Eduardo\\Documents\\TEST2.csv\"\n\ndest = csv.writer(open(\"C:\\Users\\Eduardo\\Documents\\TESTFIXED.csv\", \"wb\"), dialect=\"excel\")\n\nphonedict = {}\n\nfor row in cvs.reader(open(myfile, \"r\")):\n # setdefault sets the value to the second argument if it hasn't been set, and then\n # returns what the value in the dictionary is.\n firstid = phonedict.setdefault(row[2], row[0])\n row[3] = firstid\n if firstid is not row[0]:\n row[4] = \"Yes\"\n dest.writerow(row)\n\n", "I work with large 40k plus record csv files, the easiest way to get rid of dupes it with Access.\n1. Create new database, \n2, Tables tab Get external Data\n3. Save Table.\n4. Queries tab New find dupe wizard ( Match on phone field, show all fields and count)\n5. Save Query ( export has .txt but name dupes.txt )\n6. Import Query result as new table, do not import field with dupe count..\n7. Query Find unmatched (match by phone field, show all fields in result. save query then Export has .txt but name unique.txt)\n8. Import unique file in to existing table ( dupes ) \n9.You can now save and export again into what ever files type you need and not have any dupes\n" ]
[ 7, 0, 0, 0, 0 ]
[]
[]
[ "csv", "duplicates", "python" ]
stackoverflow_0001733166_csv_duplicates_python.txt
Q: Django is_valid() not working with modelformset_factory I've created a simple contact form using the modelformset_factory to build the form in the view using the DB model. The issue that I am having is that the is_valid() check before the save() is not working. When I submit the form with empty fields it still passes the is_valid() and attempts to write to the DB. I would like the is_valid() check to fail when the fields are empty so that the user can be directed to the form again with an error message. I believe that there is a simple solution to this. Do you know what I am missing in my code? Thanks. Code: models.py class Response(models.Model): name = models.CharField(max_length=50,verbose_name='Your Name:') email = models.CharField(max_length=50,verbose_name='Email:') phone = models.CharField(max_length=50,verbose_name='Phone Number:') apt_size = models.CharField(max_length=25, choices=APT_CHOICES, verbose_name='Apt Size:') movein_at= models.DateField(verbose_name='Desired Move-In Date') community = models.CharField(max_length=50, choices=COMMUNITY_CHOICES, verbose_name='Community You Are Interested In:') referred_by = models.CharField(max_length=50, choices=REFERRED_CHOICES, verbose_name='Found Us Where?') referred_other = models.CharField(blank=True,max_length=50,verbose_name='If Other:') comments = models.TextField(verbose_name='Comments:') created_at = models.DateTimeField(auto_now_add=True) def __unicode__(self): return self.name views.py from summitpark.contact.models import * from django.shortcuts import render_to_response from django.forms.models import modelformset_factory def form(request): contact_form_set = modelformset_factory(Response,fields=('name','email','phone', 'apt_size','movein_at', 'community','referred_by', 'comments'), exclude=('id')) if request.method == 'POST': formset = contact_form_set(request.POST) if formset.is_valid(): formset.save() return render_to_response('contact/confirm.html') else: return render_to_response('contact/form.html',{'formset':formset}) else: formset = contact_form_set(queryset=Response.objects.none()) return render_to_response('contact/form.html',{'formset':formset} Solution: class BaseContactFormSet(BaseModelFormSet): def clean(self): if any(self.errors): return for form in self.forms: name = form['name'].data if not name: raise forms.ValidationError, "Please Complete the Required Fields A: Your issue is that providing 0 items is a valid formset, there is no minimum validation. I'd provide a custom BaseModelFormset subclass that's clean() method just checked for a minimum of one obj. A: Did you really want a formset? I suspect if you have a contacts form with only one instance of the Response in then you want a ModelForm... class ResponseForm(forms.ModelForm): class Meta: model = Response fields=('name','email','phone', 'apt_size','movein_at', 'community','referred_by', 'comments') As for which fields are allowed to be blank and which aren't, make sure it does the right thing in the admin first, then the ModelForm will do exactly the right thing (that is how the admin makes its forms after all).
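If only one Response is collected per submission, the plain ModelForm route from the second answer keeps the view short; a sketch that reuses the ResponseForm class defined in that answer:

from django.shortcuts import render_to_response

def form(request):
    if request.method == 'POST':
        form = ResponseForm(request.POST)
        if form.is_valid():          # blank required fields now fail validation
            form.save()
            return render_to_response('contact/confirm.html')
        return render_to_response('contact/form.html', {'form': form})
    form = ResponseForm()
    return render_to_response('contact/form.html', {'form': form})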
Django is_valid() not working with modelformset_factory
I've created a simple contact form using the modelformset_factory to build the form in the view using the DB model. The issue that I am having is that the is_valid() check before the save() is not working. When I submit the form with empty fields it still passes the is_valid() and attempts to write to the DB. I would like the is_valid() check to fail when the fields are empty so that the user can be directed to the form again with an error message. I believe that there is a simple solution to this. Do you know what I am missing in my code? Thanks. Code: models.py class Response(models.Model): name = models.CharField(max_length=50,verbose_name='Your Name:') email = models.CharField(max_length=50,verbose_name='Email:') phone = models.CharField(max_length=50,verbose_name='Phone Number:') apt_size = models.CharField(max_length=25, choices=APT_CHOICES, verbose_name='Apt Size:') movein_at= models.DateField(verbose_name='Desired Move-In Date') community = models.CharField(max_length=50, choices=COMMUNITY_CHOICES, verbose_name='Community You Are Interested In:') referred_by = models.CharField(max_length=50, choices=REFERRED_CHOICES, verbose_name='Found Us Where?') referred_other = models.CharField(blank=True,max_length=50,verbose_name='If Other:') comments = models.TextField(verbose_name='Comments:') created_at = models.DateTimeField(auto_now_add=True) def __unicode__(self): return self.name views.py from summitpark.contact.models import * from django.shortcuts import render_to_response from django.forms.models import modelformset_factory def form(request): contact_form_set = modelformset_factory(Response,fields=('name','email','phone', 'apt_size','movein_at', 'community','referred_by', 'comments'), exclude=('id')) if request.method == 'POST': formset = contact_form_set(request.POST) if formset.is_valid(): formset.save() return render_to_response('contact/confirm.html') else: return render_to_response('contact/form.html',{'formset':formset}) else: formset = contact_form_set(queryset=Response.objects.none()) return render_to_response('contact/form.html',{'formset':formset} Solution: class BaseContactFormSet(BaseModelFormSet): def clean(self): if any(self.errors): return for form in self.forms: name = form['name'].data if not name: raise forms.ValidationError, "Please Complete the Required Fields
[ "Your issue is that providing 0 items is a valid formset, there is no minimum validation. I'd provide a custom BaseModelFormset subclass that's clean() method just checked for a minimum of one obj.\n", "Did you really want a formset? I suspect if you have a contacts form with only one instance of the Response in then you want a ModelForm...\nclass ResponseForm(forms.ModelForm):\n class Meta:\n model = Response\n fields=('name','email','phone',\n 'apt_size','movein_at',\n 'community','referred_by',\n 'comments')\n\nAs for which fields are allowed to be blank and which aren't, make sure it does the right thing in the admin first, then the ModelForm will do exactly the right thing (that is how the admin makes its forms after all).\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_forms", "django_models", "python" ]
stackoverflow_0001791942_django_django_forms_django_models_python.txt
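A minimal sketch of the formset-level check suggested in the answers above, assuming the same Response model; the class name and error message are placeholders. It lets per-form field errors surface first, then insists that at least one form in the submission actually contains data:

from django import forms
from django.forms.models import BaseModelFormSet, modelformset_factory

class RequireOneFilledFormSet(BaseModelFormSet):
    def clean(self):
        super(RequireOneFilledFormSet, self).clean()
        if any(self.errors):
            return  # per-form field errors take priority
        if not any(form.cleaned_data for form in self.forms):
            raise forms.ValidationError("Please complete the required fields.")

# ContactFormSet = modelformset_factory(Response, formset=RequireOneFilledFormSet,
#                                       fields=('name', 'email', 'phone', 'comments'))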
Q: Packet Queue in Python? is there any way to queue packets to a socket in Python? I've been looking for something like the libipq library, but can't find anything equivalent. Here's what I'm trying to accomplish: Create tcp socket connection between server and client (both under my control). Try transmitting data (waiting for connection to fault -- e.g. client loses connectivity because of shutting laptop) Catch SocketException and hold on to the data that was trying to be sent while keeping the remaining data waiting (in a Queue?) Enter in to loop trying to reconnect (assuming that success is inevitable) Create new socket upon success Resume data transmission Any suggestions? Can Twisted do this? Do I need to involve pcapy? Should I do this (sockets, queues, etc) in C and use Boost to make hybrid code? Thanks in advance. Edit 1: Response to Nick: I left out the fact that the data I'll be transmitting will be generalized and unending -- think of this app sitting under an ssh session (i'm not in any way trying to peek into the packets). So, the transmission will be bilateral. I want to be able to go from the office to my home (closing my laptop in between), open the laptop at home and continue in my session seamlessly. (I know SCREEN exists). This might lead you to wonder how it'd work without proxies. It won't, I just haven't explained that design. Blockquote With the added context, I should also say I won't have to catch a SocketException on the server side since that machine will be (or assume to be) fixed. When the client figures out that it's got connectivity again, it'll just re-connect to the server. A: I suggest you implement an application protocol, so when the client receives data it acknowledges it to the server, with a serial number for each bit of data. The server then keeps note of which serial number the client needs next and sends it. If the connection breaks then the server re-makes it. The data could be buffered anyway you like, but I'd keep it simple to start with and use a deque probably which is a bit more efficient than a list for taking data of the start and end. I'd probably create a Packet object which has a serial number attribute, and a way to serialize and unserialize it and its data to a byte stream (eg using netstrings). You would then store these Packets in your deque. You would pop the packet off the deque and whenever you got an acknowledgement from the client and send the next one. Such protocols have lots of corner cases, timeouts needed and are generally annoying to debug. Look at the source code for a SMTP server some day if you want some idea! As for sockets, you could easily implement this with normal python sockets and see how it goes. If you plan to have lots of clients at once then twisted will be a good idea.
Packet Queue in Python?
is there any way to queue packets to a socket in Python? I've been looking for something like the libipq library, but can't find anything equivalent. Here's what I'm trying to accomplish: Create tcp socket connection between server and client (both under my control). Try transmitting data (waiting for connection to fault -- e.g. client loses connectivity because of shutting laptop) Catch SocketException and hold on to the data that was trying to be sent while keeping the remaining data waiting (in a Queue?) Enter in to loop trying to reconnect (assuming that success is inevitable) Create new socket upon success Resume data transmission Any suggestions? Can Twisted do this? Do I need to involve pcapy? Should I do this (sockets, queues, etc) in C and use Boost to make hybrid code? Thanks in advance. Edit 1: Response to Nick: I left out the fact that the data I'll be transmitting will be generalized and unending -- think of this app sitting under an ssh session (i'm not in any way trying to peek into the packets). So, the transmission will be bilateral. I want to be able to go from the office to my home (closing my laptop in between), open the laptop at home and continue in my session seamlessly. (I know SCREEN exists). This might lead you to wonder how it'd work without proxies. It won't, I just haven't explained that design. Blockquote With the added context, I should also say I won't have to catch a SocketException on the server side since that machine will be (or assume to be) fixed. When the client figures out that it's got connectivity again, it'll just re-connect to the server.
[ "I suggest you implement an application protocol, so when the client receives data it acknowledges it to the server, with a serial number for each bit of data.\nThe server then keeps note of which serial number the client needs next and sends it. If the connection breaks then the server re-makes it.\nThe data could be buffered anyway you like, but I'd keep it simple to start with and use a deque probably which is a bit more efficient than a list for taking data of the start and end.\nI'd probably create a Packet object which has a serial number attribute, and a way to serialize and unserialize it and its data to a byte stream (eg using netstrings). You would then store these Packets in your deque. You would pop the packet off the deque and whenever you got an acknowledgement from the client and send the next one.\nSuch protocols have lots of corner cases, timeouts needed and are generally annoying to debug. Look at the source code for a SMTP server some day if you want some idea!\nAs for sockets, you could easily implement this with normal python sockets and see how it goes. If you plan to have lots of clients at once then twisted will be a good idea.\n" ]
[ 0 ]
[]
[]
[ "packet", "python", "queue", "sockets" ]
stackoverflow_0001792320_packet_python_queue_sockets.txt
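A rough sketch of the buffer-and-reconnect idea from the answer above, assuming it is acceptable to block while reconnecting; the serial:payload framing and the ACK read are invented placeholders, not a real protocol:

import socket, time
from collections import deque

class BufferedSender(object):
    def __init__(self, host, port):
        self.host, self.port = host, port
        self.pending = deque()   # (serial, payload) pairs not yet acknowledged
        self.serial = 0
        self.sock = None

    def queue(self, payload):
        self.serial += 1
        self.pending.append((self.serial, payload))

    def flush(self):
        while self.pending:
            serial, payload = self.pending[0]
            try:
                if self.sock is None:
                    self.sock = socket.create_connection((self.host, self.port))
                self.sock.sendall(b"%d:%s" % (serial, payload))
                self.sock.recv(16)       # wait for the peer's (hypothetical) ACK
            except socket.error:
                self.sock = None         # connection died: keep the data queued
                time.sleep(5)            # back off before retrying
                continue
            self.pending.popleft()       # acknowledged, safe to drop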
Q: How to treat a returned/stored string like a raw string in Python? I am trying to .split() a hex string i.e. '\xff\x00' to get a list i.e. ['ff', '00'] This works if I split on a raw string literal i.e. r'\xff\x00' using .split('\\x') but not if I split on a hex string stored in a variable or returned from a function (which I presume is not a raw string) How do I convert or at least 'cast' a stored/returned string as a raw string? A: x = '\xff\x00' y = ['%02x' % ord(c) for c in x] print y Output: ['ff', '00'] A: Here is a solution in the spirit of the original question: x = '\xff\x00' eval("r"+repr(x)).split('\\x') It will return the same thing as r'\xff\x00'.split('\\x'): ['', 'ff', '00'].
How to treat a returned/stored string like a raw string in Python?
I am trying to .split() a hex string i.e. '\xff\x00' to get a list i.e. ['ff', '00'] This works if I split on a raw string literal i.e. r'\xff\x00' using .split('\\x') but not if I split on a hex string stored in a variable or returned from a function (which I presume is not a raw string) How do I convert or at least 'cast' a stored/returned string as a raw string?
[ "x = '\\xff\\x00'\ny = ['%02x' % ord(c) for c in x]\nprint y\n\nOutput:\n['ff', '00']\n\n", "Here is a solution in the spirit of the original question:\nx = '\\xff\\x00'\neval(\"r\"+repr(x)).split('\\\\x')\n\nIt will return the same thing as r'\\xff\\x00'.split('\\\\x'): ['', 'ff', '00'].\n" ]
[ 6, 0 ]
[]
[]
[ "escaping", "python", "string" ]
stackoverflow_0001792807_escaping_python_string.txt
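For comparison, the same result without touching raw strings at all - on Python 3 the data would normally be bytes, which already expose the hex digits:

x = b'\xff\x00'
print([format(byte, '02x') for byte in x])   # ['ff', '00']
print(x.hex())                               # 'ff00'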
Q: Python urllib2 HTTPS and proxy NTLM authentication urllib2 doesn't seem to support HTTPS with proxy authentication in general, even less with NTLM authentication. Anyone knows if there is a patch somewhere for HTTPS on proxy with NTLM authentication. Regards, Laurent A: Late reply. Urllib2 does not support NTLM proxying but pycurl does. Excerpt: self._connection = pycurl.Curl() self._connection.setopt(pycurl.PROXY, PROXY_HOST) self._connection.setopt(pycurl.PROXYPORT, PROXY_PORT) self._connection.setopt(pycurl.PROXYUSERPWD, "%s:%s" % (PROXY_USER, PROXY_PASS)) ... A: http://code.google.com/p/python-ntlm/ I never tried with HTTPS but I think it should work. EDIT: If you are using SSL Tunneling, proxy authentication is a bad idea. Proxy using Basic Auth over HTTPS is not secure when the SSL is tunneled. Your password will be sent in clear (Base64-encoded) to proxy. Lots of people assumes the password will be encrypted inside SSL. It's not true in this case. It's almost impossible to support other encrypted or hashed mechanisms like Digest/NTLM because they all require negotiation (multiple exchanges) and that's not defined in CONNECT protocol. This negotiation happens out of the band of the HTTP connection. It's very hard to implement in proxy/browser also. If this is an enterprise proxy, IP ACL is the only secure solution. A: Good recipe (for HTTPS w/proxy) and discussion here, it should be possible to meld that with the python-nltm code @ZZ has already suggested.
Python urllib2 HTTPS and proxy NTLM authentication
urllib2 doesn't seem to support HTTPS with proxy authentication in general, even less with NTLM authentication. Does anyone know if there is a patch somewhere for HTTPS over a proxy with NTLM authentication? Regards, Laurent
[ "Late reply. Urllib2 does not support NTLM proxying but pycurl does. Excerpt:\nself._connection = pycurl.Curl()\nself._connection.setopt(pycurl.PROXY, PROXY_HOST)\nself._connection.setopt(pycurl.PROXYPORT, PROXY_PORT)\nself._connection.setopt(pycurl.PROXYUSERPWD,\n \"%s:%s\" % (PROXY_USER, PROXY_PASS))\n...\n\n", "http://code.google.com/p/python-ntlm/\nI never tried with HTTPS but I think it should work.\nEDIT: If you are using SSL Tunneling, proxy authentication is a bad idea.\nProxy using Basic Auth over HTTPS is not secure when the SSL is tunneled. Your password will be sent in clear (Base64-encoded) to proxy. Lots of people assumes the password will be encrypted inside SSL. It's not true in this case. \nIt's almost impossible to support other encrypted or hashed mechanisms like Digest/NTLM because they all require negotiation (multiple exchanges) and that's not defined in CONNECT protocol. This negotiation happens out of the band of the HTTP connection. It's very hard to implement in proxy/browser also.\nIf this is an enterprise proxy, IP ACL is the only secure solution.\n", "Good recipe (for HTTPS w/proxy) and discussion here, it should be possible to meld that with the python-nltm code @ZZ has already suggested.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "authentication", "https", "ntlm", "proxy", "python" ]
stackoverflow_0001481398_authentication_https_ntlm_proxy_python.txt
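Building on the pycurl answer above, the proxy authentication scheme can be pinned to NTLM explicitly; the host, port and credentials below are placeholders:

import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://internal.example.com/")
c.setopt(pycurl.PROXY, "proxy.example.com")
c.setopt(pycurl.PROXYPORT, 8080)
c.setopt(pycurl.PROXYAUTH, pycurl.HTTPAUTH_NTLM)   # mirrors libcurl's CURLAUTH_NTLM
c.setopt(pycurl.PROXYUSERPWD, "DOMAIN\\user:secret")
c.perform()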
Q: Make Dictionary From 2 List Trying to make dictionary with 2 list one being the key and one being the value but I'm having a problem. This is what I have so far: d={} for num in range(10): for nbr in range(len(key)): d[num]=key[nbr] Say my key is a list from 1 to 9, and value list is [2,4,0,9,6,6,8,6,4,5]. How do I assign so it that its like {0:2, 1:4, etc...}? A: zip() to the rescue! >>> k = range(1,10) # or some list or iterable of sorts >>> v = [2,4,0,9,6,6,8,6,4,5] >>> d = dict(zip(k,v)) >>> d {1: 2, 2: 4, 3: 0, 4: 9, 5: 6, 6: 6, 7: 8, 8: 6, 9: 4} >>> For more details, see zip() built-in function, in Python documentation. Note, regarding range() and the list of "keys". The question reads "key is a list from 1 to 9" (i.e. 9 distinct keys) but the value list shows 10 distinct values. This provides the opportunity to discuss two points of "detail": the range() function in the snippet above will produce the 1 through 9 range, that is because the starting value (1, here), if provided, is always included, whereas the ending value (10, here) is never included. the zip() function stops after the iteration which includes the last item of the shortest iterable (in our case, omitting '5', the last value of the list) A: If you are mapping indexes specifically, use the enumerate builtin function instead of zip/range. dict(enumerate([2,4,0,9,6,6,8,6,4,5])) A: values = [2,4,0,9,6,6,8,6,4,5] d = dict(zip(range(10), values)) A: mydict = dict(zip(range(10), [2,4,0,9,6,6,8,6,4,5])) A: should be something like dict(zip(a,b))
Make Dictionary From 2 List
Trying to make a dictionary from 2 lists, one being the keys and one being the values, but I'm having a problem. This is what I have so far: d={} for num in range(10): for nbr in range(len(key)): d[num]=key[nbr] Say my key is a list from 1 to 9, and the value list is [2,4,0,9,6,6,8,6,4,5]. How do I assign them so that it comes out like {0:2, 1:4, etc...}?
[ "zip() to the rescue!\n>>> k = range(1,10) # or some list or iterable of sorts\n>>> v = [2,4,0,9,6,6,8,6,4,5]\n>>> d = dict(zip(k,v))\n>>> d\n{1: 2, 2: 4, 3: 0, 4: 9, 5: 6, 6: 6, 7: 8, 8: 6, 9: 4}\n>>>\n\nFor more details, see zip() built-in function, in Python documentation.\nNote, regarding range() and the list of \"keys\".\nThe question reads \"key is a list from 1 to 9\" (i.e. 9 distinct keys) but the value list shows 10 distinct values. This provides the opportunity to discuss two points of \"detail\":\n\nthe range() function in the snippet above will produce the 1 through 9 range, that is because the starting value (1, here), if provided, is always included, whereas the ending value (10, here) is never included.\nthe zip() function stops after the iteration which includes the last item of the shortest iterable (in our case, omitting '5', the last value of the list)\n\n", "If you are mapping indexes specifically, use the enumerate builtin function instead of zip/range.\ndict(enumerate([2,4,0,9,6,6,8,6,4,5]))\n\n", "values = [2,4,0,9,6,6,8,6,4,5]\nd = dict(zip(range(10), values))\n\n", "mydict = dict(zip(range(10), [2,4,0,9,6,6,8,6,4,5]))\n\n", "should be something like\ndict(zip(a,b))\n\n" ]
[ 12, 6, 2, 1, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0001793154_dictionary_list_python.txt
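One caveat worth making explicit: as the first answer notes, zip silently stops at the shorter input, so if the two lists are supposed to line up exactly, an explicit length check keeps a mismatch from passing unnoticed:

keys = range(10)
values = [2, 4, 0, 9, 6, 6, 8, 6, 4, 5]
assert len(keys) == len(values), "key and value lists differ in length"
d = dict(zip(keys, values))
print(d)   # {0: 2, 1: 4, 2: 0, ..., 9: 5}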
Q: Printing several binary data fields from Google DataStore? I'm using Google App Engine and python for a web service. Some of the models (tables) I have in my web service have several binary data fields in them, and I'd like to present this data to a computer requesting it, all fields at the same time. Now, the problem is I don't know how to write it out in a way that the other computer knows where the first data ends and the other starts. I've been using JSON for all the things that aren't binary, but afaik JSON doesn't work for binary data. So how do you get around this? You could of course separate the data and put it in its own model, and then reference it back to some metadata model. That would allow you to make a single page that just prints one data field of one of the items, but that is trappy both server and client implementation wise. Another solution would be to put in some kind of separator, and just split the data on that. I suppose it would work and that's how you do it, but isn't there like a standardized way to do that? Any libraries I could use? In short, I'd like to be able to do something like this: binaryDataField1: data data data ... binaryDataField2: data data data ... etc A: Several easy options: base64 encode your data - meaning you can still use JSON. Use Protocol Buffers. Prefix each field with its length - either as a 4- or 8- byte integer, or as a numeric string. A: One solution that would leverage your json investment would be to simply convert the binary data to something that json can support. For example, Base64 encoding might work well for you. You could treat the output of your BAse64 encoder just like you would a normal string in json. it looks like python has Base64 support built in, though i only use java on app engine so I can't guarantee that the linked library work in the sandbox or not.
Printing several binary data fields from Google DataStore?
I'm using Google App Engine and python for a web service. Some of the models (tables) I have in my web service have several binary data fields in them, and I'd like to present this data to a computer requesting it, all fields at the same time. Now, the problem is I don't know how to write it out in a way that the other computer knows where the first data ends and the other starts. I've been using JSON for all the things that aren't binary, but afaik JSON doesn't work for binary data. So how do you get around this? You could of course separate the data and put it in its own model, and then reference it back to some metadata model. That would allow you to make a single page that just prints one data field of one of the items, but that is trappy both server and client implementation wise. Another solution would be to put in some kind of separator, and just split the data on that. I suppose it would work and that's how you do it, but isn't there like a standardized way to do that? Any libraries I could use? In short, I'd like to be able to do something like this: binaryDataField1: data data data ... binaryDataField2: data data data ... etc
[ "Several easy options:\n\nbase64 encode your data - meaning you can still use JSON.\nUse Protocol Buffers.\nPrefix each field with its length - either as a 4- or 8- byte integer, or as a numeric string.\n\n", "One solution that would leverage your json investment would be to simply convert the binary data to something that json can support. For example, Base64 encoding might work well for you. You could treat the output of your BAse64 encoder just like you would a normal string in json. it looks like python has Base64 support built in, though i only use java on app engine so I can't guarantee that the linked library work in the sandbox or not.\n" ]
[ 2, 1 ]
[]
[]
[ "binary", "google_app_engine", "json", "python" ]
stackoverflow_0001791580_binary_google_app_engine_json_python.txt
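A minimal sketch of the base64 route, reusing the field names from the question; the byte strings are stand-ins for whatever the datastore actually returns:

import base64, json

record = {
    "binaryDataField1": base64.b64encode(b"\x00\x01\xffdata").decode("ascii"),
    "binaryDataField2": base64.b64encode(b"more bytes").decode("ascii"),
}
payload = json.dumps(record)   # safe to serve: everything is plain text now

# receiving side
decoded = {k: base64.b64decode(v) for k, v in json.loads(payload).items()}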
Q: How to use long integers in Python to build a range? I am trying to build a range with an upper bound bigger than limit of integer in Python. I was looking for something like the following: import sys start = 100; step = 100; limit = sys.maxint + 1; result = xrange(start, limit, step); However, xrange parameters are limited to the integer. According to Python Standard Library, I have to use itertools module: islice(count(start, step), (stop-start+step-1)//step), but islice seems to have the same integer limitations for the parameters. Is there any other way to build the list with upper limit represented by long integer? A: xrange is trivial to roll out on your own, so you can do just that: def xlongrange(start, limit, step): n = start while n < limit: yield n n += step
How to use long integers in Python to build a range?
I am trying to build a range with an upper bound bigger than limit of integer in Python. I was looking for something like the following: import sys start = 100; step = 100; limit = sys.maxint + 1; result = xrange(start, limit, step); However, xrange parameters are limited to the integer. According to Python Standard Library, I have to use itertools module: islice(count(start, step), (stop-start+step-1)//step), but islice seems to have the same integer limitations for the parameters. Is there any other way to build the list with upper limit represented by long integer?
[ "xrange is trivial to roll out on your own, so you can do just that:\ndef xlongrange(start, limit, step):\n n = start\n while n < limit:\n yield n\n n += step\n\n" ]
[ 9 ]
[]
[]
[ "python" ]
stackoverflow_0001793426_python.txt
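The same generator can also be assembled from itertools, closer to the islice idea in the question but bounded with takewhile instead of a precomputed item count. (On Python 3, plain range already accepts arbitrarily large integers, so this is only needed on Python 2.)

from itertools import count, takewhile

def xlongrange(start, limit, step):
    return takewhile(lambda n: n < limit, count(start, step))

print(list(xlongrange(100, 1000, 100)))   # [100, 200, 300, 400, 500, 600, 700, 800, 900]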
Q: What pure Python library should I use to scrape a website? I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense. Now I'm trying to port this over to Google App Engine, and keep getting stuck. I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH. I've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'. Do I keep trying to hack ElementTree in there, or do I try to use something else? thanks, Mark A: Beautiful Soup. A: lxml -- 100x better than elementtree A: There's also scrapy, might be more up your alley. A: There are a number of examples of web page scrapers written using pyparsing, such as this one (extracts all URL links from yahoo.com) and this one (for extracting the NIST NTP server addresses). Be sure to use the pyparsing helper method makeHTMLTags, instead of just hand coding "<" + Literal(tagname) + ">" - makeHTMLTags creates a very robust parser, with accommodation for extra spaces, upper/lower case inconsistencies, unexpected attributes, attribute values with various quoting styles, and so on. Pyparsing will also give you more control over special syntax issues, such as custom entities. Also it is pure Python, liberally licensed, and small footprint (a single source module), so it is easy to drop into your GAE app right in with your other application code. A: BeautifulSoup is good, but its API is awkward. Try ElementSoup, which provides an ElementTree interface to BeautifulSoup.
What pure Python library should I use to scrape a website?
I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense. Now I'm trying to port this over to Google App Engine, and keep getting stuck. I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH. I've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'. Do I keep trying to hack ElementTree in there, or do I try to use something else? thanks, Mark
[ "Beautiful Soup.\n", "lxml -- 100x better than elementtree\n", "There's also scrapy, might be more up your alley.\n", "There are a number of examples of web page scrapers written using pyparsing, such as this one (extracts all URL links from yahoo.com) and this one (for extracting the NIST NTP server addresses). Be sure to use the pyparsing helper method makeHTMLTags, instead of just hand coding \"<\" + Literal(tagname) + \">\" - makeHTMLTags creates a very robust parser, with accommodation for extra spaces, upper/lower case inconsistencies, unexpected attributes, attribute values with various quoting styles, and so on. Pyparsing will also give you more control over special syntax issues, such as custom entities. Also it is pure Python, liberally licensed, and small footprint (a single source module), so it is easy to drop into your GAE app right in with your other application code.\n", "BeautifulSoup is good, but its API is awkward. Try ElementSoup, which provides an ElementTree interface to BeautifulSoup. \n" ]
[ 11, 6, 4, 0, 0 ]
[]
[]
[ "beautifulsoup", "google_app_engine", "mechanize", "python", "xpath" ]
stackoverflow_0001563165_beautifulsoup_google_app_engine_mechanize_python_xpath.txt
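A quick check that BeautifulSoup copes with the entity that tripped up ElementTree; the markup is a made-up snippet, and the import reflects the current bs4 package name rather than the original BeautifulSoup module:

from bs4 import BeautifulSoup

html = "<html><body><p>prices &mdash; updated</p><a href='/detail/1'>detail</a></body></html>"
soup = BeautifulSoup(html, "html.parser")
print(soup.p.get_text())                        # the &mdash; entity is decoded, no crash
print([a["href"] for a in soup.find_all("a")])  # ['/detail/1']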
Q: Python HTML scraping It's not really scraping, I'm just trying to find the URLs in a web page where the class has a specific value. For example: <a class="myClass" href="/url/7df028f508c4685ddf65987a0bd6f22e"> I want to get the href value. Any ideas on how to do this? Maybe regex? Could you post some example code? I'm guessing html scraping libs, such as BeautifulSoup, are a bit of overkill just for this... Huge thanks! A: Regex is usally a bad idea, try using BeautifulSoup Quick example: html = #get html soup = BeautifulSoup(html) links = soup.findAll('a', attrs={'class': 'myclass'}) for link in links: #process link A: Aargh, not regex for parsing HTML! Luckily in Python we have BeautifulSoup or lxml to do that job for us. A: Regex would be a bad choice. HTML is not a regular language. How about Beautiful Soup? A: Regex should not be used to parse HTML. See the first answer to this question for an explanation :) +1 for BeautifulSoup. A: If your task is just this simple, just use string manipulation (without even regex) f=open("htmlfile") for line in f: if "<a class" in line and "myClass" in line and "href" in line: s = line [ line.index("href") + len('href="') : ] print s[:s.index('">')] f.close() HTML parsers is not a must for such cases. A: read Parsing Html The Cthulhu Way https://blog.codinghorror.com/parsing-html-the-cthulhu-way/ A: The thing is I know the structure of the HTML page, and I just want to find that specific kind of links (where class="myclass"). BeautifulSoup anyway?
Python HTML scraping
It's not really scraping, I'm just trying to find the URLs in a web page where the class has a specific value. For example: <a class="myClass" href="/url/7df028f508c4685ddf65987a0bd6f22e"> I want to get the href value. Any ideas on how to do this? Maybe regex? Could you post some example code? I'm guessing html scraping libs, such as BeautifulSoup, are a bit of overkill just for this... Huge thanks!
[ "Regex is usally a bad idea, try using BeautifulSoup\nQuick example:\nhtml = #get html\nsoup = BeautifulSoup(html)\nlinks = soup.findAll('a', attrs={'class': 'myclass'})\nfor link in links:\n #process link\n\n", "Aargh, not regex for parsing HTML!\nLuckily in Python we have BeautifulSoup or lxml to do that job for us.\n", "Regex would be a bad choice. HTML is not a regular language. How about Beautiful Soup?\n", "Regex should not be used to parse HTML. See the first answer to this question for an explanation :)\n+1 for BeautifulSoup.\n", "If your task is just this simple, just use string manipulation (without even regex)\nf=open(\"htmlfile\")\nfor line in f:\n if \"<a class\" in line and \"myClass\" in line and \"href\" in line:\n s = line [ line.index(\"href\") + len('href=\"') : ]\n print s[:s.index('\">')]\nf.close()\n\nHTML parsers is not a must for such cases.\n", "read Parsing Html The Cthulhu Way https://blog.codinghorror.com/parsing-html-the-cthulhu-way/\n", "The thing is I know the structure of the HTML page, and I just want to find that specific kind of links (where class=\"myclass\"). BeautifulSoup anyway?\n" ]
[ 16, 9, 2, 1, 1, 0, 0 ]
[]
[]
[ "html", "html_content_extraction", "python", "regex", "screen_scraping" ]
stackoverflow_0001793663_html_html_content_extraction_python_regex_screen_scraping.txt
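A minimal version of the accepted approach, run against the exact anchor from the question (bs4 is the current package name for BeautifulSoup):

from bs4 import BeautifulSoup

html = '<a class="myClass" href="/url/7df028f508c4685ddf65987a0bd6f22e">link</a>'
soup = BeautifulSoup(html, "html.parser")
print([a["href"] for a in soup.find_all("a", class_="myClass")])
# ['/url/7df028f508c4685ddf65987a0bd6f22e']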
Q: Minimal binary diff for similar 1000 byte blocks with static noise? I need a minimal diff for similar 1000 byte blocks. These blocks will have at most 20% of the bits different. The flipped bits will be like radio static -- randomly flipped bits with a uniform distribution over the whole block. Here's my pseudo code using XOR and lzo compression: minimal_diff=lzo(XOR(block1,block2)) Since the blocks are small, I'm using lzo's compression with the hope that this compression format has minimal boilerplate. I have reviewed algorithms such as xdelta and bsdiff, but these will not work for random static noise like this. These are more oriented around finding shifted sequences of bytes. Can error correcting codes work here for creating a minimal diff? How exactly? Exact algorithms would be nice. If it's just a research paper theory and not implemented then I'm not interested. NOTE: The similar bits in each block line up. There is no shifting. There is just some random noise bit flips that differentiate the blocks. A: if its truly random noise then it does not really compress. This means that if you have 8,000 bits (1,000 bytes x 8 bits / byte) and every individual bit has 1/5 (20%) probability of flipping, then you can't encode the changed bits in less than 8,000 x (-4/5 x ln2 4/5 + -1/5 x ln2 1/5) = 8,000 x (-4/5 x -0.322 + -1/5 x -2.322) = 8,000 x (0.2576 + 0.4644) = 5,776 bits i.e. 722 bytes. This is based on Shannon's information theory. Because the trivial way to represent the changed bits takes 1000 bytes (just encode the XOR of two blocks), you can save at most 30% of the space by compression. If you achieve consistently more then the bits are not randomly distributed or the bit flip probability is less than 20%. Standard algorithms like Lempel-Ziv are designed for structured data (i.e. data that is not random noise). Random noise like this is best encoded by simple Huffman-coding and that kind of stuff. But you can save at most 30%, so it's a question whether it's actually worth the effort. A: Have you tried standard compression algorithms already? What performance do you see? You should get fairly good compression ratios on the xor of the old and new blocks, due to the high bias towards 0s. Other than the standard options, one alternative that springs to mind is encoding each diff as a list of variable-length integers specifying the distance between flipped bits. For example, using 5-bit variable length integers, you could describe gaps of up to 16 bits in 5 bits, gaps of 17 to 1024 bits in 10 bits, and so forth. If there's any regularity to the intervals between flipped bits, you can use a regular compressor on this encoding for further savings.
Minimal binary diff for similar 1000 byte blocks with static noise?
I need a minimal diff for similar 1000 byte blocks. These blocks will have at most 20% of the bits different. The flipped bits will be like radio static -- randomly flipped bits with a uniform distribution over the whole block. Here's my pseudo code using XOR and lzo compression: minimal_diff=lzo(XOR(block1,block2)) Since the blocks are small, I'm using lzo's compression with the hope that this compression format has minimal boilerplate. I have reviewed algorithms such as xdelta and bsdiff, but these will not work for random static noise like this. These are more oriented around finding shifted sequences of bytes. Can error correcting codes work here for creating a minimal diff? How exactly? Exact algorithms would be nice. If it's just a research paper theory and not implemented then I'm not interested. NOTE: The similar bits in each block line up. There is no shifting. There is just some random noise bit flips that differentiate the blocks.
[ "if its truly random noise then it does not really compress. This means that if you have 8,000 bits (1,000 bytes x 8 bits / byte) and every individual bit has 1/5 (20%) probability of flipping, then you can't encode the changed bits in less than 8,000 x (-4/5 x ln2 4/5 + -1/5 x ln2 1/5) = 8,000 x (-4/5 x -0.322 + -1/5 x -2.322) = 8,000 x (0.2576 + 0.4644) = 5,776 bits i.e. 722 bytes. This is based on Shannon's information theory.\nBecause the trivial way to represent the changed bits takes 1000 bytes (just encode the XOR of two blocks), you can save at most 30% of the space by compression. If you achieve consistently more then the bits are not randomly distributed or the bit flip probability is less than 20%.\nStandard algorithms like Lempel-Ziv are designed for structured data (i.e. data that is not random noise). Random noise like this is best encoded by simple Huffman-coding and that kind of stuff. But you can save at most 30%, so it's a question whether it's actually worth the effort.\n", "Have you tried standard compression algorithms already? What performance do you see? You should get fairly good compression ratios on the xor of the old and new blocks, due to the high bias towards 0s.\nOther than the standard options, one alternative that springs to mind is encoding each diff as a list of variable-length integers specifying the distance between flipped bits. For example, using 5-bit variable length integers, you could describe gaps of up to 16 bits in 5 bits, gaps of 17 to 1024 bits in 10 bits, and so forth. If there's any regularity to the intervals between flipped bits, you can use a regular compressor on this encoding for further savings.\n" ]
[ 3, 0 ]
[]
[]
[ "algorithm", "diff", "python" ]
stackoverflow_0001793253_algorithm_diff_python.txt
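A rough experiment in the spirit of the question's pseudo-code, with zlib standing in for LZO (which needs a third-party binding); the 20% flip rate is simulated, and the interesting part is how close the compressed XOR gets to the ~722-byte bound derived in the first answer:

import os, random, zlib

random.seed(0)
block1 = os.urandom(1000)
noise = bytes(sum((random.random() < 0.2) << bit for bit in range(8)) for _ in range(1000))
block2 = bytes(a ^ b for a, b in zip(block1, noise))

diff = bytes(a ^ b for a, b in zip(block1, block2))
print(len(zlib.compress(diff, 9)))   # compare against the ~722-byte entropy estimate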
Q: Passing SQLite variables in Python I am writing a app in python and utilzing sqlite. I have a list of strings which I would like to add too the database, where each element represents some data which coincides with the column it will be put. currently I have something like this cursor.execute("""insert into credit values ('Citi','5567','visa',6000,9.99,'23',9000)""") I can add strings easily but dont know how to add the variables of my list. A: Use parameters to .execute(): query = """ INSERT INTO credit (bank, number, card, int1, value, type, int2) VALUES (?, ?, ?, ?, ?, ?, ?) """ data = ['Citi', '5567', 'visa', 6000, 9.99, '23', 9000] cursor.execute(query, data) According to PEP249: .execute(operation[,parameters]): Prepare and execute a database operation (query or command). Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified in a database-specific notation (see the module's paramstyle attribute for details) Checking paramstyle: >>> import sqlite3 >>> print sqlite3.paramstyle qmark qmark means you use ? for parameters.
Passing SQLite variables in Python
I am writing an app in Python and utilizing SQLite. I have a list of strings which I would like to add to the database, where each element represents some data that corresponds to the column it will be put in. Currently I have something like this: cursor.execute("""insert into credit values ('Citi','5567','visa',6000,9.99,'23',9000)""") I can add strings easily, but I don't know how to add the variables from my list.
[ "Use parameters to .execute():\nquery = \"\"\"\n INSERT INTO credit\n (bank, number, card, int1, value, type, int2)\n VALUES\n (?, ?, ?, ?, ?, ?, ?)\n \"\"\"\ndata = ['Citi', '5567', 'visa', 6000, 9.99, '23', 9000]\n\ncursor.execute(query, data)\n\nAccording to PEP249:\n\n.execute(operation[,parameters]):\nPrepare and execute a database operation (query or command). Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified in a database-specific notation (see the module's paramstyle attribute for details)\n\nChecking paramstyle:\n>>> import sqlite3\n>>> print sqlite3.paramstyle\nqmark\n\nqmark means you use ? for parameters.\n" ]
[ 11 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0001793970_python_sqlite.txt
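If a whole list of rows needs inserting, executemany takes the same ? placeholders; the second row below is made up, and the CREATE TABLE is only there to make the snippet self-contained:

import sqlite3

rows = [
    ('Citi', '5567', 'visa', 6000, 9.99, '23', 9000),
    ('Chase', '4401', 'visa', 4000, 12.5, '18', 7500),
]
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE credit (bank, number, card, int1, value, type, int2)")
conn.executemany("INSERT INTO credit VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT count(*) FROM credit").fetchone())   # (2,)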
Q: Matching multiple regex groups and removing them I have been given a file that I would like to extract the useful data from. The format of the file goes something like this: LINE: 1 TOKENKIND: somedata TOKENKIND: somedata LINE: 2 TOKENKIND: somedata LINE: 3 etc... What I would like to do is remove LINE: and the line number as well as TOKENKIND: so I am just left with a string that consists of 'somedata somedate somedata...' I'm using Python to do this, using regular expressions (that I'm not sure are correct) to match the bits of the file I'd like removing. My question is, how can I get Python to match multiple regex groups and ignore them, adding anything that isn't matched by my regex to my output string? My current code looks like this: import re import sys ignoredTokens = re.compile(''' (?P<WHITESPACE> \s+ ) | (?P<LINE> LINE:\s[0-9]+ ) | (?P<TOKEN> [A-Z]+: ) ''', re.VERBOSE) tokenList = open(sys.argv[1], 'r').read() cleanedList = '' scanner = ignoredTokens.scanner(tokenList) for line in tokenList: match = scanner.match() if match.lastgroup not in ('WHITESPACE', 'LINE', 'TOKEN'): cleanedList = cleanedList + match.group(match.lastindex) + ' ' print cleanedList A: import re x = '''LINE: 1 TOKENKIND: somedata TOKENKIND: somedata LINE: 2 TOKENKIND: somedata LINE: 3''' junkre = re.compile(r'(\s*LINE:\s*\d*\s*)|(\s*TOKENKIND:)', re.DOTALL) print junkre.sub('', x) A: no need to use regex in Python. Its Python after all, not Perl. Think simple and use its string manipulation capabilities f=open("file") for line in f: if line.startswith("LINE:"): continue if "TOKENKIND" in line: print line.split(" ",1)[-1].strip() f.close() A: How about replacing (^LINE: \d+$)|(^\w+:) with an empty string ""? Use \n instead of ^ and $ to remove unwanted empty lines also.
Matching multiple regex groups and removing them
I have been given a file that I would like to extract the useful data from. The format of the file goes something like this: LINE: 1 TOKENKIND: somedata TOKENKIND: somedata LINE: 2 TOKENKIND: somedata LINE: 3 etc... What I would like to do is remove LINE: and the line number as well as TOKENKIND: so I am just left with a string that consists of 'somedata somedate somedata...' I'm using Python to do this, using regular expressions (that I'm not sure are correct) to match the bits of the file I'd like removing. My question is, how can I get Python to match multiple regex groups and ignore them, adding anything that isn't matched by my regex to my output string? My current code looks like this: import re import sys ignoredTokens = re.compile(''' (?P<WHITESPACE> \s+ ) | (?P<LINE> LINE:\s[0-9]+ ) | (?P<TOKEN> [A-Z]+: ) ''', re.VERBOSE) tokenList = open(sys.argv[1], 'r').read() cleanedList = '' scanner = ignoredTokens.scanner(tokenList) for line in tokenList: match = scanner.match() if match.lastgroup not in ('WHITESPACE', 'LINE', 'TOKEN'): cleanedList = cleanedList + match.group(match.lastindex) + ' ' print cleanedList
[ "import re\n\nx = '''LINE: 1\nTOKENKIND: somedata\nTOKENKIND: somedata\nLINE: 2\nTOKENKIND: somedata\nLINE: 3'''\n\njunkre = re.compile(r'(\\s*LINE:\\s*\\d*\\s*)|(\\s*TOKENKIND:)', re.DOTALL)\n\nprint junkre.sub('', x)\n\n", "no need to use regex in Python. Its Python after all, not Perl. Think simple and use its string manipulation capabilities\nf=open(\"file\")\nfor line in f:\n if line.startswith(\"LINE:\"): continue\n if \"TOKENKIND\" in line:\n print line.split(\" \",1)[-1].strip()\nf.close()\n\n", "How about replacing (^LINE: \\d+$)|(^\\w+:) with an empty string \"\"? \nUse \\n instead of ^ and $ to remove unwanted empty lines also.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "lexical_analysis", "python", "regex" ]
stackoverflow_0001791097_lexical_analysis_python_regex.txt
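The inverse of the substitution approach also works: rather than deleting the markers, pull out only the data that follows a non-LINE token (the sample text is the one from the question):

import re

text = """LINE: 1
TOKENKIND: somedata
TOKENKIND: somedata
LINE: 2
TOKENKIND: somedata
LINE: 3"""

print(' '.join(re.findall(r'^(?!LINE:)[A-Z]+:\s+(.+)$', text, re.MULTILINE)))
# somedata somedata somedata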
Q: operation in arrays I have a question, as I perform mathematical operations with an array of lists such as I get the sum of each array list getting a new list with the Valar for the sum of each list in the array. thanks for any response A: Try a list comprehension: >>> list_of_lists = [[1,2],[3,4]] >>> [sum(li) for li in list_of_lists] [3, 7] A: You can also try mapping the lists with the built-in sum function. >>> a = [11, 13, 17, 19, 23] >>> b = [29, 31, 37, 41, 43] >>> c = [47, 53, 59, 61, 67] >>> d = [71, 73, 79, 83, 89] >>> map(sum, [a, b, c, d]) <map object at 0x02A0E0D0> >>> list(_) [83, 181, 287, 395] A: If you're going to manipulate lists of numbers to perform some mathematical calculos, you'd better use Numpy's arrays: >>> import numpy >>> a = numpy.array([1,2,3]) >>> b = numpy.array([2,6]) >>> a_list = [a,b] >>> [x.sum() for x in a_list] [6, 8] It'll be faster! A: What I understand is that you have a list of list -- in effect, matrix. You want the sum of each row. I agree with other answerers that you should use numpy. we can create a mulidimensional array: >>> import numpy >>> a = numpy.array([[1,2,3], [4,5,6], [7,8,9]]) >>> a array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) Now we can use a.sum([dimension]) where dimension is how you want to sum the array. Summing each row is dimension 1: >>> a.sum(1) array([ 6, 15, 24]) You can also sum each column: >>> a.sum(0) array([12, 15, 18]) And sum all: >>> a.sum() 45
operation in arrays
I have a question about performing mathematical operations on an array (list) of lists: how can I get the sum of each list in the array, producing a new list whose values are the sums of the individual lists? Thanks for any response.
[ "Try a list comprehension:\n>>> list_of_lists = [[1,2],[3,4]]\n>>> [sum(li) for li in list_of_lists]\n[3, 7]\n\n", "You can also try mapping the lists with the built-in sum function.\n>>> a = [11, 13, 17, 19, 23]\n>>> b = [29, 31, 37, 41, 43]\n>>> c = [47, 53, 59, 61, 67]\n>>> d = [71, 73, 79, 83, 89]\n>>> map(sum, [a, b, c, d])\n<map object at 0x02A0E0D0>\n>>> list(_)\n[83, 181, 287, 395]\n\n", "If you're going to manipulate lists of numbers to perform some mathematical calculos, you'd better use Numpy's arrays:\n>>> import numpy\n>>> a = numpy.array([1,2,3])\n>>> b = numpy.array([2,6])\n>>> a_list = [a,b]\n>>> [x.sum() for x in a_list]\n[6, 8]\n\nIt'll be faster!\n", "What I understand is that you have a list of list -- in effect, matrix. You want the sum of each row. I agree with other answerers that you should use numpy.\nwe can create a mulidimensional array:\n>>> import numpy\n>>> a = numpy.array([[1,2,3], [4,5,6], [7,8,9]])\n>>> a\narray([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\n\nNow we can use a.sum([dimension]) where dimension is how you want to sum the array. Summing each row is dimension 1:\n>>> a.sum(1)\narray([ 6, 15, 24])\n\nYou can also sum each column:\n>>> a.sum(0)\narray([12, 15, 18])\n\nAnd sum all:\n>>> a.sum()\n45\n\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001793214_python.txt
Q: Merge two lists of lists - Python This is a great primer but doesn't answer what I need: Combining two sorted lists in Python I have two Python lists, each is a list of datetime,value pairs: list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]] And: list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]] There are actually numerous list_a lists with different key/values. All list_a datetimes are in list_x. I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x. Bonus: In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be: dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]} I can figure that part out though. A: Here's some code that does what you asked for. You can turn your list of pairs into a dictionary straightforwardly. Then keys that are shared can be found by intersecting the sets of keys. Finally, constructing the result dictionary is easy given the set of shared keys. dict_a = dict(list_a) dict_x = dict(list_x) shared_keys = set(dict_a).intersection(set(dict_x)) result = dict((k, (dict_a[k], dict_x[k])) for k in shared_keys) A: "I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x." def merge_lists( list_a, list_x ): dict_x= dict(list_x) for k,v in list_a: if k in dict_x: yield k, (v, dict_x[k]) Something like that may work also. merged= list( merge_lists( someDict['object_a'], someDict['object_b'] ) This may be slightly quicker because it only makes one dictionary for lookups, and leaves the other list alone. A: Nothing beats a nice functional one-liner: reduce(lambda l1,l2: l1 + l2, list) A: Could try extend: list_a.extend(list_b)
Merge two lists of lists - Python
This is a great primer but doesn't answer what I need: Combining two sorted lists in Python I have two Python lists, each is a list of datetime,value pairs: list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]] And: list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]] There are actually numerous list_a lists with different key/values. All list_a datetimes are in list_x. I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x. Bonus: In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be: dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]} I can figure that part out though.
[ "Here's some code that does what you asked for. You can turn your list of pairs into a dictionary straightforwardly. Then keys that are shared can be found by intersecting the sets of keys. Finally, constructing the result dictionary is easy given the set of shared keys.\ndict_a = dict(list_a)\ndict_x = dict(list_x)\n\nshared_keys = set(dict_a).intersection(set(dict_x))\n\nresult = dict((k, (dict_a[k], dict_x[k])) for k in shared_keys)\n\n", "\"I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x.\"\ndef merge_lists( list_a, list_x ):\n dict_x= dict(list_x)\n for k,v in list_a:\n if k in dict_x:\n yield k, (v, dict_x[k])\n\nSomething like that may work also.\nmerged= list( merge_lists( someDict['object_a'], someDict['object_b'] )\n\nThis may be slightly quicker because it only makes one dictionary for lookups, and leaves the other list alone.\n", "Nothing beats a nice functional one-liner:\nreduce(lambda l1,l2: l1 + l2, list)\n\n", "Could try extend:\nlist_a.extend(list_b)\n\n" ]
[ 4, 3, 2, 0 ]
[]
[]
[ "django", "list", "python" ]
stackoverflow_0000803526_django_list_python.txt
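For the bonus dictionary form at the end of the question, the same dict-lookup trick applies per key; the data is taken verbatim from the question:

list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17],
          ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]
data = {'object_a': [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]],
        'object_b': [['1241004212000', 2]]}

dict_x = dict(list_x)
merged = {name: [(ts, val_a, dict_x[ts]) for ts, val_a in pairs if ts in dict_x]
          for name, pairs in data.items()}
print(merged['object_b'])   # [('1241004212000', 2, 20)]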
Q: Knowing if any key is pressed, wxPython I have a timer, and need to know if any of the keys is pressed on any cycle. How do I do it? A: If you are using Linux it's found in the curses module, if you use Windows it's in the msvcrt module. I found following article really helpful in describing this topic - Event Driven Programming A: Try: import sys c = sys.stdin.read(1) A: If you are using Windows, Use PyHook If you like to know system wide key press events. import pythoncom, pyHook def OnKeyboardEvent(event): print 'Ascii:', event.Ascii, chr(event.Ascii) print 'Key:', event.Key print 'KeyID:', event.KeyID print 'ScanCode:', event.ScanCode print 'Extended:', event.Extended return True #for pass through key events, False to eat Keys hm = pyHook.HookManager() hm.KeyDown = OnKeyboardEvent hm.HookKeyboard() pythoncom.PumpMessages()
Knowing if any key is pressed, wxPython
I have a timer, and need to know if any of the keys is pressed on any cycle. How do I do it?
[ "If you are using Linux it's found in the curses module, if you use Windows it's in the msvcrt module.\nI found following article really helpful in describing this topic - Event Driven Programming\n", "Try:\nimport sys\nc = sys.stdin.read(1)\n\n", "If you are using Windows, Use PyHook If you like to know system wide key press events.\nimport pythoncom, pyHook \n\ndef OnKeyboardEvent(event):\n print 'Ascii:', event.Ascii, chr(event.Ascii)\n print 'Key:', event.Key\n print 'KeyID:', event.KeyID\n print 'ScanCode:', event.ScanCode\n print 'Extended:', event.Extended\n\n return True #for pass through key events, False to eat Keys\n\nhm = pyHook.HookManager()\nhm.KeyDown = OnKeyboardEvent\nhm.HookKeyboard()\npythoncom.PumpMessages()\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0001786194_python_wxpython.txt
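None of the answers stays inside wxPython itself, so here is a hedged wx-only sketch: a wx.Timer whose handler polls wx.GetKeyState each cycle. GetKeyState checks specific key codes rather than literally "any key", so the codes below are only examples to watch:

import wx

class KeyPollFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="key poll")
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_tick, self.timer)
        self.timer.Start(100)   # poll every 100 ms

    def on_tick(self, event):
        watched = (wx.WXK_SPACE, wx.WXK_RETURN, wx.WXK_ESCAPE)
        if any(wx.GetKeyState(code) for code in watched):
            print("a watched key is currently down")

app = wx.App(False)
KeyPollFrame().Show()
app.MainLoop()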
Q: PyQt: removeChild/addChild QGroupBox I am developing a system for a customer which is displayed in a set of tabs, and shows a table in the centralwidget with data extracted from a database. Depending on mouse events, the container (groupBox) must be removed from the centralwidget, or then added with new updated data for the table. Here is a piece of the code that runs nicely and shows the table with data inside the GroupBox: self.tab_tableview = QtGui.QWidget() self.tab_tableview.setObjectName("tab_tableview") self.viewGroupBox = QtGui.QGroupBox(self.tab_tableview) self.viewGroupBox.setGeometry(QtCore.QRect(10, 0, 751, 501)) self.viewGroupBox.setObjectName("viewGroupBox") self.vBox = QtGui.QVBoxLayout() self.vBox.addWidget(self.newGroupBox) self.vBox.setGeometry(QtCore.QRect(40, 170, 171, 111)) self.vBox.addStretch(1) self.viewTableWidget = QtGui.QTableView(self.viewGroupBox) self.viewTableWidget.setGeometry(QtCore.QRect(10, 20, 731, 471)) self.viewTableWidget.setObjectName("viewTableWidget") updatedTableModel=self.callShowTable() self.viewTableWidget.setModel(updatedTableModel) self.viewTableWidget.setColumnWidth(0,30) self.viewTableWidget.setColumnWidth(1,550) self.viewTabWidget.addTab(self.tab_tableview, "") if removeContainer_Bottun_Pressed: print "remove bottun was pressed" self.vBox.removeWidget(self.viewGroupBox) if addContainer_Bottun_Pressed: print "add bottun was pressed" self.vBox.addWidget(self.viewGroupBox) The program detects when "removeContainer_Bottun_Pressed" is true, and run the removeWidget(self.newGroupBox). Although removeWidget runs, the groupBox stays in the same place, instead of disappearing and reappearing on request. What is missing here? All comments and suggestions are highly appreciated. A: I don't think calling removeWidget is necessary. Try just calling widget.deleteLater on whatever you want to delete. Then when you want to add it back, recreate it and use layout.insertWidget to put it in its proper place. Does that work? It's working for me here on Windows XP... import sys from PyQt4 import QtGui, QtCore app = QtGui.QApplication(sys.argv) widget = QtGui.QWidget() widget_layout = QtGui.QHBoxLayout() widget.setLayout(widget_layout) def add_group_box(): group_box = widget.group_box = QtGui.QGroupBox() group_layout = QtGui.QVBoxLayout() group_box.setLayout(group_layout) for i in range(2): group_layout.addWidget(QtGui.QRadioButton(str(i))) widget_layout.insertWidget(0, group_box) add_group_box() show_button = QtGui.QPushButton("show") hide_button = QtGui.QPushButton("hide") def on_show(): if not widget.group_box: add_group_box() def on_hide(): if widget.group_box: widget.group_box.deleteLater() widget.group_box = None show_button.connect(show_button, QtCore.SIGNAL("clicked()"), on_show) hide_button.connect(hide_button, QtCore.SIGNAL("clicked()"), on_hide) widget_layout.addWidget(show_button) widget_layout.addWidget(hide_button) widget.show() app.exec_() A: It seems to contain a typo: addContainer_Bottun_Pressed Wouldn't it be addContainer_Botton_Pressed instead? You might need to call some kind of "relayout" method after changing the widgets on the fly. You should try to call this after removing/adding child widgets: self.vBox.adjustSize() A: First of all thanks for all the input I received here. In the sequence is the source code working -- or at least working 80% perfectly well. 80% - What it does: radioButton to delete groupBox; radioButton to say Hello; radioButton to say Nice; 20% - What it still doesn't do: radioButton to add groupBox. 
As you can see in the sequence, the function addBox is called, but it doesn't add the groupBox for the second time it runs. Here are the imports: #//===========================================================//# import os import platform import sys from PyQt4 import QtGui, QtCore from PyQt4.QtCore import * from PyQt4.QtGui import * from PyQt4 import QtCore, QtGui #//===========================================================//# Here is the Ui_Addwidget class.. #//===========================================================//# class Ui_Addwidget(object): def runbutton3(self): print "hello // radioButton3.isChecked : ", self.radioButton3.isChecked() def runButton4(self): print "nice // radioButton4.isChecked : ", self.radioButton4.isChecked() def addBox(self): self.vLayout_wdg = QtGui.QWidget(self.centralwidget) self.vLayout_wdg.setGeometry(QtCore.QRect(40, 160, 171, 121)) self.vLayout_wdg.setObjectName("vLayout_wdg") self.vLayoutBoxObj = QtGui.QHBoxLayout() self.vLayout_wdg.setLayout(self.vLayoutBoxObj) self.newGroupBox = self.vLayout_wdg.newGroupBox = QtGui.QGroupBox(self.vLayout_wdg) self.newGroupBox.setObjectName("newGroupBox") self.newGroupBox.setTitle(QtGui.QApplication.translate("MainWindow", "newGroupBox", None, QtGui.QApplication.UnicodeUTF8)) self.groupLayoutBox = QtGui.QVBoxLayout() self.groupLayoutBox.setObjectName("groupLayoutBox") self.newGroupBox.setLayout(self.groupLayoutBox) self.radioButton3 = QtGui.QRadioButton() self.radioButton3.setGeometry(QtCore.QRect(30, 30, 101, 21)) self.radioButton3.setObjectName("helloRadioButton") self.radioButton3.setText(QtGui.QApplication.translate("MainWindow", "say: Hello", None, QtGui.QApplication.UnicodeUTF8)) self.radioButton4 = QtGui.QRadioButton() self.radioButton4.setGeometry(QtCore.QRect(30, 60, 111, 18)) self.radioButton4.setObjectName("niceRadioButton") self.radioButton4.setText(QtGui.QApplication.translate("MainWindow", "say: Nice", None, QtGui.QApplication.UnicodeUTF8)) self.groupLayoutBox.addWidget(self.radioButton3) self.groupLayoutBox.addWidget(self.radioButton4) self.vLayoutBoxObj.insertWidget(0, self.newGroupBox) def on_show(self): print "addBox // radioButton1.isChecked : ", self.radioButton1.isChecked() if not self.vLayout_wdg.newGroupBox: self.addBox() def on_hide(self): print "deleteBox // radioButton2.isChecked : ", self.radioButton2.isChecked() if self.vLayout_wdg.newGroupBox: self.vLayout_wdg.newGroupBox.deleteLater() self.vLayout_wdg.newGroupBox = None def connectEvent(self): QtCore.QObject.connect(self.radioButton1, QtCore.SIGNAL("toggled(bool)"),self.on_show) QtCore.QObject.connect(self.radioButton2, QtCore.SIGNAL("toggled(bool)"),self.on_hide) QtCore.QObject.connect(self.radioButton3, QtCore.SIGNAL("toggled(bool)"),self.runbutton3) QtCore.QObject.connect(self.radioButton4, QtCore.SIGNAL("toggled(bool)"),self.runButton4) def selectBox(self): self.selectGroupBox = QtGui.QGroupBox(self.centralwidget) self.selectGroupBox.setGeometry(QtCore.QRect(40, 20, 171, 111)) self.selectGroupBox.setObjectName("selectGroupBox") self.selectGroupBox.setTitle(QtGui.QApplication.translate("MainWindow", "select", None, QtGui.QApplication.UnicodeUTF8)) self.radioButton1 = QtGui.QRadioButton(self.selectGroupBox) self.radioButton1.setGeometry(QtCore.QRect(30, 30, 111, 18)) self.radioButton1.setObjectName("radioButton1") self.radioButton1.setText(QtGui.QApplication.translate("MainWindow", "add groupbox", None, QtGui.QApplication.UnicodeUTF8)) self.radioButton2 = QtGui.QRadioButton(self.selectGroupBox) 
self.radioButton2.setGeometry(QtCore.QRect(30, 60, 111, 18)) self.radioButton2.setObjectName("radioButton2") self.radioButton2.setText(QtGui.QApplication.translate("MainWindow", "delete groupbox", None, QtGui.QApplication.UnicodeUTF8)) def addwidget_centralwdg(self,MainWindow): self.centralwidget = QtGui.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.selectBox() self.addBox() self.connectEvent() MainWindow.setCentralWidget(self.centralwidget) def addwidget_setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(250, 300) self.addwidget_centralwdg(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) #//===========================================================//# Here you have the mainDesign class... #//===========================================================//# class mainDesign(QtGui.QMainWindow,Ui_Addwidget): def __init__(self,parent=None): super(mainDesign,self).__init__(parent) self.addwidget_setupUi(self) #//===========================================================//# And of course, the def main... #//===========================================================//# def main(): app = QtGui.QApplication(sys.argv) main = mainDesign() main.show() sys.exit(app.exec_()) main() #//===========================================================//# To try it, just copy the code and the classes to a *.py file. And it will run. Any other comments or suggestions to solve the missing piece of the puzzle, are highly welcome and appreciated. A: Here is the final solution, working 100%: update as follows: def on_show(self): print "addBox // radioButton1.isChecked : ", self.radioButton1.isChecked() if not self.centralwidget.newGroupBox: #self.addBox() self.addwidget_centralwdg(self.globalMainWindow) def addwidget_centralwdg(self,MainWindow): self.centralwidget = QtGui.QWidget(MainWindow) self.globalMainWindow=MainWindow self.centralwidget.setObjectName("centralwidget") self.selectBox() self.addBox() self.connectEvent() MainWindow.setCentralWidget(self.centralwidget) Hope this can help others too.
PyQt: removeChild/addChild QGroupBox
I am developing a system for a customer which is displayed in a set of tabs, and shows a table in the centralwidget with data extracted from a database. Depending on mouse events, the container (groupBox) must be removed from the centralwidget, or then added with new updated data for the table. Here is a piece of the code that runs nicely and shows the table with data inside the GroupBox: self.tab_tableview = QtGui.QWidget() self.tab_tableview.setObjectName("tab_tableview") self.viewGroupBox = QtGui.QGroupBox(self.tab_tableview) self.viewGroupBox.setGeometry(QtCore.QRect(10, 0, 751, 501)) self.viewGroupBox.setObjectName("viewGroupBox") self.vBox = QtGui.QVBoxLayout() self.vBox.addWidget(self.newGroupBox) self.vBox.setGeometry(QtCore.QRect(40, 170, 171, 111)) self.vBox.addStretch(1) self.viewTableWidget = QtGui.QTableView(self.viewGroupBox) self.viewTableWidget.setGeometry(QtCore.QRect(10, 20, 731, 471)) self.viewTableWidget.setObjectName("viewTableWidget") updatedTableModel=self.callShowTable() self.viewTableWidget.setModel(updatedTableModel) self.viewTableWidget.setColumnWidth(0,30) self.viewTableWidget.setColumnWidth(1,550) self.viewTabWidget.addTab(self.tab_tableview, "") if removeContainer_Bottun_Pressed: print "remove bottun was pressed" self.vBox.removeWidget(self.viewGroupBox) if addContainer_Bottun_Pressed: print "add bottun was pressed" self.vBox.addWidget(self.viewGroupBox) The program detects when "removeContainer_Bottun_Pressed" is true, and run the removeWidget(self.newGroupBox). Although removeWidget runs, the groupBox stays in the same place, instead of disappearing and reappearing on request. What is missing here? All comments and suggestions are highly appreciated.
[ "I don't think calling removeWidget is necessary. Try just calling widget.deleteLater on whatever you want to delete. Then when you want to add it back, recreate it and use layout.insertWidget to put it in its proper place. Does that work?\nIt's working for me here on Windows XP... \nimport sys\n\nfrom PyQt4 import QtGui, QtCore\n\napp = QtGui.QApplication(sys.argv)\n\nwidget = QtGui.QWidget()\nwidget_layout = QtGui.QHBoxLayout()\nwidget.setLayout(widget_layout)\n\ndef add_group_box():\n group_box = widget.group_box = QtGui.QGroupBox()\n group_layout = QtGui.QVBoxLayout()\n group_box.setLayout(group_layout)\n\n for i in range(2):\n group_layout.addWidget(QtGui.QRadioButton(str(i)))\n\n widget_layout.insertWidget(0, group_box)\nadd_group_box()\n\nshow_button = QtGui.QPushButton(\"show\")\nhide_button = QtGui.QPushButton(\"hide\")\ndef on_show():\n if not widget.group_box:\n add_group_box()\ndef on_hide():\n if widget.group_box:\n widget.group_box.deleteLater()\n widget.group_box = None\nshow_button.connect(show_button, QtCore.SIGNAL(\"clicked()\"), on_show)\nhide_button.connect(hide_button, QtCore.SIGNAL(\"clicked()\"), on_hide) \nwidget_layout.addWidget(show_button)\nwidget_layout.addWidget(hide_button)\n\nwidget.show()\n\napp.exec_()\n\n", "\nIt seems to contain a typo: addContainer_Bottun_Pressed\nWouldn't it be addContainer_Botton_Pressed instead?\n\nYou might need to call some kind of \"relayout\" method after changing the widgets on the fly. You should try to call this after removing/adding child widgets: self.vBox.adjustSize()\n\n\n", "First of all thanks for all the input I received here. In the sequence is the source code working -- or at least working 80% perfectly well. \n\n80% - What it does: radioButton to delete groupBox; radioButton to say Hello; radioButton to say Nice;\n20% - What it still doesn't do: radioButton to add groupBox.\n\nAs you can see in the sequence, the function addBox is called, but it doesn't add the groupBox for the second time it runs.\nHere are the imports:\n#//===========================================================//#\nimport os\nimport platform\nimport sys\nfrom PyQt4 import QtGui, QtCore\nfrom PyQt4.QtCore import *\nfrom PyQt4.QtGui import *\nfrom PyQt4 import QtCore, QtGui\n#//===========================================================//#\n\nHere is the Ui_Addwidget class..\n#//===========================================================//# \nclass Ui_Addwidget(object):\n def runbutton3(self):\n print \"hello // radioButton3.isChecked : \", self.radioButton3.isChecked()\n def runButton4(self):\n print \"nice // radioButton4.isChecked : \", self.radioButton4.isChecked() \n def addBox(self):\n self.vLayout_wdg = QtGui.QWidget(self.centralwidget)\n self.vLayout_wdg.setGeometry(QtCore.QRect(40, 160, 171, 121))\n self.vLayout_wdg.setObjectName(\"vLayout_wdg\")\n\n self.vLayoutBoxObj = QtGui.QHBoxLayout()\n self.vLayout_wdg.setLayout(self.vLayoutBoxObj)\n\n self.newGroupBox = self.vLayout_wdg.newGroupBox = QtGui.QGroupBox(self.vLayout_wdg)\n self.newGroupBox.setObjectName(\"newGroupBox\")\n self.newGroupBox.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"newGroupBox\", None, QtGui.QApplication.UnicodeUTF8))\n\n self.groupLayoutBox = QtGui.QVBoxLayout()\n self.groupLayoutBox.setObjectName(\"groupLayoutBox\")\n self.newGroupBox.setLayout(self.groupLayoutBox)\n\n self.radioButton3 = QtGui.QRadioButton()\n self.radioButton3.setGeometry(QtCore.QRect(30, 30, 101, 21))\n self.radioButton3.setObjectName(\"helloRadioButton\")\n 
self.radioButton3.setText(QtGui.QApplication.translate(\"MainWindow\", \"say: Hello\", None, QtGui.QApplication.UnicodeUTF8))\n\n self.radioButton4 = QtGui.QRadioButton()\n self.radioButton4.setGeometry(QtCore.QRect(30, 60, 111, 18))\n self.radioButton4.setObjectName(\"niceRadioButton\")\n self.radioButton4.setText(QtGui.QApplication.translate(\"MainWindow\", \"say: Nice\", None, QtGui.QApplication.UnicodeUTF8))\n\n self.groupLayoutBox.addWidget(self.radioButton3)\n self.groupLayoutBox.addWidget(self.radioButton4)\n\n self.vLayoutBoxObj.insertWidget(0, self.newGroupBox)\n def on_show(self):\n print \"addBox // radioButton1.isChecked : \", self.radioButton1.isChecked()\n if not self.vLayout_wdg.newGroupBox:\n self.addBox()\n def on_hide(self):\n print \"deleteBox // radioButton2.isChecked : \", self.radioButton2.isChecked()\n if self.vLayout_wdg.newGroupBox:\n self.vLayout_wdg.newGroupBox.deleteLater()\n self.vLayout_wdg.newGroupBox = None\n def connectEvent(self):\n QtCore.QObject.connect(self.radioButton1, QtCore.SIGNAL(\"toggled(bool)\"),self.on_show)\n QtCore.QObject.connect(self.radioButton2, QtCore.SIGNAL(\"toggled(bool)\"),self.on_hide)\n QtCore.QObject.connect(self.radioButton3, QtCore.SIGNAL(\"toggled(bool)\"),self.runbutton3)\n QtCore.QObject.connect(self.radioButton4, QtCore.SIGNAL(\"toggled(bool)\"),self.runButton4)\n def selectBox(self):\n self.selectGroupBox = QtGui.QGroupBox(self.centralwidget)\n self.selectGroupBox.setGeometry(QtCore.QRect(40, 20, 171, 111))\n self.selectGroupBox.setObjectName(\"selectGroupBox\")\n self.selectGroupBox.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"select\", None, QtGui.QApplication.UnicodeUTF8))\n\n self.radioButton1 = QtGui.QRadioButton(self.selectGroupBox)\n self.radioButton1.setGeometry(QtCore.QRect(30, 30, 111, 18))\n self.radioButton1.setObjectName(\"radioButton1\")\n self.radioButton1.setText(QtGui.QApplication.translate(\"MainWindow\", \"add groupbox\", None, QtGui.QApplication.UnicodeUTF8))\n\n self.radioButton2 = QtGui.QRadioButton(self.selectGroupBox)\n self.radioButton2.setGeometry(QtCore.QRect(30, 60, 111, 18))\n self.radioButton2.setObjectName(\"radioButton2\")\n self.radioButton2.setText(QtGui.QApplication.translate(\"MainWindow\", \"delete groupbox\", None, QtGui.QApplication.UnicodeUTF8)) \n def addwidget_centralwdg(self,MainWindow):\n self.centralwidget = QtGui.QWidget(MainWindow)\n self.centralwidget.setObjectName(\"centralwidget\")\n\n self.selectBox()\n self.addBox()\n self.connectEvent()\n\n MainWindow.setCentralWidget(self.centralwidget)\n def addwidget_setupUi(self, MainWindow):\n MainWindow.setObjectName(\"MainWindow\")\n MainWindow.resize(250, 300)\n self.addwidget_centralwdg(MainWindow)\n QtCore.QMetaObject.connectSlotsByName(MainWindow)\n#//===========================================================//# \n\nHere you have the mainDesign class...\n#//===========================================================//#\nclass mainDesign(QtGui.QMainWindow,Ui_Addwidget):\n def __init__(self,parent=None):\n super(mainDesign,self).__init__(parent)\n self.addwidget_setupUi(self)\n#//===========================================================//# \n\nAnd of course, the def main...\n#//===========================================================//# \ndef main():\n app = QtGui.QApplication(sys.argv)\n main = mainDesign()\n main.show()\n sys.exit(app.exec_())\nmain()\n#//===========================================================//#\n\nTo try it, just copy the code and the classes to a *.py file. 
And it will run.\nAny other comments or suggestions to solve the missing piece of the puzzle, are highly welcome and appreciated.\n", "Here is the final solution, working 100%:\n\nupdate as follows:\ndef on_show(self):\n print \"addBox // radioButton1.isChecked : \", self.radioButton1.isChecked()\n if not self.centralwidget.newGroupBox:\n #self.addBox()\n self.addwidget_centralwdg(self.globalMainWindow)\ndef addwidget_centralwdg(self,MainWindow):\n self.centralwidget = QtGui.QWidget(MainWindow)\n self.globalMainWindow=MainWindow\n self.centralwidget.setObjectName(\"centralwidget\")\n self.selectBox()\n self.addBox()\n self.connectEvent()\n MainWindow.setCentralWidget(self.centralwidget)\n\n\nHope this can help others too.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "pyqt", "python", "qt" ]
stackoverflow_0001781173_pyqt_python_qt.txt
Q: Python on Rails? Would it be possible to translate the Ruby on Rails code base to Python? I think many people like Python more than Ruby, but find Ruby on Rails features better (as a whole) than the ones in Python web frameworks. Given that, would it be possible? Or does Ruby on Rails utilize language-specific features that would be difficult to translate to Python? A: This is a great blog post. Rails developers chose a framework, and coding in Ruby is the afterthought. Python developers chose the language for the language, not the framework. On the other hand, that made a lot lower bar to entry for frameworks. A: Many of the methodology used in Rails has been translated into Django. Have you tried it? http://www.djangoproject.com/ A: I think one of the things that people like about RoR is the domain-specific language (DSL) style of programming. This is something that Ruby is much better at than Python. A: I know that Rails does not necessarily = MVC per se, but I think a lot of what makes Rails productive is that it enforces (well, strongly encourages) MVC development, so you might find something similar if you look for Python MVC, such as this previous post here on Stack: What's a good lightweight Python MVC framework? There are lots of Python MVC frameworks out there, but I keep hearing a lot about Django (http://www.djangoproject.com/) so that should definitely be on your list of things to check out IMO.
Python on Rails?
Would it be possible to translate the Ruby on Rails code base to Python? I think many people like Python more than Ruby, but find Ruby on Rails features better (as a whole) than the ones in Python web frameworks. Given that, would it be possible? Or does Ruby on Rails utilize language-specific features that would be difficult to translate to Python?
[ "This is a great blog post. Rails developers chose a framework, and coding in Ruby is the afterthought. \nPython developers chose the language for the language, not the framework. On the other hand, that made a lot lower bar to entry for frameworks.\n", "Many of the methodology used in Rails has been translated into Django. Have you tried it?\nhttp://www.djangoproject.com/\n", "I think one of the things that people like about RoR is the domain-specific language (DSL) style of programming. This is something that Ruby is much better at than Python.\n", "I know that Rails does not necessarily = MVC per se, but I think a lot of what makes Rails productive is that it enforces (well, strongly encourages) MVC development, so you might find something similar if you look for Python MVC, such as this previous post here on Stack: What's a good lightweight Python MVC framework?\nThere are lots of Python MVC frameworks out there, but I keep hearing a lot about Django (http://www.djangoproject.com/) so that should definitely be on your list of things to check out IMO.\n" ]
[ 17, 16, 14, 1 ]
[]
[]
[ "code_translation", "metaprogramming", "python", "ruby_on_rails" ]
stackoverflow_0001794179_code_translation_metaprogramming_python_ruby_on_rails.txt
Q: Python: fork and exec a process to run on a different terminal I am trying to simulate a network consisting of several clients and servers. I have written node.py, which contains the client-server code, and I want to run multiple instances of node.py. I don't want to do this manually, so I have written another file, spawn.py, which spawns multiple instances of node.py using fork and exec. However, I need to run each instance of node.py in a different terminal (shell) so that I can easily debug what is happening inside each node. How can I do that? EDIT: I am working on Linux with Python 2.5, and I want to run all processes on the same box. A: If you want "real" (pseudo-;-) terminals, and are using X11 (almost every GUI interface on Linux does;-), you could exec xterm -e python node.py instead of just python node.py -- substitute for xterm whatever terminal emulator program you prefer, of course (I'm sure they all have command-line switches equivalent to good old xterm's -e, to specify what program they should run!-). A: shell #1: for p in 1 2 3 4 5 do python node.py > $p.log 2>&1 done shell #2: tail -F 1.log shell #3: tail -F 2.log etc...
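A sketch combining the two answers above: spawn.py can fork and exec one xterm per node, so every node.py instance gets its own terminal for debugging. This is only illustrative and assumes X11 and xterm are available; NUM_NODES and the way node.py takes its node id are placeholders, and any terminal emulator with an -e style switch would work the same way.

# spawn.py -- illustrative sketch: one xterm window per node.py instance
import os

NUM_NODES = 5  # placeholder

def spawn_node(node_id):
    pid = os.fork()
    if pid == 0:
        # Child process: replace it with an xterm that runs node.py.
        # -T sets the window title so each terminal is easy to identify.
        try:
            os.execvp("xterm", ["xterm", "-T", "node %d" % node_id,
                                "-e", "python", "node.py", str(node_id)])
        finally:
            os._exit(127)  # reached only if execvp failed
    return pid

if __name__ == "__main__":
    pids = [spawn_node(i) for i in range(NUM_NODES)]
    print "spawned pids:", pids
    for pid in pids:
        os.waitpid(pid, 0)  # block until each terminal window is closed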
Python: fork and exec a process to run on a different terminal
I am trying to simulate a network consisting of several clients and servers. I have written node.py, which contains the client-server code, and I want to run multiple instances of node.py. I don't want to do this manually, so I have written another file, spawn.py, which spawns multiple instances of node.py using fork and exec. However, I need to run each instance of node.py in a different terminal (shell) so that I can easily debug what is happening inside each node. How can I do that? EDIT: I am working on Linux with Python 2.5, and I want to run all processes on the same box.
[ "If you want \"real\" (pseudo-;-) terminals, and are using X11 (almost every GUI interface on Linux does;-), you could exec xterm -e python node.py instead of just python node.py -- substitute for xterm whatever terminal emulator program you prefer, of course (I'm sure they all have command-line switches equivalent to good old xterm's -e, to specify what program they should run!-).\n", "shell #1:\nfor p in 1 2 3 4 5\ndo\n python node.py > $p.log 2>&1\ndone\n\n\nshell #2:\ntail -F 1.log \n\nshell #3:\ntail -F 2.log \n\netc...\n\n" ]
[ 1, 0 ]
[]
[]
[ "process", "python" ]
stackoverflow_0001794536_process_python.txt
Q: How can I read the memory of another process in Python in Windows? I'm trying to write a Python script that reads a series of memory locations from a particular process. How can I do this in Python? I'll be using Windows, if it matters, and I have the PID of the process whose memory I'm attempting to read/edit. Am I going to have to resort to calling ReadProcessMemory() and using ctypes? A: I didn't see anything in the standard python libraries but I found an example using ctypes like you suggested on another site: from ctypes import * from ctypes.wintypes import * OpenProcess = windll.kernel32.OpenProcess ReadProcessMemory = windll.kernel32.ReadProcessMemory CloseHandle = windll.kernel32.CloseHandle PROCESS_ALL_ACCESS = 0x1F0FFF pid = 4044 # I assume you have this from somewhere. address = 0x1000000 # Likewise; for illustration I'll get the .exe header. buffer = c_char_p("The data goes here") bufferSize = len(buffer.value) bytesRead = c_ulong(0) processHandle = OpenProcess(PROCESS_ALL_ACCESS, False, pid) if ReadProcessMemory(processHandle, address, buffer, bufferSize, byref(bytesRead)): print "Success:", buffer else: print "Failed." CloseHandle(processHandle) A: Yes, ctypes (or win32all) and ReadProcessMemory are exactly the way to go. Were you looking for something extra/different? What, in particular?
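One caveat about the snippet in the first answer: it uses a c_char_p built from a Python string as the destination buffer, which asks Windows to write into memory owned by an immutable Python string. A slightly safer variant, given here only as a sketch (the pid, address, and size are placeholders you would supply yourself), uses create_string_buffer for the destination and requests only the access rights needed for reading:

from ctypes import windll, create_string_buffer, c_ulong, byref

PROCESS_VM_READ = 0x0010
PROCESS_QUERY_INFORMATION = 0x0400

def read_process_memory(pid, address, size):
    """Return `size` bytes read from `address` in process `pid`, or None on failure."""
    kernel32 = windll.kernel32
    handle = kernel32.OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                                  False, pid)
    if not handle:
        return None
    try:
        buf = create_string_buffer(size)   # writable buffer that we own
        bytes_read = c_ulong(0)
        ok = kernel32.ReadProcessMemory(handle, address, buf, size,
                                        byref(bytes_read))
        if ok:
            return buf.raw[:bytes_read.value]
        return None
    finally:
        kernel32.CloseHandle(handle)

# Example with placeholder values:
# data = read_process_memory(4044, 0x1000000, 64)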
How can I read the memory of another process in Python in Windows?
I'm trying to write a Python script that reads a series of memory locations from a particular process. How can I do this in Python? I'll be using Windows, if it matters, and I have the PID of the process whose memory I'm attempting to read/edit. Am I going to have to resort to calling ReadProcessMemory() and using ctypes?
[ "I didn't see anything in the standard python libraries but I found an example using ctypes like you suggested on another site:\nfrom ctypes import *\nfrom ctypes.wintypes import *\n\nOpenProcess = windll.kernel32.OpenProcess\nReadProcessMemory = windll.kernel32.ReadProcessMemory\nCloseHandle = windll.kernel32.CloseHandle\n\nPROCESS_ALL_ACCESS = 0x1F0FFF\n\npid = 4044 # I assume you have this from somewhere.\naddress = 0x1000000 # Likewise; for illustration I'll get the .exe header.\n\nbuffer = c_char_p(\"The data goes here\")\nbufferSize = len(buffer.value)\nbytesRead = c_ulong(0)\n\nprocessHandle = OpenProcess(PROCESS_ALL_ACCESS, False, pid)\nif ReadProcessMemory(processHandle, address, buffer, bufferSize, byref(bytesRead)):\n print \"Success:\", buffer\nelse:\n print \"Failed.\"\n\nCloseHandle(processHandle)\n\n", "Yes, ctypes (or win32all) and ReadProcessMemory are exactly the way to go. Were you looking for something extra/different? What, in particular?\n" ]
[ 27, 0 ]
[ "See http://www.windowsreference.com/windows-xp/dos-commands-and-equivalent-linux-commands/\nYou can use tasklist.exe to list processes, then scrape the results. Then use taskkill.exe (or tstskill.exe) to end them.\nBut ctypes and kernal32 is probably safer.\n" ]
[ -7 ]
[ "python" ]
stackoverflow_0001794579_python.txt
Q: Chat comet site using Python and Twisted I want to build a site similar to www.omegle.com. Can anyone suggest some ideas? I think it's built using Twisted and the Orbited comet server. A: Twisted is a good choice. I used it a few years ago to build a server for a browser-based online game I wrote - it kept track of clients, served them replies to Ajax requests, and used HTML5 Server-Sent DOM Events as well. Worked rather painlessly thanks to Twisted's good HTTP library. For a Python web framework, I personally favor Django. It's quick to get going with it, and it has a lot of functionality out of the box ("batteries included" as it says on their site I think). Pylons is another popular choice. A: You can use Nevow, which is a web framework that is built on top of Twisted. The documentation for Nevow includes a fully functional two-way chat application including examples of how to write unit tests for it. A: I'd suggest you use Twisted. ;) It has both chat clients and chat servers. Then you also need a web framework. I'd use either Grok or BFD, but there are many Python Web Frameworks around, and few of them are really bad. A: Most XMPP servers support BOSH. If you use the strophe javascript library, you have only to worry about presentation -- the rest is done for you. A: Because you seem to be looking for both Comet functionality and a Web Framework, you might have a look here: http://github.com/clemesha/hotdot which is a complete example of combining Django, Orbited, and Twisted.
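To make the "Twisted has chat servers" suggestion above concrete, here is a minimal line-based broadcast chat server sketch. It is only illustrative: the port number is arbitrary, and a real Omegle-style site would pair visitors up one-to-one and put a comet transport such as Orbited between the browser and this kind of backend.

from twisted.internet import reactor, protocol
from twisted.protocols import basic

class ChatProtocol(basic.LineReceiver):
    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        # Relay every received line to all other connected clients.
        for client in self.factory.clients:
            if client is not self:
                client.sendLine(line)

class ChatFactory(protocol.ServerFactory):
    protocol = ChatProtocol

    def __init__(self):
        self.clients = []

if __name__ == "__main__":
    reactor.listenTCP(8123, ChatFactory())  # port chosen arbitrarily
    reactor.run()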
Chat comet site using Python and Twisted
I want to build a site similar to www.omegle.com. Can anyone suggest some ideas? I think it's built using Twisted and the Orbited comet server.
[ "Twisted is a good choice. I used it a few years ago to build a server for a browser-based online game I wrote - it kept track of clients, served them replies to Ajax requests, and used HTML5 Server-Sent DOM Events as well. Worked rather painlessly thanks to Twisted's good HTTP library.\nFor a Python web framework, I personally favor Django. It's quick to get going with it, and it has a lot of functionality out of the box (\"batteries included\" as it says on their site I think). Pylons is another popular choice.\n", "You can use Nevow, which is a web framework that is built on top of Twisted. The documentation for Nevow includes a fully functional two-way chat application including examples of how to write unit tests for it.\n", "I'd suggest you use Twisted. ;) It has both chat clients and chat servers. Then you also need a web framework. I'd use either Grok or BFD, but there are many Python Web Frameworks around, and few of them are really bad.\n", "Most XMPP servers support BOSH. If you use the strophe javascript library, you have only to worry about presentation -- the rest is done for you.\n", "Because you seem to be looking for both Comet functionality and a Web Framework, you might have a look here: http://github.com/clemesha/hotdot which is a complete example of combining Django, Orbited, and Twisted.\n" ]
[ 3, 2, 1, 1, 1 ]
[]
[]
[ "orbited", "python", "twisted" ]
stackoverflow_0001047306_orbited_python_twisted.txt
Q: How can I generate RSS with arbitrary tags and enclosures Right now, I'm using PyRSS2Gen to generate an RSS document (resyndicating a modification of an rss feed that was parsed with feedparser), but I can't figure out how to add uncommon tags to the item. items = [ PyRSS2Gen.RSSItem( title = x.title, link = x.link, description = x.summary, guid = x.link, pubDate = datetime( x.modified_parsed[0], x.modified_parsed[1], x.modified_parsed[2], x.modified_parsed[3], x.modified_parsed[4], x.modified_parsed[5]) ) for x in parsed_feed.entries] rss = PyRSS2Gen.RSS2( title = "Resyndicator", link = parsed_feed['feed'].get("link"), description = "etc", language = parsed_feed['feed'].get("language"), copyright = parsed_feed['feed'].get("copyright"), managingEditor = parsed_feed['feed'].get("managingEditor"), webMaster = parsed_feed['feed'].get("webMaster"), pubDate = parsed_feed['feed'].get("pubDate"), lastBuildDate = parsed_feed['feed'].get("lastBuildDate"), categories = parsed_feed['feed'].get("categories"), generator = parsed_feed['feed'].get("generator"), docs = parsed_feed['feed'].get("docs"), items = items ) The original feed has a <show_id></show_id> tag, as well as an enclosure <enclosure url="http://url.com" length="10" type="" /> and I need to include that in the generated version as well. A: The documentation explains: To add your own attributes (needed for namespace declarations), redefine element_attrs or rss_attrs in your subclass [of RSS and RSSData]. That's the whole point about subclassing, isn't it? :) A: There are two ways. First, you could change the code directly. Edit 'publish' and put whatever you want wherever you want it. But if you want to take the suggestion from the documentation, derive from RSS2 and implement your own publish_extensions, like this: class YourRSS2Item(PyRSS2Gen.RSSItem): def publish_extensions(self, handler): handler.startElement("show_id") handler.endElement("show_id") 'handler' follows the SAX2 API (start_element, characters, end_element). And as for making an enclosure, use the Enclosure class, as in item = RSSItem( .... enclosure = Enclosure("http://url.com", 10, ""), ...)
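Putting the two suggestions from the answers above together: subclass RSSItem, emit the custom <show_id> element from publish_extensions, and pass the enclosure through PyRSS2Gen's Enclosure class. The following is only a sketch; the show_id value, dates, URLs, and enclosure details are placeholders standing in for whatever feedparser pulled out of the source feed, and the empty attrs mapping ({}) is passed because the handler is a SAX XMLGenerator, which requires it.

import datetime
import PyRSS2Gen

class ShowRSSItem(PyRSS2Gen.RSSItem):
    """RSSItem that also writes a custom <show_id> element."""
    def __init__(self, show_id=None, **kwargs):
        PyRSS2Gen.RSSItem.__init__(self, **kwargs)
        self.show_id = show_id

    def publish_extensions(self, handler):
        if self.show_id is not None:
            handler.startElement("show_id", {})
            handler.characters(str(self.show_id))
            handler.endElement("show_id")

item = ShowRSSItem(
    show_id = 1234,  # placeholder: value taken from the source feed's <show_id>
    title = "Episode title",
    link = "http://url.com/episode",
    description = "Episode summary",
    guid = PyRSS2Gen.Guid("http://url.com/episode"),
    pubDate = datetime.datetime(2009, 11, 24),
    enclosure = PyRSS2Gen.Enclosure("http://url.com/file.mp3", 10, "audio/mpeg"),
)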
How can I generate RSS with arbitrary tags and enclosures
Right now, I'm using PyRSS2Gen to generate an RSS document (resyndicating a modification of an rss feed that was parsed with feedparser), but I can't figure out how to add uncommon tags to the item. items = [ PyRSS2Gen.RSSItem( title = x.title, link = x.link, description = x.summary, guid = x.link, pubDate = datetime( x.modified_parsed[0], x.modified_parsed[1], x.modified_parsed[2], x.modified_parsed[3], x.modified_parsed[4], x.modified_parsed[5]) ) for x in parsed_feed.entries] rss = PyRSS2Gen.RSS2( title = "Resyndicator", link = parsed_feed['feed'].get("link"), description = "etc", language = parsed_feed['feed'].get("language"), copyright = parsed_feed['feed'].get("copyright"), managingEditor = parsed_feed['feed'].get("managingEditor"), webMaster = parsed_feed['feed'].get("webMaster"), pubDate = parsed_feed['feed'].get("pubDate"), lastBuildDate = parsed_feed['feed'].get("lastBuildDate"), categories = parsed_feed['feed'].get("categories"), generator = parsed_feed['feed'].get("generator"), docs = parsed_feed['feed'].get("docs"), items = items ) The original feed has a <show_id></show_id> tag, as well as an enclosure <enclosure url="http://url.com" length="10" type="" /> and I need to include that in the generated version as well.
[ "The documentation explains:\n\nTo add your\n own attributes (needed for namespace\n declarations), redefine\n element_attrs or rss_attrs in your\n subclass [of RSS and RSSData].\n\nThat's the whole point about subclassing, isn't it? :)\n", "There are two ways. First, you could change the code directly. Edit 'publish' and put whatever you want wherever you want it.\nBut if you want to take the suggestion from the documentation, derive from RSS2 and implement your own publish_extensions, like this:\nclass YourRSS2Item(PyRSS2Gen.RSSItem):\n def publish_extensions(self, handler):\n handler.startElement(\"show_id\")\n handler.endElement(\"show_id\")\n\n'handler' follows the SAX2 API (start_element, characters, end_element).\nAnd as for making an enclosure, use the Enclosure class, as in\nitem = RSSItem( .... enclosure = Enclosure(\"http://url.com\", 10, \"\"), ...)\n\n" ]
[ 1, 1 ]
[]
[]
[ "feedparser", "python", "rss" ]
stackoverflow_0001766823_feedparser_python_rss.txt
Q: Python: Analyzing complex statements during execution I am wondering if there is any way to get some meta information about the interpretation of a python statement during execution. Let's assume this is a complex statement of some single statements joined with or (A, B, ... are boolean functions) if A or B and ((C or D and E) or F) or G and H: and I want to know which part of the statement is causing the statement to evaluate to True so I can do something with this knowledge. In the example, there would be 3 possible candidates: A B and ((C or D and E) or F) G and H And in the second case, I would like to know if it was (C or D and E) or F that evaluated to True and so on... Is there any way without parsing the statement? Can I hook up to the interpreter in some way or utilize the inspect module in a way that I haven't found yet? I do not want to debug, it's really about knowing which part of this or-chain triggered the statement at runtime. Edit - further information: The type of application that I want to use this in is a categorizing algorithm that inputs an object and outputs a certain category for this object, based on its attributes. I need to know which attributes were decisive for the category. As you might guess, the complex statement from above comes from the categorization algorithm. The code for this algorithm is generated from a formal pseudo-code and contains about 3,000 nested if-elif-statements that determine the category in a hierarchical way like if obj.attr1 < 23 and (is_something(obj.attr10) or eats_spam_for_breakfast(obj)): return 'Category1' elif obj.attr3 == 'Welcome Home' or count_something(obj) >= 2: return 'Category2a' elif ... So aside from the category itself, I need to flag the attributes that were decisive for that category, so if I'd delete all other attributes, the object would still be assigned to the same category (due to the ors within the statements). The statements can be really long, up to 1,000 chars, and deeply nested. Every object can have up to 200 attributes. Thanks a lot for your help! Edit 2: Haven't found time in the last two weeks. Thanks for providing this solution, it works! A: Could you recode your original code: if A or B and ((C or D and E) or F) or G and H: as, say: e = Evaluator() if e('A or B and ((C or D and E) or F) or G and H'): ...? If so, there's hope!-). The Evaluator class, upon __call__, would compile its string argument, then eval the result with (an empty real dict for globals, and) a pseudo-dict for locals that actually delegates the value lookups to the locals and globals of its caller (just takes a little black magic, but, not too bad;-) and also takes note of what names it's looked up. Given Python's and and or's short-circuiting behavior, you can infer from the actual set of names that were actually looked up, which one determined the truth value of the expression (or each subexpression) -- in an X or Y or Z, the first true value (if any) will be the last one looked up, and in a X and Y and Z, the first false one will. Would this help? 
If yes, and if you need help with the coding, I'll be happy to expand on this, but first I'd like some confirmation that getting the code for Evaluator would indeed be solving whatever problem it is that you're trying to address!-) Edit: so here's coding implementing Evaluator and exemplifying its use: import inspect import random class TracingDict(object): def __init__(self, loc, glob): self.loc = loc self.glob = glob self.vars = [] def __getitem__(self, name): try: v = self.loc[name] except KeyError: v = self.glob[name] self.vars.append((name, v)) return v class Evaluator(object): def __init__(self): f = inspect.currentframe() f = inspect.getouterframes(f)[1][0] self.d = TracingDict(f.f_locals, f.f_globals) def __call__(self, expr): return eval(expr, {}, self.d) def f(A, B, C, D, E): e = Evaluator() res = e('A or B and ((C or D and E) or F) or G and H') print 'R=%r from %s' % (res, e.d.vars) for x in range(20): A, B, C, D, E, F, G, H = [random.randrange(2) for x in range(8)] f(A, B, C, D, E) and here's output from a sample run: R=1 from [('A', 1)] R=1 from [('A', 1)] R=1 from [('A', 1)] R=1 from [('A', 0), ('B', 1), ('C', 1)] R=1 from [('A', 1)] R=1 from [('A', 0), ('B', 0), ('G', 1), ('H', 1)] R=1 from [('A', 1)] R=1 from [('A', 1)] R=1 from [('A', 0), ('B', 1), ('C', 1)] R=1 from [('A', 1)] R=1 from [('A', 0), ('B', 1), ('C', 1)] R=1 from [('A', 1)] R=1 from [('A', 1)] R=1 from [('A', 1)] R=0 from [('A', 0), ('B', 0), ('G', 0)] R=1 from [('A', 1)] R=1 from [('A', 1)] R=1 from [('A', 1)] R=0 from [('A', 0), ('B', 0), ('G', 0)] R=1 from [('A', 0), ('B', 1), ('C', 1)] You can see that often (about 50% of the time) A is true, which short-circuits everything. When A is false, B evaluates -- when B is also false, then G is next, when B is true, then C. A: As far as I remember, Python does not return True or False per se: Important exception: the Boolean operations or and and always return one of their operands. The Python Standard Library - Truth Value Testing Therefore, following is valid: A = 1 B = 0 result = B or A # result == 1 A: """I do not want to debug, it's really about knowing which part of this or-chain triggered the statement at runtime.""": you might need to explain what is the difference between "debug" and "knowing which part". Do you mean that you the observer need to be told at runtime what is going on (why??) so that you can do something different, or do you mean that the code needs to "know" so that it can do something different? In any case, assuming that your A, B, C etc don't have side effects, why can't you simply split up your or-chain and test the components: part1 = A part2 = B and ((C or D and E) or F) part3 = G and H whodunit = "1" if part1 else "2" if part2 else "3" if part3 else "nobody" print "Perp is", whodunit if part1 or part2 or part3: do_something() ?? Update: """The difference between debug and 'knowing which part' is that I need to assign a flag for the variables that were used in the statement that first evaluated to True (at runtime)""" So you are saying that given the condition "A or B", that if A is True and B is True, A gets all the glory (or all the blame)? I'm finding it very hard to believe that categorisation software such as you describe is based on "or" having a short-circuit evaluation. Are you sure that there's an intent behind the code being "A or B" and not "B or A"? Could the order be random, or influenced by the order that the variables where originally input? 
In any case, generating Python code automatically and then reverse-engineering it appears to be a long way around the problem. Why not just generate code with the part1 = yadda; part2 = blah; etc nature? A: The Python interpreter doesn't give you a way to introspect the evaluation of an expression at runtime. The sys.settrace() function lets you register a callback that is invoked for every line of source code, but that's too coarse-grained for what you want to do. That said, I've experimented with a crazy hack to have the function invoked for every bytecode executed: Python bytecode tracing. But even then, I don't know how to find the execution state, for example, the values on the interpreter stack. I think the only way to get at what you want is to modify the code algorithmically. You could either transform your source (though you said you didn't want to parse the code), or you could transform the compiled bytecode. Neither is a simple undertaking, and I'm sure there are a dozen difficult hurdles to overcome if you try it. Sorry to be discouraging... BTW: What application do you have for this sort of technology? A: I would just put something like this before the big statement (assuming the statement is in a class): for i in ("A","B","C","D","E","F","G","H"): print i,self.__dict__[i]
Python: Analyzing complex statements during execution
I am wondering if there is any way to get some meta information about the interpretation of a python statement during execution. Let's assume this is a complex statement of some single statements joined with or (A, B, ... are boolean functions) if A or B and ((C or D and E) or F) or G and H: and I want to know which part of the statement is causing the statement to evaluate to True so I can do something with this knowledge. In the example, there would be 3 possible candidates: A B and ((C or D and E) or F) G and H And in the second case, I would like to know if it was (C or D and E) or F that evaluated to True and so on... Is there any way without parsing the statement? Can I hook up to the interpreter in some way or utilize the inspect module in a way that I haven't found yet? I do not want to debug, it's really about knowing which part of this or-chain triggered the statement at runtime. Edit - further information: The type of application that I want to use this in is a categorizing algorithm that inputs an object and outputs a certain category for this object, based on its attributes. I need to know which attributes were decisive for the category. As you might guess, the complex statement from above comes from the categorization algorithm. The code for this algorithm is generated from a formal pseudo-code and contains about 3,000 nested if-elif-statements that determine the category in a hierarchical way like if obj.attr1 < 23 and (is_something(obj.attr10) or eats_spam_for_breakfast(obj)): return 'Category1' elif obj.attr3 == 'Welcome Home' or count_something(obj) >= 2: return 'Category2a' elif ... So aside from the category itself, I need to flag the attributes that were decisive for that category, so if I'd delete all other attributes, the object would still be assigned to the same category (due to the ors within the statements). The statements can be really long, up to 1,000 chars, and deeply nested. Every object can have up to 200 attributes. Thanks a lot for your help! Edit 2: Haven't found time in the last two weeks. Thanks for providing this solution, it works!
[ "Could you recode your original code:\nif A or B and ((C or D and E) or F) or G and H:\n\nas, say:\ne = Evaluator()\nif e('A or B and ((C or D and E) or F) or G and H'):\n\n...? If so, there's hope!-). The Evaluator class, upon __call__, would compile its string argument, then eval the result with (an empty real dict for globals, and) a pseudo-dict for locals that actually delegates the value lookups to the locals and globals of its caller (just takes a little black magic, but, not too bad;-) and also takes note of what names it's looked up. Given Python's and and or's short-circuiting behavior, you can infer from the actual set of names that were actually looked up, which one determined the truth value of the expression (or each subexpression) -- in an X or Y or Z, the first true value (if any) will be the last one looked up, and in a X and Y and Z, the first false one will.\nWould this help? If yes, and if you need help with the coding, I'll be happy to expand on this, but first I'd like some confirmation that getting the code for Evaluator would indeed be solving whatever problem it is that you're trying to address!-)\nEdit: so here's coding implementing Evaluator and exemplifying its use:\nimport inspect\nimport random\n\nclass TracingDict(object):\n\n def __init__(self, loc, glob):\n self.loc = loc\n self.glob = glob\n self.vars = []\n\n def __getitem__(self, name):\n try: v = self.loc[name]\n except KeyError: v = self.glob[name]\n self.vars.append((name, v))\n return v\n\n\nclass Evaluator(object):\n\n def __init__(self):\n f = inspect.currentframe()\n f = inspect.getouterframes(f)[1][0]\n self.d = TracingDict(f.f_locals, f.f_globals)\n\n def __call__(self, expr):\n return eval(expr, {}, self.d)\n\n\ndef f(A, B, C, D, E):\n e = Evaluator()\n res = e('A or B and ((C or D and E) or F) or G and H')\n print 'R=%r from %s' % (res, e.d.vars)\n\nfor x in range(20):\n A, B, C, D, E, F, G, H = [random.randrange(2) for x in range(8)]\n f(A, B, C, D, E)\n\nand here's output from a sample run:\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 0), ('B', 1), ('C', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 0), ('B', 0), ('G', 1), ('H', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 0), ('B', 1), ('C', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 0), ('B', 1), ('C', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=0 from [('A', 0), ('B', 0), ('G', 0)]\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=1 from [('A', 1)]\nR=0 from [('A', 0), ('B', 0), ('G', 0)]\nR=1 from [('A', 0), ('B', 1), ('C', 1)]\n\nYou can see that often (about 50% of the time) A is true, which short-circuits everything. When A is false, B evaluates -- when B is also false, then G is next, when B is true, then C.\n", "As far as I remember, Python does not return True or False per se:\n\nImportant exception: the Boolean\n operations or and and always return\n one of their operands.\n\nThe Python Standard Library - Truth Value Testing\nTherefore, following is valid:\nA = 1\nB = 0\nresult = B or A # result == 1\n\n", "\"\"\"I do not want to debug, it's really about knowing which part of this or-chain triggered the statement at runtime.\"\"\": you might need to explain what is the difference between \"debug\" and \"knowing which part\".\nDo you mean that you the observer need to be told at runtime what is going on (why??) 
so that you can do something different, or do you mean that the code needs to \"know\" so that it can do something different?\nIn any case, assuming that your A, B, C etc don't have side effects, why can't you simply split up your or-chain and test the components:\npart1 = A\npart2 = B and ((C or D and E) or F)\npart3 = G and H\nwhodunit = \"1\" if part1 else \"2\" if part2 else \"3\" if part3 else \"nobody\"\nprint \"Perp is\", whodunit\nif part1 or part2 or part3:\n do_something()\n\n??\nUpdate:\n\"\"\"The difference between debug and 'knowing which part' is that I need to assign a flag for the variables that were used in the statement that first evaluated to True (at runtime)\"\"\"\nSo you are saying that given the condition \"A or B\", that if A is True and B is True, A gets all the glory (or all the blame)? I'm finding it very hard to believe that categorisation software such as you describe is based on \"or\" having a short-circuit evaluation. Are you sure that there's an intent behind the code being \"A or B\" and not \"B or A\"? Could the order be random, or influenced by the order that the variables where originally input?\nIn any case, generating Python code automatically and then reverse-engineering it appears to be a long way around the problem. Why not just generate code with the part1 = yadda; part2 = blah; etc nature?\n", "The Python interpreter doesn't give you a way to introspect the evaluation of an expression at runtime. The sys.settrace() function lets you register a callback that is invoked for every line of source code, but that's too coarse-grained for what you want to do.\nThat said, I've experimented with a crazy hack to have the function invoked for every bytecode executed: Python bytecode tracing.\nBut even then, I don't know how to find the execution state, for example, the values on the interpreter stack.\nI think the only way to get at what you want is to modify the code algorithmically. You could either transform your source (though you said you didn't want to parse the code), or you could transform the compiled bytecode. Neither is a simple undertaking, and I'm sure there are a dozen difficult hurdles to overcome if you try it.\nSorry to be discouraging...\nBTW: What application do you have for this sort of technology?\n", "I would just put something like this before the big statement (assuming the statement is in a class):\nfor i in (\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\"):\n print i,self.__dict__[i]\n\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "interpreter", "logic", "python" ]
stackoverflow_0001793660_interpreter_logic_python.txt