Q:
Objective reasons for using Python or Ruby for a new REST Web API
So this thread is definitely NOT a thread for why Python is better than Ruby or the inverse. Instead, this thread is for objective criticism on why you would pick one over the other to write a RESTful web API that's going to be used by many different clients (mobile, web browsers, tablets, etc.).
Again, don't compare Ruby on Rails vs Django. This isn't a web app that's dependent on high level frameworks such as RoR or Django. I'd just like to hear why someone might choose one over the other to write a RESTful web API that they had to start tomorrow, completely from scratch and reasons they might go from one to another.
For me, syntax and language features are completely superfluous. They both offer an abundance of features, and certainly both can achieve the exact same end goals. I think if someone flips a coin, it's a good enough reason to use one over the other. I'd just love to see what some of you web service experts who are very passionate about your work would say about why you'd use one over the other, in a very objective format.
A:
I would say the important thing is that regardless of which you choose, make sure that your choice does not leak through your REST API. It should not matter to the client of your API which you chose.
A:
I know Ruby, don't know python... you can see which way I'm leaning toward, right?
A:
Choose the one you're most familiar with and most likely to get things done with the fastest.
A:
Yeah, flip a coin. The truth is that you're going to find minimalist frameworks in either language. Heroku is a pretty strong reason to say Ruby, but there may be similar hosts for Python. Heroku makes it stupid easy to deploy your API into the cloud, whether it's Rails or some other Ruby project that uses Rack; WSGI doesn't give you this option.
As far as the actual implementation goes, though, I'm guessing you'll find that they're both completely competent languages and both a joy to program in.
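To give a sense of how small a framework-free endpoint can be, here is a minimal sketch in Python using only the standard library's wsgiref (the route and response shape are illustrative, not from the thread):

```python
import json

def app(environ, start_response):
    # Minimal WSGI callable returning a JSON body -- no framework involved.
    body = json.dumps({"status": "ok",
                       "path": environ.get("PATH_INFO", "/")}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve it: wsgiref.simple_server.make_server("", 8000, app).serve_forever()
```

Rack apps in Ruby look much the same: a single callable taking an environment and returning status, headers, and body.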
A:
I think they are fairly evenly matched in features. I prefer Python, but I have been using it for over a decade so I freely admit that what follows is totally biased.
IMHO Python is more mature - there are more libraries for it (although Ruby may be catching up), and the included libraries I think are better designed. The language evolution process is more mature too, with each proposed feature discussed in public via the PEPs before the decision is made to include them in a release. I get the impression that development of the Ruby language is much more ad-hoc.
Python is widely used in a lot of areas apart from web development - scientific computing, CGI rendering pipelines, distributed computing, Linux GUI tools etc. Ruby got very little attention before Rails came along, so I get the impression that most Ruby work is focused on web development. That may not be a problem if that is all you want to do with the language, but it does mean that Python has a more diverse user base and a more diverse set of libraries.
Python is faster too.
A:
Ruby + Sinatra
Very easy to use with/as Rack middleware - someone's already mentioned Heroku.
A:
Either will do a great job, and you'll gain in other ways from learning something new. Why not spend a couple of days with each? See how far you can get with a simple subset of the problem, then see how you feel. For bonus points, report back here and answer your own question!
Q:
Python raw_input into dictionary declared in a class
I am incredibly new to Python and I really need to be able to work this out. I want to be asking the user via raw_input what the module and grade is and then putting this into the dictionary already defined in the Student class as grades. I've got no idea what to do! Thanks in advance!
students = [] # List containing all student objects (the database)

def printMenu():
    print "------Main Menu------\n" "1. Add a student\n" "2. Print all students\n" "3. Remove a student\n" "---------------------\n"

class Student:
    firstName = ""
    lastName = ""
    age = 0
    studentID = ""
    degree = ""
    grades = {"Module Name":"","Grade":""}
    def setFirstName(self, firstName):
        self.firstName = firstName
    def getFirstName(self):
        return self.firstName
    def setLastName(self, lastName):
        self.lastName = lastName
    def getLastName(self):
        return self.lastName
    def setDegree(self, degree):
        self.degree = degree
    def getDegree(self):
        return self.degree
    def setGrades(self, grades):
        self.grades = grades
    def getGrades(self):
        return self.grades
    def setStudentID(self, studentid):
        self.studentid = studentid
    def getStudentID(self):
        return self.studentid
    def setAge(self, age):
        self.age = age
    def getAge(self):
        return self.age

def addStudent():
    count = 0
    firstName = raw_input("Please enter the student's first name: ")
    lastName = raw_input("Please enter the student's last name: ")
    degree = raw_input("Please enter the student's degree: ")
    while count != -1:
        student.grades = raw_input("Please enter the student's module name: ")
        #student.grades["Grade"] = raw_input("Please enter the grade for %: " % grades)
    studentid = raw_input("Please enter the student's ID: ")
    age = raw_input("Please enter the student's age: ")
    student = Student() # Create a new student object
    student.setFirstName(firstName) # Set this student's first name
    student.setLastName(lastName)
    student.setDegree(degree)
    student.setGrades(grades)
    student.setStudentID(studentid)
    student.setAge(age)
    students.append(student) # Add this student to the database
A:
A few things:
Move the initialization of your class attributes into an __init__ method:
Get rid of all the getters and setters as Jeffrey says.
Use a dict that has module names as keys and grades as values:
Some code snippets:
def __init__(self, firstName, lastName, age, studentID, degree):
    self.firstName = firstName
    self.lastName = lastName
    ...
    self.grades = {}

and:

while True:
    module_name = raw_input("Please enter the student's module name: ")
    if not module_name:
        break
    grade = raw_input("Please enter the grade for %s: " % module_name)
    student.grades[module_name] = grade
|
Q:
Codec Errors in Python
Does anyone know the name of a codec that can translate any random assortment of bytes into a string? I have been getting the following error after encoding, encrypting, and decoding a string in tkinter.Text.
UnicodeDecodeError: 'utf8' codec can't decode
byte 0x99 in position 151: unexpected code byte
The code used to generate the error follows below. The UTF8 codec listed at the top has problems translating some bytes back into a string. What I am looking for is an answer that solves the problem, not direction.
from tkinter import *
import traceback
from tkinter.scrolledtext import ScrolledText
CODEC = 'utf8'
################################################################################
class MarkovDemo:
def __init__(self, master):
self.prompt_size = Label(master, anchor=W, text='Encode Word Size')
self.prompt_size.pack(side=TOP, fill=X)
self.size_entry = Entry(master)
self.size_entry.insert(0, '8')
self.size_entry.pack(fill=X)
self.prompt_plain = Label(master, anchor=W, text='Plaintext Characters')
self.prompt_plain.pack(side=TOP, fill=X)
self.plain_entry = Entry(master)
self.plain_entry.insert(0, '""')
self.plain_entry.pack(fill=X)
self.showframe = Frame(master)
self.showframe.pack(fill=X, anchor=W)
self.showvar = StringVar(master)
self.showvar.set("encode")
self.showfirstradio = Radiobutton(self.showframe,
text="Encode Plaintext",
variable=self.showvar,
value="encode",
command=self.reevaluate)
self.showfirstradio.pack(side=LEFT)
self.showallradio = Radiobutton(self.showframe,
text="Decode Cyphertext",
variable=self.showvar,
value="decode",
command=self.reevaluate)
self.showallradio.pack(side=LEFT)
self.inputbox = ScrolledText(master, width=60, height=10, wrap=WORD)
self.inputbox.pack(fill=BOTH, expand=1)
self.dynamic_var = IntVar()
self.dynamic_box = Checkbutton(master, variable=self.dynamic_var,
text='Dynamic Evaluation',
offvalue=False, onvalue=True,
command=self.reevaluate)
self.dynamic_box.pack()
self.output = Label(master, anchor=W, text="This is your output:")
self.output.pack(fill=X)
self.outbox = ScrolledText(master, width=60, height=10, wrap=WORD)
self.outbox.pack(fill=BOTH, expand=1)
self.inputbox.bind('<Key>', self.reevaluate)
def select_all(event=None):
event.widget.tag_add(SEL, 1.0, 'end-1c')
event.widget.mark_set(INSERT, 1.0)
event.widget.see(INSERT)
return 'break'
self.inputbox.bind('<Control-Key-a>', select_all)
self.outbox.bind('<Control-Key-a>', select_all)
self.inputbox.bind('<Control-Key-/>', lambda event: 'break')
self.outbox.bind('<Control-Key-/>', lambda event: 'break')
self.outbox.config(state=DISABLED)
def reevaluate(self, event=None):
if event is not None:
if event.char == '':
return
if self.dynamic_var.get():
text = self.inputbox.get(1.0, END)[:-1]
if len(text) < 10:
return
text = text.replace('\n \n', '\n\n')
mode = self.showvar.get()
assert mode in ('decode', 'encode'), 'Bad mode!'
if mode == 'encode':
# Encode Plaintext
try:
# Evaluate the plaintext characters
plain = self.plain_entry.get()
if plain:
PC = eval(self.plain_entry.get())
else:
PC = ''
self.plain_entry.delete(0, END)
self.plain_entry.insert(0, '""')
# Evaluate the word size
size = self.size_entry.get()
if size:
XD = int(size)
while grid_size(text, XD, PC) > 1 << 20:
XD -= 1
else:
XD = 0
grid = 0
while grid <= 1 << 20:
grid = grid_size(text, XD, PC)
XD += 1
XD -= 1
# Correct the size and encode
self.size_entry.delete(0, END)
self.size_entry.insert(0, str(XD))
cyphertext, key, prime = encrypt_str(text, XD, PC)
except:
traceback.print_exc()
else:
buffer = ''
for block in key:
buffer += repr(block)[2:-1] + '\n'
buffer += repr(prime)[2:-1] + '\n\n' + cyphertext
self.outbox.config(state=NORMAL)
self.outbox.delete(1.0, END)
self.outbox.insert(END, buffer)
self.outbox.config(state=DISABLED)
else:
# Decode Cyphertext
try:
header, cypher = text.split('\n\n', 1)
lines = header.split('\n')
for index, item in enumerate(lines):
try:
lines[index] = eval('b"' + item + '"')
except:
lines[index] = eval("b'" + item + "'")
plain = decrypt_str(cypher, tuple(lines[:-1]), lines[-1])
except:
traceback.print_exc()
else:
self.outbox.config(state=NORMAL)
self.outbox.delete(1.0, END)
self.outbox.insert(END, plain)
self.outbox.config(state=DISABLED)
else:
text = self.inputbox.get(1.0, END)[:-1]
text = text.replace('\n \n', '\n\n')
mode = self.showvar.get()
assert mode in ('decode', 'encode'), 'Bad mode!'
if mode == 'encode':
try:
XD = int(self.size_entry.get())
PC = eval(self.plain_entry.get())
size = grid_size(text, XD, PC)
assert size
except:
pass
else:
buffer = 'Grid size will be:\n' + convert(size)
self.outbox.config(state=NORMAL)
self.outbox.delete(1.0, END)
self.outbox.insert(END, buffer)
self.outbox.config(state=DISABLED)
################################################################################
import random
CRYPT = random.SystemRandom()
################################################################################
# This section includes functions that
# can test the required key and bootstrap.
# sudoku_key
# - should be a proper "markov" key
def _check_sudoku_key(sudoku_key):
# Ensure key is a tuple with more than one item.
assert isinstance(sudoku_key, tuple), '"sudoku_key" must be a tuple'
assert len(sudoku_key) > 1, '"sudoku_key" must have more than one item'
# Test first item.
item = sudoku_key[0]
assert isinstance(item, bytes), 'first item must be an instance of bytes'
assert len(item) > 1, 'first item must have more than one byte'
assert len(item) == len(set(item)), 'first item must have unique bytes'
# Test the rest of the key.
for obj in sudoku_key[1:]:
assert isinstance(obj, bytes), 'remaining items must be of bytes'
assert len(obj) == len(item), 'all items must have the same length'
assert len(obj) == len(set(obj)), \
'remaining items must have unique bytes'
assert len(set(item)) == len(set(item).union(set(obj))), \
'all items must have the same bytes'
# boot_strap
# - should be a proper "markov" bootstrap
# - we will call this a "primer"
# sudoku_key
# - should be a proper "markov" key
def _check_boot_strap(boot_strap, sudoku_key):
assert isinstance(boot_strap, bytes), '"boot_strap" must be a bytes object'
assert len(boot_strap) == len(sudoku_key) - 1, \
'"boot_strap" length must be one less than "sudoku_key" length'
item = sudoku_key[0]
assert len(set(item)) == len(set(item).union(set(boot_strap))), \
'"boot_strap" may only have bytes found in "sudoku_key"'
################################################################################
# This section includes functions capable
# of creating the required key and bootstrap.
# bytes_set should be any collection of bytes
# - it should be possible to create a set from them
# - these should be the bytes on which encryption will follow
# word_size
# - this will be the size of the "markov" chains the program uses
# - this will be the number of dimensions the "grid" will have
# - one less character will make up bootstrap (or primer)
def make_sudoku_key(bytes_set, word_size):
key_set = set(bytes_set)
blocks = []
for block in range(word_size):
blocks.append(bytes(CRYPT.sample(key_set, len(key_set))))
return tuple(blocks)
# sudoku_key
# - should be a proper "markov" key
def make_boot_strap(sudoku_key):
block = sudoku_key[0]
return bytes(CRYPT.choice(block) for byte in range(len(sudoku_key) - 1))
################################################################################
# This section contains functions needed to
# create the multidimensional encryption grid.
# sudoku_key
# - should be a proper "markov" key
def make_grid(sudoku_key):
grid = expand_array(sudoku_key[0], sudoku_key[1])
for block in sudoku_key[2:]:
grid = expand_array(grid, block)
return grid
# grid
# - should be an X dimensional grid from make_grid
# block_size
# - comes from length of one block in a sudoku_key
def make_decode_grid(grid, block_size):
cache = []
for part in range(0, len(grid), block_size):
old = grid[part:part+block_size]
new = [None] * block_size
key = sorted(old)
for index, byte in enumerate(old):
new[key.index(byte)] = key[index]
cache.append(bytes(new))
return b''.join(cache)
# grid
# - should be an X dimensional grid from make_grid
# block
# - should be a block from a sudoku_key
# - should have same unique bytes as the expanding grid
def expand_array(grid, block):
cache = []
grid_size = len(grid)
block_size = len(block)
for byte in block:
index = grid.index(bytes([byte]))
for part in range(0, grid_size, block_size):
cache.append(grid[part+index:part+block_size])
cache.append(grid[part:part+index])
return b''.join(cache)
################################################################################
# The first three functions can be used to check an encryption
# grid. The eval_index function is used to evaluate a grid cell.
# grid
# - grid object to be checked
# - grid should come from the make_grid function
# - must have unique bytes along each axis
# block_size
# - comes from length of one block in a sudoku_key
# - this is the length of one edge along the grid
# - each axis is exactly this many units long
# word_size
# - this is the number of blocks in a sudoku_key
# - this is the number of dimensions in a grid
# - this is the length needed to create a markov chain
def check_grid(grid, block_size, word_size):
build_index(grid, block_size, word_size, [])
# create an index to access the grid with
def build_index(grid, block_size, word_size, index):
for number in range(block_size):
index.append(number)
if len(index) == word_size:
check_cell(grid, block_size, word_size, index)
else:
build_index(grid, block_size, word_size, index)
index.pop()
# compares the contents of a cell along each grid axis
def check_cell(grid, block_size, word_size, index):
master = eval_index(grid, block_size, index)
for axis in range(word_size):
for value in range(block_size):
if index[axis] != value:
copy = list(index)
copy[axis] = value
slave = eval_index(grid, block_size, copy)
assert slave != master, 'Cell not unique along axis!'
# grid
# - grid object to be accessed and evaluated
# - grid should come from the make_grid function
# - must have unique bytes along each axis
# block_size
# - comes from length of one block in a sudoku_key
# - this is the length of one edge along the grid
# - each axis is exactly this many units long
# index
# - list of coordinates to access the grid
# - should be of length word_size
# - should be of length equal to number of dimensions in the grid
def eval_index(grid, block_size, index):
offset = 0
for power, value in enumerate(reversed(index)):
offset += value * block_size ** power
return grid[int(offset)]
################################################################################
# The following functions act as a suite that can ultimately
# encrypt strings, though other functions can be built from them.
# bytes_obj
# - the bytes to encode
# byte_map
# - byte transform map for inserting into the index
# grid
# - X dimensional grid used to evaluate markov chains
# index
# - list that starts the index for accessing grid (primer)
# - it should be of length word_size - 1
# block_size
# - length of each edge in a grid
def _encode(bytes_obj, byte_map, grid, index, block_size):
cache = bytes()
index = [0] + index
for byte in bytes_obj:
if byte in byte_map:
index.append(byte_map[byte])
index = index[1:]
cache += bytes([eval_index(grid, block_size, index)])
else:
cache += bytes([byte])
return cache, index[1:]
# bytes_obj
# - the bytes to encode
# sudoku_key
# - should be a proper "markov" key
# - this key will be automatically checked for correctness
# boot_strap
# - should be a proper "markov" bootstrap
def encrypt(bytes_obj, sudoku_key, boot_strap):
_check_sudoku_key(sudoku_key)
_check_boot_strap(boot_strap, sudoku_key)
# make byte_map
array = sorted(sudoku_key[0])
byte_map = dict((byte, value) for value, byte in enumerate(array))
# create two more arguments for encode
grid = make_grid(sudoku_key)
index = list(map(byte_map.__getitem__, boot_strap))
# run the actual encoding algorithm and create reversed map
code, index = _encode(bytes_obj, byte_map, grid, index, len(sudoku_key[0]))
rev_map = dict(reversed(item) for item in byte_map.items())
# fix the boot_strap and return the results
boot_strap = bytes(rev_map[number] for number in index)
return code, boot_strap
# string
# - should be the string that you want encoded
# word_size
# - length you want the markov chains to be of
# plain_chars
# - characters that you do not want to encrypt
def encrypt_str(string, word_size, plain_chars=''):
byte_obj = string.encode(CODEC)
encode_on = set(byte_obj).difference(set(plain_chars.encode()))
sudoku_key = make_sudoku_key(encode_on, word_size)
boot_strap = make_boot_strap(sudoku_key)
cyphertext = encrypt(byte_obj, sudoku_key, boot_strap)[0]
# return encrypted string, key, and original bootstrap
return cyphertext.decode(CODEC), sudoku_key, boot_strap
def grid_size(string, word_size, plain_chars):
encode_on = set(string.encode()).difference(set(plain_chars.encode()))
return len(encode_on) ** word_size
################################################################################
# The following functions act as a suite that can ultimately
# decrypt strings, though other functions can be built from them.
# bytes_obj
# - the bytes to decode
# byte_map
# - byte transform map for inserting into the index
# grid
# - X dimensional grid used to evaluate markov chains
# index
# - list that starts the index for accessing grid (primer)
# - it should be of length word_size - 1
# block_size
# - length of each edge in a grid
def _decode(bytes_obj, byte_map, grid, index, block_size):
cache = bytes()
index = [0] + index
for byte in bytes_obj:
if byte in byte_map:
index.append(byte_map[byte])
index = index[1:]
decoded = eval_index(grid, block_size, index)
index[-1] = byte_map[decoded]
cache += bytes([decoded])
else:
cache += bytes([byte])
return cache, index[1:]
# bytes_obj
# - the bytes to decode
# sudoku_key
# - should be a proper "markov" key
# - this key will be automatically checked for correctness
# boot_strap
# - should be a proper "markov" bootstrap
def decrypt(bytes_obj, sudoku_key, boot_strap):
_check_sudoku_key(sudoku_key)
_check_boot_strap(boot_strap, sudoku_key)
# make byte_map
array = sorted(sudoku_key[0])
byte_map = dict((byte, value) for value, byte in enumerate(array))
# create two more arguments for decode
grid = make_grid(sudoku_key)
grid = make_decode_grid(grid, len(sudoku_key[0]))
index = list(map(byte_map.__getitem__, boot_strap))
# run the actual decoding algorithm and create reversed map
code, index = _decode(bytes_obj, byte_map, grid, index, len(sudoku_key[0]))
rev_map = dict(reversed(item) for item in byte_map.items())
# fix the boot_strap and return the results
boot_strap = bytes(rev_map[number] for number in index)
return code, boot_strap
# string
# - should be the string that you want decoded
# word_size
# - length you want the markov chains to be of
# plain_chars
# - characters that you do not want to encrypt
def decrypt_str(string, sudoku_key, boot_strap):
byte_obj = string.encode(CODEC)
plaintext = decrypt(byte_obj, sudoku_key, boot_strap)[0]
# return the decrypted string
return plaintext.decode(CODEC)
################################################################################
def convert(number):
"Convert bytes into human-readable representation."
assert 0 < number < 1 << 110, 'Number Out Of Range'
ordered = reversed(tuple(format_bytes(partition_number(number, 1 << 10))))
cleaned = ', '.join(item for item in ordered if item[0] != '0')
return cleaned
################################################################################
def partition_number(number, base):
"Continually divide number by base until zero."
div, mod = divmod(number, base)
yield mod
while div:
div, mod = divmod(div, base)
yield mod
def format_bytes(parts):
"Format partitioned bytes into human-readable strings."
for power, number in enumerate(parts):
yield '{} {}'.format(number, format_suffix(power, number))
def format_suffix(power, number):
"Compute the suffix for a certain power of bytes."
return (PREFIX[power] + 'byte').capitalize() + ('s' if number != 1 else '')
################################################################################
PREFIX = ' kilo mega giga tera peta exa zetta yotta bronto geop'.split(' ')
################################################################################
if __name__ == '__main__':
root = Tk()
root.title('Markov Demo')
demo = MarkovDemo(root)
root.mainloop()
A:
Strings are by definition a sequence of bytes that only have meaning when interpreted with the knowledge of the encoding. That's one reason why the equivalent of Python 2's string type in Python 3 is the bytes type. As long as you know the encoding of the strings you're working with, I'm not sure you specifically need to recode it just to compress/encrypt it. Details of what you're actually doing might make a difference, though.
A:
Python's decode has error settings. The default is strict which throws an exception.
Wherever you are doing the decoding, you can specify 'ignore' or 'replace' as a setting, and this will take care of your problems.
Please see the codecs documentation.
A:
The Unicode HOWTO in the Python v3.1.1 documentation has a helpful section on Python's Unicode Support that explains strings and bytes.
The String Type
>>> b'\x80abc'.decode("utf-8", "strict")
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0:
unexpected code byte
>>> b'\x80abc'.decode("utf-8", "replace")
'\ufffdabc'
>>> b'\x80abc'.decode("utf-8", "ignore")
'abc'
Converting to Bytes
>>> u = chr(40960) + 'abcd' + chr(1972)
>>> u.encode('utf-8')
b'\xea\x80\x80abcd\xde\xb4'
>>> u.encode('ascii')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeEncodeError: 'ascii' codec can't encode character '\ua000' in
position 0: ordinal not in range(128)
>>> u.encode('ascii', 'ignore')
b'abcd'
>>> u.encode('ascii', 'replace')
b'?abcd?'
>>> u.encode('ascii', 'xmlcharrefreplace')
b'ꀀabcd޴'
One possible solution to the problem listed above involves converting all occurrences of
.encode(CODEC) to .encode(CODEC, 'ignore'). Likewise, all .decode(CODEC) calls become .decode(CODEC, 'ignore').
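As a quick illustration of the 'ignore'/'replace' suggestion (using an invalid byte like the 0x99 from the traceback; the sample bytes are made up):

```python
data = b'caf\x99'  # 0x99 is a bare continuation byte -- invalid UTF-8

try:
    data.decode('utf8')          # errors='strict' is the default
except UnicodeDecodeError:
    pass                         # this is the error the question hits

print(data.decode('utf8', 'ignore'))   # bad byte dropped:   'caf'
print(data.decode('utf8', 'replace'))  # bad byte -> U+FFFD: 'caf\ufffd'
```

Note that 'ignore' silently discards data, which matters here because the cyphertext bytes must round-trip through decode and encode; 'replace' at least makes the loss visible.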
# Encode Plaintext
try:
# Evaluate the plaintext characters
plain = self.plain_entry.get()
if plain:
PC = eval(self.plain_entry.get())
else:
PC = ''
self.plain_entry.delete(0, END)
self.plain_entry.insert(0, '""')
# Evaluate the word size
size = self.size_entry.get()
if size:
XD = int(size)
while grid_size(text, XD, PC) > 1 << 20:
XD -= 1
else:
XD = 0
grid = 0
while grid <= 1 << 20:
grid = grid_size(text, XD, PC)
XD += 1
XD -= 1
# Correct the size and encode
self.size_entry.delete(0, END)
self.size_entry.insert(0, str(XD))
cyphertext, key, prime = encrypt_str(text, XD, PC)
except:
traceback.print_exc()
else:
buffer = ''
for block in key:
buffer += repr(block)[2:-1] + '\n'
buffer += repr(prime)[2:-1] + '\n\n' + cyphertext
self.outbox.config(state=NORMAL)
self.outbox.delete(1.0, END)
self.outbox.insert(END, buffer)
self.outbox.config(state=DISABLED)
else:
# Decode Cyphertext
try:
header, cypher = text.split('\n\n', 1)
lines = header.split('\n')
for index, item in enumerate(lines):
try:
lines[index] = eval('b"' + item + '"')
except:
lines[index] = eval("b'" + item + "'")
plain = decrypt_str(cypher, tuple(lines[:-1]), lines[-1])
except:
traceback.print_exc()
else:
self.outbox.config(state=NORMAL)
self.outbox.delete(1.0, END)
self.outbox.insert(END, plain)
self.outbox.config(state=DISABLED)
else:
text = self.inputbox.get(1.0, END)[:-1]
text = text.replace('\n \n', '\n\n')
mode = self.showvar.get()
assert mode in ('decode', 'encode'), 'Bad mode!'
if mode == 'encode':
try:
XD = int(self.size_entry.get())
PC = eval(self.plain_entry.get())
size = grid_size(text, XD, PC)
assert size
except:
pass
else:
buffer = 'Grid size will be:\n' + convert(size)
self.outbox.config(state=NORMAL)
self.outbox.delete(1.0, END)
self.outbox.insert(END, buffer)
self.outbox.config(state=DISABLED)
################################################################################
import random
CRYPT = random.SystemRandom()
################################################################################
# This section includes functions that
# can test the required key and bootstrap.
# sudoku_key
# - should be a proper "markov" key
def _check_sudoku_key(sudoku_key):
# Ensure key is a tuple with more than one item.
assert isinstance(sudoku_key, tuple), '"sudoku_key" must be a tuple'
assert len(sudoku_key) > 1, '"sudoku_key" must have more than one item'
# Test first item.
item = sudoku_key[0]
assert isinstance(item, bytes), 'first item must be an instance of bytes'
assert len(item) > 1, 'first item must have more than one byte'
assert len(item) == len(set(item)), 'first item must have unique bytes'
# Test the rest of the key.
for obj in sudoku_key[1:]:
assert isinstance(obj, bytes), 'remaining items must be of bytes'
assert len(obj) == len(item), 'all items must have the same length'
assert len(obj) == len(set(obj)), \
'remaining items must have unique bytes'
assert len(set(item)) == len(set(item).union(set(obj))), \
'all items must have the same bytes'
# boot_strap
# - should be a proper "markov" bootstrap
# - we will call this a "primer"
# sudoku_key
# - should be a proper "markov" key
def _check_boot_strap(boot_strap, sudoku_key):
assert isinstance(boot_strap, bytes), '"boot_strap" must be a bytes object'
assert len(boot_strap) == len(sudoku_key) - 1, \
'"boot_strap" length must be one less than "sudoku_key" length'
item = sudoku_key[0]
assert len(set(item)) == len(set(item).union(set(boot_strap))), \
'"boot_strap" may only have bytes found in "sudoku_key"'
################################################################################
# This section includes functions capable
# of creating the required key and bootstrap.
# bytes_set should be any collection of bytes
# - it should be possible to create a set from them
# - these should be the bytes on which encryption will follow
# word_size
# - this will be the size of the "markov" chains the program uses
# - this will be the number of dimensions the "grid" will have
# - one less character will make up bootstrap (or primer)
def make_sudoku_key(bytes_set, word_size):
key_set = set(bytes_set)
blocks = []
for block in range(word_size):
blocks.append(bytes(CRYPT.sample(key_set, len(key_set))))
return tuple(blocks)
# sudoku_key
# - should be a proper "markov" key
def make_boot_strap(sudoku_key):
block = sudoku_key[0]
return bytes(CRYPT.choice(block) for byte in range(len(sudoku_key) - 1))
################################################################################
# This section contains functions needed to
# create the multidimensional encryption grid.
# sudoku_key
# - should be a proper "markov" key
def make_grid(sudoku_key):
grid = expand_array(sudoku_key[0], sudoku_key[1])
for block in sudoku_key[2:]:
grid = expand_array(grid, block)
return grid
# grid
# - should be an X dimensional grid from make_grid
# block_size
# - comes from length of one block in a sudoku_key
def make_decode_grid(grid, block_size):
cache = []
for part in range(0, len(grid), block_size):
old = grid[part:part+block_size]
new = [None] * block_size
key = sorted(old)
for index, byte in enumerate(old):
new[key.index(byte)] = key[index]
cache.append(bytes(new))
return b''.join(cache)
# grid
# - should be an X dimensional grid from make_grid
# block
# - should be a block from a sudoku_key
# - should have same unique bytes as the expanding grid
def expand_array(grid, block):
cache = []
grid_size = len(grid)
block_size = len(block)
for byte in block:
index = grid.index(bytes([byte]))
for part in range(0, grid_size, block_size):
cache.append(grid[part+index:part+block_size])
cache.append(grid[part:part+index])
return b''.join(cache)
################################################################################
# The first three functions can be used to check an encryption
# grid. The eval_index function is used to evaluate a grid cell.
# grid
# - grid object to be checked
# - grid should come from the make_grid function
# - must have unique bytes along each axis
# block_size
# - comes from length of one block in a sudoku_key
# - this is the length of one edge along the grid
# - each axis is exactly this many units long
# word_size
# - this is the number of blocks in a sudoku_key
# - this is the number of dimensions in a grid
# - this is the length needed to create a markov chain
def check_grid(grid, block_size, word_size):
build_index(grid, block_size, word_size, [])
# create an index to access the grid with
def build_index(grid, block_size, word_size, index):
for number in range(block_size):
index.append(number)
if len(index) == word_size:
check_cell(grid, block_size, word_size, index)
else:
build_index(grid, block_size, word_size, index)
index.pop()
# compares the contents of a cell along each grid axis
def check_cell(grid, block_size, word_size, index):
master = eval_index(grid, block_size, index)
for axis in range(word_size):
for value in range(block_size):
if index[axis] != value:
copy = list(index)
copy[axis] = value
slave = eval_index(grid, block_size, copy)
assert slave != master, 'Cell not unique along axis!'
# grid
# - grid object to be accessed and evaluated
# - grid should come from the make_grid function
# - must have unique bytes along each axis
# block_size
# - comes from length of one block in a sudoku_key
# - this is the length of one edge along the grid
# - each axis is exactly this many units long
# index
# - list of coordinates to access the grid
# - should be of length word_size
# - should be of length equal to number of dimensions in the grid
def eval_index(grid, block_size, index):
offset = 0
for power, value in enumerate(reversed(index)):
offset += value * block_size ** power
return grid[int(offset)]
################################################################################
# The following functions act as a suite that can ultimately
# encrypt strings, though other functions can be built from them.
# bytes_obj
# - the bytes to encode
# byte_map
# - byte transform map for inserting into the index
# grid
# - X dimensional grid used to evaluate markov chains
# index
# - list that starts the index for accessing grid (primer)
# - it should be of length word_size - 1
# block_size
# - length of each edge in a grid
def _encode(bytes_obj, byte_map, grid, index, block_size):
cache = bytes()
index = [0] + index
for byte in bytes_obj:
if byte in byte_map:
index.append(byte_map[byte])
index = index[1:]
cache += bytes([eval_index(grid, block_size, index)])
else:
cache += bytes([byte])
return cache, index[1:]
# bytes_obj
# - the bytes to encode
# sudoku_key
# - should be a proper "markov" key
# - this key will be automatically checked for correctness
# boot_strap
# - should be a proper "markov" bootstrap
def encrypt(bytes_obj, sudoku_key, boot_strap):
_check_sudoku_key(sudoku_key)
_check_boot_strap(boot_strap, sudoku_key)
# make byte_map
array = sorted(sudoku_key[0])
byte_map = dict((byte, value) for value, byte in enumerate(array))
# create two more arguments for encode
grid = make_grid(sudoku_key)
index = list(map(byte_map.__getitem__, boot_strap))
# run the actual encoding algorithm and create reversed map
code, index = _encode(bytes_obj, byte_map, grid, index, len(sudoku_key[0]))
rev_map = dict(reversed(item) for item in byte_map.items())
# fix the boot_strap and return the results
boot_strap = bytes(rev_map[number] for number in index)
return code, boot_strap
# string
# - should be the string that you want encoded
# word_size
# - length you want the markov chains to be of
# plain_chars
# - characters that you do not want to encrypt
def encrypt_str(string, word_size, plain_chars=''):
byte_obj = string.encode(CODEC)
encode_on = set(byte_obj).difference(set(plain_chars.encode()))
sudoku_key = make_sudoku_key(encode_on, word_size)
boot_strap = make_boot_strap(sudoku_key)
cyphertext = encrypt(byte_obj, sudoku_key, boot_strap)[0]
# return encrypted string, key, and original bootstrap
return cyphertext.decode(CODEC), sudoku_key, boot_strap
def grid_size(string, word_size, plain_chars):
encode_on = set(string.encode()).difference(set(plain_chars.encode()))
return len(encode_on) ** word_size
################################################################################
# The following functions act as a suite that can ultimately
# decrypt strings, though other functions can be built from them.
# bytes_obj
# - the bytes to encode
# byte_map
# - byte transform map for inserting into the index
# grid
# - X dimensional grid used to evaluate markov chains
# index
# - list that starts the index for accessing grid (primer)
# - it should be of length word_size - 1
# block_size
# - length of each edge in a grid
def _decode(bytes_obj, byte_map, grid, index, block_size):
cache = bytes()
index = [0] + index
for byte in bytes_obj:
if byte in byte_map:
index.append(byte_map[byte])
index = index[1:]
decoded = eval_index(grid, block_size, index)
index[-1] = byte_map[decoded]
cache += bytes([decoded])
else:
cache += bytes([byte])
return cache, index[1:]
# bytes_obj
# - the bytes to decode
# sudoku_key
# - should be a proper "markov" key
# - this key will be automatically checked for correctness
# boot_strap
# - should be a proper "markov" bootstrap
def decrypt(bytes_obj, sudoku_key, boot_strap):
_check_sudoku_key(sudoku_key)
_check_boot_strap(boot_strap, sudoku_key)
# make byte_map
array = sorted(sudoku_key[0])
byte_map = dict((byte, value) for value, byte in enumerate(array))
# create two more arguments for decode
grid = make_grid(sudoku_key)
grid = make_decode_grid(grid, len(sudoku_key[0]))
index = list(map(byte_map.__getitem__, boot_strap))
# run the actual decoding algorithm and create reversed map
code, index = _decode(bytes_obj, byte_map, grid, index, len(sudoku_key[0]))
rev_map = dict(reversed(item) for item in byte_map.items())
# fix the boot_strap and return the results
boot_strap = bytes(rev_map[number] for number in index)
return code, boot_strap
# string
# - should be the string that you want decoded
# sudoku_key
# - should be a proper "markov" key
# boot_strap
# - should be a proper "markov" bootstrap
def decrypt_str(string, sudoku_key, boot_strap):
byte_obj = string.encode(CODEC)
plaintext = decrypt(byte_obj, sudoku_key, boot_strap)[0]
# return the decrypted plaintext string
return plaintext.decode(CODEC)
################################################################################
def convert(number):
"Convert bytes into human-readable representation."
assert 0 < number < 1 << 110, 'Number Out Of Range'
ordered = reversed(tuple(format_bytes(partition_number(number, 1 << 10))))
cleaned = ', '.join(item for item in ordered if item[0] != '0')
return cleaned
################################################################################
def partition_number(number, base):
"Continually divide number by base until zero."
div, mod = divmod(number, base)
yield mod
while div:
div, mod = divmod(div, base)
yield mod
def format_bytes(parts):
"Format partitioned bytes into human-readable strings."
for power, number in enumerate(parts):
yield '{} {}'.format(number, format_suffix(power, number))
def format_suffix(power, number):
"Compute the suffix for a certain power of bytes."
return (PREFIX[power] + 'byte').capitalize() + ('s' if number != 1 else '')
################################################################################
PREFIX = ' kilo mega giga tera peta exa zetta yotta bronto geop'.split(' ')
################################################################################
if __name__ == '__main__':
root = Tk()
root.title('Markov Demo')
demo = MarkovDemo(root)
root.mainloop()
A:
Strings are by definition a sequence of bytes that only have meaning when interpreted with the knowledge of the encoding. That's one reason why the equivalent of Python 2's string type in Python 3 is the bytes type. As long as you know the encoding of the strings you're working with, I'm not sure you specifically need to recode it just to compress/encrypt it. Details of what you're actually doing might make a difference, though.
A:
Python's decode has error settings. The default is strict, which throws an exception.
Wherever you are doing the decoding, you can specify 'ignore' or 'replace' as a setting, and this will take care of your problems.
Please see the codecs documentation.
|
Q:
Hello world Pyamf small error message
Hi, I am trying to link Flex to Django with PyAMF.
As a first step I tried the basic Hello World:
http://pyamf.org/wiki/DjangoHowto
But that results in an ErrorFault.
I use Django 1.0.2.
amfgateway.py in the root folder of my project (same level as settings)
import pyamf
from pyamf.remoting.gateway.django import DjangoGateway
from django.contrib.auth.models import User
pyamf.register_class(User, 'django.contrib.auth.models.User')
def get_users(request):
return User.objects.all()
def echo(request, data):
return data
services = {
'myservice.echo': echo,
'myservice.get_users': get_users,
}
edoGateway = DjangoGateway(services, expose_request=False)
In urls.py
urlpatterns = patterns('',
# test pyamf
url(r'^gateway/', 'amfgateway.edoGateway'),
...
)
Then when I test the example with the PyAMF client:
from pyamf.remoting.client import RemotingService
gw = RemotingService('http://127.0.0.1:8000/gateway/')
service = gw.getService('myservice')
print service.echo('Hello World!')
I get
ErrorFault level=error code=500 type=u'AttributeError' description=u"Cannot find a view
for the path ['/gateway/myservice/echo'], 'DjangoGateway' object has no attribute 'nam
e'"
Traceback:
u"Cannot find a view for the path ['/gateway/myservice/echo'], 'DjangoGateway' object ha
s no attribute 'name'"
A:
I think you may need to take the request parameter out of your echo def; at least, the method on the PyAMF example site doesn't have that parameter.
A:
Although the error is unrelated, JMP is correct: you have expose_request=False on the gateway, yet the service definition for echo takes the Django HTTP request object as its first argument.
This isn't going to work. However, PyAMF does allow some granularity here: you can use the expose_request decorator, e.g.:
from pyamf.remoting.gateway import expose_request
@expose_request
def echo(request, data):
    return data
|
Q:
Django edit form based on add form?
I've made a nice form, and a big complicated 'add' function for handling it. It starts like this...
def add(req):
if req.method == 'POST':
form = ArticleForm(req.POST)
if form.is_valid():
article = form.save(commit=False)
article.author = req.user
# more processing ...
Now I don't really want to duplicate all that functionality in the edit() method, so I figured edit could use the exact same template, and maybe just add an id field to the form so the add function knew what it was editing. But there are a couple of problems with this:
Where would I set article.id in the add func? It would have to be after form.save because that's where the article gets created, but it would never even reach that, because the form is invalid due to unique constraints (unless the user edited everything). I can just remove the is_valid check, but then form.save fails instead.
If the form actually is invalid, the field I dynamically added in the edit function isn't preserved.
So how do I deal with this?
A:
If you are extending your form from a ModelForm, use the instance keyword argument. Here we pass either an existing instance or a new one, depending on whether we're editing or adding an existing article. In both cases the author field is set on the instance, so commit=False is not required. Note also that I'm assuming only the author may edit their own articles, hence the HttpResponseForbidden response.
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseForbidden
from django.shortcuts import get_object_or_404, redirect, render
from django.urls import reverse
@login_required
def edit(request, id=None, template_name='article_edit_template.html'):
if id:
article = get_object_or_404(Article, pk=id)
if article.author != request.user:
return HttpResponseForbidden()
else:
article = Article(author=request.user)
form = ArticleForm(request.POST or None, instance=article)
if request.POST and form.is_valid():
form.save()
# Save was successful, so redirect to another page
redirect_url = reverse(article_save_success)
return redirect(redirect_url)
return render(request, template_name, {
'form': form
})
And in your urls.py:
(r'^article/new/$', views.edit, {}, 'article_new'),
(r'^article/edit/(?P<id>\d+)/$', views.edit, {}, 'article_edit'),
The same edit view is used for both adds and edits, but only the edit url pattern passes an id to the view. To make this work well with your form you'll need to omit the author field from the form:
class ArticleForm(forms.ModelForm):
class Meta:
model = Article
exclude = ('author',)
A:
You can have a hidden ID field in the form. For the edit form it will be passed with the form; for the add form you can set it in req.POST, e.g.:
formData = req.POST.copy()
formData['id'] = getNewID()
and pass that formData to form
|
Q:
Writing a Faster Python Spider
I'm writing a spider in Python to crawl a site. Trouble is, I need to examine about 2.5 million pages, so I could really use some help making it optimized for speed.
What I need to do is examine the pages for a certain number and, if it is found, record the link to the page. The spider is very simple; it just needs to sort through a lot of pages.
I'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to include would be great. Any optimization tips are also greatly appreciated.
A:
You could use MapReduce like Google does, either via Hadoop (specifically with Python: 1 and 2), Disco, or Happy.
The traditional line of thought, is write your program in standard Python, if you find it is too slow, profile it, and optimize the specific slow spots. You can make these slow spots faster by dropping down to C, using C/C++ extensions or even ctypes.
If you are spidering just one site, consider using wget -r (an example).
A:
Where are you storing the results? You can use PiCloud's cloud library to parallelize your scraping easily across a cluster of servers.
A:
As you are new to Python, I think the following may be helpful for you :)
if you are writing a regex to search for a certain pattern in the page, compile your regex wherever you can and reuse the compiled object
BeautifulSoup is a html/xml parser that may be of some use for your project.
A:
Spidering somebody's site with millions of requests isn't very polite. Can you instead ask the webmaster for an archive of the site? Once you have that, it's a simple matter of text searching.
A:
You waste a lot of time waiting for network requests when spidering, so you'll definitely want to make your requests in parallel. I would probably save the result data to disk and then have a second process looping over the files searching for the term. That phase could easily be distributed across multiple machines if you needed extra performance.
A:
What Adam said. I did this once to map out Xanga's network. The way I made it faster is by having a thread-safe set containing all usernames I had to look up. Then I had 5 or so threads making requests at the same time and processing them. You're going to spend way more time waiting for the page to download than you will processing any of the text (most likely), so just find ways to increase the number of requests you can make at the same time.
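The approach the last two answers describe (a shared work queue plus a handful of threads, so the network waits overlap) can be sketched as follows. This is a hedged sketch, not a full spider: `crawl`, `fetch`, and the worker count are illustrative names of my choosing, and `fetch` is passed in as a callable so the snippet stays independent of any real site (in practice it would wrap `urllib.request.urlopen`).

```python
import queue
import re
import threading

def crawl(urls, target, fetch, workers=5):
    """Return the URLs whose page text contains `target`.

    `fetch` is any callable mapping a URL to page text; in a real
    spider it would wrap urllib.request.urlopen. It is a parameter
    here so the sketch works without network access.
    """
    todo = queue.Queue()
    for url in urls:
        todo.put(url)

    hits = []
    lock = threading.Lock()
    pattern = re.compile(re.escape(str(target)))

    def worker():
        while True:
            try:
                url = todo.get_nowait()
            except queue.Empty:
                return  # queue drained, this thread exits
            # The download is where the time goes; running several
            # workers lets the waits overlap.
            page = fetch(url)
            if pattern.search(page):
                with lock:
                    hits.append(url)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return hits

if __name__ == '__main__':
    # Stub "site": a dict lookup stands in for the real download.
    pages = {'a.html': 'id 42 found', 'b.html': 'nothing here'}
    print(crawl(list(pages), 42, pages.get, workers=2))  # prints ['a.html']
```

The second phase the answers mention (searching already-downloaded pages on disk, possibly on other machines) would simply call `crawl` with a `fetch` that reads files instead of sockets.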
|
Q:
No readyReadStandardOutput signal from QProcess
Why do I never get the readyReadStandardOutput signal when I run the following?
import os, sys, textwrap
from PyQt4 import QtGui, QtCore
out_file = open("sleep_loop.py", 'w')
out_file.write(textwrap.dedent("""
import time
while True:
print "sleeping..."
time.sleep(1)"""))
out_file.close()
def started():
print "started"
def on_error(error):
errors = ["Failed to start", "Crashed", "Timedout", "Read error",
"Write Error", "Unknown Error"]
print "error: ", errors[error]
def on_state_change(new_state):
states = ["Not running", "Starting", "Running"]
print "new state: ", states[new_state]
def on_out():
print "got out"
proc = QtCore.QProcess()
sig = QtCore.SIGNAL
proc.connect(proc, sig("started()"), started)
proc.connect(proc, sig("error(ProcessError)"), on_error)
proc.connect(proc, sig("readyReadStandardOutput()"), on_out)
proc.connect(proc, sig("stateChanged(ProcessState)"),
on_state_change)
proc.start("python sleep_loop.py")
app = QtGui.QApplication(sys.argv)
widget = QtGui.QWidget()
widget.show()
app.exec_()
proc.close()
A:
Two problems here:
You should create QApplication instance before creating everything else.
Your child process is buffering its output.
Here is the fixed code, only two lines changed:
app = QApplication moved before proc = QProcess
child process now has sys.stdout.flush()
And now everything works as you expected:
import os, sys, textwrap
from PyQt4 import QtGui, QtCore
out_file = open("sleep_loop.py", 'w')
out_file.write(textwrap.dedent("""
import time, sys
while True:
print "sleeping..."
sys.stdout.flush()
time.sleep(1)"""))
out_file.close()
def started():
print "started"
def on_error(error):
errors = ["Failed to start", "Crashed", "Timedout", "Read error",
"Write Error", "Unknown Error"]
print "error: ", errors[error]
def on_state_change(new_state):
states = ["Not running", "Starting", "Running"]
print "new state: ", states[new_state]
def on_out():
print "got out"
app = QtGui.QApplication(sys.argv)
proc = QtCore.QProcess()
sig = QtCore.SIGNAL
proc.connect(proc, sig("started()"), started)
proc.connect(proc, sig("error(ProcessError)"), on_error)
proc.connect(proc, sig("readyReadStandardOutput()"), on_out)
proc.connect(proc, sig("stateChanged(ProcessState)"),
on_state_change)
proc.start("python sleep_loop.py")
widget = QtGui.QWidget()
widget.show()
app.exec_()
proc.close()
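A side note, not part of the original answer: the same buffering problem bites any parent/child pairing. Instead of calling sys.stdout.flush() in the child, you can start the child interpreter with -u so its stdout is unbuffered. A minimal sketch with plain subprocess (the child script is inlined here just to keep the example self-contained):

```python
import subprocess
import sys

# Start a child Python with -u (unbuffered stdout); no flush() needed
# in the child for the parent to see its output immediately.
child = subprocess.Popen(
    [sys.executable, "-u", "-c", "print('sleeping...')"],
    stdout=subprocess.PIPE,
)
first_line = child.stdout.readline().decode().strip()
child.wait()
print(first_line)  # -> sleeping...
```

With QProcess the equivalent would be starting the child as `python -u sleep_loop.py` (untested here, but the idea is the same).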
|
No readyReadStandardOutput signal from QProcess
|
Why do I never get the readyReadStandardOutput signal when I run the following?
import os, sys, textwrap
from PyQt4 import QtGui, QtCore
out_file = open("sleep_loop.py", 'w')
out_file.write(textwrap.dedent("""
import time
while True:
print "sleeping..."
time.sleep(1)"""))
out_file.close()
def started():
print "started"
def on_error(error):
errors = ["Failed to start", "Crashed", "Timedout", "Read error",
"Write Error", "Unknown Error"]
print "error: ", errors[error]
def on_state_change(new_state):
states = ["Not running", "Starting", "Running"]
print "new state: ", states[new_state]
def on_out():
print "got out"
proc = QtCore.QProcess()
sig = QtCore.SIGNAL
proc.connect(proc, sig("started()"), started)
proc.connect(proc, sig("error(ProcessError)"), on_error)
proc.connect(proc, sig("readyReadStandardOutput()"), on_out)
proc.connect(proc, sig("stateChanged(ProcessState)"),
on_state_change)
proc.start("python sleep_loop.py")
app = QtGui.QApplication(sys.argv)
widget = QtGui.QWidget()
widget.show()
app.exec_()
proc.close()
|
[
"Two problems here:\n\nYou should create QApplication instance before creating everything else.\nYour child process is buffering its output.\n\nHere is the fixed code, only two lines changed:\n\napp = QApplication moved before proc = QProcess\nchild process now has sys.stdout.flush()\n\nAnd now everything works as you expected:\nimport os, sys, textwrap\n\nfrom PyQt4 import QtGui, QtCore\n\nout_file = open(\"sleep_loop.py\", 'w')\nout_file.write(textwrap.dedent(\"\"\"\n import time, sys\n\n while True:\n print \"sleeping...\"\n sys.stdout.flush()\n time.sleep(1)\"\"\"))\nout_file.close()\n\ndef started():\n print \"started\"\n\ndef on_error(error):\n errors = [\"Failed to start\", \"Crashed\", \"Timedout\", \"Read error\", \n \"Write Error\", \"Unknown Error\"]\n print \"error: \", errors[error] \n\ndef on_state_change(new_state):\n states = [\"Not running\", \"Starting\", \"Running\"]\n print \"new state: \", states[new_state]\n\ndef on_out():\n print \"got out\"\n\napp = QtGui.QApplication(sys.argv)\nproc = QtCore.QProcess()\nsig = QtCore.SIGNAL\nproc.connect(proc, sig(\"started()\"), started)\nproc.connect(proc, sig(\"error(ProcessError)\"), on_error)\nproc.connect(proc, sig(\"readyReadStandardOutput()\"), on_out)\nproc.connect(proc, sig(\"stateChanged(ProcessState)\"), \n on_state_change)\nproc.start(\"python sleep_loop.py\")\n\nwidget = QtGui.QWidget()\nwidget.show()\napp.exec_()\n\nproc.close()\n\n"
] |
[
4
] |
[] |
[] |
[
"pyqt",
"python",
"qprocess",
"qt"
] |
stackoverflow_0001854247_pyqt_python_qprocess_qt.txt
|
Q:
List view is not refreshed if setTabText() is called
Yes, I know this sounds crazy. But here's the situation.
I composed minimal code reproducing the bug. The code creates a main window with a QTabWidget, which, in turn, has one tab with a QListView and a button. The list view is connected to a QAbstractListModel. Initially, the list model contains an empty list. When the user clicks the button, it is populated with 3 elements and the corresponding signal is emitted. On this signal, the tab page emits a signal with the new title, which is caught by QMainWindow and used to change the tab title.
So, the problem is, if I call setTabText() with this new title, the list view remains empty until I click on it (then the new items instantly appear). If I use the new title in setWindowTitle() instead, the new items appear in the list view right after pressing the button. Am I doing something wrong, or is there some bug in QTabWidget (or the Python mapping)?
Code is the following:
from PyQt4 import QtGui, QtCore
import sys
class MainWindow(QtGui.QMainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
self.setWindowTitle("Test")
self._tabbar = QtGui.QTabWidget()
self.setCentralWidget(self._tabbar)
tab = SearchWindow(self)
tab.titleChanged.connect(self._refreshTabTitle)
self._tabbar.addTab(tab, "Initial title")
def _refreshTabTitle(self, title):
# if line 1 is commented - no bug, if line 2 is commented - bug exists
self._tabbar.setTabText(0, title) # line 1
#self.setWindowTitle(title) # line 2
class SearchWindow(QtGui.QSplitter):
titleChanged = QtCore.pyqtSignal(str)
def __init__(self, parent):
QtGui.QSplitter.__init__(self, QtCore.Qt.Vertical, parent)
results_model = ResultsModel(self)
results_view = QtGui.QListView()
results_view.setModel(results_model)
self.addWidget(results_view)
search_button = QtGui.QPushButton(">>")
search_button.clicked.connect(results_model.refreshResults)
self.addWidget(search_button)
results_model.searchFinished.connect(self._refreshTitle)
def _refreshTitle(self):
self.titleChanged.emit("New title")
class ResultsModel(QtCore.QAbstractListModel):
searchFinished = QtCore.pyqtSignal()
def __init__(self, parent):
QtCore.QAbstractListModel.__init__(self, parent)
self._results = []
def rowCount(self, parent):
return len(self._results)
def data(self, index, role=QtCore.Qt.DisplayRole):
if not index.isValid():
return None
        elif index.row() >= len(self._results):
return None
elif role == QtCore.Qt.DisplayRole:
return self._results[index.row()]
def refreshResults(self):
self._results = ['result1', 'result2', 'result3']
self.reset()
self.searchFinished.emit()
app = QtGui.QApplication(sys.argv)
wnd = MainWindow()
wnd.show()
sys.exit(app.exec_())
Tested on Mac OS 10.6.2, Qt SDK 2009.04 (4.5), pyQt 4.6.1 (maybe this is the problem and I need to use 4.5?), Python 3.1.
A:
Could not reproduce your problem using Linux, Qt 4.5.3, pyQt 4.5.4, python 2.5.2.
I guess this is definitely version/platform-dependent. You should try Qt 4.5.3 + pyQt 4.5.4 + python 2.5.2 on MacOS. If you can reproduce the problem, it is more likely a bug in the MacOS Qt port. If you can't, you should try newer Qt versions under Windows or Linux.
|
List view is not refreshed if setTabText() is called
|
Yes, I know this sounds crazy. But here's the situation.
I composed minimal code reproducing the bug. The code creates a main window with a QTabWidget, which, in turn, has one tab with a QListView and a button. The list view is connected to a QAbstractListModel. Initially, the list model contains an empty list. When the user clicks the button, it is populated with 3 elements and the corresponding signal is emitted. On this signal, the tab page emits a signal with the new title, which is caught by QMainWindow and used to change the tab title.
So, the problem is, if I call setTabText() with this new title, the list view remains empty until I click on it (then the new items instantly appear). If I use the new title in setWindowTitle() instead, the new items appear in the list view right after pressing the button. Am I doing something wrong, or is there some bug in QTabWidget (or the Python mapping)?
Code is the following:
from PyQt4 import QtGui, QtCore
import sys
class MainWindow(QtGui.QMainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
self.setWindowTitle("Test")
self._tabbar = QtGui.QTabWidget()
self.setCentralWidget(self._tabbar)
tab = SearchWindow(self)
tab.titleChanged.connect(self._refreshTabTitle)
self._tabbar.addTab(tab, "Initial title")
def _refreshTabTitle(self, title):
# if line 1 is commented - no bug, if line 2 is commented - bug exists
self._tabbar.setTabText(0, title) # line 1
#self.setWindowTitle(title) # line 2
class SearchWindow(QtGui.QSplitter):
titleChanged = QtCore.pyqtSignal(str)
def __init__(self, parent):
QtGui.QSplitter.__init__(self, QtCore.Qt.Vertical, parent)
results_model = ResultsModel(self)
results_view = QtGui.QListView()
results_view.setModel(results_model)
self.addWidget(results_view)
search_button = QtGui.QPushButton(">>")
search_button.clicked.connect(results_model.refreshResults)
self.addWidget(search_button)
results_model.searchFinished.connect(self._refreshTitle)
def _refreshTitle(self):
self.titleChanged.emit("New title")
class ResultsModel(QtCore.QAbstractListModel):
searchFinished = QtCore.pyqtSignal()
def __init__(self, parent):
QtCore.QAbstractListModel.__init__(self, parent)
self._results = []
def rowCount(self, parent):
return len(self._results)
def data(self, index, role=QtCore.Qt.DisplayRole):
if not index.isValid():
return None
        elif index.row() >= len(self._results):
return None
elif role == QtCore.Qt.DisplayRole:
return self._results[index.row()]
def refreshResults(self):
self._results = ['result1', 'result2', 'result3']
self.reset()
self.searchFinished.emit()
app = QtGui.QApplication(sys.argv)
wnd = MainWindow()
wnd.show()
sys.exit(app.exec_())
Tested on Mac OS 10.6.2, Qt SDK 2009.04 (4.5), pyQt 4.6.1 (maybe this is the problem and I need to use 4.5?), Python 3.1.
|
[
"Could not reproduce your problem using Linux, Qt 4.5.3, pyQt 4.5.4, python 2.5.2. \nI guess this is definitely version/platform-dependent. You should try Qt 4.5.3 + pyQt 4.5.4 + python 2.5.2 on MacOS. If you can reproduce the problem, it is more like a bug in MacOS qt port. If you can't you should try newer qt versions under Windows or Linux.\n"
] |
[
0
] |
[] |
[] |
[
"pyqt",
"pyqt4",
"python",
"qt"
] |
stackoverflow_0001815154_pyqt_pyqt4_python_qt.txt
|
Q:
Replace Multiple lines in Jython
I have written a small program to replace a set of characters, but I also want to have two or more replace commands in a single program.
Apart from that, I also want to add a bracket after a random set of characters.
This is my program:
file_read=open('<%=odiRef.getOption("READ")%>/EXPORT.XML','r')
file_write=open('<%=odiRef.getOption("READ")%>/EXPORT_1.XML','w')
count_record=file_read.read()
while count_record :
s=count_record.replace('<Field name="ExeDb"type="java.lang.String"><![CDATA[S]]></Field>','<Field name="ExeDb" type="java.lang.String"><![CDATA[W]]></Field>')
file_write.write(s)
t=count_record.replace('<Field name="Txt" type="java.lang.String"><![CDATA[','<Field name="Txt" type="java.lang.String"><![CDATA[TRIM(')
file_write.write(t)
count_record=file_read.read()
print s
file_read.close()
file_write.close()
As you can see, when I try to do this with the read line, I get two lines in the final file.
1) I want both replace commands to work, but with only a single output file.
2) Also, is there any way to read and write in a single file? I don't know why r+ was not working properly.
3) I also want to modify the line
t=count_record.replace('<Field name="Txt" type="java.lang.String"><![CDATA[','<Field name="Txt" type="java.lang.String"><![CDATA[TRIM(')
to something like
t=count_record.replace('<Field name="Txt" type="java.lang.String"><![CDATA[','<Field name="Txt" type="java.lang.String"><![CDATA[TRIM($$$) ')
where $$$ represents the words or characters present in the source file.
In short: add a closing bracket ) at the end, irrespective of the number of words or characters after the opening bracket.
Thanks so much for all your help.
A:
This is wrong on many levels - you cannot mix reading and writing on the same file handle without seeking in between, file.read() reads the entire contents at once, and you don't have to save after each replace. Something like this:
file = open('myfile', 'r+')
contents = file.read()
file.seek(0) # rewind
file.write(contents.replace('something', 'else').replace('and this too', 'replaced'))
The code in your question got garbled (it gets mangled here too...). You need to do that replacement using regular expressions. See the "re" module description; you basically need something like this:
import re
contents = re.sub(
'<Field name="Txt" type="java.lang.String"><!\[CDATA\[TRIM\(([^)]*)\]\]></Field>',
'<Field name="Txt" type="java.lang.String"><![CDATA[TRIM(\1)]]></Field>',
contents
)
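The question's step 3 can also be done in a single pass, without the intermediate prefix-only replace. A self-contained sketch (the sample line is made up; the field names come from the question):

```python
import re

# Wrap the CDATA payload of the "Txt" field in TRIM(...), closing
# bracket included, in one re.sub call using backreferences.
line = '<Field name="Txt" type="java.lang.String"><![CDATA[hello world]]></Field>'

wrapped = re.sub(
    r'(<Field name="Txt" type="java\.lang\.String"><!\[CDATA\[)(.*?)(\]\]></Field>)',
    r'\1TRIM(\2)\3',
    line,
)
print(wrapped)
# -> <Field name="Txt" type="java.lang.String"><![CDATA[TRIM(hello world)]]></Field>
```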
|
Replace Multiple lines in Jython
|
I have written a small program to replace a set of characters, but I also want to have two or more replace commands in a single program.
Apart from that, I also want to add a bracket after a random set of characters.
This is my program:
file_read=open('<%=odiRef.getOption("READ")%>/EXPORT.XML','r')
file_write=open('<%=odiRef.getOption("READ")%>/EXPORT_1.XML','w')
count_record=file_read.read()
while count_record :
s=count_record.replace('<Field name="ExeDb"type="java.lang.String"><![CDATA[S]]></Field>','<Field name="ExeDb" type="java.lang.String"><![CDATA[W]]></Field>')
file_write.write(s)
t=count_record.replace('<Field name="Txt" type="java.lang.String"><![CDATA[','<Field name="Txt" type="java.lang.String"><![CDATA[TRIM(')
file_write.write(t)
count_record=file_read.read()
print s
file_read.close()
file_write.close()
As you can see, when I try to do this with the read line, I get two lines in the final file.
1) I want both replace commands to work, but with only a single output file.
2) Also, is there any way to read and write in a single file? I don't know why r+ was not working properly.
3) I also want to modify the line
t=count_record.replace('<Field name="Txt" type="java.lang.String"><![CDATA[','<Field name="Txt" type="java.lang.String"><![CDATA[TRIM(')
to something like
t=count_record.replace('<Field name="Txt" type="java.lang.String"><![CDATA[','<Field name="Txt" type="java.lang.String"><![CDATA[TRIM($$$) ')
where $$$ represents the words or characters present in the source file.
In short: add a closing bracket ) at the end, irrespective of the number of words or characters after the opening bracket.
Thanks so much for all your help.
|
[
"This is wrong on many levels - you can not simultaneously read and write from the same file, file.read() command reads entire contents, and you dont have to save after each replace. Something like this:\nfile = open('myfile', 'r+')\ncontents = file.read()\nfile.seek(0) # rewind \nfile.write(contents.replace('something', 'else').replace('and this too', 'replaced'))\n\nComment code is garbled, including here... You need to replace that using regular expressions. See module \"re\" description, you basically need something like this: \nimport re\ncontents = re.sub(\n '<Field name=\"Txt\" type=\"java.lang.String\"><!\\[CDATA\\[TRIM\\(([^)]*)\\]\\]></Field>', \n '<Field name=\"Txt\" type=\"java.lang.String\"><![CDATA[TRIM(\\1)]]></Field>', \n contents\n)\n\n"
] |
[
3
] |
[] |
[] |
[
"jython",
"python",
"replace",
"string"
] |
stackoverflow_0001853921_jython_python_replace_string.txt
|
Q:
Django: PYTHON_EGG_CACHE, access denied error
I am deploying my django application on a server, and on last stages I am getting this error:
ExtractionError at /admin/
Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 13] Permission denied: '/.python-eggs'
The Python egg cache directory is currently set to:
/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
Request Method: GET
Request URL: http://go-ban.org/admin/
Exception Type: ExtractionError
Exception Value:
Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 13] Permission denied: '/.python-eggs'
The Python egg cache directory is currently set to:
/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
Exception Location: /usr/lib/python2.5/site-packages/pkg_resources.py in extraction_error, line 887
Python Executable: /usr/bin/python
Python Version: 2.5.2
Python Path: ['/home/oleg/sites/goban', '/usr/lib/python2.5/site-packages/PIL-1.1.7-py2.5-linux-i686.egg', '/usr/lib/python2.5/site-packages/PyAMF-0.5.1-py2.5-linux-i686.egg', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages/PIL', '/var/lib/python-support/python2.5']
Server time: Sun, 6 Dec 2009 14:05:47 +0200
Maybe someone come across similar issue?
The strangest thing here is that I am using another django site on this host with no such error :(
Related question
apache user can not write to .python-eggs
A:
Well, you are using some strange setuptools-enabled library.
But anyway, is there any problem with setting the PYTHON_EGG_CACHE environment variable to a directory writable by the application user?
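A minimal sketch of that fix: point PYTHON_EGG_CACHE at a directory the web-server user can write to, before anything imports pkg_resources. In a mod_python/mod_wsgi setup these lines would go at the very top of the handler or WSGI file; the tmp-based path here is just an example, not a recommendation.

```python
import os
import tempfile

# Create a writable egg-cache directory and point setuptools at it
# before pkg_resources gets imported.
egg_cache = os.path.join(tempfile.gettempdir(), "python-eggs")
if not os.path.isdir(egg_cache):
    os.makedirs(egg_cache)
os.environ["PYTHON_EGG_CACHE"] = egg_cache
print(os.environ["PYTHON_EGG_CACHE"])
```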
|
Django: PYTHON_EGG_CACHE, access denied error
|
I am deploying my django application on a server, and on last stages I am getting this error:
ExtractionError at /admin/
Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 13] Permission denied: '/.python-eggs'
The Python egg cache directory is currently set to:
/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
Request Method: GET
Request URL: http://go-ban.org/admin/
Exception Type: ExtractionError
Exception Value:
Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 13] Permission denied: '/.python-eggs'
The Python egg cache directory is currently set to:
/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
Exception Location: /usr/lib/python2.5/site-packages/pkg_resources.py in extraction_error, line 887
Python Executable: /usr/bin/python
Python Version: 2.5.2
Python Path: ['/home/oleg/sites/goban', '/usr/lib/python2.5/site-packages/PIL-1.1.7-py2.5-linux-i686.egg', '/usr/lib/python2.5/site-packages/PyAMF-0.5.1-py2.5-linux-i686.egg', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages/PIL', '/var/lib/python-support/python2.5']
Server time: Sun, 6 Dec 2009 14:05:47 +0200
Maybe someone come across similar issue?
The strangest thing here is that I am using another django site on this host with no such error :(
Related question
apache user can not write to .python-eggs
|
[
"Well, you are using some strange setuptools-enabled library.\nBut anyway, is there a problem for you to setup PYTHON_EGG_CACHE environment variable to any directory writable for application user?\n"
] |
[
6
] |
[] |
[] |
[
"django",
"python",
"python_egg_cache"
] |
stackoverflow_0001855219_django_python_python_egg_cache.txt
|
Q:
Dealing with URLs in Django
So, basically what I'm trying to do is a hockey pool application, and there are a ton of ways I should be able to filter to view the data. For example, filter by free agent, goals, assists, position, etc.
I'm planning on doing this with a bunch of query strings, but I'm not sure what the best approach would be for passing these query strings along. Let's say I wanted to be on page 2 (as I'm using pagination for splitting the pages), sort by goals, and only show forwards; I would have the following query string:
?page=2&sort=g&position=f
But if I was on that page, and it was showing me all this corresponding info, and I then clicked, say, points instead of goals, I would still want all my other filters intact, like this:
?page=2&sort=p&position=f
Since HTTP is stateless, I'm having trouble on what the best approach to this would be.. If anyone has some good ideas they would be much appreciated, thanks ;)
Shawn J
A:
Firstly, think about whether you really want to save all the parameters each time. In the example you give, you change the sort order but preserve the page number. Does this really make sense, considering you will now have different elements on that page. Even more, if you change the filters, the currently selected page number might not even exist.
Anyway, assuming that is what you want, you don't need to worry about state or cookies or any of that, seeing as all the information you need is already in the GET parameters. All you need to do is to replace one of these parameters as required, then re-encode the string. Easy to do in a template tag, since GET parameters are stored as a QueryDict which is basically just a dictionary.
Something like (untested):
@register.simple_tag
def url_with_changed_parameter(request, param, value):
    params = request.GET.copy()  # request.GET is immutable, so work on a copy
    params[param] = value
    return "%s?%s" % (request.path, params.urlencode())
and you would use it in your template:
{% url_with_changed_parameter request "page" 2 %}
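The same parameter-swap idea, framework-free (using Python 3's urllib.parse just to keep the example self-contained; the helper name is made up):

```python
from urllib.parse import parse_qs, urlencode

# Take the current query string, overwrite one parameter, and
# re-encode the rest unchanged.
def with_changed_parameter(path, query, param, value):
    params = parse_qs(query)
    params[param] = [str(value)]
    return "%s?%s" % (path, urlencode(params, doseq=True))

url = with_changed_parameter("/pool/", "page=2&sort=g&position=f", "sort", "p")
print(url)  # -> /pool/?page=2&sort=p&position=f
```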
A:
Have you looked at django-filter? It's really awesome.
A:
Check out the filter mechanism in the admin application; it includes dealing with dynamically constructed URLs with filter information supplied in the query string.
In addition - consider saving actual state information in cookies/sessions.
A:
If you want to save all the "parameters", I'd say they are resource identifiers and should normally be part of the URI.
|
Dealing with URLs in Django
|
So, basically what I'm trying to do is a hockey pool application, and there are a ton of ways I should be able to filter to view the data. For example, filter by free agent, goals, assists, position, etc.
I'm planning on doing this with a bunch of query strings, but I'm not sure what the best approach would be for passing these query strings along. Let's say I wanted to be on page 2 (as I'm using pagination for splitting the pages), sort by goals, and only show forwards; I would have the following query string:
?page=2&sort=g&position=f
But if I was on that page, and it was showing me all this corresponding info, and I then clicked, say, points instead of goals, I would still want all my other filters intact, like this:
?page=2&sort=p&position=f
Since HTTP is stateless, I'm having trouble on what the best approach to this would be.. If anyone has some good ideas they would be much appreciated, thanks ;)
Shawn J
|
[
"Firstly, think about whether you really want to save all the parameters each time. In the example you give, you change the sort order but preserve the page number. Does this really make sense, considering you will now have different elements on that page. Even more, if you change the filters, the currently selected page number might not even exist. \nAnyway, assuming that is what you want, you don't need to worry about state or cookies or any of that, seeing as all the information you need is already in the GET parameters. All you need to do is to replace one of these parameters as required, then re-encode the string. Easy to do in a template tag, since GET parameters are stored as a QueryDict which is basically just a dictionary.\nSomething like (untested):\n@register.simple_tag\ndef url_with_changed_parameter(request, param, value):\n params = request.GET\n request[param] = value\n return \"%s?%s\" % (request.path, params.urlencode())\n\nand you would use it in your template:\n{% url_with_changed_parameter request \"page\" 2 %}\n\n",
"Have you looked at django-filter? It's really awesome.\n",
"Check out filter mechanism in the admin application, it includes dealing with dynamically constructed URLs with filter information supplied in the query string.\nIn addition - consider saving actual state information in cookies/sessions.\n",
"If You want to save all the \"parameters\", I'd say they are resource identifiers and should normally be the part of URI.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"django",
"http",
"python",
"request"
] |
stackoverflow_0001855184_django_http_python_request.txt
|
Q:
Converting a Tuples List into a nested List using Python
I want to convert a tuples list into a nested list using Python. How do I do that?
I have a sorted list of tuples (sorted by the second value):
[(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1),
(10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]
Now I want it to have like this (second value ignored and nested in lists):
[ [1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2] ]
I've seen other threads in here with map used for such things, but I don't completely understand it. Can anyone provide insight as to the 'correct' python way of doing this?
A:
from operator import itemgetter
from itertools import groupby
lst = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1),
(12, 1), (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]
result = [[x for x, y in group]
for key, group in groupby(lst, key=itemgetter(1))]
groupby(lst, key=itemgetter(1)) generates groups of consecutive elements of lst within which all elements have the same 1st (counting from zero) item. The [x for x, y in group] keeps the 0th item of each element within each group.
A:
It is a bit convoluted, but you can do it with the itertools.groupby function:
>>> lst = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1),
(10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]
>>> from operator import itemgetter
>>> import itertools
>>> [map(itemgetter(0), group) for (key,group) in itertools.groupby(lst, itemgetter(1))]
[[1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2]]
>>>
Explanation:
groupby returns an iterator for each group, where a group is defined as a sequence of entries that have the same value returned by function passed as a separate parameter. itemgetter(1) generates a function that returns x[1] when called with argument x.
Since the groupby iterator returns two values - the key that was used and sequences of the original values which are tuples, we then need to strip out the second value in each tuple, which is what map(itemgetter(0), group) does.
A:
Maybe not the most pythonesque answer, but this works:
d = {}
a = [(1,5), (5,4), (13,3), (4,3), (3,2), (14,1), (12,1)]
for value in a:
    if value[1] not in d:
        d[ value[1] ] = []
    d[ value[1] ].append( value[0] )
print d.values()
A:
The simple solution:
n_list = []
c_snd = None
for (fst, snd) in o_list:
if snd == c_snd: n_list[-1].append(fst)
else:
c_snd = snd
n_list.append([fst])
Explanation: use c_snd to store the current second part of the tuple. If that changes, start a new list in n_list for this new second value, starting with fst, otherwise add fst to the last list in n_list.
A:
Don't know how fast this will be for bigger sets, but you could do something like that:
input = [
(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1),
(12, 1), (10, 1), (9, 1), (8, 1), (7, 1), (6, 1),
(2, 1)
]
output = [[] for _ in xrange(input[0][1])]
for value, key in input:
output[-key].append(value)
print output # => [[1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2]]
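One more sketch along the lines of the dict-based answers: group by the second item with a defaultdict, then emit the groups in descending key order (which matches the sorted input):

```python
from collections import defaultdict

pairs = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1),
         (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]

# Collect first items under their second-item key, preserving input order
# within each group.
groups = defaultdict(list)
for first, second in pairs:
    groups[second].append(first)

result = [groups[k] for k in sorted(groups, reverse=True)]
print(result)  # -> [[1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2]]
```

Unlike the groupby solutions, this does not require the input to be pre-sorted, since the keys are sorted explicitly at the end.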
|
Converting a Tuples List into a nested List using Python
|
I want to convert a tuples list into a nested list using Python. How do I do that?
I have a sorted list of tuples (sorted by the second value):
[(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1),
(10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]
Now I want it to have like this (second value ignored and nested in lists):
[ [1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2] ]
I've seen other threads in here with map used for such things, but I don't completely understand it. Can anyone provide insight as to the 'correct' python way of doing this?
|
[
"from operator import itemgetter\nfrom itertools import groupby\n\nlst = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1),\n (12, 1), (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]\n\nresult = [[x for x, y in group]\n for key, group in groupby(lst, key=itemgetter(1))]\n\ngroupby(lst, key=itemgetter(1)) generates groups of consecutive elements of lst within which all elements have the same 1st (counting from zero) item. The [x for x, y in group] keeps the 0th item of each element within each group.\n",
"It is a bit convoluted, but you can do it with the itertools.groupby function:\n>>> lst = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1), \n (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]\n>>> from operator import itemgetter \n>>> import itertools\n>>> [map(itemgetter(0), group) for (key,group) in itertools.groupby(lst, itemgetter(1))]\n[[1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2]]\n>>> \n\nExplanation:\ngroupby returns an iterator for each group, where a group is defined as a sequence of entries that have the same value returned by function passed as a separate parameter. itemgetter(1) generates a function that returns x[1] when called with argument x. \nSince the groupby iterator returns two values - the key that was used and sequences of the original values which are tuples, we then need to strip out the second value in each tuple, which is what map(itemgetter(0), group) does.\n",
"Maybe not the most pythonesque answer, but this works:\nd = {}\n\na = [(1,5), (5,4), (13,3), (4,3), (3,2), (14,1), (12,1)]\n\nfor value in a:\n if value[0] not in d:\n d[ value[0] ] = []\n d[ value[0] ].append( a[1] )\n\nprint d.values()\n\n",
"The simple solution:\nn_list = []\nc_snd = None\nfor (fst, snd) in o_list:\n if snd == c_snd: n_list[-1].append(fst)\n else:\n c_snd = snd\n n_list.append([fst])\n\nExplanation: use c_snd to store the current second part of the tuple. If that changes, start a new list in n_list for this new second value, starting with fst, otherwise add fst to the last list in n_list.\n",
"Don't know how fast this will be for bigger sets, but you could do something like that:\ninput = [\n (1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1),\n (12, 1), (10, 1), (9, 1), (8, 1), (7, 1), (6, 1),\n (2, 1)\n]\n\noutput = [[] for _ in xrange(input[0][1])]\nfor value, key in input:\n output[-key].append(value)\n\nprint output # => [[1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2]]\n\n"
] |
[
11,
2,
1,
1,
0
] |
[] |
[] |
[
"list",
"nested",
"python",
"tuples"
] |
stackoverflow_0001855471_list_nested_python_tuples.txt
|
Q:
Dynamically Refreshed Pages produced by Python
I've been researching this on and off for a number of months now, but I am incapable of finding clear direction.
My goal is to have a page which has a form on it and a graph on it. The form can be filled out and then sent to the CGI Python script (yeah, I'll move to WSGI or fast_cgi later, I'm starting simple!) I'd like the form to be able to send multiple times, so the user can update the graph, but I don't want the page to reload every time it does that. I have a form and a graph now, but they're on separate pages and work as a conventional script.
I'd like to avoid ALL frameworks except JQuery (as I love it, don't like dealing with the quirks of different browsers, etc).
A nudge in the right direction(s) is all I'm asking for here, or be as specific as you care to.
(I've found similar guides to doing this in PHP, I believe, but for some reason, they didn't serve my purpose.)
EDIT: The graph is generated using Flot (a JQuery plugin) using points generated from the form input and processed in the Python script. The Python script prints the Javascript which produces the graph in the end. It could all be done in Javascript, but I want the heavier stuff to be handled server-side, hence the Python.
Thanks!
A:
I'm assuming that you have two pages at the moment - a page which shows the form, and a page which receives the POST request and displays the graph.
With a little jQuery you can do exactly what you want.
First add to your form page an empty div with id="results". Next in your graph plotting page put the output you want to show to the user in a div with the same id.
Now attach an onclick handler to the submit button (or to the individual parts of the form if you want it to be more dynamic). This should serialize the form, submit it to the plotting page, snatch the contents of the id="results" div and stuff them into the id="results" div on the form page.
This will appear to the user as the graph appearing on the page whenever they click submit.
Here is a sketch of the jQuery code you will need
$(function(){
// Submit form
// Get the returned html, and get the contents of #results and
// put it into this page into #results
var submit = function() {
$.ajax({
type: "POST",
url: "plot.py", // placeholder: the URL of the page that renders the graph
data: $("form").serialize(),
success: function(data, textStatus) {
$("#results").replaceWith($("#results", $(data)));
}
});
};
$("form input[type=submit]").click(submit);
// I think you'll need this as well to make sure the form doesn't submit via the browser
$("form").submit(function () { return false; });
});
Edit
Just to clarify on the above, if you want the form to redraw the graph whenever the user clicks any of the controls not just when the user clicks submit, add a few more things like this
$("form input[type=text]").keypress(submit);
$("form input[type=checkbox], form select").change(submit)
A:
If you'll be loading HTML and Javascript that needs to be executed, and your only reason for not wanting to load a new page is to preserve the surrounding elements, you could probably just stick the form in an IFRAME. When the form is POSTed, only the contents of the IFRAME are replaced with the new contents. No AJAX required either. You might find that the answers here give you sufficient direction, or Google for things like "form post to iframe".
A:
I'd like the form to be able to send multiple times, so the user can update the graph, but I don't want the page to reload every time it does that.
The general pattern goes like that:
Generate an XMLHttpRequest (in the form's onsubmit or its 'submit' button onclick handler) that goes to your Python script. Optionally disable the submit button.
Server side - generate the graph (assuming raw HTML+JS, as hinted by your comment to another answer)
Client side, XmlHttp response handler. Replace the necessary part of your page with the HTML obtained via the response. Get responseText from the request (it contains whatever your Python script produced) and set innerHTML of a control that displays your graph.
The key points are:
using XMLHttpRequest (so that the browser doesn't automatically replace your page with the response).
manipulating the page yourself in the response handler. innerHTML is just one of the options here.
Edit: Here is a simple example of creating and using an XMLHttpRequest. JQuery makes it much simpler, the value of this example is getting to know how it works 'under the hood'.
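One server-side variant of this pattern is a sketch like the following (the helper names, form fields, and the choice of returning JSON rather than ready-made HTML are all assumptions for illustration, not anything prescribed by the answer); the response handler would hand the decoded body to the plotting code:

```python
import json

def graph_points(n, scale):
    # Compute (x, y) pairs server-side; in the real script these
    # parameters would come from the submitted form fields.
    return [[i, scale * i * i] for i in range(n)]

def render_response(form):
    # Build the HTTP response body: a JSON list of points that the
    # client-side XMLHttpRequest handler can pass to the plotting code.
    pts = graph_points(int(form.get("n", 10)), float(form.get("scale", 1)))
    return "Content-Type: application/json\r\n\r\n" + json.dumps(pts)
```

Returning plain data this way keeps the heavy computation in Python while letting the page decide how to redraw.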
A:
Update img.src attribute in onsubmit() handler.
img.src url points to your Python script that should generate an image in response.
onsubmit() for your form could be registered and written using JQuery.
|
Dynamically Refreshed Pages produced by Python
|
I've been researching this on and off for a number of months now, but I am incapable of finding clear direction.
My goal is to have a page which has a form on it and a graph on it. The form can be filled out and then sent to the CGI Python script (yeah, I'll move to WSGI or fast_cgi later, I'm starting simple!) I'd like the form to be able to send multiple times, so the user can update the graph, but I don't want the page to reload every time it does that. I have a form and a graph now, but they're on separate pages and work as a conventional script.
I'd like to avoid ALL frameworks except JQuery (as I love it, don't like dealing with the quirks of different browsers, etc).
A nudge in the right direction(s) is all I'm asking for here, or be as specific as you care to.
(I've found similar guides to doing this in PHP, I believe, but for some reason, they didn't serve my purpose.)
EDIT: The graph is generated using Flot (a JQuery plugin) using points generated from the form input and processed in the Python script. The Python script prints the Javascript which produces the graph in the end. It could all be done in Javascript, but I want the heavier stuff to be handled server-side, hence the Python.
Thanks!
|
[
"I'm assuming that you have two pages at the moment - a page which shows the form, and a page which receives the POST request and displays the graph.\nWill a little jQuery you can do exactly what you want.\nFirst add to your form page an empty div with id=\"results\". Next in your graph plotting page put the output you want to show to the user in a div with the same id.\nNow attach an onclick handler to the submit button (or to the individual parts of the form if you want it to be more dynamic). This should serialize the form, submit it to the plotting page snatch the contents of the id=\"results\" div and stuff them into the id=\"results\" div on the the form page.\nThis will appear to the user as the graph appearing on the page whenever they click submit.\nHere is a sketch of the jQuery code you will need\n$(function(){\n // Submit form\n // Get the returned html, and get the contents of #results and\n // put it into this page into #results\n var submit = function() {\n $.ajax({\n type: \"POST\",\n data: $(\"form\").serialize(),\n success: function(data, textStatus) {\n $(\"#results\").replaceWith($(\"#results\", $(data)));\n }\n });\n };\n $(\"form input[type=submit]\").click(submit);\n // I think you'll need this as well to make sure the form doesn't submit via the browser\n $(\"form\").submit(function () { return false; });\n});\n\nEdit\nJust to clarify on the above, if you want the form to redraw the graph whenever the user clicks any of the controls not just when the user clicks submit, add a few more things like this\n$(\"form input[type=text]\").keypress(submit);\n$(\"form input[type=checkbox], form select\").change(submit)\n\n",
"If you'll be loading HTML and Javascript that needs to be executed, and your only reason for not wanting to load a new page is to preserve the surrounding elements, you could probably just stick the form in an IFRAME. When the form is POSTed, only the contents of the IFRAME are replaced with the new contents. No AJAX required either. You might find that the answers here give you sufficient direction, or Google for things like \"form post to iframe\".\n",
"\nI'd like the form to be able to send multiple times, so the user can update the graph, but I don't want the page to reload every time it doe that.\n\nThe general pattern goes like that:\n\nGenerate an XMLHttpRequest (in form's onsubmit or it's 'submit' button onclick handler) that goes to your Python script. Optionally disable the submit button.\nServer side - generate the graph (assuming raw HTML+JS, as hinted by your comment to another answer)\nClient side, XmlHttp response handler. Replace the necessary part of your page with the HTML obtained via the response. Get responseText from the request (it contains whatever your Python script produced) and set innerHtml of a control that displays your graph.\n\nThe key points are:\n\nusing XMLHttpRequest (so that the browser doesn't automatically replace your page with the response).\nmanipulating the page yourself in the response handler. innerHtml is just one of the options here.\n\nEdit: Here is a simple example of creating and using an XMLHttpRequest. JQuery makes it much simpler, the value of this example is getting to know how it works 'under the hood'.\n",
"Update img.src attribute in onsubmit() handler.\n\nimg.src url points to your Python script that should generate an image in response.\nonsubmit() for your form could be registered and written using JQuery. \n\n"
] |
[
4,
2,
1,
0
] |
[] |
[] |
[
"ajax",
"javascript",
"python"
] |
stackoverflow_0001855748_ajax_javascript_python.txt
|
Q:
Call method from string
If I have a Python class, and would like to call a function from it depending on a variable, how would I do so? I imagined following could do it:
class CallMe: # Class
def App(): # Method one
...
def Foo(): # Method two
...
variable = "App" # Method to call
CallMe.variable() # Calling App()
But it couldn't. Any other way to do this?
A:
You can do this:
getattr(CallMe, variable)()
getattr is a built-in function; it returns the value of the named attribute of object. The value in this case is a method object that you can call with ()
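A runnable sketch of that lookup, including getattr's optional third argument, which supplies a fallback value instead of raising AttributeError (the class and method names mirror the question):

```python
class CallMe(object):
    def App(self):
        return "App ran"

    def Foo(self):
        return "Foo ran"

obj = CallMe()
variable = "App"                 # method name held in a string
method = getattr(obj, variable)  # bound method looked up by name
result = method()                # same as calling obj.App()

# Optional third argument: a default instead of an AttributeError
fallback = getattr(obj, "Missing", None)
```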
A:
You can use getattr, or you can assign bound or unbound methods to the variable. Bound methods are tied to a particular instance of the class, and unbound methods are tied to the class, so you have to pass an instance in as the first parameter.
e.g.
class CallMe:
def App(self):
print "this is App"
def Foo(self):
print "I'm Foo"
obj = CallMe()
# bound method:
var = obj.App
var() # prints "this is App"
# unbound method:
var = CallMe.Foo
var(obj) # prints "I'm Foo"
A:
Your class has been declared as an "old-style class". I recommend you make all your classes be "new-style classes".
The difference between the old and the new is that new-style classes inherit from object, which enables features such as properties, descriptors, and a consistent method resolution order. You might not need those right away, but it's a good habit to get into.
Here is all you have to do to make a new-style class: you use the Python syntax to say that it inherits from "object". You do that by putting parentheses after the class name and putting the name object inside the parentheses. Like so:
class CallMe(object): # Class
def App(): # Method one
...
def Foo(): # Method two
...
As I said, you might not need to use inheritance right away, but this is a good habit to get into. There are several questions here on StackOverflow to the effect of "I'm trying to do X and it doesn't work" and it turns out the person had coded an old-style class.
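As a concrete illustration, descriptors such as property are among the features that behave correctly only on new-style classes in Python 2; the sketch below also runs unchanged on Python 3, where every class is new-style:

```python
class CallMe(object):          # new-style: inherits from object
    def __init__(self):
        self._x = 42

    @property                  # property is a descriptor, so it needs
    def x(self):               # a new-style class to behave properly
        return self._x

obj = CallMe()
value = obj.x                  # attribute access runs the getter
```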
A:
Your code does not look like Python; maybe you want to do it like this?
class CallMe:
def App(self): #// Method one
print "hello"
def Foo(self): #// Method two
return None
variable = App #// Method to call
CallMe().variable() #// Calling App()
|
Call method from string
|
If I have a Python class, and would like to call a function from it depending on a variable, how would I do so? I imagined following could do it:
class CallMe: # Class
def App(): # Method one
...
def Foo(): # Method two
...
variable = "App" # Method to call
CallMe.variable() # Calling App()
But it couldn't. Any other way to do this?
|
[
"You can do this:\ngetattr(CallMe, variable)()\n\ngetattr is a builtin method, it returns the value of the named attribute of object. The value in this case is a method object that you can call with ()\n",
"You can use getattr, or you can assign bound or unbound methods to the variable. Bound methods are tied to a particular instance of the class, and unbound methods are tied to the class, so you have to pass an instance in as the first parameter.\ne.g.\nclass CallMe:\n def App(self):\n print \"this is App\"\n\n def Foo(self):\n print \"I'm Foo\"\n\nobj = CallMe()\n\n# bound method:\nvar = obj.App\nvar() # prints \"this is App\"\n\n# unbound method:\nvar = CallMe.Foo\nvar(obj) # prints \"I'm Foo\"\n\n",
"Your class has been declared as an \"old-style class\". I recommend you make all your classes be \"new-style classes\".\nThe difference between the old and the new is that new-style classes can use inheritance, which you might not need right away. But it's a good habit to get into.\nHere is all you have to do to make a new-style class: you use the Python syntax to say that it inherits from \"object\". You do that by putting parentheses after the class name and putting the name object inside the parentheses. Like so:\nclass CallMe(object): # Class\n\n def App(): # Method one\n\n ...\n\n def Foo(): # Method two\n\n ...\n\nAs I said, you might not need to use inheritance right away, but this is a good habit to get into. There are several questions here on StackOverflow to the effect of \"I'm trying to do X and it doesn't work\" and it turns out the person had coded an old-style class.\n",
"Your code does not look like python, may be you want to do like this?\nclass CallMe:\n\n def App(self): #// Method one\n print \"hello\"\n\n def Foo(self): #// Method two\n return None\n\n variable = App #// Method to call\n\nCallMe().variable() #// Calling App()\n\n"
] |
[
119,
2,
1,
0
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0001855558_oop_python.txt
|
Q:
Fast conversion of numeric data into fixed width format file in Python
What is the fastest way of converting records holding only numeric data into fixed-width format strings and writing them to a file in Python? For example, suppose records is a huge list consisting of objects with attributes id, x, y, and wt and we frequently need to flush them to an external file. The flushing can be done with the following snippet:
with open(serial_fname(), "w") as f:
for r in records:
f.write("%07d %11.5e %11.5e %7.5f\n" % (r.id, r.x, r.y, r.wt))
However my code is spending too much time generating external files leaving too little time for doing what it is supposed to do between the flushes.
Amendment to the original question:
I ran into this problem while writing a server software that keeps track of a global record set by pulling the information from several "producer" systems and relays any changes to the record set to "consumer" systems in real-time or near real-time in preprocessed form. Many of the consumer systems are Matlab applications.
I have listed below some suggestions I have received so far (thanks) with some comments:
Dump only the changes, not the whole data set: I'm actually doing this already. The resulting change sets are still huge.
Use binary (or some other more efficient) file format: I'm pretty much constrained by what Matlab can read reasonably efficiently and in addition to that the format should be platform independent.
Use database: I am actually trying to bypass the current database solution that is deemed both too slow and cumbersome, especially on Matlab's side.
Dividing the task into separate processes: At the moment the dumping code is running in its own thread. However, because of the GIL it is still consuming the same core. I guess I could move it to a completely separate process.
A:
I was trying to check if numpy.savetxt could speed things up a bit so I wrote the following simulation:
import sys
import numpy as np
fmt = '%7.0f %11.5e %11.5e %7.5f'
records = 10000
np.random.seed(1234)
aray = np.random.rand(records, 4)
def writ(f, aray=aray, fmt=fmt):
fw = f.write
for row in aray:
fw(fmt % tuple(row))
def prin(f, aray=aray, fmt=fmt):
for row in aray:
print>>f, fmt % tuple(row)
def stxt(f, aray=aray, fmt=fmt):
np.savetxt(f, aray, fmt)
nul = open('/dev/null', 'w')
def tonul(func, nul=nul):
func(nul)
def main():
    print 'writ:'
    writ(sys.stdout)
    print 'prin:'
    prin(sys.stdout)
    print 'stxt:'
    stxt(sys.stdout)
I found the results (on my 2.4 GHz Core Duo Macbook Pro, with Mac OS X 10.5.8, Python 2.5.4 from the DMG on python.org, numpy 1.4 rc1 built from sources) slightly surprising, but they're quite repeatable so I thought they may be of interest:
$ py25 -mtimeit -s'import ft' 'ft.tonul(ft.writ)'
10 loops, best of 3: 101 msec per loop
$ py25 -mtimeit -s'import ft' 'ft.tonul(ft.prin)'
10 loops, best of 3: 98.3 msec per loop
$ py25 -mtimeit -s'import ft' 'ft.tonul(ft.stxt)'
10 loops, best of 3: 104 msec per loop
so, savetxt seems to be a few percent slower than a loop calling write... but good old print (also in a loop) seems to be a few percent faster than write (I guess it's avoiding some kind of call overhead). I realize that a difference of 2.5% or so isn't very important, but it's not in the direction I intuitively expected it to be, so I thought I'd report it. (BTW, using a real file instead of /dev/null only uniformly adds 6 or 7 milliseconds, so it doesn't change things much, one way or another).
A:
I don't see anything about your snippet of code that I could really optimize. So, I think we need to do something completely different to solve your problem.
Your problem seems to be that you are chewing large amounts of data, and it's slow to format the data into strings and write the strings to a file. You said "flush" which implies you need to save the data regularly.
Are you saving all the data regularly, or just the changed data? If you are dealing with a very large data set, changing just some data, and writing all of the data... that's an angle we could attack to solve your problem.
If you have a large data set, and you want to update it from time to time... you are a candidate for a database. A real database, written in C for speed, will let you throw lots of data updates at it, and will keep all the records in a consistent state. Then you can, at intervals, run a "report" which will pull the records and write your fixed-width text file from them.
In other words, I'm proposing you divide the problem into two parts: updating the data set piecemeal as you compute or receive more data, and dumping the entire data set into your fixed-width text format, for your further processing.
Note that you could actually generate the text file from the database without stopping the Python process that is updating it. You would get an incomplete snapshot, but if the records are independent, that should be okay.
If your further processing is in Python also, you could just leave the data in the database forever. Don't bother round-tripping the data through a fixed-width text file. I'm assuming you are using a fixed-width text file because it's easy to extract the data again for future processing.
If you use the database idea, try to use PostgreSQL. It's free and it's a real database. For using a database with Python, you should use an ORM. One of the best is SqlAlchemy.
Another thing to consider: if you are saving the data in a fixed-width text file format for future parsing and use of the data in another application, and if that application can read JSON as well as fixed-width, maybe you could use a C module that writes JSON. It might not be any faster, but it might; you could benchmark it and see.
Other than the above, my only other idea is to split your program into a "worker" part and an "updater" part, where the worker generates updated records and the updater part saves the records to disk. Perhaps have them communicate by having the worker put the updated records, in text format, to the standard output; and have the updater read from standard input and update its record of the data. Instead of an SQL database, the updater could use a dictionary to store the text records; as new ones arrived, it could simply update the dictionary. Something like this:
for line in sys.stdin:
id = line[:7] # fixed width: id is 7 wide
records[id] = line # will insert or update as needed
You could actually have the updater keep two dictionaries, and keep updating one while the other one is written out to disk.
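That two-dictionary scheme could be sketched like this (the class and method names are illustrative):

```python
class RecordBuffer(object):
    # Updates land in `live`; swap() hands back a snapshot for the
    # disk writer while fresh updates accumulate in a new dict.
    def __init__(self):
        self.live = {}

    def update(self, line):
        rec_id = line[:7]         # fixed width: id is 7 wide
        self.live[rec_id] = line  # insert or update as needed

    def swap(self):
        snapshot, self.live = self.live, {}
        return snapshot
```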
Dividing into a worker and an updater is a good way to make sure the worker doesn't spend all its time updating, and a great way to balance the work across multiple CPU cores.
I'm out of ideas for now.
A:
You can try to build all the output strings in memory, e.g. as one long string, and then write that long string to the file in a single call.
Faster still:
you may want to use binary files rather than text files for logging information. But then you need to write another tool to view the binary files.
A:
Now that you updated your question, I have a slightly better idea of what you are facing.
I don't know what the "current database solution that is deemed both too slow and cumbersome" is, but I still think a database would help if used correctly.
Run the Python code to collect data, and use an ORM module to insert/update the data into the database. Then run a separate process to make a "report", which would be the fixed-width text files. The database would be doing all the work of generating your text file. If necessary, put the database on its own server, since hardware is pretty cheap these days.
A:
You could try to push your loop to C using ctypes.
|
Fast conversion of numeric data into fixed width format file in Python
|
What is the fastest way of converting records holding only numeric data into fixed-width format strings and writing them to a file in Python? For example, suppose records is a huge list consisting of objects with attributes id, x, y, and wt and we frequently need to flush them to an external file. The flushing can be done with the following snippet:
with open(serial_fname(), "w") as f:
for r in records:
f.write("%07d %11.5e %11.5e %7.5f\n" % (r.id, r.x, r.y, r.wt))
However my code is spending too much time generating external files leaving too little time for doing what it is supposed to do between the flushes.
Amendment to the original question:
I ran into this problem while writing a server software that keeps track of a global record set by pulling the information from several "producer" systems and relays any changes to the record set to "consumer" systems in real-time or near real-time in preprocessed form. Many of the consumer systems are Matlab applications.
I have listed below some suggestions I have received so far (thanks) with some comments:
Dump only the changes, not the whole data set: I'm actually doing this already. The resulting change sets are still huge.
Use binary (or some other more efficient) file format: I'm pretty much constrained by what Matlab can read reasonably efficiently and in addition to that the format should be platform independent.
Use database: I am actually trying to bypass the current database solution that is deemed both too slow and cumbersome, especially on Matlab's side.
Dividing the task into separate processes: At the moment the dumping code is running in its own thread. However, because of the GIL it is still consuming the same core. I guess I could move it to a completely separate process.
|
[
"I was trying to check if numpy.savetxt could speed things up a bit so I wrote the following simulation:\nimport sys\nimport numpy as np\n\nfmt = '%7.0f %11.5e %11.5e %7.5f'\nrecords = 10000\n\nnp.random.seed(1234)\naray = np.random.rand(records, 4)\n\ndef writ(f, aray=aray, fmt=fmt):\n fw = f.write\n for row in aray:\n fw(fmt % tuple(row))\n\ndef prin(f, aray=aray, fmt=fmt):\n for row in aray:\n print>>f, fmt % tuple(row)\n\ndef stxt(f, aray=aray, fmt=fmt):\n np.savetxt(f, aray, fmt)\n\nnul = open('/dev/null', 'w')\ndef tonul(func, nul=nul):\n func(nul)\n\ndef main():\n print 'looping:'\n loop(sys.stdout, aray)\n print 'savetxt:'\n savetxt(sys.stdout, aray)\n\nI found the results (on my 2.4 GHz Core Duo Macbook Pro, with Mac OS X 10.5.8, Python 2.5.4 from the DMG on python.org, numpy 1.4 rc1 built from sources) slightly surprising, but they're quite repeatable so I thought they may be of interest:\n$ py25 -mtimeit -s'import ft' 'ft.tonul(ft.writ)'\n10 loops, best of 3: 101 msec per loop\n$ py25 -mtimeit -s'import ft' 'ft.tonul(ft.prin)'\n10 loops, best of 3: 98.3 msec per loop\n$ py25 -mtimeit -s'import ft' 'ft.tonul(ft.stxt)'\n10 loops, best of 3: 104 msec per loop\n\nso, savetxt seems to be a few percent slower than a loop calling write... but good old print (also in a loop) seems to be a few percents faster than write (I guess it's avoiding some kind of call overhead). I realize that a difference of 2.5% or so isn't very important, but it's not in the direction I intuitively expected it to be, so I thought I'd report it. (BTW, using a real file instead of /dev/null only uniformly adds 6 or 7 milliseconds, so it doesn't change things much, one way or another).\n",
"I don't see anything about your snippet of code that I could really optimize. So, I think we need to do something completely different to solve your problem.\nYour problem seems to be that you are chewing large amounts of data, and it's slow to format the data into strings and write the strings to a file. You said \"flush\" which implies you need to save the data regularly.\nAre you saving all the data regularly, or just the changed data? If you are dealing with a very large data set, changing just some data, and writing all of the data... that's an angle we could attack to solve your problem.\nIf you have a large data set, and you want to update it from time to time... you are a candidate for a database. A real database, written in C for speed, will let you throw lots of data updates at it, and will keep all the records in a consistent state. Then you can, at intervals, run a \"report\" which will pull the records and write your fixed-width text file from them.\nIn other words, I'm proposing you divide the problem into two parts: updating the data set piecemeal as you compute or receive more data, and dumping the entire data set into your fixed-width text format, for your further processing.\nNote that you could actually generate the text file from the database without stopping the Python process that is updating it. You would get an incomplete snapshot, but if the records are independent, that should be okay.\nIf your further processing is in Python also, you could just leave the data in the database forever. Don't bother round-tripping the data through a fixed-width text file. I'm assuming you are using a fixed-width text file because it's easy to extract the data again for future processing.\nIf you use the database idea, try to use PostgreSQL. It's free and it's a real database. For using a database with Python, you should use an ORM. 
One of the best is SqlAlchemy.\nAnother thing to consider: if you are saving the data in a fixed-width text file format for future parsing and use of the data in another application, and if that application can read JSON as well as fixed-width, maybe you could use a C module that writes JSON. It might not be any faster, but it might; you could benchmark it and see.\nOther than the above, my only other idea is to split your program into a \"worker\" part and an \"updater\" part, where the worker generates updated records and the updater part saves the records to disk. Perhaps have them communicate by having the worker put the updated records, in text format, to the standard output; and have the updater read from standard input and update its record of the data. Instead of an SQL database, the updater could use a dictionary to store the text records; as new ones arrived, it could simply update the dictionary. Something like this:\nfor line in sys.stdin:\n id = line[:7] # fixed width: id is 7 wide\n records[id] = line # will insert or update as needed\n\nYou could actually have the updater keep two dictionaries, and keep updating one while the other one is written out to disk.\nDividing into a worker and an updater is a good way to make sure the worker doesn't spend all its time updating, and a great way to balance the work across multiple CPU cores.\nI'm out of ideas for now.\n",
"you can try to build all the output strings in the memory, e.g. use a long string.\nand then write this long string in the file.\nmore faster:\nyou may want to use binary files rather text files for logging information. But then you need to write another tool to view the binary files. \n",
"Now that you updated your question, I have a slightly better idea of what you are facing.\nI don't know what the \"current database solution that is deemed both too slow and cumbersome\" is, but I still think a database would help if used correctly.\nRun the Python code to collect data, and use an ORM module to insert/update the data into the database. Then run a separate process to make a \"report\", which would be the fixed-width text files. The database would be doing all the work of generating your text file. If necessary, put the database on its own server, since hardware is pretty cheap these days.\n",
"You could try to push your loop to C using ctypes.\n"
] |
[
3,
2,
0,
0,
0
] |
[] |
[] |
[
"large_data_volumes",
"performance",
"python"
] |
stackoverflow_0001854713_large_data_volumes_performance_python.txt
|
Q:
how to access sys.argv (or any string variable) in raw mode?
I'm having difficulties parsing filepaths sent as arguments:
If I type:
os.path.normpath('D:\Data2\090925')
I get
'D:\\Data2\x0090925'
Obviously the \0 in the folder name is upsetting the formatting. I can correct it with the following:
os.path.normpath(r'D:\Data2\090925')
which gives
'D:\\Data2\\090925'
My problem is, how do I achieve the same result with sys.argv? Namely:
os.path.normpath(sys.argv[1])
I can't find a way to feed sys.argv in raw mode into os.path.normpath() to avoid issues with folders starting with zero!
Also, I'm aware that I could feed the script with python script.py D:/Data2/090925 , and it would work perfectly, but unfortunately the windows system stubbornly supplies me with the '\', not the '/', so I really need to solve this issue instead of avoiding it.
UPDATE1 to complement:
if I use the script test.py:
import os, sys
if __name__ == '__main__':
print 'arg 1: ',sys.argv[1]
print 'arg 1 (normpath): ',os.path.normpath(sys.argv[1])
print 'os.path.dirname :', os.path.dirname(os.path.normpath(sys.argv[1]))
I get the following:
C:\Python>python test.py D:\Data2\091002\
arg 1: D:\Data2\091002\
arg 1 (normpath): D:\Data2\091002
os.path.dirname : D:\Data2
i.e.: I've lost 091002...
UPDATE2: as the comments below informed me, the problem is solved for the example I gave when normpath is removed:
import os, sys
if __name__ == '__main__':
print 'arg 1: ',sys.argv[1]
print 'os.path.dirname :', os.path.dirname(sys.argv[1])
print 'os.path.split(sys.argv[1]):', os.path.split(sys.argv[1])
Which gives:
C:\Python>python test.py D:\Data2\091002\
arg 1: D:\Data2\091002\
os.path.dirname : D:\Data2\091002
os.path.split : ('D:\\Data2\\091002', '')
And if I use D:\Data2\091002 :
C:\Python>python test.py D:\Data2\091002
arg 1: D:\Data2\091002
os.path.dirname : D:\Data2
os.path.split : ('D:\\Data2', '091002')
Which is something I can work with: Thanks!
A:
"Losing" the last part of your path is nothing to do with escaping (or lack of it) in sys.argv.
It is the expected behaviour if you use os.path.normpath() and then os.path.dirname().
>>> import os
>>> os.path.normpath("c:/foo/bar/")
'c:\\foo\\bar'
>>> os.path.dirname('c:\\foo\\bar')
'c:\\foo'
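The same behaviour can be reproduced on any platform with ntpath, the Windows flavour of os.path (the doubled backslashes below exist only because these are string literals in source code; sys.argv values arrive verbatim from the OS and need no escaping):

```python
import ntpath  # Windows path rules, importable on any platform

p = "D:\\Data2\\091002"          # what sys.argv[1] actually contains
assert ntpath.dirname(p) == "D:\\Data2"
assert ntpath.split(p) == ("D:\\Data2", "091002")

# With a trailing backslash, dirname keeps the last component
q = p + "\\"
assert ntpath.dirname(q) == "D:\\Data2\\091002"
assert ntpath.split(q) == ("D:\\Data2\\091002", "")
```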
A:
Here is a snippet of Python code to add a backslash to the end of the directory path:
def add_path_slash(s_path):
if not s_path:
return s_path
if 2 == len(s_path) and ':' == s_path[1]:
return s_path # it is just a drive letter
if s_path[-1] in ('/', '\\'):
return s_path
return s_path + '\\'
|
how to access sys.argv (or any string variable) in raw mode?
|
I'm having difficulties parsing filepaths sent as arguments:
If I type:
os.path.normpath('D:\Data2\090925')
I get
'D:\\Data2\x0090925'
Obviously the \0 in the folder name is upsetting the formatting. I can correct it with the following:
os.path.normpath(r'D:\Data2\090925')
which gives
'D:\\Data2\\090925'
My problem is, how do I achieve the same result with sys.argv? Namely:
os.path.normpath(sys.argv[1])
I can't find a way for feeding sys.argv in a raw mode into os.path.normpath() to avoid issues with folders starting with zero!
Also, I'm aware that I could feed the script with python script.py D:/Data2/090925 , and it would work perfectly, but unfortunately the windows system stubbornly supplies me with the '\', not the '/', so I really need to solve this issue instead of avoiding it.
UPDATE1 to complement:
if I use the script test.py:
import os, sys
if __name__ == '__main__':
print 'arg 1: ',sys.argv[1]
print 'arg 1 (normpath): ',os.path.normpath(sys.argv[1])
print 'os.path.dirname :', os.path.dirname(os.path.normpath(sys.argv[1]))
I get the following:
C:\Python>python test.py D:\Data2\091002\
arg 1: D:\Data2\091002\
arg 1 (normpath): D:\Data2\091002
os.path.dirname : D:\Data2
i.e.: I've lost 091002...
UPDATE2: as the comments below informed me, the problem is solved for the example I gave when normpath is removed:
import os, sys
if __name__ == '__main__':
print 'arg 1: ',sys.argv[1]
print 'os.path.dirname :', os.path.dirname(sys.argv[1])
print 'os.path.split(sys.argv[1])):', os.path.split(sys.argv[1])
Which gives:
C:\Python>python test.py D:\Data2\091002\
arg 1: D:\Data2\091002\
os.path.dirname : D:\Data2\091002
os.path.split : ('D:\\Data2\\091002', '')
And if I use D:\Data2\091002 :
C:\Python>python test.py D:\Data2\091002
arg 1: D:\Data2\091002
os.path.dirname : D:\Data2
os.path.split : ('D:\\Data2', '091002')
Which is something I can work with: Thanks!
|
[
"\"Losing\" the last part of your path is nothing to do with escaping (or lack of it) in sys.argv.\nIt is the expected behaviour if you use os.path.normpath() and then os.path.dirname().\n>>> import os\n>>> os.path.normpath(\"c:/foo/bar/\")\n'c:\\\\foo\\\\bar'\n>>> os.path.dirname('c:\\\\foo\\\\bar')\n'c:\\\\foo'\n\n",
"Here is a snippet of Python code to add a backslash to the end of the directory path:\ndef add_path_slash(s_path):\n if not s_path:\n return s_path\n if 2 == len(s_path) and ':' == s_path[1]:\n return s_path # it is just a drive letter\n if s_path[-1] in ('/', '\\\\'):\n return s_path\n return s_path + '\\\\'\n\n"
] |
[
5,
0
] |
[] |
[] |
[
"python",
"string_literals"
] |
stackoverflow_0001855477_python_string_literals.txt
|
Q:
Python module database configuration?
I have a python module which contains a few objects, one of which uses a MySQL connection to persist some data. What's the best way to allow for easy configuration of the MySQL connection information without making the user go into the installed module location and edit files?
A:
Allow the user to write configuration information for the program in a file of the format that ConfigParser knows how to parse -- this way, the user doesn't have to "go into the installed module location" but can edit the configuration file in more convenient places.
It is traditional and helpful for the program to attempt to read both a "per-user" configuration file (in the user's home directory or subdir thereof), and a "per-location" configuration file that a system administrator could use to provide the users with some defaults; so, the read method of config parser objects accepts a list of filenames and tries to parse each of them in sequence (see the simple example in the docs I'm pointing to).
A:
You could pull the configuration details for MySQL from an .ini file using ConfigParser.
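As a concrete illustration of the site-wide-plus-per-user pattern described above, here is a self-contained sketch (using the Python 3 module name configparser; file contents and section names are placeholders, written to a temp directory so the snippet runs anywhere):

```python
import configparser, os, tempfile

# Site-wide defaults, overridden by a per-user file (paths are placeholders)
tmpdir = tempfile.mkdtemp()
site_ini = os.path.join(tmpdir, "site.ini")
user_ini = os.path.join(tmpdir, "user.ini")
with open(site_ini, "w") as f:
    f.write("[mysql]\nhost = localhost\nuser = app\n")
with open(user_ini, "w") as f:
    f.write("[mysql]\nhost = db.example.com\n")

cfg = configparser.ConfigParser()
cfg.read([site_ini, user_ini])    # later files win on conflicting keys
print(cfg["mysql"]["host"])       # db.example.com
print(cfg["mysql"]["user"])       # app
```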
|
Python module database configuration?
|
I have a python module which contains a few objects, one of which uses a MySQL connection to persist some data. What's the best way to allow for easy configuration of the MySQL connection information without making the user go into the installed module location and edit files?
|
[
"Allow the user to write configuration information for the program in a file of the format that ConfigParser knows how to parse -- this way, the user doesn't have to \"go into the installed module location\" but can edit the configuration file in more convenient places.\nIt is traditional and helpful for the program to attempt to read both a \"per-user\" configuration file (in the user's home directory or subdir thereof), and a \"per-location\" configuration file that a system administrator could use to provide the users with some defaults; so, the read method of config parser objects accepts a list of filenames and tries to parse each of them in sequence (see the simple example in the docs I'm pointing to).\n",
"You could pull the configuration details for MySQL from an .ini file using ConfigParser.\n"
] |
[
4,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001856356_python.txt
|
Q:
Python popen wont work with block devices
I am writing a small forensic app which looks like this so far
import time, os,sys
def getter():
filename = sys.argv[1]
print "File Entered: " + filename
os.system('file ' + filename)
print "\n"
pipe = os.popen("xxd " + filename, "r")
print pipe.read()
I pass a file in via the command line; it prints out the type of the file, and then it is supposed to create a pipe from the terminal back to the Python app under the name "pipe", which I then read.
This works great on smaller text files, but it will not work on block devices even when I run this app as super user. In the end this will recover files based on the output of xxd.
any tips, thank you.
A:
I assume you've tried the same command from the shell prompt and it worked well, even when piped to less or something of the sort.
I have a strong feeling that using subprocess will fix this behavior; This module replaces the popen call which is deprecated, and gives a great degree of flexibility in running and getting results from the executed commands.
output = Popen(["xxd", "file"], stdout=PIPE).communicate()[0]
Try to run the process under its own shell, and read its pipe. If this still does not work, make sure to set a reasonable limit to its buffer size, and make sure you're not using communicate which will make you wait till the process terminates (a long time when handling a block device).
Comment my answer if it didn't work, and try to accurately describe what went wrong - is it freezing, just no output or corrupt results?
A:
What happens if you just run xxd /dev/diskwhatever from the shell prompt? Does it work, and how much information does it spew out? Assuming that, as superuser, you do have read permission, the attempt to read everything at one gulp in your code's last line would be the point where failure could be expected (since the amount of information can be really huge); the workaround would be to read a little at a time, instead of doing a single .read() call.
Edit: whether you get your pipe in the nice modern way (with subprocess) or the deprecated-but-still-working old way (with popen), makes no difference to this problem. In either case, you can get one line at a time by just looping over the pipe object, for example:
child = subprocess.Popen(whatever, stdout=subprocess.PIPE)
for line in child.stdout:
print "One more line:", line
A:
The problem probably lies with how you call xxd, given that your application doesn't do anything at all with the file.
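To make the chunked-reading advice above concrete, here is a self-contained sketch; a small Python child process stands in for xxd on a block device, since the point is reading bounded chunks rather than one huge .read():

```python
import subprocess, sys

# Read the child's stdout in fixed-size chunks instead of one big .read()
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write('ab' * 1000)"],
    stdout=subprocess.PIPE,
)
total = 0
while True:
    chunk = child.stdout.read(4096)   # bounded buffer per read
    if not chunk:
        break
    total += len(chunk)
child.wait()
print(total)  # 2000
```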
|
Python popen wont work with block devices
|
I am writing a small forensic app which looks like this so far
import time, os,sys
def getter():
filename = sys.argv[1]
print "File Entered: " + filename
os.system('file ' + filename)
print "\n"
pipe = os.popen("xxd " + filename, "r")
print pipe.read()
I pass a file in via the command line; it prints out the type of the file, and then it is supposed to create a pipe from the terminal back to the Python app under the name "pipe", which I then read.
This works great on smaller text files, but it will not work on block devices even when I run this app as super user. In the end this will recover files based on the output of xxd.
any tips, thank you.
|
[
"I assume you've tried the same command from the shell prompt and it worked well, even when piped to less or something of the sort.\nI have a strong feeling that using subprocess will fix this behavior; This module replaces the popen call which is deprecated, and gives a great degree of flexibility in running and getting results from the executed commands.\noutput = Popen([\"xxd\", \"file\"], stdout=PIPE).communicate()[0]\n\nTry to run the process under its own shell, and read its pipe. If this still does not work, make sure to set a reasonable limit to its buffer size, and make sure you're not using communicate which will make you wait till the process terminates (a long time when handling a block device).\nComment my answer if it didn't work, and try to accurately describe what wen't wrong - is it freezing, just no output or corrupt results?\n",
"What happens if you just run xxd /dev/diskwhatever from the shell prompt? Does it work, and how much information does it spew out? Assuming that, as superuser, you do have read permission, the attempt to read everything at one gulp in your code's last line would be the point where failure could be expected (since the amount of information can be really huge); the workaround would be to read a little at a time, instead of doing a single .read() call.\nEdit: whether you get your pipe in the nice modern way (with subprocess) or the deprecated-but-still-working old way (with popen), makes no difference to this problem. In either case, you can get one line at a time by just looping over the pipe object, for example:\nchild = subprocess.Popen(whatever, stdout=subprocess.PIPE)\nfor line in child.stdout:\n print \"One more line:\", line\n\n",
"The problem probably lies with how you call xxd, given that your application doesn't do anything at all with the file.\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"pipe",
"python",
"subprocess"
] |
stackoverflow_0001856372_pipe_python_subprocess.txt
|
Q:
SyntaxError inconsistency in Python?
Consider these two snippets:
try:
a+a=a
except SyntaxError:
print "first exception caught"
.
try:
eval("a+a=a")
except SyntaxError:
print "second exception caught"
In the second case the "second exception .." statement is printed (exception caught), while in the first one isn't.
Is the first exception (let's call it "SyntaxError1") any different from the second one ("SyntaxError2")?
Is there any way to catch SyntaxError1 (thus suppressing compilation-time errors)? Wrapping large blocks of code in eval is unsatisfactory ;)
A:
In the first case, the exception is raised by the compiler, which is running before the try/except structure even exists (since it's the compiler itself that will set it up right after parsing). In the second case, the compiler is running twice -- and the exception is getting raised when the compiler runs as part of eval, after the first run of the compiler has already set up the try/except.
So, to intercept syntax errors, one way or another, you have to arrange for the compiler to run twice -- eval is one way, explicit compile built-in function calls another, import is quite handy (after writing the code to another file), exec and execfile other possibilities yet. But however you do it, syntax errors can be caught only after the compiler has run one first time to set up the try/except blocks you need!
A:
Short answer: No.
Syntax errors happen when the code is parsed, which for normal Python code is before the code is executed - the code is not executing inside the try/except block since the code is not executing, period.
However when you eval or exec some code, then you are parsing it at runtime, so you can catch the exception.
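Besides eval and exec, the built-in compile() mentioned above also forces a second compiler run, which makes the syntax error catchable at runtime. A minimal sketch:

```python
src = "a+a=a"    # invalid assignment target, just like the question's example
try:
    compile(src, "<string>", "exec")   # parse (but don't run) the source
except SyntaxError:
    print("caught at runtime")
```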
|
SyntaxError inconsistency in Python?
|
Consider these two snippets:
try:
a+a=a
except SyntaxError:
print "first exception caught"
.
try:
eval("a+a=a")
except SyntaxError:
print "second exception caught"
In the second case the "second exception .." statement is printed (exception caught), while in the first one isn't.
Is the first exception (let's call it "SyntaxError1") any different from the second one ("SyntaxError2")?
Is there any way to catch SyntaxError1 (thus suppressing compilation-time errors)? Wrapping large blocks of code in eval is unsatisfactory ;)
|
[
"In the first case, the exception is raised by the compiler, which is running before the try/except structure even exists (since it's the compiler itself that will set it up right after parsing). In the second case, the compiler is running twice -- and the exception is getting raised when the compiler runs as part of eval, after the first run of the compiler has already set up the try/except.\nSo, to intercept syntax errors, one way or another, you have to arrange for the compiler to run twice -- eval is one way, explicit compile built-in function calls another, import is quite handy (after writing the code to another file), exec and execfile other possibilities yet. But however you do it, syntax errors can be caught only after the compiler has run one first time to set up the try/except blocks you need!\n",
"Short answer: No.\nSyntax errors happen when the code is parsed, which for normal Python code is before the code is executed - the code is not executing inside the try/except block since the code is not executing, period.\nHowever when you eval or exec some code, then you are parsing it at runtime, so you can catch the exception.\n"
] |
[
24,
4
] |
[] |
[] |
[
"exception",
"python"
] |
stackoverflow_0001856408_exception_python.txt
|
Q:
Defining PYTHONPATH for http requests on a shared server
I'm installing Django on Bluehost and one of the steps to install it was to install flup on their server. I did so and everything works great when I'm logged in via the SSH. However when I actually hit the page in my browser it can't find flup. I get this error in the server log:
ERROR: No module named flup. Unable to load the flup package. In order to run django as a FastCGI application, you will need to get flup from http://www.saddi.com/software/flup/ If you've already installed flup, then make sure you have it in your PYTHONPATH.
Since it recognizes flup when I'm in the SSH my best guess is that there's some other bash file I need to change to get PYTHONPATH pointing to the right places for the http request. But since it's a shared server I don't have a whole lot of privileges outside of my home directory.
Any ideas?
A:
If you can identify what module exactly is trying to import flup, you can prepend that import with a sys.path.append of the path to which you have installed flup -- as long as the sys.path.append happens before the import flup, you're in clover.
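A minimal sketch of that sys.path.append approach; the directory is a placeholder for wherever flup was actually installed:

```python
import sys

flup_dir = "/home/myuser/python-libs"   # placeholder: your flup install dir
if flup_dir not in sys.path:
    sys.path.append(flup_dir)           # must happen before "import flup"
# import flup   # would now be found, assuming flup lives in flup_dir
```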
|
Defining PYTHONPATH for http requests on a shared server
|
I'm installing Django on Bluehost and one of the steps to install it was to install flup on their server. I did so and everything works great when I'm logged in via the SSH. However when I actually hit the page in my browser it can't find flup. I get this error in the server log:
ERROR: No module named flup. Unable to load the flup package. In order to run django as a FastCGI application, you will need to get flup from http://www.saddi.com/software/flup/ If you've already installed flup, then make sure you have it in your PYTHONPATH.
Since it recognizes flup when I'm in the SSH my best guess is that there's some other bash file I need to change to get PYTHONPATH pointing to the right places for the http request. But since it's a shared server I don't have a whole lot of privileges outside of my home directory.
Any ideas?
|
[
"If you can identify what module exactly is trying to import flup, you can prepend that import with a sys.path.append of the path to which you have installed flup -- as long as the sys.path.append happens before the import flup, you're in clover.\n"
] |
[
2
] |
[] |
[] |
[
"bluehost",
"django",
"flup",
"python",
"pythonpath"
] |
stackoverflow_0001856439_bluehost_django_flup_python_pythonpath.txt
|
Q:
What is the best way to call Java code from Python?
I have a Java class library (3rd party, proprietary) and I want my Python script to call its functions. I already have Java code that uses this library. What is the best way to achieve this?
A:
Can you run your current Python scripts under Jython ? If so, that's probably the best way, since the Java library can be exposed directly into Jython as scriptable objects.
Failing that, there are a number of solutions listed here.
A:
The other answer is JPype, which allows CPython to talk to Java. It's useful if you can't switch to Jython.
A:
There's also JCC which is not mentioned on the Caltech page.
Quote:
JCC is a C++ code generator that
produces a C++ object interface
wrapping a Java library via Java's
Native Interface (JNI). JCC also
generates C++ wrappers that conform to
Python's C type system making the
instances of Java classes directly
available to a Python interpreter.
When generating Python wrappers, JCC
produces a complete Python extension
via the distutils or setuptools
packages.
|
What is the best way to call Java code from Python?
|
I have a Java class library (3rd party, proprietary) and I want my Python script to call its functions. I already have Java code that uses this library. What is the best way to achieve this?
|
[
"Can you run your current Python scripts under Jython ? If so, that's probably the best way, since the Java library can be exposed directly into Jython as scriptable objects.\nFailing that, there are a number of solutions listed here.\n",
"The other answer is JPype, which allows CPython to talk to Java. It's useful if you can't switch to Jython.\n",
"There's also JCC which is not mentioned on the Caltech page.\nQuote:\n\nJCC is a C++ code generator that\n produces a C++ object interface\n wrapping a Java library via Java's\n Native Interface (JNI). JCC also\n generates C++ wrappers that conform to\n Python's C type system making the\n instances of Java classes directly\n available to a Python interpreter.\nWhen generating Python wrappers, JCC\n produces a complete Python extension\n via the distutils or setuptools\n packages.\n\n"
] |
[
15,
8,
3
] |
[] |
[] |
[
"java",
"python"
] |
stackoverflow_0001855320_java_python.txt
|
Q:
Intelligently launching the default editor from inside a Python CLI program?
The answers in this question didn't get to the heart of the problem. In a CLI-based Python program, I want the user to be able to edit a file and then return to the program. Before returning, I want them to be able to cancel their edits. This should feel like the commit-note-editing feature in Subversion.
What are the current best practices for this type of task?
A:
You could try looking through the sources to Mercurial, which is written in Python.
They use os.environ to read the value of environment variables HGEDITOR, VISUAL, and EDITOR, defaulting to vi. Then they use os.system to launch the editor on a temp file created with tempfile.mkstemp. When the editor is done, they read the file. If it has any real content in it, the operation continues, otherwise, it is aborted.
If you want to see how Mercurial does it, the details are in ui.py and util.py.
A:
Subversion, et al use the $EDITOR environment variable to determine which program to use to edit text files. Of course, $EDITOR will only work if you're on a unixy platform in a shell. You'll have to do something different for Windows (cmd /c start tempfile.txt) or Mac OS X (open tempfile.txt).
But, this is essentially what the answers and related answers to your other question said.
If you just want to be able to "cancel" edits, then make a temporary copy of the file, and invoke your editor on that. Your program can then copy the contents of the temporary file into the real file or, if the user cancels, don't. This is basically how Subversion does it.
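Putting the pieces above together, one possible sketch: resolve the editor from VISUAL/EDITOR with a vi fallback, let the user edit a temp copy, and only copy it back ("commit") if the editor exits cleanly. The function name is hypothetical:

```python
import os, shutil, subprocess, tempfile

def edit_with_cancel(path):
    """Let the user edit a temp copy of `path`; keep the edits only on success."""
    editor = os.environ.get("VISUAL") or os.environ.get("EDITOR") or "vi"
    fd, tmp = tempfile.mkstemp(suffix=".txt")
    os.close(fd)
    shutil.copyfile(path, tmp)          # the user edits a copy, not the original
    try:
        if subprocess.call([editor, tmp]) == 0:
            shutil.copyfile(tmp, path)  # "commit" the edits
    finally:
        os.unlink(tmp)                  # a nonzero exit discards the copy
```

A nonzero exit status from the editor acts as the "cancel", loosely mirroring how Subversion aborts a commit on an empty or unchanged message.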
|
Intelligently launching the default editor from inside a Python CLI program?
|
The answers in this question didn't get to the heart of the problem. In a CLI-based Python program, I want the user to be able to edit a file and then return to the program. Before returning, I want them to be able to cancel their edits. This should feel like the commit-note-editing feature in Subversion.
What are the current best practices for this type of task?
|
[
"You could try looking through the sources to Mercurial, which is written in Python.\nThey use os.environ to read the value of environment variables HGEDITOR, VISUAL, and EDITOR, defaulting to vi. Then they use os.system to launch the editor on a temp file created with tempfile.mkstemp. When the editor is done, they read the file. If it has any real content in it, the operation continues, otherwise, it is aborted.\nIf you want to see how Mercurial does it, the details are in ui.py and util.py.\n",
"Subversion, et al use the $EDITOR environment variable to determine which program to use to edit text files. Of course, $EDITOR will only work if you're on a unixy platform in a shell. You'll have to do something different for Windows (cmd /c start tempfile.txt) or Mac OS X (open tempfile.txt). \nBut, this is essentially what the answers and related answers to your other question said. \nIf you just want to be able to \"cancel\" edits, then make a temporary copy of the file, and invoke your editor on that. Your program can then copy the contents of the temporary file into the real file or, if the user cancels, don't. This is basically how Subversion does it. \n"
] |
[
10,
2
] |
[] |
[] |
[
"editor",
"python"
] |
stackoverflow_0001856792_editor_python.txt
|
Q:
Urllib2 Send Post data through proxy
I have configured a proxy using ProxyHandler and sent a request with some POST data:
cookiejar = cookielib.CookieJar()
proxies = {'http':'http://some-proxy:port/'}
opener = urllib2.build_opener(urllib2.ProxyHandler(proxies),urllib2.HTTPCookieProcessor(cookiejar) )
opener.addheaders = [('User-agent', "USER AGENT")]
urllib2.install_opener(opener)
url = "URL"
opener.open(url, urllib.urlencode({"DATA1":"DATA1"}))
then I get a 405 http error (Method not allowed)
May I get some assistance? I cannot figure out what is going wrong.
Thanks in advance
A:
The problem was user-agent header.
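For reference, here is the same opener setup translated to Python 3's urllib.request (urllib2's successor), with a well-formed User-Agent value; the proxy address is a placeholder and no request is actually sent:

```python
import urllib.request

# Placeholder proxy; the point is the opener and header shape, not the address
proxies = {"http": "http://some-proxy:8080/"}
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler(proxies),
    urllib.request.HTTPCookieProcessor(),   # creates its own CookieJar
)
opener.addheaders = [("User-Agent", "Mozilla/5.0 (example)")]
# opener.open(url, urllib.parse.urlencode({"DATA1": "DATA1"}).encode())
```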
|
Urllib2 Send Post data through proxy
|
I have configured a proxy using ProxyHandler and sent a request with some POST data:
cookiejar = cookielib.CookieJar()
proxies = {'http':'http://some-proxy:port/'}
opener = urllib2.build_opener(urllib2.ProxyHandler(proxies),urllib2.HTTPCookieProcessor(cookiejar) )
opener.addheaders = [('User-agent', "USER AGENT")]
urllib2.install_opener(opener)
url = "URL"
opener.open(url, urllib.urlencode({"DATA1":"DATA1"}))
then I get a 405 http error (Method not allowed)
May I get some assistance? I cannot figure out what is going wrong.
Thanks in advance
|
[
"The problem was user-agent header.\n"
] |
[
1
] |
[] |
[] |
[
"proxy",
"python",
"urllib2"
] |
stackoverflow_0001856814_proxy_python_urllib2.txt
|
Q:
Mac Based Python GUI Libraries
I am currently building a GUI based Python application on my Mac and was wondering if anyone could suggest a good GUI library to use?
I was looking at python's gui programming faq and there was a lot of options making it hard to choose.
I am developing on Snow Leopard and cross-platform is not essential (if it makes a difference).
A:
If you're not concerned about cross-platform compatibility, then PyObjC (also see Apple's info about PyObjC) provides a direct bridge to the native OS X Cocoa interfaces.
PyObjC (pronounced pie-obz-see) is the key piece which makes it possible to write Cocoa applications in Python. It enables Python objects to message Objective-C objects as if they're fellow Python objects, and likewise facilitates Objective-C objects to message Python objects as brethren.
Note that Apple tends to support and then not support these non-native interfaces to Cocoa; it's a good sign that there are recent releases of PyObjC.
A:
wxPython and Qt (via PyQT or PySide) provide native OS X widgets and work across all major platforms.
A:
There's a relatively new project active now called PyGUI which aims to provide a more modern cross-platform GUI for Python apps. On OS X, it uses PyObjC to provide native GUI elements. It might be easier to get started using it rather than delving directly into PyObjC and Interface Builder.
|
Mac Based Python GUI Libraries
|
I am currently building a GUI based Python application on my Mac and was wondering if anyone could suggest a good GUI library to use?
I was looking at python's gui programming faq and there was a lot of options making it hard to choose.
I am developing on Snow Leopard and cross-platform is not essential (if it makes a difference).
|
[
"If you're not concerned about cross-platform compatibility, then PyObjC (also see Apple's info about PyObjC) provides a direct bridge to the native OS X Cocoa interfaces.\n\nPyObjC (pronounced pie-obz-see) is the key piece which makes it possible to write Cocoa applications in Python. It enables Python objects to message Objective-C objects as if they're fellow Python objects, and likewise facilitates Objective-C objects to message Python objects as brethren.\n\nNote that Apple tends to support and then not support these non-native interfaces to Cocoa; it's a good sign that there are recent releases of PyObjC.\n",
"wxPython and Qt (via PyQT or PySide) provide native OS X widgets and work across all major platforms.\n",
"There's a relatively new project active now called PyGUI which aims to provide a more modern cross-platform GUI for Python apps. On OS X, it uses PyObjC to provide native GUI elements. It might be easier to get started using it rather than delving directly into PyObjC and Interface Builder.\n"
] |
[
5,
5,
3
] |
[] |
[] |
[
"macos",
"python",
"user_interface"
] |
stackoverflow_0001856924_macos_python_user_interface.txt
|
Q:
python crypt.crypt in ruby?
Hi, I need this code in Ruby; I don't know how to write the crypt.crypt method in Ruby. Any ideas?
(I want to simulate the Linux command .htpasswd)
import random
import crypt
letters = 'abcdefghijklmnopqrstuvwxyz' \
'ABCDEFGHIJKLMNOPQRSTUVWXYZ' \
'0123456789/.'
salt = random.choice(letters) + random.choice(letters)
password = "bla"
print crypt.crypt(password, salt)
A:
Jordan already told you about String#crypt, so I'll just show you an easier way to create your letters array:
letters = [*'a'..'z'] + [*'A'..'Z'] + [*0..9] + %w(/ .)
Update: since this got upvoted after more than 2 years, I might as well add the 1.9 way of doing this (using multiple splats and character literals):
letters = [*?a..?z, *?A..?Z, *0..9, ?/, ?.]
A:
I believe Ruby's String#crypt is equivalent to Python's crypt.crypt, so the Ruby equivalent to your code would be something like:
letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/.'
salt = letters[rand letters.length].chr + letters[rand letters.length].chr
password = "bla"
puts password.crypt(salt)
|
python crypt.crypt in ruby?
|
Hi, I need this code in Ruby; I don't know how to write the crypt.crypt method in Ruby. Any ideas?
(I want to simulate the Linux command .htpasswd)
import random
import crypt
letters = 'abcdefghijklmnopqrstuvwxyz' \
'ABCDEFGHIJKLMNOPQRSTUVWXYZ' \
'0123456789/.'
salt = random.choice(letters) + random.choice(letters)
password = "bla"
print crypt.crypt(password, salt)
|
[
"Jordan already told you about String#crypt, so I'll just show you an easier way to create your letters array:\nletters = [*'a'..'z'] + [*'A'..'Z'] + [*0..9] + %w(/ .)\n\nUpdate: since this got upvoted after more than 2 years, I might as well add the 1.9 way of doing this (using multiple splats and character literals):\nletters = [*?a..?z, *?A..?Z, *0..9, ?/, ?.]\n\n",
"I believe Ruby's String#crypt is equivalent to Python's crypt.crypt, so the Ruby equivalent to your code would be something like:\nletters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/.'\nsalt = letters[rand letters.length].chr + letters[rand letters.length].chr\n\npassword = \"bla\"\n\nputs password.crypt(salt)\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"python",
"ruby"
] |
stackoverflow_0001849638_python_ruby.txt
|
Q:
Proper way to implement a Direct Connect client in Twisted?
I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files.
Occasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server.
What is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better to have multiple reactors, one per client, with each one running in a different thread? Or would it be better to have a completely separate Python script that performs the connection to the client?
If there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources.
Thanks!
A:
Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor) and this way you will avoid the overhead connected with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). In practice, threads in Twisted are useful mainly when you need to interface to a library whose functions could block on you.
|
Proper way to implement a Direct Connect client in Twisted?
|
I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files.
Occasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server.
What is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better to have multiple reactors, one per client, with each one running in a different thread? Or would it be better to have a completely separate Python script that performs the connection to the client?
If there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources.
Thanks!
|
[
"Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor) and this way you will avoid the overhead connected with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). In practice, threads in Twisted are useful mainly when you need to interface to a library whose functions could block on you.\n"
] |
[
3
] |
[] |
[] |
[
"p2p",
"python",
"twisted"
] |
stackoverflow_0001856786_p2p_python_twisted.txt
|
Q:
Python attribute error: type object '_socketobject' has no attribute 'gethostbyname'
I am trying to do this in my program:
dest = socket.gethostbyname(host)
I have included the line:
from socket import *
in the beginning of the file.
I am getting this error:
AttributeError: type object
'_socketobject' has no attribute
'gethostbyname'
I am running Vista 64bit. Could there be a problem with my OS? I have turned down my firewall and everything.
A:
You should either use
import socket
dest = socket.gethostbyname(host)
or use
from socket import *
dest = gethostbyname(host)
Note: the first option is by far the recommended one.
A:
After from socket import *, you'd need to call just the barename gethostbyname -- the barename socket now refers to a type, not to the module. That import * is horrible practice, by the way: do, instead, import socket, and then socket.gethostbyname will work just fine!
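The shadowing is easy to demonstrate. The illustrative snippet below (Python 3 syntax) shows why the qualified form keeps working while the bare name `socket` stops referring to the module after a wildcard import:

```python
import socket

# Qualified access works: gethostbyname is a function on the module.
print(callable(socket.gethostbyname))  # True

# After the wildcard import, the bare name `socket` is rebound to the
# socket *class* the module exports -- the class has no gethostbyname,
# which is exactly the AttributeError from the question.
from socket import *
print(hasattr(socket, "gethostbyname"))  # False
print(callable(gethostbyname))           # True: the bare name still works
```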
|
Python attribute error: type object '_socketobject' has no attribute 'gethostbyname'
|
I am trying to do this in my program:
dest = socket.gethostbyname(host)
I have included the line:
from socket import *
in the beginning of the file.
I am getting this error:
AttributeError: type object
'_socketobject' has no attribute
'gethostbyname'
I am running Vista 64bit. Could there be a problem with my OS? I have turned down my firewall and everything.
|
[
"You shoulod either use\nimport socket\ndest = socket.gethostbyname(host)\n\nor use\nfrom socket import *\ndest = gethostbyname(host)\n\nNote: the first option is by far the recommended one.\n",
"After from socket import *, you'd need to call just the barename gethostbyname -- the barename socket now refers to a type, not to the module. That import * is horrible practice, by the way: do, instead, import socket, and then socket.gethostbyname will work just fine!\n"
] |
[
20,
2
] |
[] |
[] |
[
"attributeerror",
"gethostbyname",
"python"
] |
stackoverflow_0001857146_attributeerror_gethostbyname_python.txt
|
Q:
python soappy add header
I have the following PHP example code:
$client = new SoapClient("http://example.com/example.wsdl");
$h = Array();
array_push($h, new SoapHeader("http://example2.com/example2/", "h", "v"));
$client->__setSoapHeaders($h);
$s = $client->__soapCall('Op', $data);
My question: what's the SOAPpy equivalent for the SoapHeader() and __setSoapHeaders() part?
Related question
How to add header while making soap request using soappy
A:
Here's an example using suds library (an alternative to SOAPpy). It assumes that the custom header is not defined in the wsdl.
from suds.client import Client
from suds.sax.element import Element
client = Client("http://example.com/example.wsdl")
# <tns:h xmlns:tns="http://example2.com/example2/">v</tns:h>
tns = ("tns", "http://example2.com/example2/")
h = Element('h', ns=tns).setText('v')
client.set_options(soapheaders=h)
#
s = client.service.Op(data)
|
python soappy add header
|
I have the following PHP example code:
$client = new SoapClient("http://example.com/example.wsdl");
$h = Array();
array_push($h, new SoapHeader("http://example2.com/example2/", "h", "v"));
$client->__setSoapHeaders($h);
$s = $client->__soapCall('Op', $data);
My question: what's the SOAPpy equivalent for the SoapHeader() and __setSoapHeaders() part?
Related question
How to add header while making soap request using soappy
|
[
"Here's an example using suds library (an alternative to SOAPpy). It assumes that the custom header is not defined in the wsdl.\nfrom suds.client import Client\nfrom suds.sax.element import Element\n\nclient = Client(\"http://example.com/example.wsdl\")\n\n# <tns:h xmlns:tns=\"http://example2.com/example2/\">v</tns:h>\ntns = (\"tns\", \"http://example2.com/example2/\")\nh = Element('h', ns=tns).setText('v')\nclient.set_options(soapheaders=h) \n#\ns = client.service.Op(data)\n\n"
] |
[
1
] |
[] |
[] |
[
"header",
"python",
"soap",
"soappy"
] |
stackoverflow_0001856963_header_python_soap_soappy.txt
|
Q:
Deploying Django
When finding web hosting for Rails apps, the hoster must have support for ruby on rails -- that is evident. What about hosting for Django? What support does the hoster need to provide? Python, or more than just Python?
This might seem like an obvious question, but I'm new to web development frameworks so I must ask :)
A:
It just needs to support Python 2.3 or later (but not 3.0, yet), preferably with mod_wsgi support (although it also works with a bunch of other options, if required).
A:
Technically, as other responders say, the host needs very little (hey, Django even runs with Google app engine for all the latter's limitations!-). But if you want a little bit more (as in, say, support for any issues you might encounter!), I suggest you read this site as well -- it will take you but a short time, and it may prove to be really useful info.
A:
Other responses have covered the technical question, but it should also be mentioned that djangofriendly.com is an invaluable resource for selecting a Django web host.
A:
Python is all that is needed.
I think there is a cPanel plugin which allows your users to create and deploy Django applications, so if you have a VPS or Reseller account, or your host is running cPanel, you could simply tell them to install it. If I find the link to the plugin I will post it here.
A:
Well, Python is not the only thing needed if you run a dedicated server.
You need (please correct me if I'm missing something):
A webserver which will communicate with your Django web application, e.g. Apache with mod_wsgi.
A database interface, such as MySQL or PostgreSQL (just to mention some popular ones).
Python.
(Dependencies, libraries, etc.)
You may want to read this or some general resources.
If you use some hosting service, then you'll probably need to find a provider who claims to be able to run Django ;)
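As a sketch of the mod_wsgi piece: Apache is pointed at a small Python file that exposes a WSGI application object. For the Django versions current at the time (1.x era), that file would look roughly like the fragment below; the project name `mysite` is made up for illustration.

```python
# django.wsgi -- illustrative mod_wsgi entry point; 'mysite' is hypothetical
import os
import sys

os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
```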
|
Deploying Django
|
When finding web hosting for Rails apps, the hoster must have support for ruby on rails -- that is evident. What about hosting for Django? What support does the hoster need to provide? Python, or more than just Python?
This might seem like an obvious question, but I'm new to web development frameworks so I must ask :)
|
[
"It just needs to support Python 2.3 or later (but not 3.0, yet), preferably with mod_wsgi support (although it also works with a bunch of other options, if required).\n",
"Technically, as other responders say, the host needs very little (hey, Django even runs with Google app engine for all the latter's limitations!-). But if you want a little bit more (as in, say, support for any issues you might encounter!), I suggest you read this site as well -- it will take you but a short time, and it may prove to be really useful info.\n",
"Other responses have covered the technical question, but it should also be mentioned that djangofriendly.com is an invaluable resource for selecting a Django web host.\n",
"Python is all that is needed.\nI think there is a cPanel plugin which allows your users to create and deploy Django applications, so if you have a VPS or Reseller account, or your host is running cPanel, you could simply tell them to install it. If I find the link to the plugin I will post it here.\n",
"Well, Python is not the only thing need if you run a dedicated server.\nYou need(please correct me if I'm missing something):\n\nA webserver which will communicate\nwith your Django web application,\ne.g.: Apache with mod_wsgi.\nA database interface, such as MySQL or PostgreSQL (just to mention some popular).\nPython.\n(Dependencies, Libraries, etc.)\n\nYou may want to read this or some general resources\nIf you use some hosting service, then you'll probably need to find a provider who claims to be able to run Django ;)\n"
] |
[
6,
3,
3,
2,
0
] |
[] |
[] |
[
"django",
"python",
"ruby_on_rails"
] |
stackoverflow_0001022914_django_python_ruby_on_rails.txt
|
Q:
Send selected text to a command line argument
I found this utility, pytranslate, which translates a variety of languages into each other using Google's translation API. It works exactly as described.
However I've gotten sick of selecting a word I do not understand, then middle-clicking it into the command. The command format is as such:
pytranslate WORD
Is there a program/script that has the ability to detect when I select a word, or a series of words with my mouse, and then executes the above command with the selected text in the place of WORD in a terminal window?
Example:
Selected text:
Bonjour mondiale...
Resultant command/result:
pytranslate Bonjour mondiale
Detected source language: fr
Hello World
A:
#!/bin/bash
pytranslate "$(xsel -p)"
Now just put this in ~/bin (make sure that's included in your PATH), and run it. (You may need to install the xsel package.) It will take the current contents of the primary selection buffer and pass it to pytranslate.
If you want it as a button, create a launcher which runs this in a terminal, and use bash's read command to do "Press ENTER to continue".
A:
Taking inspiration from Roger Pate's brilliant one liner I've created a simple looping script for pytranslate. This is currently provisional - as I haven't implemented error catching yet - wait for new edits.
#!/bin/bash
# Primary Clipboard poller using xsel (middle click) and pytranslate
# Checks for changes every 1 second
# If a change has occurred, a command is executed (pytranslate here)
########## Information ##########
# It now stores definitions in a text file - saves bandwidth and reduces hits on google (caseless)
# Works for Romance languages
#TODO
# Character based languages
# Catch errors
if [ ! -e "pytranslatecache" ]; then
    touch pytranslatecache
fi
while [ 1 ]
do
    OLDENTRY="$(xsel -p)"
    sleep 1
    NEWENTRY="$(xsel -p)"
    if [ "$NEWENTRY" != "$OLDENTRY" ] ; then
        if [ "$(grep -F -o -i "$NEWENTRY" pytranslatecache)" = "$NEWENTRY" ] ; then
            echo "From Cache:"
            grep -i "$NEWENTRY" pytranslatecache
        else
            DEFINITION="$(pytranslate -s fr "$NEWENTRY")"
            echo "$NEWENTRY:$DEFINITION"
            echo "$NEWENTRY:$DEFINITION" >> pytranslatecache
        fi
    fi
    # Catch Errors - Commands
    if [ $? != 0 ]; then
        echo "Failed to translate string."
    fi
done
A:
Would you be able to use clipboard support in the PyGTK package to do the job? It claims to have access to the "primary" X clipboard which it says is where you'd normally find the selected text.
A:
Note: this answer was useless to the questioner, who wasn't using Windows. Given that the title doesn't specify the OS, I'll leave it around for Windows users who may come this way.
You can easily whip one up yourself using the pywin32 package and the win32clipboard module. See, for example, this question.
I've done this in the past with a routine that just polled the clipboard periodically, every few seconds or so, and whenever it found a change it grabbed the contents and did something with it. In your case, use the subprocess package to call out to pytranslate with the text.
|
Send selected text to a command line argument
|
I found this utility, pytranslate, which translates a variety of languages into each other using Google's translation API. It works exactly as described.
However I've gotten sick of selecting a word I do not understand, then middle-clicking it into the command. The command format is as such:
pytranslate WORD
Is there a program/script that has the ability to detect when I select a word, or a series of words with my mouse, and then executes the above command with the selected text in the place of WORD in a terminal window?
Example:
Selected text:
Bonjour mondiale...
Resultant command/result:
pytranslate Bonjour mondiale
Detected source language: fr
Hello World
|
[
"#!/bin/bash\npytranslate \"$(xsel -p)\"\n\nNow just put this in ~/bin (make sure that's included in your PATH), and run it. (You may need to install the xsel package.) It will take the current contents of the primary selection buffer and pass it to pytranslate.\nIf you want it as a button, create a launcher which runs this in a terminal, and use bash's read command to do \"Press ENTER to continue\".\n",
"Taking inspiration from Roger Pate's brilliant one liner I've created a simple looping script for pytranslate. This is currently provisional - as I haven't implemented error catching yet - wait for new edits.\n#!/bin/bash\n# Primary Clipboard poller using xsel (middle click) and pytranslate\n# Checks for changes every 1 second\n# If change has occured, a command is executed (pytranslate here)\n########## Information ########## \n# It now stores definitions in a text file - saves bandwith and reduces hits on google (caseless)\n# Works for Romance languagse\n#TODO\n# Character based langauges\n# Catch errors\n\nif [ ! -e \"pytranslatecache\" ]; then\ntouch pytranslatecache\nfi\n\nwhile [ 1 ]\ndo\n OLDENTRY=\"$(xsel -p)\"\n sleep 1\n NEWENTRY=\"$(xsel -p)\"\n if [ \"$NEWENTRY\" != \"$OLDENTRY\" ] ; then\n if [ \"$(grep -F -o -i \"$NEWENTRY\" pytranslatecache)\" = \"$NEWENTRY\" ] ; then\n echo \"From Cache:\"\n echo \"$(grep -i \"$NEWENTRY\" pytranslatecache)\" \n else\n DEFINITION=\"\"$(pytranslate -s fr \"$(xsel -p)\")\"\"\n echo \"$NEWENTRY\"\":\"$DEFINITION\n echo \"$NEWENTRY\"\":\"$DEFINITION >> pytranslatecache\n fi\n fi\n# Catch Errors - Commands\n if [ $? != 0 ]; then\n {\n echo \"Failed to translate string.\"\n } fi\ndone\n\n",
"Would you be able to use clipboard support in the PyGTK package to do the job? It claims to have access to the \"primary\" X clipboard which it says is where you'd normally find the selected text.\n",
"Note: this answer was useless to the questioner, who wasn't using Windows. Given that the title doesn't specify the OS, I'll leave it around for Windows users who may come this way.\n\nYou can easily whip one up yourself using the pywin32 package and the win32clipboard module. See, for example, this question.\nI've done this in the past with a routine that just polled the clipboard periodically, every few seconds or so, and whenever it found a change it grabbed the contents and did something with it. In your case, use the subprocess package to call out to pytranslate with the text.\n"
] |
[
4,
3,
1,
0
] |
[] |
[] |
[
"clipboard",
"google_translate",
"python",
"terminal",
"ubuntu_9.04"
] |
stackoverflow_0001857287_clipboard_google_translate_python_terminal_ubuntu_9.04.txt
|
Q:
Is there an Open Source Python library for sanitizing HTML and removing all Javascript?
I want to write a web application that allows users to enter any HTML that can occur inside a <div> element. This HTML will then end up being displayed to other users, so I want to make sure that the site doesn't open people up to XSS attacks.
Is there a nice library in Python that will clean out all the event handler attributes, <script> elements and other Javascript cruft from HTML or a DOM tree?
I am intending to use Beautiful Soup to regularize the HTML to make sure it doesn't contain unclosed tags and such. But, as far as I can tell, it has no pre-packaged way to strip all Javascript.
If there is a nice library in some other language, that might also work, but I would really prefer Python.
I've done a bunch of Google searching and hunted around on pypi, but haven't been able to find anything obvious.
Related
Sanitising user input using Python
A:
As Klaus mentions, the clear consensus in the community is to use BeautifulSoup for these tasks:
soup = BeautifulSoup.BeautifulSoup(html)
for script_elt in soup.findAll('script'):
script_elt.extract()
html = str(soup)
A:
Whitelist approach to allowed tags, attributes and their values is the only reliable way. Take a look at Recipe 496942: Cross-site scripting (XSS) defense
What is wrong with existing markup languages such as used on this very site?
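To make the whitelist idea concrete, here is a minimal stand-alone sketch (this is not the linked recipe, and the tag/attribute whitelist is invented for illustration) built on the standard library's HTML parser. It rebuilds the markup, keeping only whitelisted tags and attributes and dropping the text inside script/style elements:

```python
from html.parser import HTMLParser

# Hypothetical whitelist: tag -> attributes allowed on that tag.
ALLOWED = {"b": set(), "i": set(), "p": set(), "a": {"href"}}

class Sanitizer(HTMLParser):
    """Rebuilds markup, keeping only whitelisted tags and attributes."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0  # depth inside <script>/<style>, whose text we drop

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag in ALLOWED:
            kept = " ".join('%s="%s"' % (k, v or "") for k, v in attrs
                            if k in ALLOWED[tag]
                            and not (v or "").lower().startswith("javascript:"))
            self.out.append("<%s%s>" % (tag, " " + kept if kept else ""))

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def sanitize(html):
    s = Sanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)

print(sanitize('<p onclick="evil()">hi <script>alert(1)</script><b>there</b></p>'))
# -> <p>hi <b>there</b></p>
```

A real sanitizer needs more care (entity handling, escaping the kept attribute values, URL schemes beyond javascript:), which is why the whitelist recipe above is worth reading.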
A:
You could use BeautifulSoup. It allows you to traverse the markup structure fairly easily, even if it's not well-formed. I don't know that there's something made to order that works only on script tags.
A:
I would honestly look at using something like bbcode or some other alternative markup with it.
A:
Eric,
Have you thought about using a 'SAX' type parser for the HTML? I'm really not sure, though, that it would ignore the events properly. It would also be a bit harder to construct than using something like Beautiful Soup. Handling syntax errors may be a problem with SAX as well.
What I like to do in situations like this is to construct python objects (subclassed from an XML_Element class) from the parsed HTML. Then remove any undesired objects from the tree, and finally re-serialize the objects back to html. It's not all that hard in python.
Regards,
|
Is there an Open Source Python library for sanitizing HTML and removing all Javascript?
|
I want to write a web application that allows users to enter any HTML that can occur inside a <div> element. This HTML will then end up being displayed to other users, so I want to make sure that the site doesn't open people up to XSS attacks.
Is there a nice library in Python that will clean out all the event handler attributes, <script> elements and other Javascript cruft from HTML or a DOM tree?
I am intending to use Beautiful Soup to regularize the HTML to make sure it doesn't contain unclosed tags and such. But, as far as I can tell, it has no pre-packaged way to strip all Javascript.
If there is a nice library in some other language, that might also work, but I would really prefer Python.
I've done a bunch of Google searching and hunted around on pypi, but haven't been able to find anything obvious.
Related
Sanitising user input using Python
|
[
"As Klaus mentions, the clear consensus in the community is to use BeautifulSoup for these tasks:\nsoup = BeautifulSoup.BeautifulSoup(html)\nfor script_elt in soup.findAll('script'):\n script_elt.extract()\nhtml = str(soup)\n\n",
"Whitelist approach to allowed tags, attributes and their values is the only reliable way. Take a look at Recipe 496942: Cross-site scripting (XSS) defense\nWhat is wrong with existing markup languages such as used on this very site?\n",
"You could use BeautifulSoup. It allows you to traverse the markup structure fairly easily, even if it's not well-formed. I don't know that there's something made to order that works only on script tags.\n",
"I would honestly look at using something like bbcode or some other alternative markup with it.\n",
"Eric,\nHave you thought about using a 'SAX' type parser for the HTML? I'm really not sure\nthough that it would ignore the events properly though. It would also be a bit harder to construct than using something like Beautiful Soup. Handling syntax errors may be a problem with SAX as well. \nWhat I like to do in situations like this is to construct python objects (subclassed from an XML_Element class) from the parsed HTML. Then remove any undesired objects from the tree, and finally re-serialize the objects back to html. It's not all that hard in python.\nRegards, \n"
] |
[
5,
4,
0,
0,
0
] |
[] |
[] |
[
"javascript",
"parsing",
"python",
"xss"
] |
stackoverflow_0001854806_javascript_parsing_python_xss.txt
|
Q:
How do I import the render_to_response method from Django 1.1 inside of Google App Engine?
I'm a Python newbie, so I'm sure this is easy. Here's the code in my main.py:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
from google.appengine.dist import use_library
use_library('django', '1.1')
# Use django form library.
from django import forms
from django.shortcuts import render_to_response
The last line breaks with an ImportError. If I don't include that, then I get an error that "render_to_response" isn't available. What am I doing wrong?
A:
Well, render_to_response is a shortcut for this, so give this a try:
from django.template import Context, loader
from django.http import HttpResponse
def render_to_response(tmpl, data):
t = loader.get_template(tmpl)
c = Context(data)
return HttpResponse(t.render(c))
render_to_response("templates/index.html", {"foo": "bar"})
A:
In this case the problem ended up being that Google App Engine uses version 0.96 of Django and it actually couldn't find the 'redirect' method, because that's only in Django 1.1. If you use the GAE utility method 'use_library' you can specify what version of the Django framework you want to use and this problem goes away.
|
How do I import the render_to_response method from Django 1.1 inside of Google App Engine?
|
I'm a Python newbie, so I'm sure this is easy. Here's the code in my main.py:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
from google.appengine.dist import use_library
use_library('django', '1.1')
# Use django form library.
from django import forms
from django.shortcuts import render_to_response
The last line breaks with an ImportError. If I don't include that, then I get an error that "render_to_response" isn't available. What am I doing wrong?
|
[
"Well, render_to_response is a shortcut for this, so give this a try:\nfrom django.template import Context, loader\nfrom django.http import HttpResponse\n\ndef render_to_response(tmpl, data):\n t = loader.get_template(tmpl)\n c = Context(data)\n return HttpResponse(t.render(c))\n\nrender_to_response(\"templates/index.html\", {\"foo\": \"bar\"})\n\n",
"In this case the problem ended up being that Google App Engine uses version 0.96 of Django and it actually couldn't find the 'redirect' method, because that's only in Django 1.1. If you use the GAE utility method 'use_library' you can specify what version of the Django framework you want to use and this problem goes away.\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"google_app_engine",
"python"
] |
stackoverflow_0001738466_django_google_app_engine_python.txt
|
Q:
"Better option" from the python library, any list?
I just found out the existence of the optparse module. I personally always used getopt, so I did not care to look for something better. It's clear, however, that optparse is much more advanced, so I would expect it to be the preferred way in the future to get options from the command line.
Anyway, this event struck me. I am now wondering if there are modules or functions out there that I have been using since the beginning of time that have much better alternatives in the standard library. Is there a compact, quick-to-browse list along the lines of "previous solution: getopt; better solution: optparse (since python 2.x)"?
Edit marked as CW as agreed.
parsing command line options: getopt, optparse, argparse
package management: distutils, setuptools
A:
I suggest this might be a good place to start such a list
note that there is pep389 to replace optparse with argparse
collections.defaultdict works nicer in most places you would use dict.setdefault
the collections module is a good one to become familiar with as it has lots of new stuff in Python3
Generator expressions are often better than list comprehensions if you don't need to keep the list
Ternary operator b if a else c instead of a and b or c with all it's problems
multiprocessing replaces any other way you were doing it ;)
itertools.izip_longest avoids having to use workarounds when you are zipping uneven things
A:
Not exactly compact, and referring only to the standard library (and other parts of standard Python) but not any third-party packages, there are all the "What's New in Python X.X?" essays.
Other than that, and Google, I don't think there are any such lists except in random blogs and so forth.
A:
I wouldn't absolutely agree with a statement "optparse is better than getopt". If optparse suits you better, it doesn't mean someone else wouldn't find getopt much preferable. They are intended for different purposes: getopt is much simpler and requires less understanding to start using (especially if you are familiar with getopt from other sources, e.g. shell scripting), while optparse is more powerful and more detailed. However, if I need to get just one or two command line parameters, I might even use a simple if statement.
To summarize, every tool has its purpose, and every situation might require a different tool which is more suitable for that situation.
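To make the comparison concrete, here is the same (made-up) pair of options handled both ways; note how much of the bookkeeping optparse does for you:

```python
import getopt
import optparse

argv = ["-v", "-o", "out.txt"]  # illustrative arguments

# getopt: a shell-style spec string; you interpret the (flag, value) pairs.
opts, args = getopt.getopt(argv, "vo:")
verbose = ("-v", "") in opts
output = dict(opts).get("-o")
print(verbose, output)

# optparse: declarative; defaults, type conversion and --help come for free.
parser = optparse.OptionParser()
parser.add_option("-v", action="store_true", dest="verbose", default=False)
parser.add_option("-o", dest="output")
options, rest = parser.parse_args(argv)
print(options.verbose, options.output)
```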
A:
I use Richard Gruet's Python Quick Reference which is a great reference for all things Python including some of the more important parts of the standard library. It does a good job of making changes in the language and library prevalent using colour coding and notes.
Take a look at his section on getopt, for instance, and the list of modules and packages in base distribution.
It's not been updated for Python 3 yet, but I live in hope!
|
"Better option" from the python library, any list?
|
I just found out the existence of the optparse module. I personally always used getopt, so I did not care to look for something better. It's clear, however, that optparse is much more advanced, so I would expect it to be the preferred way in the future to get options from the command line.
Anyway, this event struck me. I am now wondering if there are modules or functions out there I am using since the beginning of time, that have much better alternatives in the standard library. Is there such a compact and quick to browse list, on the liking of "previous solutions: getopt. better solution: optparse (since python 2.x)" ?
Edit marked as CW as agreed.
parsing command line options: getopt, optparse, argparse
package management: distutils, setuptools
|
[
"I suggest this might be a good place to start such a list\nnote that there is pep389 to replace optparse with argparse\ncollections.defaultdict works nicer in most places you would use dict.setdefault \nthe collections module is a good one to become familiar with as it has lots of new stuff in Python3\nGenerator expressions are often better than list comprehensions if you don't need to keep the list\nTernary operator b if a else c instead of a and b or c with all it's problems\nmultiprocessing replaces any other way you were doing it ;)\nitertools.izip_longest avoids having to use workarounds when you are zipping uneven things\n",
"Not exactly compact, and referring only to the standard library (and other parts of standard Python) but not any third-party packages, there are all the \"What's New in Python X.X?\" essays.\nOther than that, and Google, I don't think there are any such lists except in random blogs and so forth.\n",
"I wouldn't absolutely agree with a statement \"optparse is better than getopt\". If optparse suites you better, it doesn't mean someone wouldn't find getopt much preferable. They are intended for different purposes: getopt is much simpler and requires less understanding to start using (especially if you are familiar with getopt from other sources: e.g. shell scripting), optparse is more powerful and is more detailed. However, if I need to just get one or two command lime parameters, I might even use simple if statement. \nTo summarize, every tool has its purpose, and every situation might require a different tool which is more suitable for that situation.\n",
"I use Richard Gruet's Python Quick Reference which is a great reference for all things Python including some of the more important parts of the standard library. It does a good job of making changes in the language and library prevalent using colour coding and notes.\nTake a look at his section on getopt, for instance, and the list of modules and packages in base distribution.\nIt's not been updated for Python 3 yet, but I live in hope!\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"argparse",
"command_line_arguments",
"getopt",
"optparse",
"python"
] |
stackoverflow_0001857432_argparse_command_line_arguments_getopt_optparse_python.txt
|
Q:
How do I break up the controllers (views) into cohesive files in a Django project?
I am currently working through the tutorial on Django's website. Upon completing the following command:
python manage.py startapp polls
it creates the following structure:
polls/
__init__.py
models.py
tests.py
views.py
As I was going through the tutorial it occurred to me that the views file could grow into a huge, incohesive, monolithic file containing every action in the entire web application.
Is there a way to break this file up into cohesive classes or files? I tried changing settings.py and urls.py to point to a different directory, but it appears that the script that generates the file structure creates a "views" module when it creates the file, and I don't see a way to change/override this behavior from the script.
A:
You could split up views in a similar manner to how this blog entry splits models
http://www.nomadjourney.com/2009/11/splitting-up-django-models/
eg
/myapp
    /views
        __init__.py
        bar.py
        foo.py
with appropriate import statements in the __init__.py file
This might be appropriate for an expanding app. Also views are more flexible than models in the way they can be structured so you could do backend/ members/ frontend/ modules or just admin_views.py etc.
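The __init__.py then just re-exports the view functions so existing imports and URL mappings keep working. For example (module and view names here are hypothetical):

```python
# myapp/views/__init__.py -- code elsewhere can still say
# `from myapp.views import foo_index`, or map 'myapp.views.foo_index'
# in urls.py, exactly as if views were one module.
from myapp.views.foo import foo_index, foo_detail
from myapp.views.bar import bar_index
```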
A:
The view functions don't have to be in views.py, they can be anywhere, as long as they're mapped properly in urls.py. So it's up to you how you organize your project.
but it appears that the script that generates the file structure creates a "views" module when it creates the file, and I don't see a way to change/override this behavior from the script.
You can totally ignore that script and what it generates. It doesn't do anything magical behind the scenes; it just creates those files for you.
A:
Each app you create for the project will have its own views.py file (assuming it uses views), so you don't need to worry about it becoming monolithic.
Just make sure to keep your apps' functionality focused.
From the Django docs:
Projects vs. apps
What's the difference between a project and an app? An app is a Web application that does something -- e.g., a weblog system, a database of public records or a simple poll app. A project is a collection of configuration and apps for a particular Web site. A project can contain multiple apps. An app can be in multiple projects.
A:
I worked through that tutorial recently. I figured that most of the "core" logic would go into various classes or methods in other supporting files. Then views.py would contain basic calls to set up and execute the methods.
Given this design, I'd expect that a single view function might end up with 3 to 5 lines of code. Setup, execute method and return.
Basically, what I mean is an implementation of the Facade pattern.
I expect the tutorial avoided this approach because it adds levels of redirection (misdirection?) that can make it harder for a newcomer to follow the code.
A:
One of the things I do to make my apps' views more compact is to factor my views pretty aggressively. I hate writing any code twice, so this comes naturally for me. Anywhere I can, I use a generic view to perform the needed actions. A good percentage of the functionality of my views is decorators, which perform common actions on the views that need it.
for instance, I have a post_limit decorator that checks if the user has recently modified any instance in a certain model(configurable view by view) and produces an error if she has, as a means of flood protection.
In fact, many views work so similarly, that they don't even get their own function bodies, I just wrap generic views with the appropriate decorators, and the only views that get much custom code are aggregation type sites, such as the landing page, that collect lots of different information in subtle ways, so that they look 'just right'
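The post_limit decorator itself isn't shown in the answer, but the general shape of such a decorator can be sketched in plain Python (no Django here; the function names and the per-user bookkeeping are invented for illustration):

```python
import functools
import time

_last_post = {}  # user -> time of last accepted modification (illustration only)

def post_limit(seconds):
    """Reject a user's view call if they posted less than `seconds` ago."""
    def decorator(view):
        @functools.wraps(view)
        def wrapper(user, *args, **kwargs):
            now = time.time()
            if now - _last_post.get(user, 0.0) < seconds:
                return "error: flood protection"
            _last_post[user] = now
            return view(user, *args, **kwargs)
        return wrapper
    return decorator

@post_limit(60)
def create_poll(user):
    return "created"

print(create_poll("alice"))  # created
print(create_poll("alice"))  # error: flood protection
```

In a real Django app the wrapper would take a request, read request.user, and return an HttpResponse, but the decorator structure is the same.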
stackoverflow_0001857427_django_python.txt
|
Q:
Python proxy checker, change to threaded version
I have a Python proxy checker, and to speed up the check I decided to convert it to a multithreaded version. The threading module is new to me; I have tried several times to convert it and looked for a lot of information, but it is not so easy for a novice Python programmer. If anyone can help, I would really appreciate it!
Thanks in advance!
import urllib2, socket

socket.setdefaulttimeout(180)

# read the list of proxy IPs into proxyList, one per line
# (iterating over the raw .read() string would yield single characters)
proxyList = open('listproxy.txt').read().splitlines()

def is_bad_proxy(pip):
    try:
        proxy_handler = urllib2.ProxyHandler({'http': pip})
        opener = urllib2.build_opener(proxy_handler)
        opener.addheaders = [('User-agent', 'Mozilla/5.0')]
        urllib2.install_opener(opener)
        req = urllib2.Request('http://www.yahoo.com')  # <-- check whether proxy is alive
        sock = urllib2.urlopen(req)
    except urllib2.HTTPError, e:
        print 'Error code: ', e.code
        return e.code
    except Exception, detail:
        print "ERROR:", detail
        return 1
    return 0

for item in proxyList:
    if is_bad_proxy(item):
        print "Bad Proxy", item
    else:
        print item, "is working"
A:
The urllib2.install_opener() function installs a global opener, i.e. it is not thread-safe. So don't use it; call the opener.open() method instead of the global urllib2.urlopen() function. Also, use the Queue class from the Queue module to hold the list of proxies to check. The rest of your code is fine to use in threaded mode.
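A Python 3 sketch of the worker/Queue pattern the answer recommends; check_proxy here is a deterministic stub so the skeleton stays runnable offline, and a real version would build a per-thread opener with urllib.request.build_opener and call opener.open() on it:

```python
import queue
import threading

def check_proxy(pip):
    # Deterministic stub for the demo; a real version would create its own
    # opener (urllib.request.build_opener with a ProxyHandler for `pip`)
    # and call opener.open() directly, never install_opener().
    return pip.startswith("bad")

def worker(tasks, results):
    # Drain the shared queue; each thread exits once the queue is empty.
    while True:
        try:
            pip = tasks.get_nowait()
        except queue.Empty:
            return
        results.append((pip, "bad" if check_proxy(pip) else "working"))

def check_all(proxies, num_threads=4):
    tasks = queue.Queue()
    for p in proxies:
        tasks.put(p)
    results = []  # list.append is atomic in CPython, so no lock is needed here
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(results)
```

The main thread only fills the queue and joins the workers; all per-proxy state stays local to each thread.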
stackoverflow_0001857799_multithreading_python.txt
|
Q:
How to extract every possible values of python Dict's values to list
DICTA={'bw':['BW','VW'],'b':['BB','VV'],'a':['AA']}
DICTB={'yn':['$YN','$YNN'],'ye':['$YE','A$Y'],'y':['Y$']}
How to extract every possible values of that 2 Dict to
["BWYN","VWYN","BBYN","VVYN","AAYN","BWYNN","VWYNN","BBYNN","VVYNN","AAYNN",
"BWYE","VWYE","BBYE","VVYE","AAYE","ABWY","AVWY","ABBY","AVVY","AAAY",
"YBW","YVW","YBB","YVV","YAA"]
PS: Order does not matter
PPS: Not homework, but like to know how it can be implemented in efficient way.
A:
Many possible minor variants on the following fundamental theme:
print [y.replace('$', x)
       for y in (v for y in DICTB.values() for v in y)
       for x in (v for y in DICTA.values() for v in y)
]
A:
I like to go with itertools myself, but essentially the same as Alex's solution:
from itertools import product
[
    y.replace('$', x)
    for (x, y) in product(sum(DICTA.values(), []), sum(DICTB.values(), []))
]
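As a side note, sum(d.values(), []) flattens in quadratic time; itertools.chain.from_iterable does the same job linearly. A runnable sketch using the question's sample dicts:

```python
from itertools import chain, product

DICTA = {'bw': ['BW', 'VW'], 'b': ['BB', 'VV'], 'a': ['AA']}
DICTB = {'yn': ['$YN', '$YNN'], 'ye': ['$YE', 'A$Y'], 'y': ['Y$']}

# Flatten each dict's value lists lazily, then substitute every A-value
# into every B-template's '$' placeholder.
flat_a = chain.from_iterable(DICTA.values())
flat_b = chain.from_iterable(DICTB.values())
result = [tpl.replace('$', val) for val, tpl in product(flat_a, flat_b)]
# 5 values x 5 templates -> 25 combined strings, e.g. 'BWYN' and 'YAA'
```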
stackoverflow_0001857820_python.txt
|
Q:
With sqlalchemy how to dynamically bind to database engine on a per-request basis
I have a Pylons-based web application which connects via Sqlalchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security.
Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date.
We're using sqlalchemy's declarative package, though I can't see that this has any bearing on the matter.
Most examples of sqlalchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is used through the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module-level in the database model files.
My question is, how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request.
We have this working -- we think -- but I'm not only not certain of the safety of it, I also think it looks incredibly heavy-weight for the situation.
Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call sqlalchemy create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:
# in lib/base.py on the BaseController class
def __call__(self, environ, start_response):
    # note: web session contains {'username': XXX, 'password': YYY}
    url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
    url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session
    finance = create_engine(url1)
    staff = create_engine(url2)
    db_configure(staff, finance)  # see below
    ... etc

# in another file
Session = scoped_session(sessionmaker())

def db_configure(staff, finance):
    s = Session()

    from db.finance import Employee, Customer, Invoice
    for c in [
        Employee,
        Customer,
        Invoice,
        ]:
        s.bind_mapper(c, finance)

    from db.staff import Project, Hour
    for c in [
        Project,
        Hour,
        ]:
        s.bind_mapper(c, staff)

    s.close()  # prevents leaking connections between sessions?
So the create_engine() calls occur on every request... I can see that being needed, and the Connection Pool probably caches them and does things sensibly.
But calling Session.bind_mapper() once for each table, on every request? Seems like there has to be a better way.
Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.
A:
Binding global objects (mappers, metadata) to a user-specific connection is not a good way to do this, and neither is using a scoped session. I suggest creating a new session for each request and configuring it to use user-specific connections. The following sample assumes that you use separate metadata objects for each database:
binds = {}
finance_engine = create_engine(url1)
binds.update(dict.fromkeys(finance_metadata.sorted_tables, finance_engine))
# The following line is required when mappings to joint tables are used (e.g.
# in joint table inheritance) due to bug (or misfeature) in SQLAlchemy 0.5.4.
# This issue might be fixed in newer versions.
binds.update(dict.fromkeys([Employee, Customer, Invoice], finance_engine))
staff_engine = create_engine(url2)
binds.update(dict.fromkeys(staff_metadata.sorted_tables, staff_engine))
# See comment above.
binds.update(dict.fromkeys([Project, Hour], staff_engine))
session = sessionmaker(binds=binds)()
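On the other concern in the question, calling create_engine() on every request, engines can simply be cached per connection URL so that the engine's connection pool survives across requests for the same credentials. A minimal sketch (the factory parameter is a stand-in for sqlalchemy.create_engine; a real app would also evict entries on logout, since the URL embeds the user's password, and guard the dict with a lock under concurrent requests):

```python
_engine_cache = {}

def get_engine(url, factory):
    """Return a cached engine for `url`, creating one with `factory`
    (e.g. sqlalchemy.create_engine) on first use."""
    # Not thread-safe as written; wrap with threading.Lock for a real app.
    if url not in _engine_cache:
        _engine_cache[url] = factory(url)
    return _engine_cache[url]
```

The per-request code then calls get_engine(url1, create_engine) instead of create_engine(url1), and only the session and its binds are rebuilt per request.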
A: (scored -1)
I would look at the connection pooling and see if you can't find a way to have one pool per user.
You can dispose() the pool when the user's session has expired.
stackoverflow_0001857465_postgresql_pylons_python_sqlalchemy_web_applications.txt
|
Q:
Properties dictionary for Events in the Google Wave Python API
The Google Wave documentation contains the Robot Events, but doesn't list what values will be put into the properties dictionary. Is this documented anywhere?
A:
I had the same question and found some hints in the following Google group thread: http://groups.google.com/group/google-wave-api/browse_thread/thread/8d19dbcb6f2147cc
For now, I think the closest you can get to an answer to this question is Jason Salas' response: "the good news is that you can look in the logs that Google App Engine generates for your robot in your dashboard (choose Info from the drop-down menu under your application), and you can manually read the JSON string and get the "properties" content."
I guess we'll just have to wait until the API stabilizes and the documentation is more complete.
A:
They are listed in API Reference documentation, in the waveapi.events module.
Here's the list:
WAVELET_BLIP_CREATED
WAVELET_BLIP_REMOVED
WAVELET_PARTICIPANTS_CHANGED
WAVELET_SELF_ADDED
WAVELET_SELF_REMOVED
WAVELET_TIMESTAMP_CHANGED
WAVELET_TITLE_CHANGED
WAVELET_VERSION_CHANGED
BLIP_CONTRIBUTORS_CHANGED
BLIP_DELETED
BLIP_SUBMITTED
BLIP_TIMESTAMP_CHANGED
BLIP_VERSION_CHANGED
DOCUMENT_CHANGED
FORM_BUTTON_CLICKED
stackoverflow_0001747986_events_google_wave_python.txt
|
Q:
Bind different ip addresses to urllib2 object in seperate threads
The following code binds specified ip address to socket in main program globally.
import socket

true_socket = socket.socket

def bound_socket(*a, **k):
    sock = true_socket(*a, **k)
    sock.bind((sourceIP, 0))
    return sock

socket.socket = bound_socket
Suppose main program has 10 threads, each with a urllib2 instance running inside the thread. How to bind 10 different ip addresses to each urllib2 object?
A:
You can define a dictionary mapping thread identifier to IP address or use threading.local() global object to define it per thread:
socket_data = threading.local()

true_socket = socket.socket

def bound_socket(*a, **k):
    sock = true_socket(*a, **k)
    # the attribute is unset in threads that never assigned it
    bind_ip = getattr(socket_data, 'bind_ip', None)
    if bind_ip is not None:
        sock.bind((bind_ip, 0))
    return sock

socket.socket = bound_socket

def thread_target(bind_ip):
    socket_data.bind_ip = bind_ip
    # the rest of the code

for bind_ip in [...]:
    thread = Thread(target=thread_target, args=(bind_ip,))
    # ...

Note, though, that this is a rather dirty hack. A better way is to extend the connect() method in a subclass of HTTPConnection and redefine the http_open() method in a subclass of HTTPHandler.
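For the cleaner approach the answer recommends, here is a Python 3 sketch (modern urllib.request/http.client): HTTPConnection in these versions already accepts a source_address, so the handler only has to pass it through and no socket monkey-patching is needed:

```python
import functools
import http.client
import urllib.request

class BoundHTTPHandler(urllib.request.HTTPHandler):
    """Handler whose HTTP connections bind to a given local source IP."""

    def __init__(self, source_address, **kwargs):
        super().__init__(**kwargs)
        # Bake the source address into the connection factory;
        # HTTPConnection binds its socket to it when connecting.
        self._conn_factory = functools.partial(
            http.client.HTTPConnection, source_address=source_address)

    def http_open(self, req):
        return self.do_open(self._conn_factory, req)

# Each thread builds its own opener with its own source IP and uses
# opener.open(url) instead of the global urllib.request.urlopen().
opener = urllib.request.build_opener(BoundHTTPHandler(("127.0.0.1", 0)))
```

Because every thread owns its opener, no global state is shared between threads at all.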
stackoverflow_0001858310_ip_address_python_urllib2.txt
|
Q:
cat filename.* > Datei
I'm looking to translate the unix-command
$ cat filename.* > Datei
into a Python program. Can somebody help?
A:
Something like this should get you started:
import glob

outfile = file("Datei", "wb")
for f in glob.glob("filename.*"):
    infile = open(f, "rb")
    outfile.write(infile.read())
    infile.close()
outfile.close()
UPDATE: Of course, input files need to be opened, too.
UPDATE: Explicitly use binary mode.
A:
import glob

output = open('Datei', 'wb')
chunk_size = 8192
for filename in glob.glob('filename.*'):
    input = open(filename, 'rb')
    buffer = input.read(chunk_size)
    while buffer:  # False if buffer == ""
        output.write(buffer)
        buffer = input.read(chunk_size)
    input.close()
output.close()
A:
alternatively
import os

f = open("outfile.txt", "a")
for file in os.listdir("."):
    if file.startswith("filename."):
        for line in open(file):
            f.write(line)
f.close()
A:
Thank you for your help.
My script now:
LOGFILEDIR = "/olatfile/logs"
VORMONAT = time.strftime("%Y-%m", time.localtime(time.time() - 3600*24*30))
LOGDATEIEN = LOGFILEDIR + "/olat.log." + VORMONAT + "-*"
print LOGDATEIEN
OUTPUT = LOGFILEDIR + "/olat.log." + VORMONAT
LOGFILE = OUTPUT
output = open(OUTPUT, 'wb')
chunk_size = 8096
for filename in glob.glob(LOGDATEIEN):
    input = open(filename, 'rb')
    buffer = input.read(chunk_size)
    while len(buffer) > 0:
        output.write(buffer)
        buffer = input.read(chunk_size)
    input.close()
output.close()
The application creates a logfile every day, named like "olat.log.07-12-2009".
My idea was to cat all the logs from one month into one logfile and analyze that one.
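For completeness, a modern Python 3 equivalent of the answers above, streaming each matching file into the output with shutil.copyfileobj instead of reading whole files into memory (the filenames are just the question's examples):

```python
import glob
import shutil

def concatenate(pattern, dest):
    """Append every file matching `pattern` to `dest`, like `cat pattern > dest`.

    sorted() gives a predictable order, matching the shell's sorted glob
    expansion; copyfileobj streams in chunks, so large files are fine.
    """
    with open(dest, "wb") as out:
        for name in sorted(glob.glob(pattern)):
            with open(name, "rb") as src:
                shutil.copyfileobj(src, out)

# usage: concatenate("filename.*", "Datei")
```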
stackoverflow_0001818758_cat_python.txt
|
Q:
How to pass items as args lists in map?
This is a piece of my code. Lambda accepts 3 parameters, and I wanted to pass them as a tuple of positional arguments, but apparently map supplies them as a single argument.
How can I supply those tuples in the bottom as lists of arguments? (I know I can rewrite the lambda, but it will become not well readable)
adds = map((lambda j, f, a:
            j.join([f.format(i) for i in parse.options[a]]) if parse.options[a] else ''),
           ((' ', ' -not -path "{0}" ', 'exclude'),
            (' -or ', '-path "{0}"', 'include')))
A:
The map() description:
map(function, iterable, ...)
Apply function to every item of iterable and return a list of the results. If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables in parallel. If one iterable is shorter than another it is assumed to be extended with None items. If function is None, the identity function is assumed; if there are multiple arguments, map() returns a list consisting of tuples containing the corresponding items from all iterables (a kind of transpose operation). The iterable arguments may be a sequence or any iterable object; the result is always a list.
You need to place the arguments in parallel lists or tuples, and pass them to map() as 3 iterables.
A:
Try putting parentheses around the parameters (tuple parameter unpacking, which works in Python 2 only; it was removed in Python 3):
adds = map((lambda (j, f, a):
            j.join([f.format(i) for i in parse.options[a]]) if parse.options[a] else ''),
           ((' ', ' -not -path "{0}" ', 'exclude'),
            (' -or ', '-path "{0}"', 'include')))
A:
An alternative is to use itertools.starmap which accepts pre-zipped arguments:
adds = itertools.starmap((lambda j, f, a:
            j.join([f.format(i) for i in parse.options[a]]) if parse.options[a] else ''),
        ((' ', ' -not -path "{0}" ', 'exclude'),
         (' -or ', '-path "{0}"', 'include')))
A:
One way is to rewrite as a list comprehension:
adds = [
    j.join([f.format(i) for i in parse.options[a]]) if parse.options[a] else ''
    for j, f, a in
    ((' ', ' -not -path "{0}" ', 'exclude'),
     (' -or ', '-path "{0}"', 'include'))]
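A minimal, self-contained demonstration of the map vs. itertools.starmap difference, using toy data in place of the question's find-option tuples:

```python
from itertools import starmap

pairs = [(' ', '-', 'ab'), (',', '+', 'cd')]

# map() passes each tuple as ONE argument, so the callable must unpack it:
via_map = list(map(lambda t: t[0].join(t[1] + c for c in t[2]), pairs))

# starmap() unpacks each tuple into separate positional arguments:
via_starmap = list(starmap(lambda j, f, a: j.join(f + c for c in a), pairs))

# both produce ['-a -b', '+c,+d']
```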
stackoverflow_0001858720_arguments_map_python.txt
|
Q:
Flexible numeric string parsing in Python
Are there any Python libraries that help parse and validate numeric strings beyond what is supported by the built-in float() function? For example, in addition to simple numbers (1234.56) and scientific notation (3.2e15), I would like to be able to parse formats like:
Numbers with commas: 2,147,483,647
Named large numbers: 5.5 billion
Fractions: 1/4
I did a bit of searching and could not find anything, though I would be surprised if such a library did not already exist.
A:
If you want to convert "localized" numbers such as the American "2,147,483,647" form, you can use the atof() function from the locale module. Example:
import locale
locale.setlocale(locale.LC_NUMERIC, 'en_US')
print locale.atof('1,234,456.23') # Prints 1234456.23
As for fractions, Python now handles them directly (since version 2.6); they can even be built from a string:
from fractions import Fraction
x = Fraction('1/4')
print float(x) # 0.25
Thus, you can parse a number written in any of the first 3 ways you mention, only with the help of the above two standard modules:
try:
    num = float(num_str)
except ValueError:
    try:
        num = locale.atof(num_str)
    except ValueError:
        try:
            num = float(Fraction(num_str))
        except ValueError:
            raise Exception("Cannot parse '%s'" % num_str)  # Or handle '42 billion' here
# 'num' has the numerical value of 'num_str', here.
A:
It should be pretty straightforward to build one in pyparsing; in fact, one of the tutorial pyparsing projects (wordsToNum.py on this page) does some of it already. You're talking about things that don't really have standard representations (standard in the sense of ISO 8601, not standard in the sense of "what everybody knows"), so it could easily be that nobody's done just what you're looking for.
A:
babel has support for the first case (i18n numbers with commas). Docs: http://babel.edgewall.org/wiki/ApiDocs/babel.numbers.
Supporting simple named numbers should not be too hard to code up yourself, same with fractions.
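A naive Python 3 sketch combining the fallbacks discussed above; note that the comma handling simply strips separators rather than being locale-aware, and named numbers like '5.5 billion' are left unhandled:

```python
from fractions import Fraction

def parse_number(s):
    """Hypothetical helper: try plain float, then comma-grouped, then a fraction."""
    s = s.strip()
    for candidate in (s, s.replace(',', '')):
        try:
            return float(candidate)
        except ValueError:
            pass
    # Fraction itself raises ValueError if the string is not of the form 'p/q'.
    return float(Fraction(s))

# parse_number('2,147,483,647') -> 2147483647.0
# parse_number('3.2e15')        -> 3.2e+15
# parse_number('1/4')           -> 0.25
```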
[
"I haven't heard of one. Do you know of any such library for any other languages? That way you could leverage their documentation and tests. \nIf you can't find one, write a bunch of testcases, then we can help you fill out the parsing code.\nGoogle must have one, try searching for 5.5billion * 10, but I don't think they have opensourced anything like that. Depending on how you need to use it, you might be able to use Google to do some of the work ;)\n"
] |
[
-1
] |
[
"numbers",
"parsing",
"python",
"validation"
] |
stackoverflow_0001858117_numbers_parsing_python_validation.txt
|
Q:
Creating portable Django apps - help needed
I'm building a Django app, which I comfortably run (test :)) on an Ubuntu Linux host. I would like to package the app without source code and distribute it to another production machine. Ideally the app could be run by a ./runapp command which starts a CherryPy server that runs the python/django code.
I've discovered several ways of doing this:
Distributing the .pyc files only and building and installing all the requirements on target machine.
Using one of the many tools to package Python apps into a distributable package.
I'm really gunning for nr.2 option, I'd like to have my Django app contained, so it's possible to distribute it without needing to install or configure additional things. Searching the interwebs provided me with more questions than answers and a very sour taste that Django packing is an arcane art that everybody knows but nobody speaks about. :)
I've tried Freeze (fails), Cx_freeze (easy install version fails, repository version works, but the app output fails) and read up on dbuilder.py (which is supposed to work but doesn't really work - I guess). If I understand correctly, most problems originate from the way that Django imports modules (example), but I have no idea how to solve it.
I'll be more than happy if anyone can provide any pointers or good resources online regarding packing/distributing standalone Django applications.
A:
I suggest you base your distro on setuptools (a tool that enhances the standard Python distro mechanism, distutils).
Using setuptools, you should be able to create a Python egg containing your application. The egg's metadata can contain a list of dependencies that will be automatically installed by easy_install (can include Django + any third-party modules/packages that you use).
setuptools/distutils distros can include scripts that will be installed to /usr/bin, so that's how you can include your runapp script.
If you're not familiar with virtualenv, I suggest you take a look at that as well. It is a way to create isolated Python environments, it will be very useful for testing your distro.
Here's a blog post with some info on virtualenv, as well as a discussion about a couple of other nice to know tools: Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip
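A minimal setup.py along these lines might look like the sketch below; the package name, version, dependencies and the runapp entry point are hypothetical placeholders, not taken from the question:

```python
# setup.py -- hedged sketch; names and versions are illustrative only.
from setuptools import setup

setup(
    name="myapp",                      # hypothetical project name
    version="0.1",
    packages=["myapp"],                # your Django project/app packages
    install_requires=[                 # pulled in automatically by easy_install
        "Django",
        "CherryPy",
    ],
    entry_points={
        # installs a `runapp` script onto the user's PATH
        "console_scripts": ["runapp = myapp.main:run"],
    },
)
```

Running `python setup.py bdist_egg` then produces an egg that easy_install can deploy, for example into a virtualenv on the target machine.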
A:
The --noreload option will stop Django auto-detecting which modules have changed. I don't know if that will fix it, but it might.
Another option (and it's not ideal) is to obscure some of your core functionality by packaging it as a dll, which your plain text code will call.
|
Creating portable Django apps - help needed
|
I'm building a Django app, which I comfortably run (test :)) on an Ubuntu Linux host. I would like to package the app without source code and distribute it to another production machine. Ideally the app could be run by a ./runapp command which starts a CherryPy server that runs the python/django code.
I've discovered several ways of doing this:
Distributing the .pyc files only and building and installing all the requirements on target machine.
Using one of the many tools to package Python apps into a distributable package.
I'm really gunning for nr.2 option, I'd like to have my Django app contained, so it's possible to distribute it without needing to install or configure additional things. Searching the interwebs provided me with more questions than answers and a very sour taste that Django packing is an arcane art that everybody knows but nobody speaks about. :)
I've tried Freeze (fails), Cx_freeze (easy install version fails, repository version works, but the app output fails) and read up on dbuilder.py (which is supposed to work but doesn't really work - I guess). If I understand correctly, most problems originate from the way that Django imports modules (example), but I have no idea how to solve it.
I'll be more than happy if anyone can provide any pointers or good resources online regarding packing/distributing standalone Django applications.
|
[
"I suggest you base your distro on setuptools (a tool that enhances the standard Python distro mechanizm distutils).\nUsing setuptools, you should be able to create a Python egg containing your application. The egg's metadata can contain a list of dependencies that will be automatically installed by easy_install (can include Django + any third-party modules/packages that you use).\nsetuptools/distutils distros can include scripts that will be installed to /usr/bin, so that's how you can include your runapp script.\nIf you're not familiar with virtualenv, I suggest you take a look at that as well. It is a way to create isolated Python environments, it will be very useful for testing your distro.\nHere's a blog post with some info on virtualenv, as well as a discussion about a couple of other nice to know tools: Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip\n",
"The --noreload option will stop Django auto-detecting which modules have changed. I don't know if that will fix it, but it might.\nAnother option (and it's not ideal) is to obscure some of your core functionality by packaging it as a dll, which your plain text code will call.\n"
] |
[
7,
0
] |
[] |
[] |
[
"cherrypy",
"django",
"linux",
"python",
"software_distribution"
] |
stackoverflow_0001858749_cherrypy_django_linux_python_software_distribution.txt
|
Q:
What is a library that provides a very simple, universal usage of a Naive Bayes classifier?
I know that NLTK has this. But...is that only for NLP?
Is there a library that specializes in classification?
A:
http://code.google.com/p/pybayes/
A:
Reverend may be another nice option: http://divmod.org/trac/wiki/DivmodReverend
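If a third-party dependency feels heavy for "very simple, universal usage", the algorithm itself fits in a few dozen lines. A hedged pure-Python sketch of the technique, not any particular library's API:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, samples):
        # samples: iterable of (features, label) pairs, where
        # features is a list of hashable tokens.
        self.label_counts = Counter()
        self.feature_counts = defaultdict(Counter)
        self.vocabulary = set()
        for features, label in samples:
            self.label_counts[label] += 1
            for f in features:
                self.feature_counts[label][f] += 1
                self.vocabulary.add(f)
        self.total = sum(self.label_counts.values())
        return self

    def predict(self, features):
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log P(label) + sum over features of log P(feature | label)
            score = math.log(self.label_counts[label] / self.total)
            denom = sum(self.feature_counts[label].values()) + len(self.vocabulary)
            for f in features:
                score += math.log((self.feature_counts[label][f] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Working in log space avoids the underflow you would otherwise get from multiplying many small probabilities.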
|
What is a library that provides a very simple, universal usage of a Naive Bayes classifier?
|
I know that NLTK has this. But...is that only for NLP?
Is there a library that specializes in classification?
|
[
"http://code.google.com/p/pybayes/\n",
"Reverend may be another nice option: http://divmod.org/trac/wiki/DivmodReverend\n"
] |
[
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001858447_python.txt
|
Q:
Mark string as safe in Mako
I'm using Pylons with Mako templates and I want to avoid typing this all the time:
${ h.some_function_that_outputs_html() | n }
I want to somehow mark the function, or a variable as safe (you can do that in Django) so I don't have to pipe-en all the time. Any ideas?
A:
I just found out that if you put an __html__ method in your class, then Mako will just call that method and output whatever it returns in the template.
So I did:
def __html__(self):
return unicode(self)
That's basically what h.literal does.
A:
According to the mako docs about filtering, you can set the default filters that are applied inside templates when creating a new Template as well as for the TemplateLookup (in which case this would apply by default for all the templates that it looks up), with the default_filters argument.
Pylons uses this argument with TemplateLookup to set the defaults for your project inside the config/environment.py file:
# Create the Mako TemplateLookup, with the default auto-escaping
config['pylons.app_globals'].mako_lookup = TemplateLookup(
directories=paths['templates'],
error_handler=handle_mako_error,
module_directory=os.path.join(app_conf['cache_dir'], 'templates'),
input_encoding='utf-8', default_filters=['escape'],
imports=['from webhelpers.html import escape'])
This is why you get the escaping by default (which is not the case when you use Mako by yourself). So you could either change it globally in the config file, or not rely on the standard lookup. Beware that you should of course then explicitly use a filter to escape those things that do need escaping.
You can also pass a string "marked as safe" with the Pylons helper h.literal, for example if you would pass h.literal('This will <b>not</b> be escaped') to the template, say as a variable named spam, you could just use ${spam} without any escaping.
If you want the same effect when you call a certain function from inside a template, this function would need to return such a literal, or provide a helper for that function that calls h.literal on the result if you want to leave the original function alone. (or I guess you could also call it via a "Filtering def" (see same Mako doc as above), haven't experimented with that yet)
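The __html__ convention mentioned in the first answer is easy to see in isolation. A simplified sketch of how a cooperating escape filter treats it (an illustration of the protocol, not Mako's actual implementation):

```python
import html

class literal(str):
    """A string that cooperating escape filters pass through untouched."""
    def __html__(self):
        return str(self)

def escape_filter(value):
    # Filters that follow the __html__ convention (Mako's escaping,
    # WebHelpers' escape) call __html__ instead of escaping.
    if hasattr(value, "__html__"):
        return value.__html__()
    return html.escape(str(value))
```

So escape_filter("&lt;b&gt;x&lt;/b&gt;") escapes, while escape_filter(literal(...)) passes the markup through, which is exactly what h.literal relies on.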
|
Mark string as safe in Mako
|
I'm using Pylons with Mako templates and I want to avoid typing this all the time:
${ h.some_function_that_outputs_html() | n }
I want to somehow mark the function, or a variable as safe (you can do that in Django) so I don't have to pipe-en all the time. Any ideas?
|
[
"I just found out that if you put a html method in your class, then Mako will just call that method and output whatever it returns in the template.\nSo I did:\ndef __html__(self):\n return unicode(self)\n\nThat's basically what h.literal does.\n",
"According to the mako docs about filtering, you can set the default filters that are applied inside templates when creating a new Template as well as for the TemplateLookup (in which case this would apply by default for all the templates that it looks up), with the default_filters argument.\nPylons uses this argument with TemplateLookup to set the defaults for your project inside the config/environment.py file:\n# Create the Mako TemplateLookup, with the default auto-escaping\nconfig['pylons.app_globals'].mako_lookup = TemplateLookup(\n directories=paths['templates'],\n error_handler=handle_mako_error,\n module_directory=os.path.join(app_conf['cache_dir'], 'templates'),\n input_encoding='utf-8', default_filters=['escape'],\n imports=['from webhelpers.html import escape'])\n\nThis is why you get the escaping by default (which is not the case when you use Mako by yourself). So you could either change it globally in the config file, or not rely on the standard lookup. Beware that you should of course then explicitly use a filter to escape those things that do need escaping.\nYou can also pass a string \"marked as safe\" with the Pylons helper h.literal, for example if you would pass h.literal('This will <b>not</b> be escaped') to the template, say as a variable named spam, you could just use ${spam} without any escaping.\nIf you want the same effect when you call a certain function from inside a template, this function would need to return such a literal, or provide a helper for that function that calls h.literal on the result if you want to leave the original function alone. (or I guess you could also call it via a \"Filtering def\" (see same Mako doc as above), haven't experimented with that yet)\n"
] |
[
10,
3
] |
[] |
[] |
[
"mako",
"pylons",
"python"
] |
stackoverflow_0001825343_mako_pylons_python.txt
|
Q:
Need help understanding how this recursive function is working
Here's a function (credit to user Abbot, for providing it in another question)
def traverse(ftp):
level = {}
for entry in (path for path in ftp.nlst() if path not in ('.', '..')):
ftp.cwd(entry)
level[entry] = traverse(ftp)
ftp.cwd('..')
return level
Here's what I don't understand: when Python enters the function, it creates an empty dictionary (level). In the for loop, it stores a directory name as a key in the dictionary. As for that key's value, Python enters the function again and searches for a directory, which becomes that key's value.
But how is the level dictionary remembering the values inside? I mean, shouldn't it be reset/emptied every time Python enters the function?
A:
No. Every "instance" of the function has its own copy of level and there are no side effects between the various copies of level.
Take this folder tree:
root
`-home
|- lyrae
| |- ftp.py
| `- http.py
`- badp
Here's the (simplified) execution flow when you call ftp on root:
ftp(root) creates an empty level dictionary
ftp(root) enumerates subfolders: (home).
ftp(root) picks the first subfolder and changes directory into it.
ftp(root) sets level[home] to the result of ftp in the current folder.
ftp(home) creates an empty level dictionary
ftp(home) enumerates subfolders: (lyrae, badp).
ftp(home) picks the first subfolder and changes directory into it.
ftp(home) sets level[lyrae] to the result of ftp in the current folder.
ftp(lyrae) creates an empty level dictionary
ftp(lyrae) enumerates subfolders: ().
ftp(lyrae) is out of subfolders to parse and returns level.
ftp(home) completes the assignment: levels = {'lyrae': {}}
ftp(home) changes to the next folder.
ftp(home) sets level[badp] to the result of ftp in the current folder.
ftp(badp) creates an empty level dictionary
ftp(badp) enumerates subfolders: ().
ftp(badp) is out of subfolders to parse and returns level.
ftp(home) completes the assignment: levels = {'lyrae': {}, 'badp': {}}
ftp(home) is out of subfolders to parse and returns level.
ftp(root) completes the assignment: levels = {'home': {'lyrae': {}, 'badp': {}}}
ftp(root) is out of subfolders to parse and returns level.
A:
These other answers don't quite explain enough I think. Each recursive entrance into this function creates a new local level dictionary. But crucially, also returns it. This means that the local version of level from each recursion becomes a dictionary tree of levels. Once the recursion is unrolled you're left with a tree of dictionaries which refer to each other. This means that the local variables that get created don't get garbage collected because there's a reference to the top most level dictionary on the stack that's been returned from the outer most function.
A:
level is a local variable. Every "run" of the function has its own variable called level, so the variables don't interfere with each other.
A:
The scope of level is limited to the function only. Even if a function calls itself, that doesn't mean the new call's internal variables (a different level) are the same as this one's.
A:
The variable level exists only in the scope of the function; at the end of the function, local variables are discarded, so each execution of traverse has its own level dictionary. Nothing will be re-written or over-written.
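The execution flow above can be checked concretely with a fake FTP object; FakeFTP below is a hypothetical stand-in for ftplib.FTP that only knows directories:

```python
class FakeFTP:
    """In-memory stand-in for ftplib.FTP over a tree of nested dicts."""
    def __init__(self, tree):
        self.stack = [tree]           # chain of dicts; the last one is the cwd
    def nlst(self):
        return ['.', '..'] + list(self.stack[-1])
    def cwd(self, entry):
        if entry == '..':
            self.stack.pop()
        else:
            self.stack.append(self.stack[-1][entry])

def traverse(ftp):
    level = {}                         # a fresh dict per call
    for entry in (path for path in ftp.nlst() if path not in ('.', '..')):
        ftp.cwd(entry)
        level[entry] = traverse(ftp)   # the recursive call builds its own dict
        ftp.cwd('..')
    return level

tree = {'home': {'lyrae': {}, 'badp': {}}}
print(traverse(FakeFTP(tree)))         # {'home': {'lyrae': {}, 'badp': {}}}
```

The result is the same tree of dictionaries the step-by-step trace above arrives at: each recursive call returned its own local level, and those returns are what tie the dicts together.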
|
Need help understanding how this recursive function is working
|
Here's a function (credit to user Abbot, for providing it in another question)
def traverse(ftp):
level = {}
for entry in (path for path in ftp.nlst() if path not in ('.', '..')):
ftp.cwd(entry)
level[entry] = traverse(ftp)
ftp.cwd('..')
return level
Here's what I don't understand: when Python enters the function, it creates an empty dictionary (level). In the for loop, it stores a directory name as a key in the dictionary. As for that key's value, Python enters the function again and searches for a directory, which becomes that key's value.
But how is the level dictionary remembering the values inside? I mean, shouldn't it be reset/emptied every time Python enters the function?
|
[
"No. Every \"instance\" of the function has its own copy of level and there are no side effects between the various copies of level.\nTake this folder tree:\nroot\n `-home\n |- lyrae\n | |- ftp.py\n | `- http.py\n `- badp\n\nHere's the (simplified) execution flow when you call ftp on root:\n\nftp(root) creates an empty level dictionary\nftp(root) enumerates subfolders: (home).\nftp(root) picks the first subfolder and changes directory into it.\nftp(root) sets level[home] to the result of ftp in the current folder.\n\n\n\nftp(home) creates an empty level dictionary\nftp(home) enumerates subfolders: (lyrae, badp).\nftp(home) picks the first subfolder and changes directory into it.\nftp(home) sets level[lyrae] to the result of ftp in the current folder.\n\n\n\nftp(lyrae) creates an empty level dictionary\nftp(lyrae) enumerates subfolders: ().\nftp(lyrae) is out of subfolders to parse and returns level.\n\n\n\nftp(home) completes the assignment: levels = {'lyrae': {}}\nftp(home) changes to the next folder.\nftp(home) sets level[badp] to the result of ftp in the current folder.\n\n\n\nftp(badp) creates an empty level dictionary\nftp(badp) enumerates subfolders: ().\nftp(badp) is out of subfolders to parse and returns level.\n\n\n\nftp(home) completes the assignment: levels = {'lyrae': {}, 'badp': {}}\nftp(home) is out of subfolders to parse and returns level.\n\n\n\nftp(root) completes the assignment: levels = {'home': {'lyrae': {}, 'badp': {}}}\nftp(root) is out of subfolders to parse and returns level.\n\n",
"These other answers don't quite explain enough I think. Each recursive entrance into this function creates a new local level dictionary. But crucially, also returns it. This means that the local version of level from each recursion becomes a dictionary tree of levels. Once the recursion is unrolled you're left with a tree of dictionaries which refer to each other. This means that the local variables that get created don't get garbage collected because there's a reference to the top most level dictionary on the stack that's been returned from the outer most function.\n",
"level is a local variable. Every \"run\" of the function has its own variable called level, so the variables don't interfere with each other.\n",
"The scope of level is limited to the function only. Even if a function calls itself, it doesn't mean that that function call's internal variables (a different level) is the same as this one's.\n",
"variable level exists only in the scope of a function, at the end of function local variables discarded, so for each execution of traverse there will be it's own level dictionary. Nothing will be re-written or over-written.\n"
] |
[
7,
2,
1,
1,
1
] |
[] |
[] |
[
"python",
"recursion"
] |
stackoverflow_0001860049_python_recursion.txt
|
Q:
How do I get the remote user agent inside a Genshi template when using Trac, and WSGI?
I'm trying to do some customization of a Trac project management website and have run into an interesting problem. The project has a set of images that are both SVG and PNG. The SVG images have numerous advantages including multiple hyperlinks and a smaller transmitted size against PNG which is bigger and can only link to a single document.
I realize that it is possible to use jQuery to sniff out the user agent after the page has been loaded and replace the PNG with the SVG version of the image, but this results in the PNG being sent to all clients. I also can have Genshi replace the PNG with SVG for all clients and then use jQuery to put the PNG back, but the same problem results. I could use jQuery to insert the appropriate images for all clients, but that just seems silly to require the client to do what the server should.
Is there a way I can get browser information inside of a Genshi template? It's a little more difficult than just calling for environment variables because of the fact that I'm running Trac using WSGI. I've looked through the output of repr(locals()) and didn't see anything that looked like it solved my problem. I'd also like to avoid modifying the Trac source code.
A:
user_agent = environ.get('HTTP_USER_AGENT', None)
Or if environ is wrapped in some sort of Request object:
user_agent = request.user_agent
btw, You should probably look at HTTP_ACCEPT header instead of HTTP_USER_AGENT to find out what representation should be sent.
A:
Okay, so I did some digging on the issue, not by grepping through the source code, but by writing a custom Genshi handler that spit out the recursive repr() of every element in locals (with help provided by a previous question that addressed how to print out all variables in scope). I had originally missed the req object. It looks like it's as simple as using req.environ['HTTP_USER_AGENT']. The problem was in finding the req object in the first place. Grepping through the source code I still can't find exactly where the templates are instantiated, so this proves to be much easier and better than a patch.
For completeness, here's the bit of Genshi template I used to replace the logo only for newer versions of Gecko based browsers. It's a little hacky and probably suboptimal, but it works and it doesn't send SVG to browsers that lie and say they're "like Gecko" but can't render SVG properly -- yes, I'm looking at you Webkit.
<py:if test="'Gecko/' in req.environ['HTTP_USER_AGENT'] and [int(x.split('/')[1]) for x in req.environ['HTTP_USER_AGENT'].split() if x.startswith('Gecko')][0] > 20080101">
<div py:match="div[@id='header']">
<object type="image/svg+xml" id="svgLogo" data="${href.chrome('site/logo.svg')}" style="width=${chrome['logo']['width']}px; height=${chrome['logo']['height']}px;"></object>
</div>
</py:if>
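The version check embedded in that template can also be expressed as plain Python over the WSGI environ. A hedged sketch (the helper name, the user-agent strings, and the 'Gecko/' token format are illustrative assumptions):

```python
def renders_svg(environ):
    """True for Gecko builds dated after 2008-01-01, mirroring the
    template's test; other engines, including 'like Gecko' fakes, fail."""
    ua = environ.get('HTTP_USER_AGENT', '')
    if 'Gecko/' not in ua:
        return False
    dates = [int(token.split('/')[1]) for token in ua.split()
             if token.startswith('Gecko/')]
    return bool(dates) and dates[0] > 20080101
```

Requiring the literal 'Gecko/' token (with the slash) is what filters out Webkit's "(KHTML, like Gecko)" user agents.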
|
How do I get the remote user agent inside a Genshi template when using Trac, and WSGI?
|
I'm trying to do some customization of a Trac project management website and have run into an interesting problem. The project has a set of images that are both SVG and PNG. The SVG images have numerous advantages including multiple hyperlinks and a smaller transmitted size against PNG which is bigger and can only link to a single document.
I realize that it is possible to use jQuery to sniff out the user agent after the page has been loaded and replace the PNG with the SVG version of the image, but this results in the PNG being sent to all clients. I also can have Genshi replace the PNG with SVG for all clients and then use jQuery to put the PNG back, but the same problem results. I could use jQuery to insert the appropriate images for all clients, but that just seems silly to require the client to do what the server should.
Is there a way I can get browser information inside of a Genshi template? It's a little more difficult than just calling for environment variables because of the fact that I'm running Trac using WSGI. I've looked through the output of repr(locals()) and didn't see anything that looked like it solved my problem. I'd also like to avoid modifying the Trac source code.
|
[
"user_agent = environ.get('HTTP_USER_AGENT', None)\n\nOr if environ is wrapped in some sort of Request object:\nuser_agent = request.user_agent\n\nbtw, You should probably look at HTTP_ACCEPT header instead of HTTP_USER_AGENT to find out what representation should be sent.\n",
"Okay, so I did some digging on the issue, not by grepping through the source code, but by writing a custom Genshi handler that spit out the recursive repr() of every element in locals (with help provided by a previous question that addressed how to print out all variables in scope). I had originally missed the req object. It looks like it's as simple as using req.environ['HTTP_USER_AGENT']. The problem was in finding the req object in the first place. Grepping through the source code I still can't find exactly where the templates are instantiated, so this proves to be much easier and better than a patch.\nFor completeness, here's the bit of Genshi template I used to replace the logo only for newer versions of Gecko based browsers. It's a little hacky and probably suboptimal, but it works and it doesn't send SVG to browsers that lie and say they're \"like Gecko\" but can't render SVG properly -- yes, I'm looking at you Webkit.\n<py:if test=\"'Gecko/' in req.environ['HTTP_USER_AGENT'] and [int(x.split('/')[1]) for x in req.environ['HTTP_USER_AGENT'].split() if x.startswith('Gecko')][0] > 20080101\">\n <div py:match=\"div[@id='header']\">\n <object type=\"image/svg+xml\" id=\"svgLogo\" data=\"${href.chrome('site/logo.svg')}\" style=\"width=${chrome['logo']['width']}px; height=${chrome['logo']['height']}px;\"></object>\n </div>\n</py:if>\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"genshi",
"python",
"trac",
"wsgi"
] |
stackoverflow_0001855576_genshi_python_trac_wsgi.txt
|
Q:
Using AppEngine XMPP for Client Notifications
I've been looking for a way to tell clients about expired objects and AppEngine's XMPP implementation seems really interesting because it's scalable, should be reliable and can contain up to 100kb of data.
But as I understand it, before a client can listen to messages, he should have a gmail account. That's very impractical.
Is there maybe a way to make temporary readonly XMPP accounts to use with this?
A:
No this isn't true: you can have the AppEngine robot as contact over any Jabber/XMPP based networks.
Unless you are talking about the need for a GMAIL account to create an AppEngine robot... in which case YES you need to have a Google account.
A:
In that situation, I would perform ajax calls every 5 minutes, for example, to check it.
It's easy to implement, and the data exchanged can be kept to a minimum (taking advantage of the fast query/response benefits of google-app).
Regards.
A:
jldupont has it right for the first point : any JID should work :)
For the 2nd point, the only option is probably to set up your own server and allow anonymous access + temporary accounts.
|
Using AppEngine XMPP for Client Notifications
|
I've been looking for a way to tell clients about expired objects and AppEngine's XMPP implementation seems really interesting because it's scalable, should be reliable and can contain up to 100kb of data.
But as I understand it, before a client can listen to messages, he should have a gmail account. That's very impractical.
Is there maybe a way to make temporary readonly XMPP accounts to use with this?
|
[
"\nNo this isn't true: you can have the AppEngine robot as contact over any Jabber/XMPP based networks.\n\nUnless you are talking about the need for a GMAIL account to create an AppEngine robot... in which case YES you need to have a Google account.\n",
"In that situation, I would perform ajax calls every 5 minutes in example to check it.\nIt's easy to implement and the data exchanged can be reduced to the max (taking advantage of \"fast query/response\" bonifications of google-app).\nRegards.\n",
"jldupont has it right for the first point : any JID should work :) \nFor the 2nd point, the only option is probably to set up your own server and allow anonymous access + temporary accounts. \n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"google_app_engine",
"python",
"web_services",
"xmpp"
] |
stackoverflow_0001859634_google_app_engine_python_web_services_xmpp.txt
|
Q:
How do I read binary C++ protobuf data using Python protobuf?
The Python version of Google protobuf gives us only:
SerializeAsString()
Whereas the C++ version gives us both:
SerializeToArray(...)
SerializeAsString()
We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string?
Is this the correct way of doing it?
binary = get_binary_data()
binary_size = get_binary_size()
string = None
for i in range(len(binary_size)):
string += i
message = new MyMessage()
message.ParseFromString(string)
Update:
Here's a new example, and a problem:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
data = file.read(message_length)
eof = not data
if not eof:
foo_bar = FooBar()
foo_bar.ParseFromString(data)
When we get to the foo_bar.ParseFromString(data) line, I get this error:
Exception Type: DecodeError
Exception Value: Too many bytes when decoding varint.
Update 2:
It turns out, that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding).
This padding comes from using the C++ protobuf function, SerializeToArray, on a fixed-length buffer. To eliminate this, I have used this temporary code:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
data = file.read(message_length)
eof = not data
string = ''
for i in range(0, len(data)):
byte = data[i]
if byte != '\xcc': # yuck!
string += data[i]
if not eof:
foo_bar = FooBar()
foo_bar.ParseFromString(string)
There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
A:
I'm not an expert with Python, but you can pass the result of a file.read() operation into message.ParseFromString(...) without having to build a new string type or anything.
A:
Python strings can contain any character, i.e. they are capable of holding "binary" data directly. There should be no need to convert from string to "binary".
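The fix described in Update 2 (prefix each message with its size instead of padding to a fixed length) can be sketched without protobuf installed. The 4-byte little-endian header is an assumption of this sketch; a varint prefix would work just as well, but the framing idea is the same:

```python
import struct
from io import BytesIO

def write_message(stream, payload):
    # Prefix each serialized message with its size, as the protobuf
    # docs advise for streaming multiple messages into one file.
    stream.write(struct.pack('<I', len(payload)))
    stream.write(payload)

def read_messages(stream):
    while True:
        header = stream.read(4)
        if not header:
            return
        (size,) = struct.unpack('<I', header)
        yield stream.read(size)   # exactly `size` bytes -- no padding
```

Each chunk yielded by read_messages can then go straight into ParseFromString, with no '\xcc' stripping needed.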
|
How do I read binary C++ protobuf data using Python protobuf?
|
The Python version of Google protobuf gives us only:
SerializeAsString()
Whereas the C++ version gives us both:
SerializeToArray(...)
SerializeAsString()
We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string?
Is this the correct way of doing it?
binary = get_binary_data()
binary_size = get_binary_size()
string = None
for i in range(len(binary_size)):
string += i
message = new MyMessage()
message.ParseFromString(string)
Update:
Here's a new example, and a problem:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
data = file.read(message_length)
eof = not data
if not eof:
foo_bar = FooBar()
foo_bar.ParseFromString(data)
When we get to the foo_bar.ParseFromString(data) line, I get this error:
Exception Type: DecodeError
Exception Value: Too many bytes when decoding varint.
Update 2:
It turns out, that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding).
This padding comes from using the C++ protobuf function, SerializeToArray, on a fixed-length buffer. To eliminate this, I have used this temporary code:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
data = file.read(message_length)
eof = not data
string = ''
for i in range(0, len(data)):
byte = data[i]
if byte != '\xcc': # yuck!
string += data[i]
if not eof:
foo_bar = FooBar()
foo_bar.ParseFromString(string)
There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
|
[
"I'm not an expert with Python, but you can pass the result of a file.read() operation into message.ParseFromString(...) without having to build a new string type or anything.\n",
"Python strings can contain any character, i.e. they are capable of holding \"binary\" data directly. There should be no need to convert from string to \"binary\".\n"
] |
[
4,
4
] |
[] |
[] |
[
"c++",
"protocol_buffers",
"python"
] |
stackoverflow_0001860187_c++_protocol_buffers_python.txt
|
Q:
How do I distinguish "module not found" from "module threw exception" on ImportError?
In Python, import does_not_exist raises ImportError, and
import exists
exists.py:
import does_not_exist
will also raise ImportError.
How should I tell the difference in code?
A:
The only method I know is to check if the toplevel modulename "exists" is in the Exception's message or not:
try:
import exists
except ImportError as exc:
if "exists" in str(exc):
pass
else:
raise
Could this be a feature request for Python's ImportError? Having a variable for the module name would certainly be convenient..
A:
You can use the tb_next of the traceback. It will be different from None if the exception occured on another module
import sys
try:
import exists
except Exception, e:
print "None on exists", sys.exc_info()[2].tb_next == None
try:
import notexists
except Exception, e:
print "None on notexists", sys.exc_info()[2].tb_next == None
>>> None on exists False
>>> None on notexists True
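For what it's worth, on Python 3.3 and later (newer than this question) ImportError carries a name attribute, which allows a less fragile check than string matching. A sketch, where safe_import is an invented helper name:

```python
import importlib

def safe_import(modname):
    """Import modname, distinguishing 'not found' from 'raised during import'."""
    try:
        return importlib.import_module(modname)
    except ImportError as exc:
        # exc.name (Python 3.3+) is the module that could not be found
        if getattr(exc, 'name', None) == modname:
            return None  # the requested module itself is missing
        raise  # the module exists, but something it imports is missing
```

If the module exists but raises a nested ImportError, `exc.name` holds the inner missing module's name, so the exception propagates instead of being swallowed.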
|
How do I distinguish "module not found" from "module threw exception" on ImportError?
|
In Python, import does_not_exist raises ImportError, and
import exists
exists.py:
import does_not_exist
will also raise ImportError.
How should I tell the difference in code?
|
[
"The only method I know is to check if the toplevel modulename \"exists\" is in the Exception's message or not:\ntry:\n import exists\nexcept ImportError as exc:\n if \"exists\" in str(exc):\n pass\n else:\n raise\n\nCould this be a feature request for Python's ImportError? Having a variable for the module name would certainly be convenient..\n",
"You can use the tb_next of the traceback. It will be different from None if the exception occured on another module\nimport sys\ntry:\n import exists\nexcept Exception, e:\n print \"None on exists\", sys.exc_info()[2].tb_next == None\n\ntry:\n import notexists\nexcept Exception, e:\n print \"None on notexists\", sys.exc_info()[2].tb_next == None\n\n>>> None on exists False\n>>> None on notexists True\n\n"
] |
[
3,
2
] |
[] |
[] |
[
"importerror",
"python"
] |
stackoverflow_0001860363_importerror_python.txt
|
Q:
Pygame sprite transformation with interpolation
I'm currently working on a Python/Pygame module to wrap some basic sprite animation. Animation in the sense that the image itself is static but I apply rotation and scale with start and end values with a sine wave interpolation. That is, sprite transformation like the ones that could be made in Flash. I hope you understand, otherwise feel free to ask and I'll try to clarify.
I can't find a module that does this already. Does anyone know of one? Would save me some work. :)
Edit: Oh, and if this transformation with interpolation thingie has a proper name I would love to hear it. Would probably make my search results better.
A:
You can transform images by pygame.transform, but interpolation is not included in pygame.
Rabbyt provides animation including interpolation, even though I haven't used it.
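If you end up rolling the interpolation yourself, a sine ease-in/out is only a few lines. A minimal sketch (sine_ease is an invented name, not a pygame API):

```python
import math

def sine_ease(start, end, t):
    """Interpolate start -> end with t in [0, 1], easing in and out on a sine curve."""
    eased = (1.0 - math.cos(t * math.pi)) / 2.0  # 0 at t=0, 1 at t=1
    return start + (end - start) * eased

# e.g. animate a sprite's rotation from 0 to 90 degrees over 30 frames
angles = [sine_ease(0.0, 90.0, frame / 29.0) for frame in range(30)]
```

Each frame's value could then be fed to pygame.transform.rotate (or rotozoom for combined rotation and scale).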
|
Pygame sprite transformation with interpolation
|
I'm currently working on a Python/Pygame module to wrap some basic sprite animation. Animation in the sense that the image itself is static but I apply rotation and scale with start and end values with a sine wave interpolation. That is, sprite transformation like the ones that could be made in Flash. I hope you understand, otherwise feel free to ask and I'll try to clarify.
I can't find a module that does this already. Does anyone know of one? Would save me some work. :)
Edit: Oh, and if this transformation with interpolation thingie has a proper name I would love to hear it. Would probably make my search results better.
|
[
"You can transform images by pygame.transform, but interpolation is not included in pygame. \nRabbyt provides animation including interpolation, even though I haven't used it.\n"
] |
[
1
] |
[] |
[] |
[
"animation",
"interpolation",
"pygame",
"python",
"sprite"
] |
stackoverflow_0001859125_animation_interpolation_pygame_python_sprite.txt
|
Q:
Object appended to a list instance appears in a different instance of that list
I was writing this little piece of code as an exercise in object-oriented programming.
Here I'm trying to define a house as a list of rooms and each room as a list of devices (lamps, for example).
First I created all the objects and then appended the two rooms to the house and a different device to each room. Pretty basic.
The problem is that it seems that the device is being appended to both rooms. Why is that?
The code:
#! /usr/bin/python
class House:
def __init__(self, rooms = list()):
self.rooms = rooms
print('house created')
class Room:
def __init__(self, name = 'a room', devs = list()):
self.name = name
self.devs = devs
print('room ' + self.name + ' created')
class Device:
def __init__(self, name = 'a device'):
self.name = name
print('device ' + self.name + ' created')
def main():
#1
h = House()
r1 = Room(name = 'R1')
r2 = Room(name = 'R2')
d1 = Device(name = 'lamp1')
d2 = Device(name = 'lamp2')
#2
h.rooms.append(r1)
h.rooms.append(r2)
for room in h.rooms:
print room.name
print h.rooms[0]
print h.rooms[1]
h.rooms[1].devs.append(d1)
#3
for room in h.rooms:
print room.name
for dev in room.devs:
print('room ' + room.name + ' > ' + dev.name)
print room
print dev
if __name__ == '__main__' : main()
And the output.
house created
room R1 created
room R2 created
device lamp1 created
device lamp2 created
R1
R2
<__main__.Room instance at 0xb7d8a58c>
<__main__.Room instance at 0xb7d8a5ac>
R1
room R1 > lamp1
<__main__.Room instance at 0xb7d8a58c>
<__main__.Device instance at 0xb7d8a5cc>
R2
room R2 > lamp1
<__main__.Room instance at 0xb7d8a5ac>
<__main__.Device instance at 0xb7d8a5cc>
Note that the same instance of d1 is in both rooms, r1 and r2.
A:
Default parameter values for functions are evaluated only once. This means that all instances of House will use the same list instance for self.rooms (if rooms parameter wasn't given in construction). In the same manner, all instances of Room will share the same list for self.devs.
To solve this, write the code like this:
def __init__(self, rooms = None):
if rooms is None:
rooms = []
self.rooms = rooms
print('house created')
And the same thing for the other classes.
A:
The default argument is evaluated once, at the point of declaration of the method. That value is then used in all calls to the method.
There are other question on stackoverflow exploring the reasons for this design and how to best avoid these mutable default arguments.
A:
def __init__(self, name = 'a room', devs = list()):
self.name = name
self.devs = devs
print('room ' + self.name + ' created')
When you do this, list() is evaluated only once, so it is always the same list. You don't get a new empty list each time the constructor is called, you get the same empty list. To fix that you'll want to make a copy.
Also list() is more idiomatically written as [].
def __init__(self, name='a room', devs=[]):
self.name = name
self.devs = list(devs)
print('room ' + self.name + ' created')
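The shared-default pitfall and the usual fix are easy to demonstrate side by side:

```python
def append_shared(item, bucket=[]):   # default list created once, at def time
    bucket.append(item)
    return bucket

def append_fresh(item, bucket=None):  # the usual fix: sentinel + new list
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_shared('a'))  # ['a']
print(append_shared('b'))  # ['a', 'b']  <- the same list again!
print(append_fresh('a'))   # ['a']
print(append_fresh('b'))   # ['b']
```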
|
Object appended to a list instance appears in a different instance of that list
|
I was writing this little piece of code as an exercise in object-oriented programming.
Here I'm trying to define a house as a list of rooms and each room as a list of devices (lamps, for example).
First I created all the objects and then appended the two rooms to the house and a different device to each room. Pretty basic.
The problem is that it seems that the device is being appended to both rooms. Why is that?
The code:
#! /usr/bin/python
class House:
def __init__(self, rooms = list()):
self.rooms = rooms
print('house created')
class Room:
def __init__(self, name = 'a room', devs = list()):
self.name = name
self.devs = devs
print('room ' + self.name + ' created')
class Device:
def __init__(self, name = 'a device'):
self.name = name
print('device ' + self.name + ' created')
def main():
#1
h = House()
r1 = Room(name = 'R1')
r2 = Room(name = 'R2')
d1 = Device(name = 'lamp1')
d2 = Device(name = 'lamp2')
#2
h.rooms.append(r1)
h.rooms.append(r2)
for room in h.rooms:
print room.name
print h.rooms[0]
print h.rooms[1]
h.rooms[1].devs.append(d1)
#3
for room in h.rooms:
print room.name
for dev in room.devs:
print('room ' + room.name + ' > ' + dev.name)
print room
print dev
if __name__ == '__main__' : main()
And the output.
house created
room R1 created
room R2 created
device lamp1 created
device lamp2 created
R1
R2
<__main__.Room instance at 0xb7d8a58c>
<__main__.Room instance at 0xb7d8a5ac>
R1
room R1 > lamp1
<__main__.Room instance at 0xb7d8a58c>
<__main__.Device instance at 0xb7d8a5cc>
R2
room R2 > lamp1
<__main__.Room instance at 0xb7d8a5ac>
<__main__.Device instance at 0xb7d8a5cc>
Note that the same instance of d1 is in both rooms, r1 and r2.
|
[
"Default parameter values for functions are evaluated only once. This means that all instances of House will use the same list instance for self.rooms (if rooms parameter wasn't given in construction). In the same manner, all instances of Room will share the same list for self.devs.\nTo solve this, write the code like this:\ndef __init__(self, rooms = None):\n if rooms is None:\n rooms = []\n self.rooms = rooms\n print('house created')\n\nAnd the same thing for the other classes.\n",
"The default argument is evaluated once, at the point of declaration of the method. That value is then used in all calls to the method.\nThere are other question on stackoverflow exploring the reasons for this design and how to best avoid these mutable default arguments.\n",
"def __init__(self, name = 'a room', devs = list()):\n self.name = name\n self.devs = devs\n print('room ' + self.name + ' created')\n\nWhen you do this list() actually is always the same list. You don't get a new empty list each time the constructor is called, you get the same empty list. To fix that you'll want to make a copy.\nAlso list() is more idiomatically written as [].\ndef __init__(self, name='a room', devs=[]):\n self.name = name\n self.devs = list(devs)\n print('room ' + self.name + ' created')\n\n"
] |
[
9,
3,
0
] |
[] |
[] |
[
"arguments",
"default",
"mutable",
"python"
] |
stackoverflow_0001860737_arguments_default_mutable_python.txt
|
Q:
Inheritance in Python Such That All Base Functions Are Called
Basically, what I want is to do this:
class B:
def fn(self):
print 'B'
class A:
def fn(self):
print 'A'
@extendInherit
class C(A,B):
pass
c=C()
c.fn()
And have the output be
A
B
How would I implement the extendInherit decorator?
A:
This is not a job for decorators. You want to completely change the normal behaviour of a class, so this is actually a job for a metaclass.
import types
class CallAll(type):
""" MetaClass that adds methods to call all superclass implementations """
def __new__(meta, clsname, bases, attrs):
## collect a list of functions defined on superclasses
funcs = {}
for base in bases:
for name, val in vars(base).iteritems():
if type(val) is types.FunctionType:
if name in funcs:
funcs[name].append( val )
else:
funcs[name] = [val]
## now we have all methods, so decorate each of them
for name in funcs:
def caller(self, *args,**kwargs):
""" calls all baseclass implementations """
for func in funcs[name]:
func(self, *args,**kwargs)
attrs[name] = caller
return type.__new__(meta, clsname, bases, attrs)
class B:
def fn(self):
print 'B'
class A:
def fn(self):
print 'A'
class C(A,B, object):
__metaclass__=CallAll
c=C()
c.fn()
A:
I personally wouldn't try doing this with a decorator since using new-style classes and super(), the following can be achieved:
>>> class A(object):
... def __init__(self):
... super(A, self).__init__()
... print "A"
...
>>> class B(object):
... def __init__(self):
... super(B, self).__init__()
... print "B"
...
>>> class C(A, B):
... def __init__(self):
... super(C, self).__init__()
...
>>> foo = C()
B
A
I'd imagine method invocations would work the same way.
A:
A metaclass is a possible solution, but somewhat complex. super can do it very simply (with new style classes of course: there's no reason to use legacy classes in new code!):
class B(object):
def fn(self):
print 'B'
try: super(B, self).fn()
except AttributeError: pass
class A(object):
def fn(self):
print 'A'
try: super(A, self).fn()
except AttributeError: pass
class C(A, B): pass
c = C()
c.fn()
You need the try/except to support any order of single or multiple inheritance (since at some point there will be no further base along the method-resolution-order, MRO, defining a method named fn, you need to catch and ignore the resulting AttributeError). But as you see, differently from what you appear to think based on your comment to a different answer, you don't necessarily need to override fn in your leafmost class unless you need to do something specific to that class in such an override -- super works fine on purely inherited (not overridden) methods, too!
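For reference, in Python 3 the same cooperative pattern reads more cleanly with a shared sentinel base class that terminates the chain, avoiding the try/except. A sketch, where Base is an invented name and results are collected in a list instead of printed so the order is visible:

```python
class Base:
    def fn(self, out):
        pass  # end of the cooperative chain

class A(Base):
    def fn(self, out):
        out.append('A')
        super().fn(out)

class B(Base):
    def fn(self, out):
        out.append('B')
        super().fn(out)

class C(A, B):
    pass

calls = []
C().fn(calls)
print(calls)  # ['A', 'B'] -- every base implementation ran, in MRO order
```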
|
Inheritance in Python Such That All Base Functions Are Called
|
Basically, what I want is to do this:
class B:
def fn(self):
print 'B'
class A:
def fn(self):
print 'A'
@extendInherit
class C(A,B):
pass
c=C()
c.fn()
And have the output be
A
B
How would I implement the extendInherit decorator?
|
[
"This is not a job for decorators. You want to completely change the normal behaviour of a class, so this is actually a job for a metaclass.\nimport types\n\nclass CallAll(type):\n \"\"\" MetaClass that adds methods to call all superclass implementations \"\"\"\n def __new__(meta, clsname, bases, attrs):\n ## collect a list of functions defined on superclasses\n funcs = {}\n for base in bases:\n for name, val in vars(base).iteritems():\n if type(val) is types.FunctionType:\n if name in funcs:\n funcs[name].append( val )\n else:\n funcs[name] = [val]\n\n ## now we have all methods, so decorate each of them\n for name in funcs:\n def caller(self, *args,**kwargs):\n \"\"\" calls all baseclass implementations \"\"\"\n for func in funcs[name]:\n func(self, *args,**kwargs)\n attrs[name] = caller\n\n return type.__new__(meta, clsname, bases, attrs)\n\nclass B:\n def fn(self):\n print 'B'\n\nclass A:\n def fn(self):\n print 'A'\n\nclass C(A,B, object):\n __metaclass__=CallAll\n\nc=C()\nc.fn()\n\n",
"I personally wouldn't try doing this with a decorator since using new-style classes and super(), the following can be achieved:\n>>> class A(object):\n... def __init__(self):\n... super(A, self).__init__()\n... print \"A\"\n... \n>>> class B(object):\n... def __init__(self):\n... super(B, self).__init__()\n... print \"B\"\n... \n>>> class C(A, B):\n... def __init__(self):\n... super(C, self).__init__()\n... \n>>> foo = C()\nB\nA\n\nI'd imagine method invocations would work the same way.\n",
"A metaclass is a possible solution, but somewhat complex. super can do it very simply (with new style classes of course: there's no reason to use legacy classes in new code!):\nclass B(object):\n def fn(self):\n print 'B'\n try: super(B, self).fn()\n except AttributeError: pass\n\nclass A(object):\n def fn(self):\n print 'A'\n try: super(A, self).fn()\n except AttributeError: pass\n\nclass C(A, B): pass\n\nc = C()\nc.fn()\n\nYou need the try/except to support any order of single or multiple inheritance (since at some point there will be no further base along the method-resolution-order, MRO, defining a method named fn, you need to catch and ignore the resulting AttributeError). But as you see, differently from what you appear to think based on your comment to a different answer, you don't necessarily need to override fn in your leafmost class unless you need to do something specific to that class in such an override -- super works fine on purely inherited (not overridden) methods, too!\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"decorator",
"inheritance",
"multiple_inheritance",
"python"
] |
stackoverflow_0001859848_decorator_inheritance_multiple_inheritance_python.txt
|
Q:
How would I call 32bit exes in Windows 64bit with python?
I want to call an exe from Python on a 64-bit version of Vista. I know to use subprocess, but all the 32-bit apps are stored in C:\Program Files (x86)\, and I believe it doesn't like the spaces. I have tried escape characters, but it doesn't work. Any ideas?
A:
textEditorExecutablePath = 'C:\\Program Files (x86)\\Notepad2\\Notepad2.exe'
filepathToOpen = 'C:\\file.txt'
subprocess.Popen([textEditorExecutablePath, filepathToOpen])
Works for me. How are you calling Popen?
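A related sketch: building the path from the environment instead of hard-coding it, and passing Popen an argument list so the spaces never need escaping. The Notepad2 location is just an illustration:

```python
import os
import subprocess

# %ProgramFiles(x86)% is set by 64-bit Windows; the fallback is illustrative
pf86 = os.environ.get('ProgramFiles(x86)', r'C:\Program Files (x86)')
exe = os.path.join(pf86, 'Notepad2', 'Notepad2.exe')

if os.name == 'nt':  # only actually launch on Windows
    subprocess.Popen([exe, r'C:\file.txt'])
```

Because the executable path is one element of the argument list, subprocess handles any quoting itself.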
|
How would I call 32bit exes in Windows 64bit with python?
|
I want to call an exe from Python on a 64-bit version of Vista. I know to use subprocess, but all the 32-bit apps are stored in C:\Program Files (x86)\, and I believe it doesn't like the spaces. I have tried escape characters, but it doesn't work. Any ideas?
|
[
"textEditorExecutablePath = 'C:\\\\Program Files (x86)\\\\Notepad2\\\\Notepad2.exe'\nfilepathToOpen = 'C:\\\\file.txt'\nsubprocess.Popen([textEditorExecutablePath, filepathToOpen])\n\nWorks for me. How are you calling Popen?\n"
] |
[
1
] |
[] |
[] |
[
"32_bit",
"64_bit",
"python",
"windows"
] |
stackoverflow_0001861511_32_bit_64_bit_python_windows.txt
|
Q:
Problem installing Shoutpy + Boost.python on opensolaris
I'm trying to install shoutpy on OpenSolaris 2009.6. It relies on boost.python. I've installed the boost_devel libraries from blastwave and linked /opt/csw/include/boost to /usr/include/boost . But when I try to easy_install shoutpy I get the following output
munderwo@opensolaris-test1:/usr/include$ pfexec easy_install shoutpy
Searching for shoutpy
Reading http://pypi.python.org/simple/shoutpy/
Reading http://dingoskidneys.com/shoutpy/
Best match: shoutpy 1.0.0
Downloading http://dingoskidneys.com/shoutpy/shoutpy-1.0.0.tar.gz
Processing shoutpy-1.0.0.tar.gz
Running shoutpy-1.0.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-w7XQfv/shoutpy-1.0.0/egg-dist-tmp-k11Dky
In file included from /usr/include/boost/python/object/make_instance.hpp:9,
from /usr/include/boost/python/object/make_ptr_instance.hpp:8,
from /usr/include/boost/python/to_python_indirect.hpp:11,
from /usr/include/boost/python/converter/arg_to_python.hpp:10,
from /usr/include/boost/python/call.hpp:15,
from /usr/include/boost/python/object_core.hpp:12,
from /usr/include/boost/python/args.hpp:25,
from /usr/include/boost/python.hpp:11,
from shoutpy.cc:26:
/usr/include/boost/python/object/instance.hpp:44: error: a casts to a type other than an integral or enumeration type cannot appear in a constant-expression
/usr/include/boost/python/object/instance.hpp:44: error: '->' cannot appear in a constant-expression
/usr/include/boost/python/object/instance.hpp:44: error: `&' cannot appear in a constant-expression
In file included from /usr/include/boost/python/converter/registry.hpp:9,
from /usr/include/boost/python/converter/registered.hpp:8,
from /usr/include/boost/python/object/make_instance.hpp:10,
from /usr/include/boost/python/object/make_ptr_instance.hpp:8,
from /usr/include/boost/python/to_python_indirect.hpp:11,
from /usr/include/boost/python/converter/arg_to_python.hpp:10,
from /usr/include/boost/python/call.hpp:15,
from /usr/include/boost/python/object_core.hpp:12,
from /usr/include/boost/python/args.hpp:25,
from /usr/include/boost/python.hpp:11,
from shoutpy.cc:26:
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: '->' cannot appear in a constant-expression
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: `&' cannot appear in a constant-expression
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: template argument 1 is invalid
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: `value' is not a member of `<declaration error>'
error: Setup script exited with error: command '/usr/lib/python2.6/pycc' failed with exit status 1
This is using python2.6, opensolaris 2009.06, boost 1.35.
any help would be great!
Cheers
Mark
Edit - this has been cross posted on serverfault as its a bit hard to classify where the problem domain is. https://serverfault.com/questions/88724/problem-with-opensolaris-boost-python-and-shoutpy
A:
Unfortunately I've never tried to compile shoutpy under OpenSolaris and I don't use it these days. Boost.python requires a lot from its C++ compiler. Use easy_install -b build_directory shoutpy so it'll keep the source code after it fails, then check the C++ compiler Python tries to use against those supported by boost.python.
I tried compiling it on my desktop Linux and it still works after I edit setup.py to link against libboost_python-mt instead of libboost_python which doesn't exist in Ubuntu (there are several libboost_python* depending on Python version and so forth).
|
Problem installing Shoutpy + Boost.python on opensolaris
|
I'm trying to install shoutpy on OpenSolaris 2009.6. It relies on boost.python. I've installed the boost_devel libraries from blastwave and linked /opt/csw/include/boost to /usr/include/boost . But when I try to easy_install shoutpy I get the following output
munderwo@opensolaris-test1:/usr/include$ pfexec easy_install shoutpy
Searching for shoutpy
Reading http://pypi.python.org/simple/shoutpy/
Reading http://dingoskidneys.com/shoutpy/
Best match: shoutpy 1.0.0
Downloading http://dingoskidneys.com/shoutpy/shoutpy-1.0.0.tar.gz
Processing shoutpy-1.0.0.tar.gz
Running shoutpy-1.0.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-w7XQfv/shoutpy-1.0.0/egg-dist-tmp-k11Dky
In file included from /usr/include/boost/python/object/make_instance.hpp:9,
from /usr/include/boost/python/object/make_ptr_instance.hpp:8,
from /usr/include/boost/python/to_python_indirect.hpp:11,
from /usr/include/boost/python/converter/arg_to_python.hpp:10,
from /usr/include/boost/python/call.hpp:15,
from /usr/include/boost/python/object_core.hpp:12,
from /usr/include/boost/python/args.hpp:25,
from /usr/include/boost/python.hpp:11,
from shoutpy.cc:26:
/usr/include/boost/python/object/instance.hpp:44: error: a casts to a type other than an integral or enumeration type cannot appear in a constant-expression
/usr/include/boost/python/object/instance.hpp:44: error: '->' cannot appear in a constant-expression
/usr/include/boost/python/object/instance.hpp:44: error: `&' cannot appear in a constant-expression
In file included from /usr/include/boost/python/converter/registry.hpp:9,
from /usr/include/boost/python/converter/registered.hpp:8,
from /usr/include/boost/python/object/make_instance.hpp:10,
from /usr/include/boost/python/object/make_ptr_instance.hpp:8,
from /usr/include/boost/python/to_python_indirect.hpp:11,
from /usr/include/boost/python/converter/arg_to_python.hpp:10,
from /usr/include/boost/python/call.hpp:15,
from /usr/include/boost/python/object_core.hpp:12,
from /usr/include/boost/python/args.hpp:25,
from /usr/include/boost/python.hpp:11,
from shoutpy.cc:26:
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: '->' cannot appear in a constant-expression
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: `&' cannot appear in a constant-expression
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: template argument 1 is invalid
/usr/include/boost/python/converter/rvalue_from_python_data.hpp:99: error: `value' is not a member of `<declaration error>'
error: Setup script exited with error: command '/usr/lib/python2.6/pycc' failed with exit status 1
This is using python2.6, opensolaris 2009.06, boost 1.35.
any help would be great!
Cheers
Mark
Edit - this has been cross posted on serverfault as its a bit hard to classify where the problem domain is. https://serverfault.com/questions/88724/problem-with-opensolaris-boost-python-and-shoutpy
|
[
"Unfortunately I've never tried to compile shoutpy under OpenSolaris and I don't use it these days. Boost.python requires a lot from its C++ compiler. Use easy_install -b build_directory shoutpy so it'll keep the source code after it fails, then check the C++ compiler Python tries to use against those supported by boost.python.\nI tried compiling it on my desktop Linux and it still works after I edit setup.py to link against libboost_python-mt instead of libboost_python which doesn't exist in Ubuntu (there are several libboost_python* depending on Python version and so forth).\n"
] |
[
0
] |
[] |
[] |
[
"boost",
"boost_python",
"opensolaris",
"python"
] |
stackoverflow_0001797110_boost_boost_python_opensolaris_python.txt
|
Q:
Virtualenv: global site-packages vs the site-packages in the virtual environment
If I have a certain package installed both in the global site-packages and in the local one, which package will get imported? Will that even work or will I get an error?
Which packages should I put in the global site-packages and which in the local one?
A:
The previous answer wraps up question 1 but ignores question 2.
The general best practice I've seen for which packages to put globally:
First, the core Python packages, as these don't change with backwards-incompatible issues unless you're upgrading a major version, and you'll want whatever security fixes from a python upgrade to apply automatically to your virtualenvs.
Second, packages that are a pain to easy_install or pip install into each individual virtualenv but that don't change very often -- MySQLdb/psycopg and PIL, for example.
Pretty much everything else should go into your virtualenv's packages (I highly recommend using pip requirements files and virtualenvwrapper to make this minimally painful and easy to set up on other machines).
A:
A newly created virtual environment by default has access to the global site-packages directory, unless it was created with --no-site-packages. Calling easy_install (installing new packages) with a certain environment activated will locally override packages that already exist in the global site-packages (similar to inheritance). The environment will use its own local packages first, and fall back to the global ones when they are missing.
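The shadowing can be observed directly: Python resolves imports by walking sys.path front to back, and an activated virtualenv places its site-packages ahead of the global one, so a local copy of a package wins. A quick check:

```python
import sys

# The first matching entry on sys.path supplies a module; inside an activated
# virtualenv its site-packages directory appears before the global one.
for entry in sys.path:
    print(entry)

# For a specific module, __file__ reveals which copy actually won:
import json
print(json.__file__)
```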
|
Virtualenv: global site-packages vs the site-packages in the virtual environment
|
If I have a certain package installed both in the global site-packages and in the local one, which package will get imported? Will that even work or will I get an error?
Which packages should I put in the global site-packages and which in the local one?
|
[
"The previous answer wraps up question 1 but ignores question 2.\nThe general best practice I've seen for which packages to put globally: \nFirst, the core Python packages, as these don't change with backwards-incompatible issues unless you're upgrading a major version, and you'll want whatever security fixes from a python upgrade to apply automatically to your virtualenvs. \nSecond, packages that are a pain to easy_install or pip install into each individual virtualenv but that don't change very often -- MySQLdb/psycopg and PIL, for example. \nPretty much everything else should go into your virtualenv's packages (I highly recommend using pip requirements files and virtualenvwrapper to make this minimally painful and easy to set up on other machines).\n",
"Newly created virtual environment by default have access to global site-packages directory, unless created with --no-site-packages. Calling easy_install (installing new packages) with certain environment activated will cause local overwrite of already existing ones in global site-packages (similar to inheritance). Environment will use its own local packages, when missing - global ones.\n"
] |
[
9,
3
] |
[] |
[] |
[
"python",
"virtualenv"
] |
stackoverflow_0001860348_python_virtualenv.txt
|
Q:
Python IO Gurus: what are the differences between these two methods?
I have two methods for writing binary files: the first works with
data received by a server corresponding to a file upload (i.e., handling a form whose enctype="multipart/form-data"), and the second works with file data sent as email attachments (i.e., file data obtained by parsing an email message body using get_payload()).
The odd thing is, they're not interchangeable: if I use the first one to save data parsed from an email attachment, it fails; similarly, the second function fails when dealing with uploaded file data.
What are the critical differences?
This is the first method:
def write_binary_file (folder, filename, f, chunk_size=4096):
"""Write the file data f to the folder and filename combination"""
result = False
if confirm_folder(folder):
try:
file_obj = open(os.path.join(folder, file_base_name(filename)), 'wb', chunk_size)
for file_chunk in read_buffer(f, chunk_size):
file_obj.write(file_chunk)
file_obj.close()
result = True
except (IOError):
print "file_utils.write_binary_file: could not write '%s' to '%s'" % (file_base_name(filename), folder)
return result
This is the second method:
def write_binary_file (folder, filename, filedata):
"""Write the binary file data to the folder and filename combination"""
result = False
if confirm_folder(folder):
try:
file_obj = open(os.path.join(folder, file_base_name(filename)), 'wb')
file_obj.write(filedata)
file_obj.close()
result = True
except (IOError):
print "file_utils.write_binary_file: could not write '%s' to '%s'" % (file_base_name(filename), folder)
return result
A:
The difference is that the HTTP upload method (the first one) receives as its parameter the file-like object itself (the "f" variable) and creates a CGI-module-specific "read_buffer" to read data in chunks from that file object and copy them to the actual file.
This can make sense in an HTTP upload application, as it would allow the file copy to start while it is still uploading - I personally don't think it would matter except for uploads of several megabytes, since your HTTP response will be halted until the whole upload is done anyway, in a simple CGI script.
The other method receives "file_data" as a parameter: all it has to do is write this data to a new file. (The other one has to read the data from a file-like object, and it still creates an intermediary object for that.)
You can use the second method to save the HTTP data; just pass it the kind of object it is expecting. So instead of calling the second function with the "f" argument provided by your CGI field values, call it with "f.read()" -- this will cause all data to be read from the "f" file-like object and the resulting data to be seen by the method.
i.e.:
#second case:
write_binary_file(folder, filename, f.read() )
A:
The first one probably expects a file-like object as a parameter, from which it reads the data. The second one expects that parameter to be a string with the actual data to be written.
To be sure you have to look at what your read_buffer function does.
A:
The most obvious difference is the chunked reading of data. You don't specify the error, but I'm guessing that the chunked method fails in the call to read_buffer.
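The incompatibility boils down to the type of the third parameter: the first function expects a file-like object it can read() from, while the second expects the raw data itself. A minimal sketch of both styles (folder checks omitted; function names invented):

```python
import io
import os
import tempfile

def write_from_fileobj(path, f, chunk_size=4096):
    """First style: pull chunks from a file-like object (e.g. an upload)."""
    with open(path, 'wb') as out:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            out.write(chunk)

def write_from_bytes(path, data):
    """Second style: the data is already in memory (e.g. from get_payload())."""
    with open(path, 'wb') as out:
        out.write(data)

payload = b'attachment bytes'
tmpdir = tempfile.mkdtemp()
a = os.path.join(tmpdir, 'a.bin')
b = os.path.join(tmpdir, 'b.bin')
write_from_fileobj(a, io.BytesIO(payload))  # file-like in, chunked copy
write_from_bytes(b, payload)                # bytes in, single write
```

Bridging the two is then just f.read() in one direction or io.BytesIO(data) in the other.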
|
Python IO Gurus: what are the differences between these two methods?
|
I have two methods for writing binary files: the first works with
data received by a server corresponding to a file upload (i.e., handling a form whose enctype="multipart/form-data"), and the second works with file data sent as email attachments (i.e., file data obtained by parsing an email message body using get_payload()).
The odd thing is, they're not interchangeable: if I use the first one to save data parsed from an email attachment, it fails; similarly, the second function fails when dealing with uploaded file data.
What are the critical differences?
This is the first method:
def write_binary_file (folder, filename, f, chunk_size=4096):
"""Write the file data f to the folder and filename combination"""
result = False
if confirm_folder(folder):
try:
file_obj = open(os.path.join(folder, file_base_name(filename)), 'wb', chunk_size)
for file_chunk in read_buffer(f, chunk_size):
file_obj.write(file_chunk)
file_obj.close()
result = True
except (IOError):
print "file_utils.write_binary_file: could not write '%s' to '%s'" % (file_base_name(filename), folder)
return result
This is the second method:
def write_binary_file (folder, filename, filedata):
    """Write the binary file data to the folder and filename combination"""
    result = False
    if confirm_folder(folder):
        try:
            file_obj = open(os.path.join(folder, file_base_name(filename)), 'wb')
            file_obj.write(filedata)
            file_obj.close()
            result = True
        except (IOError):
            print "file_utils.write_binary_file: could not write '%s' to '%s'" % (file_base_name(filename), folder)
    return result
|
[
"The difference is that the HTTP upload method (the first one) - receives as its parameters the file-like object itself (the \"f\" variable) and creates a CGI module specific \"read_buffer\" to read data in chunks from that file object to copy them to the actual file. \nThsi can make sense in an http upload application,as it would allow the file copy to start while it is still uploading - I don't personaly think it would matter but for cases of several megabytes in upload, since your http response will be halted until all upload is done anyway, in a simple CGI script. \nThe other method receives \"file_data\" as parameters: allit has to do is write this data to a new file. (The other one has to read the data from a file-like-object and it still creates an intermediary object for that)\nYou can use the second method to save the HTTP data, just pass the kind of object it is expecting as its parameters, so, instead of calling the second function with the \"f\" arguemtn provided by your CGI field values, call it with \"f.read() \" -- this will cause all data to be read from the \"f\" file like object and the corresponding data to be seen by the method.\ni.e.:\n#second case:\nwrite_binary_file(folder, filename, f.read() )\n\n",
"The first one probably expects a file-like object as a parameter, from which it reads the data. The second one expects that parameter to be a string with the actual data to be written.\nTo be sure you have to look at what your read_buffer function does.\n",
"The most obvious difference is the chunked reading of data. You don't specify the error, but I'm guessing that the chunked method fails in the call to read_buffer.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"file_io",
"python"
] |
stackoverflow_0001861651_file_io_python.txt
|
Q:
rdflib graph not updated. Why?
I am trying to understand this behavior. It's definitely not what I expect. I have two programs, one reader and one writer. The reader opens an RDFlib graph store, then performs a query every 2 seconds
import rdflib
import random
from rdflib import store
import time
default_graph_uri = "urn:uuid:a19f9b78-cc43-4866-b9a1-4b009fe91f52"
s = rdflib.plugin.get('MySQL', store.Store)('rdfstore')
config_string = "host=localhost,password=foo,user=foo,db=foo"
rt = s.open(config_string,create=False)
if rt != store.VALID_STORE:
    s.open(config_string,create=True)

while True:
    graph = rdflib.ConjunctiveGraph(s, identifier = rdflib.URIRef(default_graph_uri))
    rows = graph.query("SELECT ?id ?value { ?id <http://localhost#ha> ?value . }")
    for r in rows:
        print r[0], r[1]
    time.sleep(2)
    print " - - - - - - - - "
The second program is a writer that adds stuff to the triplestore
import rdflib
import random
from rdflib import store
default_graph_uri = "urn:uuid:a19f9b78-cc43-4866-b9a1-4b009fe91f52"
s = rdflib.plugin.get('MySQL', store.Store)('rdfstore')
config_string = "host=localhost,password=foo,user=foo,db=foo"
rt = s.open(config_string,create=False)
if rt != store.VALID_STORE:
    s.open(config_string,create=True)

graph = rdflib.ConjunctiveGraph(s, identifier = rdflib.URIRef(default_graph_uri))
graph.add( (
    rdflib.URIRef("http://localhost/"+str(random.randint(0,100))),
    rdflib.URIRef("http://localhost#ha"),
    rdflib.Literal(str(random.randint(0,100)))
    ) )
graph.commit()
I would expect to see the number of results increment on the reader as I submit stuff using the writer, but this does not happen. The reader continues to return the same result as when it started. If however I stop the reader and restart it, the new results appear.
Does anybody know what I am doing wrong?
A:
One easy fix is to put "graph.commit()" just after the line "graph = rdflib.ConjunctiveGraph(...)" in reader.
I'm not sure of the cause, or why committing before the read fixes this. I'm guessing that:
When opening MySQLdb connection, a transaction is started automatically
This transaction doesn't see updates from other, later transactions.
"graph.commit()" bubbles down to some "connection.commit()" somewhere that discards this transaction and starts a new one.
|
rdflib graph not updated. Why?
|
I am trying to understand this behavior. It's definitely not what I expect. I have two programs, one reader and one writer. The reader opens an RDFlib graph store, then performs a query every 2 seconds
import rdflib
import random
from rdflib import store
import time
default_graph_uri = "urn:uuid:a19f9b78-cc43-4866-b9a1-4b009fe91f52"
s = rdflib.plugin.get('MySQL', store.Store)('rdfstore')
config_string = "host=localhost,password=foo,user=foo,db=foo"
rt = s.open(config_string,create=False)
if rt != store.VALID_STORE:
s.open(config_string,create=True)
while True:
graph = rdflib.ConjunctiveGraph(s, identifier = rdflib.URIRef(default_graph_uri))
rows = graph.query("SELECT ?id ?value { ?id <http://localhost#ha> ?value . }")
for r in rows:
print r[0], r[1]
time.sleep(2)
print " - - - - - - - - "
The second program is a writer that adds stuff to the triplestore
import rdflib
import random
from rdflib import store
default_graph_uri = "urn:uuid:a19f9b78-cc43-4866-b9a1-4b009fe91f52"
s = rdflib.plugin.get('MySQL', store.Store)('rdfstore')
config_string = "host=localhost,password=foo,user=foo,db=foo"
rt = s.open(config_string,create=False)
if rt != store.VALID_STORE:
s.open(config_string,create=True)
graph = rdflib.ConjunctiveGraph(s, identifier = rdflib.URIRef(default_graph_uri))
graph.add( (
rdflib.URIRef("http://localhost/"+str(random.randint(0,100))),
rdflib.URIRef("http://localhost#ha"),
rdflib.Literal(str(random.randint(0,100)))
)
)
graph.commit()
I would expect to see the number of results increment on the reader as I submit stuff using the writer, but this does not happen. The reader continues to return the same result as when it started. If however I stop the reader and restart it, the new results appear.
Does anybody know what I am doing wrong?
|
[
"One easy fix is to put \"graph.commit()\" just after the line \"graph = rdflib.ConjunctiveGraph(...)\" in reader.\nI'm not sure what's the cause and why commiting before read fixes this. I'm guessing that:\n\nWhen opening MySQLdb connection, a transaction is started automatically\nThis transaction doesn't see updates from other, later transactions. \n\"graph.commit()\" bubbles down to some \"connection.commit()\" somewhere that discards this transaction and starts a new one.\n\n"
] |
[
3
] |
[] |
[] |
[
"python",
"rdf",
"rdflib"
] |
stackoverflow_0001860282_python_rdf_rdflib.txt
|
Q:
build python program with extensions using py2exe
I'm having a hard time finding py2exe recipes, especially for cases that require C extensions.
The following recipe works fine without the "ext_modules" part. With it I get "NameError: name 'Extension' is not defined".
from distutils.core import setup
import py2exe
import matplotlib
import os
s = os.popen('svnversion')
version = s.read()
f = open('cpa_version.py', 'w')
f.write('VERSION = "%s"\n'%(version.strip()))
f.close()
setup(console=['cpa.py'],
      options={
          'py2exe': {
              'packages' : ['matplotlib', 'pytz', 'MySQLdb', 'pysqlite2'],
              'includes' : ['PILfix', 'version'],
              "excludes" : ['_gtkagg', '_tkagg',
                            "Tkconstants","Tkinter","tcl"],
              "dll_excludes": ['libgdk-win32-2.0-0.dll',
                               'libgobject-2.0-0.dll',
                               'libgdk_pixbuf-2.0-0.dll',
                               'tcl84.dll', 'tk84.dll']
          }
      },
      data_files=matplotlib.get_py2exe_datafiles(),
      # how to build _classifier.c???
      ext_modules = [Extension('_classifier',
                               sources = ['_classifier.c'],
                               include_dirs=[numpy.get_include()],
                               libraries = ['sqlite3'])]
)
_classifier.c includes the following
#include "sqlite3.h"
#include "Python.h"
#include "numpy/arrayobject.h"
#include <stdio.h>
any help would be greatly appreciated.
A:
After fixing the small error created by forgetting to import Extension, I ran into other errors stating a problem with the -lsqlite3 flag. Turns out I needed to follow the steps outlined here: http://cboard.cprogramming.com/cplusplus-programming/82135-sqlite-questions.html
Download sqlitedll-3_3_7.zip and sqlite-source-3_3_7.zip from sqlite.org/download.html
Extract sqlitedll-3.3.7.zip and then run from the command line:
dlltool -D sqlite3.dll -d sqlite3.def -l libsqlite3dll.a
Place libsqlite3dll.a (just created) in the MinGW lib directory.
Place sqlite3.dll in your system path (c:\Windows\System32\ worked for me)
Extract sqlite-source-3_3_7.zip and place sqlite3.h in your MinGW include directory.
When you link, you will need to supply the parameter: -lsqlite3dll (this meant changing libraries=['sqlite3'] to libraries=['sqlite3dll'])
...After that the build worked.
Here's the setup file again:
from distutils.core import setup, Extension
import py2exe
import matplotlib
import os
import numpy

setup(console=['cpa.py'],
      options={
          'py2exe': {
              'packages' : ['matplotlib', 'pytz', 'MySQLdb', 'pysqlite2'],
              'includes' : ['PILfix', 'version'],
              "excludes" : ['_gtkagg', '_tkagg',
                            "Tkconstants","Tkinter","tcl"],
              "dll_excludes": ['libgdk-win32-2.0-0.dll',
                               'libgobject-2.0-0.dll',
                               'libgdk_pixbuf-2.0-0.dll',
                               'tcl84.dll', 'tk84.dll']
          }
      },
      data_files=matplotlib.get_py2exe_datafiles(),
      ext_modules = [Extension('_classifier',
                               sources = ['_classifier.c'],
                               include_dirs=[numpy.get_include()],
                               libraries = ['sqlite3dll'])]
)
A:
Try changing
from distutils.core import setup
to
from distutils.core import setup, Extension
|
build python program with extensions using py2exe
|
I'm having a hard time finding py2exe recipes, especially for cases that require C extensions.
The following recipe works fine without the "ext_modules" part. With it I get "NameError: name 'Extension' is not defined".
from distutils.core import setup
import py2exe
import matplotlib
import os
s = os.popen('svnversion')
version = s.read()
f = open('cpa_version.py', 'w')
f.write('VERSION = "%s"\n'%(version.strip()))
f.close()
setup(console=['cpa.py'],
options={
'py2exe': {
'packages' : ['matplotlib', 'pytz', 'MySQLdb', 'pysqlite2'],
'includes' : ['PILfix', 'version'],
"excludes" : ['_gtkagg', '_tkagg',
"Tkconstants","Tkinter","tcl"],
"dll_excludes": ['libgdk-win32-2.0-0.dll',
'libgobject-2.0-0.dll',
'libgdk_pixbuf-2.0-0.dll',
'tcl84.dll', 'tk84.dll']
}
},
data_files=matplotlib.get_py2exe_datafiles(),
# how to build _classifier.c???
ext_modules = [Extension('_classifier',
sources = ['_classifier.c'],
include_dirs=[numpy.get_include()],
libraries = ['sqlite3'])]
)
_classifier.c includes the following
#include "sqlite3.h"
#include "Python.h"
#include "numpy/arrayobject.h"
#include <stdio.h>
any help would be greatly appreciated.
|
[
"After fixing the small error created by forgetting to import Extension, I ran into other errors stating a problem with the -lsqlite3 flag. Turns out I needed to follow the steps outlined here: http://cboard.cprogramming.com/cplusplus-programming/82135-sqlite-questions.html\n\nDownload sqlitedll-3_3_7.zip and sqlite-source-3_3_7.zip from sqlite.org/download.html\nExtract sqlitedll-3.3.7.zip and then run from the command line:\ndlltool -D sqlite3.dll -d sqlite3.def -l libsqlite3dll.a\nPlace libsqlite3dll.a (just created) in the MinGW lib directory.\nPlace sqlite3.dll in your system path (c:\\Windows\\System32\\ worked for me)\nExtract sqlite-source-3_3_7.zip and place sqlite3.h in your MinGW include directory.\nWhen you link, you will need to supply the parameter: -lsqlite3dll (this meant changing libraries=['sqlite3'] to libraries=['sqlite3dll'])\n\n...After that the build worked.\nHere's the setup file again:\nfrom distutils.core import setup, Extension\nimport py2exe\nimport matplotlib\nimport os\nimport numpy\n\nsetup(console=['cpa.py'],\n options={\n 'py2exe': {\n 'packages' : ['matplotlib', 'pytz', 'MySQLdb', 'pysqlite2'],\n 'includes' : ['PILfix', 'version'],\n \"excludes\" : ['_gtkagg', '_tkagg',\n \"Tkconstants\",\"Tkinter\",\"tcl\"],\n \"dll_excludes\": ['libgdk-win32-2.0-0.dll',\n 'libgobject-2.0-0.dll', \n 'libgdk_pixbuf-2.0-0.dll',\n 'tcl84.dll', 'tk84.dll']\n }\n },\n data_files=matplotlib.get_py2exe_datafiles(),\n ext_modules = [Extension('_classifier',\n sources = ['_classifier.c'],\n include_dirs=[numpy.get_include()],\n libraries = ['sqlite3dll'])]\n)\n\n",
"Try changing \nfrom distutils.core import setup\n\nto \nfrom distutils.core import setup, Extension\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"c",
"py2exe",
"python",
"recipe"
] |
stackoverflow_0001848275_c_py2exe_python_recipe.txt
|
Q:
Django search capabilities
Is there an easy way to add a search capability on fields in Django? Also, please let me know what Lucene search is.
A:
Try Haystack. It's pretty easy to setup.
Apache Lucene is a full-text search engine written in Java.
A:
I would use Haystack as mentioned above together with Xapian.
Xapian doesn't require you to run it as a process (which is some sort of advantage in my opinion).
A:
I second the Haystack suggestion. Here's a good blog post about it. In fact, the entire series—Large Problems in Django, Mostly Solved—is excellent reading. Here's a Google search that should find most of the entries.
A:
If your database is MySQL don't underestimate the power of QuerySet's search method. It uses MySQL's full-text index (assuming you've created one) to do full-text searching. It's the quickest to set up of all the options (it's built in!) and depending on your requirements it may be enough. If not, I also think Haystack is a good suggestion.
A:
Haystack setup on my blog application took me <30 min as a Django newbie. I can strongly recommend it.
|
Django search capabilities
|
Is there an easy way to add a search capability on fields in Django? Also, please let me know what Lucene search is.
|
[
"\nTry Haystack. It's pretty easy to setup.\nApache Lucene is full-text search engine written in Java.\n\n",
"I would use Haystack as mentioned above together with Xapian.\nXapian doesn't require you to run it as a process (which is some sort of advantage in my opinion).\n",
"I second the Haystack suggestion. Here's a good blog post about it. In fact, the entire series—Large Problems in Django, Mostly Solved—is excellent reading. Here's a Google search that should find most of the entries.\n",
"If your database is MySQL don't underestimate the power of QuerySet's search method. It uses MySQL's full-text index (assuming you've created one) to do full-text searching. It's the quickest to set up of all the options (it's built in!) and depending on your requirements it may be enough. If not, I also think Haystack is a good suggestion.\n",
"Haystack setup on my blog application took me <30 min as Django newbie. I can strongly recommend it.\n"
] |
[
6,
2,
1,
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001859866_django_python.txt
|
Q:
How to use Emacs with Python
I am new to emacs and I want to use it for python development. I am using Ubuntu 9.10. I am frustrated trying to get emacs to work with python. I use GNU Emacs 23.1.50.1 (x86_64-pc-linux-gnu, GTK+ Version 2.18.0).
Here what I did.
*
Emacs comes with a python mode, but it is confusing: there are two of them, python-mode.el and python.el. I use emacs 23, so mine is python.el (I think). Do I need python-mode too? Code completion does not work when I press M-Tab; instead, the window manager intercepts it. I tried Esc-Tab but it says "No match". How can I enable code completion?
After that I installed ropemacs
sudo aptitude install python-ropemacs
Then I created .emacs file at ~/.emacs
and I added followings to .emacs file
(require 'pymacs)
(pymacs-load "ropemacs" "rope-")
(setq ropemacs-enable-autoimport t)
Then when I hit M-/ (Alt-/) it doesn't work. When I click Rope->Code assist from the menu, it opens a file dialog for choosing the root project folder. I choose the current folder, which contains some python code. When I try Code assist from the menu again it says
"Completion for x: " with nothing but an empty set. How can I make emacs python code completion work?
Then I downloaded anything.el, anything-config, anything-match-plugin to ~/.emacs.d folder Then I added following lines to .emacs file
(require 'anything-config)
(require 'anything-match-plugin)
(global-set-key "\C-ca" 'anything)
(global-set-key "\C-ce" 'anything-for-files)
Guess what, it didn't work. I tried "M-x anything" and again I get No match. (I guessed a key combination like C-c a might work, but it says it isn't defined.) Could you explain code completion for python with clear, step-by-step explanations, for a dummy like me? Thanks.
Edit: I was able to get emacs working with python using the link. Thanks all for answering
A:
I haven't tried anything, and I haven't had much luck with rope (giant source tree causes my emacs to hang upon any file save). Instead, I find the default completion works well enough for my purposes.
The default completion keybinding is M-/. That runs dabbrev-expand which expands the current word to "the most recent, preceding word for which this is a prefix." It's not perfect: It won't parse types, and it won't search imports, but it works in 90% of the cases.
(You'll have to deactivate rope.)
A:
I think you do want the package python-mode installed! The ropemacs variant appears to be for refactoring only, and pymacs allows Python to be used as an Emacs extension language -- neither of which is what you need for standard support.
A:
I'm not really sure you had to do anything fancy to get Python development to work. On gNewSense deltah (fork of Ubuntu 8.04) all I did was edit a .py file with the first line being:
#!/usr/bin/python
And then Emacs just figures it out and gives you python mode options. I didn't have to install anything beyond Emacs.
Then again, this may not be helpful as gNewSense pre-installs Emacs by default. I'll have to do it on one of my vanilla Ubuntu installs.
A:
Emacs worked out of the box for me on Ubuntu 9.10.
Did you try C-c TAB (update imports) before trying code completion? I don't think it works unless you do that.
|
How to use Emacs with Python
|
I am new to emacs and I want to use it for python development. I am using Ubuntu 9.10. I am frustrated trying to get emacs to work with python. I use GNU Emacs 23.1.50.1 (x86_64-pc-linux-gnu, GTK+ Version 2.18.0).
Here what I did.
*
Emacs comes with a python mode, but it is confusing: there are two of them, python-mode.el and python.el. I use emacs 23, so mine is python.el (I think). Do I need python-mode too? Code completion does not work when I press M-Tab; instead, the window manager intercepts it. I tried Esc-Tab but it says "No match". How can I enable code completion?
After that I installed ropemacs
sudo aptitude install python-ropemacs
Then I created .emacs file at ~/.emacs
and I added followings to .emacs file
(require 'pymacs)
(pymacs-load "ropemacs" "rope-")
(setq ropemacs-enable-autoimport t)
Then when I hit M-/ (Alt-/) it doesn't work. When I click Rope->Code assist from the menu, it opens a file dialog for choosing the root project folder. I choose the current folder, which contains some python code. When I try Code assist from the menu again it says
"Completion for x: " with nothing but an empty set. How can I make emacs python code completion work?
Then I downloaded anything.el, anything-config, anything-match-plugin to ~/.emacs.d folder Then I added following lines to .emacs file
(require 'anything-config)
(require 'anything-match-plugin)
(global-set-key "\C-ca" 'anything)
(global-set-key "\C-ce" 'anything-for-files)
Guess what, it didn't work. I tried "M-x anything" and again I get No match. (I guessed a key combination like C-c a might work, but it says it isn't defined.) Could you explain code completion for python with clear, step-by-step explanations, for a dummy like me? Thanks.
Edit: I was able to get emacs working with python using the link. Thanks all for answering
|
[
"I haven't tried anything, and I haven't had much luck with rope (giant source tree causes my emacs to hang upon any file save). Instead, I find the default completion works well enough for my purposes.\nThe default completion keybinding is M-/. That runs dabbrev-expand which expands the current word to \"the most recent, preceding word for which this is a prefix.\" It's not perfect: It won't parse types, and it won't search imports, but it works in 90% of the cases.\n(You'll have to deactivate rope.)\n",
"I think you do want the package python-mode installed! The ropemacs variants appears to be for refactoring only, and pymacs is allows Python as an Emacs-extension language -- neither of which is what you need for standard support.\n",
"I'm not really sure you had to do anything fancy to get Python development to work. On gNewSense deltah (fork of Ubuntu 8.04) all I did was edit a .py file with the first line being:\n#!/usr/bin/python\nAnd then Emacs just figures it out and gives you python mode options. I didn't have to install anything beyond Emacs.\nThen again, this may not be helpful as gNewSense pre-installs Emacs by default. I'll have to do it on one of my vanilla Ubuntu installs.\n",
"Emacs worked out of the box for me on Ubuntu 9.10.\nDid you try C-c TAB (update imports) before trying code completion? I don't think it work unless you do that.\n"
] |
[
3,
2,
0,
0
] |
[] |
[] |
[
"emacs",
"python"
] |
stackoverflow_0001862901_emacs_python.txt
|
Q:
Text extraction from email in Python
My users will send me posts by email ala Posterous
I'm using Google Apps Engine (GAE) to receive and parse emails. GAE returns the text part of the message.
I need to extract the post from the plain text part of the message.
The plain text can be "contaminated" with promotional headers, footers, signatures, etc.
Also I would like to leave out the "please post this:" or similar some people candidly include.
How would you achieve this?
Are there any tools (simpler than regex) I can use?
UPDATE
Examples:
(in all these examples the post is "Lorem ipsum sit amet...")
=====
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Victor P
victor.p@example.com
visit my blog at: www.example.com/victor
=====
Hello, I like your page. Please can you include this: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
=====
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
=====
If you find more examples of what a email can be, please feel free to include them in the post.
A:
I would go with a list of compiled regular expressions. Something along the lines of:
import re

regexes = (
    re.compile("visit my blog at: .*$", re.IGNORECASE),
    re.compile("please post this:", re.IGNORECASE),
    re.compile("please can you include this:", re.IGNORECASE)
    # etc
)

for filePath in files:
    with open(filePath) as f:
        for line in f:
            for regex in regexes:
                line = regex.sub("", line)
            print(line)
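A self-contained sketch of that approach, run against text like the second example message (the patterns are illustrative guesses, not a complete filter):

```python
import re

# Illustrative patterns only; a real filter needs many more of these.
regexes = (
    re.compile(r"^visit my blog at: .*$", re.IGNORECASE | re.MULTILINE),
    re.compile(r"^[\w.+-]+@[\w.-]+$", re.MULTILINE),  # bare address on its own line
    re.compile(r"please (?:post|can you include) this:\s*", re.IGNORECASE),
)

message = ("Hello, I like your page. Please can you include this: "
           "Lorem ipsum dolor sit amet.")

# Each pattern strips one kind of contamination; what survives is the post.
for regex in regexes:
    message = regex.sub("", message)

print(message)  # Hello, I like your page. Lorem ipsum dolor sit amet.
```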
|
Text extraction from email in Python
|
My users will send me posts by email ala Posterous
I'm using Google Apps Engine (GAE) to receive and parse emails. GAE returns the text part of the message.
I need to extract the post from the plain text part of the message.
The plain text can be "contaminated" with promotional headers, footers, signatures, etc.
Also I would like to leave out the "please post this:" or similar some people candidly include.
How would you achieve this?
Are there any tools (simpler than regex) I can use?
UPDATE
Examples:
(in all these examples the post is "Lorem ipsum sit amet...")
=====
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Victor P
victor.p@example.com
visit my blog at: www.example.com/victor
=====
Hello, I like your page. Please can you include this: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
=====
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
=====
If you find more examples of what a email can be, please feel free to include them in the post.
|
[
"I would go with a list of compiled regular expressions. Something along the lines of: \nimport re\n\nregexes = (\n re.compile(\"visit my blog at: .*$\", re.IGNORECASE),\n re.compile(\"please post this:\", re.IGNORECASE),\n re.compile(\"please can you include this:\", re.IGNORECASE)\n # etc\n)\n\nfor filePath in files:\n with open(filePath) as file:\n for line in file:\n for regex in regexes:\n print(re.sub(regex, \"\"))\n\n"
] |
[
2
] |
[] |
[] |
[
"email",
"google_app_engine",
"python",
"regex"
] |
stackoverflow_0001860375_email_google_app_engine_python_regex.txt
|
Q:
Python elevator simulation problem
I have a homework assignment that's really baking my noodle. It involves an elevator simulation that takes user inputs for the number of floors and the number of people using the elevator. The people's starting and destination floors are random numbers within the range of floors.
I realize that my code is very sparse and that there are quite a few gaps, but I really don't know where to go from here.
I need help within the building class, such as how to make the run() and output() sections work. Any other tips would be greatly appreciated and helpful. Note that I am not looking for someone to write the code for me, but to kind of hold my hand and tell me which way to go. Classes seem completely mystifying to me.
import random
floors=raw_input('Please enter the number of floors for the simulation:')
while floors.isalpha() or floors.isspace() or int(floors) <=0:
    floors=raw_input('Please re enter a digit for number of floors:')
customers=raw_input('Please enter the number of customers in the building:')
while customers.isalpha() or customers.isspace() or int(customers) <0:
    customers=raw_input('Please re enter a digit for number of customers:')
count = 1

class building:
    def num_of_floors():
        num_of_floors = floors
    def customer_list():
        customer_list = customers
    def run(self):
    def output(self):
        print elevator.cur_floor

class elevator:
    def num_of_floors():
        building.num_of_floors
    def register_list():
        register_list = []
    def cur_floor(building):
        cur_floor = 1
    def direction(self):
        if elevator.cur_floor == 1:
            direction = up
        if elevator.cur_floor == floors:
            direction = down
    def move(self):
        if elevator.direction == up:
            cur_floor +=1
        if elevator.direction == down:
            cur_floor -=1
    def register_customer(self, customer):
        register_list.append(customer.ID)
    def cancel_customer (self, customer):
        register_list.remove(customer.ID)

class customer:
    def cur_floor(customer):
        cur_floor = random.randint(0,int(floors))
    def dst_floor(customer):
        dst_floor = random.randint(0,int(floors))
        while dst_floor == cur_floor:
            dst_floor = random.randint(0,int(floors))
    def ID():
        cust_id = count
        count+=1
    def cust_dict(cust_id,dst_floor):
        cust_dict = {cust_id:dst_floor}
    def in_elevator():
        in_elevator = 0
        if customer.ID in register_list:
            in_elevator = 1
    def finished():
        if customer.ID not in register_list:
            pass
A:
You need to understand the self parameter to all methods.
You need to understand __init__, the constructor.
You need to understand self.variable for your member variables.
You need to understand how to set up a main function.
You need to understand how to return a value from a function or method.
You need to understand how to assign to global variables from within a function or method.
A:
Maybe your building class should start like this.
class building:
    def __init__(self, floors, customers):
        self.num_of_floors = floors
        self.customer_list = customers
        self.elevator = elevator()
A:
You should definitely spend some time on the Python Tutorial or Dive into Python.
A:
The first parameter of every method is a reference to the object and is usually called self. You need it to reference the instance members of an object.
Second, referencing global variables from inside a class is considered a bad idea. It is better to pass them to the class via the constructor or as parameters.
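To make those two points concrete, here is a minimal toy version (an illustration only, not the homework solution; the class and attribute names are made up):

```python
class Elevator(object):
    def __init__(self, num_floors):
        # __init__ is the constructor; attributes set on self are
        # instance variables, visible from every method.
        self.num_floors = num_floors
        self.cur_floor = 1
        self.direction = "up"

    def move(self):
        # Every method receives self and uses it to reach the
        # instance's own state -- no globals needed.
        if self.direction == "up":
            self.cur_floor += 1
            if self.cur_floor == self.num_floors:
                self.direction = "down"
        else:
            self.cur_floor -= 1
            if self.cur_floor == 1:
                self.direction = "up"
        return self.cur_floor  # methods can return values

lift = Elevator(3)                        # the constructor runs here
floors = [lift.move() for _ in range(4)]  # bounce up and back down
print(floors)  # [2, 3, 2, 1]
```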
|
Python elevator simulation problem
|
I have a homework assignment that's really baking my noodle. It involves an elevator simulation that takes user inputs for the number of floors and the number of people using the elevator. The people's starting and destination floors are random numbers within the range of floors.
I realize that my code is very sparse and that there are quite a few gaps, but I really don't know where to go from here.
I need help within the building class, such as how to make the run() and output() sections work. Any other tips would be greatly appreciated and helpful. Note that I am not looking for someone to write the code for me, but to kind of hold my hand and tell me which way to go. Classes seem completely mystifying to me.
import random
floors=raw_input('Please enter the number of floors for the simulation:')
while floors.isalpha() or floors.isspace() or int(floors) <=0:
floors=raw_input('Please re enter a digit for number of floors:')
customers=raw_input('Please enter the number of customers in the building:')
while customers.isalpha() or customers.isspace() or int(customers) <0:
customers=raw_input('Please re enter a digit for number of customers:')
count = 1
class building:
def num_of_floors():
num_of_floors = floors
def customer_list():
customer_list = customers
def run(self):
def output(self):
print elevator.cur_floor
class elevator:
def num_of_floors():
building.num_of_floors
def register_list():
register_list = []
def cur_floor(building):
cur_floor = 1
def direction(self):
if elevator.cur_floor == 1:
direction = up
if elevator.cur_floor == floors:
direction = down
def move(self):
if elevator.direction == up:
cur_floor +=1
if elevator.direction == down:
cur_floor -=1
def register_customer(self, customer):
register_list.append(customer.ID)
def cancel_customer (self, customer):
register_list.remove(customer.ID)
class customer:
def cur_floor(customer):
cur_floor = random.randint(0,int(floors))
def dst_floor(customer):
dst_floor = random.randint(0,int(floors))
while dst_floor == cur_floor:
dst_floor = random.randint(0,int(floors))
def ID():
cust_id = count
count+=1
def cust_dict(cust_id,dst_floor):
cust_dict = {cust_id:dst_floor}
def in_elevator():
in_elevator = 0
if customer.ID in register_list:
in_elevator = 1
def finished():
if customer.ID not in register_list:
pass
A:
You need to understand the self parameter to all methods.
You need to understand __init__, the constructor.
You need to understand self.variable for your member variables.
You need to understand how to set up a main function.
You need to understand how to return a value from a function or method.
You need to understand how to assign to global variables from within a function or method.
A:
Maybe your building class should start like this.
class building:
    def __init__(self, floors, customers):
        self.num_of_floors = floors
        self.customer_list = customers
        self.elevator = elevator()
A:
You should definitely spend some time on the Python Tutorial or Dive into Python.
A:
The first parameter of every method is a reference to the object and is usually called self. You need it to reference instance members of an object.
Second, referencing global variables from inside a class is considered a bad idea. You can better pass them to the class via the constructor or parameters.
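A minimal, runnable sketch of those points, assuming nothing beyond the question's own names (the Building class and its attributes here are illustrative, not a full solution to the exercise):

```python
# Illustrative only: shows __init__ (the constructor), self.<name> for
# member variables, and a method that returns a value with 'return'.
class Building(object):
    def __init__(self, floors, customers):
        self.num_of_floors = floors      # member state lives on self
        self.customer_list = customers

    def top_floor(self):
        return self.num_of_floors        # methods hand values back via return

b = Building(5, ['alice', 'bob'])
```

Once run() and output() are written the same way (taking self and reading self.<name>), the pieces of the simulation can talk to each other through the object instead of through globals.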
Q:
Problem with import curses.ascii
I am trying to use from curses.ascii import * in a Django project, but I get: No module named _curses. I am using Python 2.5; any suggestions? Anyway, I only need the isalpha() function....
A:
You didn't say which platform you are on, but there is probably a package which will install the curses bindings for you.
In debian/ubuntu for example it is part of the default python install
If you built the Python yourself, you may be missing the libcurses-dev
If you are on windows maybe check out this wcurses package
Otherwise curses is not supported on windows however there is a Console module
Edit: since the OP is just using isalpha
Strings have their own isalpha() method already
>>> "Hello".isalpha()
True
>>> "World!".isalpha()
False
The one with curses only works on single characters
>>> from curses.ascii import isalpha
>>> all(isalpha(x) for x in "Hello")
True
>>> all(isalpha(x) for x in "World!")
False
Q:
Python: How do I read and parse a unicode utf-8 text file?
I am exporting UTF-8 text from Excel and I want to read and parse the incoming data using Python. I've read all the online info so I've already tried this, for example:
txtFile = codecs.open( 'halout.txt', 'r', 'utf-8' )
for line in txtFile:
print repr( line )
The error I am getting is:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: unexpected code byte
Looking at the text file in a hex editor, the first values are FF FE. I've also tried:
txtFile.seek( 2 )
right after the 'open' but that just causes a different error.
A:
That file is not UTF-8; it's UTF-16LE with a byte-order marker.
A:
That is a BOM
EDIT, from the comments, it seems to be a UTF-16 BOM
codecs.open('foo.txt', 'r', 'utf-16')
should work.
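As a hedged sketch of the diagnosis above, you can peek at a file's first bytes and map a leading BOM to a codec name; the sniff_bom helper and its BOM table are illustrative, not part of the standard library:

```python
# Guess a codec from a leading byte-order mark. Check the 4-byte UTF-32
# BOMs before the 2-byte UTF-16 ones, since BOM_UTF32_LE also starts FF FE.
import codecs

def sniff_bom(first_bytes):
    boms = [
        (codecs.BOM_UTF8, 'utf-8-sig'),
        (codecs.BOM_UTF32_LE, 'utf-32'),
        (codecs.BOM_UTF32_BE, 'utf-32'),
        (codecs.BOM_UTF16_LE, 'utf-16'),   # FF FE, as seen in the hex editor
        (codecs.BOM_UTF16_BE, 'utf-16'),
    ]
    for bom, name in boms:
        if first_bytes.startswith(bom):
            return name
    return None
```

For the file in the question, the leading FF FE maps to 'utf-16', which is exactly the codec the answers pass to codecs.open.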
A:
Expanding on Johnathan's comment, this code should read the file correctly:
import codecs
txtFile = codecs.open( 'halout.txt', 'r', 'utf-16' )
for line in txtFile:
print repr( line )
A:
Try to see if the Excel file has some blank rows (and then has values again); that might cause the unexpected error.
Q:
Matplotlib's GUI doesn't allow typing in save box?
I've been using matplotlib in python for some time now and I've finally gotten around to asking this question about an issue on my mac. When a plot shows up (after the plot() command, draw(), or show()), I have all the functionality I could want; I can move, zoom, etc. that I didn't do in the code.
When I go to save a figure with the view as I desire the save as box opens up and prompts for a filename. Anything I type appears in the terminal I used to execute the command! Selecting X11 and then typing has same result. Nothing seems to put the keyboards output into that box, but I can paste into the box using the mouse->Paste action and I can select files in the menu to overwrite and it works fine.
What's up with this?
Update:
The problem was wonderfully outlined and now has some solutions posted in this post: Why doesn't the save button work on a matplotlib plot?
A:
Just installed matplotlib 0.99.1 on Python 2.6.2 on Snow Leopard and ran the following code:
from pylab import *
plot([1,2,3])
show()
Then, I fiddled around with the plot for a while and clicked the save button. The save dialog box popped up normally and allowed me to save (and type) fine. This was using the TkAgg backend. However, I did get this error:
2009-12-08 00:40:18.772 Python[728:60f] -deltaZ is deprecated for NSEventTypeMagnify. Please use -magnification.
Which seems to be something to do with Snow Leopard changing some APIs.
Sorry for typing this as a post instead of a comment, but code tags aren't allowed in comments :(
Q:
grep -r in python
I'd like to implement the Unix command 'grep -r' in a Python function. I know about commands.getstatusoutput(), but for now I don't want to use that. I came up with this:
def grep_r (str, dir):
files = [ o[0]+"/"+f for o in os.walk(dir) for f in o[2] if os.path.isfile(o[0]+"/"+f) ]
return [ l for f in files for l in open(f) if str in l ]
But that of course doesn't use a regex; it just checks whether 'str' is a substring of 'l'. So I tried the following:
def grep_r (pattern, dir):
r = re.compile(pattern)
files = [ o[0]+"/"+f for o in os.walk(dir) for f in o[2] if os.path.isfile(o[0]+"/"+f) ]
return [ l for f in files for l in open(f) if r.match(l) ]
But that doesn't work; it doesn't give me any matches even where the former function did. What changed? I could just split it up into a bunch of nested loops, but I'm more interested in being succinct than readable.
A:
You might want to search() instead of match() to catch matches in the middle of lines, as noted in http://docs.python.org/library/re.html#matching-vs-searching
Also, the structure and intent of your code is quite hidden. I've pythonized it.
def grep_r (pattern, dir):
r = re.compile(pattern)
for parent, dnames, fnames in os.walk(dir):
for fname in fnames:
filename = os.path.join(parent, fname)
if os.path.isfile(filename):
with open(filename) as f:
for line in f:
if r.search(line):
yield line
A:
re.match only checks the beginning of the string.
Use re.search()
From the docs:
Python offers two different primitive
operations based on regular
expressions: match checks for a match
only at the beginning of the string,
while search checks for a match
anywhere in the string (this is what
Perl does by default).
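The difference is easy to demonstrate in a couple of lines:

```python
import re

# match() anchors at the beginning of the string...
assert re.match('world', 'hello world') is None
# ...while search() scans the whole string.
assert re.search('world', 'hello world') is not None
# When the pattern is at the start, both succeed.
assert re.match('hello', 'hello world') is not None
```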
A:
Put all this code into a file called pygrep and chmod +x pygrep:
#!/usr/bin/python
import os
import re
import sys
def file_match(fname, pat):
try:
f = open(fname, "rt")
except IOError:
return
for i, line in enumerate(f):
if pat.search(line):
print "%s: %i: %s" % (fname, i+1, line)
f.close()
def grep(dir_name, s_pat):
pat = re.compile(s_pat)
for dirpath, dirnames, filenames in os.walk(dir_name):
for fname in filenames:
fullname = os.path.join(dirpath, fname)
file_match(fullname, pat)
if len(sys.argv) != 3:
u = "Usage: pygrep <dir_name> <pattern>\n"
sys.stderr.write(u)
sys.exit(1)
grep(sys.argv[1], sys.argv[2])
A:
import os, re
def grep_r(regex, dir):
for root, dirs, files in os.walk(dir):
for f in files:
for m in grep(regex, os.path.join(root, f)):
yield m
def grep(regex, filename):
for i, line in enumerate(open(filename)):
if re.match(regex, line): # or re.search depending on your default
yield "%s:%d: %s" % (os.path.basename(filename), i+1, line)
A:
Why do you need to use a regex?
path=os.path.join("/dir1","dir2","dir3")
pattern="test"
for r,d,f in os.walk(path):
for files in f:
for n,line in enumerate(open( os.path.join(r,files) ) ):
if pattern in line:
print "%s found in line: %d of file: %s" %(pattern, n+1, files)
Q:
Python after Ruby on Rails
I have been working with Ruby on Rails for over a year now and have been offered some development work with Python. I would like to know if development with Python is as enjoyable as Ruby in terms of clarity and ease of use, and how well Python is suited for web development. I've heard of Pylons being a direct port of the Rails framework, but does it provide the same level of comfort and features? Are there any popular websites built using Python and a framework that offers the same level of flexibility as Rails?
Because Rails doesn't seem like work.
A:
Django is one of the most famous. It follows a different approach to web development than Rails does, but it is just as powerful and feature-rich. An example website running Django is lawrence.com
Pylons is another popular one. I don't know why you heard it was a Rails clone, because it is not. It is a lightweight framework that leverages the power of other open-source projects to give you flexibility in implementation. For example, you can choose to use SQLAlchemy, SQLObject or CouchDB for managing your database. Or you can choose between Mako, Genshi, Jinja2, or whatever you like for your templates. I think you get the picture. Some example websites running on Pylons are freebase and Charlie Rose.
There are other web frameworks as well, but they are less popular.
Notably, TurboGears, which is now built upon Pylons. I would say it tries to pack in more than Pylons does, but it also constrains you more, as it makes more decisions for you. Still, you can stay away from those defaults and do as you please, but it starts out as a more constrained framework.
The last one I will mention is Zope, the big commercially backed one that has been around for a while now, but I don't have much experience with it. I do believe it is the least "fun" to work with, but that's just my feeling; you can check it out yourself.
All in all, it comes down to your workflow. I personally do not enjoy Ruby as a language as much as I do Python, so it is natural that I prefer working with Python for web development over Ruby. You really need to try them out yourself, at least the first two I mentioned; build a small website just to get a feel for each. All I can say from my experience is that people tend to like either Rails or Python, not both...
Good Luck!
A:
One very good web development framework is Django
A:
The main two frameworks in Python are Pylons (with the coaligned Turbogears framework) and the more popular Django. Django stomps everything for doing content-based sites (CMS etc) because the admin is excellent.
However, your question makes you sound very much enthused with Ruby and I doubt you'll find anything you like as much. It goes both ways: I'm pretty meh on Rails but really like Python and node.js.
A:
I have done a lot of work with Python in the past year, mostly using Django. I enjoy it, and agree with others that it's great for content-heavy sites. Python and all of its frameworks very much follow the mantra of there being one correct way of doing things. I have learned that most of my pain extending Django lies in my approaching a problem wrongly and needing to refactor the code. If you are a precise, logically-driven thinker, you'll enjoy Python a lot.
As far as websites that use Python for a code base, the biggest may be reddit and its family of sites. Django's website also lists sites that use it. I haven't had the privilege of using Pylons, but I also hear good things about it.
A:
Clarity and ease of use are some of Python's biggest selling points. That said, the different Python web frameworks cover almost the entire spectrum, from small and simple all the way up to large and complex, with everything in between.
You should find that most Python web frameworks have less 'magic' than Rails, i.e. they are a bit more explicit, which is arguably better from the clarity point of view.
In my opinion, even if you enjoy Rails and don't ever plan on leaving, you should still try out other languages and frameworks occasionally to give you a broader perspective.
Personally I like Turbogears2, but I think Django would make a good starting point for a Rails developer that wanted to try out something else.
Q:
Strip final 0 off a python string
#!/usr/bin/env python
import os, sys, subprocess, time
while True:
print subprocess.call("xsel", shell=True);
time.sleep(1);
Takes an entry from the clipboard and prints it, every 1 second.
Result:
copied0
entry0
from0
clipboard0
I do not know why it returns the final 0, but it apparently stops me from using string strip (int has no strip), hence the 0 makes the string an integer?
How can one strip final 0 off the python string in the result above?
I'm a BASH scripter converting to python.
A:
Edit: subprocess.call isn't returning a string, but an int -- that 0 you're seeing (after xsel's actual output). Use, instead:
print subprocess.Popen('xsel', stdout=subprocess.PIPE).communicate()[0]
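The same idea as a self-contained sketch; echo stands in for xsel here so it runs anywhere, and the run_and_capture helper is illustrative:

```python
import subprocess

def run_and_capture(cmd):
    """Run cmd (a list of arguments), return (stdout text, return code)."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = p.communicate()               # read all of stdout, wait for exit
    return out.decode().rstrip('\n'), p.returncode

text, rc = run_and_capture(['echo', 'clipboard contents'])
```

Here text is the command's output, and rc is the 0 that print was showing before.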
A:
As Mark pointed out, subprocess.call() does not do what you want
Something like this should work
#!/usr/bin/env python
import os, sys, subprocess, time
while True:
p=subprocess.Popen(["xsel"],stdout=subprocess.PIPE)
print p.stdout.read()
time.sleep(1)
A:
"copied0".rstrip("0") should work
Actually, you'd better do it like this; it won't show the return code on the screen:
import os, sys, subprocess, time
while True:
_ = subprocess.call("dir", shell=True);
time.sleep(1);
A:
It looks to me like it is running "xsel", which is printing its results to stdout, then printing the return code (0) to stdout. You aren't getting the clip results in Python.
You probably want subprocess.Popen and to capture stdout.
A:
The 0 and newline at each line are the only things printed by the Python print command, where zero is the shell return code from subprocess.call. The shell itself first prints its results to stdout, which is why you see the word.
Edit: See the comments in S Mark's post for the epiphany.
A:
If the zero is always at the end of the string, and so you simply always want the last character removed, just do st=st[:-1].
Or, if you are not sure that there will be a zero at the end, you can do if st[-1]=='0': st=st[:-1] (comparing against the character '0', not the integer 0).
Q:
Re-format items inside list read from CSV file in Python
I have some lines in a CSV file like this:
1000001234,Account Name,0,0,"3,711.32",0,0,"18,629.64","22,340.96",COD,"20,000.00",Some string,Some string 2
If you notice, some numbers are enclosed in " " and have a thousands separator ",". I want to remove the thousands separator and the double-quote enclosure. For the quote enclosure, I'm thinking of using string.replace(), but what about the comma inside the quote marks?
What's the best way of doing this in Python?
A:
You could simply parse the CSV, make the necessary changes and then write it again.
(I haven't tested this code but it should be something like this)
import csv
reader = csv.reader(open('IN.csv', 'r'))
writer = csv.writer(open('OUT.csv', 'w'))
for row in reader:
# do stuff to the row here
# row is just a list of items
writer.writerow(row)
A:
Here is a bit of regular expression fiddling that will do the trick:
>>> import re
>>> p = re.compile('["]([^"]*)["]')
>>> x = """1000001234,Account Name,0,0,"3,711.32",0,0,"18,629.64","22,340.96",COD,"20,000.00",Some string,Some string 2"""
>>> p.sub(lambda m: m.groups()[0].replace(',',''), x)
'1000001234,Account Name,0,0,3711.32,0,0,18629.64,22340.96,COD,20000.00,Some string,Some string 2'
Removes the commas from the parts of the string that is between pairs of quotes.
A:
If all you want is to remove double quotes and commas from a string, a couple of replaces will do it:
s = s.replace('"','').replace(',','')
A faster way is to use s.translate, but that requires a minimum of preparation:
import string
identity = string.maketrans('', '')
...
s = s.translate(identity, '",')
This removes any occurrence of double quotes or commas, and does it pretty fast too. In general, the .translate method of string objects is the best way to remove certain kinds of characters from a string (as well as possibly performing some character-to-character translation, but, by using a translate table such as the identity one I show here, the translation part may in fact be easily bypassed). Note that .translate works a bit differently for Unicode objects (and therefore for Python 3 strings, too) -- I'm giving the approach that's suitable for plain Python 2 string objects.
A:
Here is something I just tested. You may not need pprint; I just use it for clear output.
test.csv
1000001234,Account Name,0,0,"3,711.32",0,0,"18,629.64","22,340.96",COD,"20,000.00",Some string,Some string 2
1000001234,Account Name,0,0,"3,711.32",0,0,"18,629.64","22,340.96",COD,"20,000.00",Some string,Some string 2
Code: use csv.reader and pass each item to the parseNum function to check whether it is a valid number.
from pprint import pprint
import csv
def parseNum(x):
xx=x.replace(",","")
if not xx.replace(".","").isdigit(): return x
return "." in xx and float(xx) or int(xx)
x=[map(parseNum,line) for line in csv.reader(open("test.csv"))]
pprint(x)
Output
[[1000001234,
'Account Name',
0,
0,
3711.3200000000002,
0,
0,
18629.639999999999,
22340.959999999999,
'COD',
20000.0,
'Some string',
'Some string 2'],
[1000001234,
'Account Name',
0,
0,
3711.3200000000002,
0,
0,
18629.639999999999,
22340.959999999999,
'COD',
20000.0,
'Some string',
'Some string 2']]
Note: If you need good precision on float numbers, replace float with Decimal
A:
Use the csv module. It has all sorts of constants and parameters to help you set the delimiters, quotes, and everything else for the type of file you are working with. It even has a Sniffer that can help you identify the csv format of the file. In fact this is the only module I have found that can properly and easily work with csv files.
http://docs.python.org/library/csv.html
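As a hedged sketch of that approach (the strip_thousands helper is illustrative), let csv.reader handle the quoted fields, then drop commas only from fields that parse as numbers once the commas are gone:

```python
import csv

line = ('1000001234,Account Name,0,0,"3,711.32",0,0,"18,629.64",'
        '"22,340.96",COD,"20,000.00",Some string,Some string 2')

def strip_thousands(field):
    candidate = field.replace(',', '')
    try:
        float(candidate)        # numeric once the separators are removed?
        return candidate
    except ValueError:
        return field            # not a number: leave the field alone

row = next(csv.reader([line]))  # the reader strips the quote enclosure
clean = [strip_thousands(f) for f in row]
```

clean now holds '3711.32', '18629.64', '20000.00', etc., with the text fields untouched.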
A:
You should absolutely use the csv module. If you use a csv.reader, you only have one very small problem: testing fields to see if they're numbers, and stripping commas if they are. I've packaged it as a generator:
import csv
def read_and_fix_numbers(f):
    """Iterate over a file object that returns CSV data, stripping commas out of numbers."""
    for row in csv.reader(f):
        fixed = []
        for field in row:
            stripped = field.replace(",", "")
            try:
                float(stripped)    # is it a number once the commas are gone?
                field = stripped
            except ValueError:
                pass
            fixed.append(field)
        yield fixed
Usage:
>>> data = '1000001234,Account Name,0,0,"3,711.32",0,0,"18,629.64","22,340.96",COD,"20,000.00",Some string,Some string 2'
>>> import StringIO
>>> f = StringIO.StringIO(data)
>>> for row in read_and_fix_numbers(f):
print row
['1000001234', 'Account Name', '0', '0', '3711.32', '0', '0', '18629.64', '22340.96', 'COD', '20000.00', 'Some string', 'Some string 2']
"Here is something I just tested, you may not need pprint, I just want to use for clear output.\ntest.csv\n1000001234,Account Name,0,0,\"3,711.32\",0,0,\"18,629.64\",\"22,340.96\",COD,\"20,000.00\",Some string,Some string 2\n1000001234,Account Name,0,0,\"3,711.32\",0,0,\"18,629.64\",\"22,340.96\",COD,\"20,000.00\",Some string,Some string 2\n\nCode, use csv reader, and pass each item to parseNum function to check valid digit or not.\nfrom pprint import pprint\nimport csv\n\ndef parseNum(x):\n xx=x.replace(\",\",\"\")\n if not xx.replace(\".\",\"\").isdigit(): return x\n return \".\" in xx and float(xx) or int(xx)\n\nx=[map(parseNum,line) for line in csv.reader(open(\"test.csv\"))]\n\npprint(x)\n\nOutput\n[[1000001234,\n 'Account Name',\n 0,\n 0,\n 3711.3200000000002,\n 0,\n 0,\n 18629.639999999999,\n 22340.959999999999,\n 'COD',\n 20000.0,\n 'Some string',\n 'Some string 2'],\n [1000001234,\n 'Account Name',\n 0,\n 0,\n 3711.3200000000002,\n 0,\n 0,\n 18629.639999999999,\n 22340.959999999999,\n 'COD',\n 20000.0,\n 'Some string',\n 'Some string 2']]\n\nNote: If you need good precision on float numbers, replace float with Decimal \n",
"Use the csv module. It has all sorts of constants and parameters to help you set the delimiters, quotes, and everything else for the type of file you are working with. It even has a Sniffer that can help you identify the csv format of the file. In fact this is the only module I have found that can properly and easily work with csv files. \nhttp://docs.python.org/library/csv.html\n",
"You should absolutely use the csv module. If you use a csv.reader, you only have one very small problem: testing fields to see if they're numbers, and stripping commas if they are. I've packaged it as a generator:\nimport csv\n\ndef read_and_fix_numbers(f):\n \"\"\"Iterate over a file object that returns CSV data, stripping commas out of numbers.\"\"\"\n for row in csv.reader(f):\n for field in row:\n try:\n x = float(field)\n field.replace(\",\", \"\")\n except ValueError:\n pass\n fixed.append(field)\n yield fixed\n\nUsage:\n>>> data = '1000001234,Account Name,0,0,\"3,711.32\",0,0,\"18,629.64\",\"22,340.96\",COD,\"20,000.00\",Some string,Some string 2'\n>>> import StringIO\n>>> f = StringIO.StringIO(data)\n>>> for row in read_and_fix_numbers(f):\n print row\n['1000001234', 'Account Name', '0', '0', '3711.32', '0', '0', '18629.64', '22340.96', 'COD', '20000.00', 'Some string', 'Some string 2']\n\n"
] |
[
2,
2,
1,
1,
1,
1
] |
[] |
[] |
[
"csv",
"delimiter",
"parsing",
"python",
"replace"
] |
stackoverflow_0001864422_csv_delimiter_parsing_python_replace.txt
|
Q:
Convert UTF-8 octets to unicode code points
I have a set of UTF-8 octets and I need to convert them back to unicode code points. How can I do this in python.
e.g. UTF-8 octet ['0xc5','0x81'] should be converted to 0x141 codepoint.
A:
Python 3.x:
In Python 3.x, str is the class for Unicode text, and bytes is for containing octets.
If by "octets" you really mean strings in the form '0xc5' (rather than '\xc5') you can convert to bytes like this:
>>> bytes(int(x,0) for x in ['0xc5', '0x81'])
b'\xc5\x81'
You can then convert to str (ie: Unicode) using the str constructor...
>>> str(b'\xc5\x81', 'utf-8')
'Ł'
...or by calling .decode('utf-8') on the bytes object:
>>> b'\xc5\x81'.decode('utf-8')
'Ł'
>>> hex(ord('Ł'))
'0x141'
Pre-3.x:
Prior to 3.x, the str type was a byte array, and unicode was for Unicode text.
Again, if by "octets" you really mean strings in the form '0xc5' (rather than '\xc5') you can convert them like this:
>>> ''.join(chr(int(x,0)) for x in ['0xc5', '0x81'])
'\xc5\x81'
You can then convert to unicode using the constructor...
>>> unicode('\xc5\x81', 'utf-8')
u'\u0141'
...or by calling .decode('utf-8') on the str:
>>> '\xc5\x81'.decode('utf-8')
u'\u0141'
A:
In lovely 3.x, where all strs are Unicode, and bytes are what strs used to be:
>>> s = str(bytes([0xc5, 0x81]), 'utf-8')
>>> s
'Ł'
>>> ord(s)
321
>>> hex(ord(s))
'0x141'
Which is what you asked for.
A:
l = ['0xc5','0x81']
s = ''.join([chr(int(c, 16)) for c in l]).decode('utf8')
s
>>> u'\u0141'
A:
>>> "".join((chr(int(x,16)) for x in ['0xc5','0x81'])).decode("utf8")
u'\u0141'
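The same conversion in Python 3, combining the ideas above into one short sequence (the octet list is the one from the question):
```python
# hex-string octets -> bytes -> decoded character -> code point
octets = ['0xc5', '0x81']
char = bytes(int(x, 16) for x in octets).decode('utf-8')
codepoint = ord(char)
```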
|
Convert UTF-8 octets to unicode code points
|
I have a set of UTF-8 octets and I need to convert them back to unicode code points. How can I do this in python.
e.g. UTF-8 octet ['0xc5','0x81'] should be converted to 0x141 codepoint.
|
[
"Python 3.x:\nIn Python 3.x, str is the class for Unicode text, and bytes is for containing octets.\nIf by \"octets\" you really mean strings in the form '0xc5' (rather than '\\xc5') you can convert to bytes like this:\n>>> bytes(int(x,0) for x in ['0xc5', '0x81'])\nb'\\xc5\\x81'\n\nYou can then convert to str (ie: Unicode) using the str constructor...\n>>> str(b'\\xc5\\x81', 'utf-8')\n'Ł'\n\n...or by calling .decode('utf-8') on the bytes object:\n>>> b'\\xc5\\x81'.decode('utf-8')\n'Ł'\n>>> hex(ord('Ł'))\n'0x141'\n\nPre-3.x:\nPrior to 3.x, the str type was a byte array, and unicode was for Unicode text.\nAgain, if by \"octets\" you really mean strings in the form '0xc5' (rather than '\\xc5') you can convert them like this:\n>>> ''.join(chr(int(x,0)) for x in ['0xc5', '0x81'])\n'\\xc5\\x81'\n\nYou can then convert to unicode using the constructor...\n>>> unicode('\\xc5\\x81', 'utf-8')\nu'\\u0141'\n\n...or by calling .decode('utf-8') on the str:\n>>> '\\xc5\\x81'.decode('utf-8')\nu'\\u0141'\n\n",
"In lovely 3.x, where all strs are Unicode, and bytes are what strs used to be:\n>>> s = str(bytes([0xc5, 0x81]), 'utf-8')\n>>> s\n'Ł'\n>>> ord(s)\n321\n>>> hex(ord(s))\n'0x141'\n\nWhich is what you asked for.\n",
"l = ['0xc5','0x81']\ns = ''.join([chr(int(c, 16)) for c in l]).decode('utf8')\ns\n>>> u'\\u0141'\n\n",
">>> \"\".join((chr(int(x,16)) for x in ['0xc5','0x81'])).decode(\"utf8\")\nu'\\u0141'\n\n"
] |
[
16,
6,
3,
1
] |
[] |
[] |
[
"python",
"unicode",
"utf_8"
] |
stackoverflow_0001864701_python_unicode_utf_8.txt
|
Q:
running a method within another method. python
I am calling a method within another. and the error for this script i am getting is
NameError: name 'mnDialog' is not defined
Is there a reason for it? I think it has something to do with executing a command which isn't on the global level. (i didn't have the impression that python has a global and local variable declaration.) What is the right syntax or the go around this?
thank you for your time.
import maya.cmds as cmds
def mnProgRun():
def mnDialog(*args):
cmds.confirmDialog( title='Confirm', message='Are you sure?',button=['Yes','No'], defaultButton='Yes',cancelButton='No',dismissString='No' )
def mnMakeWin():
cmds.window( 'mnWin', title = 'testman', wh=(260,100))
cmds.columnLayout(adjustableColumn=False, columnAlign='center')
cmds.button( label="Yes,it works",align='center',width=120,height=25, backgroundColor=[0.5,1,0.5],command='cmds.scriptJob( event=["SelectionChanged","mnDialog"])')
cmds.button( label="No, Thank You!",align='center',width=120,height=25, backgroundColor=[1,0.5,0.5],command='cmds.deleteUI("mnWin")')
cmds.showWindow( 'mnWin' )
mnMakeWin()
mnProgRun()
A:
The problem is that the mnDialog is not being looked up from mnMakeWin, you are passing the name and it gets looked up later when you are not in the correct scope.
It may work to pass the function in instead of the name. I don't have maya installed, so I can't try it.
Otherwise you'll have to define mnDialog in the global scope which seems like an odd restriction to me
A:
mnDialog is a local variable in mnProgRun. It is not accessible outside the function scope. If you want it to be, define it at the appropriate scope.
(i didn't have the impression that python has a global and local variable declaration.)
You have the wrong impression.
A:
You should define mnDialog at the top level. It is not in the correct namespace.
Also, it's (almost) always unnecessarily complicating to nest functions in Python.
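The scoping rule itself can be seen without Maya at all; this standalone sketch shows why a name defined inside a function is invisible outside it:
```python
# a function defined inside another function is a local name there
def outer():
    def inner():
        return "visible inside outer"
    return inner()      # fine: inner is in scope here

result = outer()
try:
    inner()             # NameError: inner is local to outer
except NameError:
    result += ", invisible outside"
```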
A:
Maya often has problems with scopes.
You may define mnDialog() and mnMakeWin() outside the function, at the top scope level.
It's a Maya quirk more than a Python one; I faced the same problem when calling class methods from a Maya UI command (e.g. a button event).
hope that will help you :)
##edit
import maya.cmds as cmds
def mnDialog(*args):
cmds.confirmDialog( title='Confirm', message='Are you sure?',button=['Yes','No'],
defaultButton='Yes',cancelButton='No',dismissString='No' )
def mnMakeWin():
cmds.window( 'mnWin', title = 'testman', wh=(260,100))
cmds.columnLayout(adjustableColumn=False, columnAlign='center')
cmds.button( label="Yes,it works",align='center',width=120,height=25,
backgroundColor=[0.5,1,0.5],command='cmds.scriptJob( event=
["SelectionChanged","mnDialog"])')
cmds.button( label="No, Thank You!",align='center',width=120,height=25,
backgroundColor=[1,0.5,0.5],command='cmds.deleteUI("mnWin")')
cmds.showWindow( 'mnWin' )
def mnProgRun():
mnMakeWin()
#run
mnProgRun()
|
running a method within another method. python
|
I am calling a method within another. and the error for this script i am getting is
NameError: name 'mnDialog' is not defined
Is there a reason for it? I think it has something to do with executing a command which isn't on the global level. (i didn't have the impression that python has a global and local variable declaration.) What is the right syntax or the go around this?
thank you for your time.
import maya.cmds as cmds
def mnProgRun():
def mnDialog(*args):
cmds.confirmDialog( title='Confirm', message='Are you sure?',button=['Yes','No'], defaultButton='Yes',cancelButton='No',dismissString='No' )
def mnMakeWin():
cmds.window( 'mnWin', title = 'testman', wh=(260,100))
cmds.columnLayout(adjustableColumn=False, columnAlign='center')
cmds.button( label="Yes,it works",align='center',width=120,height=25, backgroundColor=[0.5,1,0.5],command='cmds.scriptJob( event=["SelectionChanged","mnDialog"])')
cmds.button( label="No, Thank You!",align='center',width=120,height=25, backgroundColor=[1,0.5,0.5],command='cmds.deleteUI("mnWin")')
cmds.showWindow( 'mnWin' )
mnMakeWin()
mnProgRun()
|
[
"The problem is that the mnDialog is not being looked up from mnMakeWin, you are passing the name and it gets looked up later when you are not in the correct scope.\nIt may work to pass the function in instead of the name. I don't have maya installed, so I can't try it.\nOtherwise you'll have to define mnDialog in the global scope which seems like an odd restriction to me\n",
"mnDialog is a local variable in mnProgRun. It is not accessible outside the function scope. If you want it to be, define it at the appropriate scope.\n\n(i didn't have the impression that python has a global and local variable declaration.)\n\nYou have the wrong impression.\n",
"You should define mnDialog at the top level. It is not in the correct namespace.\nAlso, it's (almost) always unnecessarily complicating to nest functions in Python.\n",
"maya always have problems with scoops,\nyou may define mnDialog() and mnMakeWin() outside the function, at the top scoop level,\nits maya problem not from python, as i faced problem when calling class methods from maya ui command (ex button event).\nhope that will help you :)\n##edit\nimport maya.cmds as cmds\ndef mnDialog(*args):\n\n cmds.confirmDialog( title='Confirm', message='Are you sure?',button=['Yes','No'],\n\n defaultButton='Yes',cancelButton='No',dismissString='No' )\n\ndef mnMakeWin():\n\n cmds.window( 'mnWin', title = 'testman', wh=(260,100))\n\n cmds.columnLayout(adjustableColumn=False, columnAlign='center')\n\n cmds.button( label=\"Yes,it works\",align='center',width=120,height=25, \n backgroundColor=[0.5,1,0.5],command='cmds.scriptJob( event=\n [\"SelectionChanged\",\"mnDialog\"])')\n\n cmds.button( label=\"No, Thank You!\",align='center',width=120,height=25, \n backgroundColor=[1,0.5,0.5],command='cmds.deleteUI(\"mnWin\")')\n\n cmds.showWindow( 'mnWin' )\n\ndef mnProgRun():\n mnMakeWin()\n\n#run\nmnProgRun()\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"command",
"maya",
"methods",
"python"
] |
stackoverflow_0001862945_command_maya_methods_python.txt
|
Q:
Showing processing message in Python
I want to show the processing information or log in the original page when the submitted request is being served until it completes the execution. I thought it would be meaningful to the user to know what is happening behind the request.
I don't find a clue to do so though, can you guys help me out as how people are doing like this one below one - for your reference
http://www.xml-sitemaps.com/
A:
there are two ways i could imagine handling this:
have your backend script (python) output the information of a long process to a log of some sort (text file, database, session, etc...) and then have javascript grab the information via ajax and update the current page.
same deal, but instead of ajax just have a meta refresh on the page which would grab the latest updated information.
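A minimal sketch of the first approach (file names and step count are made up): the long-running task appends progress lines to a log file, and a status page polls that file via Ajax or a meta refresh.
```python
import os
import tempfile

def long_task(log_path, steps=3):
    # append one progress line per completed chunk of work
    with open(log_path, 'a') as log:
        for step in range(steps):
            # ... do one chunk of real work here ...
            log.write('step %d done\n' % step)

log_path = os.path.join(tempfile.gettempdir(), 'progress.log')
open(log_path, 'w').close()          # start with an empty log
long_task(log_path)
lines = open(log_path).read().splitlines()
```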
A:
you may use Python threading, which will run the long task in a background thread
and let you display its messages as they are produced
hope it helps ;)
|
Showing processing message in Python
|
I want to show the processing information or log in the original page when the submitted request is being served until it completes the execution. I thought it would be meaningful to the user to know what is happening behind the request.
I don't find a clue to do so though, can you guys help me out as how people are doing like this one below one - for your reference
http://www.xml-sitemaps.com/
|
[
"there are two ways i could imagine handling this:\n\nhave your backend script (python) output the information of a long process to a log of some sort (text file, database, session, etc...) and then have javascript grab the information via ajax and update the current page.\nsame deal, but instead of ajax just have a meta refresh on the page which would grab the latest updated information.\n\n",
"you may use python threading, which will create a new process in background\nand display your messages on that thread\nhope it helps ;)\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000308220_python.txt
|
Q:
Display data from a table using Django
Can someone please give me a small piece of code where we display data from a table and point me to code that goes into views.py and templates/index.html to display the table of contents?
A:
Read the Django tutorial. It shows, in 4 parts, the basics of this framework.
As for your question, a very quick example.
In views.py:
from django.shortcuts import render_to_response
from myapp import models   # "myapp" stands in for your application's name

def display(request):
    return render_to_response('template.tmpl', {'obj': models.Book.objects.all()})
In models.py:
class Book(models.Model):
author = models.CharField(max_length = 20)
title = models.CharField(max_length = 40)
publication_year = models.IntegerField()
In template.tmpl:
<table>
<tr>
<th>author</th>
<th>title</th>
<th>publication year</th>
</tr>
{% for b in obj %}
<tr>
<td>{{ b.author }}</td>
<td>{{ b.title }}</td>
<td>{{ b.publication_year }}</td>
</tr>
{% endfor %}
</table>
|
Display data from a table using Django
|
Can someone please give me a small piece of code where we display data from a table and point me to code that goes into views.py and templates/index.html to display the table of contents?
|
[
"Read the Django tutorial. It shows, in 4 parts, the basics of this framework. \nAs for your question, a very quick example.\nIn views.py:\ndef display(request):\n return render_to_response('template.tmpl', {'obj': models.Book.objects.all()})\n\nIn models.py:\nclass Book(models.Model):\n author = models.CharField(max_length = 20)\n title = models.CharField(max_length = 40)\n publication_year = models.IntegerField()\n\nIn template.tmpl:\n<table>\n<tr>\n <th>author</th>\n <th>title</th>\n <th>publication year</th>\n</tr>\n{% for b in obj %}\n<tr>\n <td>{{ b.author }}</td>\n <td>{{ b.title }}</td>\n <td>{{ b.publication_year }}</td>\n</tr>\n{% endfor %}\n</table>\n\n"
] |
[
10
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001865479_django_python.txt
|
Q:
Search a file for a string, and execute a function if string not found; in python
def checkCache(cachedText):
for line in open("cache"):
if cachedText + ":" in line:
print line
open("cache").close()
else:
requestDefinition(cachedText)
This code searches each line of a file (cache) for a specific string (cachedText + ":").
If it does not find the specific string, within the entire file it is meant to call another function (requestNewDefinition(cachedText)).
However my above code executes the function for each non-matching line.
How can one search a file for a string (cachedText + ":"), and if the string is not found anywhere in the file, execute another function?
Example Cache:
hello:world
foo:bar
A:
Your for loop is broken: you are actually checking each line of the file and executing the function for each line which does not match.
Note also that calling open("cache").close() will reopen the cache file and close it immediately, without closing the handle which was opened at the beginning of the for loop.
One way to perform what you need is to make the else clause part of the for loop. Beware that an else on a for loop is tricky!
def checkCache(cachedText):
cache = open( "cache" )
for line in cache:
if cachedText + ":" in line:
break
else:
requestDefinition(cachedText)
cache.close()
the else part of a for loop executes at the end of the loop, only if no break was called in the loop.
A:
Something like this:
def checkCache(cachedText):
for line in open("cache"):
if cachedText + ":" in line:
print line
break
else:
requestDefinition(cachedText)
Notice how the else: is attached to the for, not the if. The else: is only executed if the for completes by exhausting the iterable, not executing the break, which would mean that the cachedText is not found anywhere in the file. See the Python documentation for more information.
A:
My guess is that you want something like this. If the line is found, you should "break". "break" will end the for loop. The else statement attached to the for loop (as opposed to an if statement) will only execute if the for loop iterated through each line without ever hitting the "break" condition. You still want to close the file after you're done.
def checkCache(cachedText):
f = open("cache")
for line in f:
if cachedText + ":" in line:
print line
break
else:
requestDefinition(cachedText)
f.close()
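To see the for/else mechanics in isolation, here is a standalone sketch using the example cache lines from the question (no file involved):
```python
def cached(lines, key):
    for line in lines:
        if key + ":" in line:
            break
    else:                 # runs only if the loop never hit break
        return False
    return True

cache_lines = ["hello:world", "foo:bar"]
```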
|
Search a file for a string, and execute a function if string not found; in python
|
def checkCache(cachedText):
for line in open("cache"):
if cachedText + ":" in line:
print line
open("cache").close()
else:
requestDefinition(cachedText)
This code searches each line of a file (cache) for a specific string (cachedText + ":").
If it does not find the specific string, within the entire file it is meant to call another function (requestNewDefinition(cachedText)).
However my above code executes the function for each non-matching line.
How can one search a file for a string (cachedText + ":"), and if the string is not found anywhere in the file, execute another function?
Example Cache:
hello:world
foo:bar
|
[
"your for loop is broken. you are actually checking each line of the file and executing the function for each line which does not match.\nnote also that calling open(\"cache\").close() will reopen the cache file and close it immediately, without closing the handle which was open at the beginning of the for loop.\none way to perform what you need is to make the else clause part of the for loop. beware that an else in a for loop is tricky !.\ndef checkCache(cachedText):\n cache = open( \"cache\" )\n for line in cache:\n if cachedText + \":\" in line:\n break\n else:\n requestDefinition(cachedText)\n cache.close()\n\nthe else part of a for loop executes at the end of the loop, only if no break was called in the loop.\n",
"Something like this:\ndef checkCache(cachedText):\n for line in open(\"cache\"):\n if cachedText + \":\" in line:\n print line\n break\n else:\n requestDefinition(cachedText)\n\nNotice how the else: is attached to the for, not the if. The else: is only executed if the for completes by exhausting the iterable, not executing the break, which would mean that the cachedText is not found anywhere in the file. See the Python documentation for more information.\n",
"My guess is that you want something like this. If the line is found, you should \"break\". \"break\" will end the for loop. The else statement attached to the for loop (as opposed to an if statement) will only execute if the for loop iterated through each line without ever hitting the \"break\" condition. You still want to close the file after you're done.\ndef checkCache(cachedText):\n f = open(\"cache\")\n for line in f:\n if cachedText + \":\" in line:\n print line\n break\n else:\n requestDefinition(cachedText)\n f.close()\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"caching",
"file",
"python",
"search",
"string"
] |
stackoverflow_0001865484_caching_file_python_search_string.txt
|
Q:
What does first argument to `type` do?
Some code.
In [1]: A = type('B', (), {})
In [2]: a = A()
In [3]: b = B()
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
/home/shabda/<ipython console> in <module>()
NameError: name 'B' is not defined
What does first argument to type doing here?
A:
It's setting the __name__ property of the created class.
When you say:
class B(object):
two things happen with that 'B':
The name 'B' is assigned the class. This is just like if you'd said "B = ...".
The __name__ property of the class is set to 'B'.
When you invoke the type constructor manually only the latter half is done. If you don't assign the result to 'B' then B will remain set (or unset) as it was before.
Note that a similar result could be obtained by saying:
class B(object):
pass
A = B
del B
Now A refers to a class that calls itself 'B', and B doesn't refer to anything.
A:
It's creating a new class with the name B:
Python 2.5.4 (r254:67916, Nov 19 2009, 22:14:20)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> type('B', (), {})
<class '__main__.B'>
See the second form of type here for more information.
When you assign the result of calling type to a variable, you're just giving the class B another name. It's equivalent to doing
>>> class B(object):
... pass
...
>>> A = B
>>> a = A()
>>> b = B()
A:
'B' is just a string which is the name of A
One place it is used is for the default __repr__ of classes and their objects
>>> A=type('B', (), {})
>>> A
<class '__main__.B'>
>>> a=A()
>>> a
<__main__.B object at 0xb7cf88ec>
The usual way to create a class has no way to explicitly set the __name__ attribute.
In this case it is implicitly set by the class constructor
>>> class A:pass
...
>>> A
<class __main__.A at 0xb7cf280c>
But there is nothing stopping you from changing the name afterward
>>> A.__name__
'A'
>>> A.__name__='B'
>>> A
<class __main__.B at 0xb7cf280c>
>>>
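For completeness, all three arguments of the type() constructor can be exercised at once, showing that the first one only sets the class's name:
```python
# type(name, bases, namespace): name -> __name__, bases -> __bases__,
# namespace dict -> class attributes
B = type('B', (object,), {'greet': lambda self: 'hi from ' + type(self).__name__})
```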
|
What does first argument to `type` do?
|
Some code.
In [1]: A = type('B', (), {})
In [2]: a = A()
In [3]: b = B()
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
/home/shabda/<ipython console> in <module>()
NameError: name 'B' is not defined
What does first argument to type doing here?
|
[
"It's setting the __name__ property of the created class.\nWhen you say:\nclass B(object):\n\ntwo things happen with that 'B':\n\nThe name 'B' is assigned the class. This is just like if you'd said \"B = ...\".\nThe __name__ property of the class is set to 'B'.\n\nWhen you invoke the type constructor manually only the latter half is done. If you don't assign the result to 'B' then B will remain set (or unset) as it was before.\nNote that a similar result could be obtained by saying:\nclass B(object):\n pass\nA = B\ndel B\n\nNow A refers to a class that calls itself 'B', and B doesn't refer to anything.\n",
"It's creating a new class with the name B:\nPython 2.5.4 (r254:67916, Nov 19 2009, 22:14:20)\n[GCC 4.3.4] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> type('B', (), {})\n<class '__main__.B'>\n\nSee the second form of type here for more information.\nWhen you assign the result of calling type to a variable, you're just giving the class B another name. It's equivalent to doing\n>>> class B(object):\n... pass\n...\n>>> A = B\n>>> a = A()\n>>> b = B()\n\n",
"'B' is just a string which is the name of A\nOne place it is used is for the default __repr__ of classes and their objects\n>>> A=type('B', (), {})\n>>> A\n<class '__main__.B'>\n>>> a=A()\n>>> a\n<__main__.B object at 0xb7cf88ec>\n\nThe usual way to create a class has no way to explicitly set the __name__ attribute.\nIn this case it is implicitly set by the class constructor\n>>> class A:pass\n... \n>>> A\n<class __main__.A at 0xb7cf280c>\n\nBut there is nothing stopping you from changing the name afterward\n>>> A.__name__\n'A'\n>>> A.__name__='B'\n>>> A\n<class __main__.B at 0xb7cf280c>\n>>> \n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"metaprogramming",
"python"
] |
stackoverflow_0001865250_metaprogramming_python.txt
|
Q:
How to use time > year 2038 on official Windows Python 2.5
The official Python 2.5 on Windows was built with Visual Studio .NET 2003, which uses a 32 bit time_t. So when the year is > 2038, it just gives exceptions.
Although this is fixed in Python 2.6 (which changed time_t to 64 bit with VS2008), I'd like to use 2.5 because many modules are already compiled for it.
So here's my question - is there any solution to easily let my program handle year > 2038 and still using official Python 2.5? For example some pre-made libraries like "time64" or "longtime" etc...
Please do not tell me to upgrade to 2.6+ or forget about the bug - I have my reason to need to make it work, that's why I post the question here.
A:
The datetime module in the standard library should work fine for you. What do you need from module time that datetime doesn't offer?
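For illustration, datetime arithmetic is not built on time_t, so dates past 2038 work fine (a minimal sketch):
```python
import datetime

# a post-2038 date plus one leap year's worth of days
d = datetime.datetime(2040, 1, 1)
later = d + datetime.timedelta(days=366)   # 2040 is a leap year
```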
A:
I don't mean to sound trite, but why not:
forget about the Y2038 bug with Python 2.5
upgrade to Python 2.6 at some point in the future before 2038
edit:
To clarify: (and I'm serious — I didn't mean to poke fun)
Presumably you can upgrade Python to 2.6 (or later) at some indefinite time between now and 2038. Maybe in 2012. Maybe in 2015. Maybe in 2037.
If you are aware of the differences between the Python timestamp variable in your application (I'm not much of a Python user), it seems like these would be the important aspects to consider:
what data is being saved persistently
how a Python 2.5 timestamp variable that has been persisted, gets restored using Python 2.6 (presumably it will "do the right thing")
whether old data will be stored in its persistent form long enough for ambiguities to arise (e.g. the year "96" is unambiguous when considered between 1950 and 2049, but if that data is kept around until the year 2230 then "96" could be 1996, 2096, or 2196)
If the answers are favorable, just use the regular timestamp with its 2038 bug. You'll have to compare that with the amount of redesign/refactoring you'd have to do to make your application work with an alternate timestamp (e.g. a database timestamp string or whatever).
A:
The best solution I've found is to get a source copy of Python 2.5, and re-compile the time module with compilers which defaults time_t to 64 bit, for example VS2005 or VS2008 (may also configure the C runtime to prevent side-by-side issue).
|
How to use time > year 2038 on official Windows Python 2.5
|
The official Python 2.5 on Windows was built with Visual Studio .NET 2003, which uses a 32 bit time_t. So when the year is > 2038, it just gives exceptions.
Although this is fixed in Python 2.6 (which changed time_t to 64 bit with VS2008), I'd like to use 2.5 because many modules are already compiled for it.
So here's my question - is there any solution to easily let my program handle year > 2038 and still using official Python 2.5? For example some pre-made libraries like "time64" or "longtime" etc...
Please do not tell me to upgrade to 2.6+ or forget about the bug - I have my reason to need to make it work, that's why I post the question here.
|
[
"The datetime module in the standard library should work fine for you. What do you need from module time that datetime doesn't offer?\n",
"I don't mean to sound trite, but why not:\n\nforget about the Y2038 bug with Python 2.5\nupgrade to Python 2.6 at some point in the future before 2038\n\nedit:\nTo clarify: (and I'm serious — I didn't mean to poke fun)\nPresumably you can upgrade Python to 2.6 (or later) at some indefinite time between now and 2038. Maybe in 2012. Maybe in 2015. Maybe in 2037.\nIf you are aware of the differences between the Python timestamp variable in your application (I'm not much of a Python user), it seems like these would be the important aspects to consider:\n\nwhat data is being saved persistently\nhow a Python 2.5 timestamp variable that has been persisted, gets restored using Python 2.6 (presumably it will \"do the right thing\")\nwhether old data will be stored in its persistent form long enough for ambiguities to arise (e.g. the year \"96\" is unambiguous when considered between 1950 and 2049, but if that data is kept around until the year 2230 then \"96\" could be 1996, 2096, or 2196)\n\nIf the answers are favorable, just use the regular timestamp with its 2038 bug. You'll have to compare that with the amount of redesign/refactoring you'd have to do to make your application work with an alternate timestamp (e.g. a database timestamp string or whatever).\n",
"The best solution I've found is to get a source copy of Python 2.5, and re-compile the time module with compilers which defaults time_t to 64 bit, for example VS2005 or VS2008 (may also configure the C runtime to prevent side-by-side issue).\n"
] |
[
7,
7,
4
] |
[] |
[] |
[
"python",
"python_2.5",
"time",
"time_t",
"year2038"
] |
stackoverflow_0000839755_python_python_2.5_time_time_t_year2038.txt
|
Q:
Working with bit streams
I have a base64-encoded bit stream that I want to work with. After decoding it with base64.b64decode I get a bytes object (Py3k btw) containing the decoded code. The problem now is that I would like to work on that bytes object with bit-wise operations, such as shifting, bit-wise and, etc., but that is not possible as it is a bytes sequence.
Is there any way to use binary operations on that decoded code?
A:
try using
list(bytestring)
Eg.
>>> bstring=b"Hello World"
>>> list( bstring)
[72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100]
>>>
If you want one huge bitfield instead of all those octets
>>> from functools import reduce
>>> reduce(lambda x,y:(x<<8)+y,list(b"Hello World"))
87521618088882533792115812
>>> bin(_)
'0b100100001100101011011000110110001101111001000000101011101101111011100100110110001100100'
>>>
You didn't say how you are counting the bits, perhaps they are supposed to be reversed
>>> reduce(lambda x,y:(x<<8)+y,list(b"Hello World"[::-1]))
121404708493354166158910792
>>> bits=bin(_)[2:]
and pad the string to even bytes
>>> bits=bits.zfill(((len(bits)-1)//8+1)*8)
>>> bits
'0110010001101100011100100110111101010111001000000110111101101100011011000110010101001000'
turn the first 6 bits into an int
>>> int(bits[:6],2)
25
and then the following 4 bits
>>> int(bits[6:10],2)
1
A:
If you need to make your bytes object mutable then you can construct a bytearray from it:
mutable = bytearray(b"immutable")
This will let you modify the individual bytes through normal assignments
mutable[0] = mutable[1] = 32
If you need to do bit-wise operations then I suggest trying bitstring (with apologies for recommending my own module). It works for Python 3 and lets you do bit-wise slicing, shifting, logical operations and much more.
>>> s = bitstring.BitArray(bytes=b'your_bytes_object')
>>> s.hex
'0x796f75725f62797465735f6f626a656374'
>>> ten_bits = s[5:15]
>>> print(ten_bits, ten_bits.int)
0b0010110111 183
>>> print(ten_bits << 2)
0b1011011100
>>> print(s[0:6] & '0b110100')
0b010100
A:
If you're using Python 2.x, you could try using Construct. It can do very elegant parsing of data, including bit data.
It hasn't had so much active development recently, so I'm not sure what would be involved in making it work for Python 3.x. But for 2.x it's great.
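A postscript for later readers: on Python 3.2 and newer, the bytes-to-integer conversion in the first answer is built into the stdlib as int.from_bytes (not available in the early Py3k releases current when this was asked). A sketch:

```python
data = b"Hello World"

# Big-endian conversion: equivalent to reduce(lambda x, y: (x << 8) + y, ...)
n = int.from_bytes(data, "big")
print(n)

# Ordinary bit-wise operators now apply; e.g. extract the top 6 bits.
total_bits = len(data) * 8
top6 = n >> (total_bits - 6)
print(top6)
```

int.to_bytes performs the reverse conversion once the bit-twiddling is done.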
|
Working with bit streams
|
I have a base64 encoded bit stream, I want to work with. After decoding it with base64.b64decode I get a bytes object (Py3k btw) containing the decoded code. The problem is now, that I would like to work on that bytes object with bit-wise operations, such as shifting, bit wise and etc, but that is not possible as it is a bytes sequence.
Is there any way to use binary operations on that decoded code?
|
[
"try using\nlist(bytestring)\n\nEg.\n>>> bstring=b\"Hello World\"\n>>> list( bstring)\n[72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100]\n>>> \n\nIf you want one huge bitfield instead of all those octets\n>>> from functools import reduce\n>>> reduce(lambda x,y:(x<<8)+y,list(b\"Hello World\"))\n87521618088882533792115812\n>>> bin(_)\n'0b100100001100101011011000110110001101111001000000101011101101111011100100110110001100100'\n>>> \n\nYou didn't say how you are counting the bits, perhaps they are supposed to be reversed\n>>> reduce(lambda x,y:(x<<8)+y,list(b\"Hello World\"[::-1]))\n121404708493354166158910792\n>>> bits=bin(_)[2:]\n\nand pad the string to even bytes\n>>> bits=bits.zfill(((len(bits)-1)//8+1)*8)\n>>> bits\n'0110010001101100011100100110111101010111001000000110111101101100011011000110010101001000'\n\nturn the first 6 bits into an int\n>>> int(bits[:6],2)\n25\n\nand then the following 4 bits\n>>> int(bits[6:10],2)\n1\n\n",
"If you need to make your bytes object mutable then you can construct a bytearray from it:\nmutable = bytearray(b\"immutable\")\n\nThis will let you modify the individual bytes through normal assignments\nmutable[0] = mutable[1] = 32\n\nIf you need to do bit-wise operations then I suggest trying bitstring (with apologies for recommending my own module). It works for Python 3 and lets you do bit-wise slicing, shifting, logical operations and much more.\n>>> s = bitstring.BitArray(bytes=b'your_bytes_object')\n>>> s.hex\n'0x796f75725f62797465735f6f626a656374'\n>>> ten_bits = s[5:15]\n>>> print(ten_bits, ten_bits.int)\n0b0010110111 183\n>>> print(ten_bits << 2)\n0b1011011100\n>>> print(s[0:6] & '0b110100')\n0b010100\n\n",
"If you're using Python 2.x, you could try using Construct. It can do very elegant parsing of data, including bit data.\nIt hasn't had so much active development recently, so I'm not sure what would be involved in making it work for Python 3.x. But for 2.x it's great.\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001863571_python_python_3.x.txt
|
Q:
How do I mimic browser back arrow in my application page?
request.path gives me the clicked url.
I need a return link on that page which on clicking, should return to Referrer page,
just as it would happen in the case of the browser back arrow. I do not maintain any session and
do not want to hardcode the referrer page url.
A:
history.go(-1)
http://www.devguru.com/Technologies/Ecmascript/Quickref/history_go.html
A:
How about trying Javascript instead?
<a href="#" onclick="history.go(-1);">Go back</a>
A:
You can do so using Javascript:
<span onclick='javascript: history.back()'>Previous page</span>
|
How do I mimic browser back arrow in my application page?
|
request.path gives me the clicked url.
I need a return link on that page which on clicking, should return to Referrer page,
just as it would happen in the case of the browser back arrow. I do not maintain any session and
do not want to hardcode the referrer page url.
|
[
"history.go(-1)\nhttp://www.devguru.com/Technologies/Ecmascript/Quickref/history_go.html\n",
"How about trying Javascript instead?\n <a href=\"#\" onclick=\"history.go(-1);\">Go back</a>\n\n",
"You can do so using Javascript:\n<span onclick='javascript: history.back()'>Previous page</span>\n\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"django",
"html",
"python"
] |
stackoverflow_0001865981_django_html_python.txt
|
Q:
Which Python language rule allows the descriptor to be found first?
I bumped into the following last night and I'm still at a loss as to explain it:
class Foo(object):
@property
def dave(self):
vars(self)['dave'] = 1
return 2
f = Foo()
print f.dave
print f.dave
Running this code produces:
2
2
The question is why? My understanding of attribute access is that the instance dictionary is checked before the class dictionary, and the dictionary of any bases; however, as seen above, the instance dictionary doesn't appear to be getting checked before the descriptor is found in the class dictionary.
A:
My understanding of attribute access is that the instance dictionary is checked before the class dictionary, and the dictionary of any bases
Data descriptors are an exception:
For instance bindings, the precedence of descriptor invocation depends on which descriptor methods are defined. Normally, data descriptors define both __get__() and __set__(), while non-data descriptors have just the __get__() method. Data descriptors always override a redefinition in an instance dictionary. In contrast, non-data descriptors can be overridden by instances.
http://docs.python.org/reference/datamodel.html#invoking-descriptors
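The distinction can be seen directly by putting a non-data descriptor (only __get__) next to a data descriptor (__get__ and __set__) and shadowing both in the instance dictionary; a minimal sketch:

```python
class NonDataDesc(object):
    # Only __get__: a non-data descriptor, so an instance attribute wins.
    def __get__(self, obj, objtype=None):
        return "from descriptor"

class DataDesc(object):
    # __get__ plus __set__: a data descriptor, which always wins.
    def __get__(self, obj, objtype=None):
        return "from descriptor"
    def __set__(self, obj, value):
        raise AttributeError("read-only")

class Foo(object):
    nd = NonDataDesc()
    dd = DataDesc()

f = Foo()
# Bypass __set__ and plant entries straight into the instance dict,
# just like vars(self)['dave'] = 1 in the question.
f.__dict__['nd'] = "from instance"
f.__dict__['dd'] = "from instance"
print(f.nd)  # the instance dict shadows the non-data descriptor
print(f.dd)  # the data descriptor still wins
```

property defines __get__, __set__ and __delete__, so it falls on the data-descriptor side, which is why the instance entry in the question is never consulted.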
|
Which Python language rule allows the descriptor to be found first?
|
I bumped into the following last night and I'm still at a loss as to explain it:
class Foo(object):
@property
def dave(self):
vars(self)['dave'] = 1
return 2
f = Foo()
print f.dave
print f.dave
Running this code produces:
2
2
The question is why? My understanding of attribute access is that the instance dictionary is checked before the class dictionary, and the dictionary of any bases, however as seen above the instance dictionary doesn't appear to be getting checked before the descriptor is found in the class dictionary.
|
[
"\nMy understanding of attribute access is that the instance dictionary is checked before the class dictionary, and the dictionary of any bases\n\nData descriptors are an exception:\n\nFor instance bindings, the precedence of descriptor invocation depends on the which descriptor methods are defined. Normally, data descriptors define both __get__() and __set__(), while non-data descriptors have just the __get__() method. Data descriptors always override a redefinition in an instance dictionary. In contrast, non-data descriptors can be overridden by instances.\nhttp://docs.python.org/reference/datamodel.html#invoking-descriptors\n\n"
] |
[
5
] |
[] |
[] |
[
"python",
"rules"
] |
stackoverflow_0001865902_python_rules.txt
|
Q:
Unit Testing File Modifications
A common task in programs I've been working on lately is modifying a text file in some way. (Hey, I'm on Linux. Everything's a file. And I do large-scale system admin.)
But the file the code modifies may not exist on my desktop box. And I probably don't want to modify it if it IS on my desktop.
I've read about unit testing in Dive Into Python, and it's pretty clear what I want to do when testing an app that converts decimal to Roman Numerals (the example in DintoP). The testing is nicely self-contained. You don't need to verify that the program PRINTS the right thing, you just need to verify that the functions are returning the right output to a given input.
In my case, however, we need to test that the program is modifying its environment correctly. Here's what I've come up with:
1) Create the "original" file in a standard location, perhaps /tmp.
2) Run the function that modifies the file, passing it the path to the file in /tmp.
3) Verify that the file in /tmp was changed correctly; pass/fail unit test accordingly.
This seems kludgy to me. (Gets even kludgier if you want to verify that backup copies of the file are created properly, etc.) Has anyone come up with a better way?
A:
You're talking about testing too much at once. If you start trying to attack a testing problem by saying "Let's verify that it modifies its environment correctly", you're doomed to failure. Environments have dozens, maybe even millions of potential variations.
Instead, look at the pieces ("units") of your program. For example, are you going to have a function that determines where the files are that have to be written? What are the inputs to that function? Perhaps an environment variable, perhaps some values read from a config file? Test that function, and don't actually do anything that modifies the filesystem. Don't pass it "realistic" values, pass it values that are easy to verify against. Make a temporary directory, populate it with files in your test's setUp method.
Then test the code that writes the files. Just make sure it's writing the right contents file contents. Don't even write to a real filesystem! You don't need to make "fake" file objects for this, just use Python's handy StringIO modules; they're "real" implementations of the "file" interface, they're just not the ones that your program is actually going to be writing to.
Ultimately you will have to test the final, everything-is-actually-hooked-up-for-real top-level function that passes the real environment variable and the real config file and puts everything together. But don't worry about that to get started. For one thing, you will start picking up tricks as you write individual tests for smaller functions and creating test mocks, fakes, and stubs will become second nature to you. For another: even if you can't quite figure out how to test that one function call, you will have a very high level of confidence that everything which it is calling works perfectly. Also, you'll notice that test-driven development forces you to make your APIs clearer and more flexible. For example: it's much easier to test something that calls an open() method on an object that came from somewhere abstract, than to test something that calls os.open on a string that you pass it. The open method is flexible; it can be faked, it can be implemented differently, but a string is a string and os.open doesn't give you any leeway to catch what methods are called on it.
You can also build testing tools to make repetitive tasks easy. For example, twisted provides facilities for creating temporary files for testing built right into its testing tool. It's not uncommon for testing tools or larger projects with their own test libraries to have functionality like this.
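A minimal sketch of the StringIO approach described above (write_report is a hypothetical function under test; any code that takes a file-like object works the same way — on Python 2 the class lives in the StringIO module rather than io):

```python
from io import StringIO

def write_report(stream, rows):
    # Hypothetical function under test: writes one "name=value" line per row.
    for name, value in rows:
        stream.write("%s=%s\n" % (name, value))

# In the test, hand it a StringIO instead of a real file object.
buf = StringIO()
write_report(buf, [("alpha", 1), ("beta", 2)])
print(buf.getvalue())
```

The assertion is then a plain string comparison on buf.getvalue(), with no filesystem involved at all.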
A:
You have two levels of testing.
Filtering and Modifying content. These are "low-level" operations that don't really require physical file I/O. These are the tests, decision-making, alternatives, etc. The "Logic" of the application.
File system operations. Create, copy, rename, delete, backup. Sorry, but those are proper file system operations that -- well -- require a proper file system for testing.
For this kind of testing, we often use a "Mock" object. You can design a "FileSystemOperations" class that embodies the various file system operations. You test this to be sure it does basic read, write, copy, rename, etc. There's no real logic in this. Just methods that invoke file system operations.
You can then create a MockFileSystem which dummies out the various operations. You can use this Mock object to test your other classes.
In some cases, all of your file system operations are in the os module. If that's the case, you can create a MockOS module with mock version of the operations you actually use.
Put your MockOS module on the PYTHONPATH and you can conceal the real OS module.
For production operations you use your well-tested "Logic" classes plus your FileSystemOperations class (or the real OS module.)
A:
For later readers who just want a way to test that code writing to files is working correctly, here is a "fake_open" that patches the open builtin of a module to use StringIO. fake_open returns a dict of opened files which can be examined in a unit test or doctest, all without needing a real file-system.
def fake_open(module):
"""Patch module's `open` builtin so that it returns StringIOs instead of
creating real files, which is useful for testing. Returns a dict that maps
opened file names to StringIO objects."""
from contextlib import closing
from StringIO import StringIO
streams = {}
def fakeopen(filename,mode):
stream = StringIO()
stream.close = lambda: None
streams[filename] = stream
return closing(stream)
module.open = fakeopen
return streams
A:
When I touch files in my code, I tend to prefer to mock the actual reading and writing of the file... so then I can give my classes exact contents I want in the test, and then assert that the test is writing back the contents I expect.
I've done this in Java, and I imagine it is quite simple in Python... but it may require designing your classes/functions in such a way that it is EASY to mock the use of an actual file.
For this, you can try passing in streams and then just pass in a simple string input/output stream which won't write to a file, or have a function that does the actual "write this string to a file" or "read this string from a file", and then replace that function in your tests.
A:
I think you are on the right track. Depending on what you need to do, chroot may help you set up an environment for your scripts that 'looks' real, but isn't.
If that doesn't work then you could write your scripts to take a 'root' path as an argument.
In a production run the root path is just /. For testing you create a shadow environment under /tmp/test and then run your scripts with a root path of /tmp/test.
A:
You might want to setup the test so that it runs inside a chroot jail, so you have all the environment the test needs, even if paths and file locations are hardcoded in the code [not really a good practice, but sometimes one gets the file locations from other places...] and then check the results via the exit code.
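The /tmp scheme from the question can be made less kludgy with the tempfile module, which gives each test its own scratch directory in setUp and guarantees cleanup in tearDown (add_line here is a hypothetical function under test):

```python
import os
import shutil
import tempfile
import unittest

def add_line(path, line):
    # Hypothetical function under test: appends one line to a text file.
    with open(path, "a") as fh:
        fh.write(line + "\n")

class AddLineTest(unittest.TestCase):
    def setUp(self):
        # A fresh scratch directory per test; nothing real is ever touched.
        self.tmpdir = tempfile.mkdtemp()
        self.path = os.path.join(self.tmpdir, "config.txt")
        with open(self.path, "w") as fh:
            fh.write("original\n")

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

    def test_appends_line(self):
        add_line(self.path, "added")
        with open(self.path) as fh:
            self.assertEqual(fh.read(), "original\nadded\n")
```

Run with python -m unittest; verifying backup copies works the same way, by asserting on the files left in self.tmpdir.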
|
Unit Testing File Modifications
|
A common task in programs I've been working on lately is modifying a text file in some way. (Hey, I'm on Linux. Everything's a file. And I do large-scale system admin.)
But the file the code modifies may not exist on my desktop box. And I probably don't want to modify it if it IS on my desktop.
I've read about unit testing in Dive Into Python, and it's pretty clear what I want to do when testing an app that converts decimal to Roman Numerals (the example in DintoP). The testing is nicely self-contained. You don't need to verify that the program PRINTS the right thing, you just need to verify that the functions are returning the right output to a given input.
In my case, however, we need to test that the program is modifying its environment correctly. Here's what I've come up with:
1) Create the "original" file in a standard location, perhaps /tmp.
2) Run the function that modifies the file, passing it the path to the file in /tmp.
3) Verify that the file in /tmp was changed correctly; pass/fail unit test accordingly.
This seems kludgy to me. (Gets even kludgier if you want to verify that backup copies of the file are created properly, etc.) Has anyone come up with a better way?
|
[
"You're talking about testing too much at once. If you start trying to attack a testing problem by saying \"Let's verify that it modifies its environment correctly\", you're doomed to failure. Environments have dozens, maybe even millions of potential variations.\nInstead, look at the pieces (\"units\") of your program. For example, are you going to have a function that determines where the files are that have to be written? What are the inputs to that function? Perhaps an environment variable, perhaps some values read from a config file? Test that function, and don't actually do anything that modifies the filesystem. Don't pass it \"realistic\" values, pass it values that are easy to verify against. Make a temporary directory, populate it with files in your test's setUp method.\nThen test the code that writes the files. Just make sure it's writing the right contents file contents. Don't even write to a real filesystem! You don't need to make \"fake\" file objects for this, just use Python's handy StringIO modules; they're \"real\" implementations of the \"file\" interface, they're just not the ones that your program is actually going to be writing to.\nUltimately you will have to test the final, everything-is-actually-hooked-up-for-real top-level function that passes the real environment variable and the real config file and puts everything together. But don't worry about that to get started. For one thing, you will start picking up tricks as you write individual tests for smaller functions and creating test mocks, fakes, and stubs will become second nature to you. For another: even if you can't quite figure out how to test that one function call, you will have a very high level of confidence that everything which it is calling works perfectly. Also, you'll notice that test-driven development forces you to make your APIs clearer and more flexible. 
For example: it's much easier to test something that calls an open() method on an object that came from somewhere abstract, than to test something that calls os.open on a string that you pass it. The open method is flexible; it can be faked, it can be implemented differently, but a string is a string and os.open doesn't give you any leeway to catch what methods are called on it.\nYou can also build testing tools to make repetitive tasks easy. For example, twisted provides facilities for creating temporary files for testing built right into its testing tool. It's not uncommon for testing tools or larger projects with their own test libraries to have functionality like this.\n",
"You have two levels of testing.\n\nFiltering and Modifying content. These are \"low-level\" operations that don't really require physical file I/O. These are the tests, decision-making, alternatives, etc. The \"Logic\" of the application.\nFile system operations. Create, copy, rename, delete, backup. Sorry, but those are proper file system operations that -- well -- require a proper file system for testing.\n\nFor this kind of testing, we often use a \"Mock\" object. You can design a \"FileSystemOperations\" class that embodies the various file system operations. You test this to be sure it does basic read, write, copy, rename, etc. There's no real logic in this. Just methods that invoke file system operations.\nYou can then create a MockFileSystem which dummies out the various operations. You can use this Mock object to test your other classes.\nIn some cases, all of your file system operations are in the os module. If that's the case, you can create a MockOS module with mock version of the operations you actually use.\nPut your MockOS module on the PYTHONPATH and you can conceal the real OS module.\nFor production operations you use your well-tested \"Logic\" classes plus your FileSystemOperations class (or the real OS module.)\n",
"For later readers who just want a way to test that code writing to files is working correctly, here is a \"fake_open\" that patches the open builtin of a module to use StringIO. fake_open returns a dict of opened files which can be examined in a unit test or doctest, all without needing a real file-system.\ndef fake_open(module):\n \"\"\"Patch module's `open` builtin so that it returns StringIOs instead of\n creating real files, which is useful for testing. Returns a dict that maps\n opened file names to StringIO objects.\"\"\"\n from contextlib import closing\n from StringIO import StringIO\n streams = {}\n def fakeopen(filename,mode):\n stream = StringIO()\n stream.close = lambda: None\n streams[filename] = stream\n return closing(stream)\n module.open = fakeopen\n return streams\n\n",
"When I touch files in my code, I tend to prefer to mock the actual reading and writing of the file... so then I can give my classes exact contents I want in the test, and then assert that the test is writing back the contents I expect.\nI've done this in Java, and I imagine it is quite simple in Python... but it may require designing your classes/functions in such a way that it is EASY to mock the use of an actual file.\nFor this, you can try passing in streams and then just pass in a simple string input/output stream which won't write to a file, or have a function that does the actual \"write this string to a file\" or \"read this string from a file\", and then replace that function in your tests.\n",
"I think you are on the right track. Depending on what you need to do chroot may help you set up an environment for your scrpits that 'looks' real, but isn't.\nIf that doesn't work then you could write your scripts to take a 'root' path as an argument.\nIn a production run the root path is just /. For testing you create a shadow environment under /tmp/test and then run your scripts with a root path of /tmp/test. \n",
"You might want to setup the test so that it runs inside a chroot jail, so you have all the environment the test needs, even if paths and file locations are hardcoded in the code [not really a good practice, but sometimes one gets the file locations from other places...] and then check the results via the exit code.\n"
] |
[
16,
7,
3,
2,
1,
1
] |
[] |
[] |
[
"linux",
"python",
"unit_testing"
] |
stackoverflow_0000106766_linux_python_unit_testing.txt
|
Q:
How to create a .py file for Google App Engine?
I am just at the very start of what I think is gonna be a long journey exploring the world of applications in Google App Engine using Python.
I have just downloaded Python 2.6.4 from the Python official website and installed it on my computer (I am using Windows XP). I also downloaded the App Engine Python software development kit (SDK) (from this page) and installed it, too. Both of these steps were suggested on the second page of Google App Engine's Getting-Started Guide.
However, when I moved on to the third page, which is titled as "Hello, World!", I ran into a sort of complication. It is written there:
"Create a directory named helloworld.
All files for this application reside
in this directory. Inside the
helloworld directory, create a file
named helloworld.py, and give it the
following contents..."
I am kind of puzzled here: How do I create a .py file? Is it like I need to create a notepad file (.txt) and name it helloworld.py (when I did so, it didn't change into a different kind of file, but rather stayed an ordinary .txt file) or should I somehow create that file using the Google App Engine Launcher that I have installed?
A:
When you downloaded and installed Python, you also installed IDLE. You can use this to easily write, run, debug and save .py files with syntax highlighting. To get started, just open IDLE, and select File -> New Window.
A:
A .py file is a text file containing Python syntax. Use your favourite programming editor, or even NotePad.
Regarding your second problem, there's an option in Windows Explorer about hiding file extensions. Make sure it isn't checked -- you might well actually have renamed your file helloworld.py.txt but only seen helloworld.py
A:
I am kind of puzzled here: How do I create a .py file? Is it like I need to create a notepad file (.txt) and name it helloworld.py?
I'm not on windows, but thats how it works on other operating systems: Create a file with an editor, then save as ...
ps. and .py is the extension (like .docx, .bat, ...), but it's just a convention (although a highly recommended one) ..
pps. heard the http://www.e-texteditor.com/ has more capabilities than notepad ..
A:
You have to be aware that the ending signifies recognition of files, not content. Name a file .py simply hints to the user (and the GUI) that it most likely is a python file.
That being said, python files are merely text files. You can simply create a text file with Notepad (or your editor of choice) and rename the ending to .py and work from there. Most modern editors will also highlight the syntax based on file endings.
A:
You will need a better editor than Notepad. With Notepad, use Save As..., and type "helloworld.py" in the dialog, including quotes so that the file extension is .py instead of .txt
|
How to create a .py file for Google App Engine?
|
I am just at the very start of what I think is gonna be a long journey exploring the world of applications in Google App Engine using Python.
I have just downloaded Python 2.6.4 from the Python official website and installed it on my computer (I am using Windows XP). I also downloaded the App Engine Python software development kit (SDK) (from this page) and installed it, too. Both of these steps were suggested on the second page of Google App Engine's Getting-Started Guide.
However, when I moved on to the third page, which is titled as "Hello, World!", I ran into a sort of complication. It is written there:
"Create a directory named helloworld.
All files for this application reside
in this directory. Inside the
helloworld directory, create a file
named helloworld.py, and give it the
following contents..."
I am kind of puzzled here: How do I create a .py file? Is it like I need to create a notepad file (.txt) and name it helloworld.py (when I did so, it didn't change into a different kind of file, but rather stayed an ordinary .txt file) or should I somehow create that file using the Google App Engine Launcher that I have installed?
|
[
"When you downloaded and installed Python, you also installed IDLE. You can use this to easily write, run, debug and save .py files with syntax highlighting. To get started, just open IDLE, and select File -> New Window.\n",
"A .py file is a text file containing Python syntax. Use your favourite programming editor, or even NotePad.\nRegarding your second problem, there's an option in Windows Explorer about hiding file extensions. Make sure it isn't checked -- you might well actually have renamed your file helloworld.py.txt but only seen helloworld.py\n",
"\nI am kind of puzzled here: How do I create a .py file? Is it like I need to create a notepad file (.txt) and name it helloworld.py?\n\nI'm not on windows, but thats how it works on other operating systems: Create a file with an editor, then save as ...\nps. and .py is the extension (like .docx, .bat, ...), but it's just a convention (although a highly recommended one) ..\npps. heard the http://www.e-texteditor.com/ has more capabilities than notepad ..\n",
"You have to be aware that the ending signifies recognition of files, not content. Name a file .py simply hints to the user (and the GUI) that it most likely is a python file.\nThat being said, python files are merely text files. You can simply create a text file with Notepad (or your editor of choice) and rename the ending to .py and work from there. Most modern editors will also highlight the syntax based on file endings.\n",
"You will need a better editor than Notepad. With Notepad, use Save As..., and type \"helloworld.py\" in the dialog, including quotes so that the file extension is .py instead of .txt\n"
] |
[
3,
2,
1,
1,
1
] |
[] |
[] |
[
"file",
"google_app_engine",
"makefile",
"python"
] |
stackoverflow_0001866147_file_google_app_engine_makefile_python.txt
|
Q:
Syntax Error exec-ing python code from database
I'm loading some python code from a database (it's doing dynamic mapping of values that I can change at runtime without a code redeploy).
In my code, I'm doing this to execute the database code:
if lMapping:
print lMapping
exec lMapping
lValue = mapping(lValue, lCsvRow)
and here's the value of lMapping:
def mapping(pValue, pCsvRow):
lInterimStatus = pCsvRow[5]
lOutComeStatus = pCsvRow[6]
if len(lInterimStatus) == 0:
lStatus = lOutComeStatus
else:
lStatus = lInterimStatus
lStatus = lStatus.lower()
PRIMARY_STATUS_MAPPINGS = {
'completed with lender' : '4828',
'not taken up' : '4827',
'declined across all lenders' : '4726',
'declined broker duplicate' : '4726',
'pending' : '4725',
'pending (in progress with broker)' : '4725',
'pending (in progress with lender)' : '4725',
'lender accept in principle' : '4827',
'lender decline duplicate' : '4743',
'lender decline post code not supported' : '4743',
'lender decline score fail' : '4743',
'lender decline policy fail' : '4743',
'lender decline no client contact' : '4743',
'lender decline general' : '4743',
'lender decline bad data' : '4743',
}
return PRIMARY_STATUS_MAPPINGS[lStatus]
Whenever I do this I'm getting a syntax error on the exec line, I can't work out why:
(<type 'exceptions.SyntaxError'>:invalid syntax (<string>, line 1)
UPDATE
It works if I write the code from the database to a file first:
print lMapping
lFile = open('/mapping.py','w')
lFile.write(lMapping)
lFile.close()
lReadFile = open('/mapping.py')
exec lReadFile
lValue = mapping(lValue, lCsvRow)
A:
Do you use a BLOB or other binary-type column to store the code? Otherwise the database might change line endings, and exec will break with a SyntaxError:
>>> s='''\
... print 'ok'
... '''
>>> s
"print 'ok'\n"
>>> exec s
ok
>>> exec s.replace('\n', '\r\n')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1
print 'ok'
^
SyntaxError: invalid syntax
Update: Writing to a file on Windows in text mode will change line endings to platform-native ones. Another way to normalize them is:
lMapping = os.linesep.join(lMapping.splitlines())
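A sketch of normalizing the line endings before exec-ing (the database fetch is simulated with a literal here; the point is to strip any '\r' the round-trip introduced, which also explains why writing the code out in text mode "fixed" it):

```python
# Simulate code fetched from the database with Windows line endings.
lMapping = "def mapping(value):\r\n    return value * 2\r\n"

# Normalize to '\n' before exec-ing; splitlines() handles \r\n, \r and \n.
normalized = "\n".join(lMapping.splitlines()) + "\n"

namespace = {}
exec(normalized, namespace)
print(namespace["mapping"](21))  # 42
```

Passing an explicit namespace dict also keeps the exec-ed definitions from leaking into the caller's globals.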
|
Syntax Error exec-ing python code from database
|
I'm loading some python code from a database (it's doing dynamic mapping of values that I can change at runtime without a code redeploy).
In my code, I'm doing this to execute the database code:
if lMapping:
print lMapping
exec lMapping
lValue = mapping(lValue, lCsvRow)
and here's the value of lMapping:
def mapping(pValue, pCsvRow):
lInterimStatus = pCsvRow[5]
lOutComeStatus = pCsvRow[6]
if len(lInterimStatus) == 0:
lStatus = lOutComeStatus
else:
lStatus = lInterimStatus
lStatus = lStatus.lower()
PRIMARY_STATUS_MAPPINGS = {
'completed with lender' : '4828',
'not taken up' : '4827',
'declined across all lenders' : '4726',
'declined broker duplicate' : '4726',
'pending' : '4725',
'pending (in progress with broker)' : '4725',
'pending (in progress with lender)' : '4725',
'lender accept in principle' : '4827',
'lender decline duplicate' : '4743',
'lender decline post code not supported' : '4743',
'lender decline score fail' : '4743',
'lender decline policy fail' : '4743',
'lender decline no client contact' : '4743',
'lender decline general' : '4743',
'lender decline bad data' : '4743',
}
return PRIMARY_STATUS_MAPPINGS[lStatus]
Whenever I do this I'm getting a syntax error on the exec line, I can't work out why:
(<type 'exceptions.SyntaxError'>:invalid syntax (<string>, line 1)
UPDATE
It works if I write the code from the database to a file first:
print lMapping
lFile = open('/mapping.py','w')
lFile.write(lMapping)
lFile.close()
lReadFile = open('/mapping.py')
exec lReadFile
lValue = mapping(lValue, lCsvRow)
|
[
1
] |
[] |
[] |
[
"exec",
"python"
] |
stackoverflow_0001866439_exec_python.txt
|
Q:
How to design an application in a modular way?
I am looking for pointers, suggestions, links, warnings, ideas and even anecdotal accounts about "how to design an application in a modular way". I am going to use Python for this project, but the advice does not necessarily need to refer to this language, although I am only willing to implement a design based on OOP.
Here's some context to understand where I come from and what I am trying to achieve...
My project will be a small application that will consume web services and display the results in a variety of ways, including:
notification popup containing just the result of the call
tab in the main window of the application with graphics plotted from retrieved raw-data
buffer of messages (visible on demand) where results from various services will pile up
The application will be released as free (as-in-speech) software, and for this reason I would like to make it really easy for other devs to write plugins/modules that will extend the functionality of the main application without needing to change the core code.
At this point in time, plugins should essentially enable a developer to activate a new webservice, by defining the provider, the data manipulation (if any) and the way the data will be presented to the user.
I have extensive experience in developing with Drupal, which has a powerful modular approach but follows a non-object-oriented design, so I suspect that for Python the Drupal design might not be the optimal solution.
If this is of any importance - the core will be natively developed for GNU/Linux.
Thank you in advance for your time!
A:
Try to keep things loosely coupled, and use interfaces liberally to help.
I'd start the design with the Separation of Concerns. The major architectural layers are:
Problem Domain (aka. Engine, Back-end): the domain classes, which do all the actual work, have domain knowledge implement domain behaviour
Persistence: storage management for domain classes, database/filesystem layer
User Interface: the GUI, which talks to the domain classes
System Interfaces: talking to other systems, eg. networking, web services
The domain classes do the work, but don't know about the UI. The persistence layer knows about the domain classes, enough to save/load as required. The system interface layer abstracts away external systems, which lets you plug a simulator in behind while testing. The UI should ideally use MVC, for maximum flexibility.
Without putting too fine a point on it, one would not ordinarily look to Drupal as an exemplar of good architectural design. It has grown rather organically, and there have been many upheavals of the design, as evidenced by the regular plugin breakage upon system upgrades.
I would also echo what MicSim said, regarding carefully designing the plugin interface and writing multiple different plugins to exercise it. This is the only way to really flesh out the issues of how the app and plugins interact.
A:
As you will deliver some basic functionality with your app, make sure that you code the part that should be extendable/replaceable as a plugin yourself. That way you'll get the best feeling for how your API should look.
And to prove that the API is good, you should write a second and third plugin, because then you will discover that you made a lot of assumptions when writing the first one. Normally things clear up a bit after doing this 2nd and 3rd step.
Now, you should write one more plugin, because the plugins you wrote so far resemble each other in type, input data and presentation (maybe yet another weather webservice). Choose something totally different, with absolutely different data, and you will see that your API is still too tailored. (Otherwise you did a good job!)
A:
Well, probably the first place to start is to sit down and figure out what the plug-in might need to fulfill its purpose.
You'd want to consider two main aspects in your design.
How will your framework pass requests / receive responses from the plug-in?
What helper classes or modules might be good to provide?
And probably also, since this sounds like a learning project.
What do you want to write yourself, and what are you happy just to pick out of an existing library?
I'd also recommend developing some basic plugins as you design the API. The experience of having to actually use what you design will allow you to see where a given approach might be making things harder than they need to be.
A:
design the api for your app, carefully (How To Design A Good API and Why it Matters)
make everything that could be used independently a module, then group and build larger parts out of the simple parts (KISS)
don't repeat yourself (DRY)
write/publish short documentation frequently, for yourself and others (open source mantra) ...
A:
Look into the listener-subscriber pattern. Sooner or later, your app will be complex enough that you need to implement callbacks. When you hit that limit, use listener-subscriber (there's an implementation in wxPython).
For example, several modules will want to watch for new data from a number of feeds. Modules that link together might want to update themselves, based on new data.
|
How to design an application in a modular way?
|
|
[
13,
9,
2,
1,
1
] |
[] |
[] |
[
"design_patterns",
"modularity",
"oop",
"python",
"software_design"
] |
stackoverflow_0001865727_design_patterns_modularity_oop_python_software_design.txt
|
Q:
Authenticate imaplib.IMAP4_SSL against an Exchange imap server with AUTH=NTLM
Yesterday, the IT department made changes to the Exchange server. I was previously able to use imaplib to fetch messages from the server. But now it seems they have turned off the authentication mechanism I was using. From the output below, it looks as if the server now supports NTLM authentication only.
>>> from imaplib import IMAP4_SSL
>>> s = IMAP4_SSL("my.imap.server")
>>> s.capabilities
('IMAP4', 'IMAP4REV1', 'IDLE', 'LOGIN-REFERRALS', 'MAILBOX-REFERRALS',
'NAMESPACE', 'LITERAL+', 'UIDPLUS', 'CHILDREN', 'AUTH=NTLM')
>>> s.login("username", "password")
...
imaplib.error: Clear text passwords have been disabled for this protocol.
Questions:
How do I authenticate to the IMAP server using NTLM with imaplib? I assume I need to use IMAP4_SSL.authenticate("NTLM", authobject) to do this. How do I set up the authobject callback?
Since SSL/TLS is the only way to connect to the server, re-enabling clear text password authentication should not be a security risk. Correct?
The process that connects to the imap server is running on Linux, BTW. So I am not able to use pywin32.
Edit:
I was able to figure out 1. myself. But how about 2.: Clear text passwords in IMAP over SSL is not a security problem, is it?
A:
I was able to use the python-ntlm project.
python-ntlm implements NTLM authentication for HTTP. It was easy to add NTLM authentication for IMAP by extending this project.
I submitted a patch for the project with my additions.
|
Authenticate imaplib.IMAP4_SSL against an Exchange imap server with AUTH=NTLM
|
|
[
4
] |
[] |
[] |
[
"exchange_server",
"imap",
"ntlm",
"python"
] |
stackoverflow_0001866460_exchange_server_imap_ntlm_python.txt
|
Q:
Python reference problem
I'm experiencing a (for me) very weird problem in Python.
I have a class called Menu: (snippet)
class Menu:
"""Shows a menu with the defined items"""
menu_items = {}
characters = map(chr, range(97, 123))
def __init__(self, menu_items):
self.init_menu(menu_items)
def init_menu(self, menu_items):
i = 0
for item in menu_items:
self.menu_items[self.characters[i]] = item
i += 1
When I instantiate the class, I pass in a list of dictionaries. The dictionaries are created with this function:
def menu_item(description, action=None):
if action == None:
action = lambda : None
return {"description": description, "action": action}
And then the lists are created like this:
t = [menu_item("abcd")]
m3 = menu.Menu(t)
a = [ menu_item("Test")]
m2 = menu.Menu(a)
b = [ menu_item("Update", m2.getAction),
menu_item("Add"),
menu_item("Delete")]
m = menu.Menu(b)
When I run my program, I get the same menu items every time. I've run the program with PDB and found that as soon as another instance of the class is created, the menu_items of all previous instances are set to the latest list. It seems as if the menu_items member is a static member.
What am I overlooking here?
A:
The menu_items dict is a class attribute that's shared between all Menu instances. Initialize it like this instead, and you should be fine:
class Menu:
"""Shows a menu with the defined items"""
characters = map(chr, range(97, 123))
def __init__(self, menu_items):
self.menu_items = {}
self.init_menu(menu_items)
[...]
Have a look at the Python tutorial section on classes for a more thorough discussion about the difference between class attributes and instance attributes.
A:
Since Pär answered your question here is some random advice: dict and zip are extremely useful functions :-)
class Menu:
"""Shows a menu with the defined items"""
characters = map(chr, range(97, 123))
def __init__(self, menu_items):
self.menu_items = dict(zip(self.characters, menu_items))
|
Python reference problem
|
|
[
16,
5
] |
[] |
[] |
[
"python",
"reference"
] |
stackoverflow_0001867068_python_reference.txt
|
Q:
Insert python variables into SQLITE DB from another cursor object
Using an example from the Python DOCs:
stocks = [('2006-03-28', 'BUY', 'IBM', 1000, 45.00),
('2006-04-05', 'BUY', 'MSOFT', 1000, 72.00),
('2006-04-06', 'SELL', 'IBM', 500, 53.00),
          ]
for t in stocks:
    c.execute('insert into stocks values (?,?,?,?,?)', t)
In my code, the stocks from above is generated from a query to another DB.
Since tuples are immutable, how do you pass additional values to the cursor execute statement (in addition to the tuple).
Is there a better solution then the example below?:
stocks = [('2006-03-28', 'BUY', 'IBM', 1000, 45.00),
('2006-04-05', 'BUY', 'MSOFT', 1000, 72.00),
('2006-04-06', 'SELL', 'IBM', 500, 53.00),
          ]
for t in stocks:
    t = list(t)
    t.append('Some Arb Value')
    t = tuple(t)
    c.execute('insert into stocks values (?,?,?,?,?,?)', t)
You could also do this:
stocks = [('2006-03-28', 'BUY', 'IBM', 1000, 45.00),
('2006-04-05', 'BUY', 'MSOFT', 1000, 72.00),
('2006-04-06', 'SELL', 'IBM', 500, 53.00),
          ]
for t in stocks:
    c.execute('insert into stocks values (?,?,?,?,?,?)', (t[0], t[1], t[2], t[3], t[4], 'some value'))
However, the solutions above won't work for the executemany method, i.e.
c.executemany('insert into stocks values (?,?,?,?,?,?)', t)
Is there a better way of doing this?
A:
Tuples are immutable, but you can easily extract their contents and form new tuples. Also, I'm not sure, but I don't think the execute() call absolutely must have a tuple. Can't any sequence, including lists, work as well?
Anyway, here's what you need:
for t in stocks:
c.execute('insert into stock values (?,?,?,?,?,?)', t + ('some value',))
That adds a one-element tuple to the existing one, forming a new six-element tuple.
A:
I assume you meant to use stocks instead of t for the executemany version
For the executemany version, you can also do
c.executemany('insert into stocks values (?,?,?,?,?,?)', (t + ('Arb value',) for t in stocks))
Using a generator expression instead of a list comprehension will keep you from creating an entire new data structure, which, if you have many inputs, is essential.
|
Insert python variables into SQLITE DB from another cursor object
|
|
[
1,
0
] |
[] |
[] |
[
"python",
"sqlite"
] |
stackoverflow_0001867018_python_sqlite.txt
|
Q:
Mechanize submit login form from http to https
I have a web page containing a login form which loads via HTTP, but it submits the data via HTTPS.
I'm using python-mechanize to log into this site, but it seems that the data is submitted via HTTP.
My code looks like this:
import mechanize
b = mechanize.Browser()
b.open('http://site.com')
form = b.forms().next() # the login form is unnamed...
print form.action # prints "https://login.us.site.com"
form['user'] = "guest"
form['pass'] = "guest"
b.form = form
b.submit()
When the form is submitted, the connection is made via HTTP and contains something like:
send: 'POST https://login.us.site.com/ HTTP/1.1\r\nAccept-Encoding: identity\r\nContent-Length: 180\r\nHost: login.us.site.com\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n'...
Can anyone confirm this and eventually post a solution so that the form is submitted via HTTPS?
Later edit:
1) I'm using a HTTP proxy for http/https traffic (set in the environment - Linux machine)
2) I've watched the traffic with Wireshark and I can confirm that the traffic is sent via normal HTTP (I can see the content of the POST and mechanize doesn't send the same requests to the proxy as a webbrowser - the latter sends CONNECT login.us.site.com:443, while mechanize only POSTs https://login.us.site.com). However, I don't know what happens to the data as it leaves the proxy; perhaps it establishes a ssl connection to the target site?
A:
mechanize uses urllib2 internally, and the latter had a bug: HTTPS over (Squid) Proxy fails. The bug is fixed in Python 2.6.3, so updating Python should solve your problem.
A:
Ok, it seems to be a bug in mechanize
http://sourceforge.net/mailarchive/forum.php?thread_name=alpine.DEB.2.00.0910062211230.8646%40alice&forum_name=wwwsearch-general
|
Mechanize submit login form from http to https
|
|
[
2,
1
] |
[] |
[] |
[
"forms",
"https",
"mechanize",
"post",
"python"
] |
stackoverflow_0001866888_forms_https_mechanize_post_python.txt
|
Q:
How should I organise my functions with pyparsing?
I am parsing a file with Python and pyparsing (it's the report file for PSAT in Matlab, but that isn't important). Here is what I have so far. I think it's a mess and would like some advice on how to improve it. Specifically, how should I organise my grammar definitions with pyparsing?
Should I have all my grammar definitions in one function? If so, it's going to be one huge function. If not, how do I break it up? At the moment I have split it at the sections of the file. Is it worth making loads of functions that only ever get called once from one place? Neither really feels right to me.
Should I place all my input and output code in a separate file from the other class functions? It would make the purpose of the class much clearer.
I'm also interested to know if there is an easier way to parse a file, do some sanity checks and store the data in a class. I seem to spend a lot of my time doing this.
(I will accept answers of it's good enough or use X rather than pyparsing if people agree)
A:
I could go either way on using a single big method to create your parser vs. taking it in steps the way you have it now.
I can see that you have defined some useful helper utilities, such as slit ("suppress Literal", I presume), stringtolits, and decimaltable. This looks good to me.
I like that you are using results names, they really improve the robustness of your post-parsing code. I would recommend using the shortcut form that was added in pyparsing 1.4.7, in which you can replace
busname.setResultsName("bus1")
with
busname("bus1")
This can declutter your code quite a bit.
I would look back through your parse actions to see where you are using numeric indexes to access individual tokens, and go back and assign results names instead. Here is one case, where GetStats returns (ngroup + sgroup).setParseAction(self.process_stats). process_stats has references like:
self.num_load = tokens[0]["loads"]
self.num_generator = tokens[0]["generators"]
self.num_transformer = tokens[0]["transformers"]
self.num_line = tokens[0]["lines"]
self.num_bus = tokens[0]["buses"]
self.power_rate = tokens[1]["rate"]
I like that you have Group'ed the values and the stats, but go ahead and give them names, like "network" and "soln". Then you could write this parse action code as (I've also converted to the - to me - easier-to-read object-attribute notation instead of dict element notation):
self.num_load = tokens.network.loads
self.num_generator = tokens.network.generators
self.num_transformer = tokens.network.transformers
self.num_line = tokens.network.lines
self.num_bus = tokens.network.buses
self.power_rate = tokens.soln.rate
Also, a style question: why do you sometimes use the explicit And constructor, instead of using the '+' operator?
busdef = And([busname.setResultsName("bus1"),
busname.setResultsName("bus2"),
integer.setResultsName("linenum"),
decimaltable("pf qf pl ql".split())])
This is just as easily written:
busdef = (busname("bus1") + busname("bus2") +
integer("linenum") +
decimaltable("pf qf pl ql".split()))
Overall, I think this is about par for a file of this complexity. I have a similar format (proprietary, unfortunately, so cannot be shared) in which I built the code in pieces similar to the way you have, but in one large method, something like this:
def parser():
header = Group(...)
inputsummary = Group(...)
jobstats = Group(...)
measurements = Group(...)
return header("hdr") + inputsummary("inputs") + jobstats("stats") + measurements("meas")
The Group constructs are especially helpful in a large parser like this, to establish a sort of namespace for results names within each section of the parsed data.
|
How should I organise my functions with pyparsing?
|
I am parsing a file with python and pyparsing (it's the report file for PSAT in Matlab but that isn't important). here is what I have so far. I think it's a mess and would like some advice on how to improve it. Specifically, how should I organise my grammar definitions with pyparsing?
Should I have all my grammar definitions in one function? If so, it's going to be one huge function. If not, then how do I break it up. At the moment I have split it at the sections of the file. Is it worth making loads of functions that only ever get called once from one place. Neither really feels right to me.
Should I place all my input and output code in a separate file to the other class functions? It would make the purpose of the class much clearer.
I'm also interested to know if there is an easier way to parse a file, do some sanity checks and store the data in a class. I seem to spend a lot of my time doing this.
(I will accept answers of "it's good enough" or "use X rather than pyparsing" if people agree.)
|
[
"I could go either way on using a single big method to create your parser vs. taking it in steps the way you have it now.\nI can see that you have defined some useful helper utilities, such as slit (\"suppress Literal\", I presume), stringtolits, and decimaltable. This looks good to me.\nI like that you are using results names, they really improve the robustness of your post-parsing code. I would recommend using the shortcut form that was added in pyparsing 1.4.7, in which you can replace\nbusname.setResultsName(\"bus1\")\n\nwith\nbusname(\"bus1\")\n\nThis can declutter your code quite a bit.\nI would look back through your parse actions to see where you are using numeric indexes to access individual tokens, and go back and assign results names instead. Here is one case, where GetStats returns (ngroup + sgroup).setParseAction(self.process_stats). process_stats has references like:\nself.num_load = tokens[0][\"loads\"]\nself.num_generator = tokens[0][\"generators\"]\nself.num_transformer = tokens[0][\"transformers\"]\nself.num_line = tokens[0][\"lines\"]\nself.num_bus = tokens[0][\"buses\"]\nself.power_rate = tokens[1][\"rate\"]\n\nI like that you have Group'ed the values and the stats, but go ahead and give them names, like \"network\" and \"soln\". 
Then you could write this parse action code as (I've also converted to the - to me - easier-to-read object-attribute notation instead of dict element notation):\nself.num_load = tokens.network.loads\nself.num_generator = tokens.network.generators\nself.num_transformer = tokens.network.transformers\nself.num_line = tokens.network.lines\nself.num_bus = tokens.network.buses\nself.power_rate = tokens.soln.rate\n\nAlso, a style question: why do you sometimes use the explicit And constructor, instead of using the '+' operator?\nbusdef = And([busname.setResultsName(\"bus1\"),\n busname.setResultsName(\"bus2\"),\n integer.setResultsName(\"linenum\"),\n decimaltable(\"pf qf pl ql\".split())])\n\nThis is just as easily written:\nbusdef = (busname(\"bus1\") + busname(\"bus2\") + \n integer(\"linenum\") + \n decimaltable(\"pf qf pl ql\".split()))\n\nOverall, I think this is about par for a file of this complexity. I have a similar format (proprietary, unfortunately, so cannot be shared) in which I built the code in pieces similar to the way you have, but in one large method, something like this:\ndef parser():\n header = Group(...)\n inputsummary = Group(...)\n jobstats = Group(...)\n measurements = Group(...)\n return header(\"hdr\") + inputsummary(\"inputs\") + jobstats(\"stats\") + measurements(\"meas\")\n\nThe Group constructs are especially helpful in a large parser like this, to establish a sort of namespace for results names within each section of the parsed data.\n"
] |
[
2
] |
[] |
[] |
[
"coding_style",
"pyparsing",
"python",
"refactoring"
] |
stackoverflow_0001866329_coding_style_pyparsing_python_refactoring.txt
|
Q:
Simple Automatic Classification of the (R-->R) Functions
Given data values from some real-time physical process (e.g. network traffic), the task is to find the name of the function that best matches the data.
I have a set of functions of type y=f(t) where y and t are real:
funcs = set([cos, tan, exp, log])
and a list of data values:
vals = [59874.141, 192754.791, 342413.392, 1102604.284, 3299017.372]
What is the simplest possible method to find a function from the given set that will generate values most similar to these?
PS: t is increasing starting from some positive value by almost-equal intervals
A:
Just write the error (e.g. the quadratic sum of errors at each point) for each function in the set and choose the function giving the minimum.
But you should still fit each function before choosing
A:
Scipy has functions for fitting data, but they use polynomes or splines. You can use one of Gauß' many discoveries, the method of least squares to fit other functions.
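One way to carry out that least-squares step in SciPy itself is `scipy.optimize.curve_fit`, which fits an arbitrary model function. A hedged sketch (the data and candidate set are made up, and `p0`/`maxfev` are arbitrary choices): fit each candidate and keep the one with the smallest squared residual.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(1.0, 5.0, 20)
vals = 2.0 * np.exp(1.1 * t)  # synthetic data generated by a*exp(b*t)

candidates = {
    "exp": lambda t, a, b: a * np.exp(b * t),
    "log": lambda t, a, b: a * np.log(b * t),
    "cos": lambda t, a, b: a * np.cos(b * t),
}

errors = {}
for name, f in candidates.items():
    try:
        popt, _ = curve_fit(f, t, vals, p0=(1.0, 1.0), maxfev=5000)
        err = float(np.sum((f(t, *popt) - vals) ** 2))
        errors[name] = err if np.isfinite(err) else float("inf")
    except (RuntimeError, ValueError):  # fit failed to converge
        errors[name] = float("inf")

best = min(errors, key=errors.get)
print(best)
```

With exact exponential data the "exp" candidate should win by many orders of magnitude.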
A:
I would try an approach based on fitting too. For each of the four test functions (f1-f4, see below), find the values of a and b that minimize the squared error.
f1(t) = a*cos(b*t)
f2(t) = a*tan(b*t)
f3(t) = a*exp(b*t)
f4(t) = a*log(b*t)
After fitting, the squared error of the four functions can be used for evaluating the goodness of fit (low values mean a good fit).
If fitting is not allowed at all, the four functions can be divided into two distinct subgroups: repeating functions (cos and tan) and strictly increasing functions (exp and log).
Strictly increasing functions can be identified by checking whether all the given values increase throughout the measuring interval.
In pseudo code an algorithm could be structured like
if(vals are strictly increasing):
% Exp or log
if(increasing more rapidly at the end of the interval):
% exp detected
else:
% log detected
else:
% tan or cos
if(large changes in vals over a short period is detected):
% tan detected
else:
% cos detected
Be aware that this method is not that stable and will be easy to trick into faulty conclusions.
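That pseudo code could be translated roughly as follows (a loose sketch; the `10x`-the-median jump threshold is an arbitrary choice, and as noted above the whole method is easy to trick):

```python
import numpy as np

def classify(vals):
    vals = np.asarray(vals, dtype=float)
    d = np.diff(vals)
    if np.all(d > 0):                 # strictly increasing: exp or log
        # exp speeds up toward the end of the interval, log slows down
        return "exp" if d[-1] > d[0] else "log"
    # otherwise repeating: tan or cos; tan makes huge jumps near its poles
    ratio = np.max(np.abs(d)) / np.median(np.abs(d))
    return "tan" if ratio > 10 else "cos"

t = np.linspace(0.1, 4.0, 40)
print(classify(np.exp(t)))  # exp
print(classify(np.log(t)))  # log
print(classify(np.cos(t)))  # cos
```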
A:
See Curve Fitting
|
Simple Automatic Classification of the (R-->R) Functions
|
Given data values of some real-time physical process (e.g. network traffic) it is to find a name of the function which "at best" matches with the data.
I have a set of functions of type y=f(t) where y and t are real:
funcs = set([cos, tan, exp, log])
and a list of data values:
vals = [59874.141, 192754.791, 342413.392, 1102604.284, 3299017.372]
What is the simplest possible method to find a function from given set which will generate the very similar values?
PS: t is increasing starting from some positive value by almost-equal intervals
|
[
"Just write the error ( quadratic sum of error at each point for instance ) for each function of the set and choose the function giving the minimum.\nBut you should still fit each function before choosing\n",
"Scipy has functions for fitting data, but they use polynomes or splines. You can use one of Gauß' many discoveries, the method of least squares to fit other functions.\n",
"I would try an approach based on fitting too. For each of the four test functions (f1-4 see below), the values of a and b that minimizes the squared error. \nf1(t) = a*cos(b*t)\nf2(t) = a*tan(b*t)\nf3(t) = a*exp(b*t)\nf4(t) = a*log(b*t)\n\nAfter fitting the squared error of the four functions can be used for evaluating the fit goodness (low values means a good fit).\nIf fitting is not allowed at all, the four functions can be divided into two distinct subgroups, repeating functions (cos and tan) and strict increasing functions (exp and log).\nStrict increasing functions can be identified by checking if all the given values are increasing throughout the measuring interval. \nIn pseudo code an algorithm could be structured like\nif(vals are strictly increasing):\n % Exp or log\n if(increasing more rapidly at the end of the interval):\n % exp detected\n else:\n % log detected\nelse:\n % tan or cos\n if(large changes in vals over a short period is detected):\n % tan detected\n else:\n % cos detected\n\nBe aware that this method is not that stable and will be easy to trick into faulty conclusions.\n",
"See Curve Fitting\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"algorithm",
"function",
"math",
"python"
] |
stackoverflow_0001866862_algorithm_function_math_python.txt
|
Q:
Python Selector (URL routing library), experience/opinions?
Does anyone have opinions about or experience with Python Selector? It looks great, but I'm a bit put off by its "Alpha" status on pypi and lack of unit tests.
I mostly like that it's simple, self-contained, and pure WSGI. All other URL routers I've found assume I'm using Django, or Pylons, or Paste, or pull in lots of other dependencies, or just don't let me create a simple mapping of URL patterns to WSGI apps. Really, all I want to do is:
mapper.add("/regex/{to}/{resource}", my_wsgi_app)
mapper.add("/another/.*", other_wsgi_app)
...etc...
Anyways, has anyone used it before, or know of projects that have?
A:
I've used Selector for the last couple years and found it perfectly stable. It's been at 0.8.11 for at least two years now.
I would draw two conclusions from that:
It could be basically unmaintained. If you find a bug in it or need a new feature, I wouldn't count on being able to get Luke Arno to jump up and fix it in a hurry (not saying he wouldn't, but I'm guessing that Selector isn't his main focus these days). Would you be comfortable maintaining a local fork in that case?
It's pretty much complete. The problem that it's trying to solve is contained in scope. It's a very small library without much code. The bugs have been shaken out and there's really nothing left to do on it. I think this is the main reason it hasn't been updated in a long time. It's basically done.
Open source developers, and Python developers in particular, have a long history of being very (probably overly) conservative about marking things as 1.0. The lack of unit tests can be a little off-putting, but again, it's a small library solving a problem with limited scope. The code is short and clear enough to read and convince yourself of its correctness.
A:
Have you looked at werkzeug.routing? It's hard to find anything that's simpler, more self-contained, or purer-WSGI than Werkzeug, in general -- I'm quite a fan of it!-)
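For instance, a minimal werkzeug.routing sketch of the mapping from the question (the endpoint names are placeholders; note werkzeug uses `<var>` placeholders rather than `{var}`):

```python
from werkzeug.routing import Map, Rule

url_map = Map([
    Rule("/regex/<to>/<resource>", endpoint="my_wsgi_app"),
    Rule("/another/<path:rest>", endpoint="other_wsgi_app"),
])

# Bind to a hostname, then match paths to (endpoint, arguments) pairs.
urls = url_map.bind("example.com")
endpoint, args = urls.match("/regex/foo/bar")
print(endpoint, args)  # my_wsgi_app {'to': 'foo', 'resource': 'bar'}
```

The dispatcher would then look up the actual WSGI app by the returned endpoint name.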
|
Python Selector (URL routing library), experience/opinions?
|
Does anyone have opinions about or experience with Python Selector? It looks great, but I'm a bit put off by its "Alpha" status on pypi and lack of unit tests.
I mostly like that it's simple, self-contained, and pure WSGI. All other URL routers I've found assume I'm using Django, or Pylons, or Paste, or pull in lots of other dependencies, or just don't let me create a simple mapping of URL patterns to WSGI apps. Really, all I want to do is:
mapper.add("/regex/{to}/{resource}", my_wsgi_app)
mapper.add("/another/.*", other_wsgi_app)
...etc...
Anyways, has anyone used it before, or know of projects that have?
|
[
"I've used Selector for the last couple years and found it perfectly stable. It's been at 0.8.11 for at least two years now. \nI would draw two conclusions from that: \n\nIt could be basically unmaintained. If you find a bug in it or need a new feature, I wouldn't count on being able to get Luke Arno to jump up and fix it in a hurry (not saying he wouldn't, but I'm guessing that Selector isn't his main focus these days). Would you be comfortable maintaining a local fork in that case?\nIt's pretty much complete. The problem that it's trying to solve is contained in scope. It's a very small library without much code. The bugs have been shaken out and there's really nothing left to do on it. I think this is the main reason it hasn't been updated in a long time. It's basically done.\n\nOpen source developers, and Python developers in particular, have a long history of being very (probably overly) conservative about marking things as 1.0. The lack of unit tests can be a little off-putting, but again, it's a small library solving a problem with limited scope. The code is short and clear enough to read and convince yourself of its correctness.\n",
"Have you looked at werkzeug.routing? It's hard to find anything that's simpler, more self-contained, or purer-WSGI than Werkzeug, in general -- I'm quite a fan of it!-)\n"
] |
[
8,
6
] |
[] |
[] |
[
"python",
"selector",
"url_routing",
"wsgi"
] |
stackoverflow_0001864393_python_selector_url_routing_wsgi.txt
|
Q:
What is "Python interpreter None"?
I just created my first application for Google App Engine, which is called "Hello World". It was the first lesson from the Google App Engine getting-started tutorial. I tested it on my computer (I am using Windows XP), and it was working just fine - whenever I would open a new window with my web browser (Firefox), I would always be able to see the phrase Hello, world!!!, which was proof that it worked.
Before I created it I didn't know how to save a notepad file as a .py file. I asked this question here and quickly received an answer.
However, later I received a comment to my question saying this:
"App Engine does not support Python
2.6. You will have better luck using a Python 2.5 release. Sad, but true."
So I uninstalled Python 2.6.4 and installed Python 2.5.4. I also uninstalled the Google App Engine Launcher and re-installed it. While I was installing it, a window popped up saying that all the necessary prerequisites were found on my computer.
However, when I added my existing application and wanted to run it with App Engine Launcher, a message popped up saying this:
"Python interpreter None not found.
Cannot run project
E:\python\helloworld. Please confirm
these values in your Preferences, or
take an appropriate measure to fix it
(e.g. install Python)."
I tried deleting my application and remaking it anew, to no avail. What should I do?
A:
Well, it is basically saying that it cannot find your just-installed Python interpreter.
It looks like you are on Windows so check your environment variables, particularly PATH and PYTHONPATH, they are probably still pointing to the previous 2.6 installation folder.
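A quick way to confirm which interpreter and paths are actually in effect is to run a few lines with the Python you expect the launcher to use:

```python
import os
import sys

# Which interpreter is running, and where is it installed?
print(sys.version.split()[0])   # e.g. 2.5.4
print(sys.executable)

# What do the environment variables the launcher consults point at?
print(os.environ.get("PATH", ""))
print(os.environ.get("PYTHONPATH", "<not set>"))
```

If `sys.executable` or `PATH` still points at the old 2.6 folder, that is the value to fix.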
A:
You need to set the python interpreter in Window->Preferences->Google->AppEngine on your eclipse platform ( of course, I am assuming you are using the GAE Eclipse plugin, if not, disregard my suggestion).
|
What is "Python interpreter None"?
|
I just created my first application for Google App Engine, which is called "Hello World". It was the first lesson from the Google App Engine getting-started tutorial. I tested it on my computer (I am using Windows XP), and it was working just fine - whenever I would open a new window with my web browser (Firefox), I would always be able to see the phrase Hello, world!!!, which was proof that it worked.
Before I created it I didn't know how to save a notepad file as a .py file. I asked this question here and quickly received an answer.
However, later I received a comment to my question saying this:
"App Engine does not support Python
2.6. You will have better luck using a Python 2.5 release. Sad, but true."
So I uninstalled Python 2.6.4 and installed Python 2.5.4. I also uninstalled the Google App Engine Launcher and re-installed it. While I was installing it, a window popped up saying that all the necessary prerequisites were found on my computer.
However, when I added my existing application and wanted to run it with App Engine Launcher, a message popped up saying this:
"Python interpreter None not found.
Cannot run project
E:\python\helloworld. Please confirm
these values in your Preferences, or
take an appropriate measure to fix it
(e.g. install Python)."
I tried deleting my application and remaking it anew, to no avail. What should I do?
|
[
"Well, it is basically saying that it cannot find your just installed python interpreter.\nIt looks like you are on Windows so check your environment variables, particularly PATH and PYTHONPATH, they are probably still pointing to the previous 2.6 installation folder.\n",
"You need to set the python interpreter in Window->Preferences->Google->AppEngine on your eclipse platform ( of course, I am assuming you are using the GAE Eclipse plugin, if not, disregard my suggestion).\n"
] |
[
3,
1
] |
[] |
[] |
[
"app_launcher",
"google_app_engine",
"python"
] |
stackoverflow_0001867657_app_launcher_google_app_engine_python.txt
|
Q:
Django to use different settings.py file based on subdomains
How can Django use a different settings.py file based on subdomains?
Can these utilities ("django-admin", "python manage.py") still be used if the different settings connect to different databases?
A:
ok you have two dimensions you need to cover with your settings:
Domain (site)
Current Machine
Here is what I recommend:
universal_settings.py - all the settings you want to inherit everywhere (all machines, all domains)
local_settings.py - settings on a per machine basis (database settings, mail server, etc)
site_1.py - settings that are specific to one of your domains
site_2.py - settings that are specific to one of your domains
site_n.py - you get the idea
the bottom of universal_settings.py should include:
from local_settings import *
This will override anything in the universal settings as necessary.
similarly, each of the site_1.py, site_2.py, site_n.py settings files should begin with:
from universal_settings import *
Finally you need to set up an apache (or nginx, or whatever) instance for each domain and use the appropriate site_n.py as the settings file for that server
This is the method that works best for me :)
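Condensed into one sketch (file contents abbreviated and invented for illustration; the `--settings` flag and the `DJANGO_SETTINGS_MODULE` environment variable are the standard Django mechanisms for selecting the per-site module):

```python
# universal_settings.py -- shared by all machines and all domains
DEBUG = False
TIME_ZONE = "UTC"
from local_settings import *      # machine-specific values win

# local_settings.py -- one per machine, kept out of version control
DATABASE_NAME = "dev_db"

# site_1.py -- one per domain; selected per server process, e.g.
#   python manage.py runserver --settings=site_1
from universal_settings import *
SITE_ID = 1
```

Because each web server process imports a different `site_n` module, `django-admin` and `manage.py` keep working; you just pass the appropriate settings module for the database you want to touch.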
|
Django to use different settings.py file based on subdomains
|
How can Django use a different settings.py file based on subdomains?
Can these utilities ("django-admin", "python manage.py") still be used if the different settings connect to different databases?
|
[
"ok you have two dimensions you need to cover with your settings:\n\nDomain (site)\nCurrent Machine\n\nHere is what I recommend:\nuniversal_settings.py - all the settings you want to inherit everywhere (all machines, all domains)\nlocal_settings.py - settings on a per machine basis (database settings, mail server, etc)\nsite_1.py - settings that are specific to a one of your domains\nsite_2.py - settings that are specific to a one of your domains\nsite_n.py - you get the idea\nthe bottom of universal_settings.py should include:\nfrom local_settings import *\n\nThis will override anything in the universal settings as necessary.\nsimilarly, each of the site_1.py, site_2.py, site_n.py settings files should begin with:\nfrom universal_settings import *\n\nFinally you need to set up an apache (or nginx, or whatever) instance for each domain and use the appropriate site_n.py as the settings file for that server\nThis is the method that works best for me :)\n"
] |
[
5
] |
[] |
[] |
[
"django",
"python",
"subdomain"
] |
stackoverflow_0001866760_django_python_subdomain.txt
|
Q:
Conway's Life: Iterate over a closed universe in Python
I have just started learning Python, and I'm trying to write a program for Conway's Game of Life. I'm trying to create a closed universe with boundary conditions (which wrap to the opposite side/corner). I think I have done this, but I'm not iterating over the loop when it runs, and can't work out how to do this. Thanks very much in advance!
def iterate_conway_closed(universe_t0, n_iter=1, display=False):
# This function is similar to your iterate_conway_open function
# but instead of adding a zero guard ring it should add a
# ‘wrapping ring’ as explained earlier. Use the same apply_rules
# function as part 1) to actually apply the rules once you have
# padded the universe.
# Note that unlike the always 0 guard ring for an open universe
# the wrapping ring will need updating after every call to
# apply rules to reflect changes in the universe
height, width=universe_t0.shape
universe_array=numpy.zeros((height+2, width+2), dtype=numpy.uint8)
universe_array[1:-1, 1:-1]=universe_t0
def count(n_iter):
n=0
while n<= n_iter:
yield n
n+=1
for n in range(0,n_iter):
universe_array[:1,1:-1]=universe_t0[-1:,:] # Maps the bottom row
universe_array[0,1:-1]=universe_t0[-1:,0:]# Maps the top row
universe_array[1:-1,0]=universe_t0[:,0]# Maps the left column
universe_array[1:-1,-1]=universe_t0[:,-1]# Maps the right column
universe_array[0,0]=universe_t0[-1,0]# Maps the bottom left corner
universe_array[0,-1]=universe_t0[-1,-1]# Maps the bottom right corner
universe_array[-1,0]=universe_t0[0,0]# Maps the top left corner
universe_array[-1,-1]=universe_t0[0,-1]# Maps the top right corner
for i in range(0, n_iter):
universe_array=apply_rules(universe_array)
if display==True:
b_print(universe_array)
return universe_array[1:-1, 1:-1]
A:
Your problem could be the break statement.
Edit: Also, you probably want just one loop. First you initialize universe_array n_iter times, doing nothing after the first time; then you apply the rules n_iter times. You most likely want to put them in the same loop so that the universe gets correctly updated after each iteration.
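To illustrate the single-loop restructuring, here is a hedged sketch. It substitutes its own minimal `apply_rules` (the question's version is not shown), and uses `numpy.pad(..., mode="wrap")`, which builds exactly the wrapping ring the question constructs by hand; the padded array is rebuilt from the current universe on every iteration.

```python
import numpy as np

def apply_rules(padded):
    # A minimal Life step on an already-padded array -- a stand-in for
    # the questioner's apply_rules, for demonstration only.
    h, w = padded.shape
    out = np.zeros_like(padded)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            n = padded[r-1:r+2, c-1:c+2].sum() - padded[r, c]
            out[r, c] = 1 if (n == 3 or (n == 2 and padded[r, c])) else 0
    return out

def iterate_closed(universe, n_iter=1):
    u = universe.copy()
    for _ in range(n_iter):
        # Rebuild the wrapping ring from the *current* universe, then step.
        padded = np.pad(u, 1, mode="wrap")
        u = apply_rules(padded)[1:-1, 1:-1]
    return u

# A blinker oscillates with period 2, so two steps restore the original.
blinker = np.zeros((5, 5), dtype=np.uint8)
blinker[2, 1:4] = 1
print(np.array_equal(iterate_closed(blinker, 2), blinker))  # True
```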
A:
As I understand it, you want to create an iterator function. What you need for this is to replace your return with yield.
As example if you wish to make an iterator-function, which returns the range from 0 to x do this:
def count(x=0):
n = 0
while n <= x:
yield n
n += 1
Then you can iterate over it by a for-statement:
for i in count(10):
print(i)
# prints 0,1,2,3,4,5,6,7,8,9,10 (instead of "," there are line breaks)
The break is also a problem, because it will always be executed when the while-loop walks through.
|
Conway's Life: Iterate over a closed universe in Python
|
I have just started learning Python, and I'm trying to write a program for Conway's Game of Life. I'm trying to create a closed universe with boundary conditions (which wrap to the opposite side/corner). I think I have done this, but I'm not iterating over the loop when it runs, and can't work out how to do this. Thanks very much in advance!
def iterate_conway_closed(universe_t0, n_iter=1, display=False):
# This function is similar to your iterate_conway_open function
# but instead of adding a zero guard ring it should add a
# ‘wrapping ring’ as explained earlier. Use the same apply_rules
# function as part 1) to actually apply the rules once you have
# padded the universe.
# Note that unlike the always 0 guard ring for an open universe
# the wrapping ring will need updating after every call to
# apply rules to reflect changes in the universe
height, width=universe_t0.shape
universe_array=numpy.zeros((height+2, width+2), dtype=numpy.uint8)
universe_array[1:-1, 1:-1]=universe_t0
def count(n_iter):
n=0
while n<= n_iter:
yield n
n+=1
for n in range(0,n_iter):
universe_array[:1,1:-1]=universe_t0[-1:,:] # Maps the bottom row
universe_array[0,1:-1]=universe_t0[-1:,0:]# Maps the top row
universe_array[1:-1,0]=universe_t0[:,0]# Maps the left column
universe_array[1:-1,-1]=universe_t0[:,-1]# Maps the right column
universe_array[0,0]=universe_t0[-1,0]# Maps the bottom left corner
universe_array[0,-1]=universe_t0[-1,-1]# Maps the bottom right corner
universe_array[-1,0]=universe_t0[0,0]# Maps the top left corner
universe_array[-1,-1]=universe_t0[0,-1]# Maps the top right corner
for i in range(0, n_iter):
universe_array=apply_rules(universe_array)
if display==True:
b_print(universe_array)
return universe_array[1:-1, 1:-1]
|
[
"your problem could be the break statement\nEdit: Also, you probably want just 1 loop, first you initialize universe_array n_iter times, doing nothing after the first time. then you apply the rules n_iter times, you most likely want to put them in the same loop so that the universe gets correctly updated after each iteration.\n",
"as I understand you want to create an iterator function. What you need for this is to replace your return with yield.\nAs example if you wish to make an iterator-function, which returns the range from 0 to x do this:\n def count(x=0):\n n = 0\n while n <= x:\n yield n\n n += 1\n\nThen you can iterate over it by a for-statement:\nfor i in count(10):\n print(i)\n\n# prints 0,1,2,3,4,5,6,7,8,9,10 (instead of \",\" there are line breaks)\n\nThe break is also a problem, because it will be always executed, when the while-loop walks through.\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001867716_python.txt
|
Q:
Using Google App Engine to display a Rss/Atom feed
I'm thinking of setting up a Google App Engine app that simply displays an RSS or Atom feed. The idea is that every once in a while (a cron job, or at the push of a magic button) the feed is read and copied into the app's internal data, ready to be viewed. This would be done in Python.
I found this page that seems to explain what I want to do. But that assumes I'm using some of the other Google products, as it relies on the Google API.
My idea was more along the lines that I add some new content, host it locally on my machine, go to the Google App administration panel, push a button, and my (locally hosted) feed is read and copied.
My questions now are:
Is the RSS (or Atom, one is enough) format specified enough to handle add/edit/delete?
Are there any flavors or such I should worry about?
Have this been done before? Would save me some work.
A:
One option is to use the universal feed parser library, which will take care of most of these issues for you. Another option would be to use a PubSubHubbub-powered service such as Superfeedr, which will POST updates to you in a pre-sanitized form, eliminating most of your polling and parsing issues.
A:
What about using an additional library, like for instance Feedparser?
|
Using Google App Engine to display a Rss/Atom feed
|
I'm thinking of setting up a Google App Engine app that simply displays an RSS or Atom feed. The idea is that every once in a while (a cron job, or at the push of a magic button) the feed is read and copied into the app's internal data, ready to be viewed. This would be done in Python.
I found this page that seems to explain what I want to do. But that assumes I'm using some of the other Google products, as it relies on the Google API.
My idea was more along the lines that I add some new content, host it locally on my machine, go to the Google App administration panel, push a button, and my (locally hosted) feed is read and copied.
My questions now are:
Is the RSS (or Atom, one is enough) format specified enough to handle add/edit/delete?
Are there any flavors or such I should worry about?
Have this been done before? Would save me some work.
|
[
"One option is to use the universal feed parser library, which will take care of most of these issues for you. Another option would be to use a PubSubHubbub-powered service such as Superfeedr, which will POST updates to you in a pre-sanitized form, eliminating most of your polling and parsing issues.\n",
"What about using an additional library, like for instance Feedparser?\n"
] |
[
3,
0
] |
[] |
[] |
[
"atom_feed",
"feed",
"google_app_engine",
"python",
"rss"
] |
stackoverflow_0001860407_atom_feed_feed_google_app_engine_python_rss.txt
|
Q:
Iteration in a single line
I have some code of the form:
for i in range(nIterations):
y = f(y)
Where f is a function defined elsewhere. Hopefully the idea of that code is that, after it's run, y will have had f applied to it nIterations times.
Is there a way in python to write this in a single line?
A:
like this?
for i in range(nIterations): y = f(y)
A for loop with one command can be written as a single line.
EDIT
Or maybe slightly cleaner:
for _ in xrange(nIterations): y = f(y)
Since you don't want to have a something that can be split into two separate statements (i think), here's another one:
reduce(lambda y, _: f(y), xrange(nIterations), initValue)
Still, I would recommend to just use your original code, which is much more intuitive and readable. Also note what Guido van Rossum has to say on loops versus repeat.
Note by the way that (in python 2.x) xrange is more efficient than range for large nIterations as it returns an actual iterator and not an allocated list.
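One portability note on the reduce variants in this answer: in Python 3, reduce is no longer a builtin and lives in functools (and xrange became range). A minimal check of the repeated-application behaviour, using a made-up f:

```python
from functools import reduce  # builtin in Python 2, functools in Python 3

def f(y):
    return y + 1

# Apply f five times, starting from 0.
y = reduce(lambda acc, _: f(acc), range(5), 0)
print(y)  # 5
```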
A:
So like this you mean?
for i in range(nIterations): y = f(y)
While this might seem nice and pretty, I'd argue (as has been done in the comments below your post) that this doesn't improve readability, and is best off left as 2 lines.
A:
Just stick it all on one line like this: for i in range(nIterations): y = f(y)
The decision to have code on one line or multiple has been an argument for years - there is no performance increase - just lay it out how you like it and how you can read it best.
A:
Your question lacks context, but this could be rewritten using the map function or a list comprehension (both one-liners).
A:
Not exactly one line, but once you define the power operation for functions:
def f_pow(f, n):
if n == 1:
return f
else:
return lambda x: f_pow(f, n-1)(f(x))
you can write this:
f_pow(f, nIterations)(y)
A:
reduce(lambda y,_: f(y),xrange(niterations),y)
A:
OK, this is probably a very weird and incomprehensible use of the reduce function, so for real code I'd stick with what you have. But just for the fun of it, here goes:
reduce(lambda a, b: f(a), range(nIterations), y)
A:
If you make y mutable, then you can use list comprehension. But this isn't something I'd use in real code, unless really necessary.
def f(y):
y[0] += 5
y = [0]
[f(y) for _ in xrange(10)]
print y[0] # => 50
A:
While I'd suggest you keep the original code snippet as it is much clearer, you can accomplish this with a single line of code using the reduce function:
reduce(lambda a,b: f(a), xrange(nIterations), y)
A:
You can create such snippets using semicolons (;) if you need to execute more than one instruction inside the loop. Here is an example:
for i in xrange(nIterations): x=f(i); y=f(x); z=f(y)
A:
y=[f(y) for i in range(niteration)]
hope that helps ;)
|
Iteration in a single line
|
I have some code of the form:
for i in range(nIterations):
y = f(y)
Where f is a function defined elsewhere. Hopefully the idea of that code is that, after it's run, y will have had f applied to it nIterations times.
Is there a way in python to write this in a single line?
|
[
"like this?\nfor i in range(nIterations): y = f(y)\n\nA for loop with one command can be written as a single line.\nEDIT\nOr maybe slightly cleaner:\nfor _ in xrange(nIterations): y = f(y)\n\nSince you don't want to have a something that can be split into two separate statements (i think), here's another one:\nreduce(lambda y, _: f(y), xrange(nIterations), initValue)\n\nStill, I would recommend to just use your original code, which is much more intuitive and readable. Also note what Guido van Rossum has to say on loops versus repeat.\nNote by the way that (in python 2.x) xrange is more efficient than range for large nIterations as it returns an actual iterator and not an allocated list.\n",
"So like this you mean? \nfor i in range(nIterations): y = f(y)\n\nWhile this might seem nice and pretty, I'd argue (as has been done in the comments below your post) that this doesn't improve readability, and is best off left as 2 lines.\n",
"Just stick it all on one line like this: for i in range(nIterations): y = f(y)\nThe decision to have code on one line or multiple has been an argument for years - there is no performance increase - just lay it out how you like it and how you can read it best.\n",
"Your question lacks context, but this could be rewritten using map function or list comprehension (both one-liners)\n",
"Not exactly one line, but once you define the power operation for functions:\ndef f_pow(f, n):\n if n == 1:\n return f\n else:\n return lambda x: f_pow(f, n-1)(f(x))\n\nyou can write this:\nf_pow(f, nIterations)(y)\n\n",
"reduce(lambda y,_: f(y),xrange(niterations),y)\n",
"Ok this is probably a very weird an incomprehensible use of the reduce function, so for real code I'd stick with what you have. But just for the fun of it, here goes:\nreduce(lambda a, b: f(a), range(nIterations), y)\n\n",
"If you make y mutable, then you can use list comprehension. But this isn't something I'd use in real code, unless really necessary.\ndef f(y):\n y[0] += 5\n\ny = [0]\n[f(y) for _ in xrange(10)]\nprint y[0] # => 50\n\n",
"While I'd suggest you keep the original code snippet as it is much clearer, you can accomplish this with a single line of code using the reduce function:\nreduce(lambda a,b: f(a), xrange(nIterations), y)\n\n",
"You can create such snippets using semicolons ; if you need to execute more than one instruction inside the loop, here is an example:\nfor i in xrange(nIterations): x=f(i); y=f(x); z=f(y)\n\n",
"y=[f(y) for i in range(niteration)]\nhope that helps ;)\n"
] |
[
6,
3,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001867715_python.txt
|
Q:
Highlight a row in wxGrid (with wxPython) when a cell in that row changes programatically
How do I highlight a row after an update of the underlying PyGridTableBase? I can't seem to get my head around it.
I am able to create alternating row highlighting when the table is first drawn:
### Style the table a little.
def GetAttr(self, row, col, prop):
attr = gridlib.GridCellAttr()
if self.is_header(row):
attr.SetReadOnly(1)
attr.SetFont(wx.Font(10, wx.FONTFAMILY_DEFAULT, wx.NORMAL, wx.NORMAL))
attr.SetBackgroundColour(wx.LIGHT_GREY)
attr.SetAlignment(wx.ALIGN_CENTRE, wx.ALIGN_CENTRE)
return attr
if prop is 'highlight':
attr.SetBackgroundColour('#14FF63')
self.SetRowAttr(attr, row)
# Odd Even Rows
if row%2 == 1:
attr.SetBackgroundColour('#CCE6FF')
return attr
return None
but not when an event fires.
Any help much appreciated.
A:
You are doing the right thing, the only problem that comes to mind is that you perhaps didn't manually refresh the grid after the GridTableBase update. Here is a small working example that will hopefully help you out.
import wx, wx.grid
class GridData(wx.grid.PyGridTableBase):
_cols = "a b c".split()
_data = [
"1 2 3".split(),
"4 5 6".split(),
"7 8 9".split()
]
_highlighted = set()
def GetColLabelValue(self, col):
return self._cols[col]
def GetNumberRows(self):
return len(self._data)
def GetNumberCols(self):
return len(self._cols)
def GetValue(self, row, col):
return self._data[row][col]
def SetValue(self, row, col, val):
self._data[row][col] = val
def GetAttr(self, row, col, kind):
attr = wx.grid.GridCellAttr()
attr.SetBackgroundColour(wx.GREEN if row in self._highlighted else wx.WHITE)
return attr
def set_value(self, row, col, val):
self._highlighted.add(row)
self.SetValue(row, col, val)
class Test(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None)
self.data = GridData()
self.grid = wx.grid.Grid(self)
self.grid.SetTable(self.data)
btn = wx.Button(self, label="set a2 to x")
btn.Bind(wx.EVT_BUTTON, self.OnTest)
self.Sizer = wx.BoxSizer(wx.VERTICAL)
self.Sizer.Add(self.grid, 1, wx.EXPAND)
self.Sizer.Add(btn, 0, wx.EXPAND)
def OnTest(self, event):
self.data.set_value(1, 0, "x")
self.grid.Refresh()
app = wx.PySimpleApp()
app.TopWindow = Test()
app.TopWindow.Show()
app.MainLoop()
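Stripped of the wx-specific parts, the pattern in this answer — record changed rows in a set and let the attribute lookup consult it — can be sketched in plain Python (the class and colour names here are illustrative, not part of the wxPython API):

```python
class TableData(object):
    """Minimal stand-in for a PyGridTableBase that tracks changed rows."""
    def __init__(self, rows):
        self._rows = rows
        self._highlighted = set()

    def set_value(self, row, col, val):
        # update the cell and remember that this row changed
        self._rows[row][col] = val
        self._highlighted.add(row)

    def row_color(self, row):
        # GetAttr would call something like this to pick a background colour
        return 'green' if row in self._highlighted else 'white'

data = TableData([[1, 2], [3, 4]])
data.set_value(1, 0, 99)
print(data.row_color(0))  # white (row 0 untouched)
print(data.row_color(1))  # green (row 1 was changed)
```

After such an update the grid itself still has to be repainted — hence the self.grid.Refresh() call in OnTest above.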
|
Highlight a row in wxGrid (with wxPython) when a cell in that row changes programatically
|
How do I highlight a row after an update of the underlying PyGridTableBase? I can't seem to get my head around it.
I am able to create alternating row highlighting when the table is first drawn:
### Style the table a little.
def GetAttr(self, row, col, prop):
attr = gridlib.GridCellAttr()
if self.is_header(row):
attr.SetReadOnly(1)
attr.SetFont(wx.Font(10, wx.FONTFAMILY_DEFAULT, wx.NORMAL, wx.NORMAL))
attr.SetBackgroundColour(wx.LIGHT_GREY)
attr.SetAlignment(wx.ALIGN_CENTRE, wx.ALIGN_CENTRE)
return attr
if prop is 'highlight':
attr.SetBackgroundColour('#14FF63')
self.SetRowAttr(attr, row)
# Odd Even Rows
if row%2 == 1:
attr.SetBackgroundColour('#CCE6FF')
return attr
return None
but not when an event fires.
Any help much appreciated.
|
[
"You are doing the right thing, the only problem that comes to mind is that you perhaps didn't manually refresh the grid after the GridTableBase update. Here is a small working example that will hopefully help you out.\nimport wx, wx.grid\n\nclass GridData(wx.grid.PyGridTableBase):\n _cols = \"a b c\".split()\n _data = [\n \"1 2 3\".split(),\n \"4 5 6\".split(),\n \"7 8 9\".split()\n ]\n _highlighted = set()\n\n def GetColLabelValue(self, col):\n return self._cols[col]\n\n def GetNumberRows(self):\n return len(self._data)\n\n def GetNumberCols(self):\n return len(self._cols)\n\n def GetValue(self, row, col):\n return self._data[row][col]\n\n def SetValue(self, row, col, val):\n self._data[row][col] = val\n\n def GetAttr(self, row, col, kind):\n attr = wx.grid.GridCellAttr()\n attr.SetBackgroundColour(wx.GREEN if row in self._highlighted else wx.WHITE)\n return attr\n\n def set_value(self, row, col, val):\n self._highlighted.add(row)\n self.SetValue(row, col, val)\n\nclass Test(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n\n self.data = GridData()\n self.grid = wx.grid.Grid(self)\n self.grid.SetTable(self.data)\n\n btn = wx.Button(self, label=\"set a2 to x\")\n btn.Bind(wx.EVT_BUTTON, self.OnTest)\n\n self.Sizer = wx.BoxSizer(wx.VERTICAL)\n self.Sizer.Add(self.grid, 1, wx.EXPAND)\n self.Sizer.Add(btn, 0, wx.EXPAND)\n\n def OnTest(self, event):\n self.data.set_value(1, 0, \"x\")\n self.grid.Refresh()\n\n\napp = wx.PySimpleApp()\napp.TopWindow = Test()\napp.TopWindow.Show()\napp.MainLoop()\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"wxpython"
] |
stackoverflow_0001866321_python_wxpython.txt
|
Q:
Set an object's superclass at __init__?
Is it possible, when instantiating an object, to pass in a class which the object should derive from?
For instance:
class Red(object):
def x(self):
print '#F00'
class Blue(object):
def x(self):
print '#00F'
class Circle(object):
def __init__(self, parent):
# here, we set Bar's parent to `parent`
self.x()
class Square(object):
def __init__(self, parent):
# here, we set Bar's parent to `parent`
self.x()
self.sides = 4
red_circle = Circle(parent=Red)
blue_circle = Circle(parent=Blue)
blue_square = Square(parent=Blue)
Which would have similar effects as:
class Circle(Red):
def __init__(self):
self.x()
without, however, affecting other instances of Circle.
A:
Perhaps what you are looking for is a class factory:
#!/usr/bin/env python
class Foo(object):
def x(self):
print('y')
def Bar(parent=Foo):
class Adoptee(parent):
def __init__(self):
self.x()
return Adoptee()
obj=Bar(parent=Foo)
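For illustration, here is roughly how the same factory idea plays out with the question's Red class (a sketch, not the poster's code; the make_shape and color names are made up):

```python
class Red(object):
    def x(self):
        return '#F00'

def make_shape(parent=Red):
    # build a class deriving from the requested parent, then instantiate it
    class Adoptee(parent):
        def __init__(self):
            self.color = self.x()
    return Adoptee()

shape = make_shape(parent=Red)
print(shape.color)             # #F00
print(isinstance(shape, Red))  # True
```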
A:
I agree with @AntsAasma. You should probably consider using dependency injection. At least in the example given (which I'm sure is greatly simplified to illustrate your problem), the color of a shape is better represented via a has-a relationship than an is-a relationship.
You could implement this by passing the desired color object to the constructor, storing a reference to it, and delegating the function call to that object. This greatly simplifies the implementation while still retaining the desired behavior. See an example here:
class Red(object):
def x(self):
print '#F00'
class Blue(object):
def x(self):
print '#00F'
class Shape(object):
def __init__(self,color):
self._color=color
def x(self):
return self._color.x()
class Circle(Shape):
def __init__(self, color):
Shape.__init__(self,color)
self.x()
class Square(Shape):
def __init__(self, color):
Shape.__init__(self,color)
self.x()
self.sides = 4
red_circle = Circle(color=Red())
blue_circle = Circle(color=Blue())
blue_square = Square(color=Blue())
Edit: Fixed names of constructor arguments in sample code
A:
It sounds like you are trying to use inheritance for something that it isn't meant for. If you would explain why you want to do this, maybe a more idiomatic and robust way to achieve your goals can be found.
A:
If you really need it, then you could use type constructor, e.g. within a factory function (or inside __new__ method, but this is probably safer approach):
class Foo(object):
def x(self):
print 'y'
class Bar(object):
def __init__(self):
self.x()
def magic(cls, parent, *args, **kwargs):
new = type(cls.__name__, (parent,), cls.__dict__.copy())
return new(*args, **kwargs)
obj = magic(Bar, parent = Foo)
A:
As everybody else says, that's a pretty weird usage, but, if you really want it, it's surely feasible (except for the mysterious Bar that you pull out of thin air in comments;-). For example:
class Circle(object):
def __init__(self, parent):
self.__class__ = type('Circle', (self.__class__, parent), {})
self.x()
This gives each instance of Circle its own personal class (all named Circle, but all different) -- this part is actually the key reason this idiom is sometimes very useful (when you want a "per-instance customized special method" with new-style classes: since the special method always gets looked up on the class, to customize it per-instance you need each instance to have a distinct class!-). If you'd rather do as much class-sharing as feasible you may want a little memoizing factory function to help:
_memo = {}
def classFor(*bases):
if bases in _memo: return _memo[bases]
name = '_'.join(c.__name__ for c in bases)
c = _memo[bases] = type(name, bases, {})
return c
(here I'm also using a different approach to the resulting class's name, using class names such as Circle_Red and Circle_Blue for your examples rather than just Circle). Then:
class Circle(object):
def __init__(self, parent):
self.__class__ = classFor(Circle, parent)
self.x()
So the technique is smooth and robust, but I still don't see it as a good match to the use case you exemplify with. However, it might be useful in other use cases, so I'm showing it.
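To make the per-instance class trick concrete, here is a minimal sketch with the question's Red/Blue classes (returning values instead of printing so the colours can be inspected; this is an illustration, not the original poster's code):

```python
class Red(object):
    def x(self):
        return '#F00'

class Blue(object):
    def x(self):
        return '#00F'

class Circle(object):
    def __init__(self, parent):
        # give this instance its own class, derived from Circle and parent
        self.__class__ = type('Circle', (self.__class__, parent), {})
        self.color = self.x()

red_circle = Circle(Red)
blue_circle = Circle(Blue)
print(red_circle.color)              # #F00
print(blue_circle.color)             # #00F
print(isinstance(red_circle, Red))   # True
print(isinstance(blue_circle, Red))  # False -- other instances unaffected
```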
|
Set an object's superclass at __init__?
|
Is it possible, when instantiating an object, to pass in a class which the object should derive from?
For instance:
class Red(object):
def x(self):
print '#F00'
class Blue(object):
def x(self):
print '#00F'
class Circle(object):
def __init__(self, parent):
# here, we set Bar's parent to `parent`
self.x()
class Square(object):
def __init__(self, parent):
# here, we set Bar's parent to `parent`
self.x()
self.sides = 4
red_circle = Circle(parent=Red)
blue_circle = Circle(parent=Blue)
blue_square = Square(parent=Blue)
Which would have similar effects as:
class Circle(Red):
def __init__(self):
self.x()
without, however, affecting other instances of Circle.
|
[
"Perhaps what you are looking for is a class factory:\n#!/usr/bin/env python\nclass Foo(object):\n def x(self):\n print('y')\n\ndef Bar(parent=Foo):\n class Adoptee(parent):\n def __init__(self):\n self.x()\n return Adoptee()\nobj=Bar(parent=Foo)\n\n",
"I agree with @AntsAasma. You should probably consider using dependency injection. Atleast in the example given (which I'm sure is greatly simplified to illustrate your problem), the color of a shape is better represented by via a has-a relationship rather than with a is-a relationship.\nYou could implement this via passing in the desired color object to the constructor, storing a reference to it, and delegating the function call to this object. This greatly simplifies the implementation while still retaining the desired behavior. See an example here:\nclass Red(object):\n def x(self):\n print '#F00'\n\nclass Blue(object):\n def x(self):\n print '#00F'\n\nclass Shape(object):\n def __init__(self,color):\n self._color=color\n def x(self):\n return self._color.x()\n\nclass Circle(Shape):\n def __init__(self, color):\n Shape.__init__(self,color)\n self.x()\n\nclass Square(Shape):\n def __init__(self, color):\n Shape.__init__(self,color)\n self.x()\n self.sides = 4\n\nred_circle = Circle(color=Red())\nblue_circle = Circle(color=Blue())\nblue_square = Square(color=Blue())\n\nEdit: Fixed names of constructor arguments in sample code\n",
"It sounds like you are trying to use inheritance for something that it isn't meant for. If you would explain why you want to do this, maybe a more idiomatic and robust way to achieve your goals can be found.\n",
"If you really need it, then you could use type constructor, e.g. within a factory function (or inside __new__ method, but this is probably safer approach):\nclass Foo(object):\n def x(self):\n print 'y'\n\nclass Bar(object):\n def __init__(self):\n self.x()\n\ndef magic(cls, parent, *args, **kwargs):\n new = type(cls.__name__, (parent,), cls.__dict__.copy())\n return new(*args, **kwargs)\n\nobj = magic(Bar, parent = Foo)\n\n",
"As everybody else says, that's a pretty weird usage, but, if you really want it, it's surely feasible (except for the mysterious Bar that you pull out of thin air in comments;-). For example:\nclass Circle(object):\n def __init__(self, parent):\n self.__class__ = type('Circle', (self.__class__, parent), {})\n self.x()\n\nThis gives each instance of Circle its own personal class (all named Circle, but all different) -- this part is actually the key reason this idiom is sometimes very useful (when you want a \"per-instance customized special method\" with new-style classes: since the special method always gets looked up on the class, to customize it per-instance you need each instance to have a distinct class!-). If you'd rather do as much class-sharing as feasible you may want a little memoizing factory function to help:\n_memo = {}\ndef classFor(*bases):\n if bases in _memo: return _memo[bases]\n name = '_'.join(c.__name__ for c in bases)\n c = _memo[bases] = type(name, bases, {})\n return c\n\n(here I'm also using a different approach to the resulting class's name, using class names such as Circle_Red and Circle_Blue for your examples rather than just Circle). Then:\nclass Circle(object):\n def __init__(self, parent):\n self.__class__ = classFor(Circle, parent)\n self.x()\n\nSo the technique is smooth and robust, but I still don't see it as a good match to the use case you exemplify with. However, it might be useful in other use cases, so I'm showing it.\n"
] |
[
7,
4,
1,
0,
0
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0001867258_oop_python.txt
|
Q:
Properly specifying path for git pull from my local development machine
I'm trying to setup Fabric so that I can automatically deploy my Django app to my web server.
What I want to do is to pull the data from my Development machine (os X) to the server.
How do I correctly specify my path in the git url?
This is the error I'm getting:
$ git pull
fatal: '/Users/Bryan/work/tempReview_app/.': unable to chdir or not a git archive
fatal: The remote end hung up unexpectedly
This is .git/config:
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
ignorecase = true
[remote "origin"]
fetch = +refs/heads/*:refs/remotes/origin/*
url = /Users/Bryan/work/my_app/.
[branch "master"]
remote = origin
merge = refs/heads/master
A:
On your server, create a folder called myapp. Chdir to this folder, and then run
server ~/myapp$ git init
Then, let git know about your server. After this, push to the server's repository from your local machine.
local ~/myapp$ git remote add origin user@server:~/myapp
local ~/myapp$ git push origin master
Anytime you want to push changes to your server, just run git push. If you make a mistake, just log in to your server and git checkout last-known-good-commit or something to that effect.
Git hooks are also very useful in situations such as the one you're facing. I would give you pointers on that but I don't know what your workflow is like, so it probably wouldn't be very helpful.
|
Properly specifying path for git pull from my local development machine
|
I'm trying to setup Fabric so that I can automatically deploy my Django app to my web server.
What I want to do is to pull the data from my Development machine (os X) to the server.
How do I correctly specify my path in the git url?
This is the error I'm getting:
$ git pull
fatal: '/Users/Bryan/work/tempReview_app/.': unable to chdir or not a git archive
fatal: The remote end hung up unexpectedly
This is .git/config:
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
ignorecase = true
[remote "origin"]
fetch = +refs/heads/*:refs/remotes/origin/*
url = /Users/Bryan/work/my_app/.
[branch "master"]
remote = origin
merge = refs/heads/master
|
[
"On your server, create a folder called myapp. Chdir to this folder, and then run\nserver ~/myapp$ git init\n\nThen, let git know about your server. After this, push to the server's repository from your local machine.\nlocal ~/myapp$ git remote add origin user@server:~/myapp.git\nlocal ~/myapp$ git push origin master\n\nAnytime you want to push changes to your server, just run git push. If you make a mistake, just log in to your server and git co last-known-good-commit or something to that effect.\nGit hooks are also very useful in situations such as the one you're facing. I would give you pointers on that but I don't know what your workflow is like, so it probably wouldn't be very helpful.\n"
] |
[
1
] |
[] |
[] |
[
"git",
"python"
] |
stackoverflow_0001868393_git_python.txt
|
Q:
Message instance has no attribute 'read' in Google app engine mail receive
Code in receive handler
class LogSenderHandler(InboundMailHandler):
def receive(self, mail_message):
logging.info("Received a message from: " + mail_message.sender)
#logging.info("Received a message from: " + mail_message.attachments)
logging.info("Received a message from: " + mail_message.date)
logging.info("Received a message from: " + mail_message.subject)
report = DocFile()
report.doc_name = mail_message.subject
if mail_message.attachments is not None:
report.doc_file = mail_message.attachments
else:
report.doc_file = mail_message.bodies(content_type='text/plain')
report.put()
application = webapp.WSGIApplication([LogSenderHandler.mapping()], debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
code in url.py
inbound_services:
- mail
handlers:
- url: /_ah/mail/.+
script: handle_incoming_email.py
Error when I try to send a simple email from http://localhost:8080/_ah/admin/inboundmail
Message send failure
Traceback (most recent call last):
File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 509, in __call__
handler.post(*groups)
File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\mail_handlers.py", line 58, in post
self.receive(mail.InboundEmailMessage(self.request.body))
File "F:\Program Files\Google\google_appengine\google\appengine\api\mail.py", line 547, in __init__
self.update_from_mime_message(mime_message)
File "F:\Program Files\Google\google_appengine\google\appengine\api\mail.py", line 1081, in update_from_mime_message
mime_message = _parse_mime_message(mime_message)
File "F:\Program Files\Google\google_appengine\google\appengine\api\mail.py", line 232, in _parse_mime_message
return email.message_from_file(mime_message)
File "F:\Python25\lib\email\__init__.py", line 66, in message_from_file
return Parser(*args, **kws).parse(fp)
File "F:\Python25\lib\email\parser.py", line 68, in parse
data = fp.read(8192)
AttributeError: Message instance has no attribute 'read'
EDIT
This error comes only on local machine and not on app engine
A:
Do you have the latest version of the API? For the incoming mail functionality you need version 1.2.6 or greater.
Later I saw the "solution" in the Google groups; I quote Joshua Smith:
"I've found that you need to restart the local dev environment before
doing any inbound mail testing. Otherwise, you'll get that read error"
So every time you change your code you MUST restart the server, even if you only add whitespace (seriously).
|
Message instance has no attribute 'read' in Google app engine mail receive
|
Code in receive handler
class LogSenderHandler(InboundMailHandler):
def receive(self, mail_message):
logging.info("Received a message from: " + mail_message.sender)
#logging.info("Received a message from: " + mail_message.attachments)
logging.info("Received a message from: " + mail_message.date)
logging.info("Received a message from: " + mail_message.subject)
report = DocFile()
report.doc_name = mail_message.subject
if mail_message.attachments is not None:
report.doc_file = mail_message.attachments
else:
report.doc_file = mail_message.bodies(content_type='text/plain')
report.put()
application = webapp.WSGIApplication([LogSenderHandler.mapping()], debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
code in url.py
inbound_services:
- mail
handlers:
- url: /_ah/mail/.+
script: handle_incoming_email.py
Error when I try to send a simple email from http://localhost:8080/_ah/admin/inboundmail
Message send failure
Traceback (most recent call last):
File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 509, in __call__
handler.post(*groups)
File "F:\Program Files\Google\google_appengine\google\appengine\ext\webapp\mail_handlers.py", line 58, in post
self.receive(mail.InboundEmailMessage(self.request.body))
File "F:\Program Files\Google\google_appengine\google\appengine\api\mail.py", line 547, in __init__
self.update_from_mime_message(mime_message)
File "F:\Program Files\Google\google_appengine\google\appengine\api\mail.py", line 1081, in update_from_mime_message
mime_message = _parse_mime_message(mime_message)
File "F:\Program Files\Google\google_appengine\google\appengine\api\mail.py", line 232, in _parse_mime_message
return email.message_from_file(mime_message)
File "F:\Python25\lib\email\__init__.py", line 66, in message_from_file
return Parser(*args, **kws).parse(fp)
File "F:\Python25\lib\email\parser.py", line 68, in parse
data = fp.read(8192)
AttributeError: Message instance has no attribute 'read'
EDIT
This error comes only on local machine and not on app engine
|
[
"Do you have the latest version of the API? for the incoming mail function need to be the 1.2.6 or greatest.\nLater i saw in google groups the \"solution\" I quote Joshua Smith\n\"I've found that you need to restart the local dev environment before\ndoing any inbound mail testing. Otherwise, you'll get that read error\"\nSo every time that you change your code you MUST restart the server, even if only add white spaces (seriously)\n"
] |
[
0
] |
[] |
[] |
[
"email",
"google_app_engine",
"python"
] |
stackoverflow_0001862041_email_google_app_engine_python.txt
|