Dataset schema: content (string, 85 to 101k chars); title (string, 0 to 150 chars); question (string, 15 to 48k chars); answers (list); answers_scores (list); non_answers (list); non_answers_scores (list); tags (list); name (string, 35 to 137 chars).
Python: Memory usage and optimization when modifying lists
The problem

My concern is the following: I am storing a relatively large dataset in a classical Python list, and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list.

It seems that deleting one item out of a Python list costs O(N) since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm.

I am hoping to find a solution that is cost effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate?

Keeping a local index:

    while processingdata:
        index = 0
        while index < len(somelist):
            item = somelist[index]
            dosomestuff(item)
            if somecondition(item):
                del somelist[index]
            else:
                index += 1

This is the original solution I came up with. Not only is it not very elegant, but I am hoping there is a better way to do it that remains time and memory efficient.

Walking the list backwards:

    while processingdata:
        for i in xrange(len(somelist) - 1, -1, -1):
            item = somelist[i]
            dosomestuff(item)
            if somecondition(item):
                somelist.pop(i)

This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wishes to process the items in the same order as they appear in the original list.

Making a new list:

    while processingdata:
        for i, item in enumerate(somelist):
            dosomestuff(item)
        newlist = []
        for item in somelist:
            if not somecondition(item):
                newlist.append(item)
        somelist = newlist
        gc.collect()

This is a very naive strategy for eliminating elements from a list, and it requires lots of memory since an almost full copy of the list must be made.

Using list comprehensions:

    while processingdata:
        for i, item in enumerate(somelist):
            dosomestuff(item)
        somelist[:] = [x for x in somelist if not somecondition(x)]

This is very elegant, but under the covers it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge and that any solution that iterates through it only once per run will probably always win.

Using the filter function:

    while processingdata:
        for i, item in enumerate(somelist):
            dosomestuff(item)
        somelist = filter(lambda x: not somecondition(x), somelist)

This also creates a new list occupying lots of RAM.

Using itertools' ifilterfalse function:

    from itertools import ifilterfalse
    while processingdata:
        for item in ifilterfalse(somecondition, somelist):
            dosomestuff(item)

This version of the filter call does not create a new list, but it will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list.

Moving items up the list while walking:

    while processingdata:
        index = 0
        for item in somelist:
            dosomestuff(item)
            if not somecondition(item):
                somelist[index] = item
                index += 1
        del somelist[index:]

This is a subtle method that seems cost effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.

Abandoning Python lists:

    class Doubly_Linked_List:
        def __init__(self):
            self.first = None
            self.last = None
            self.n = 0

        def __len__(self):
            return self.n

        def __iter__(self):
            return DLLIter(self)

        def iterator(self):
            return self.__iter__()

        def append(self, x):
            x = DLLElement(x)
            x.next = None
            if self.last is None:
                x.prev = None
                self.last = x
                self.first = x
                self.n = 1
            else:
                x.prev = self.last
                x.prev.next = x
                self.last = x
                self.n += 1

    class DLLElement:
        def __init__(self, x):
            self.next = None
            self.data = x
            self.prev = None

    class DLLIter:
        # etc...

This type of object resembles a Python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here since this would require massive amounts of code refactoring almost everywhere.
[ "Without knowing the specifics of what you're doing with this list, it's hard to know exactly what would be best in this case. If your processing stage depends on the current index of the list element, this won't work, but if not, it appears you've left off the most Pythonic (and in many ways, easiest) approach: generators.\nIf all you're doing is iterating over each element, processing it in some way, then either including that element in the list or not, use a generator. Then you never need to store the entire iterable in memory.\ndef process_and_generate_data(source_iterable):\n for item in source_iterable:\n dosomestuff(item)\n if not somecondition(item):\n yield item\n\nYou would need to have a processing loop that dealt with persisting the processed iterable (writing it back to a file, or whatever), or if you have multiple processing stages you'd prefer to separate into different generators you could have your processing loop pass one generator to the next.\n", "From your description it sounds like a deque (\"deck\") would be exactly what you are looking for:\nhttp://docs.python.org/library/collections.html#deque-objects\n\"Iterate\" across it by repeatedly calling pop() and then, if you want to keep the popped item in the deque, returning that item to the front with appendleft(item). To keep up with when you're done iterating and have seen everything in the deque, either put in a marker object like None that you watch for, or just ask for the deque's len() when you start a particular loop and use range() to pop() exactly that many items.\nI believe you will find all of the operations you need are then O(1).\n", "Python stores only references to objects in the list - not the elements themselves. If you grow a list item by item, the list (that is the list of references to the objects) will grow one by one, eventually reaching the end of the excess memory that Python preallocated at the end of the list (of references!). It then copies the list (of references!) into a new larger place while your list elements stay at their old location. As your code visits all the elements in the old list anyway, copying the references to a new list by new_list[i]=old_list[i] will be nearly no burden at all. The only performance hint is to allocate all new elements at once instead of appending them (OTOH the Python docs say that amortized append is still O(1) as the number of excess elements grows with the list size). If you are lacking the place for the new list (of references) then I fear you are out of luck - any data structure that would evade the O(n) in-place insert/delete will likely be bigger than a simple array of 4- or 8-byte entries.\n", "A doubly linked list is worse than just reallocating the list. A Python list uses 5 words + one word per element. A doubly linked list will use 5 words per element. Even if you use a singly linked list, it's still going to be 4 words per element - a lot worse than the less than 2 words per element that rebuilding the list will take.\nFrom memory usage perspective, moving items up the list and deleting the slack at the end is the best approach. Python will release the memory if the list gets less than half full. The question to ask yourself is, does it really matter. The list entries probably point to some data, unless you have lots of duplicate objects in the list, the memory used for the list is insignificant compared to the data. 
Given that, you might just as well build a new list.\nFor building a new list, the approach you suggested is not that good. There's no apparent reason why you couldn't just go over the list once. Also, calling gc.collect() is unnecessary and actually harmful - the CPython reference counting will release the old list immediately anyway, and even the other garbage collectors are better off collecting when they hit memory pressure. So something like this will work:\nwhile processingdata:\n    retained = []\n    for item in somelist:\n        dosomething(item)\n        if not somecondition(item):\n            retained.append(item)\n    somelist = retained\n\nIf you don't mind using side effects in list comprehensions, then the following is also an option:\ndef process_and_decide(item):\n    dosomething(item)\n    return not somecondition(item)\n\nwhile processingdata:\n    somelist = [item for item in somelist if process_and_decide(item)]\n\nThe in-place method can also be refactored so the mechanism and business logic are separated:\ndef inplace_filter(func, list_):\n    pos = 0\n    for item in list_:\n        if func(item):\n            list_[pos] = item\n            pos += 1\n    del list_[pos:]\n\nwhile processingdata:\n    inplace_filter(process_and_decide, somelist)\n\n", "You do not provide enough information for me to answer this question really well. I don't know your use case well enough to tell you what data structures will get you the time complexities you want if you have to optimize for time. The typical solution is to build a new list rather than repeated deletions, but obviously this doubles(ish) memory usage.\nIf you have memory usage issues, you might want to abandon using in-memory Python constructs and go with an on-disk database. Many databases are available and sqlite ships with Python. Depending on your usage and how tight your memory requirements are, array.array or numpy might help you, but this is highly dependent on what you need to do. array.array will have all the same time complexities as list and numpy arrays sort of will but work in some different ways. Using lazy iterators (like generators and the stuff in the itertools module) can often reduce memory usage by a factor of n.\nUsing a database will improve time to delete items from arbitrary locations (though order will be lost if this is important). Using a dict will do the same, but potentially at high memory usage.\nYou can also consider blist as a drop-in replacement for a list that might get some of the compromises you want. I don't believe it will drastically increase memory usage, but it will change item removal to O(log n). This comes at the cost of making other operations more expensive, of course.\nI would have to see testing to believe that the constant factor for memory use for your doubly linked list implementation would be less than the 2 that you get by simply creating a new list. I really doubt it.\nYou will have to share more about your problem class for a more concrete answer, I think, but the general advice is\n\nIterate over a list building a new list as you go along (or using a generator to yield the items when you need them). If you actually need a list, this will have a memory factor of 2, which scales fine but doesn't help if you are short on memory, period.\nIf you are running out of memory, rather than micro-optimization you probably want an on-disk database or to store your data in a file.\n\n", "Brandon Craig Rhodes suggests using a collections.deque, which can suit this problem: no additional memory is required for the operation and it is kept O(n). 
I do not know the total memory usage and how it compares to a list; it's worth noting that a deque has to store a lot more references, and I would not be surprised if it is as memory intensive as using two lists. You would have to test or study it to know yourself.\nIf you were to use a deque, I would deploy it slightly differently than Rhodes suggests:\nfrom collections import deque\nd = deque(range(30))\nn = deque()\n\nprint d\n\nwhile True:\n    try:\n        item = d.popleft()\n    except IndexError:\n        break\n\n    if item % 3 != 0:\n        n.append(item)\n\nprint n\n\nThere is no significant memory difference doing it this way, but there's a lot less opportunity to flub up than mutating the same deque as you go.\n" ]
[ 6, 4, 3, 3, 2, 2 ]
[]
[]
[ "iteration", "list", "memory", "optimization", "python" ]
stackoverflow_0002631053_iteration_list_memory_optimization_python.txt
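To make the asymptotic claims in the thread above concrete, here is a minimal, self-contained benchmark sketch. It is my own illustration, written in Python 3 (the thread's code is Python 2), and the drop-every-even-item condition and sizes are made-up stand-ins for somecondition and the real data. It pits the O(N^2) delete-in-place loop against the O(N) move-items-up compaction:

    import timeit

    def delete_in_place(n):
        somelist = list(range(n))
        index = 0
        while index < len(somelist):
            if somelist[index] % 2 == 0:   # stand-in for somecondition(item)
                del somelist[index]        # O(N) shift on every deletion
            else:
                index += 1
        return somelist

    def compact_in_place(n):
        somelist = list(range(n))
        pos = 0
        for item in somelist:
            if item % 2 != 0:              # keep items that fail the condition
                somelist[pos] = item       # each kept item moves exactly once
                pos += 1
        del somelist[pos:]                 # trim the slack at the end
        return somelist

    for n in (10000, 20000, 40000):
        t_del = timeit.timeit(lambda: delete_in_place(n), number=5)
        t_cmp = timeit.timeit(lambda: compact_in_place(n), number=5)
        print(n, round(t_del, 3), round(t_cmp, 3))

On CPython you should see the first column of timings grow roughly 4x for each doubling of n, and the second roughly 2x, matching the O(N^2) versus O(N) analysis.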
Convert list to sequence of variables
I was wondering if this was possible... I have a sequence of value pairs that have to be assigned to the a and b parameters of a do.something(a, b) call, one pair after the other. Something like this:

    # Have a list of sequenced variables.
    list = 2:90, 1:140, 3:-40, 4:60

    # "Template" on where to assign the variables from the list.
    do.something(a, b)

    # Assign the variables from the list in a sequence, with the possibility of
    # "in between" calls like print and time.sleep() added.
    do.something(2, 90)
    time.sleep(1)
    print "Did something (%d,%d)" % (# vars from list?)
    do.something(1, 140)
    time.sleep(1)
    print "Did something (%d,%d)" % (# vars from list?)
    do.something(3, -40)
    time.sleep(1)
    print "Did something (%d,%d)" % (# vars from list?)
    do.something(4, 60)
    time.sleep(1)
    print "Did something (%d,%d)" % (# vars from list?)

Any ideas?
[ "arglist = [(2, 90), (1, 140), (3, -40), (4, 60)]\nfor args in arglist:\n do.something(*args)\n time.sleep(1)\n print \"Did something (%d,%d)\" % args\n\nThe * means \"use the values in this tuple as the arguments\". Hint: you can do the same with keyword arguments by using ** and a dict.\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0002633609_python.txt
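The answer's closing hint about ** and dicts can be made concrete with a short sketch. This is my own illustration in Python 3 (the answer is Python 2), and the do-nothing something() function and its parameter names are invented for the example:

    import time

    def something(a, b):
        print("doing something with", a, b)

    calls = [{"a": 2, "b": 90}, {"a": 1, "b": 140}, {"a": 3, "b": -40}, {"a": 4, "b": 60}]

    for kwargs in calls:
        something(**kwargs)      # dict keys become keyword arguments
        time.sleep(1)
        print("Did something (%d,%d)" % (kwargs["a"], kwargs["b"]))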
Python calling class methods with the wrong number of parameters
I'm just beginning to learn Python. I wrote an example script to test OOP in Python, but something very odd has happened. When I call a class method, Python is calling the function with one more parameter than given. Here is the code:

    1.  class Bar:
    2.      num1,num2 = 0,0
    3.      def __init__(num1,num2):
    4.          num1,num2 = num1,num2
    5.      def foo(self):
    6.          if num1 > num2:
    7.              print num1,'is greater than ',num2,'!'
    8.          elif num1 is num2:
    9.              print num1,' is equal to ',num2,'!'
    10.         else:
    11.             print num1,' is less than ',num2,'!'
    12. a,b = 42,84
    13. t = Bar(a,b)
    14. t.foo
    15.
    16. t.num1 = t.num1^t.num2
    17. t.num2 = t.num2^t.num1
    18. t.num1 = t.num1^t.num2
    19.
    20. t.foo

And the error message I get:

    python test.py
    Traceback (most recent call last):
      File "test.py", line 13, in <module>
        t = Bar(a,b)
    TypeError: __init__() takes exactly 2 arguments (3 given)

Can anyone help? Thanks in advance.
[ "The first argument passed to an instance method is the instance itself. Typically this is called self when defining the function:\n def __init__(self, num1, num2):\n\nConsider reading the tutorial.\n", "Couple things:\n\nYour class is named Bar but you're calling it as bar(a, b). Change that to Bar(a, b) to solve this problem.\nClasses need to inherit from something (if nothing, then object). You need class Bar(object):\nInstance methods, in Python, are always supplied one parameter, which is the object itself, before other passed parameters. So your def __init__(num1, num2): should be def __init__(self, num1, num2):, and the same for def foo().\nAll of your instance variables need to be prefixed by self.. So num1 should be self.num1, etc.\nThe ^ operator is boolean XOR. Not sure if this is what you want, it's often confused with the power operator, **.\n\nHere's your example, cleaned up and fixed accordingly:\nclass Bar(object):\n num1, num2 = 0, 0\n def __init__(self, num1, num2):\n self.num1, self.num2 = num1, num2\n\n def foo(self):\n if self.num1 > self.num2:\n print self.num1,'is greater than ',self.num2,'!'\n elif self.num1 is self.num2:\n print self.num1,' is equal to ',self.num2,'!'\n else:\n print self.num1,' is less than ',self.num2,'!'\n\na, b = 42, 84\nt = Bar(a, b)\nt.foo()\n\nt.num1 = t.num1 ^ t.num2\nt.num2 = t.num2 ^ t.num1\nt.num1 = t.num1 ^ t.num2\n\nt.foo()\n\nAnd the result:\n42 is less than 84 !\n84 is greater than 42 !\n\n", "a) By convention, the first parameter of a method is called self.\nb) On line 4 you are self-assigning. Maybe you want to say self.num1, self.num2\nc) If you want to call the method foo of t (lines 14 and 20) you should add parentheses at the end: t.foo()\nd) The indentation is idiomatically given by 4 spaces, which makes reading much easier.\nEDIT: You might want to look at chapters 15-18 of Allen Downey's book \"Think Python: How to Think Like a Computer Scientist\". This book is very short, nicely written, and easy to read. It is freely available here.\nEDIT2: I hadn't noticed this before, but as dash-tom-bang pointed out in a comment below, in this context it is best if (on line 8) you compared for equality using == instead of is.\n", "Python identifiers are case-sensitive... bar != Bar...\nAlso, you need to explicitly declare self as the first argument of the __init__() method, something like:\ndef __init__(self, num1, num2):\n #...etc.\n\nBTW, see bcherry's answer as he/she covers other typical Python beginner mistakes (such as not explicitly prefixing the instance variables with self....\n", "In regards to that particular error message, you're missing the self argument on line 3. It should read:\ndef __init__(self, num1,num2):\n\nUnlike instance methods in other OOP languages, in Python, you have to explicitly name the instance as the first argument when defining such a method. Then, when you call obj.some_method(), obj is automatically and implicitly passed as the first argument to some_method.\nThe first argument is idiomatically named self, but nothing is stopping you from naming it anything else.\n" ]
[ 7, 5, 3, 1, 1 ]
[]
[]
[ "methods", "oop", "parameters", "python" ]
stackoverflow_0002633775_methods_oop_parameters_python.txt
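To see why the interpreter reports one more argument than you wrote, it helps to remember that obj.method(...) is sugar for Class.method(obj, ...). A minimal sketch of this, mine rather than from the thread, written in Python 3:

    class Bar:
        def __init__(self, num1, num2):
            self.num1, self.num2 = num1, num2

        def foo(self):
            print(self.num1, self.num2)

    t = Bar(42, 84)   # the new instance is passed implicitly as self
    t.foo()           # implicit form: prints 42 84
    Bar.foo(t)        # equivalent explicit form: prints the same thing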
XCode 3.2 Ruby and Python templates
Under Xcode 3.2 my Objective-C + Python/Ruby projects can still be opened, updated and compiled, but you cannot create new projects. Given that all traces of Ruby and Python are missing from Xcode 3.2 (i.e. create project and add new Ruby/Python file), is there an easy way to get the templates installed again? I found some info about copying them into a folder somewhere, but I can't seem to get it to work; I suspect the folder location has changed for 3.2.
[ "The folder for application templates in 3.2 is:\n/Developer/Library/Xcode/Project Templates/Application\nTemplates for python are at:\nhttp://svn.red-bean.com/pyobjc/trunk/pyobjc/pyobjc-xcode/Project%20Templates/\nuse:\n$svn co <address of template you want> /Developer/Library/Xcode/Project Templates/Application/<Folder you want it in>\n\ne.g.\n$svn co http://svn.red-bean.com/pyobjc/trunk/pyobjc/pyobjc-xcode/Project%20Templates/Cocoa-Python%20Document-based%20Application/ /Developer/Library/Xcode/Project\\ Templates/Application/Cocoa-Python\\ NSDocument\\ based\\ Application\n\n", "Here's the word on this from Chris Espinosa on the Xcode-Users mailing list:\n\nWe are deemphasizing Cocoa-Python and\n Cocoa-Ruby, though existing project\n will continue to build in Xcode. You\n can duplicate one of your existing\n projects and use the new Rename\n command to start a new project.\nBugs filed against the removal of\n these templates will be duplicated to\n No\n Python/Ruby templates in Xcode, and\n we'll use that bug to gauge the need\n for that support in the future.\n\nI'd say file a bug report at https://bugreport.apple.com to voice your opinion on the subject.\n", "Beginning with Xcode 3.2, Apple decided to not include project and file templates from 3rd party projects (including PyObjC, RubyCocoa or MacRuby). Since these template files were often updated more frequently than Xcode's release cycle, the templates shipped with Xcode were often out of date. Developers are now encouraged to install the templates directly from those projects' repositories. PyObjC templates are currently available only in SVN, though the PyObjC devs intend to make them available on the website \"soon\". This question details how to install new templates.\n", "\nDownload the Cocoa-Ruby Templates to your '/Developer/Library/Xcode/Project Templates/Application/Ruby Application/' and '/Developer/Library/Xcode/File Templates/Ruby/' directories. \nInstall the latest version of Ruby Cocoa.\n\nMore info can be found on the Ruby Cocoa website.\n", "The above coments were all good and helpful but didn't really help. Easiest thing I could find is to create an empty python/ruby project in xcode 3.1, then make copies of this project folder for every new project you work on.\nWhen you open the new/blank project, 3.2 has a new feature that lets you rename the project so you can have proper names for eacn new project.\n" ]
[ 6, 5, 3, 1, 0 ]
[]
[]
[ "cocoa", "pyobjc", "python", "ruby", "xcode" ]
stackoverflow_0001382252_cocoa_pyobjc_python_ruby_xcode.txt
Efficient JSON encoding for data that may be binary, but is often text
I need to send a JSON packet across the wire with the contents of an arbitrary file. This may be a binary file (like a ZIP file), but most often it will be plain ASCII text. I'm currently using base64 encoding, which handles all files, but it increases the size of the data significantly - even if the file is ASCII to begin with. Is there a more efficient way I can encode the data, other than manually checking for any non-ASCII characters and then deciding whether or not to base64-encode it? I'm currently writing this in Python, but will probably need to do the same in Java, C# and C++, so an easily portable solution would be preferable.
[ "Use quoted-printable encoding. Any language should support that.\nhttp://en.wikipedia.org/wiki/Quoted-printable\n" ]
[ 2 ]
[]
[]
[ "c++", "encoding", "java", "json", "python" ]
stackoverflow_0002634135_c++_encoding_java_json_python.txt
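Besides the quoted-printable suggestion, the switch the question itself describes (send text as-is, fall back to base64 only for binary data) is easy to make explicit. Here is my own sketch in Python 3; the packet field names are invented for illustration, not an established format:

    import base64
    import json

    def encode_payload(data: bytes) -> str:
        try:
            text = data.decode("ascii")          # cheap path for plain ASCII files
            body = {"encoding": "ascii", "data": text}
        except UnicodeDecodeError:
            body = {"encoding": "base64",
                    "data": base64.b64encode(data).decode("ascii")}
        return json.dumps(body)

    def decode_payload(packet: str) -> bytes:
        body = json.loads(packet)
        if body["encoding"] == "ascii":
            return body["data"].encode("ascii")
        return base64.b64decode(body["data"])

    assert decode_payload(encode_payload(b"hello")) == b"hello"
    assert decode_payload(encode_payload(bytes([0, 255, 10]))) == bytes([0, 255, 10])

The small "encoding" field costs almost nothing and keeps the common ASCII case at roughly its original size, while binary payloads pay the usual ~33% base64 overhead.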
Annoying Twisted Python problem
I'm trying to answer the following question out of personal interest: What is the fastest way to send 100,000 HTTP requests in Python? And this is what I have come up with so far, but I'm experiencing something very strange. When installSignalHandlers is True, it just hangs. I can see that the DelayedCall instances are in reactor._newTimedCalls, but processResponse never gets called. When installSignalHandlers is False, it throws an error and works.

    from twisted.internet import reactor
    from twisted.web.client import Agent
    from threading import Semaphore, Thread
    import time

    concurrent = 100
    s = Semaphore(concurrent)
    reactor.suggestThreadPoolSize(concurrent)

    t = Thread(target=reactor.run, kwargs={'installSignalHandlers': True})
    t.daemon = True
    t.start()

    agent = Agent(reactor)

    def processResponse(response, url):
        print response.code, url
        s.release()

    def processError(response, url):
        print "error", url
        s.release()

    def addTask(url):
        req = agent.request('HEAD', url)
        req.addCallback(processResponse, url)
        req.addErrback(processError, url)

    for url in open('urllist.txt'):
        addTask(url.strip())
        s.acquire()

    while s._Semaphore__value != concurrent:
        time.sleep(0.1)

    reactor.stop()

And here is the error that it throws when installSignalHandlers is True: (Note: This is the expected behaviour! The question is why it doesn't work when installSignalHandlers is False.)

    Traceback (most recent call last):
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 396, in fireEvent
        DeferredList(beforeResults).addCallback(self._continueFiring)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 224, in addCallback
        callbackKeywords=kw)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 213, in addCallbacks
        self._runCallbacks()
      File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 371, in _runCallbacks
        self.result = callback(self.result, *args, **kw)
    --- <exception caught here> ---
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 409, in _continueFiring
        callable(*args, **kwargs)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 1165, in _reallyStartRunning
        self._handleSignals()
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 1105, in _handleSignals
        signal.signal(signal.SIGINT, self.sigInt)
    exceptions.ValueError: signal only works in main thread

What am I doing wrong, and what is the right way? I'm new to Twisted.

@moshez: Thanks. It works now:

    from twisted.internet import reactor, threads
    from urlparse import urlparse
    import httplib
    import itertools

    concurrent = 100
    finished = itertools.count(1)
    reactor.suggestThreadPoolSize(concurrent)

    def getStatus(ourl):
        url = urlparse(ourl)
        conn = httplib.HTTPConnection(url.netloc)
        conn.request("HEAD", url.path)
        res = conn.getresponse()
        return res.status

    def processResponse(response, url):
        print response, url
        processedOne()

    def processError(error, url):
        print "error", url  # , error
        processedOne()

    def processedOne():
        if finished.next() == added:
            reactor.stop()

    def addTask(url):
        req = threads.deferToThread(getStatus, url)
        req.addCallback(processResponse, url)
        req.addErrback(processError, url)

    added = 0
    for url in open('urllist.txt'):
        added += 1
        addTask(url.strip())

    try:
        reactor.run()
    except KeyboardInterrupt:
        reactor.stop()
[ "You're using waaaaay too much \"reactor calls\" (for example, there's a good chance that agent.request calls into the reactor) from the main thread. I'm not sure if that's your problem, but it's still not supported -- the only reactor calls to make from the non-reactor thread is reactor.callFromThread.\nAlso, the whole architecture seems strange. Why are you not running the reactor on the main thread? Reading a whole file with 10,000 requests, and splitting them, should not be a problem to do from the reactor, even if you do it all at once.\nYou can probably hit a pure-Twisted solution not using any threads.\n" ]
[ 6 ]
[]
[]
[ "python", "reactor", "twisted" ]
stackoverflow_0002634272_python_reactor_twisted.txt
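For completeness, here is one shape the "pure-Twisted solution not using any threads" mentioned in the answer could take, with twisted.internet.defer.DeferredSemaphore capping concurrency instead of threading.Semaphore. This is my own sketch (kept in Python 2 to match the question's code), not code from the thread:

    from twisted.internet import defer, reactor
    from twisted.web.client import Agent

    concurrent = 100
    sem = defer.DeferredSemaphore(concurrent)
    agent = Agent(reactor)

    def processResponse(response, url):
        print response.code, url

    def processError(failure, url):
        print "error", url

    def fetch(url):
        d = agent.request('HEAD', url)
        d.addCallback(processResponse, url)
        d.addErrback(processError, url)
        return d

    urls = [line.strip() for line in open('urllist.txt')]
    # sem.run acquires a token, runs fetch, and releases when its Deferred fires,
    # so at most `concurrent` requests are in flight at once.
    work = [sem.run(fetch, url) for url in urls]
    defer.DeferredList(work).addBoth(lambda _: reactor.stop())
    reactor.run()

Everything runs on the reactor (main) thread, so the signal-handler problem never arises, and no semaphore polling loop is needed.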
Google App Engine : PolyModel + SelfReferenceProperty
Is a PolyModel-based class able to be used as a SelfReferenceProperty? I have the code below:

    class BaseClass(polymodel.PolyModel):
        attribute1 = db.IntegerProperty()
        attribute2 = db.StringProperty()

    class ParentClass(BaseClass):
        attribute3 = db.StringProperty()

    class ChildClass(BaseClass):
        parent = db.SelfReferenceProperty(collection_name = 'children')

    p = ParentClass()
    p.attribute1 = 1
    p.attribute2 = "Parent Description"
    p.attribute3 = "Parent additional data"
    p.put()

    c = ChildClass()
    c.attribute1 = 5
    c.attribute2 = "Child Description"
    c.parent = p.key()
    c.put()

I execute this code and check the datastore via the development server's admin interface. The parent instance is saved to the datastore (with class = 'BaseClass,ParentClass'), but the child is not. There is no error output to the browser (debug is turned on) and nothing in the launcher's log for my app. Is this possible to do?
[ "It's a lie to say I changed nothing here. I actually had to change \"parent\" attribute to \"parent_ref\". Also the references didn't work as I expected until I changed from SelfReferenceProperty to ReferenceProperty(Parent, collection_name = 'children')\nBut the end result is that polymorphic self-referencing does work.\n" ]
[ 0 ]
[]
[]
[ "google_app_engine", "polymodel", "python" ]
stackoverflow_0002634101_google_app_engine_polymodel_python.txt
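The answer describes the fix in prose only; here is a sketch of what the reworked model might look like. This is my reconstruction, and in particular pointing ReferenceProperty at BaseClass is my reading of "Parent" in the answer:

    from google.appengine.ext import db
    from google.appengine.ext.db import polymodel

    class BaseClass(polymodel.PolyModel):
        attribute1 = db.IntegerProperty()
        attribute2 = db.StringProperty()

    class ChildClass(BaseClass):
        # Renamed from "parent", which collides with the datastore's built-in
        # parent/ancestor concept on db.Model, and switched to an explicit
        # ReferenceProperty so the polymorphic base class is the target.
        parent_ref = db.ReferenceProperty(BaseClass, collection_name='children')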
python webbrowser.open(url)
    httpd = make_server('', 80, server)
    webbrowser.open(url)
    httpd.serve_forever()

This works cross-platform, except when I launch it in a PuTTY SSH terminal. How can I trick the console into opening the w3m browser in a separate process so the script can continue to launch the server? Or, if that is not possible, how can I skip webbrowser.open when running in a shell without X?
[ "Maybe use threads? Either put the server setup separate from the main thread or the browsweropen instead as in:\nimport threading\nimport webbrowser\n\ndef start_browser(server_ready_event, url):\n print \"[Browser Thread] Waiting for server to start\"\n server_ready_event.wait()\n print \"[Browser Thread] Opening browser\"\n webbrowser.open(url)\n\nurl = \"someurl\"\nserver_ready = threading.Event()\nbrowser_thread = threading.Thread(target=start_browser, args=(server_ready, url))\nbrowser_thread.start()\n\nprint \"[Main Thread] Starting server\"\nhttpd = make_server('', 80, server)\nprint \"[Main Thread] Server started\"\nserver_ready.set()\n\nhttpd.serve_forever()\nbrowser_thread.join()\n\n(putting the server setup in the main thread lets it catch ctrl+c events i think)\n", "Defining the BROWSER environment variable in a login script to something like w3m should fix the problem.\nEdit: I realize that you don't want your script to block while the browser is running.\nIn that case perhaps something simple like:\nBROWSER=\"echo Please visit %s with a web browser\" would work better.\n", "According to the Python docs:\n\nUnder Unix, graphical browsers are preferred under X11, but text-mode browsers will be used if graphical browsers are not available or an X11 display isn’t available. If text-mode browsers are used, the calling process will block until the user exits the browser.\n\nSo you will need to detect if you are in a console-only environment, and take an appropriate action such as NOT opening the browser.\nAlternatively, you might be able to define the BROWSER environment variable - as Alexandre suggests - and have it run a script that either does nothing or opens the browser in the background via &.\n" ]
[ 6, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002634235_python.txt
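One way to act on the last answer's advice (detect a console-only environment before opening anything) is to check for the DISPLAY environment variable before calling webbrowser.open. A minimal sketch; the helper name is mine, and the check is a Unix-only heuristic:

import os
import webbrowser

def open_browser_if_possible(url):
    # Only call webbrowser.open when an X display is available. Without
    # $DISPLAY (e.g. a bare PuTTY/SSH session) the stdlib falls back to a
    # text-mode browser such as w3m and blocks, so print the URL instead
    # and let the server start.
    if os.environ.get("DISPLAY"):
        webbrowser.open(url)
    else:
        print "Please open %s in a web browser" % url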
Q: Using __str__ representation for printing objects in containers I've noticed that when an instance with an overloaded __str__ method is passed to the print function as an argument, it prints as intended. However, when passing a container that contains one of those instances to print, it uses the __repr__ method instead. That is to say, print(x) displays the correct string representation of x, and print(x, y) works correctly, but print([x]) or print((x, y)) prints the __repr__ representation instead. First off, why does this happen? Secondly, is there a way to correct that behavior of print in this circumstance? A: The problem with the container using the objects' __str__ would be the total ambiguity -- what would it mean, say, if print L showed [1, 2]? L could be ['1, 2'] (a single item list whose string item contains a comma) or any of four 2-item lists (since each item can be a string or int). The ambiguity of type is common for print of course, but the total ambiguity for number of items (since each comma could be delimiting items or part of a string item) was the decisive consideration. A: I'm not sure why exactly the __str__ method of a list returns the __repr__ of the objects contained within - so I looked it up: [Python-3000] PEP: str(container) should call str(item), not repr(item) Arguments for it: -- containers refuse to guess what the user wants to see on str(container) - surroundings, delimiters, and so on; -- repr(item) usually displays type information - apostrophes around strings, class names, etc. So it's more clear about what exactly is in the list (since the object's string representation could have commas, etc.). The behavior is not going away, per Guido "BDFL" van Rossum: Let me just save everyone a lot of time and say that I'm opposed to this change, and that I believe that it would cause way too much disturbance to be accepted this close to beta. Now, there are two ways to resolve this issue for your code. The first is to subclass list and implement your own __str__ method. class StrList(list): def __str__(self): string = "[" for index, item in enumerate(self): string += str(item) if index != len(self)-1: string += ", " return string + "]" class myClass(object): def __str__(self): return "myClass" def __repr__(self): return object.__repr__(self) And now to test it: >>> objects = [myClass() for _ in xrange(10)] >>> print objects [<__main__.myClass object at 0x02880DB0>, #... >>> objects = StrList(objects) >>> print objects [myClass, myClass, myClass #... >>> import random >>> sample = random.sample(objects, 4) >>> print sample [<__main__.myClass object at 0x02880F10>, ... I personally think this is a terrible idea. Some functions - such as random.sample, as demonstrated - actually return list objects - even if you sub-classed lists. So if you take this route there may be a lot of result = strList(function(mylist)) calls, which could be inefficient. It's also a bad idea because then you'll probably have half of your code using regular list objects since you don't print them and the other half using strList objects, which can lead to your code getting messier and more confusing. Still, the option is there, and this is the only way to get the print function (or statement, for 2.x) to behave the way you want it to. 
The other solution is just to write your own function strList() which returns the string the way you want it: def strList(theList): string = "[" for index, item in enumerate(theList): string += str(item) if index != len(theList)-1: string += ", " return string + "]" >>> mylist = [myClass() for _ in xrange(10)] >>> print strList(mylist) [myClass, myClass, myClass #... Both solutions require that you refactor existing code, unfortunately - but the behavior of str(container) is here to stay. A: Because when you print the list, generally you're looking from the programmer's perspective, or debugging. If you meant to display the list, you'd process its items in a meaningful way, so repr is used. If you want your objects to be printed while in containers, define repr class MyObject: def __str__(self): return "" __repr__ = __str__ Of course, repr should return a string that could be used as code to recreate your object, but you can do what you want.
Using __str__ representation for printing objects in containers
I've noticed that when an instance with an overloaded __str__ method is passed to the print function as an argument, it prints as intended. However, when passing a container that contains one of those instances to print, it uses the __repr__ method instead. That is to say, print(x) displays the correct string representation of x, and print(x, y) works correctly, but print([x]) or print((x, y)) prints the __repr__ representation instead. First off, why does this happen? Secondly, is there a way to correct that behavior of print in this circumstance?
[ "The problem with the container using the objects' __str__ would be the total ambiguity -- what would it mean, say, if print L showed [1, 2]? L could be ['1, 2'] (a single item list whose string item contains a comma) or any of four 2-item lists (since each item can be a string or int). The ambiguity of type is common for print of course, but the total ambiguity for number of items (since each comma could be delimiting items or part of a string item) was the decisive consideration.\n", "I'm not sure why exactly the __str__ method of a list returns the __repr__ of the objects contained within - so I looked it up: [Python-3000] PEP: str(container) should call str(item), not repr(item)\n\nArguments for it:\n-- containers refuse to guess what the user wants to see on str(container) - surroundings, delimiters, and so on;\n-- repr(item) usually displays type information - apostrophes around strings, class names, etc.\n\nSo it's more clear about what exactly is in the list (since the object's string representation could have commas, etc.). The behavior is not going away, per Guido \"BDFL\" van Rossum:\n\nLet me just save everyone a lot of\n time and say that I'm opposed to this\n change, and that I believe that it\n would cause way too much disturbance\n to be accepted this close to beta.\n\n\nNow, there are two ways to resolve this issue for your code.\nThe first is to subclass list and implement your own __str__ method.\nclass StrList(list):\n def __str__(self):\n string = \"[\"\n for index, item in enumerate(self):\n string += str(item)\n if index != len(self)-1:\n string += \", \"\n return string + \"]\"\n\nclass myClass(object):\n def __str__(self):\n return \"myClass\"\n\n def __repr__(self):\n return object.__repr__(self)\n\nAnd now to test it:\n>>> objects = [myClass() for _ in xrange(10)]\n>>> print objects\n[<__main__.myClass object at 0x02880DB0>, #...\n>>> objects = StrList(objects)\n>>> print objects\n[myClass, myClass, myClass #...\n>>> import random\n>>> sample = random.sample(objects, 4)\n>>> print sample\n[<__main__.myClass object at 0x02880F10>, ...\n\nI personally think this is a terrible idea. Some functions - such as random.sample, as demonstrated - actually return list objects - even if you sub-classed lists. So if you take this route there may be a lot of result = strList(function(mylist)) calls, which could be inefficient. It's also a bad idea because then you'll probably have half of your code using regular list objects since you don't print them and the other half using strList objects, which can lead to your code getting messier and more confusing. Still, the option is there, and this is the only way to get the print function (or statement, for 2.x) to behave the way you want it to.\nThe other solution is just to write your own function strList() which returns the string the way you want it:\ndef strList(theList):\n string = \"[\"\n for index, item in enumerate(theList):\n string += str(item)\n if index != len(theList)-1:\n string += \", \"\n return string + \"]\"\n\n>>> mylist = [myClass() for _ in xrange(10)]\n>>> print strList(mylist)\n[myClass, myClass, myClass #...\n\nBoth solutions require that you refactor existing code, unfortunately - but the behavior of str(container) is here to stay.\n", "Because when you print the list, generally you're looking from the programmer's perspective, or debugging. 
If you meant to display the list, you'd process its items in a meaningful way, so repr is used.\nIf you want your objects to be printed while in containers, define repr\nclass MyObject:\n def __str__(self): return \"\"\n\n __repr__ = __str__\n\nOf course, repr should return a string that could be used as code to recreate your object, but you can do what you want.\n" ]
[ 13, 5, 1 ]
[]
[]
[ "operator_overloading", "python" ]
stackoverflow_0002634552_operator_overloading_python.txt
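As an aside, the strList helper from the second answer can be written more idiomatically with str.join; a short sketch (the function and class names here are mine):

def str_list(items):
    # Same output as the answer's strList, without manual index bookkeeping.
    return "[" + ", ".join(str(item) for item in items) + "]"

class MyClass(object):
    def __str__(self):
        return "myClass"

print str_list([MyClass() for _ in xrange(3)])  # prints: [myClass, myClass, myClass]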
Q: How can I run a GAE application on a private server? I want to develop a GAE application using Python, but I fear that Google will be the only company able to host the code. Is it possible to run a GAE app on a private server or other host? (Note that a previous version of the question incorrectly referred to GWT). A: Assuming that by GWT you mean GAE (GWT is for Java and anybody can serve it), AppScale is probably the best way to host GAE applications anywhere you'd like (including on Amazon EC2 and in your own data center). Anybody can also start a business providing GAE service with AppScale (on Amazon, their own data center, or whatever), which might be attractive for smaller apps (that don't warrant many EC2 or dedicated servers). Anyway, thanks to AppScale and similar efforts, you definitely need not fear "that google will be the only host to host the code". A: You're mixing GWT (a Java to JavaScript compiler) with GAE (the Google server API). GWT can be served by anybody, after compilation it's just a bunch of .js files; a GAE web app can be served only on Google's servers. The API is public, and the developer's SDK does work and is OSS; but I don't think it would be a desirable platform for a real service provider. OTOH, according to the Google Code GAE SDK project it's the same infrastructure they use; but it's hard to believe the backends used to run without GoogleFS, BigTable, MapReduce, etc. could be as scalable as theirs...
How can I run a GAE application on a private server?
I want to develop a GAE application using Python, but I fear that Google will be the only company able to host the code. Is it possible to run a GAE app on a private server or other host? (Note that a previous version of the question incorrectly referred to GWT).
[ "Assuming that by GWT you mean GAE (GWT is for Java and anybody can serve it), AppScale is probably the best way to host GAE applications anywhere you'd like (including on Amazon EC2 and in your own data center). Anybody can also start a business providing GAE service with AppScale (on Amazon, their own data center, or whatever), which might be attractive for smaller apps (that don't warrant many EC2 or dedicated servers). Anyway, thanks to AppScale and similar efforts, you definitely need not fear \"that google will be the only host to host the code\". \n", "You're mixing GWT (a Java to JavaScript compiler) with GAE (the Google server API).\nGWT can be served by anybody, after compilation it's just a bunch of .js files; a GAE web app can be served only on Google's servers.\nThe API is public, and the developer's SDK does work and is OSS; but I don't think it would be a desirable platform for a real service provider. OTOH, according to the Google Code GAE SDK project it's the same infrastructure they use; but it's hard to believe the backends used to run without GoogleFS, BigTable, MapReduce, etc. could be as scalable as theirs...\n" ]
[ 9, 1 ]
[]
[]
[ "google_app_engine", "hosting", "python" ]
stackoverflow_0002634543_google_app_engine_hosting_python.txt
Q: Easiest way to automatically download required modules in Python? I would like to release a Python module I wrote which depends on several packages. What's the easiest way to make it so these packages are programmatically downloaded if they are not available on the system it's being run on? Most of these modules should be available via easy_install or pip or something like that. I simply want to avoid having the user install each module separately. Thanks. A: pip uses requirements files, which have a very straightforward format. For more Python packaging tooling recommendations, see the latest from the Python Packaging Authority (PyPA). A: See the setuptools docs on how to declare your dependencies -- this will allow easy_install to find, download and install all of them (and transitive closure thereof) if everything's available in PyPi, or otherwise if you specify the dependencies' URLs.
Easiest way to automatically download required modules in Python?
I would like to release a Python module I wrote which depends on several packages. What's the easiest way to make it so these packages are programmatically downloaded if they are not available on the system it's being run on? Most of these modules should be available via easy_install or pip or something like that. I simply want to avoid having the user install each module separately. Thanks.
[ "pip uses requirements files, which have a very straightforward format.\nFor more Python packaging tooling recommendations, see the latest from the Python Packaging Authority (PyPA).\n", "See the setuptools docs on how to declare your dependencies -- this will allow easy_install to find, download and install all of them (and transitive closure thereof) if everything's available in PyPi, or otherwise if you specify the dependencies' URLs.\n" ]
[ 21, 4 ]
[]
[]
[ "module", "python", "python_module", "setuptools" ]
stackoverflow_0002634874_module_python_python_module_setuptools.txt
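A minimal sketch of the setuptools approach from the second answer; the package names and versions below are placeholders, not real requirements of any particular module:

# setup.py
from setuptools import setup, find_packages

setup(
    name="mymodule",
    version="0.1",
    packages=find_packages(),
    install_requires=[
        "simplejson",            # hypothetical dependency
        "python-dateutil>=1.4",  # hypothetical pinned dependency
    ],
)

With this in place, python setup.py install (or easy_install mymodule) will fetch any missing dependencies from PyPI automatically.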
Q: Django exclude(**kwargs) help I had a question for you, something that I can't seem to find the solution for... Basically, I have a model called Environment, and I am passing all of them to a view, and there are particular environments that I would like to exclude. Now, I know there is an exclude function on a queryset, but I can't seem to figure out how to use it for multiple options... For example, I tried this but it didn't work: kwargs = {"name": "env1", "name": "env2"} envs = Environment.objects.exclude( **kwargs ) But the only thing that it will exclude is the last "name" value in the list of kwargs. I understand why it does that now, but I still can't seem to exclude multiple objects with one command. Any help is much appreciated! Shawn A: The way to do this would be: Environment.objects.exclude(name="env1").exclude(name="env2") or Environment.objects.exclude(Q(name="env1") | Q(name="env2")) A: Environment.objects.exclude(name__in=["env1","env2"])
Django exclude(**kwargs) help
I had a question for you, something that I can't seem to find the solution for... Basically, I have a model called Environment, and I am passing all of them to a view, and there are particular environments that I would like to exclude. Now, I know there is an exclude function on a queryset, but I can't seem to figure out how to use it for multiple options... For example, I tried this but it didn't work: kwargs = {"name": "env1", "name": "env2"} envs = Environment.objects.exclude( **kwargs ) But the only thing that it will exclude is the last "name" value in the list of kwargs. I understand why it does that now, but I still can't seem to exclude multiple objects with one command. Any help is much appreciated! Shawn
[ "The way to do this would be:\nEnvironment.objects.exclude(name=\"env1\").exclude(name=\"env2\")\n\nor\nEnvironment.objects.exclude(Q(name=\"env1\") | Q(name=\"env2\"))\n\n", "Environment.objects.exclude(name__in=[\"env1\",\"env2\"])\n" ]
[ 4, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002634071_django_python.txt
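Putting the two answers together, and noting that Q has to be imported first (the snippets above omit the import), a sketch:

from django.db.models import Q

# Two equivalent ways to drop both environments in a single queryset:
envs = Environment.objects.exclude(name__in=["env1", "env2"])
envs = Environment.objects.exclude(Q(name="env1") | Q(name="env2"))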
Q: mount command pid Trying to mount a device and get the pid of the mount command. cmd="/bin/mount /dev/sda1 /mnt" os.system(cmd) Now how do I obtain the pid of the mount command? There are plenty of mounted devices available on my system, so something like ps | grep mount won't work. A: As the comments suggest I'm not sure how useful it is to get the mount pid, but if you use the subprocess module you can easily get the pid. >>> import subprocess >>> p = subprocess.Popen("ls", shell=True) >>> p.pid 4136 >>>
mount command pid
Trying to mount a device and get the pid of the mount command. cmd="/bin/mount /dev/sda1 /mnt" os.system(cmd) Now how do I obtain the pid of the mount command? There are plenty of mounted devices available on my system, so something like ps | grep mount won't work.
[ "As the comments suggest I'm not sure how useful it is to get the mount pid, but if you use the subprocess module you can easily get the pid.\n>>> import subprocess\n>>> p = subprocess.Popen(\"ls\", shell=True)\n>>> p.pid\n4136\n>>>\n\n" ]
[ 2 ]
[]
[]
[ "mount", "pid", "python" ]
stackoverflow_0002635210_mount_pid_python.txt
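Applied to the original mount call, a sketch; passing the command as a list (no shell=True) means p.pid is the pid of mount itself rather than of an intermediate /bin/sh:

import subprocess

p = subprocess.Popen(["/bin/mount", "/dev/sda1", "/mnt"])
print "mount pid:", p.pid
if p.wait() != 0:  # mount exits almost immediately; non-zero means failure
    print "mount failed"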
Q: Extending Python’s int type to accept only values within a given range I would like to create a custom data type which basically behaves like an ordinary int, but with the value restricted to be within a given range. I guess I need some kind of factory function, but I cannot figure out how to do it. myType = MyCustomInt(minimum=7, maximum=49, default=10) i = myType(16) # OK i = myType(52) # raises ValueError i = myType() # i == 10 positiveInt = MyCustomInt(minimum=1) # no maximum restriction negativeInt = MyCustomInt(maximum=-1) # no minimum restriction nonsensicalInt = MyCustomInt() # well, the same as an ordinary int Any hint is appreciated. Thanks! A: Use __new__ to override the construction of immutable types: def makeLimitedInt(minimum, maximum, default): class LimitedInt(int): def __new__(cls, x= default, *args, **kwargs): instance= int.__new__(cls, x, *args, **kwargs) if not minimum<=instance<=maximum: raise ValueError('Value outside LimitedInt range') return instance return LimitedInt A: Assignment in Python is a statement, not an expression, therefore there's no way to define assignment on a type since assigning rebinds the name completely. The best you could do is define a set() method that takes the value that you want, at which point you can just create a "normal" class to handle the validation. A: There is no need to define a new type: def restrict_range(minimum=None, maximum=None, default=None, type_=int): def restricted(*args, **kwargs): if default is not None and not (args or kwargs): # no arguments supplied return default value = type_(*args, **kwargs) if (minimum is not None and value < minimum or maximum is not None and value > maximum): raise ValueError return value return restricted Example restricted_int = restrict_range(7, 49, 10) assert restricted_int("1110", 2) == 14 assert restricted_int(16) == 16 assert restricted_int() == 10 try: restricted_int(52) assert 0 except ValueError: pass A: You can derive a class from an int in python e.g. class MyInt(int), but that type in python is immutable (you can't change the value) once it's created. You could do something like this though: class MyInt: def __init__(self, i, max=None, min=None): self.max = max self.min = min self.set(i) def set(self, i): if i > self.max: raise ValueError if i < self.min: raise ValueError self.i = i def toInt(self): return self.i def __getattr__(self, name): # Forward e.g. addition etc operations to the integer # Beware that e.g. going MyInt(1)+MyInt(1) # will return an ordinary int of "2" though # so you'd need to do something like # "result = MyInt(MyInt(1)+MyInt(1)) method = getattr(self.i, name) def call(*args): L = [] for arg in args: if isinstance(arg, MyInt): L.append(arg.toInt()) else: L.append(arg) return method(*L) return call It may be better to use ordinary validation functions depending on what you want though if it's simpler. EDIT: Is working now - Reverted back to a simpler earlier version, having addition etc functions which return other MyInt instances just isn't worth it :-)
Extending Python’s int type to accept only values within a given range
I would like to create a custom data type which basically behaves like an ordinary int, but with the value restricted to be within a given range. I guess I need some kind of factory function, but I cannot figure out how to do it. myType = MyCustomInt(minimum=7, maximum=49, default=10) i = myType(16) # OK i = myType(52) # raises ValueError i = myType() # i == 10 positiveInt = MyCustomInt(minimum=1) # no maximum restriction negativeInt = MyCustomInt(maximum=-1) # no minimum restriction nonsensicalInt = MyCustomInt() # well, the same as an ordinary int Any hint is appreciated. Thanks!
[ "Use __new__ to override the construction of immutable types:\ndef makeLimitedInt(minimum, maximum, default):\n class LimitedInt(int):\n def __new__(cls, x= default, *args, **kwargs):\n instance= int.__new__(cls, x, *args, **kwargs)\n if not minimum<=instance<=maximum:\n raise ValueError('Value outside LimitedInt range')\n return instance\n return LimitedInt\n\n", "Assignment in Python is a statement, not an expression, therefore there's no way to define assignment on a type since assigning rebinds the name completely. The best you could do is define a set() method that takes the value that you want, at which point you can just create a \"normal\" class to handle the validation.\n", "There is no need to define a new type:\ndef restrict_range(minimum=None, maximum=None, default=None, type_=int):\n def restricted(*args, **kwargs):\n if default is not None and not (args or kwargs): # no arguments supplied\n return default\n value = type_(*args, **kwargs)\n if (minimum is not None and value < minimum or \n maximum is not None and value > maximum):\n raise ValueError\n return value\n return restricted\n\nExample\nrestricted_int = restrict_range(7, 49, 10)\n\nassert restricted_int(\"1110\", 2) == 14\nassert restricted_int(16) == 16\nassert restricted_int() == 10\ntry: \n restricted_int(52)\n assert 0\nexcept ValueError:\n pass\n\n", "You can derive a class from an int in python e.g. class MyInt(int), but that type in python is immutable (you can't change the value) once it's created. \nYou could do something like this though:\nclass MyInt:\n def __init__(self, i, max=None, min=None):\n self.max = max\n self.min = min\n self.set(i)\n\n def set(self, i):\n if i > self.max: raise ValueError\n if i < self.min: raise ValueError\n self.i = i\n\n def toInt(self):\n return self.i\n\n def __getattr__(self, name):\n # Forward e.g. addition etc operations to the integer\n # Beware that e.g. going MyInt(1)+MyInt(1) \n # will return an ordinary int of \"2\" though\n # so you'd need to do something like \n # \"result = MyInt(MyInt(1)+MyInt(1))\n\n method = getattr(self.i, name)\n def call(*args):\n L = []\n for arg in args:\n if isinstance(arg, MyInt):\n L.append(arg.toInt())\n else: L.append(arg)\n return method(*L)\n return call\n\nIt may be better to use ordinary validation functions depending on what you want though if it's simpler.\nEDIT: Is working now - Reverted back to a simpler earlier version, having addition etc functions which return other MyInt instances just isn't worth it :-)\n" ]
[ 6, 1, 1, 0 ]
[]
[]
[ "python", "types" ]
stackoverflow_0002635148_python_types.txt
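A usage sketch for the makeLimitedInt factory from the first answer; note that arithmetic on the instances returns plain ints, so the bounds are only checked at construction:

Percent = makeLimitedInt(minimum=0, maximum=100, default=50)

print Percent(75)       # 75; behaves like an ordinary int
print Percent()         # 50, the default
print Percent(75) + 30  # 105 -- a plain int, no bounds check
try:
    Percent(120)
except ValueError:
    print "out of range"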
Q: How to read pdf, ppt, xls, doc file content into a string in php/python Please suggest an inbuilt command or package. A: Well, it shouldn't be too hard to find something from the net. Here's one for Python called pyPDF. Check PyPi also for such modules. As for reading doc, ppt, xls files, one way is to use COM. A: The content as in "binary" or the actual text? To read the file as "binary" in php: http://php.net/manual/en/function.file.php In python: http://docs.python.org/tutorial/inputoutput.html#reading-and-writing-files Actually reading the contents of the file is a lot more difficult and requires additional libraries. For instance have a look at this question on SO (Python): python convert microsoft office docs to plain text on linux A: Try this: $data = fopen('myfile.png', 'rb'); // read in binary mode. if ($data) { header('Content-Type: image/png'); fpassthru($data); } You should change content-type accordingly.
How to read pdf, ppt, xls, doc file content into a string in php/python
Please suggest an inbuilt command or package.
[ "Well, it shouldn't be too hard to find something from the net. Here's one for Python called pyPDF. Check PyPi also for such modules. As for reading doc, ppt, xls files, one way is to use COM.\n", "The content as in \"binary\" or the actual text?\nTo read the file as \"binary\" in php:\nhttp://php.net/manual/en/function.file.php\nIn python:\nhttp://docs.python.org/tutorial/inputoutput.html#reading-and-writing-files\nActually reading the contents of the file is a lot more difficult and requires additional libraries. For instance have a look at this question on SO (Python):\npython convert microsoft office docs to plain text on linux\n", "Try this:\n$data = fopen('myfile.png', 'rb'); // read in binary mode.\n\nif ($data) {\n header('Content-Type: image/png');\n fpassthru($data);\n}\n\nYou should change content-type accordingly.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "file", "php", "python" ]
stackoverflow_0002635757_file_php_python.txt
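For the PDF part, a sketch with the pyPdf package the first answer points to (the Office formats still need COM or similar tools):

from pyPdf import PdfFileReader

reader = PdfFileReader(open("document.pdf", "rb"))
text = ""
for page_num in range(reader.getNumPages()):
    text += reader.getPage(page_num).extractText()  # plain-text content of each page
print text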
Q: Google appengine authentication on iPhone web app on the home screen I'm using Google App Engine to develop a web application that is meant to be used both in the browser and on the iPhone. I have purchased a domain name for this application, so that I have a pretty URL. I've used the User API for authentication. This works just fine on desktop browsers and iPhone Safari. The user could add the application to the home screen (by tapping the "+" at the bottom toolbar). However when that's done, it seems like the cookies set by Google are not in effect within this "application", and the user is effectively logged out. To make matters worse, when the user clicks on the login link (as generated by GAE), the app closes and opens Safari to complete the login. Since the session is apparently not shared between the two, the login process is futile, and the "home-screen" version of the app continues to be logged out. It seems that the cookies are not shared between a "home-screen" app and Safari. It also seems that the "home-screen" app will only work within its own domain, and any redirect to any other domain will open Safari. Any idea how I can go about fixing this? A: Solved this, and just wanted to post the solution here. The fix was as simple as setting the link href="javascript:window.location.href=\"whatever\";". The weirdest thing! No idea why I'd be forced to use JS for this.
Google appengine authentication on iPhone web app on the home screen
I'm using Google App Engine to develop a web application that is meant to be used both in the browser and on the iPhone. I have purchased a domain name for this application, so that I have a pretty URL. I've used the User API for authentication. This works just fine on desktop browsers and iPhone Safari. The user could add the application to the home screen (by tapping the "+" at the bottom toolbar). However when that's done, it seems like the cookies set by Google are not in effect within this "application", and the user is effectively logged out. To make matters worse, when the user clicks on the login link (as generated by GAE), the app closes and opens Safari to complete the login. Since the session is apparently not shared between the two, the login process is futile, and the "home-screen" version of the app continues to be logged out. It seems that the cookies are not shared between a "home-screen" app and Safari. It also seems that the "home-screen" app will only work within its own domain, and any redirect to any other domain will open Safari. Any idea how I can go about fixing this?
[ "Solved this, and just wanted to post the solution here.\nThe fix was as simple as setting the link href=\"javascript:window.location.href=\\\"whatever\\\";\".\nThe weirdest thing! No idea why I'd be forced to use JS for this.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "iphone", "iphone_web_app", "python" ]
stackoverflow_0002612780_google_app_engine_iphone_iphone_web_app_python.txt
Q: Python: some newbie questions on sys.stderr and using function as argument I'm just starting on Python and maybe I'm worrying too much too soon, but anyways... log = "/tmp/trefnoc.log" def logThis (text, display=""): msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text if display != None: print msg + display logfile = open(log, "a") logfile.write(msg + "\n") logfile.close() return msg def logThisAndExit (text, display=""): msg = logThis(text, display=None) sys.exit(msg + display) That is working, but I don't like how it looks. Is there a better way to write this (maybe with just 1 function), and is there anything else I should be concerned about when exiting? Now to some background (but not about trefnoc)... Sometimes I will call logThis just to log and display. Other times I want to call it and exit. Initially I was doing this: logThis ("ERROR. EXITING") sys.exit() Then I figured that wouldn't properly set the stderr, thus the current code shown on the top. My first idea was actually passing "sys.exit" as an argument, and defining just logThis ("ERROR. EXITING", call=sys.exit) defined as follows (showing just the relevant changed part): def logThis (text, display="", call=print): msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text call msg + display But that obviously didn't work. I think Python doesn't store functions inside variables. I couldn't (quickly) find anywhere if Python can have variables taking functions or not! Maybe using an eval function? I really always try to avoid them, tho. Sure I thought of using if instead of another def, but that wouldn't be any better or worse. Anyway, any thoughts? A: There's no reason for "logThisAndExit", it doesn't save you much typing over sys.exit(logThis(text)+display) (compare logThisAndExit(text, display)) or sys.exit(logThis(text)) (compare logThisAndExit(text)) Not that I'm entirely sure why you like your exit messages formatted as log lines. In answer to your original question: you're missing parentheses: call(msg+display) works fine. But I think that's waaaay overengineering for logging/exiting stuff. Anyone who maintains your code will have to understand your function to know when it's exiting and when it's not. A: For logging, it is probably easier to use the logging module. For exiting, if you have any error, use: sys.exit(1) and if there is no error, either just let the script run out of statements or: sys.exit(0) A: You could modify logThis to take a final argument called shouldExit which defaults to None, then as a final step in that method, if the value is true then call sys.exit. A: print is a keyword, not a function, in python < 3. try this: def do_print(x): print x def logThis (text, display="", call=do_print): msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text call(msg + display) Is there any reason you don't use the logging module? (see http://onlamp.com/pub/a/python/2005/06/02/logging.html) A: Just as reference, this is my final code after assimilating hints from David and moshez. In the end I decided I wanted just 1 function for now. Thanks everyone! log = "/tmp/trefnoc.log" def logThis (text, display=""): msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text if display != None: print msg + display logfile = open(log, "a") logfile.write(msg + "\n") logfile.close() return msg # how to call it on exit: sys.exit(logThis("ERROR, EXITING", display=None))
Python: some newbie questions on sys.stderr and using function as argument
I'm just starting on Python and maybe I'm worrying too much too soon, but anyways... log = "/tmp/trefnoc.log" def logThis (text, display=""): msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text if display != None: print msg + display logfile = open(log, "a") logfile.write(msg + "\n") logfile.close() return msg def logThisAndExit (text, display=""): msg = logThis(text, display=None) sys.exit(msg + display) That is working, but I don't like how it looks. Is there a better way to write this (maybe with just 1 function), and is there anything else I should be concerned about when exiting? Now to some background (but not about trefnoc)... Sometimes I will call logThis just to log and display. Other times I want to call it and exit. Initially I was doing this: logThis ("ERROR. EXITING") sys.exit() Then I figured that wouldn't properly set the stderr, thus the current code shown on the top. My first idea was actually passing "sys.exit" as an argument, and defining just logThis ("ERROR. EXITING", call=sys.exit) defined as follows (showing just the relevant changed part): def logThis (text, display="", call=print): msg = str(now.strftime("%Y-%m-%d %H:%M")) + " TREfNOC: " + text call msg + display But that obviously didn't work. I think Python doesn't store functions inside variables. I couldn't (quickly) find anywhere if Python can have variables taking functions or not! Maybe using an eval function? I really always try to avoid them, tho. Sure I thought of using if instead of another def, but that wouldn't be any better or worse. Anyway, any thoughts?
[ "There's no reason for \"logThisAndExit\", it doesn't save you much typing over\nsys.exit(logThis(text)+display)\n\n(compare logThisAndExit(text, display))\nor\nsys.exit(logThis(text))\n\n(compare logThisAndExit(text))\nNot that I'm entirely sure why you like your exit messages formatted as log lines.\nIn answer to your original question: you're missing parentheses: call(msg+display) works fine. But I think that's waaaay overengineering for logging/exiting stuff. Anyone who maintains your code will have to understand your function to know when it's exiting and when it's not. \n", "For logging, it is probably easier to use the logging module.\nFor exiting, if you have any error, use:\nsys.exit(1)\n\nand if there is no error, either just let the script run out of statements or:\nsys.exit(0)\n\n", "You could modify logThis to take a final argument called shouldExit which defaults to None, then as a final step in that method, if the value is true then call sys.exit.\n", "print is a keyword, not a function, in python < 3. try this:\ndef do_print(x):\n print x\n\ndef logThis (text, display=\"\", call=do_print):\n msg = str(now.strftime(\"%Y-%m-%d %H:%M\")) + \" TREfNOC: \" + text\n call(msg + display)\n\nIs there any reason you don't use the logging module? (see http://onlamp.com/pub/a/python/2005/06/02/logging.html)\n", "Just as reference, this is my final code after assimilating hints from David and moshez. In the end I decided I wanted just 1 function for now. Thanks everyone!\nlog = \"/tmp/trefnoc.log\"\n\ndef logThis (text, display=\"\"):\n msg = str(now.strftime(\"%Y-%m-%d %H:%M\")) + \" TREfNOC: \" + text\n if display != None:\n print msg + display\n logfile = open(log, \"a\")\n logfile.write(msg + \"\\n\")\n logfile.close()\n return msg\n\n# how to call it on exit:\nsys.exit(logThis(\"ERROR, EXITING\", display=None))\n\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "function_pointers", "python", "stderr" ]
stackoverflow_0002634091_function_pointers_python_stderr.txt
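A sketch of the same behaviour built on the standard logging module, as the second answer recommends; the format string approximates the hand-rolled one, and basicConfig as written logs to the file only (add a StreamHandler if you also want the message echoed to the console):

import logging
import sys

logging.basicConfig(
    filename="/tmp/trefnoc.log",
    level=logging.INFO,
    format="%(asctime)s TREfNOC: %(message)s",
    datefmt="%Y-%m-%d %H:%M",
)

def log_and_exit(text):
    logging.error(text)
    sys.exit(1)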
Q: django + xmppy: send a message to two recipients I'm trying to use xmpppy for sending jabber-messages from a django-website. This works entirely fine. However, the message only gets sent to the -first- of the recipients in the list. This happens when I run the following function from django, and also if I run it from an interactive python-shell. The weird part though, is that if I extract the -body- of the function and run that interactively, then all the recipients (there's just 2 at the moment) get the message. Also, I do know that the inner for-loop gets run the correct number of times (2), because the print-statement does run twice and returns two different message-ids. The function looks like this: def hello_jabber(request, text): jid=xmpp.protocol.JID(settings.JABBER_ID) cl=xmpp.Client(jid.getDomain(),debug=[]) con=cl.connect() auth=cl.auth(jid.getNode(),settings.JABBER_PW,resource=jid.getResource()) for friend in settings.JABBER_FRIENDS: id=cl.send(xmpp.protocol.Message(friend,friend + ' is awesome:' + text)) print 'sent message with id ' + str(id) cl.disconnect() return render_to_response('jabber/sent.htm', locals()) A: Activate the debug options in xmpppy to see what the xmpp client does.
django + xmppy: send a message to two recipients
I'm trying to use xmpppy for sending jabber-messages from a django-website. This works entirely fine. However, the message only gets sent to the -first- of the recipients in the list. This happens when I run the following function from django, and also if I run it from an interactive python-shell. The weird part though, is that if I extract the -body- of the function and run that interactively, then all the recipients (there's just 2 at the moment) get the message. Also, I do know that the inner for-loop gets run the correct number of times (2), because the print-statement does run twice and returns two different message-ids. The function looks like this: def hello_jabber(request, text): jid=xmpp.protocol.JID(settings.JABBER_ID) cl=xmpp.Client(jid.getDomain(),debug=[]) con=cl.connect() auth=cl.auth(jid.getNode(),settings.JABBER_PW,resource=jid.getResource()) for friend in settings.JABBER_FRIENDS: id=cl.send(xmpp.protocol.Message(friend,friend + ' is awesome:' + text)) print 'sent message with id ' + str(id) cl.disconnect() return render_to_response('jabber/sent.htm', locals())
[ "Activate the debug options in xmpppy to see what the xmpp client does.\n" ]
[ 0 ]
[]
[]
[ "django", "python", "xmpppy" ]
stackoverflow_0002635754_django_python_xmpppy.txt
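To act on the answer: re-create the client with xmpppy's debug output enabled and watch the stanzas actually sent. If memory serves, ['always'] is the flag that dumps everything; treat the flag name as an assumption to verify against the xmpppy docs:

cl = xmpp.Client(jid.getDomain(), debug=['always'])  # assumed flag name; logs all traffic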
Q: Installing a Python program on Linux I wrote a Python program. I would like to add to it an installation script that will set up everything necessary - like desktop icon, entry in the menu, home directory file, etc. I'm working on Linux (ubuntu). When a Python program is installed, what needs to happen in general? I know that it probably depends on the nature of the program. Can you give me some general ideas? Or, point me in the right direction? I have no idea how to look for this on Google. Thanks A: If it's a Python program you're trying to package, you should consider using its 'standard' distribution framework distutils. I can't replicate the entire document here but I'd recommend that you read it. Once you're done with that, check out the Hitchhikers guide to packaging which contains details on distribute - the extensions to distutils that allow you to package and distribute more effectively. A: For Ubuntu if you want it to be easily distributable to other Ubuntu users it'll have to be packaged properly, which is no simple task. You might want to consult their Packaging Guide for more information. Otherwise, generally speaking there are a few standard packaging options for Python. Setuptools is popular, but becoming reviled lately. Read James Bennett's blog post "On Packaging" for a decent in-depth look into the ups and downs of the Python packaging world. A: You could create an rpm easily using checkinstall. Search for checkinstall in google and download it. It will allow you to create an rpm and set the options. A: How a program is launched and placed in the menu is determined by a .desktop file (you can read the specification or just look at some examples from /usr/share/applications). Properly installing a program (placing all files in the right directories and so on) requires either making a package like a deb or rpm, or you could use something like distutils or setuptools. It may also help to just look at some (open source) examples of Python programs for Linux.
Installing a Python program on Linux
I wrote a Python program. I would like to add to it an installation script that will set up everything necessary - like desktop icon, entry in the menu, home directory file, etc. I'm working on Linux (ubuntu). When a Python program is installed, what needs to happen in general? I know that it probably depends on the nature of the program. Can you give me some general ideas? Or, point me in the right direction? I have no idea how to look for this on Google. Thanks
[ "If it's a Python program you're trying to package, you should consider using its 'standard' distribution framework distutils. I can't replicate the entire document here but I'd recommend that you read it. Once you're done with that, check out the Hitchhikers guide to packaging which contains details on distribute - the extensions to distutils that allow you to package and distribute more effectively.\n", "For Ubuntu if you want it to be easily distributable to other Ubuntu users it'll have to be packaged properly, which is no simple task. You might want to consult their Packaging Guide for more information.\nOtherwise, generally speaking there are a few standard packaging options for Python. Setuptools is popular, but becoming reviled lately. Read James Bennett's blog post \"On Packaging\" for a decent in-depth look into the ups and downs of the Python packaging world.\n", "You could create an rpm easily using checkinstall. Search for checkinstall in google and download it. It will allow you to create an rpm and set the options.\n", "How a program is launched and placed in the menu is determined by a .desktop file (you can read the specification or just look at some examples from /usr/share/applications). Properly installing a program (placing all files in the right directories and so on) requires either making a package like a deb or rpm, or you could use something like distutils or setuptools.\nIt may also help to just look at some (open source) examples of Python programs for Linux.\n" ]
[ 4, 1, 1, 0 ]
[]
[]
[ "linux", "python" ]
stackoverflow_0002635433_linux_python.txt
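A sketch of the distutils route for the menu entry and icon; the file names are illustrative, and the .desktop file itself must be written separately per the freedesktop.org spec:

# setup.py
from distutils.core import setup

setup(
    name="myapp",
    version="0.1",
    scripts=["bin/myapp"],  # the launcher script
    data_files=[
        ("share/applications", ["myapp.desktop"]),  # menu entry
        ("share/pixmaps", ["myapp.png"]),           # icon
    ],
)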
Q: Why does the TheyWorkForYou (TWFY) web API always return '{}' I'm calling a web API exposed by TheyWorkForYou (TWFY). http://www.theyworkforyou.com/api/ I'm using the Python bindings provided by twfython: http://code.google.com/p/twfython/ I wrote some code to call this API a few months ago, at which time it worked fine. But now that I've dug it out to run it again, no matter what query I ask of the API, it always returns '{}' (an empty dictionary). For example the following code, which should return a list of all MPs: from twfy import TWFY API_KEY = 'XXXXXXXXXXXXXXXXXXXXXX' twfy = TWFY.TWFY(API_KEY) print twfy.api.getMPs(output='js') Am I being really dumb? What else should I check? A: You can run the getMPs call on their website directly, and it also produces no output. So you're probably right about there actually being no MPs at the moment. Do you get the same output if you call getMSPs? This one seems like it should return data. A: From the horse's mouth, Matthew Somerville at ORG: The API is working as documented - when there is no MP (i.e. everywhere between dissolution and election), getMP will return no MP unless you specify the always_return parameter (which is why that parameter exists). This has always been the case after e.g. death of MP, resignation of Iris Robinson. Also, getMPs (note the 's') will not return any MPs for a date for which there are no MPs - so you should specify the dissolution date if you want the list of MPs as on that date (and sorry there's not an always_return option there)
Why does the TheyWorkForYou (TWFY) web API always return '{}'
I'm calling a web API exposed by TheyWorkForYou (TWFY). http://www.theyworkforyou.com/api/ I'm using the Python bindings provided by twfython: http://code.google.com/p/twfython/ I wrote some code to call this API a few months ago, at which time it worked fine. But now that I've dug it out to run it again, no matter what query I ask of the API, it always returns '{}' (an empty dictionary). For example the following code, which should return a list of all MPs: from twfy import TWFY API_KEY = 'XXXXXXXXXXXXXXXXXXXXXX' twfy = TWFY.TWFY(API_KEY) print twfy.api.getMPs(output='js') Am I being really dumb? What else should I check?
[ "You can run the getMPs call on their website directly, and it also produces no output. So you're probably right about there actually being no MPs at the moment.\nDo you get the same output if you call getMSPs? This one seems like it should return data.\n", "From the horse's mouth, Matthew Somerville at ORG:\nThe API is working as documented - when there is no MP (i.e. everywhere between dissolution and election), getMP will return no MP unless you specify the always_return parameter (which is why that parameter exists). This has always been the case after e.g. death of MP, resignation of Iris Robinson.\nAlso, getMPs (note the 's') will not return any MPs for a date for which there are no MPs - so you should specify the dissolution date if you want the list of MPs as on that date (and sorry there's not an always_return option there)\n" ]
[ 2, 2 ]
[]
[]
[ "python" ]
stackoverflow_0002634055_python.txt
Q: Where can I find a good tutorial for py2exe? Can somebody point me at a good tutorial for py2exe? I've read over the official tutorial but it is rather light on details, compared to all the options one can use when building an executable out of a python script. For the record, my python script uses Python 2.5.2, wxPython/wxWidgets 2.8 and MySQLdb 1.2.2; so if you have specific tips for py2exe with those packages that would be much appreciated (and yes, I've seen the Py2EXE and wxPython page). A: Regarding "Py2EXE and wxPython", the page mentions the import statement "from wxPython.wx import *". This is the old wxPython (several years old, I think). In my app, I just do "import wx", and I don't have any major troubles. I have one tip for wxPython and py2exe: you need a manifest if you want your app to look any good on Windows XP. This email has details: http://mail.python.org/pipermail/python-list/2004-June/268126.html A: I'm going to release py2exe GUI so that you can easy compile your apps without writing setup scripts. More info here A: Don't know about a better tutorial, but there is some information to be found at the news list. http://news.gmane.org/gmane.comp.python.py2exe A: Since this question was asked, I've updated the official py2exe tutorial to include substantially more information about bundling the Microsoft C runtime DLL. http://www.py2exe.org/index.cgi/Tutorial#Step5 If anyone reading this question knows about things which they think are missing from the official tutorial, can I encourage them to add that knowledge to the official tutorial, which is a wiki.
Where can I find a good tutorial for py2exe?
Can somebody point me at a good tutorial for py2exe? I've read over the official tutorial but it is rather light on details, compared to all the options one can use when building an executable out of a python script. For the record, my python script uses Python 2.5.2, wxPython/wxWidgets 2.8 and MySQLdb 1.2.2; so if you have specific tips for py2exe with those packages that would be much appreciated (and yes, I've seen the Py2EXE and wxPython page).
[ "Regarding \"Py2EXE and wxPython\", the page mentions the import statement \"from wxPython.wx import *\". This is the old wxPython (several years old, I think). In my app, I just do \"import wx\", and I don't have any major troubles.\nI have one tip for wxPython and py2exe: you need a manifest if you want your app to look any good on Windows XP. This email has details: http://mail.python.org/pipermail/python-list/2004-June/268126.html\n", "I'm going to release py2exe GUI so that you can easily compile your apps without writing setup scripts. More info here\n", "Don't know about a better tutorial, but there is some information to be found at the news list.\nhttp://news.gmane.org/gmane.comp.python.py2exe\n", "Since this question was asked, I've updated the official py2exe tutorial to include substantially more information about bundling the Microsoft C runtime DLL.\nhttp://www.py2exe.org/index.cgi/Tutorial#Step5\nIf anyone reading this question knows about things which they think are missing from the official tutorial, can I encourage them to add that knowledge to the official tutorial, which is a wiki.\n" ]
[ 4, 2, 1, 1 ]
[]
[]
[ "py2exe", "python", "wxpython" ]
stackoverflow_0000176322_py2exe_python_wxpython.txt
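For reference, a minimal py2exe setup.py sketch for a wxPython GUI app; running python setup.py py2exe builds into dist/. Treat the options as a starting point, not a complete recipe for bundling MySQLdb and the C runtime:

from distutils.core import setup
import py2exe

setup(
    windows=[{"script": "myapp.py"}],  # GUI app: no console window
    options={"py2exe": {"packages": ["wx", "MySQLdb"]}},
)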
Q: Optimizing code using PIL Firstly, sorry for the long piece of code pasted below. This is my first time actually having to worry about the performance of an application. This piece of code searches for an image inside another image; it takes 30 seconds to run on my computer. Converting the images to greyscale and other changes shaved off 15 seconds, and I need another 15 shaved off. I did read a bunch of pages and looked at examples, but I couldn't find the same problems in my code. So any help would be greatly appreciated. From the looks of it (cProfile), 25 seconds is spent within the Image module and only 5 seconds in my code. from PIL import Image import os, ImageGrab, pdb, time, win32api, win32con import cProfile def GetImage(name): name = name + '.bmp' try: print(os.path.join(os.getcwd(),"Images",name)) image = Image.open(os.path.join(os.getcwd(),"Images",name)) except: print('error opening image;', name) return image def Find(name): image = GetImage(name) imagebbox = image.getbbox() screen = ImageGrab.grab() #screen = Image.open(os.path.join(os.getcwd(),"Images","Untitled.bmp")) YLimit = screen.getbbox()[3] - imagebbox[3] XLimit = screen.getbbox()[2] - imagebbox[2] image = image.convert("L") Screen = screen.convert("L") Screen.load() image.load() #print(XLimit, YLimit) Found = False image = image.getdata() for y in range(0,YLimit): for x in range(0,XLimit): BoxCoordinates = x, y, x+imagebbox[2], y+imagebbox[3] ScreenGrab = screen.crop(BoxCoordinates) ScreenGrab = ScreenGrab.getdata() if image == ScreenGrab: Found = True #print("woop") return x,y if Found == False: return "Not Found" cProfile.run('print(Find("Login"))') A: while not directly performance related you could do some things to improve your code: if not Found: return "Not Found" is idiomatic way to write condition in Python. You don't need this clause, however, since this return statement could be reached only if image wasn't found. in GetImage you should create filename once with os.path.join(os.getcwd(),"Images",name) to minimize errors and not to repeat yourself. Also it wouldn't normally work if you don't have the image file. Since you're not handling error in the Find, I'd suggest the following: def Find(name): fname = os.path.join(os.getcwd(), "Images", name + '.bmp') image = Image.open(fname) imagebbox = image.getbbox() screen = ImageGrab.grab() YLimit = screen.getbbox()[3] - imagebbox[3] XLimit = screen.getbbox()[2] - imagebbox[2] image = image.convert("L") Screen = screen.convert("L") Screen.load() image.load() image = image.getdata() for y in range(0, YLimit): for x in range(0, XLimit): BoxCoordinates = x, y, x+imagebbox[2], y+imagebbox[3] ScreenGrab = screen.crop(BoxCoordinates) ScreenGrab = ScreenGrab.getdata() if image == ScreenGrab: return x, y # returns None implicitly Your major problem is that you're doing pixel by pixel search, it's going to be slow on any meaningful size image. A: This algorithm is quite computation intensive, I don't believe you can speed it up without changing the approach. Lets do some math : Say the screen is 1024x768 (we're still in the year 2000) Say your test image is 100x100 Then you end up doing 924*668 blits of 100x100 Thats equivalent to about 7848 full screen blits. It's bound to be slow with this brute force approach.
Optimizing code using PIL
Firstly, sorry for the long piece of code pasted below. This is my first time actually having to worry about the performance of an application. This piece of code searches for an image inside another image; it takes 30 seconds to run on my computer. Converting the images to greyscale and other changes shaved off 15 seconds, and I need another 15 shaved off. I did read a bunch of pages and looked at examples, but I couldn't find the same problems in my code. So any help would be greatly appreciated. From the looks of it (cProfile), 25 seconds is spent within the Image module and only 5 seconds in my code. from PIL import Image import os, ImageGrab, pdb, time, win32api, win32con import cProfile def GetImage(name): name = name + '.bmp' try: print(os.path.join(os.getcwd(),"Images",name)) image = Image.open(os.path.join(os.getcwd(),"Images",name)) except: print('error opening image;', name) return image def Find(name): image = GetImage(name) imagebbox = image.getbbox() screen = ImageGrab.grab() #screen = Image.open(os.path.join(os.getcwd(),"Images","Untitled.bmp")) YLimit = screen.getbbox()[3] - imagebbox[3] XLimit = screen.getbbox()[2] - imagebbox[2] image = image.convert("L") Screen = screen.convert("L") Screen.load() image.load() #print(XLimit, YLimit) Found = False image = image.getdata() for y in range(0,YLimit): for x in range(0,XLimit): BoxCoordinates = x, y, x+imagebbox[2], y+imagebbox[3] ScreenGrab = screen.crop(BoxCoordinates) ScreenGrab = ScreenGrab.getdata() if image == ScreenGrab: Found = True #print("woop") return x,y if Found == False: return "Not Found" cProfile.run('print(Find("Login"))')
[ "while not directly performance related you could do some things to improve your code:\nif not Found:\n return \"Not Found\"\n\nis idiomatic way to write condition in Python. You don't need this clause, however, since this return statement could be reached only if image wasn't found.\nin GetImage you should create filename once with os.path.join(os.getcwd(),\"Images\",name) to minimize errors and not to repeat yourself. Also it wouldn't normally work if you don't have the image file. Since you're not handling error in the Find, I'd suggest the following:\ndef Find(name):\n fname = os.path.join(os.getcwd(), \"Images\", name + '.bmp')\n image = Image.open(fname)\n imagebbox = image.getbbox()\n screen = ImageGrab.grab()\n YLimit = screen.getbbox()[3] - imagebbox[3]\n XLimit = screen.getbbox()[2] - imagebbox[2]\n image = image.convert(\"L\")\n Screen = screen.convert(\"L\")\n Screen.load()\n image.load()\n image = image.getdata()\n for y in range(0, YLimit):\n for x in range(0, XLimit):\n BoxCoordinates = x, y, x+imagebbox[2], y+imagebbox[3]\n ScreenGrab = screen.crop(BoxCoordinates)\n ScreenGrab = ScreenGrab.getdata()\n if image == ScreenGrab:\n return x, y\n # returns None implicitly\n\nYour major problem is that you're doing pixel by pixel search, it's going to be slow on any meaningful size image.\n", "This algorithm is quite computation intensive, I don't believe you can speed it up without changing the approach.\nLets do some math : \nSay the screen is 1024x768 (we're still in the year 2000)\nSay your test image is 100x100\nThen you end up doing 924*668 blits of 100x100\nThats equivalent to about 7848 full screen blits.\nIt's bound to be slow with this brute force approach.\n" ]
[ 1, 1 ]
[]
[]
[ "image", "performance", "python" ]
stackoverflow_0002636450_image_performance_python.txt
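One incremental speedup that keeps the brute-force structure: reject a position cheaply by testing a single pixel before paying for the full crop-and-compare. A sketch (both images assumed already converted to "L"; a proper template-matching routine, e.g. OpenCV's, is the real fix):

def find_quick(needle, haystack):
    nw, nh = needle.size
    hw, hh = haystack.size
    first = needle.getpixel((0, 0))
    pixels = haystack.load()
    ndata = list(needle.getdata())
    for y in range(hh - nh):
        for x in range(hw - nw):
            if pixels[x, y] != first:
                continue  # cheap reject: top-left pixel differs
            box = (x, y, x + nw, y + nh)
            if list(haystack.crop(box).getdata()) == ndata:
                return x, y
    return None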
Q: PGU Tiles collision detection I've been using PGU (Phil's Pygame Utilities) for a while. It has a dictionary called tdata, which is passed as an argument while loading tiles: tdata = { tileno:(agroup, hit_handler, config)} I'm making a pacman clone in which I have 2 groups: player and ghost, for which I want collision detection with the same type of tile. For example, if the tile number is 2, I want this tile to have agroups as both player and ghost. I tried doing the following: tdata = {0x02 :('player', tile_hit_1, config), 0x02 : ('ghost', tile_hit_2, config)} However, on doing this, it only gives collision detection for ghost, not the player. Any ideas on how I can do collision detection for both the player and the ghost with the same type of tile? A: I've had a look at the source code at: http://code.google.com/p/pgu/ In vid.py (http://code.google.com/p/pgu/source/browse/trunk/pgu/vid.py) there is code for loading tdata information. Line 300: def tga_load_tiles(self,fname,size,tdata={}): Then on lines 324 and 325: agroups,hit,config = tdata[n] tile.agroups = self.string2groups(agroups) So looking at the definition of string2groups, which begins on line 369: the agroups parameter is a string which is split on commas. So I think you can put the name of more than one group in the string. Try: tdata = {0x02: ('player,ghost', tile_hit, config)}
PGU Tiles collision detection
I've been using PGU (Phil's Pygame Utilities) for a while. It has a dictionary called tdata, which is passed as an argument while loading tiles: tdata = { tileno:(agroup, hit_handler, config)} I'm making a pacman clone in which I have 2 groups: player and ghost, for which I want collision detection with the same type of tile. For example, if the tile number is 2, I want this tile to have agroups as both player and ghost. I tried doing the following: tdata = {0x02 :('player', tile_hit_1, config), 0x02 : ('ghost', tile_hit_2, config)} However, on doing this, it only gives collision detection for ghost, not the player. Any ideas on how I can do collision detection for both the player and the ghost with the same type of tile?
[ "I've had a look at the source code at: http://code.google.com/p/pgu/\nIn vid.py (http://code.google.com/p/pgu/source/browse/trunk/pgu/vid.py) there is code for loading tdata information.\nLine 300: def tga_load_tiles(self,fname,size,tdata={}):\nThen on lines 324 and 325:\nagroups,hit,config = tdata[n]\ntile.agroups = self.string2groups(agroups)\n\nSo looking at the definiton of string2groups which begins on line 369. The agroups parameter is a string which is split on commas. So I think you can put the name of more than one group in the string.\nTry:\ntdata = {0x02: ('player,ghost', tile_hit, config)}\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0002611839_pygame_python.txt
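A practical wrinkle with the 'player,ghost' fix above: both groups now share a single hit handler. A hedged sketch of dispatching inside that handler, assuming PGU's usual (g, t, a) hit-handler signature (g the tilevid, t the tile, a the sprite that hit the tile) — Player and Ghost are placeholder sprite classes for this sketch, not names from the thread:

    def tile_hit(g, t, a):
        # Dispatch on which kind of sprite actually hit the tile,
        # reusing the two handlers from the question.
        if isinstance(a, Player):
            tile_hit_1(g, t, a)
        elif isinstance(a, Ghost):
            tile_hit_2(g, t, a)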
Q: Python3k ctypes printf printf returns 1 instead of "Hello World!", which is the desired result. I googled it and think it's because of the changes in the way sequences are treated. How do I modify the code to print "Hello World!"? www.mail-archive.com/python-3000@python.org/msg15119.html import ctypes msvcrt=ctypes.cdll.msvcrt string=b"Hello World!" msvcrt.printf("%s", string) A: The first argument needs to be a byte string as well: msvcrt.printf(b"%s", string) The return value of printf is the number of characters printed, which should be 12 in this case. Edit: If you want the string to be returned instead of printed, you can use sprintf instead. This is dangerous and NOT recommended. s = ctypes.create_string_buffer(100) #must be large enough!! msvcrt.sprintf(s, b'%s', b'Hello World!') val = s.value I don't know why you'd want to do this though, since Python has its own string formatting. sprintf is a dangerous method since it is susceptible to buffer overflows.
Python3k ctypes printf
printf returns 1 instead of "Hello World!", which is the desired result. I googled it and think it's because of the changes in the way sequences are treated. How do I modify the code to print "Hello World!"? www.mail-archive.com/python-3000@python.org/msg15119.html import ctypes msvcrt=ctypes.cdll.msvcrt string=b"Hello World!" msvcrt.printf("%s", string)
[ "The first argument needs to be a byte string as well:\nmsvcrt.printf(b\"%s\", string)\n\nThe return value of printf is the number of characters printed, which should be 12 in this case.\nEdit:\nIf you want the string to be returned instead of printed, you can use sprintf instead. This is dangerous and NOT recommended.\ns = ctypes.create_string_buffer(100) #must be large enough!!\nmsvcrt.sprintf(s, b'%s', b'Hello World!')\nval = s.value\n\nI don't know why you'd want to do this though, since Python has its own string formatting. sprintf is a dangerous method since it is susceptible to buffer overflows.\n" ]
[ 4 ]
[]
[]
[ "ctypes", "printf", "python" ]
stackoverflow_0002636597_ctypes_printf_python.txt
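As a follow-up to the buffer-overflow warning in that answer: the Windows CRT also exports a bounded variant, _snprintf, which takes the buffer size and so cannot overrun it. A small hedged sketch (the calls are standard msvcrt exports, but treat the exact usage as illustrative):

    import ctypes

    msvcrt = ctypes.cdll.msvcrt
    buf = ctypes.create_string_buffer(100)
    # Passing the buffer size bounds the write, unlike sprintf.
    msvcrt._snprintf(buf, ctypes.c_size_t(len(buf)), b"%s", b"Hello World!")
    print(buf.value)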
Q: boost python version I'm trying to use the boost.python library in a C++ project (Windows + VS9) but it always tries to link against python25.lib. Is it possible to link with version 2.6.x of Python? Thanks A: You need to recompile the boost-python library, pointing Boost.Build to the needed Python version. P.S. This fixes the problem of undefined references when linking with the needed library. I believe you've already turned off autolinking. A: You could try putting -lpython26 when linking
boost python version
I'm trying to use the boost.python library in a C++ project (Windows + VS9) but it always tries to link against python25.lib. Is it possible to link with version 2.6.x of Python? Thanks
[ "You need to recompile boost-python library pointing Boost.Build to needed python version.\nP.S. This heals a problem of undefined references while linking with library needed. I beleive you've already turned of autolinking.\n", "You could try putting -lpython26 when linking\n" ]
[ 1, 0 ]
[]
[]
[ "boost", "python" ]
stackoverflow_0002635933_boost_python.txt
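A hedged sketch of what "pointing Boost.Build to the needed Python version" looks like in practice for the bjam-based Boost builds of that era — the install path and toolset version are illustrative and must match your machine:

    # user-config.jam (in your home directory or the Boost root)
    using msvc : 9.0 ;
    using python : 2.6 : C:/Python26 ;

    # then, from the Boost root:
    bjam --with-python toolset=msvc-9.0 stage

The stage directory should then contain a boost_python binary linked against python26.lib rather than python25.lib.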
Q: python bind a type to a variable I am a Python noob. I create a class as follows: class t1: x = '' def __init__(self, x): self.x = x class t2: y = '' z = '' def __init__(self, x, y, z): self.y = t1.__init__(x) self.z = z Now, contrary to C++ or Java, I do not bind the data type to y while writing the class definition. It is only from the constructor code that one can tell y is meant to be of type t1. Can we bind a data type while declaring y? A: No. Variables in Python do not have types - y does not have a type. At any moment in time, y refers to an object, and that object has a type. This: y = '' binds y to an object of type str. You can change it later to refer to an object of a different type. y itself has no intrinsic type. See Fredrik Lundh's excellent "Reset your brain" article for further explanation. (By the way, this: self.y = t1.__init__(x) is a rather strange piece of code. Did you mean to say self.y = t1(x)?) A: It's out of the scope, but please note: class A(object): x = None In this context, x is a class variable, not an instance variable and is shared by each instance. It's commonly used in the borg pattern. class A(object): def __init__(self, x): self.y = None self.x = x Here, self.y and self.x are instance variables.
python bind a type to a variable
I am a Python noob. I create a class as follows: class t1: x = '' def __init__(self, x): self.x = x class t2: y = '' z = '' def __init__(self, x, y, z): self.y = t1.__init__(x) self.z = z Now, contrary to C++ or Java, I do not bind the data type to y while writing the class definition. It is only from the constructor code that one can tell y is meant to be of type t1. Can we bind a data type while declaring y?
[ "No. Variables in Python do not have types - y does not have a type. At any moment in time, y refers to an object, and that object has a type. This:\ny = ''\n\nbinds y to an object of type str. You can change it later to refer to an object of a different type. y itself has no intrinsic type.\nSee Fredrik Lundh's excellent \"Reset your brain\" article for further explanation.\n(By the way, this: self.y = t1.__init__(x) is a rather strange piece of code. Did you mean to say self.y = t1(x)?)\n", "It's out of the scope, but please note:\nclass A(object):\n x = None\n\nIn this context, x is a class variable, not an instance variable and is shared by each instance. It's commonly used in the borg pattern.\nclass A(object):\n def __init__(self, x):\n self.y = None\n self.x = x\n\nHere, self.y and self.x are instance variables.\n" ]
[ 6, 3 ]
[]
[]
[ "python", "variables" ]
stackoverflow_0002636491_python_variables.txt
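Since the question asks whether a type can be bound at declaration time — and the answer is that it can't — the closest Python idiom is a runtime check in the constructor. A small sketch (the error message wording is mine):

    class t2:
        def __init__(self, x, y, z):
            if not isinstance(y, t1):
                raise TypeError("y must be a t1 instance")
            self.y = y
            self.z = z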
Q: convert a list of booleans to string How do I convert this: [True, True, False, True, True, False, True] Into this: 'AB DE G' Note: C and F are missing in the output because the corresponding items in the input list are False. A: Assuming your list of booleans is not too long: bools = [True, True, False, True, True, False, True] print ''.join(chr(ord('A') + i) if b else ' ' for i, b in enumerate(bools)) A: You can use string.uppercase instead of chr/ord. This will give you locale-dependent results. For ASCII you can use string.ascii_uppercase. >>> import string >>> bools = [True, True, False, True, True, False, True] >>> ''.join(string.uppercase[i] if b else ' ' for i, b in enumerate(bools)) 'AB DE G' A: In [1]: ''.join(map(lambda b, c: c if b else ' ', [True, True, False, True, True, False, True], 'ABCDEFG')) Out[1]: 'AB DE G' A: inputs = [True, True, False, True, True, False, True] outputs = [] for i,b in enumerate(inputs): if b: outputs.append(chr(65+i)) # 65 = ord('A') else: outputs.append(' ') outputstring = ''.join(outputs) or the list comprehension version inputs = [True, True, False, True, True, False, True] outputstring = ''.join(chr(65+i) if b else ' ' for i,b in enumerate(inputs)) A: Here's a generalized solution based on numpy.where(): #!/usr/bin/env python import string, itertools def where(selectors, x, y): return (xx if s else yy for xx, yy, s in itertools.izip(x, y, selectors)) condition = [True, True, False, True, True, False, True] print ''.join(where(condition, string.uppercase, itertools.cycle(' '))) # -> AB DE G import numpy as np print ''.join(np.where(condition, list(string.uppercase)[:len(condition)], ' ')) # -> AB DE G
convert a list of booleans to string
How do I convert this: [True, True, False, True, True, False, True] Into this: 'AB DE G' Note: C and F are missing in the output because the corresponding items in the input list are False.
[ "Assuming your list of booleans is not too long:\nbools = [True, True, False, True, True, False, True]\n\nprint ''.join(chr(ord('A') + i) if b else ' ' for i, b in enumerate(bools))\n\n", "You can use string.uppercase instead of chr/ord. This will give you locale-dependent results. For ascii you can use string.ascii_uppercase.\n>>> import string\n>>> bools = [True, True, False, True, True, False, True]\n>>> ''.join(string.uppercase[i] if b else ' ' for i, b in enumerate(bools))\n\n'AB DE G'\n\n", "In [1]: ''.join(map(lambda b, c: c if b else ' ',\n [True, True, False, True, True, False, True],\n 'ABCDEFG'))\nOut[1]: 'AB DE G'\n\n", "inputs = [True, True, False, True, True, False, True]\noutputs = []\nfor i,b in enumerate(inputs):\n if b:\n outputs.append(chr(65+i)) # 65 = ord('A')\n else:\n outputs.append(' ')\noutputstring = ''.join(outputs)\n\nor the list comprehension version\ninputs = [True, True, False, True, True, False, True]\noutputstring = ''.join(chr(65+i) if b else ' ' for i,b in enumerate(inputs))\n\n", "Here's generalized solution based on numpy.where():\n#!/usr/bin/env python\nimport string, itertools\n\ndef where(selectors, x, y):\n return (xx if s else yy for xx, yy, s in itertools.izip(x, y, selectors))\n\ncondition = [True, True, False, True, True, False, True]\nprint ''.join(where(condition, string.uppercase, itertools.cycle(' ')))\n# -> AB DE G\n\nimport numpy as np\nprint ''.join(np.where(condition, list(string.uppercase)[:len(condition)], ' '))\n# -> AB DE G\n\n" ]
[ 11, 9, 3, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002635964_python.txt
Q: django + south + python: strange behavior when using a text string received as a parameter in a function This is my first question. I'm trying to execute a SQL query in django (south migration): from django.db import connection # ... class Migration(SchemaMigration): # ... def transform_id_to_pk(self, table): try: db.delete_primary_key(table) except: pass finally: cursor = connection.cursor() # This does not work cursor.execute('SELECT MAX("id") FROM "%s"', [table]) # I don't know if this works. try: minvalue = cursor.fetchone()[0] except: minvalue = 1 seq_name = table + '_id_seq' db.execute('CREATE SEQUENCE "%s" START WITH %s OWNED BY "%s"."id"', [seq_name, minvalue, table]) db.execute('ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval("%s")', [table, seq_name + '::regclass']) db.create_primary_key(table, ['id']) # ... I use this function like this: self.transform_id_to_pk('my_table_name') So it should: Find the biggest existing ID or 0 (it crashes) Create a sequence name Create the sequence Update the ID field to use the sequence Update the ID as PK But it crashes and the error says: File "../apps/accounting/migrations/0003_setup_tables.py", line 45, in forwards self.delegation_table_setup(orm) File "../apps/accounting/migrations/0003_setup_tables.py", line 478, in delegation_table_setup self.transform_id_to_pk('accounting_delegation') File "../apps/accounting/migrations/0003_setup_tables.py", line 20, in transform_id_to_pk cursor.execute(u'SELECT MAX("id") FROM "%s"', [table.encode('utf-8')]) File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) psycopg2.ProgrammingError: relation "E'accounting_delegation'" does not exist LINE 1: SELECT MAX("id") FROM "E'accounting_delegation'" ^ I have shortened the file paths for convenience. What does that "E'accounting_delegation'" mean? How could I get rid of it? Thank you! Carlos. A: The problem is that you're using DB-API parameterization for things that are not SQL data. When you do something like: cursor.execute('INSERT INTO table_foo VALUES (%s, %s)', (col1, col2)) the DB-API module (django's frontend for whatever database you are using, in this case) will know to escape the contents of 'col1' and 'col2' appropriately, and replace the %s's with them. Note that there are no quotes around the %s's. But that only works for SQL data, not for SQL metadata, such as table names and sequence names, because they need to be quoted differently (or not at all.) When you do cursor.execute('INSERT INTO "%s" VALUES (%s, %s)', (tablename, col1, col2)) the tablename gets quoted as if you mean it to be string data to insert, and you end up with, for example, "'table_foo'". You need to separate your SQL metadata, which is part of the query, and your SQL data, which is not, like so: sql = 'INSERT INTO TABLE "%s" VALUES (%%s, %%s)' % (tablename,) cursor.execute(sql, (col1, col2)) Note that because the django DB-API frontend's paramstyle is 'pyformat' (it uses %s for placeholders) you need to escape those when you do the string formatting to create the SQL you want to execute. And note that this isn't secure against SQL injection attacks when you take the tablename from an insecure source and don't validate it.
django + south + python: strange behavior when using a text string received as a parameter in a function
This is my first question. I'm trying to execute a SQL query in django (south migration): from django.db import connection # ... class Migration(SchemaMigration): # ... def transform_id_to_pk(self, table): try: db.delete_primary_key(table) except: pass finally: cursor = connection.cursor() # This does not work cursor.execute('SELECT MAX("id") FROM "%s"', [table]) # I don't know if this works. try: minvalue = cursor.fetchone()[0] except: minvalue = 1 seq_name = table + '_id_seq' db.execute('CREATE SEQUENCE "%s" START WITH %s OWNED BY "%s"."id"', [seq_name, minvalue, table]) db.execute('ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval("%s")', [table, seq_name + '::regclass']) db.create_primary_key(table, ['id']) # ... I use this function like this: self.transform_id_to_pk('my_table_name') So it should: Find the biggest existing ID or 0 (it crashes) Create a sequence name Create the sequence Update the ID field to use the sequence Update the ID as PK But it crashes and the error says: File "../apps/accounting/migrations/0003_setup_tables.py", line 45, in forwards self.delegation_table_setup(orm) File "../apps/accounting/migrations/0003_setup_tables.py", line 478, in delegation_table_setup self.transform_id_to_pk('accounting_delegation') File "../apps/accounting/migrations/0003_setup_tables.py", line 20, in transform_id_to_pk cursor.execute(u'SELECT MAX("id") FROM "%s"', [table.encode('utf-8')]) File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) psycopg2.ProgrammingError: relation "E'accounting_delegation'" does not exist LINE 1: SELECT MAX("id") FROM "E'accounting_delegation'" ^ I have shortened the file paths for convenience. What does that "E'accounting_delegation'" mean? How could I get rid of it? Thank you! Carlos.
[ "The problem is that you're using DB-API parameterization for things that are not SQL data. When you do something like:\ncursor.execute('INSERT INTO table_foo VALUES (%s, %s)', (col1, col2))\n\nthe DB-API module (django's frontend for whatever database you are using, in this case) will know to escape the contents of 'col1' and 'col2' appropriately, and replace the %s's with them. Note that there are no quotes around the %s's. But that only works for SQL data, not for SQL metadata, such as table names and sequence names, because they need to be quoted differently (or not at all.) When you do\ncursor.execute('INSERT INTO \"%s\" VALUES (%s, %s)', (tablename, col1, col2))\n\nthe tablename gets quoted as if you mean it to be string data to insert, and you end up with, for example, \"'table_foo'\". You need to separate your SQL metadata, which is part of the query, and your SQL data, which is not, like so:\nsql = 'INSERT INTO TABLE \"%s\" VALUES (%%s, %%s)' % (tablename,)\ncursor.execute(sql, (col1, col2))\n\nNote that because the django DB-API frontend's paramstyle is 'pyformat' (it uses %s for placeholders) you need to escape those when you do the string formatting to create the SQL you want to execute. And note that this isn't secure against SQL injection attacks when you take the tablename from an insecure source and don't validate it.\n" ]
[ 4 ]
[]
[]
[ "django", "django_south", "python", "string" ]
stackoverflow_0002636839_django_django_south_python_string.txt
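A hedged sketch of guarding the table-name interpolation that this answer warns about — the whitelist regex and the helper name are my own choices, not Django API:

    import re

    def quote_identifier(name):
        # Accept only plain SQL identifiers before interpolating them
        # into query text; reject anything else outright.
        if not re.match(r'^[A-Za-z_][A-Za-z0-9_]*$', name):
            raise ValueError('unsafe SQL identifier: %r' % name)
        return '"%s"' % name

    sql = 'SELECT MAX("id") FROM %s' % quote_identifier(table)
    cursor.execute(sql)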
Q: Using memcache to store obj's in google app engine I'm trying to use memcache to cache data retrieved from the datastore. Storing strings works fine. But can't one store an object? I get an error "TypeError: 'str' object is not callable" when trying to store with this: pageData = StandardPage(page) memcache.add(memcacheid, pageData, 60) I've read in the documentation that it requires "The value type can be any value supported by the Python pickle module for serializing values." But I don't really understand what that is, or how to convert pageData to it. Any ideas? ..fredrik EDIT: I was a bit unclear. PageType is a class that, among other things, gets data from the datastore and manipulates it. The class looks like this: class PageType(): def __init__(self, page): self.pageData = PageData(data.Modules.gql('WHERE page = :page', page = page.key()).fetch(100)) self.modules = [] def renderEditPage(self): self.addModules() return self.modules class StandardPage(PageTypes.PageType): templateName = 'Altan StandardPage' templateFile = 'templates/page.html' def __init__(self, page): self.pageData = PageTypes.PageData(data.Modules.gql('WHERE page = :page', page = page.key()).fetch(100)) self.modules = [] self.childModules = [] for child in page.childPages: self.childModules.append(PageTypes.PageData(data.Modules.gql('WHERE page = :page', page = child.key()).fetch(100))) def addModules(self): self.modules.append(PageTypes.getStandardHeading(self, 'MainHeading')) self.modules.append(PageTypes.getStandardTextBox(self, 'FirstTextBox')) self.modules.append(PageTypes.getStandardTextBox(self, 'SecondTextBox')) self.modules.append(PageTypes.getStandardHeading(self, 'ListHeading')) self.modules.append(PageTypes.getStandardTextBox(self, 'ListTextBox')) self.modules.append(PageTypes.getDynamicModules(self)) A: You can use db.model_to_protobuf to turn your object into something that can be stored in memcache. Similarly, db.model_from_protobuf will get your object back. Resource: Datastore Functions
Using memcache to store obj's in google app engine
I'm trying to use memcache to cache data retrieved from the datastore. Storing strings works fine. But can't one store an object? I get an error "TypeError: 'str' object is not callable" when trying to store with this: pageData = StandardPage(page) memcache.add(memcacheid, pageData, 60) I've read in the documentation that it requires "The value type can be any value supported by the Python pickle module for serializing values." But I don't really understand what that is, or how to convert pageData to it. Any ideas? ..fredrik EDIT: I was a bit unclear. PageType is a class that, among other things, gets data from the datastore and manipulates it. The class looks like this: class PageType(): def __init__(self, page): self.pageData = PageData(data.Modules.gql('WHERE page = :page', page = page.key()).fetch(100)) self.modules = [] def renderEditPage(self): self.addModules() return self.modules class StandardPage(PageTypes.PageType): templateName = 'Altan StandardPage' templateFile = 'templates/page.html' def __init__(self, page): self.pageData = PageTypes.PageData(data.Modules.gql('WHERE page = :page', page = page.key()).fetch(100)) self.modules = [] self.childModules = [] for child in page.childPages: self.childModules.append(PageTypes.PageData(data.Modules.gql('WHERE page = :page', page = child.key()).fetch(100))) def addModules(self): self.modules.append(PageTypes.getStandardHeading(self, 'MainHeading')) self.modules.append(PageTypes.getStandardTextBox(self, 'FirstTextBox')) self.modules.append(PageTypes.getStandardTextBox(self, 'SecondTextBox')) self.modules.append(PageTypes.getStandardHeading(self, 'ListHeading')) self.modules.append(PageTypes.getStandardTextBox(self, 'ListTextBox')) self.modules.append(PageTypes.getDynamicModules(self))
[ "You can use db.model_to_protobuf to turn your object into something that can be stored in memcache. Similarly, db.model_from_protobuf will get your object back.\nResource:\nDatastore Functions\n" ]
[ 0 ]
[]
[]
[ "google_app_engine", "memcached", "python" ]
stackoverflow_0002636931_google_app_engine_memcached_python.txt
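For concreteness, the usual serialization recipe built on those two functions looks roughly like this — a sketch that assumes the object being cached is a db.Model entity (the question's StandardPage is a plain class, so it would need to cache its underlying model instances instead):

    from google.appengine.api import memcache
    from google.appengine.datastore import entity_pb
    from google.appengine.ext import db

    def cache_entity(key, entity, time=60):
        # Serialize the model to a protocol-buffer byte string.
        memcache.set(key, db.model_to_protobuf(entity).Encode(), time)

    def get_cached_entity(key):
        data = memcache.get(key)
        if data is None:
            return None
        return db.model_from_protobuf(entity_pb.EntityProto(data))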
Q: Python file - Want to change it to a function or a class I have a python program/file that I want to run repeatedly and calculate the averages of some variables over these runs. To do so, I thought it might be convenient to convert this program into a function or a class. One way I can think of is to add a def Main(): line at the top and indent every line manually within it. Is there an easier way? I am using pydev on eclipse. Thanks. A: Yes, you are on the right track. Add the def Main(): line Then in pydev select all the other code and hit Tab, which will indent it all. To make the code runnable from the command line, i.e. let it call the Main function, you will need to add some code that is executed when the module is loaded. So at the end of the file add if __name__ == '__main__': Main() The if command stops the code being run if the file is loaded via an import. A: IMHO, your own way is good and straightforward. Also you need not indent each and every line manually. Most editors have support for indenting a whole selected block. A: Add prints of these variables, collect the outputs from many runs, and compute the average. Or maybe I don't understand the question... A: Not really. Any code that is not inside a class/function will be executed at module load time (once). I don't know about eclipse for sure, but in most sane text editors you can select a block of text and press Tab to indent the block. Some editors have this in a menu, something like "Edit->Indent". A: Put at the top: while True: or do what you said.... Or you could turn it into a class... but that would be quite unnecessary in this case A: What about using class iterators?
Python file - Want to change it to a function or a class
I have a python program/file that I want to run repeatedly and calculate the averages of some variables over these runs. To do so, I thought it might be convenient to convert this program into a function or a class. One way I can think of is to add a def Main(): line at the top and indent every line manually within it. Is there an easier way? I am using pydev on eclipse. Thanks.
[ "Yes you are on the right track.\nAdd the def Main(): line\nThen in pydev select all the other code and then hit tab which will indent all the code\nTo make the code runnable from the command line ie let it call the Main function you will need to add some code that is executed when the module is loaded. So at the end of the file add\nif __name__ == '__main__':\n Main()\n\nThe if command stops the code being run if the file is loaded via an import.\n", "IMHO, Your own way is good and straightforward.\nAlso you need not indent each and every line manually. \nMost of the editors have support for indenting whole selected block.\n", "Add prints of these variables, collect the outputs from many runs, and compute the average.\nOr maybe I don't understand the question... \n", "Not really. \nAny code that is not inside a class/function will be executed on module load time (once).\nI don't know about eclipse for sure, but in most sane text editors you can select a block of text and press Tab to indent the block. Some editors have this in menu, something like \"Edit->Indent\".\n", "Put at the top:\nwhile True:\n\nor do what you said.... Or you could turn it into a class.. but that would be quite unnecessary in this case\n", "What about using class iterators.\n" ]
[ 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "pydev", "python" ]
stackoverflow_0002636808_pydev_python.txt
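Tying this back to the stated goal of averaging variables over repeated runs — a small sketch, assuming Main is changed to return the value of interest (the run count of 100 is arbitrary):

    def Main():
        # ... existing script body, indented under the function ...
        return result  # the variable you want to average

    if __name__ == '__main__':
        runs = [Main() for _ in range(100)]
        print sum(runs) / float(len(runs))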
Q: Django model manager didn't work with related object when I do aggregated query I'm having trouble doing an aggregation query on a many-to-many related field. Here are my models: class SortedTagManager(models.Manager): use_for_related_fields = True def get_query_set(self): orig_query_set = super(SortedTagManager, self).get_query_set() # FIXME `used` is wrongly counted return orig_query_set.distinct().annotate( used=models.Count('users')).order_by('-used') class Tag(models.Model): content = models.CharField(max_length=32, unique=True) creator = models.ForeignKey(User, related_name='tags_i_created') users = models.ManyToManyField(User, through='TaggedNote', related_name='tags_i_used') objects_sorted_by_used = SortedTagManager() class TaggedNote(models.Model): """Association table of both (Tag , Note) and (Tag, User)""" note = models.ForeignKey(Note) # Note is what's tagged in my app tag = models.ForeignKey(Tag) tagged_by = models.ForeignKey(User) class Meta: unique_together = (('note', 'tag'),) However, the value of the aggregated field used is only correct when the model is queried directly: for t in Tag.objects.all(): print t.used # this works correctly for t in user.tags_i_used.all(): print t.used #prints n^2 when it should give n Would you please tell me what's wrong with it? Thanks in advance. A: I have figured out what's wrong and how to fix it now :) As stated in the Django doc: Django interprets the first Manager defined in a class as the "default" Manager, and several parts of Django will use that Manager exclusively for that model. In my case, I should make sure that SortedTagManager is the first Manager defined. 2. I should count notes instead of users: Count('notes', distinct=True)
Django model manager didn't work with related object when I do aggregated query
I'm having trouble doing an aggregation query on a many-to-many related field. Here are my models: class SortedTagManager(models.Manager): use_for_related_fields = True def get_query_set(self): orig_query_set = super(SortedTagManager, self).get_query_set() # FIXME `used` is wrongly counted return orig_query_set.distinct().annotate( used=models.Count('users')).order_by('-used') class Tag(models.Model): content = models.CharField(max_length=32, unique=True) creator = models.ForeignKey(User, related_name='tags_i_created') users = models.ManyToManyField(User, through='TaggedNote', related_name='tags_i_used') objects_sorted_by_used = SortedTagManager() class TaggedNote(models.Model): """Association table of both (Tag , Note) and (Tag, User)""" note = models.ForeignKey(Note) # Note is what's tagged in my app tag = models.ForeignKey(Tag) tagged_by = models.ForeignKey(User) class Meta: unique_together = (('note', 'tag'),) However, the value of the aggregated field used is only correct when the model is queried directly: for t in Tag.objects.all(): print t.used # this works correctly for t in user.tags_i_used.all(): print t.used #prints n^2 when it should give n Would you please tell me what's wrong with it? Thanks in advance.
[ "I have figured out what's wrong and how to fix it now :)\n\nAs stated in the Django doc:\n\n\nDjango interprets the first Manager defined in a class as the \"default\" Manager, and several parts of Django will use that Manager exclusively for that model. \n\nIn my case, I should make sure that SortedTagManager is the first Manager defined.\n2.I should have count notes instead of users:\nCount('notes', distinct=True)\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_aggregation", "django_models", "python" ]
stackoverflow_0002635587_django_django_aggregation_django_models_python.txt
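Putting the two fixes together, the manager would look roughly like this — a sketch that assumes TaggedNote.tag is given related_name='notes' so the reverse relation matches the Count() call:

    class SortedTagManager(models.Manager):
        use_for_related_fields = True

        def get_query_set(self):
            qs = super(SortedTagManager, self).get_query_set()
            # distinct=True keeps the join from double-counting rows.
            return qs.annotate(
                used=models.Count('notes', distinct=True)).order_by('-used')

and on Tag, objects_sorted_by_used = SortedTagManager() must be the first manager declared so Django treats it as the default for related lookups.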
Q: Redhat | How to compile Python 2.6 for 64bit I'm trying to compile Python 2.6 for 64-bit. I tried various compile commands, but I'm not sure whether they are correct: ./configure --with-universal-archs=32-bit --prefix="$HOME/python" make make install What is the correct syntax? A: What exactly doesn't work? Do you get any error message? Try a simple compilation without installing first: $ cd path/to/python/source $ ./configure $ make all ... wait for some time ... $ make test # this runs python's test suite, you can usually skip this $ ./python # note the ./ runs the just-built python instead of the system's python $ # note: do not run make install yet, or you will override the system's python; see below Also, make sure you have make (GNU Make or otherwise) installed. Where did you get the source? If you're getting it directly from the repository, there is a chance that the source is broken or you may need to re-run the autotools. After testing that the compilation actually works, then you can: $ cd path/to/python/source/ $ ./configure --prefix=/where/you/want/to/install/it $ make all ... wait for some time ... $ make test # this runs python's test suite, you can usually skip this $ make install
Redhat | How to compile Python 2.6 for 64bit
I'm trying to compile Python 2.6 for 64-bit. I tried various compile commands, but I'm not sure whether they are correct: ./configure --with-universal-archs=32-bit --prefix="$HOME/python" make make install What is the correct syntax?
[ "What exactly doesn't work? Do you get any error message? \nTry simple compilation without installing first:\n$ cd path/to/python/source\n$ ./configure\n$ make all\n... wait for some time ...\n$ make test # this runs python's test suite, you can usually skip this\n$ ./python # note the ./ runs the just installed python instead of system's python\n$ # note: do not run make install yet, or you will override system's python, see below\n\nalso, make sure you have make (GNU Make or otherwise) installed.\nWhere did you get the source? If you're getting it directly from the repository, there is a chance that the source is broken or you may need to re-run autotool.\nAfter testing that the compilation actually works, then you can:\n$ cd path/to/python/source/\n$ ./configure --prefix=/where/you/want/to/install/it\n$ make all\n... wait for some time ...\n$ make test # this runs python's test suite, you can usually skip this\n$ make install\n\n" ]
[ 1 ]
[]
[]
[ "python", "redhat" ]
stackoverflow_0002637166_python_redhat.txt
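On the 64-bit point itself, which the answer above doesn't address: a hedged sketch of forcing a 64-bit build by passing compiler flags through configure — whether plain -m64 suffices depends on the toolchain and on having 64-bit libraries installed:

    cd path/to/python/source
    ./configure --prefix="$HOME/python" CC="gcc -m64" LDFLAGS="-m64"
    make
    make install

Note that on a 64-bit Red Hat with a 64-bit default gcc, a plain ./configure already yields a 64-bit interpreter; you can verify with python -c "import sys; print sys.maxint" (it prints 9223372036854775807 on a 64-bit build).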
Q: extract specific element from nested elements using lxml html Hi all, I am having some problems that I think can be attributed to XPath. I am using the html module from the lxml package to try and get at some data. I am providing the most simplified situation below, but keep in mind the html I am working with is much uglier. <table> <tr> <td> <table> <tr><td></td></tr> <tr><td> <table> <tr><td><u><b>Header1</b></u></td></tr> <tr><td>Data</td></tr> </table> </td></tr> </table> </td></tr> </table> What I really want is the deeply nested table, because it has the header text "Header1". I am trying like so: from lxml import html page = '...' tree = html.fromstring(page) print tree.xpath('//table[//*[contains(text(), "Header1")]]') but that gives me all of the table elements. I just want the one table that contains this text. I understand what is going on but am having a hard time figuring out how to do this besides breaking out some nasty regex. Any thoughts? A: Use: //td[text() = 'Header1']/ancestor::table[1] A: Find the header you are interested in and then pull out its table. //u[b = 'Header1']/ancestor::table[1] or //td[not(.//table) and .//b = 'Header1']/ancestor::table[1] Note that // always starts at the document root (!). You can't do: //table[//*[contains(text(), "Header1")]] and expect the inner predicate (//*…) to magically start at the right context. Use .// to start at the context node. Even then, this: //table[.//*[contains(text(), "Header1")]] won't work since even the outermost table contains the text 'Header1' somewhere deep down, so the predicate evaluates to true for every table in your example. Use not() like I did to make sure no other tables are nested. Also, don't test the condition on every node .//*, since it can't be true for every node to begin with. It's more efficient to be specific. A: Perhaps this would work for you: tree.xpath("//table[not(descendant::table)]/*[contains(., 'Header1')]") The not(descendant::table) bit ensures that you're getting the innermost table. A: table, = tree.xpath('//*[.="Header1"]/ancestor::table[1]') //*[text()="Header1"] selects an element anywhere in a document with text Header1. ancestor::table[1] selects the first ancestor of the element that is table. Complete example #!/usr/bin/env python from lxml import html page = """ <table> <tr> <td> <table> <tr><td></td></tr> <tr><td> <table> <tr><td><u><b>Header1</b></u></td></tr> <tr><td>Data</td></tr> </table> </td></tr> </table> </td></tr> </table> """ tree = html.fromstring(page) table, = tree.xpath('//*[.="Header1"]/ancestor::table[1]') print html.tostring(table)
extract specific element from nested elements using lxml html
Hi all, I am having some problems that I think can be attributed to XPath. I am using the html module from the lxml package to try and get at some data. I am providing the most simplified situation below, but keep in mind the html I am working with is much uglier. <table> <tr> <td> <table> <tr><td></td></tr> <tr><td> <table> <tr><td><u><b>Header1</b></u></td></tr> <tr><td>Data</td></tr> </table> </td></tr> </table> </td></tr> </table> What I really want is the deeply nested table, because it has the header text "Header1". I am trying like so: from lxml import html page = '...' tree = html.fromstring(page) print tree.xpath('//table[//*[contains(text(), "Header1")]]') but that gives me all of the table elements. I just want the one table that contains this text. I understand what is going on but am having a hard time figuring out how to do this besides breaking out some nasty regex. Any thoughts?
[ "Use:\n//td[text() = 'Header1']/ancestor::table[1]\n\n", "Find the header you are interested in and then pull out its table.\n\n//u[b = 'Header1']/ancestor::table[1]\n\nor \n\n//td[not(.//table) and .//b = 'Header1']/ancestor::table[1]\n\nNote that // always starts at the document root (!). You can't do:\n\n//table[//*[contains(text(), \"Header1\")]]\n\nand expect the inner predicate (//*…) to magically start at the right context. Use .// to start at the context node. Even then, this:\n\n//table[.//*[contains(text(), \"Header1\")]]\n\nwon't work since even the outermost table contains the text 'Header1' somewhere deep down, so the predicate evaluates to true for every table in your example. Use not() like I did to make sure no other tables are nested.\nAlso, don't test the condition on every node .//*, since it can't be true for every node to begin with. It's more efficient to be specific.\n", "Perhaps this would work for you:\ntree.xpath(\"//table[not(descendant::table)]/*[contains(., 'Header1')]\")\n\nThe not(descendant::table) bit ensures that you're getting the innermost table.\n", "table, = tree.xpath('//*[.=\"Header1\"]/ancestor::table[1]')\n\n\n//*[text()=\"Header1\"] selects an element anywhere in a document with text Header1. \nancestor::table[1] selects the first ancestor of the element that is table.\n\nComplete example\n#!/usr/bin/env python\nfrom lxml import html\n\npage = \"\"\"\n<table>\n <tr>\n <td>\n <table>\n <tr><td></td></tr>\n <tr><td>\n <table>\n <tr><td><u><b>Header1</b></u></td></tr> \n <tr><td>Data</td></tr>\n </table>\n </td></tr>\n </table>\n </td></tr>\n</table>\n\"\"\"\n\ntree = html.fromstring(page)\ntable, = tree.xpath('//*[.=\"Header1\"]/ancestor::table[1]')\nprint html.tostring(table)\n\n" ]
[ 3, 2, 0, 0 ]
[]
[]
[ "html", "lxml", "parsing", "python", "xpath" ]
stackoverflow_0002634931_html_lxml_parsing_python_xpath.txt
Q: Displaying a Forecast Widget on a website I am looking to display forecast information on my Django website. Do you have any idea how I can do that? I looked at https://registration.weather.com/ursa/wow/ but it is in English and I didn't find any way to put it in French. I looked at libgweather, but it didn't help me much. Do you know how I can do that? Thank you for your help, A: Here it is: http://france.meteofrance.com/france/accueil/partenaire
Displaying a Forecast Widget on a website
I am looking to display forecast information on my Django website. Do you have any idea how I can do that? I looked at https://registration.weather.com/ursa/wow/ but it is in English and I didn't find any way to put it in French. I looked at libgweather, but it didn't help me much. Do you know how I can do that? Thank you for your help,
[ "Here it is : http://france.meteofrance.com/france/accueil/partenaire\n" ]
[ 2 ]
[]
[]
[ "forecasting", "python", "web", "widget" ]
stackoverflow_0002637690_forecasting_python_web_widget.txt
Q: parsing list in python I have a list in Python which has the following entries name-1 name-2 name-3 name-4 name-1 name-2 name-3 name-4 name-1 name-2 name-3 name-4 I would like to remove name-1 from the list except for its first appearance -- the resultant list should look like name-1 name-2 name-3 name-4 name-2 name-3 name-4 name-2 name-3 name-4 How can I achieve this? A: def remove_but_first( lst, it): first = lst.index( it ) # everything up to the first occurrence of it, then the rest of the list without all it return lst[:first+1] + [ x for x in lst[first:] if x != it ] s = [1,2,3,4,1,5,6] print remove_but_first( s, 1) A: Assuming name-1 denotes "the first element": [names[0]] + [n for n in names[1:] if n != names[0]] EDIT: If the overall goal is to de-duplicate the entire list, just use set(names). A: Based on Marcelo's solution: [name for cnt,name in enumerate(names) if (name != names[0] or cnt == 0)] A: Find the index of the first element you wish to remove, then filter the rest of the list. The following works in Python 2.5: def removeAllButFirst(elem, myList): idx = myList.index(elem) return myList[0:idx+1] + filter(lambda x: x != elem, myList[idx+1:]) A: mylist = ['name-1', 'name-2', 'name-3', 'name-4', 'name-1', 'name-2', 'name-3', 'name-4', 'name-1', 'name-2', 'name-3', 'name-4'] newlist = filter(lambda x: x != 'name-1', mylist) newlist.insert(mylist.index('name-1'), 'name-1') print newlist ['name-1', 'name-2', 'name-3', 'name-4', 'name-2', 'name-3', 'name-4', 'name-2', 'name-3', 'name-4']
parsing list in python
I have a list in Python which has the following entries name-1 name-2 name-3 name-4 name-1 name-2 name-3 name-4 name-1 name-2 name-3 name-4 I would like to remove name-1 from the list except for its first appearance -- the resultant list should look like name-1 name-2 name-3 name-4 name-2 name-3 name-4 name-2 name-3 name-4 How can I achieve this?
[ "def remove_but_first( lst, it):\n first = lst.index( it )\n # everything up to the first occurance of it, then the rest of the list without all it\n return lst[:first+1] + [ x for x in lst[first:] if x != it ]\n\ns = [1,2,3,4,1,5,6]\nprint remove_but_first( s, 1)\n\n", "Assuming name-1 denotes \"the first element\":\n[names[0]] + [n for n in names[1:] if n != names[0]]\n\nEDIT: If the overall goal is to de-duplicate the entire list, just use set(names).\n", "Based on Marcelo's solution:\n[name for cnt,name in enumerate(names) if (name != names[0] or cnt > 0)]\n", "Find the index of the first element you wish to remove, then filter the rest of the list.\nThe following works in Python 2.5:\ndef removeAllButFirst(elem, myList):\n idx = myList.index(elem)\n return myList[0:idx+1] + filter(lambda x: x != elem, myList[idx+1:])\n\n", "mylist = ['name-1', 'name-2', 'name-3', 'name-4', 'name-1', 'name-2', 'name-3', 'name-4', 'name-1', 'name-2', 'name-3', 'name-4']\nnewlist = filter(lambda x: x != 'name-1', mylist)\nnewlist.insert(mylist.index('name-1'), 'name-1')\nprint newlist\n['name-1', 'name-2', 'name-3', 'name-4', 'name-2', 'name-3', 'name-4', 'name-2', 'name-3', 'name-4']\n\n" ]
[ 3, 2, 2, 1, 1 ]
[]
[]
[ "list", "python" ]
stackoverflow_0002637480_list_python.txt
Q: How do I match contents of an element in XPath (lxml)? I want to parse HTML with lxml using XPath expressions. My problem is matching for the contents of a tag: For example given the <a href="http://something">Example</a> element I can match the href attribute using .//a[@href='http://something'] but given the expression .//a[.='Example'] or even .//a[contains(.,'Example')] lxml throws the 'invalid node predicate' exception. What am I doing wrong? EDIT: Example code: from lxml import etree from cStringIO import StringIO html = '<a href="http://something">Example</a>' parser = etree.HTMLParser() tree = etree.parse(StringIO(html), parser) print tree.find(".//a[text()='Example']").tag Expected output is 'a'. I get 'SyntaxError: invalid node predicate' A: I would try with: .//a[text()='Example'] using the xpath() method: tree.xpath(".//a[text()='Example']")[0].tag In case you would like to use iterfind(), findall(), find(), findtext(), keep in mind that advanced features like value comparison and functions are not available in ElementPath. lxml.etree supports the simple path syntax of the find, findall and findtext methods on ElementTree and Element, as known from the original ElementTree library (ElementPath). As an lxml specific extension, these classes also provide an xpath() method that supports expressions in the complete XPath syntax, as well as custom extension functions.
How do I match contents of an element in XPath (lxml)?
I want to parse HTML with lxml using XPath expressions. My problem is matching for the contents of a tag: For example given the <a href="http://something">Example</a> element I can match the href attribute using .//a[@href='http://something'] but given the expression .//a[.='Example'] or even .//a[contains(.,'Example')] lxml throws the 'invalid node predicate' exception. What am I doing wrong? EDIT: Example code: from lxml import etree from cStringIO import StringIO html = '<a href="http://something">Example</a>' parser = etree.HTMLParser() tree = etree.parse(StringIO(html), parser) print tree.find(".//a[text()='Example']").tag Expected output is 'a'. I get 'SyntaxError: invalid node predicate'
[ "I would try with:\n.//a[text()='Example']\nusing xpath() method:\ntree.xpath(\".//a[text()='Example']\")[0].tag\n\nIf case you would like to use iterfind(), findall(), find(), findtext(), keep in mind that advanced features like value comparison and functions are not available in ElementPath.\n\nlxml.etree supports the simple path\n syntax of the find, findall and\n findtext methods on ElementTree and\n Element, as known from the original\n ElementTree library (ElementPath). As\n an lxml specific extension, these\n classes also provide an xpath() method\n that supports expressions in the\n complete XPath syntax, as well as\n custom extension functions.\n\n" ]
[ 21 ]
[]
[]
[ "lxml", "predicate", "python", "xpath" ]
stackoverflow_0002637760_lxml_predicate_python_xpath.txt
Q: Running py.test from emacs What I would like is for C-c C-c to run py.test and display the output in the other buffer if the name of the file being edited begins with test_, and to normally run py-execute-buffer otherwise. How would I do this? I am using emacs 23.1.1 with python-mode and can access py.test from the command line. A: This isn't particularly well-tested; it's just a rough idea. (defun py-do-it () (interactive) (if (string-match (rx bos "test_") (file-name-nondirectory (buffer-file-name))) (compile "py.test") (py-execute-buffer))) (add-hook 'python-mode-hook (lambda () (local-set-key (kbd "<f5>") ;or e.g. (kbd "C-c C-c") as asked 'py-do-it)))
Running py.test from emacs
What I would like is for C-c C-c to run py.test and display the output in the other buffer if the name of the file being edited begins with test_, and to normally run py-execute-buffer otherwise. How would I do this? I am using emacs 23.1.1 with python-mode and can access py.test from the command line.
[ "This isn't particularly well-tested; it's just a rough idea.\n(defun py-do-it ()\n (interactive)\n (if (string-match\n (rx bos \"test_\")\n (file-name-nondirectory (buffer-file-name)))\n (compile \"py.test\")\n (py-execute-buffer)))\n\n(add-hook 'python-mode-hook\n (lambda ()\n (local-set-key\n (kbd \"F5\") ;or whatever\n 'py-do-it)))\n\n" ]
[ 8 ]
[]
[]
[ "emacs", "pytest", "python" ]
stackoverflow_0002635523_emacs_pytest_python.txt
Q: SQL Alchemy related Objects Error from sqlalchemy.orm import relation, backref from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey, Date, Sequence from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class GUI_SCENARIO(Base): __tablename__ = 'GUI_SCENARIO' Scenario_ID = Column(Integer, primary_key=True) Definition_Date = Column(Date) guiScenarioDefinition = relation('GUI_SCENARIO_DEFINITION', order_by='GUI_SCENARIO_DEFINITION.Scenario_Definition_ID', backref='guiScenario') def __init__(self, Scenario_ID=None, Definition_Date=None): self.Scenario_ID = Scenario_ID self.Definition_Date = Definition_Date class GUI_SCENARIO_DEFINITION(Base): __tablename__='GUI_SCENARIO_DEFINITION' Scenario_Definition_ID = Column(Integer, Sequence('Scenario_Definition_ID_SEQ'), primary_key=True) Scenario_FK = Column(Integer, ForeignKey('GUI_SCENARIO.Scenario_ID')) Definition_Date=Column(Date) guiScenario = relation(GUI_SCENARIO, backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID)) def __init__(self, Scenario_FK, Definition_Date): self.Scenario_FK = Scenario_FK self.Definition_Date = Definition_Date guiScenario = relation(GUI_SCENARIO, backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID)) tableNameScenario = "GUI_SCENARIO" scenarioClass = getattr(MappingTablesScenario, tableNameScenario) tableScenario = Table(tableNameScenario, meta, autoload=True) mapper(scenarioClass, tableScenario) scenarioName = scenarioDefinition.name scenarioDefinitionDate = datetime.today() newScenario = MappingTablesScenario.GUI_SCENARIO(scenarioName, scenarioDefinitionDate) print newScenario.guiScenarioDefinition If I try to get the objects related to a scenarioObject, I always get this error: AttributeError: 'GUI_SCENARIO' object has no attribute 'guiScenarioDefinition' Does anyone know, why I get this error? I am using SQLAlchemy 0.5.8. A: I believe you should not call mapper(). When you use declarative_base, you define the table, the class and the mapping at once, in a shorthand style. I suggest you remove the call to mapper().
SQL Alchemy related Objects Error
from sqlalchemy.orm import relation, backref from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey, Date, Sequence from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class GUI_SCENARIO(Base): __tablename__ = 'GUI_SCENARIO' Scenario_ID = Column(Integer, primary_key=True) Definition_Date = Column(Date) guiScenarioDefinition = relation('GUI_SCENARIO_DEFINITION', order_by='GUI_SCENARIO_DEFINITION.Scenario_Definition_ID', backref='guiScenario') def __init__(self, Scenario_ID=None, Definition_Date=None): self.Scenario_ID = Scenario_ID self.Definition_Date = Definition_Date class GUI_SCENARIO_DEFINITION(Base): __tablename__='GUI_SCENARIO_DEFINITION' Scenario_Definition_ID = Column(Integer, Sequence('Scenario_Definition_ID_SEQ'), primary_key=True) Scenario_FK = Column(Integer, ForeignKey('GUI_SCENARIO.Scenario_ID')) Definition_Date=Column(Date) guiScenario = relation(GUI_SCENARIO, backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID)) def __init__(self, Scenario_FK, Definition_Date): self.Scenario_FK = Scenario_FK self.Definition_Date = Definition_Date guiScenario = relation(GUI_SCENARIO, backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID)) tableNameScenario = "GUI_SCENARIO" scenarioClass = getattr(MappingTablesScenario, tableNameScenario) tableScenario = Table(tableNameScenario, meta, autoload=True) mapper(scenarioClass, tableScenario) scenarioName = scenarioDefinition.name scenarioDefinitionDate = datetime.today() newScenario = MappingTablesScenario.GUI_SCENARIO(scenarioName, scenarioDefinitionDate) print newScenario.guiScenarioDefinition If I try to get the objects related to a scenarioObject, I always get this error: AttributeError: 'GUI_SCENARIO' object has no attribute 'guiScenarioDefinition' Does anyone know, why I get this error? I am using SQLAlchemy 0.5.8.
[ "I believe you should not call mapper(). When you use declarative_base, you define the table, the class and the mapping at once, in a shorthand style.\nI suggest you remove the call to mapper().\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0002637705_python_sqlalchemy.txt
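A minimal sketch of the declarative setup the answer implies, with the mapper()/Table reflection dropped — the engine URL and session usage are illustrative, and it assumes the relation/backref pair is declared on only one side (the duplicated guiScenario/guiScenarioDefinition definitions in the posted code would otherwise collide):

    from datetime import date
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite:///:memory:')
    Base.metadata.create_all(engine)  # declarative classes are already mapped
    session = sessionmaker(bind=engine)()

    scenario = GUI_SCENARIO(Definition_Date=date.today())
    session.add(scenario)
    session.commit()
    print scenario.guiScenarioDefinition  # resolves to an (empty) list now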
Q: add a custom RFC822 header via IMAP? Is there an easy way to add a custom RFC822 header to a message on an IMAP server with imaplib? I am writing a python-based program that filters my IMAP mail store. When I did this with Procmail I had the option of adding headers. But there doesn't seem to be a way to do that with the Python imap implementation. Specifically, I want to add a custom header like: X-VY32-STATUS: Very Cool So that it appears in the mail headers: To: vy32@stackoverflow.com From: Yo@mama.com Subject: Test Message X-VY32-STATUS: Very Cool The regular message is down here. A: A better option would be to use custom server flags called keywords. A keyword is defined by the server implementation. Keywords do not begin with "\". Servers MAY permit the client to define new keywords in the mailbox. To add myflag to the message you can use STORE number +FLAGS (myflag) to search: SEARCH KEYWORD myflag Bear in mind that some servers don't allow custom flags. A: I don't think RFC 2060 allows editing received mail headers. If you want to edit the mail response header, this can be done with the email package.
add a custom RFC822 header via IMAP?
Is there an easy way to add a custom RFC822 header to a message on an IMAP server with imaplib? I am writing a python-based program that filters my IMAP mail store. When I did this with Procmail I had the option of adding headers. But there doesn't seem to be a way to do that with the Python imap implementation. Specifically, I want to add a custom header like: X-VY32-STATUS: Very Cool So that it appears in the mail headers: To: vy32@stackoverflow.com From: Yo@mama.com Subject: Test Message X-VY32-STATUS: Very Cool The regular message is down here.
[ "Better option would be to use custom server flags called keywords.\nA keyword is defined by the server implementation. Keywords do not begin with \"\\\". Servers MAY permit the client to define new keywords in the mailbox.\nTo add myflag to the message you can use \nSTORE number +FLAGS (myflag) \n\nto search:\nSEARCH KEYWORD myflag\n\nBear in mind that some servers don't allow custom flags.\n", "I don't think RFC 2060 allows the edit of received mail headers. If you want to edit the mail response header this can be done with the email package.\n" ]
[ 1, 0 ]
[]
[]
[ "imap", "imaplib", "python" ]
stackoverflow_0002575805_imap_imaplib_python.txt
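In imaplib terms, the keyword approach from the first answer looks roughly like this — server, credentials, message number and the keyword atom are placeholders:

    import imaplib

    conn = imaplib.IMAP4_SSL('imap.example.com')
    conn.login('user', 'password')
    conn.select('INBOX')

    # Set a custom keyword (no leading backslash) on message 1 ...
    conn.store('1', '+FLAGS', 'XVY32STATUS')
    # ... and find messages carrying it later.
    typ, data = conn.search(None, 'KEYWORD', 'XVY32STATUS')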
Q: Python and C++ Sockets converting packet data First of all, to clarify my goal: There exist two programs written in C in our laboratory. I am working on a Proxy Server (bidirectional) for them (which will also manipulate the data). And I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs; I only know the definition file of the packets. Now: assuming a packet definition in one of the C++ programs reads like this: unsigned char Packet[0x32]; // Packet[Length] int z=0; Packet[0]=0x00; // Spare Packet[1]=0x32; // Length Packet[2]=0x01; // Source Packet[3]=0x02; // Destination Packet[4]=0x01; // ID Packet[5]=0x00; // Spare for(z=0;z<=24;z+=8) { Packet[9-z/8]=((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z)); Packet[13-z/8]=((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z)); Packet[17-z/8]=((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z)); Packet[21-z/8]=((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z)); Packet[25-z/8]=((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z)); Packet[29-z/8]=((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z)); Packet[33-z/8]=((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z)); Packet[37-z/8]=((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z)); Packet[41-z/8]=((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z)); Packet[45-z/8]=((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z)); Packet[49-z/8]=((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z)); } if(SendPacket(sock,(char*)&Packet,sizeof(Packet))) return 1; return 0; What would be the easiest way to receive that data, convert it into a readable Python format, manipulate it, and send it on to the receiver? A: You can receive the packet's 50 bytes with a .recv call on a properly connected socket (it might actually take more than one call in the unlikely event the TCP packet gets fragmented, so check incoming length until you have exactly 50 bytes in hand;-). After that, understanding that C code is puzzling. The assignments of ints (presumably 4-bytes each) to Packet[9], Packet[13], etc, give the impression that the intention is to set 4 bytes at a time within Packet, but that's not what happens: each assignment sets exactly one byte in the packet, from the lowest byte of the int that's the RHS of the assignment. But those bytes are the bytes of (int)(720000+armcontrolpacket->dof0_rot*1000) and so on... So must those last 44 bytes of the packet be interpreted as 11 4-byte integers (signed? unsigned?) or 44 independent values? I'll guess the former, and do...: import struct f = '>x4bx11i' values = struct.unpack(f, packet) the format f indicates: big-endian, 4 single-byte values surrounded by two ignored "spare" bytes, 11 4-byte signed integers. Tuple values ends up with 15 values: the four single bytes (50, 1, 2, 1 in your example), then 11 signed integers. You can use the same format string to pack a modified version of the tuple back into a 50-byte packet to resend.
Since you explicitly place the length in the packet, it may be that different packets have different lengths (though that's incompatible with the fixed-length declaration in your C sample), in which case you need to be a bit more accurate in receiving and unpacking it; however such details depend on information you don't give, so I'll stop trying to guess;-). A: Take a look at the struct module, specifically the pack and unpack functions. They work with format strings that allow you to specify what types you want to write or read and what endianness and alignment you want to use.
Python and C++ Sockets converting packet data
First of all, to clarify my goal: There exist two programs written in C in our laboratory. I am working on a Proxy Server (bidirectional) for them (which will also manipulate the data). And I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs; I only know the definition file of the packets. Now: assuming a packet definition in one of the C++ programs reads like this: unsigned char Packet[0x32]; // Packet[Length] int z=0; Packet[0]=0x00; // Spare Packet[1]=0x32; // Length Packet[2]=0x01; // Source Packet[3]=0x02; // Destination Packet[4]=0x01; // ID Packet[5]=0x00; // Spare for(z=0;z<=24;z+=8) { Packet[9-z/8]=((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z)); Packet[13-z/8]=((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z)); Packet[17-z/8]=((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z)); Packet[21-z/8]=((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z)); Packet[25-z/8]=((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z)); Packet[29-z/8]=((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z)); Packet[33-z/8]=((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z)); Packet[37-z/8]=((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z)); Packet[41-z/8]=((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z)); Packet[45-z/8]=((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z)); Packet[49-z/8]=((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z)); } if(SendPacket(sock,(char*)&Packet,sizeof(Packet))) return 1; return 0; What would be the easiest way to receive that data, convert it into a readable Python format, manipulate it, and send it on to the receiver?
[ "You can receive the packet's 50 bytes with a .recv call on a properly connected socked (it might actually take more than one call in the unlikely event the TCP packet gets fragmented, so check incoming length until you have exactly 50 bytes in hand;-).\nAfter that, understanding that C code is puzzling. The assignments of ints (presumably 4-bytes each) to Packet[9], Packet[13], etc, give the impression that the intention is to set 4 bytes at a time within Packet, but that's not what happens: each assignment sets exactly one byte in the packet, from the lowest byte of the int that's the RHS of the assignment. But those bytes are the bytes of (int)(720000+armcontrolpacket->dof0_rot*1000) and so on...\nSo must those last 44 bytes of the packet be interpreted as 11 4-byte integers (signed? unsigned?) or 44 independent values? I'll guess the former, and do...:\nimport struct\nf = '>x4bx11i'\nvalues = struct.unpack(f, packet)\n\nthe format f indicates: big-endian, 4 unsigned-byte values surrounded by two ignored \"spare\" bytes, 11 4-byte signed integers. Tuple values ends up with 15 values: the four single bytes (50, 1, 2, 1 in your example), then 11 signed integers. You can use the same format string to pack a modified version of the tuple back into a 50-bytes packet to resend.\nSince you explicitly place the length in the packet it may be that different packets have different lenghts (though that's incompatible with the fixed-length declaration in your C sample) in which case you need to be a bit more accurate in receiving and unpacking it; however such details depend on information you don't give, so I'll stop trying to guess;-).\n", "Take a look at the struct module, specifically the pack and unpack functions. They work with format strings that allow you to specify what types you want to write or read and what endianness and alignment you want to use.\n" ]
[ 4, 3 ]
[]
[]
[ "binary", "c++", "networking", "python", "sockets" ]
stackoverflow_0002637546_binary_c++_networking_python_sockets.txt
Q: Porting Python algorithm to C++ - different solution Thank you all for helping. Below this post I put the corrected versions of both scripts, which now produce equal output. Hello, I have written a little brute-force string generation script in Python to generate all possible combinations of an alphabet within a given length. It works quite nicely, but because I want it to be faster I am trying to port it to C++. The problem is that my C++ code is creating far too many combinations for one word. Here's my example in Python: ./test.py gives me aaa aab aac aad aa aba .... while ./test (the C++ program) gives me aaa aaa aaa aaa aa Here I also get all possible combinations, but I get them twice or more. Here is the code for both programs: #!/usr/bin/env python import sys #Brute String Generator #Start it with ./brutestringer.py 4 6 "abcdefghijklmnopqrstuvwxyz1234567890" "" #will produce all strings with length 4 to 6 and chars from a to z and numbers 0 to 9 def rec(w, p, baseString): for c in "abcd": if (p<w - 1): rec(w, p + 1, baseString + "%c" % c) print baseString for b in range(3,4): rec(b, 0, "") And here is the C++ code #include <iostream> using namespace std; string chars="abcd"; void rec(int w,int b,string p){ unsigned int i; for(i=0;i<chars.size();i++){ if(b < (w-1)){ rec(w, (b+1), p+chars[i]); } cout << p << "\n"; } } int main () { int a=3, b=0; rec (a+1,b, ""); return 0; } Does anybody see my mistake? I don't have much experience with C++. Thanks indeed Here is the corrected version: C++ #include <iostream> using namespace std; string chars="abcd"; void rec(int w,int b,string p){ unsigned int i; for(i=0;i<chars.size();i++){ if(b < (w)){ rec(w, (b+1), p+chars[i]); } } cout << p << "\n"; } int main () { rec (3,0, ""); return 0; } Python #!/usr/bin/env python import sys def rec(w, b, p): for c in "abcd": if (b < w - 1): rec(w, b + 1, p + "%c" % c) print p rec(4, 0, "") Equal Output: $ ./test > 1 $ ./test.py 3 3 "abcd" "" > 2 $ diff 1 2 $ A: I think the Python code is also broken but maybe you don't notice because the print is indented by one space too many (hey, now I've seen a Python program with an off-by-one error!) Shouldn't the output only happen in the else case? And the reason why the output happens more often is that you call print/cout 4 times. I suggest changing the code: def rec(w, p, baseString): if w == p: print baseString else: for ... A: Just out of curiosity, is this fast enough? import itertools, string alphabet = string.lowercase + string.digits for numchars in (3, 4): for x in itertools.product(alphabet, repeat=numchars): print ''.join(x) (And make sure you're redirecting output to a file; scrolling huge amounts of text up the screen can be surprisingly slow). A: In rec the string p gets printed in every iteration of the loop: for(i=0;i<chars.size();i++){ // ... cout << p << "\n"; } The Python code you posted seems to do the same, but maybe there is something mixed up with the indentation there? Did you maybe mix tabs and spaces in the Python file, leading to surprising results? A: You say...: ./test.py gives me aaa aab (etc), but that's not true of the code you posted: what you get instead is aa aa aa aa a with four repetitions of the leading aa, etc etc. Of course you do: you have the print baseString statement inside the loop of for c in "abcd":, so necessarily it's executed four times. I imagine you want that print out of the loop -- and similarly for the C++ code, where you've also put the output statement smack inside the loop, so it gets repeated.
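Completing the restructuring the first answer sketches, with the print moved into a base case so each finished word is emitted exactly once (a minimal sketch):

def rec(w, p, baseString):
    if p == w:
        print baseString              # one print per finished word
    else:
        for c in "abcd":
            rec(w, p + 1, baseString + c)

rec(3, 0, "")   # prints all 4**3 = 64 strings of length 3 over "abcd"

The equivalent C++ fix has the same shape: test the depth first, and keep the cout out of the character loop.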
Porting Python algorithm to C++ - different solution
Thank you all for helping. Below this post I put the corrected version's of both scripts which now produce the equal output. Hello, I have written a little brute string generation script in python to generate all possible combinations of an alphabet within a given length. It works quite nice, but for the reason I wan't it to be faster I try to port it to C++. The problem is that my C++ Code is creating far too much combination for one word. Heres my example in python: ./test.py gives me aaa aab aac aad aa aba .... while ./test (the c++ programm gives me) aaa aaa aaa aaa aa Here I also get all possible combinations, but I get them twice ore more often. Here is the Code for both programms: #!/usr/bin/env python import sys #Brute String Generator #Start it with ./brutestringer.py 4 6 "abcdefghijklmnopqrstuvwxyz1234567890" "" #will produce all strings with length 4 to 6 and chars from a to z and numbers 0 to 9 def rec(w, p, baseString): for c in "abcd": if (p<w - 1): rec(w, p + 1, baseString + "%c" % c) print baseString for b in range(3,4): rec(b, 0, "") And here the C++ Code #include <iostream> using namespace std; string chars="abcd"; void rec(int w,int b,string p){ unsigned int i; for(i=0;i<chars.size();i++){ if(b < (w-1)){ rec(w, (b+1), p+chars[i]); } cout << p << "\n"; } } int main () { int a=3, b=0; rec (a+1,b, ""); return 0; } Does anybody see my fault ? I don't have much experience with C++. Thanks indeed Here the corrected version: C++ #include <iostream> using namespace std; string chars="abcd"; void rec(int w,int b,string p){ unsigned int i; for(i=0;i<chars.size();i++){ if(b < (w)){ rec(w, (b+1), p+chars[i]); } } cout << p << "\n"; } int main () { rec (3,0, ""); return 0; } Python #!/usr/bin/env python import sys def rec(w, b, p): for c in "abcd": if (b < w - 1): rec(w, b + 1, p + "%c" % c) print p rec(4, 0, "") Equal Output: $ ./test > 1 $ ./test.py 3 3 "abcd" "" > 2 $ diff 1 2 $
[ "I think the Python code is also broken but maybe you don't notice because the print is indented by one space too many (hey, now I've seen a Python program with a one-off error!)\nShouldn't the output only happen in the else case? And the reason why the output happens more often is that you call print/cout 4 times. I suggest to change the code:\ndef rec(w, p, baseString):\n if w == p:\n print baseString\n else:\n for ...\n\n", "Just out of curiosity, is this fast enough?\nimport itertools, string\nalphabet = string.lowercase + string.digits\nfor numchars in (3, 4):\n for x in itertools.product(alphabet, repeat=numchars):\n print ''.join(x)\n\n(And make sure you're redirecting output to a file; scrolling huge amounts of text up the screen can be surprisingly slow).\n", "In rec the string p gets printed in every iteration of the loop:\nfor(i=0;i<chars.size();i++){\n // ...\n cout << p << \"\\n\"; \n}\n\nThe Python code you posted seems to do the same, but maybe there is something mixed up with the indentation there? Did you maybe mix tabs and spaces in the Python file, leading to surprising results?\n", "You say...:\n\n./test.py\ngives me\naaa\n aab\n\n(etc), but that's not true of the code you posted: what you get instead is \naa\naa\naa\naa\na\n\nwith four repetitions of the leading aa, etc etc. Of course you do: you have the print baseString statement inside the loop of for c in \"abcd\":, so necessarily it's executed four times. I imagine you want that print out of the loop -- and similarly for the C++ code, where you've also put the output statement smack inside the loop, so it gets repeated.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "c++", "python" ]
stackoverflow_0002638361_c++_python.txt
Q: What is faster: multiple `send`s or using buffering? I'm playing around with sockets in C/Python and I wonder what is the most efficient way to send headers from a Python dictionary to the client socket. My ideas: use a send call for every header. Pros: No memory allocation needed. Cons: many send calls -- probably error prone; error management should be rather complicated use a buffer. Pros: one send call, error checking a lot easier. Cons: Need a buffer :-) malloc/realloc should be rather slow and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-) A: Because of the way TCP congestion control works, it's more efficient to send data all at once. TCP maintains a window of how much data it will allow to be "in the air" (sent but not yet acknowledged). TCP measures the acknowledgments coming back to figure out how much data it can have "in the air" without causing congestion (i.e., packet loss). If there isn't enough data coming from the application to fill the window, TCP can't make accurate measurements so it will conservatively shrink the window. If you only have a few, small headers and your calls to send are in rapid succession, the operating system will typically buffer the data for you and send it all in one packet. In that case, TCP congestion control isn't really an issue. However, each call to send involves a context switch from user mode to kernel mode, which incurs CPU overhead. In other words, you're still better off buffering in your application. There is (at least) one case where you're better off without buffering: when your buffer is slower than the context switching overhead. If you write a complicated buffer in Python, that might very well be the case. A buffer written in CPython is going to be quite a bit slower than the finely optimized buffer in the kernel. It's quite possible that buffering would cost you more than it buys you. When in doubt, measure. One word of caution though: premature optimization is the root of all evil. The difference in efficiency here is pretty small. If you haven't already established that this is a bottleneck for your application, go with whatever makes your life easier. You can always change it later. A: Unless you're sending a truly huge amount of data, you're probably better off using one buffer. If you use a geometric progression for growing your buffer size, the number of allocations becomes an amortized constant, and the time to allocate the buffer will generally follow. A: A send() call implies a round-trip to the kernel (the part of the OS which deals with the hardware directly). It has a unit cost of a few hundred clock cycles. This is harmless unless you are trying to call send() millions of times. Usually, buffering is about calling send() only once in a while, when "enough data" has been gathered. "Enough" does not mean "the whole message" but something like "enough bytes so that the unit cost of the kernel round-trip is dwarfed". As a rule of thumb, an 8-kB buffer (8192 bytes) is traditionally considered good. Anyway, for all performance-related questions, nothing beats an actual measure. Try it. Most of the time, there is no actual performance problem worth worrying about.
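For the case in the question, the buffered approach can be as small as joining the header lines and handing the kernel a single buffer (a sketch; the header names and the socket are illustrative):

headers = {'Content-Type': 'text/html', 'Content-Length': '42'}
buf = ''.join('%s: %s\r\n' % (k, v) for k, v in headers.iteritems()) + '\r\n'
sock.sendall(buf)   # sendall loops over send until the whole buffer is written

Since Python builds the temporary string in one pass and frees it automatically, the malloc/realloc worries from the question largely disappear.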
What is faster: multiple `send`s or using buffering?
I'm playing around with sockets in C/Python and I wonder what is the most efficient way to send headers from a Python dictionary to the client socket. My ideas: use a send call for every header. Pros: No memory allocation needed. Cons: many send calls -- probably error prone; error management should be rather complicated use a buffer. Pros: one send call, error checking a lot easier. Cons: Need a buffer :-) malloc/realloc should be rather slow and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
[ "Because of the way TCP congestion control works, it's more efficient to send data all at once. TCP maintains a window of how much data it will allow to be \"in the air\" (sent but not yet acknowledged). TCP measures the acknowledgments coming back to figure out how much data it can have \"in the air\" without causing congestion (i.e., packet loss). If there isn't enough data coming from the application to fill the window, TCP can't make accurate measurements so it will conservatively shrink the window.\nIf you only have a few, small headers and your calls to send are in rapid succession, the operating system will typically buffer the data for you and send it all in one packet. In that case, TCP congestion control isn't really an issue. However, each call to send involves a context switch from user mode to kernel mode, which incurs CPU overhead. In other words, you're still better off buffering in your application.\nThere is (at least) one case where you're better off without buffering: when your buffer is slower than the context switching overhead. If you write a complicated buffer in Python, that might very well be the case. A buffer written in CPython is going to be quite a bit slower than the finely optimized buffer in the kernel. It's quite possible that buffering would cost you more than it buys you.\nWhen in doubt, measure.\nOne word of caution though: premature optimization is the root of all evil. The difference in efficiency here is pretty small. If you haven't already established that this is a bottleneck for your application, go with whatever makes your life easier. You can always change it later.\n", "Unless you're sending a truly huge amount of data, you're probably better off using one buffer. If you use a geometric progression for growing your buffer size, the number of allocations becomes an amortized constant, and the time to allocate the buffer will generally follow.\n", "A send() call implies a round-trip to the kernel (the part of the OS which deals with the hardware directly). It has a unit cost of about a few hundred clock cycles. This is harmless unless you are trying to call send() millions of times.\nUsually, buffering is about calling send() only once in a while, when \"enough data\" has been gathered. \"Enough\" does not mean \"the whole message\" but something like \"enough bytes so that the unit cost of the kernel round-trip is dwarfed\". As a rule of thumb, an 8-kB buffer (8192 bytes) is traditionally considered as good.\nAnyway, for all performance-related questions, nothing beats an actual measure. Try it. Most of the time, there not any actual performance problem worth worrying about.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "buffer", "c", "python", "send", "sockets" ]
stackoverflow_0002638490_buffer_c_python_send_sockets.txt
Q: mod_python with Python 2.6 on Windows How do I install mod_python to run with Python 2.6 on a Windows machine? I could not find an installer for Python 2.6. I downloaded this installer for (mod_python on Python 2.5): mod_python-3.3.1.win32-py2.5-Apache2.2.exe and extracted it to get PLATLIB and SCRIPTS folders. Where do I go from here? A: You don't. That's the install for Python 2.5 and will not work. You can try the instructions here or use mod_wsgi instead as they suggest. A: Nowhere. That is for Python 2.5. You'll need to build from source if you want it to work with 2.6, or wait for them to get around to it.
mod_python with Python 2.6 on Windows
How do I install mod_python to run with Python 2.6 on a Windows machine? I could not find an installer for Python 2.6. I downloaded this installer for (mod_python on Python 2.5): mod_python-3.3.1.win32-py2.5-Apache2.2.exe and extracted it to get PLATLIB and SCRIPTS folders. Where do I go from here?
[ "You don't. That's the install for Python 2.5 and will not work. You can try the instructions here or use mod_wsgi instead as they suggest.\n", "Nowhere. That is for Python 2.5. You'll need to build from source if you want it to work with 2.6, or wait for them to get around to it.\n" ]
[ 3, 2 ]
[]
[]
[ "apache", "mod_python", "python" ]
stackoverflow_0002639089_apache_mod_python_python.txt
Q: Can SQLAlchemy's reflection tools output python source? I want to reflect a schema using SQLAlchemy's MetaData.reflect() method, so that I can have a cache of the current schema. How can I do this? A: A simple and supported way to cache the result of reflection is to just pickle the MetaData object. If you prefer to generate Python code that initializes the metadata, then there's a tool called sqlautocode.
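As the answer notes, pickling the reflected MetaData is the simple route; a minimal sketch (the connection URL and cache filename are illustrative):

import pickle
from sqlalchemy import MetaData, create_engine

engine = create_engine('sqlite:///mydb.sqlite')
meta = MetaData()
meta.reflect(bind=engine)                      # introspect the current schema
pickle.dump(meta, open('schema.cache', 'wb'))  # cache it to disk

# later: load the cached schema without touching the database
cached_meta = pickle.load(open('schema.cache', 'rb'))

After unpickling, the MetaData is not bound to any engine, so bind it again if you need to issue queries against the cached definition.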
Can SQLAlchemy's reflection tools output python source?
I want to reflect a schema using SQLAlchemy's MetaData.reflect() method, so that I can have a cache of the current schema. How can I do this?
[ "A simple and supported way to cache the result of reflection is to just pickle the MetaData object. If you prefer to generate Python code that initializes the metadata, then there's a tool called sqlautocode.\n" ]
[ 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0002638717_python_sqlalchemy.txt
Q: Is there any free Python to C translator? Is there any free Python to C translator? For example, one capable of translating such a library as the one for fast content-aware image resizing (which already depends on some C libs) into C files?
Is there any free Python to C translator?
Is there any free Python to C translator? for example capable to translate such lib as lib for Fast content-aware image resizing (which already depends on some C libs) to C files?
[ "Shedskin translates Python code to C++.\n", "I think that cython is what you're looking for http://www.cython.org/\n", "The fantastic PyPy project which aims to: \"translate a Python-level description of the Python language itself to lower level languages\", has a C backend. That is one of the lower level languages it aims to translate programs to is C.\n" ]
[ 7, 5, 2 ]
[]
[]
[ "c", "code_translation", "python" ]
stackoverflow_0002639195_c_code_translation_python.txt
Q: Create two separate windows in terminal Picture a terminal. There are two windows inside that terminal. One on top, one on bottom. The top one is much bigger. The top one receives asynchronous updates. The bottom one is for user input. It would work almost exactly the same as vim - the text editor. I'm writing this in Python. I'm guessing you would do this by using curses, but I'm not sure if it's possible. A: Yes, you want the python standard library implementation of ncurses for this. A: http://docs.python.org/library/curses.html Yes, curses + some code that will do parallel stuff
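A minimal curses sketch of that layout, with a tall output window on top and a three-line input window on the bottom (the sizes are arbitrary):

import curses

def main(stdscr):
    height, width = stdscr.getmaxyx()
    top = curses.newwin(height - 3, width, 0, 0)     # big, updated asynchronously
    bottom = curses.newwin(3, width, height - 3, 0)  # small, for user input
    top.addstr(0, 0, 'updates appear here')
    bottom.addstr(1, 0, '> ')
    top.refresh()
    bottom.refresh()
    curses.echo()
    bottom.getstr(1, 2)   # block waiting for a line of user input

curses.wrapper(main)

For truly asynchronous updates to the top window you would drive this from a thread or a select() loop, refreshing each window independently.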
Create two separate windows in terminal
Picture a terminal. There are two windows inside that terminal. One on top, one on bottom. The top one is much bigger. The top one receives asynchronous updates. The bottom one is for user input. It would work almost exactly the same as vim - the text editor. I'm writing this in Python. I'm guessing you would do this by using curses, but I'm not sure if it's possible.
[ "Yes, you want the python standard library implementation of ncurses for this.\n", "http://docs.python.org/library/curses.html\nYes, curses + some code that will do parallel stuff\n" ]
[ 2, 1 ]
[]
[]
[ "curses", "python", "terminal" ]
stackoverflow_0002639853_curses_python_terminal.txt
Q: Preserving the dimensions of a slice from a Numpy 3d array I have a 3d array, a, of shape say a.shape = (10, 10, 10) When slicing, the dimensions are squeezed automatically i.e. a[:,:,5].shape = (10, 10) I'd like to preserve the number of dimensions but also ensure that the dimension that was squeezed is the one that shows 1 i.e. a[:,:,5].shape = (10, 10, 1) I have thought of re-casting the array and passing ndmin but that just adds the extra dimensions to the start of the shape tuple regardless of where the slice came from in the array a. A: a[:,:,[5]].shape # (10,10,1) a[:,:,5] is an example of basic slicing. a[:,:,[5]] is an example of integer array indexing -- combined with basic slicing. When using integer array indexing the resultant shape is always "identical to the (broadcast) indexing array shapes". Since [5] (as an array) has shape (1,), a[:,:,[5]] ends up having shape (10,10,1).
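For comparison, two other ways to keep the axis, shown as a short sketch:

import numpy as np

a = np.zeros((10, 10, 10))
print a[:, :, 5:6].shape                  # (10, 10, 1): a length-1 slice keeps the axis
print a[:, :, 5][:, :, np.newaxis].shape  # (10, 10, 1): re-add the squeezed axis

The 5:6 basic slice also returns a view rather than a copy, unlike the integer-array indexing in the answer, which may matter for large arrays.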
Preserving the dimensions of a slice from a Numpy 3d array
I have a 3d array, a, of shape say a.shape = (10, 10, 10) When slicing, the dimensions are squeezed automatically i.e. a[:,:,5].shape = (10, 10) I'd like to preserve the number of dimensions but also ensure that the dimension that was squeezed is the one that shows 1 i.e. a[:,:,5].shape = (10, 10, 1) I have thought of re-casting the array and passing ndmin but that just adds the extra dimensions to the start of the shape tuple regardless of where the slice came from in the array a.
[ "a[:,:,[5]].shape\n# (10,10,1)\n\n\na[:,:,5] is an example of basic slicing.\na[:,:,[5]] is an example of integer array indexing -- combined with basic slicing. When using integer array indexing the resultant shape is always \"identical to the (broadcast) indexing array shapes\". Since [5] (as an array) has shape (1,), \na[:,:,[5]] ends up having shape (10,10,1).\n" ]
[ 13 ]
[]
[]
[ "numpy", "python", "slice" ]
stackoverflow_0002640147_numpy_python_slice.txt
Q: Sending data from one Protocol to another Protocol in Twisted? One of my protocols is connected to a server, and I'd like to send its output to the other protocol. I need to access the 'msg' method in ClassA from ClassB but I keep getting: exceptions.AttributeError: 'NoneType' object has no attribute 'write' Actual code: import md5 from twisted.words.protocols import irc from twisted.internet import protocol from twisted.internet.protocol import Protocol, ClientFactory from twisted.internet import reactor IRC_USERNAME = 'xxx' IRC_CHANNEL = '#xxx' T_USERNAME = 'xxx' T_PASSWORD = md5.new('xxx').hexdigest() class ircBot(irc.IRCClient): def _get_nickname(self): return self.factory.nickname nickname = property(_get_nickname) def signedOn(self): self.join(self.factory.channel) print "Signed on as %s." % (self.nickname,) def joined(self, channel): print "Joined %s." % (channel,) def privmsg(self, user, channel, msg): if not user: return who = "%s: " % (user.split('!', 1)[0], ) print "%s %s" % (who, msg) class ircBotFactory(protocol.ClientFactory): protocol = ircBot def __init__(self, channel, nickname=IRC_USERNAME): self.channel = channel self.nickname = nickname def clientConnectionLost(self, connector, reason): print "Lost connection (%s), reconnecting." % (reason,) connector.connect() def clientConnectionFailed(self, connector, reason): print "Could not connect: %s" % (reason,) class SomeTClass(Protocol): def dataReceived(self, data): if data.startswith('SAY'): data = data.split(';', 1) # RAGE #return self.ircClient.msg(IRC_CHANNEL, 'test') def connectionMade(self): self.transport.write("mlogin %s %s\n" % (T_USERNAME, T_PASSWORD)) class tClientFactory(ClientFactory): def startedConnecting(self, connector): print 'Started to connect.' def buildProtocol(self, addr): print 'Connected.' return SomeTClass() def clientConnectionLost(self, connector, reason): print 'Lost connection. Reason:', reason def clientConnectionFailed(self, connector, reason): print 'Connection failed. Reason:', reason if __name__ == "__main__": #chan = sys.argv[1] reactor.connectTCP('xxx', 6667, ircBotFactory(IRC_CHANNEL) ) reactor.connectTCP('xxx', 20184, tClientFactory() ) reactor.run() Any ideas please? :-) A: Twisted FAQ: How do I make input on one connection result in output on another? This seems like it's a Twisted question, but actually it's a Python question. Each Protocol object represents one connection; you can call its transport.write to write some data to it. These are regular Python objects; you can put them into lists, dictionaries, or whatever other data structure is appropriate to your application. As a simple example, add a list to your factory, and in your protocol's connectionMade and connectionLost, add it to and remove it from that list. Here's the Python code: from twisted.internet.protocol import Protocol, Factory from twisted.internet import reactor class MultiEcho(Protocol): def connectionMade(self): self.factory.echoers.append(self) def dataReceived(self, data): for echoer in self.factory.echoers: echoer.transport.write(data) def connectionLost(self, reason): self.factory.echoers.remove(self) class MultiEchoFactory(Factory): protocol = MultiEcho def __init__(self): self.echoers = [] reactor.listenTCP(4321, MultiEchoFactory()) reactor.run()
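Applying that idea to the code in the question, one hedged sketch is to let the IRC factory remember its live protocol and hand the other factory a reference to it (the attribute names bot and irc_factory are made up, not Twisted API; this reuses the classes and imports from the question):

class ircBotFactory(protocol.ClientFactory):
    protocol = ircBot
    def buildProtocol(self, addr):
        p = protocol.ClientFactory.buildProtocol(self, addr)
        self.bot = p                      # remember the connected IRC protocol
        return p

class tClientFactory(ClientFactory):
    def __init__(self, irc_factory):
        self.irc_factory = irc_factory
    def buildProtocol(self, addr):
        p = SomeTClass()
        p.factory = self                  # so the protocol can reach this factory
        return p

# then, inside SomeTClass.dataReceived:
#     self.factory.irc_factory.bot.msg(IRC_CHANNEL, 'test')

irc_f = ircBotFactory(IRC_CHANNEL)
reactor.connectTCP('irc.example.com', 6667, irc_f)
reactor.connectTCP('other.example.com', 20184, tClientFactory(irc_f))

A guard such as if getattr(self.factory.irc_factory, 'bot', None) is still needed, since the IRC connection may not exist yet when the first data arrives.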
Sending data from one Protocol to another Protocol in Twisted?
One of my protocols is connected to a server, and with the output of that I'd like to send it to the other protocol. I need to access the 'msg' method in ClassA from ClassB but I keep getting: exceptions.AttributeError: 'NoneType' object has no attribute 'write' Actual code: from twisted.words.protocols import irc from twisted.internet import protocol from twisted.internet.protocol import Protocol, ClientFactory from twisted.internet import reactor IRC_USERNAME = 'xxx' IRC_CHANNEL = '#xxx' T_USERNAME = 'xxx' T_PASSWORD = md5.new('xxx').hexdigest() class ircBot(irc.IRCClient): def _get_nickname(self): return self.factory.nickname nickname = property(_get_nickname) def signedOn(self): self.join(self.factory.channel) print "Signed on as %s." % (self.nickname,) def joined(self, channel): print "Joined %s." % (channel,) def privmsg(self, user, channel, msg): if not user: return who = "%s: " % (user.split('!', 1)[0], ) print "%s %s" % (who, msg) class ircBotFactory(protocol.ClientFactory): protocol = ircBot def __init__(self, channel, nickname=IRC_USERNAME): self.channel = channel self.nickname = nickname def clientConnectionLost(self, connector, reason): print "Lost connection (%s), reconnecting." % (reason,) connector.connect() def clientConnectionFailed(self, connector, reason): print "Could not connect: %s" % (reason,) class SomeTClass(Protocol): def dataReceived(self, data): if data.startswith('SAY'): data = data.split(';', 1) # RAGE #return self.ircClient.msg(IRC_CHANNEL, 'test') def connectionMade(self): self.transport.write("mlogin %s %s\n" % (T_USERNAME, T_PASSWORD)) class tClientFactory(ClientFactory): def startedConnecting(self, connector): print 'Started to connect.' def buildProtocol(self, addr): print 'Connected.' return t() def clientConnectionLost(self, connector, reason): print 'Lost connection. Reason:', reason def clientConnectionFailed(self, connector, reason): print 'Connection failed. Reason:', reason if __name__ == "__main__": #chan = sys.argv[1] reactor.connectTCP('xxx', 6667, ircBotFactory(IRC_CHANNEL) ) reactor.connectTCP('xxx', 20184, tClientFactory() ) reactor.run() Any ideas please? :-)
[ "Twisted FAQ: \n\nHow do I make input on one connection\n result in output on another?\nThis seems like it's a Twisted\n question, but actually it's a Python\n question. Each Protocol object\n represents one connection; you can\n call its transport.write to write some\n data to it. These are regular Python\n objects; you can put them into lists,\n dictionaries, or whatever other data\n structure is appropriate to your\n application.\nAs a simple example, add a list to\n your factory, and in your protocol's\n connectionMade and connectionLost, add\n it to and remove it from that list.\n Here's the Python code:\nfrom twisted.internet.protocol import Protocol, Factory\nfrom twisted.internet import reactor\n\nclass MultiEcho(Protocol):\n def connectionMade(self):\n self.factory.echoers.append(self)\n def dataReceived(self, data):\n for echoer in self.factory.echoers:\n echoer.transport.write(data)\n def connectionLost(self, reason):\n self.factory.echoers.remove(self)\n\nclass MultiEchoFactory(Factory):\n protocol = MultiEcho\n def __init__(self):\n self.echoers = []\n\nreactor.listenTCP(4321, MultiEchoFactory())\nreactor.run()\n\n\n" ]
[ 4 ]
[]
[]
[ "python", "twisted" ]
stackoverflow_0002634494_python_twisted.txt
Q: Recognizing a file I have no idea how this works or if it is even possible but what I want to do is for example create a file type (let's imagine .test, for which a random file name would be random.test). Now before I continue, it's obviously easy to do this using for example: filename = "random.test" file = open(filename, 'w') file.write("some text here") But now what I would like to know is if it is possible to write the file .test so if I set it to open with a wxPython program (directly, by running "random.test" from the desktop), it recognizes it and for example opens up a Message Dialog automatically. A: How this works varies by operating system, but, AFAIK, the general rule is that if you register your application with the operating system as recognizing that file type, then clicking on one or more files of that type causes the operating system to invoke your program with the names of the files as parameters, so your program will correctly handle the file opening if it has a commandline invocation of the following form: program_name [options] <file1> [<file2> ... <fileN>] In terms of identifying what file types your program can accept... on Mac OS X, this is done by listing the file types in the application bundle's "Info.plist" file in a dictionary with key CFBundleDocumentTypes. It is up to the user to perform the association, but the information in "Info.plist" determines which applications are considered candidates for registration. On Windows, you need to edit the registry to associate the program with the file type; you can also edit the registry to add "verbs" (right-click menu items) for your program.
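On the program side, the handling can be as small as reading the path the operating system passes in (a sketch; the wxPython specifics are illustrative):

import sys
import wx

app = wx.App(False)
if len(sys.argv) > 1:                 # the OS invokes: program random.test
    path = sys.argv[1]
    contents = open(path).read()
    wx.MessageBox(contents, 'Opened %s' % path)
app.MainLoop()

The file-type registration itself (registry entries on Windows, Info.plist on Mac OS X) is what makes the OS launch this program with random.test as sys.argv[1].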
Recognizing a file
I have no idea how this works or if it is even possible but what I want to do is for example create a file type (lets imagine .test (in which a random file name would be random.test)). Now before I continue, its obviously easy to do this using for example: filename = "random.test" file = open(filename, 'w') file.write("some text here") But now what I would like to know is if it is possible to write the file .test so if I set it to open with a wxPython program (directly (running the "random.test" from the desktop)), it recognizes it and for example opens up a Message Dialog automatically.
[ "How this works varies by operating system, but, AFAIK, the general rule is that if you register your application with the operating system as recognizing that file type, then clicking on one or more files of that type causes the operating system to invoke your program with the names of the files as parameters, so your program will correctly handle the file opening if it has a commandline invocation of the following form:\n\nprogram_name [options] <file1> [<file2> ... <fileN>]\n\nIn terms of identifying what file types your program can accept... on Mac OS X, this is done by listing the file types in the application bundle's \"Info.plist\" file in a dictionary with key CFBundleDocumentTypes. It is up to the user to perform the association, but the information in \"Info.plist\" determines which applications are considered candidates for registration. On Windows, you need to edit the registry to associate the program with the file type, you can also edit the registry to add \"verbs\" (right-click menu items) for your program.\n" ]
[ 2 ]
[]
[]
[ "file", "python" ]
stackoverflow_0002640138_file_python.txt
Q: Django/Python: Save an HTML table to Excel I have an HTML table that I'd like to be able to export to an Excel file. I already have an option to export the table into an IQY file, but I'd prefer something that didn't allow the user to refresh the data via Excel. I just want a feature that takes a snapshot of the table at the time the user clicks the link/button. I'd prefer it if the feature was a link/button on the HTML page that allows the user to save the query results displayed in the table. It would also be nice if the formatting from the HTML/CSS could be retained. Is there a way to do this at all? Or, something I can modify with the IQY? I can try to provide more details if needed. Thanks in advance. A: You can use the excellent xlwt module. It is very easy to use, and creates files in xls format (Excel 2003). Here is an (untested!) example of use for a Django view: from django.http import HttpResponse import xlwt def excel_view(request): normal_style = xlwt.easyxf(""" font: name Verdana """) response = HttpResponse(mimetype='application/ms-excel') wb = xlwt.Workbook() ws0 = wb.add_sheet('Worksheet') ws0.write(0, 0, "something", normal_style) wb.save(response) return response A: Use CSV. There's a module in Python ("csv") to generate it, and excel can read it natively. A: Excel support opening an HTML file containing a table as a spreadsheet (even with CSS formatting). You basically have to serve that HTML content from a django view, with the content-type application/ms-excel as Roberto said. Or if you feel adventurous, you could use something like Downloadify to prepare the file to be downloaded on the client side.
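A sketch of the HTML-as-Excel route the last answer describes (the template name and the rows_for helper are placeholders, not Django API):

from django.http import HttpResponse
from django.template.loader import render_to_string

def export_table(request):
    # render the same table template the normal results page uses
    html = render_to_string('results_table.html', {'rows': rows_for(request)})
    response = HttpResponse(html, mimetype='application/ms-excel')
    response['Content-Disposition'] = 'attachment; filename=results.xls'
    return response

Because the response is a plain snapshot of the rendered table, there is no IQY-style refresh, and Excel keeps most of the HTML/CSS formatting.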
Django/Python: Save an HTML table to Excel
I have an HTML table that I'd like to be able to export to an Excel file. I already have an option to export the table into an IQY file, but I'd prefer something that didn't allow the user to refresh the data via Excel. I just want a feature that takes a snapshot of the table at the time the user clicks the link/button. I'd prefer it if the feature was a link/button on the HTML page that allows the user to save the query results displayed in the table. It would also be nice if the formatting from the HTML/CSS could be retained. Is there a way to do this at all? Or, something I can modify with the IQY? I can try to provide more details if needed. Thanks in advance.
[ "You can use the excellent xlwt module.\nIt is very easy to use, and creates files in xls format (Excel 2003).\nHere is an (untested!) example of use for a Django view:\nfrom django.http import HttpResponse\nimport xlwt\n\ndef excel_view(request):\n normal_style = xlwt.easyxf(\"\"\"\n font:\n name Verdana\n \"\"\") \n response = HttpResponse(mimetype='application/ms-excel')\n wb = xlwt.Workbook()\n ws0 = wb.add_sheet('Worksheet')\n ws0.write(0, 0, \"something\", normal_style)\n wb.save(response)\n return response\n\n", "Use CSV. There's a module in Python (\"csv\") to generate it, and excel can read it natively.\n", "Excel support opening an HTML file containing a table as a spreadsheet (even with CSS formatting).\nYou basically have to serve that HTML content from a django view, with the content-type application/ms-excel as Roberto said.\nOr if you feel adventurous, you could use something like Downloadify to prepare the file to be downloaded on the client side.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "django", "excel", "html_table", "python" ]
stackoverflow_0002640072_django_excel_html_table_python.txt
Q: How do I use Regex to find the ID in a YouTube link? When I try to extract this video ID (AIiMa2Fe-ZQ) with a regex, I can't get the dash and all the letters after it. >>> id = re.search('(?<=\?v\=)\w+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ') >>> print id.group(0) >>> AIiMa2Fe A: Instead of \w+ use the class below. A word character (\w) doesn't include a dash. It only includes [a-zA-Z_0-9]. [\w-]+ A: >>> re.search('(?<=v=)[\w-]+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ').group() 'AIiMa2Fe-ZQ' \w is a short-hand for [a-zA-Z0-9_] in python2.x, you'll have to use re.A flag in py3k. You quite clearly have an additional character in that video ID, i.e., the hyphen. I've also removed redundant escape backslashes from the lookbehind. A: /(?:/v/|/watch\?v=|/watch#!v=)([A-Za-z0-9_-]+)/ Explain the RE There are three alternate YouTube formats: /v/[ID] and watch?v= and the new AJAX watch#!v= This RE captures all three. There is also a new YouTube URL for user pages that is of the form /user/[user]?content={complex URI} This is not captured here by any regex... A: I don't know the pattern for youtube hashes, but just include the "-" in the possibilities as it is not considered an alpha: import re id = re.search('(?<=\?v\=)[\w-]+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ') print id.group(0) I have edited the above because as it turns out: >>> re.search("[\w|-]", "|").group(0) '|' The "|" in the character definition does not act as a special character but does indeed match the "|" pipe. My apologies. A: Use the urlparse module instead of regex for this kind of thing. import urlparse parsed_url = urlparse.urlparse(url) if parsed_url.netloc.find('youtube.com') != -1 and parsed_url.path == '/watch': video = urlparse.parse_qs(parsed_url.query).get('v', None) if video is None: video = urlparse.parse_qs(parsed_url.fragment.strip('!')).get('v', None) if video is not None: print video[0] EDIT: Updated for the upcoming new youtube url format. A: I'd try this: >>> import re >>> a = re.compile(r'.*(\-\w+)$') >>> a.search('http://www.youtube.com/watch?v=AIiMa2Fe-ZQ').group(1) '-ZQ'
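A quick sanity check of the character-class fix against the three URL shapes mentioned in the answers (the combined pattern here is only a sketch; the video ID is reused from the question):

import re

pattern = re.compile(r'(?:/v/|[?#!]v=)([\w-]+)')
for url in ('http://www.youtube.com/watch?v=AIiMa2Fe-ZQ',
            'http://www.youtube.com/v/AIiMa2Fe-ZQ',
            'http://www.youtube.com/watch#!v=AIiMa2Fe-ZQ'):
    print pattern.search(url).group(1)   # AIiMa2Fe-ZQ each time

The key point is that [\w-] admits the hyphen that \w alone rejects.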
How do I use Regex to find the ID in a YouTube link?
when I try to extract this video ID (AIiMa2Fe-ZQ) with a regex expression, I can't get the dash an all the letters after. >>> id = re.search('(?<=\?v\=)\w+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ') >>> print id.group(0) >>> AIiMa2Fe
[ "Intead of \\w+ use below. Word character (\\w) doesn't include a dash. It only includes [a-zA-Z_0-9].\n[\\w-]+\n\n", ">>> re.search('(?<=v=)[\\w-]+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ').group()\n'AIiMa2Fe-ZQ'\n\n\\w is a short-hand for [a-zA-Z0-9_] in python2.x, you'll have to use re.A flag in py3k. You quite clearly have additional character in that videoid, i.e., hyphen. I've also removed redundant escape backslashes from the lookbehind.\n", "/(?:/v/|/watch\\?v=|/watch#!v=)([A-Za-z0-9_-]+)/\n\nExplain the RE\nThere are three alternate YouTube formats: /v/[ID] and watch?v= and the new AJAX watch#!v= This RE captures all three. There is also new YouTube URL for user pages that is of the form /user/[user]?content={complex URI} This is not captured here by any regex... \n", "I don't know the pattern for youtube hashes, but just include the \"-\" in the possibilities as it is not considered an alpha:\nimport re\nid = re.search('(?<=\\?v\\=)[\\w-]+', 'http://www.youtube.com/watch?v=AIiMa2Fe-ZQ')\nprint id.group(0)\n\nI have edited the above because as it turns out:\n>>> re.search(\"[\\w|-]\", \"|\").group(0)\n'|'\n\nThe \"|\" in the character definition does not act as a special character but does indeed match the \"|\" pipe. My apologies.\n", "Use the urlparse module instead of regex for such kind of things.\nimport urlparse\n\nparsed_url = urlparse.urlparse(url)\nif parsed_url.netloc.find('youtube.com') != -1 and parsed_url.path == '/watch':\n video = urlparse.parse_qs(parsed_url.query).get('v', None)\n\n if video is None:\n video = urlparse.parse_qs(parsed_url.fragment.strip('!')).get('v', None)\n\n if video is not None:\n print video[0]\n\nEDIT: Updated for the upcoming new youtube url format.\n", "I'd try this:\n>>> import re\n>>> a = re.compile(r'.*(\\-\\w+)$')\n>>> a.search('http://www.youtube.com/watch?v=AIiMa2Fe-ZQ').group(1)\n'-ZQ'\n\n" ]
[ 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "python", "regex", "youtube" ]
stackoverflow_0002639582_python_regex_youtube.txt
Q: Why the "mutable default argument fix" syntax is so ugly, asks python newbie Now following my series of "python newbie questions" and based on another question. Prerogative Go to http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables and scroll down to "Default Parameter Values". There you can find the following: def bad_append(new_item, a_list=[]): a_list.append(new_item) return a_list def good_append(new_item, a_list=None): if a_list is None: a_list = [] a_list.append(new_item) return a_list There's even an "Important warning" on python.org with this very same example, tho not really saying it's "better". One way to put it So, question here is: why is the "good" syntax over a known issue ugly like that in a programming language that promotes "elegant syntax" and "easy-to-use"? edit: Another way to put it I'm not asking why or how it happens (thanks Mark for the link). I'm asking why there's no simpler alternative built-in the language. I think a better way would probably being able to do something in the def itself, in which the name argument would be attached to a "local", or "new" within the def, mutable object. Something like: def better_append(new_item, a_list=immutable([])): a_list.append(new_item) return a_list I'm sure someone can come with a better syntax, but I'm also guessing there must be a very good explanation to why this hasn't been done. A: This is called the 'mutable defaults trap'. See: http://www.ferg.org/projects/python_gotchas.html#contents_item_6 Basically, a_list is initialized when the program is first interpreted, not each time you call the function (as you might expect from other languages). So you're not getting a new list each time you call the function, but you're reusing the same one. I guess the answer to the question is that if you want to append something to a list, just do it, don't create a function to do it. This: >>> my_list = [] >>> my_list.append(1) Is clearer and easier to read than: >>> my_list = my_append(1) In the practical case, if you needed this sort of behavior, you would probably create your own class which has methods to manage it's internal list. A: Default arguments are evaluated at the time the def statement is executed, which is the probably the most reasonable approach: it is often what is wanted. If it wasn't the case, it could cause confusing results when the environment changes a little. Differentiating with a magic local method or something like that is far from ideal. Python tries to make things pretty plain and there is no obvious, clear replacement for the current boilerplate that doesn't resort to messing with the rather consistent semantics Python currently has. A: The extremely specific use case of a function that lets you optionally pass a list to modify, but generates a new list unless you specifically do pass one in, is definitely not worth a special-case syntax. Seriously, if you're making a number of calls to this function, why ever would you want to special-case the first call in the series (by passing only one argument) to distinguish it from every other one (which will need two arguments to be able to keep enriching an existing list)?! 
E.g., consider something like (assuming of course that betterappend did something useful, because in the current example it would be crazy to call it in lieu of a direct .append!-): def thecaller(n): if fee(0): newlist = betterappend(foo()) else: newlist = betterappend(fie()) for x in range(1, n): if fee(x): betterappend(foo(), newlist) else: betterappend(fie(), newlist) this is simply insane, and should obviously be, instead, def thecaller(n): newlist = [] for x in range(n): if fee(x): betterappend(foo(), newlist) else: betterappend(fie(), newlist) always using two arguments, avoiding repetition, and building much simpler logic. Introducing special-case syntax encourages and supports the special-cased use case, and there's really not much sense in encouraging and supporting this extremely peculiar one -- the existing, perfectly regular syntax is just fine for the use case's extremely rare good uses;-). A: I've edited this answer to include thoughts from the many comments posted in the question. The example you give is flawed. It modifies the list that you pass it as a side effect. If that's how you intended the function to work, it wouldn't make sense to have a default argument. Nor would it make sense to return the updated list. Without a default argument, the problem goes away. If the intent was to return a new list, you need to make a copy of the input list. Python prefers that things be explicit, so it's up to you to make the copy. def better_append(new_item, a_list=[]): new_list = list(a_list) new_list.append(new_item) return new_list For something a little different, you can make a generator that can take a list or a generator as input: def generator_append(new_item, a_list=[]): for x in a_list: yield x yield new_item I think you're under the misconception that Python treats mutable and immutable default arguments differently; that's simply not true. Rather, the immutability of the argument makes you change your code in a subtle way to do the right thing automatically. Take your example and make it apply to a string rather than a list: def string_append(new_item, a_string=''): a_string = a_string + new_item return a_string This code doesn't change the passed string - it can't, because strings are immutable. It creates a new string, and assigns a_string to that new string. The default argument can be used over and over again because it doesn't change, you made a copy of it at the start. A: What if you were not talking about lists, but about AwesomeSets, a class you just defined? Would you want to define ".local" in every class? class Foo(object): def get(self): return Foo() local = property(get) could possibly work, but would get old really quick, really soon. Pretty soon, the "if a is None: a = CorrectObject()" pattern becomes second nature, and you won't find it ugly -- you'll find it illuminating. The problem is not one of syntax, but one of semantics -- the values of default parameters are evaluated at function definition time, not at function execution time. A: Probably you should not define these two functions as good and bad. You can use the first one with list or dictionaries to implement in place modifications of the corresponding objects. This method can give you headaches if you do not know how mutable objects work but given you known what you are doing it is OK in my opinion. So you have two different methods to pass parameters providing different behaviors. And this is good, I would not change it. A: I think you're confusing elegant syntax with syntactic sugar. 
The Python syntax communicates both approaches clearly; it just happens that the correct approach appears less elegant (in terms of lines of syntax) than the incorrect approach. But since the incorrect approach is, well... incorrect, its elegance is irrelevant. As to why something like what you demonstrate in better_append is not implemented, I would guess that There should be one-- and preferably only one --obvious way to do it. trumps minor gains in elegance. A: This is better than good_append(), IMO: def ok_append(new_item, a_list=None): return a_list + [new_item] if a_list is not None else [new_item] (note that list.append returns None, so a one-liner written as return a_list.append(new_item) if a_list else [ new_item ] would return None whenever a list was passed in; this version returns a new list instead of mutating the argument). You could also be extra careful and check that a_list was a list...
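For reference, a two-call session showing the surprise that motivates all of this, using bad_append from the question:

>>> bad_append(1)
[1]
>>> bad_append(2)   # the same default list object is reused across calls
[1, 2]

good_append (or the corrected ok_append above) returns [2] for the second call, which is what most readers expect.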
Why the "mutable default argument fix" syntax is so ugly, asks python newbie
Now following my series of "python newbie questions" and based on another question. Prerogative Go to http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables and scroll down to "Default Parameter Values". There you can find the following: def bad_append(new_item, a_list=[]): a_list.append(new_item) return a_list def good_append(new_item, a_list=None): if a_list is None: a_list = [] a_list.append(new_item) return a_list There's even an "Important warning" on python.org with this very same example, tho not really saying it's "better". One way to put it So, question here is: why is the "good" syntax over a known issue ugly like that in a programming language that promotes "elegant syntax" and "easy-to-use"? edit: Another way to put it I'm not asking why or how it happens (thanks Mark for the link). I'm asking why there's no simpler alternative built-in the language. I think a better way would probably being able to do something in the def itself, in which the name argument would be attached to a "local", or "new" within the def, mutable object. Something like: def better_append(new_item, a_list=immutable([])): a_list.append(new_item) return a_list I'm sure someone can come with a better syntax, but I'm also guessing there must be a very good explanation to why this hasn't been done.
[ "This is called the 'mutable defaults trap'. See: http://www.ferg.org/projects/python_gotchas.html#contents_item_6\nBasically, a_list is initialized when the program is first interpreted, not each time you call the function (as you might expect from other languages). So you're not getting a new list each time you call the function, but you're reusing the same one.\nI guess the answer to the question is that if you want to append something to a list, just do it, don't create a function to do it. \nThis: \n>>> my_list = []\n>>> my_list.append(1)\n\nIs clearer and easier to read than: \n>>> my_list = my_append(1)\n\nIn the practical case, if you needed this sort of behavior, you would probably create your own class which has methods to manage it's internal list. \n", "Default arguments are evaluated at the time the def statement is executed, which is the probably the most reasonable approach: it is often what is wanted. If it wasn't the case, it could cause confusing results when the environment changes a little.\nDifferentiating with a magic local method or something like that is far from ideal. Python tries to make things pretty plain and there is no obvious, clear replacement for the current boilerplate that doesn't resort to messing with the rather consistent semantics Python currently has.\n", "The extremely specific use case of a function that lets you optionally pass a list to modify, but generates a new list unless you specifically do pass one in, is definitely not worth a special-case syntax. Seriously, if you're making a number of calls to this function, why ever would you want to special-case the first call in the series (by passing only one argument) to distinguish it from every other one (which will need two arguments to be able to keep enriching an existing list)?! E.g., consider something like (assuming of course that betterappend did something useful, because in the current example it would be crazy to call it in lieu of a direct .append!-):\ndef thecaller(n):\n if fee(0):\n newlist = betterappend(foo())\n else:\n newlist = betterappend(fie())\n for x in range(1, n):\n if fee(x):\n betterappend(foo(), newlist)\n else:\n betterappend(fie(), newlist)\n\nthis is simply insane, and should obviously be, instead,\ndef thecaller(n):\n newlist = []\n for x in range(n):\n if fee(x):\n betterappend(foo(), newlist)\n else:\n betterappend(fie(), newlist)\n\nalways using two arguments, avoiding repetition, and building much simpler logic.\nIntroducing special-case syntax encourages and supports the special-cased use case, and there's really not much sense in encouraging and supporting this extremely peculiar one -- the existing, perfectly regular syntax is just fine for the use case's extremely rare good uses;-).\n", "I've edited this answer to include thoughts from the many comments posted in the question.\nThe example you give is flawed. It modifies the list that you pass it as a side effect. If that's how you intended the function to work, it wouldn't make sense to have a default argument. Nor would it make sense to return the updated list. Without a default argument, the problem goes away.\nIf the intent was to return a new list, you need to make a copy of the input list. 
Python prefers that things be explicit, so it's up to you to make the copy.\ndef better_append(new_item, a_list=[]): \n new_list = list(a_list)\n new_list.append(new_item) \n return new_list \n\nFor something a little different, you can make a generator that can take a list or a generator as input:\ndef generator_append(new_item, a_list=[]):\n for x in a_list:\n yield x\n yield new_item\n\nI think you're under the misconception that Python treats mutable and immutable default arguments differently; that's simply not true. Rather, the immutability of the argument makes you change your code in a subtle way to do the right thing automatically. Take your example and make it apply to a string rather than a list:\ndef string_append(new_item, a_string=''):\n a_string = a_string + new_item\n return a_string\n\nThis code doesn't change the passed string - it can't, because strings are immutable. It creates a new string, and assigns a_string to that new string. The default argument can be used over and over again because it doesn't change, you made a copy of it at the start.\n", "What if you were not talking about lists, but about AwesomeSets, a class you just defined? Would you want to define \".local\" in every class?\nclass Foo(object):\n def get(self):\n return Foo()\n local = property(get)\n\ncould possibly work, but would get old really quick, really soon. Pretty soon, the \"if a is None: a = CorrectObject()\" pattern becomes second nature, and you won't find it ugly -- you'll find it illuminating.\nThe problem is not one of syntax, but one of semantics -- the values of default parameters are evaluated at function definition time, not at function execution time. \n", "Probably you should not define these two functions as good and bad.\nYou can use the first one with list or dictionaries to implement in place modifications of the corresponding objects.\nThis method can give you headaches if you do not know how mutable objects work but given you known what you are doing it is OK in my opinion.\nSo you have two different methods to pass parameters providing different behaviors. And this is good, I would not change it.\n", "I think you're confusing elegant syntax with syntactic sugar. The python syntax communicates both approaches clearly, it just happens that the correct approach appears less elegant (in terms of lines of syntax) than the incorrect approach. But since the incorrect approach, is well...incorrect, it's elegance is irrelevant. As to why something like you demonstrate in better_append is not implemented, I would guess that There should be one-- and preferably only one --obvious way to do it. trumps minor gains in elegance. \n", "This is better than good_append(), IMO:\ndef ok_append(new_item, a_list=None):\n return a_list.append(new_item) if a_list else [ new_item ]\n\nYou could also be extra careful and check that a_list was a list...\n" ]
[ 11, 6, 5, 3, 2, 1, 0, 0 ]
[]
[]
[ "mutable", "names", "python" ]
stackoverflow_0002639915_mutable_names_python.txt
Q: Python string formatting too slow I use the following code to log a map; it is fast when it only contains zeroes, but as soon as there is actual data in the map it becomes unbearably slow... Is there any way to do this faster? log_file = open('testfile', 'w') for i, x in ((i, start + i * interval) for i in range(length)): log_file.write('%-5d %8.3f %13g %13g %13g %13g %13g %13g\n' % (i, x, map[0][i], map[1][i], map[2][i], map[3][i], map[4][i], map[5][i])) A: I suggest you run your code using the cProfile module and postprocess the results as described on http://docs.python.org/library/profile.html . This will let you know exactly how much time is spent in the call to str.__mod__ for the string formatting and how much is spent doing other things, like writing the file and doing the __getitem__ lookups for map[0][i] and such. A: First I checked % against backquoting. % is faster. Then I checked % (tuple) against 'string'.format(). An initial bug made me think it was faster. But no. % is faster. So, you are already doing your massive pile of float-to-string conversions the fastest way you can do it in Python. The demo code below is ugly demo code. Please don't lecture me on xrange versus range or other pedantry. KThxBye. My ad-hoc and highly unscientific testing indicates that % (1.234,) operations on Python 2.5 on Linux are faster than % (1.234,...) operations on Python 2.6 on Linux, for the test code below, with the proviso that the attempt to use 'string'.format() won't work on Python versions before 2.6. And so on. # this code should never be used in production. # should work on linux and windows now. import random import timeit import os import tempfile start = 0 interval = 0.1 amap = [] # list of lists tmap = [] # list of tuples def r(): return random.random()*500 for i in xrange(0,10000): amap.append ( [r(),r(),r(),r(),r(),r()] ) for i in xrange(0,10000): tmap.append ( (r(),r(),r(),r(),r(),r()) ) def testme_percent(): log_file = tempfile.TemporaryFile() try: for qmap in amap: s = '%g %g %g %g %g %g \n' % (qmap[0], qmap[1], qmap[2], qmap[3], qmap[4], qmap[5]) log_file.write( s) finally: log_file.close(); def testme_tuple_percent(): log_file = tempfile.TemporaryFile() try: for qtup in tmap: s = '%g %g %g %g %g %g \n' % qtup log_file.write( s ); finally: log_file.close(); def testme_backquotes_rule_yeah_baby(): log_file = tempfile.TemporaryFile() try: for qmap in amap: s = `qmap`+'\n' log_file.write( s ); finally: log_file.close(); def testme_the_new_way_to_format(): log_file = tempfile.TemporaryFile() try: for qmap in amap: s = '{0} {1} {2} {3} {4} {5} \n'.format(qmap[0], qmap[1], qmap[2], qmap[3], qmap[4], qmap[5]) log_file.write( s ); finally: log_file.close(); # python 2.5 helper default_number = 50 def _xtimeit(stmt="pass", timer=timeit.default_timer, number=default_number): """quick and dirty""" if stmt<>"pass": stmtcall = stmt+"()" ssetup = "from __main__ import "+stmt else: stmtcall = stmt ssetup = "pass" t = timeit.Timer(stmtcall,setup=ssetup) try: return t.timeit(number) except: t.print_exc() # no formatting operation in testme2 print "now timing variations on a theme" #times = [] #for i in range(0,10): n0 = _xtimeit( "pass",number=50) print "pass = ",n0 n1 = _xtimeit( "testme_percent",number=50); print "old style % formatting=",n1 n2 = _xtimeit( "testme_tuple_percent",number=50); print "old style % formatting with tuples=",n2 n3 = _xtimeit( "testme_backquotes_rule_yeah_baby",number=50); print "backquotes=",n3 n4 = _xtimeit( "testme_the_new_way_to_format",number=50); print "new str.format conversion=",n4 # times.append( n); print "done" I think you could optimize your code by building your tuples of floats somewhere else: wherever you build that map in the first place, build your tuple list too, and then apply fmt_str % tup this way: for tup in mytups: log_file.write( fmt_str % tup ) I was able to shave the 8.7 seconds down to 8.5 seconds by dropping the making-a-tuple part out of the for loop. Which ain't much. The big boy there is floating point formatting, which I believe is always going to be expensive. Alternative: Have you considered NOT writing such huge logs as text, and instead, saving them using the fastest "persistence" method available, and then writing a short utility to dump them to text, when needed? Some people use NumPy with very large numeric data sets, and it does not seem they would use a line-by-line dump to store their stuff. See: http://thsant.blogspot.com/2007/11/saving-numpy-arrays-which-is-fastest.html
Python string formatting too slow
I use the following code to log a map, it is fast when it only contains zeroes, but as soon as there is actual data in the map it becomes unbearably slow... Is there any way to do this faster? log_file = open('testfile', 'w') for i, x in ((i, start + i * interval) for i in range(length)): log_file.write('%-5d %8.3f %13g %13g %13g %13g %13g %13g\n' % (i, x, map[0][i], map[1][i], map[2][i], map[3][i], map[4][i], map[5][i]))
[ "I suggest you run your code using the cProfile module and postprocess the results as described on http://docs.python.org/library/profile.html . This will let you know exactly how much time is spent in the call to str.__mod__ for the string formatting and how much is spent doing other things, like writing the file and doing the __getitem__ lookups for map[0][i] and such.\n", "First I checked % against backquoting. % is faster. THen I checked % (tuple) against 'string'.format(). An initial bug made me think it was faster. But no. % is faster.\nSo, you are already doing your massive pile of float-to-string conversions the fastest way you can do it in Python. \nThe Demo code below is ugly demo code. Please don't lecture me on xrange versus range or other pedantry. KThxBye.\nMy ad-hoc and highly unscientific testing indicates that (a) % (1.234,) operations on Python 2.5 on linux is faster than % (1.234,...) operation Python 2.6 on linux, for the test code below, with the proviso that the attempt to use 'string'.format() won't work on python versions before 2.6. And so on.\n# this code should never be used in production.\n# should work on linux and windows now.\n\nimport random\nimport timeit\nimport os\nimport tempfile\n\n\nstart = 0\ninterval = 0.1\n\namap = [] # list of lists\ntmap = [] # list of tuples\n\ndef r():\n return random.random()*500\n\nfor i in xrange(0,10000):\n amap.append ( [r(),r(),r(),r(),r(),r()] )\n\nfor i in xrange(0,10000):\n tmap.append ( (r(),r(),r(),r(),r(),r()) )\n\n\n\n\ndef testme_percent():\n log_file = tempfile.TemporaryFile()\n try:\n for qmap in amap:\n s = '%g %g %g %g %g %g \\n' % (qmap[0], qmap[1], qmap[2], qmap[3], qmap[4], qmap[5]) \n log_file.write( s)\n finally:\n log_file.close();\n\ndef testme_tuple_percent():\n log_file = tempfile.TemporaryFile()\n try: \n for qtup in tmap:\n s = '%g %g %g %g %g %g \\n' % qtup\n log_file.write( s );\n finally:\n log_file.close();\n\ndef testme_backquotes_rule_yeah_baby():\n log_file = tempfile.TemporaryFile()\n try:\n for qmap in amap:\n s = `qmap`+'\\n'\n log_file.write( s );\n finally:\n log_file.close(); \n\ndef testme_the_new_way_to_format():\n log_file = tempfile.TemporaryFile()\n try:\n for qmap in amap:\n s = '{0} {1} {2} {3} {4} {5} \\n'.format(qmap[0], qmap[1], qmap[2], qmap[3], qmap[4], qmap[5]) \n log_file.write( s );\n finally:\n log_file.close();\n\n# python 2.5 helper\ndefault_number = 50 \ndef _xtimeit(stmt=\"pass\", timer=timeit.default_timer,\n number=default_number):\n \"\"\"quick and dirty\"\"\"\n if stmt<>\"pass\":\n stmtcall = stmt+\"()\"\n ssetup = \"from __main__ import \"+stmt\n else:\n stmtcall = stmt\n ssetup = \"pass\"\n t = timeit.Timer(stmtcall,setup=ssetup)\n try:\n return t.timeit(number)\n except:\n t.print_exc()\n\n\n# no formatting operation in testme2\n\nprint \"now timing variations on a theme\"\n\n#times = []\n#for i in range(0,10):\n\nn0 = _xtimeit( \"pass\",number=50)\nprint \"pass = \",n0\n\nn1 = _xtimeit( \"testme_percent\",number=50);\nprint \"old style % formatting=\",n1\n\nn2 = _xtimeit( \"testme_tuple_percent\",number=50);\nprint \"old style % formatting with tuples=\",n2\n\nn3 = _xtimeit( \"testme_backquotes_rule_yeah_baby\",number=50);\nprint \"backquotes=\",n3\n\nn4 = _xtimeit( \"testme_the_new_way_to_format\",number=50);\nprint \"new str.format conversion=\",n4\n\n\n# times.append( n);\n\n\n\n\nprint \"done\" \n\nI think you could optimize your code by building your TUPLES of floats somewhere else, wherever you built that map, in the first place, build your tuple list, 
and then applying the fmt_string % tuple this way:\nfor tup in mytups:\n log_file.write( fmt_str % tup )\n\nI was able to shave the 8.7 seconds down to 8.5 seconds by dropping the making-a-tuple part out of the for loop. Which ain't much. The big boy there is floating point formatting, which I believe is always going to be expensive.\nAlternative: \nHave you considered NOT writing such huge logs as text, and instead, saving them using the fastest \"persistence\" method available, and then writing a short utility to dump them to text, when needed? Some people use NumPy with very large numeric data sets, and it does not seem they would use a line-by-line dump to store their stuff. See:\nhttp://thsant.blogspot.com/2007/11/saving-numpy-arrays-which-is-fastest.html\n", "Without wishing to wade into the optimize-this-code morass, I would have written the code more like this:\nlog_file = open('testfile', 'w')\nx = start\nmap_iter = zip(range(length), map[0], map[1], map[2], map[3], map[4], map[5])\nfmt = '%-5d %8.3f %13g %13g %13g %13g %13g %13g\\n'\nfor i, m0, m1, m2, m3, m4, m5 in mapiter:\n s = fmt % (i, x, m0, m1, m2, m3, m4, m5)\n log_file.write(s)\n x += interval\n\nBut I will weigh in with the recommendation that you not name variables after python builtins, like map.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "formatting", "python", "string" ]
stackoverflow_0002637530_formatting_python_string.txt
Q: Include empty directory with python setup.py sdist I have a Python package where I want to include an empty directory as part of the source distribution. I tried adding include empty_directory to the MANIFEST.in file, but when I run python setup.py sdist the empty directory is still not included. Any tips on how to do this? A: According to the docs: include pat1 pat2 - include all files matching any of the listed patterns exclude pat1 pat2 - exclude all files matching any of the listed patterns recursive-include dir pat1 pat2 - include all files under dir matching any of the listed patterns recursive-exclude dir pat1 pat2 - exclude all files under dir matching any of the listed patterns global-include pat1 pat2 - include all files anywhere in the source tree matching any of the listed patterns global-exclude pat1 pat2 - exclude all files anywhere in the source tree matching any of the listed patterns prune dir - exclude all files under dir graft dir - include all files under dir So it seems like you want graft, not include. Also, it seems you can't include empty directories. You have to create an "empty.txt" file or something like this inside the directory to force its inclusion.
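To make the workaround concrete, here is a minimal sketch; "empty_directory" is a stand-in for your actual path, and ".keep" is an arbitrary placeholder name:

    # run once so sdist has a file to ship (done here in Python; touch works too)
    open('empty_directory/.keep', 'w').close()

    # MANIFEST.in
    graft empty_directory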
Include empty directory with python setup.py sdist
I have a Python package where I want to include an empty directory as part of the source distribution. I tried adding include empty_directory to the MANIFEST.in file, but when I run python setup.py sdist The empty directory is still not included. Any tips on how to do this?
[ "According to the docs:\n\n\ninclude pat1 pat2 - include all\n files matching any of the listed\n patterns\nexclude pat1 pat2 -\n exclude all files matching any of the listed patterns\nrecursive-include dir pat1 pat2 - include all files\n under dir matching any of the listed\n patterns\nrecursive-exclude dir pat1 pat2 - exclude all files under\n dir matching any of the listed\n patterns\nglobal-include pat1 pat2 - include all files anywhere in\n the source tree matching — & any of\n the listed patterns\nglobal-exclude pat1 pat2 - exclude\n all files anywhere in the source tree\n matching — & any of the listed\n patterns\nprune dir - exclude\n all files under dir\ngraft dir -\n include all files under dir\n\n\nSo seems like you want graft, not include.\nAlso, it seems you can't include empty directories. You have to create a \"empty.txt\" file or something like this inside the directory to force its inclusion.\n" ]
[ 12 ]
[]
[]
[ "installation", "python" ]
stackoverflow_0002640378_installation_python.txt
Q: How can I conditionally only log something if it's a certain Class? Something like this: if self.__class__ == "User": logging.debug("%s non_pks were found" % (str(len(non_pks))) ) In [2]: user = User.objects.get(pk=1) In [3]: user.__class__ Out[3]: <class 'django.contrib.auth.models.User'> In [4]: if user.__class__ == 'django.contrib.auth.models.User': print "yes" ...: In [5]: user.__class__ == 'django.contrib.auth.models.User' Out[5]: False In [6]: user.__class__ == 'User' Out[6]: False In [7]: user.__class__ == "<class 'django.contrib.auth.models.User'>" Out[7]: False A: Classes are first class objects in Python: >>> class Foo(object): ... pass ... >>> a = Foo() >>> a.__class__ == Foo True Note: they're not strings, they're objects. Don't compare to "Foo" but to Foo A: This should work: if user.__class__.__name__ == 'User':
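A small sketch of the two usual idioms, following the answers above; it assumes a user with pk=1 exists, as in the question's transcript:

    import logging
    from django.contrib.auth.models import User

    user = User.objects.get(pk=1)

    # compare against the class object itself, never against a string
    if user.__class__ is User:
        logging.debug("exact User instance")

    # or, if subclasses of User should match too:
    if isinstance(user, User):
        logging.debug("User or a subclass of it")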
How can I conditionally only log something if it's a certain Class?
Something like this: if self.__class__ == "User": logging.debug("%s non_pks were found" % (str(len(non_pks))) ) In [2]: user = User.objects.get(pk=1) In [3]: user.__class__ Out[3]: <class 'django.contrib.auth.models.User'> In [4]: if user.__class__ == 'django.contrib.auth.models.User': print "yes" ...: In [5]: user.__class__ == 'django.contrib.auth.models.User' Out[5]: False In [6]: user.__class__ == 'User' Out[6]: False In [7]: user.__class__ == "<class 'django.contrib.auth.models.User'>" Out[7]: False
[ "Classes are first class objects in Python:\n>>> class Foo(object):\n... pass\n... \n>>> a = Foo()\n>>> a.__class__ == Foo\nTrue\n\nNote: they're not strings, they're objects. Don't compare to \"Foo\" but to Foo\n", "This should work:\nif user.__class__.__name__ == 'User':\n\n" ]
[ 3, 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002641113_django_python.txt
Q: Populating Models from other Models in Django? This is somewhat related to the question posed in this question but I'm trying to do this with an abstract base class. For the purposes of this example lets use these models: class Comic(models.Model): name = models.CharField(max_length=20) desc = models.CharField(max_length=100) volume = models.IntegerField() ... <50 other things that make up a Comic> class Meta: abstract = True class InkedComic(Comic): lines = models.IntegerField() class ColoredComic(Comic): colored = models.BooleanField(default=False) In the view lets say we get a reference to an InkedComic id since the tracer, err I mean, inker is done drawing the lines and it's time to add color. Once the view has added all the color we want to save a ColoredComic to the db. Obviously we could do inked = InkedComic.object.get(pk=ink_id) colored = ColoredComic() colored.name = inked.name etc, etc. But really it'd be nice to do: colored = ColoredComic(inked_comic=inked) colored.colored = True colored.save() I tried to do class ColoredComic(Comic): colored = models.BooleanField(default=False) def __init__(self, inked_comic = False, *args, **kwargs): super(ColoredComic, self).__init__(*args, **kwargs) if inked_comic: self.__dict__.update(inked_comic.__dict__) self.__dict__.update({'id': None}) # Remove pk field value but it turns out the ColoredComic.objects.get(pk=1) call sticks the pk into the inked_comic keyword, which is obviously not intended. (and actually results in a int does not have a dict exception) My brain is fried at this point, am I missing something obvious, or is there a better way to do this? A: What about a static method on the class to handle this? colored = ColoredComic.create_from_Inked(pk=ink_id) colored.colored = True colored.save() Untested, but something to this effect (using your code from above) class ColoredComic(Comic): colored = models.BooleanField(default=False) @staticmethod def create_from_Inked(**kwargs): inked = InkedComic.objects.get(**kwargs) if inked: colored = ColoredComic.objects.create() colored.__dict__.update(inked.__dict__) colored.__dict__.update({'id': None}) # Remove pk field value return colored else: # or throw an exception... return None A: For that simple case, this will work: inked = InkedComic.object.get(pk=ink_id) inked.__class__ = ColoredComic inked.colored = True inked.save()
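A sketch generalizing the static-method answer so the ~50 shared Comic fields need not be listed by hand; it assumes the models as declared in the question, and that _meta.fields on the abstract base enumerates the declared fields (which it does in Django of this era, though that is an implementation detail worth verifying):

    def color_from_inked(inked):
        """Build an unsaved ColoredComic carrying every shared Comic field."""
        colored = ColoredComic(colored=True)
        for field in Comic._meta.fields:  # abstract base: shared fields, no pk
            setattr(colored, field.attname, getattr(inked, field.attname))
        return colored

    colored = color_from_inked(InkedComic.objects.get(pk=ink_id))
    colored.save()  # inserts a new row, since no pk was copied over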
Populating Models from other Models in Django?
This is somewhat related to the question posed in this question but I'm trying to do this with an abstract base class. For the purposes of this example lets use these models: class Comic(models.Model): name = models.CharField(max_length=20) desc = models.CharField(max_length=100) volume = models.IntegerField() ... <50 other things that make up a Comic> class Meta: abstract = True class InkedComic(Comic): lines = models.IntegerField() class ColoredComic(Comic): colored = models.BooleanField(default=False) In the view lets say we get a reference to an InkedComic id since the tracer, err I mean, inker is done drawing the lines and it's time to add color. Once the view has added all the color we want to save a ColoredComic to the db. Obviously we could do inked = InkedComic.object.get(pk=ink_id) colored = ColoredComic() colored.name = inked.name etc, etc. But really it'd be nice to do: colored = ColoredComic(inked_comic=inked) colored.colored = True colored.save() I tried to do class ColoredComic(Comic): colored = models.BooleanField(default=False) def __init__(self, inked_comic = False, *args, **kwargs): super(ColoredComic, self).__init__(*args, **kwargs) if inked_comic: self.__dict__.update(inked_comic.__dict__) self.__dict__.update({'id': None}) # Remove pk field value but it turns out the ColoredComic.objects.get(pk=1) call sticks the pk into the inked_comic keyword, which is obviously not intended. (and actually results in a int does not have a dict exception) My brain is fried at this point, am I missing something obvious, or is there a better way to do this?
[ "What about a static method on the class to handle this?\ncolored = ColoredComic.create_from_Inked(pk=ink_id)\ncolored.colored = True\ncolored.save()\n\nUntested, but something to this effect (using your code from above)\nclass ColoredComic(Comic):\n colored = models.BooleanField(default=False)\n\n @staticmethod\n def create_from_Inked(**kwargs):\n inked = InkedComic.objects.get(**kwargs)\n if inked:\n colored = ColoredComic.objects.create()\n colored.__dict__.update(inked.__dict__)\n colored.__dict__.update({'id': None}) # Remove pk field value\n return colored\n else:\n # or throw an exception...\n return None\n\n", "For that simple case, this will work:\ninked = InkedComic.object.get(pk=ink_id)\ninked.__class__ = ColoredComic\ninked.colored = True\ninked.save()\n\n" ]
[ 3, 0 ]
[]
[]
[ "django", "django_models", "inheritance", "python" ]
stackoverflow_0002640896_django_django_models_inheritance_python.txt
Q: Pylons user authentication roll our own or openid or alternatives? What is the current state of user authentication? Is it good to go with OpenID or another alternative, or do we still have to write our own user/password? A: Take a look at: Pylons authentication? But, the direct answer to your question: You could use RPX along with openid as mentioned on Tony Landis' blog
Pylons user authentication roll our own or openid or alternatives?
What is the current state of user authentication? Is it good to go with OpenID or another alternative, or do we still have to write our own user/password?
[ "Take a look at: Pylons authentication?\nBut, the direct answer to your question:\nYou could use RPX along with openid as mentioned on Tony Landis' blog\n" ]
[ 1 ]
[]
[]
[ "pylons", "python" ]
stackoverflow_0002639046_pylons_python.txt
Q: How to use FFmpeg I'm trying to extract frames from a video and I've picked ffmpeg (tell me if you know something better) for this task. I've downloaded its source and don't know how to use it. How can I compile it? What is the recommended language for it? I know Python and C++. Please note that my operating system is Windows Vista 64-bit. A: If you know C++, you can modify the sample from this article using ffmpeg. A: If you just want to extract the frames from a video and save them to file, you can just use ffmpeg at the command line: ffmpeg -i video.avi image%d.jpg For this method, you do not need to build ffmpeg as there should be a windows binary available for download. If you want to display the frames or perform some other processing on them, you may want to use libavformat and libavcodec (main parts of the ffmpeg project) to extract the video frames in code. Here is a pretty good tutorial on how to get frames from a video using libavcodec and libavformat. libavformat and libavcodec are C libraries so I would use C or C++ if you want to interface directly to them. There is this python wrapper for ffmpeg that looks promising, but I haven't tried it. You can download the compiled ffmpeg libraries as well so you shouldn't have to build them yourself. ffmpeg will not build on MSVC++ as per the documentation so you would have to set up a mingw environment. A: If you only want to use ffmpeg you should just get a build and not the source itself. To extract a frame from a video use the following command line: ffmpeg -i input.avi -r 1 -f image2 -s 120x96 images%05d.png Where input.avi is your video and 120x96 is the dimension of the output image. There are a lot of options you can use to specify the exact frame in the movie, but that would definitely be too much to show here. Take a look at this page to get a more detailed description. Best wishes, Fabian
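Since the asker knows Python, here is a hedged sketch of driving the command line shown above from Python; it assumes an ffmpeg binary is on the PATH, and the file names are placeholders:

    import subprocess

    def extract_frames(video_path, out_pattern='frame%05d.png', fps=1):
        """Dump roughly one frame per second from video_path to image files."""
        cmd = ['ffmpeg', '-i', video_path,
               '-r', str(fps), '-f', 'image2', out_pattern]
        subprocess.check_call(cmd)  # raises CalledProcessError if ffmpeg fails

    extract_frames('input.avi')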
How to use FFmpeg
I'm trying to extract frames from a video and I've picked ffmpeg (tell me if you know something better) for this task. I've downloaded its source and don't know how to use it. How can I compile it? What is the recommended language for it? I know Python and C++. Please note that my operating system is Windows Vista 64-bit.
[ "If you know C++, you can modify sample from article using ffmpeg.\n", "If you just want to extract the frames from a video and save them to file, you can just use ffmpeg at the command line:\nffmpeg -i video.avi image%d.jpg\n\nFor this method, you do not need to build ffmpeg as there should be a windows binary available for download.\nIf you are wanting to display the frames or perform some other processing on them, you may want to use libavformat and libavcodec (main parts of the ffmpeg project) to extract the video frames in code. Here is a pretty good tutorial on how to get frames from a video using libavcodec and libavformat. libavformat and libavcodec are C libraries so I would use C or C++ if you want to interface directly to them. There is this python wrapper for ffmpeg that looks promising, but I haven't tried it.\nYou can download the compiled ffmpeg libraries as well so you shouldn't have to build them yourself. ffmpeg will not build on MSVC++ as per the documentation so you would have to set up a mingw environment.\n", "If you only want use ffmpeg you should just get a build and not the source itself. \nTo extract a frame from a video use the following command line:\nffmpeg -i input.avi -r 1 -f image2 -s 120x96 images%05d.png\n\nWhere input.avi is your video, 120x96 the dimension of the output image. There are a lot of options you can use to specify the exact frame in the movie, but that would definetely be too much to show here. Take a look at this page to get a more detailed description.\nBest wishes,\nFabian\n" ]
[ 8, 6, 1 ]
[]
[]
[ "c++", "ffmpeg", "python" ]
stackoverflow_0001908411_c++_ffmpeg_python.txt
Q: class, dict, self, init, args? class attrdict(dict): def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) self.__dict__ = self a = attrdict(x=1, y=2) print a.x, a.y b = attrdict() b.x, b.y = 1, 2 print b.x, b.y Could somebody explain the first four lines in words? I read about classes and methods. But here it seems very confusing. A: My shot at a line-by-line explanation: class attrdict(dict): This line declares a class attrdict as a subclass of the built-in dict class. def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) This is your standard __init__ method. The call to dict.__init__(...) is to utilize the super class' (in this case, dict) constructor (__init__) method. The final line, self.__dict__ = self makes it so the keyword-arguments (kwargs) you pass to the __init__ method can be accessed like attributes, i.e., a.x, a.y in the code below. Hope this helps clear up your confusion. A: You are not using positional arguments in your example. So the relevant code is: class attrdict(dict): def __init__(self, **kwargs): dict.__init__(self, **kwargs) self.__dict__ = self In the first line you define class attrdict as a subclass of dict. In the second line you define the function that automatically will initialize your instance. You pass keyword arguments (**kargs) to this function. When you instantiate a: a = attrdict(x=1, y=2) you are actually calling attrdict.__init__(a, {'x':1, 'y':2}) dict instance core initialization is done by initializing the dict builtin superclass. This is done in the third line passing the parameters received in attrdict.__init__. Thus, dict.__init__(self,{'x':1, 'y':2}) makes self (the instance a) a dictionary: self == {'x':1, 'y':2} The nice thing occurs in the last line: Each instance has a dictionary holding its attributes. This is self.__dict__ (i.e. a.__dict__). For example, if a.__dict__ = {'x':1, 'y':2} we could write a.x or a.y and get values 1 or 2, respectively. So, this is what line 4 does: self.__dict__ = self is equivalent to: a.__dict__ = a where a = {'x':1, 'y':2} Then I can call a.x and a.y. Hope is not too messy. A: Here's a good article that explains __dict__: The Dynamic dict The attrdict class exploits that by inheriting from a dictionary and then setting the object's __dict__ to that dictionary. So any attribute access occurs against the parent dictionary (i.e. the dict class it inherits from). The rest of the article is quite good too for understanding Python objects: Python Attributes and Methods
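To see what the self.__dict__ = self line buys in both directions, here is a short demonstration using the class exactly as posted (no assumptions beyond that):

    class attrdict(dict):
        def __init__(self, *args, **kwargs):
            dict.__init__(self, *args, **kwargs)
            self.__dict__ = self

    a = attrdict(x=1)
    a.y = 2          # an attribute write...
    print(a['y'])    # ...shows up as a key: 2
    a['x'] = 10      # a key write...
    print(a.x)       # ...shows up as an attribute: 10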
class, dict, self, init, args?
class attrdict(dict): def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) self.__dict__ = self a = attrdict(x=1, y=2) print a.x, a.y b = attrdict() b.x, b.y = 1, 2 print b.x, b.y Could somebody explain the first four lines in words? I read about classes and methods. But here it seems very confusing.
[ "My shot at a line-by-line explanation:\nclass attrdict(dict):\n\nThis line declares a class attrdict as a subclass of the built-in dict class.\ndef __init__(self, *args, **kwargs): \n dict.__init__(self, *args, **kwargs)\n\nThis is your standard __init__ method. The call to dict.__init__(...) is to utilize the super\nclass' (in this case, dict) constructor (__init__) method.\nThe final line, self.__dict__ = self makes it so the keyword-arguments (kwargs) you pass to the __init__ method can be accessed like attributes, i.e., a.x, a.y in the code below.\nHope this helps clear up your confusion.\n", "You are not using positional arguments in your example. So the relevant code is:\nclass attrdict(dict):\n def __init__(self, **kwargs):\n dict.__init__(self, **kwargs)\n self.__dict__ = self\n\nIn the first line you define class attrdict as a subclass of dict.\nIn the second line you define the function that automatically will initialize your instance. You pass keyword arguments (**kargs) to this function. When you instantiate a:\n a = attrdict(x=1, y=2)\n\nyou are actually calling\nattrdict.__init__(a, {'x':1, 'y':2})\n\ndict instance core initialization is done by initializing the dict builtin superclass. This is done in the third line passing the parameters received in attrdict.__init__.\nThus,\ndict.__init__(self,{'x':1, 'y':2})\n\nmakes self (the instance a) a dictionary:\nself == {'x':1, 'y':2}\n\nThe nice thing occurs in the last line:\nEach instance has a dictionary holding its attributes. This is self.__dict__ (i.e. a.__dict__).\nFor example, if\na.__dict__ = {'x':1, 'y':2} \n\nwe could write a.x or a.y and get values 1 or 2, respectively.\nSo, this is what line 4 does: \nself.__dict__ = self\n\nis equivalent to:\na.__dict__ = a where a = {'x':1, 'y':2}\n\nThen I can call a.x and a.y.\nHope is not too messy.\n", "Here's a good article that explains __dict__:\nThe Dynamic dict\nThe attrdict class exploits that by inheriting from a dictionary and then setting the object's __dict__ to that dictionary. So any attribute access occurs against the parent dictionary (i.e. the dict class it inherits from).\nThe rest of the article is quite good too for understanding Python objects:\nPython Attributes and Methods\n" ]
[ 7, 5, 4 ]
[]
[]
[ "arguments", "class", "python", "self" ]
stackoverflow_0002641484_arguments_class_python_self.txt
Q: Django Querysets -- need a less expensive way to do this I have a problem with some code and I believe it is because of the expense of the queryset. I am looking for a much less expensive (in terms of time) way to to this.. log.info("Getting Users") employees = Employee.objects.filter(is_active = True) log.info("Have Users") if opt.supervisor: if opt.hierarchical: people = getSubs(employees, " ".join(args)) else: people = employees.filter(supervisor__name__icontains = " ".join(args)) else: log.info("Filtering Users") people = employees.filter(name__icontains = " ".join(args)) | \ employees.filter(unix_accounts__username__icontains = " ".join(args)) log.info("Filtered Users") log.info("Processing data") np = [] for person in people: unix, p4, bugz = "No", "No", "No" if len(person.unix_accounts.all()): unix = "Yes" if len(person.perforce_accounts.all()): p4 = "Yes" if len(person.bugzilla_accounts.all()): bugz = "Yes" if person.cell_phone != "": exphone = fixphone(person.cell_phone) elif person.other_phone != "": exphone = fixphone(person.other_phone) else: exphone = "" np.append({ 'name':person.name, 'office_phone': fixphone(person.office_phone), 'position': person.position, 'location': person.location.description, 'email': person.email, 'functional_area': person.functional_area.name, 'department': person.department.name, 'supervisor': person.supervisor.name, 'unix': unix, 'perforce': p4, 'bugzilla':bugz, 'cell_phone': fixphone(exphone), 'fax': fixphone(person.fax), 'last_update': person.last_update.ctime() }) log.info("Have data") Now this results in a log which looks like this.. 19:00:55 INFO phone phone Getting Users 19:00:57 INFO phone phone Have Users 19:00:57 INFO phone phone Processing data 19:01:30 INFO phone phone Have data As you can see it's taking over 30 seconds to simply iterate over the data. That is way too expensive. Can someone clue me into a more efficient way to do this. I thought that if I did the first filter that would make things easier but seems to have no effect. I'm at a loss on this one. Thanks To be clear this is about 1500 employees -- Not too many!! A: Or Q objects together instead of QuerySets. QuerySet.select_related() QuerySet.iterator() Use QuerySet.extra() to add IS NULL fields instead of the three len() calls in the loop.
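A hedged sketch combining the first three suggestions from the answer; the field names are taken from the question's code, and QuerySet.exists() assumes Django 1.2 or later:

    from django.db.models import Q

    term = " ".join(args)
    people = (Employee.objects
              .filter(is_active=True)
              .filter(Q(name__icontains=term) |
                      Q(unix_accounts__username__icontains=term))
              .distinct()
              .select_related('location', 'functional_area',
                              'department', 'supervisor'))

    for person in people.iterator():
        # the related rows were fetched in the same query, so these
        # attribute accesses no longer cost one query each
        row = {'location': person.location.description,
               'supervisor': person.supervisor.name}
        # .exists() is cheaper than len(...all()), though still one query
        # per person; the answer's extra()/IS NULL trick would remove even that
        row['unix'] = "Yes" if person.unix_accounts.exists() else "No"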
Django Querysets -- need a less expensive way to do this
I have a problem with some code and I believe it is because of the expense of the queryset. I am looking for a much less expensive (in terms of time) way to to this.. log.info("Getting Users") employees = Employee.objects.filter(is_active = True) log.info("Have Users") if opt.supervisor: if opt.hierarchical: people = getSubs(employees, " ".join(args)) else: people = employees.filter(supervisor__name__icontains = " ".join(args)) else: log.info("Filtering Users") people = employees.filter(name__icontains = " ".join(args)) | \ employees.filter(unix_accounts__username__icontains = " ".join(args)) log.info("Filtered Users") log.info("Processing data") np = [] for person in people: unix, p4, bugz = "No", "No", "No" if len(person.unix_accounts.all()): unix = "Yes" if len(person.perforce_accounts.all()): p4 = "Yes" if len(person.bugzilla_accounts.all()): bugz = "Yes" if person.cell_phone != "": exphone = fixphone(person.cell_phone) elif person.other_phone != "": exphone = fixphone(person.other_phone) else: exphone = "" np.append({ 'name':person.name, 'office_phone': fixphone(person.office_phone), 'position': person.position, 'location': person.location.description, 'email': person.email, 'functional_area': person.functional_area.name, 'department': person.department.name, 'supervisor': person.supervisor.name, 'unix': unix, 'perforce': p4, 'bugzilla':bugz, 'cell_phone': fixphone(exphone), 'fax': fixphone(person.fax), 'last_update': person.last_update.ctime() }) log.info("Have data") Now this results in a log which looks like this.. 19:00:55 INFO phone phone Getting Users 19:00:57 INFO phone phone Have Users 19:00:57 INFO phone phone Processing data 19:01:30 INFO phone phone Have data As you can see it's taking over 30 seconds to simply iterate over the data. That is way too expensive. Can someone clue me into a more efficient way to do this. I thought that if I did the first filter that would make things easier but seems to have no effect. I'm at a loss on this one. Thanks To be clear this is about 1500 employees -- Not too many!!
[ "\nOr Q objects together instead of QuerySets.\nQuerySet.select_related()\nQuerySet.iterator()\nUse QuerySet.extra() to add IS NULL fields instead of the three len() calls in the loop.\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0002641655_django_django_models_python.txt
Q: pyinstaller: 2 instances of my cherrypy app exe get executed I have a cherrypy app that I've made an exe with pyinstaller. now when I run the exe it loads itself twice into memory. Watching the taskmanager shows the first instance load into about 1k, then a second later a second instance of hte exe loads into about 3k ram. If I close the bigger one both processes die. If I close hte smaller one only that one dies. Loading the exe with subprocess, if I try to proc.kill(), it only kills the small one leaving the other running in memory. Is this a sideeffect of using cherrypy and pyinstaller together? A: PyInstaller spawns a subprocess during its boot process. This is explained in a section of its manual. A: It would be important to know what version of CherryPy you are using. The 2.x line had an unfortunate design: the autoreloader feature always started a second instance of CherryPy, so the first could respawn the child when it was killed off. That was fixed in version 3 to only use one process. If you are using version 2, turn off the autoreload feature via the config entry: [global] autoreload.on = False
pyinstaller: 2 instances of my cherrypy app exe get executed
I have a cherrypy app that I've made an exe with pyinstaller. Now when I run the exe it loads itself twice into memory. Watching the task manager shows the first instance load into about 1k, then a second later a second instance of the exe loads into about 3k ram. If I close the bigger one both processes die. If I close the smaller one only that one dies. Loading the exe with subprocess, if I try to proc.kill(), it only kills the small one, leaving the other running in memory. Is this a side effect of using cherrypy and pyinstaller together?
[ "PyInstaller spawns a subprocess during its boot process. This is explained in a section of its manual.\n", "It would be important to know what version of CherryPy you are using. The 2.x line had an unfortunate design: the autoreloader feature always started a second instance of CherryPy, so the first could respawn the child when it was killed off. That was fixed in version 3 to only use one process. If you are using version 2, turn off the autoreload feature via the config entry:\n[global]\nautoreload.on = False\n\n" ]
[ 6, 1 ]
[]
[]
[ "cherrypy", "pyinstaller", "python" ]
stackoverflow_0002124603_cherrypy_pyinstaller_python.txt
Q: get_or_create generic relations in Django & python debugging in general I ran the code to create the generically related objects from this demo: http://www.djangoproject.com/documentation/models/generic_relations/ Everything is good intially: >>> bacon.tags.create(tag="fatty") <TaggedItem: fatty> >>> tag, newtag = bacon.tags.get_or_create(tag="fatty") >>> tag <TaggedItem: fatty> >>> newtag False But then the use case that I'm interested in for my app: >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome") Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 123, in get_or_create return self.get_query_set().get_or_create(**kwargs) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 343, in get_or_create raise e IntegrityError: app_taggeditem.content_type_id may not be NULL I tried a bunch of random things after looking at other code: >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem) ValueError: Cannot assign "<class 'generics.app.models.TaggedItem'>": "TaggedItem.content_type" must be a "ContentType" instance. or: >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem.content_type) InterfaceError: Error binding parameter 3 - probably unsupported type. etc. I'm sure somebody can give me the correct syntax, but the real problem here is that I have no idea what is going on. I have developed in strongly typed languages for over ten years (x86 assembly, C++ and C#) but am new to Python. I find it really difficult to follow what is going on in Python when things like this break. In the languages I mentioned previously it's fairly straightforward to figure things like this out -- check the method signature and check your parameters. Looking at the Django documentation for half an hour left me just as lost. Looking at the source for get_or_create(self, **kwargs) didn't help either since there is no method signature and the code appears very generic. A next step would be to debug the method and try to figure out what is happening, but this seems a bit extreme... I seem to be missing some fundamental operating principle here... what is it? How do I resolve issues like this on my own in the future? A: ContentType.objects.get_for_model() will give you the appropriate ContentType for a model. Pass the returned object as content_type. And don't worry too much about "getting it" when it comes to Django. Django is mostly insane to begin with, and experimentation and heavy reading of both documentation and source is encouraged. A: I've collected some Django debugging links here. The two best out of the group are Simon Willison's post (specifically, pdb might make you feel more at home in Python, coming from a C#/ VisualStudio background) and the Django debug toolbar.
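Putting the first answer into the asker's own terms, a sketch using the TaggedItem and bacon names from the linked demo; going through TaggedItem.objects directly lets you supply the two generic fields that the reverse manager's get_or_create failed to fill in:

    from django.contrib.contenttypes.models import ContentType

    ct = ContentType.objects.get_for_model(bacon)  # accepts an instance or a class
    tag, created = TaggedItem.objects.get_or_create(
        tag="wholesome",
        content_type=ct,
        object_id=bacon.pk,
    )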
get_or_create generic relations in Django & python debugging in general
I ran the code to create the generically related objects from this demo: http://www.djangoproject.com/documentation/models/generic_relations/ Everything is good intially: >>> bacon.tags.create(tag="fatty") <TaggedItem: fatty> >>> tag, newtag = bacon.tags.get_or_create(tag="fatty") >>> tag <TaggedItem: fatty> >>> newtag False But then the use case that I'm interested in for my app: >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome") Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 123, in get_or_create return self.get_query_set().get_or_create(**kwargs) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 343, in get_or_create raise e IntegrityError: app_taggeditem.content_type_id may not be NULL I tried a bunch of random things after looking at other code: >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem) ValueError: Cannot assign "<class 'generics.app.models.TaggedItem'>": "TaggedItem.content_type" must be a "ContentType" instance. or: >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem.content_type) InterfaceError: Error binding parameter 3 - probably unsupported type. etc. I'm sure somebody can give me the correct syntax, but the real problem here is that I have no idea what is going on. I have developed in strongly typed languages for over ten years (x86 assembly, C++ and C#) but am new to Python. I find it really difficult to follow what is going on in Python when things like this break. In the languages I mentioned previously it's fairly straightforward to figure things like this out -- check the method signature and check your parameters. Looking at the Django documentation for half an hour left me just as lost. Looking at the source for get_or_create(self, **kwargs) didn't help either since there is no method signature and the code appears very generic. A next step would be to debug the method and try to figure out what is happening, but this seems a bit extreme... I seem to be missing some fundamental operating principle here... what is it? How do I resolve issues like this on my own in the future?
[ "ContentType.objects.get_for_model() will give you the appropriate ContentType for a model. Pass the returned object as content_type.\nAnd don't worry too much about \"getting it\" when it comes to Django. Django is mostly insane to begin with, and experimentation and heavy reading of both documentation and source is encouraged.\n", "I've collected some Django debugging links here. The two best out of the group are Simon Willison's post (specifically, pdb might make you feel more at home in Python, coming from a C#/ VisualStudio background) and the Django debug toolbar.\n" ]
[ 10, 2 ]
[]
[]
[ "debugging", "django", "generic_relationship", "python" ]
stackoverflow_0002641780_debugging_django_generic_relationship_python.txt
Q: Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs I need to migrate data from a Plone-based portal to Liferay. Does anyone have an idea how to do it? Anyway, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do it, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything from it. I would like to get the PloneSite instance and do something like this: >>> import ZODB >>> from ZODB import FileStorage, DB >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs" >>> storage = FileStorage.FileStorage(path) >>> db = DB(storage) >>> conn = db.open() >>> root = conn.root() >>> app = root['Application'] >>> plone_site = app.getChildNodes()[13] # 13 would be index of PloneSite object >>> a = plone_site.get_articles() >>> for article in a: ... print "Title:", article.title ... print "Content:", article.content Title: <some title> Content: <some content> Title: <some title> Content: <some content> Of course, it does not need to be exactly this straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have any ideas? Thank you in advance! A: Once you've got ahold of the Plone site object, you can do a catalog query to find all content items in the site: >>> brains = site.portal_catalog.unrestrictedSearchResults() This returns a list of "catalog brains", each of which contains some metadata about the item. You can get the full item from the brain: >>> for b in brains: ... obj = b.getObject() Assuming your Plone site is using Archetypes-based content, you can then iterate through the fields of the item's schema and use each field's accessor to retrieve its value: >>> for field in obj.Schema().fields(): ... field_id = field.__name__ ... field_value = field.getAccessor(obj)() Since the ZODB is an object database that stores pickled Python objects, you will need to have the correct version of Archetypes present in your Python environment, as well as the package that defines the class of the objects you're trying to retrieve.
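Tying the answer's pieces together into the JSON representation the asker wants — a rough sketch, not tested against a real site: it assumes an Archetypes-based site object reachable as site, and coerces every field value to a string, since Archetypes values are not JSON-serializable in general:

    import json  # use the simplejson package on Python 2.5

    def dump_site(site, path):
        records = []
        for brain in site.portal_catalog.unrestrictedSearchResults():
            obj = brain.getObject()
            fields = {}
            for field in obj.Schema().fields():
                # accessor() returns the field's value; str() is a blunt
                # coercion that a real migration would refine per field type
                fields[field.__name__] = str(field.getAccessor(obj)())
            records.append(fields)
        with open(path, 'w') as f:
            json.dump(records, f, indent=2)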
Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs
I need to migrate data from a Plone-based portal to Liferay. Does anyone have an idea how to do it? Anyway, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do it, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything from it. I would like to get the PloneSite instance and do something like this: >>> import ZODB >>> from ZODB import FileStorage, DB >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs" >>> storage = FileStorage.FileStorage(path) >>> db = DB(storage) >>> conn = db.open() >>> root = conn.root() >>> app = root['Application'] >>> plone_site = app.getChildNodes()[13] # 13 would be index of PloneSite object >>> a = plone_site.get_articles() >>> for article in a: ... print "Title:", article.title ... print "Content:", article.content Title: <some title> Content: <some content> Title: <some title> Content: <some content> Of course, it does not need to be exactly this straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have any ideas? Thank you in advance!
[ "Once you've got ahold of the Plone site object, you can do a catalog query to find all content items in the site:\n >>> brains = site.portal_catalog.unrestrictedSearchResults()\n\nThis returns a list of \"catalog brains\", each of which contains some metadata about the item. You can get the full item from the brain:\n >>> for b in brains:\n ... obj = b.getObject()\n\nAssuming your Plone site is using Archetypes-based content, you can then iterate through the fields of the item's schema and use each field's accessor to retrieve its value:\n >>> for field in obj.Schema().fields():\n ... field_id = field.__name__\n ... field_value = field.getAccessor(obj)()\n\nSince the ZODB is an object database that stores pickled Python objects, you will need to have the correct version of Archetypes present in your Python environment, as well as the package that defines the class of the objects you're trying to retrieve.\n" ]
[ 3 ]
[]
[]
[ "liferay", "plone", "python", "zodb", "zope" ]
stackoverflow_0002394493_liferay_plone_python_zodb_zope.txt
Q: storing record arrays in object arrays I'd like to convert a list of record arrays -- dtype is (uint32, float32) -- into a numpy array of dtype np.object: X = np.array(instances, dtype = np.object) where instances is a list of arrays with data type np.dtype([('f0', '<u4'), ('f1', '<f4')]). However, the above statement results in an array whose elements are also of type np.object: X[0] array([(67111L, 1.0), (104242L, 1.0)], dtype=object) Does anybody know why? The following statement should be equivalent to the above but gives the desired result: X = np.empty((len(instances),), dtype = np.object) X[:] = instances X[0] array([(67111L, 1.0), (104242L, 1.0), dtype=[('f0', '<u4'), ('f1', '<f4')]) thanks & best regards, peter A: Stéfan van der Walt (a numpy developer) explains: The ndarray constructor does its best to guess what kind of data you are feeding it, but sometimes it needs a bit of help.... I prefer to construct arrays explicitly, so there is no doubt what is happening under the hood: When you say something like instance1=np.array([(67111L,1.0),(104242L,1.0)],dtype=np.dtype([('f0', '<u4'), ('f1', '<f4')])) instance2=np.array([(67112L,2.0),(104243L,2.0)],dtype=np.dtype([('f0', '<u4'), ('f1', '<f4')])) instances=[instance1,instance2] Y=np.array(instances, dtype = np.object) np.array is forced to guess what is the dimension of the array you desire. instances is a list of two objects, each of length 2. So, quite reasonably, np.array guesses that Y should have shape (2,2): print(Y.shape) # (2, 2) In most cases, I think that is what would be desired. However, in your case, since this is not what you desire, you must construct the array explicitly: X=np.empty((len(instances),), dtype = np.object) print(X.shape) # (2,) Now there is no question about X's shape: (2, ) and so when you feed in the data X[:] = instances numpy is smart enough to regard instances as a sequence of two objects.
storing record arrays in object arrays
I'd like to convert a list of record arrays -- dtype is (uint32, float32) -- into a numpy array of dtype np.object: X = np.array(instances, dtype = np.object) where instances is a list of arrays with data type np.dtype([('f0', '<u4'), ('f1', '<f4')]). However, the above statement results in an array whose elements are also of type np.object: X[0] array([(67111L, 1.0), (104242L, 1.0)], dtype=object) Does anybody know why? The following statement should be equivalent to the above but gives the desired result: X = np.empty((len(instances),), dtype = np.object) X[:] = instances X[0] array([(67111L, 1.0), (104242L, 1.0), dtype=[('f0', '<u4'), ('f1', '<f4')]) thanks & best regards, peter
[ "Stéfan van der Walt (a numpy developer) explains:\n\nThe ndarray constructor does its best\n to guess what kind of data you are\n feeding it, but sometimes it needs a\n bit of help....\nI prefer to construct arrays\n explicitly, so there is no doubt what\n is happening under the hood:\n\nWhen you say something like\ninstance1=np.array([(67111L,1.0),(104242L,1.0)],dtype=np.dtype([('f0', '<u4'), ('f1', '<f4')]))\ninstance2=np.array([(67112L,2.0),(104243L,2.0)],dtype=np.dtype([('f0', '<u4'), ('f1', '<f4')]))\ninstances=[instance1,instance2]\nY=np.array(instances, dtype = np.object)\n\nnp.array is forced to guess what is the dimension of the array you desire. \ninstances is a list of two objects, each of length 2. So, quite reasonably, np.array guesses that Y should have shape (2,2):\nprint(Y.shape)\n# (2, 2)\n\nIn most cases, I think that is what would be desired. However,\nin your case, since this is not what you desire, you must construct the array explicitly:\nX=np.empty((len(instances),), dtype = np.object)\nprint(X.shape)\n# (2,)\n\nNow there is no question about X's shape: (2, ) and so when you feed in the data\nX[:] = instances\n\nnumpy is smart enough to regard instances as a sequence of two objects.\n" ]
[ 2 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0002641701_numpy_python.txt
Q: Python stream http client with keep-alive I need a python http client that can reuse connections and that supports consuming the stream as it comes in. It will be used to parse xml streams, sax style. I came up with a solution, but I'm not sure it is the best one (there are quite a few ways of writing an http client in python) class Downloader(): def __init__(self, host): self.conn = httplib.HTTPConnection(host) def get(self, url): self.conn.request("GET", url) resp = self.conn.getresponse() while True: data = resp.read(10) if not data: break yield data Thanks folks! A: urlgrabber supports keepalive and can return a file-like object. A: There is also pycurl. By default keepalive is turned on and you can write to a file for output. Follow the examples, they are quite helpful
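A lightly hardened variant of the poster's class — still a sketch, not a drop-in: the larger chunk size cuts per-read overhead, and the comments flag the keep-alive caveats that httplib leaves to the caller:

    import httplib

    class Downloader(object):
        def __init__(self, host):
            # the socket stays open across request() calls as long as
            # the server honours HTTP/1.1 keep-alive
            self.conn = httplib.HTTPConnection(host)

        def get(self, url, chunk_size=8192):
            self.conn.request("GET", url)
            resp = self.conn.getresponse()
            # note: the response must be read to the end before another
            # request may be issued on this connection
            while True:
                data = resp.read(chunk_size)
                if not data:
                    break
                yield data

        def close(self):
            self.conn.close()

    # feeding a SAX parser incrementally (handler setup omitted):
    # for chunk in Downloader("example.com").get("/feed.xml"):
    #     parser.feed(chunk)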
Python stream http client with keep-alive
I need a python http client that can reuse connections and that supports consuming the stream as it comes in. It will be used to parse xml streams, sax style. I came up with a solution, but I'm not sure it is the best one (there are quite a few ways of writing an http client in python) class Downloader(): def __init__(self, host): self.conn = httplib.HTTPConnection(host) def get(self, url): self.conn.request("GET", url) resp = self.conn.getresponse() while True: data = resp.read(10) if not data: break yield data Thanks folks!
[ "urlgrabber supports keepalive and can return a file-like object.\n", "There is also pycurl. By default keepalive is turned on and you can write to a file for output.\nFollow the examples, they are quite helpful\n" ]
[ 1, 1 ]
[]
[]
[ "client", "http", "keep_alive", "python", "streaming" ]
stackoverflow_0002370692_client_http_keep_alive_python_streaming.txt
Q: Google App Engine: How to disable cache on 'static' files, or make cache smart I'm using the app engine locally, and sometimes the JS files are being cached between page refreshes, and it drives me crazy because I don't know if there's a bug in the javascript code I'm trying to write, or if the cache is acting up. How do I completely disable cache for *.js files? Or maybe the question is, how to have it be smart, like based on last-modified date. Thanks! UPDATE- So it turns out Chrome Dev (for mac at least) has caching issues, going back to Chrome Beta fixes all this. The answers have still been helpful though, thanks A: A common practice used by the major sites is to cache documents forever but include a unique identifier based on the release version or date into the url for the .js or .css call. For example: <script type="text/javascript" src="static/util.js?version=20100310"></script> This way you get optimum caching as well as always up to date files. The only trick is to figure out how to include an up to date version number in your url, which you can automate based on your deployment environment. A: Based on the docs, you can specify an app-wide cache expiration duration: Unless told otherwise, web browsers retain files they load from a website for a limited period of time. You can define a global default cache period for all static file handlers for an application by including the default_expiration element, a top-level element. You can also configure a cache duration for specific static file handler. (Script handlers can set cache durations by returning the appropriate HTTP headers to the browser.) default_expiration The length of time a static file served by a static file handler ought to be cached in the user's browser, if the handler does not specify its own expiration. The value is a string of numbers and units, separated by spaces, where units can be d for days, h for hours, m for minutes, and s for seconds. For example, "4d 5h" sets cache expiration to 4 days and 5 hours after the file is first loaded by the browser. default_expiration is optional. If omitted, the default behavior is to allow the browser to determine its own cache duration. ...and if you want to specify the expiration on a directory-by-directory basis: expiration The length of time a static file served by this handler ought to be cached in the user's browser. The value is a string of numbers and units, separated by spaces, where units can be d for days, h for hours, m for minutes, and s for seconds. For example, "4d 5h" sets cache expiration to 4 days and 5 hours after the file is first loaded by the browser. Try setting them to 0d0h or 1s and see if it disables caching entirely.
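A minimal app.yaml sketch of the two settings quoted in the answer; the /js path and directory name are invented for illustration:

    # app.yaml: modest default, effectively uncached JS while developing
    default_expiration: "10m"

    handlers:
    - url: /js
      static_dir: static/js
      expiration: "1s"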
Google App Engine: How to disable cache on 'static' files, or make cache smart
I'm using the app engine locally, and sometimes the JS files are being cached between page refreshes, and it drives me crazy because I don't know if there's a bug in the javascript code I'm trying to write, or if the cache is acting up. How do I completely disable cache for *.js files? Or maybe the question is, how to have it be smart, like based on last-modified date. Thanks! UPDATE- So it turns out Chrome Dev (for mac at least) has caching issues, going back to Chrome Beta fixes all this. The answers have still been helpful though, thanks
[ "A common practice used by the major sites is to cache documents forever but include a unique identifier based on the release version or date into the url for the .js or .css call. For example:\n<script type=\"text/javascript\" src=\"static/util.js?version=20100310\"></script>\n\nThis way you get optimum caching as well as always up to date files. The only trick is to figure out how to include an up to date version number in your url, which you can automate based on your deployment environment.\n", "Based on the docs, you can specify an app-wide cache expiration duration:\n\nUnless told otherwise, web browsers retain files they load from a website for a limited period of time. You can define a global default cache period for all static file handlers for an application by including the default_expiration element, a top-level element. You can also configure a cache duration for specific static file handler. (Script handlers can set cache durations by returning the appropriate HTTP headers to the browser.)\ndefault_expiration\nThe length of time a static file served by a static file handler ought to be cached in the user's browser, if the handler does not specify its own expiration. The value is a string of numbers and units, separated by spaces, where units can be d for days, h for hours, m for minutes, and s for seconds. For example, \"4d 5h\" sets cache expiration to 4 days and 5 hours after the file is first loaded by the browser.\ndefault_expiration is optional. If omitted, the default behavior is to allow the browser to determine its own cache duration.\n\n...and if you want to specify the expiration on a directory-by-directory basis:\n\nexpiration\nThe length of time a static file served by this handler ought to be cached in the user's browser. The value is a string of numbers and units, separated by spaces, where units can be d for days, h for hours, m for minutes, and s for seconds. For example, \"4d 5h\" sets cache expiration to 4 days and 5 hours after the file is first loaded by the browser.\n\nTry setting them to 0d0h or 1s and see if it disables caching entirely.\n" ]
[ 15, 13 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0002642432_google_app_engine_python.txt
Q: Python - multiple copies of output when using multiprocessing Possible Duplicate: Multiprocessing launching too many instances of Python VM Module run via python myscript.py (not shell input) import uuid import time import multiprocessing def sleep_then_write(content): time.sleep(5) print(content) if __name__ == '__main__': for i in range(15): p = multiprocessing.Process(target=sleep_then_write, args=('Hello World',)) p.start() print('Ah, what a hard day of threading...') This script output the following: Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... AAh, what a hard day of threading.. h, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Ah, what a hard day of threading... Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Firstly, why the heck did it print the bottom statement sixteen times (one for each process) instead of just the one time? Second, notice the AAh, and h, about half way down; that was the real output. This makes me wary of using threads ever, now. (Windows XP, Python 2.6.4, Core 2 Duo) A: multiprocessing works by starting several processes. Each process loads a copy of your script (that way it has access to the "target" function), and then runs the target function. You get the bottom print statement 16 times because the statement is sitting out there by itself and gets printed when you load the module. Put it inside a main block and it wont: if __name__ == "__main__": print('Ah, what a hard day of threading...') Regarding the "AAh" - you have multiple processes going and they'll produce output as they run, so you simply have the "A" from one process next to the "Ah" from another. When dealing with multi process or multi threaded environments you have to think through locking and communication. This is not unique to multiprocessing; any concurrent library will have the same issues. A: Due to Windows's lack of the standard fork system call, the multiprocessing module works somewhat funny on Windows. For one, it imports the main module (your script) once for each process. For a detailed explanation, see the "Windows" subtitle at http://docs.python.org/library/multiprocessing.html#multiprocessing-programming
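Combining both answers into a corrected version of the script — a sketch: the __main__ guard stops the module-level print from re-running in each child on Windows, and a shared Lock keeps the writes from interleaving:

    import time
    import multiprocessing

    def sleep_then_write(lock, content):
        time.sleep(5)
        with lock:              # serialize writes so lines never interleave
            print(content)

    if __name__ == '__main__':  # children re-import this module on Windows,
        lock = multiprocessing.Lock()       # but skip this guarded block
        for i in range(15):
            multiprocessing.Process(target=sleep_then_write,
                                    args=(lock, 'Hello World')).start()
        print('Ah, what a hard day of threading...')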
Python - multiple copies of output when using multiprocessing
Possible Duplicate: Multiprocessing launching too many instances of Python VM

Module run via python myscript.py (not shell input)

import uuid
import time
import multiprocessing

def sleep_then_write(content):
    time.sleep(5)
    print(content)

if __name__ == '__main__':
    for i in range(15):
        p = multiprocessing.Process(target=sleep_then_write,
                                    args=('Hello World',))
        p.start()

print('Ah, what a hard day of threading...')

This script outputs the following:

Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
AAh, what a hard day of threading..
h, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Ah, what a hard day of threading...
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World

Firstly, why the heck did it print the bottom statement sixteen times (one for each process) instead of just the one time? Second, notice the AAh, and h, about half way down; that was the real output. This makes me wary of using threads ever, now.
(Windows XP, Python 2.6.4, Core 2 Duo)
[ "multiprocessing works by starting several processes. Each process loads a copy of your script (that way it has access to the \"target\" function), and then runs the target function.\nYou get the bottom print statement 16 times because the statement is sitting out there by itself and gets printed when you load the module. Put it inside a main block and it wont:\nif __name__ == \"__main__\":\n print('Ah, what a hard day of threading...')\n\nRegarding the \"AAh\" - you have multiple processes going and they'll produce output as they run, so you simply have the \"A\" from one process next to the \"Ah\" from another.\nWhen dealing with multi process or multi threaded environments you have to think through locking and communication. This is not unique to multiprocessing; any concurrent library will have the same issues.\n", "Due to Windows's lack of the standard fork system call, the multiprocessing module works somewhat funny on Windows. For one, it imports the main module (your script) once for each process. For a detailed explanation, see the \"Windows\" subtitle at http://docs.python.org/library/multiprocessing.html#multiprocessing-programming\n" ]
[ 4, 2 ]
[]
[]
[ "multiprocessing", "python", "windows" ]
stackoverflow_0002641080_multiprocessing_python_windows.txt
Q: Queuing methods to be run on an object by different threads in Python
Let's say I have an object whose class definition looks like:

class Command:
    foo = 5
    def run(self, bar):
        time.sleep(1)
        self.foo = bar
        return self.foo

If this class is instantiated once, but different threads are hitting its run method (via an HTTP request, handled separately) passing in different args, what is the best method to queue them? Can this be done in the class definition itself?
I am using an XML RPC server (separate class). For simplicity's sake, we can say it has one instance of the Command class instantiated as a class variable. When Command.run() is being hit by two separate threads, how can I make sure that one run() method is complete before the next one is started? I could do something like:

while self.busy:
    time.sleep(1)
self.busy = True
...
self.busy = False
return self.foo

but that, as far as I know, would not give priority to the oldest request.
I realize how redundant this whole exercise sounds, since I could just run the XML-RPC server synchronously. But, to make a long story short, there are multiple Command instantiations and I do not want to block requests for one because another is busy. I hope this makes more sense. Thanks.

A: Here's a relatively simple approach (ignores exceptions, attribute-access, special methods, etc):

import Queue
import threading

def serialize(q):
    """runs a serializer on queue q: put [-1]*4 on q to terminate."""
    while True:
        # get output-queue for result, a callable, its args and kwds
        out_q, tocall, args, kwds = q.get()
        if out_q == -1:
            return
        result = tocall(*args, **kwds)
        out_q.put(result)

class WrapCall(object):
    """Wraps a callable to serialize calls to it."""

    def __init__(self, inq, ouq, tocall):
        self.inq = inq
        self.ouq = ouq
        self.tocall = tocall

    def __call__(self, *a, **k):
        self.inq.put((self.ouq, self.tocall, a, k))
        if self.ouq is None:
            return None
        return self.ouq.get()

class WrapObj(object):
    """Wraps any object to serialize all calls to its methods."""

    def __init__(self, obj):
        self._o = obj
        self._q = Queue.Queue()
        t = threading.Thread(target=serialize, args=(self._q,))
        t.setDaemon(True)
        t.start()
        self.t = t

    def __getattr__(self, n):
        """Wraps methods of self._o into an appropriate WrapCall instance."""
        towrap = getattr(self._o, n)
        if not callable(towrap):
            raise TypeError('Cannot wrap noncallable attribute %r (type: %s)'
                            % (n, type(towrap)))
        q = Queue.Queue()
        return WrapCall(self._q, q, towrap)

    def WrapperWait(self):
        """Return only when self.t has served all pending requests."""
        q = Queue.Queue()
        w = WrapCall(self._q, q, lambda: None)
        return w()

With this "serializer", you can do

myobj = WrapObj(Command())

and now all calls to myobj's (non-special) methods are serialized in a thread-safe way.
For your specific case, where there's only one method on the object, you could simplify this a bit further, but this is an already-simplified version of something I wrote and use often (supporting attribute getting and setting, special methods, etc, is a tad too complex to be worth it; the complete version does support catching and reraising exceptions raised in the wrapper object's methods, an optimization for calls whose results or exceptions you don't care about, and a few more tweaks, but not serialization of attributes and special methods).

A: That depends on how you plan to consume foo. The simplest is to use Python's queue module to synchronize the delivery of values to a consumer, but that assumes that the consumer wants to receive every value. You might have to be more specific to get a better answer.
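If full generality isn't needed, a simpler sketch (my own, not from the answers above) is to serialize run() with a threading.Lock. One caveat: the order in which blocked callers acquire the lock is not guaranteed to be oldest-first; the queue-based approach above does give you that guarantee:

import time
import threading

class Command(object):
    def __init__(self):
        self.foo = 5
        self._lock = threading.Lock()

    def run(self, bar):
        with self._lock:            # only one caller inside at a time
            time.sleep(1)
            self.foo = bar
            return self.foo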
Queuing methods to be run on an object by different threads in Python
Let's say I have an object whose class definition looks like:

class Command:
    foo = 5
    def run(self, bar):
        time.sleep(1)
        self.foo = bar
        return self.foo

If this class is instantiated once, but different threads are hitting its run method (via an HTTP request, handled separately) passing in different args, what is the best method to queue them? Can this be done in the class definition itself?
I am using an XML RPC server (separate class). For simplicity's sake, we can say it has one instance of the Command class instantiated as a class variable. When Command.run() is being hit by two separate threads, how can I make sure that one run() method is complete before the next one is started? I could do something like:

while self.busy:
    time.sleep(1)
self.busy = True
...
self.busy = False
return self.foo

but that, as far as I know, would not give priority to the oldest request.
I realize how redundant this whole exercise sounds, since I could just run the XML-RPC server synchronously. But, to make a long story short, there are multiple Command instantiations and I do not want to block requests for one because another is busy. I hope this makes more sense. Thanks.
[ "Here's a relatively simple approach (ignores exceptions, attribute-access, special methods, etc):\nimport Queue\nimport threading\n\ndef serialize(q):\n \"\"\"runs a serializer on queue q: put [-1]*4 on q to terminate.\"\"\"\n while True:\n # get output-queue for result, a callable, its args and kwds\n out_q, tocall, args, kwds = q.get()\n if out_q == -1:\n return\n result = tocall(*args, **kwds)\n out_q.put(result)\n\nclass WrapCall(object):\n \"\"\"Wraps a callable to serialize calls to it.\"\"\"\n\n def __init__(self, inq, ouq, tocall):\n self.inq = inq\n self.ouq = ouq\n self.tocall = tocall\n\n def __call__(self, *a, **k):\n self.inq.put((self.ouq, self.tocall, a, k))\n if self.ouq is None:\n return None\n return self.ouq.get()\n\nclass WrapObj(object):\n \"\"\"Wraps any object to serialize all calls to its methods.\"\"\"\n\n def __init__(self, obj):\n self._o = obj\n self._q = Queue.Queue()\n t = threading.Thread(target=serialize, args=(self._q,))\n t.setDaemon(True)\n t.start()\n self.t = t\n\n def __getattr__(self, n):\n \"\"\"Wraps methods of self.w into an appropriate WrapCall instance.\"\"\"\n towrap = getattr(self._o, n)\n if not callable(towrap):\n raise TypeError('Cannot wrap noncallable attribute %r (type: %s)'\n % (n, type(towrap)))\n q = Queue.Queue()\n return WrapCall(self._q, q, towrap)\n\n def WrapperWait(self):\n \"\"\"Return only when self.t has served all pending requests.\"\"\"\n q = Queue.Queue()\n w = WrapCall(self.__q, q, lambda: None)\n return w()\n\nWith this \"serializer\", you can do\nmyobj = WrapObj(Command())\n\nand now all calls to myobj's (non-special) methods are serialized in a thread-safe way.\nFor your specific case, where there's only one method on the object, you could simplify this a bit further, but this is an already-simplified version of something I wrote and use often (supporting attribute getting and setting, special methods, etc, is a tad too complex to be worth it; the complete version does support catching and reraising exceptions raised in the wrapper object's methods, an optimization for calls whose results or exceptions you don't care about, and a few more tweaks, but not serialization of attributes and special methods).\n", "That depends on how you plan to consume foo. The simplest is to use Python's queue module to synchronize the delivery of values to a consumer, but that assumes that the consumer wants to receive every value. You might have to be more specific to get a better answer.\n" ]
[ 2, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0002642515_multithreading_python.txt
Q: A web framework where AJAX was not an afterthought
AJAX is a pain in the ass because it essentially means you'll have to write two sets of similarish code: one for browsers with JavaScript enabled and those without. Not only this, but you have to connect JavaScript events to hook into your models and display the results. And if all that weren't bad enough, you need to send an address change with the request, otherwise the user won't be able to "click back" correctly (if confused look at what happens to the address bar when you click links in GMail).
We're searching for something that had the foresight and design goals with all these concerns in mind. Performance and security are also obvious major concerns. We love config-based systems as well, where you don't have to write a lot of code you just drop it into an easily read config format.
It's like asking for the holy grail right?

A: Have you given a look to Pyjamas
Quoted from the site
Why should I use it? You can write web applications in python - a readable programming language - instead of in HTML and Javascript, both of which become quickly unreadable for even medium-sized applications. Your application's design can benefit from encapsulating high level concepts into classes and modules (instead of trying to fit as much HTML as you can stand into one page); you can reuse - and import - classes and modules.
Also, the AJAX library takes care of all the browser interoperability issues on your behalf, leaving you free to focus on application development instead of learning all the "usual" browser incompatibilities.

A: Two approaches to this problem generally. One is for the framework to try and do it all, like Microsoft's ASP.NET with its Ajax toolkit. This includes server side controls that produce Ajax functionality with all client- and server-side code generated for you. For example, their UpdatePanel control allows for partial page updates via an Ajax call. However, it is not universally popular as a framework in general because their Page and Control models are sometimes seen as too heavyweight and overbloated.
A second, "slimmer" approach would be to separate the concerns. Let jQuery or a similar library deal with cross-browser inconsistencies and the client side of the Ajax call, and use a simple lightweight server-side web framework, such as Groovy on Grails or Microsoft ASP.NET MVC (there are others as I'm sure people will point out). Any decent framework should be capable of easily producing either JSON or XML data in response to an Ajax call.
As for browsers with Javascript disabled - this is the twenty first century. Do you really have to cater for them any more?

A: Yes, the NOLOH PHP Framework (the site itself was written in NOLOH) is that holy grail. NOLOH was developed from the ground up to address these issues. You develop in a single language on the server-side and it takes care of the rest. No need to worry about AJAX, or cross browser issues. NOLOH's been around since 2005 and is being used in various companies large and small. It significantly outperforms the competition in performance due to it's lightweight and on-demand nature.
NOLOH recently gave a talk at Confoo, the most applicable parts of that presentation to your question are the live examples, and the basic coding.
If you're curious about the power of NOLOH you can also check out this Steve Jobs like one more thing demonstrating the upcoming automatic Comet.
Disclaimer: I'm a co-founder of NOLOH.
Enjoy.

A: The jQuery BBQ: Back Button & Query Library aims to help with ajax "back button" issue. You might check it out if you're considering jQuery for your ajax functionality.
A web framework where AJAX was not an afterthought
AJAX is a pain in the ass because it essentially means you'll have to write two sets of similarish code: one for browsers with JavaScript enabled and those without. Not only this, but you have to connect JavaScript events to hook into your models and display the results. And if all that weren't bad enough, you need to send an address change with the request, otherwise the user won't be able to "click back" correctly (if confused look at what happens to the address bar when you click links in GMail). We're searching for something that had the foresight and design goals with all these concerns in mind. Performance and security are also obvious major concerns. We love config-based systems as well, where you don't have to write a lot of code you just drop it into an easily read config format. It's like asking for the holy grail right?
[ "Have you given a look to Pyjamas\nQuoted from the site\n\nWhy should I use it?\nYou can write web applications in\n python - a readable programming\n language - instead of in HTML and\n Javascript, both of which become\n quickly unreadable for even\n medium-sized applications. Your\n application's design can benefit from\n encapsulating high level concepts into\n classes and modules (instead of trying\n to fit as much HTML as you can stand\n into one page); you can reuse - and\n import - classes and modules.\nAlso, the AJAX library takes care of\n all the browser interoperability\n issues on your behalf, leaving you\n free to focus on application\n development instead of learning all\n the \"usual\" browser incompatibilities.\n\n", "Two approaches to this problem generally. One is for the framework to try and do it all, like Microsoft's ASP.NET with its Ajax toolkit. This includes server side controls that produce Ajax functionality with all client- and server-side code generated for you. For example, their UpdatePanel control allows for partial page updates via an Ajax call. However, it is not universally popular as a framework in general because their Page and Control models are sometimes seen as too heavyweight and overbloated.\nA second, \"slimmer\" approach would be to separate the concerns. Let jQuery or a similar library deal with cross-browser inconsistencies and the client side of the Ajax call, and use a simple lightweight server-side web framework, such as Groovy on Grails or Microsoft ASP.NET MVC (there are others as I'm sure people will point out). Any decent framework should be capable of easily producing either JSON or XML data in response to an Ajax call.\nAs for browsers with Javascript disabled - this is the twenty first century. Do you really have to cater for them any more?\n", "Yes, the NOLOH PHP Framework (the site itself was written in NOLOH) is that holy grail. NOLOH was developed from the ground up to address these issues. You develop in a single language on the server-side and it takes care of the rest. No need to worry about AJAX, or cross browser issues. NOLOH's been around since 2005 and is being used in various companies large and small. It significantly outperforms the competition in performance due to it's lightweight and on-demand nature.\nNOLOH recently gave a talk at Confoo, the most applicable parts of that presentation to your question are the live examples, and the basic coding.\nIf you're curious about the power of NOLOH you can also check out this Steve Jobs like one more thing demonstrating the upcoming automatic Comet.\nDisclaimer: I'm a co-founder of NOLOH.\nEnjoy. \n", "The jQuery BBQ: Back Button & Query Library aims to help with ajax \"back button\" issue. You might check it out if you're considering jQuery for your ajax functionality.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ ".net", "ajax", "frameworks", "php", "python" ]
stackoverflow_0002642364_.net_ajax_frameworks_php_python.txt
Q: How to evaluate a custom math expression in Python
I'm writing a custom dice rolling parser (snicker if you must) in python. Basically, I want to use standard math evaluation but add the 'd' operator:

#xdy
sum = 0
for each in range(x):
    sum += randInt(1, y)
return sum

So that, for example, 1d6+2d6+2d6-72+4d100 = (5)+(1+1)+(6+2)-72+(5+39+38+59) = 84
I was using regex to replace all 'd's with the sum and then using eval, but my regex fell apart when dealing with parentheses on either side. Is there a faster way to go about this than implementing my own recursive parsing? Perhaps adding an operator to eval?
Edit: I seem to have given a bad example, as the above example works with my current version. What I'm looking for is some way to evaluate, say, (5+(6d6))d(7-2*(1d4)).
By "fell apart", I just meant that my current regex expression failed. I have been too vague about my failure, sorry for the confusion. Here's my current code:

def evalDice(roll_matchgroup):
    roll_split = roll_matchgroup.group('roll').split('d')
    print roll_split
    roll_list = []
    for die in range(int(roll_split[0])):
        roll = random.randint(1,int(roll_split[1]))
        roll_list.append(roll)

def EvalRoll(roll):
    if not roll:
        return 0
    rollPattern = re.compile('(?P<roll>\d*d\d+)')
    roll_string = rollPattern.sub(evalDice, roll.lower())

for this, "1d6+4d100" works just fine, but "(1d6+4)d100" or even "1d6+4d(100)" fails.

A: You could use a callback function with re.sub. When you follow the link, search down to the paragraph beginning with "If repl is a function..."

import re
import random

def xdy(matchobj):
    x,y=map(int,matchobj.groups())
    s = 0
    for each in range(x):
        s += random.randint(1, y)
    return str(s)
s='1d6+2d6+2d6-72+4d100'
t=re.sub('(\d+)d(\d+)',xdy,s)
print(t)
# 5+10+8-72+197
print(eval(t))
# 148

A: Python doesn't let you write brand new operators, and you can't do parentheses with a regular language. You'll have to write a recursive descent parser. This should be pretty simple for your dice-rolling language though.
Alternatively, you could coopt an existing Python operator and use Pythons parsing tools to convert the text into an AST.

A: Take a look at the PyParsing library. In particular, the examples page has a sample fairly close to what you want:
dice2.py
A dice roll parser and evaluator for evaluating strings such as "4d20+5.5+4d6.takeHighest(3)".

A: This uses eval, which is pretty awful really, but here you go

>>> x = '1d6+2d6+2d6-72+4d100'
>>> eval(re.sub(r'(\d+)d(\d+)',r'sum((random.randint(1,x) for x in \1 * [\2]))', x))

Some quick notes:
This replaces, say, 4d6 with sum((random.randint(1,x) for x in 4 * [6])).
4 * [6] yields the list [6,6,6,6].
((random.randint(1,x) for x in [6,6,6,6])) is the generator equivalent of a list comprehension; this particular one will return four random numbers between 1 and 6.

A: In my Supybot dice plugin I parse the expression with

r'(?P<sign>[+-])((?P<dice>\d*)d(?P<sides>\d+)|(?P<mod>\d+))'

then get total numbers of each dice and a total modifier, roll them and get total result (I wanted to show total numbers of each dice).
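For the nested case in the edit, one sketch (mine, not from the answers above; the eval caveats raised there still apply) is to repeatedly resolve the innermost plain NdM terms and any fully numeric parentheses until no 'd' remains:

import re
import random

dice_pat = re.compile(r'(\d+)d(\d+)')
paren_pat = re.compile(r'\(([^()d]*)\)')    # numeric parens, no nesting or dice

def roll(match):
    n, sides = int(match.group(1)), int(match.group(2))
    return str(sum(random.randint(1, sides) for _ in range(n)))

def eval_roll(expr):
    while 'd' in expr:
        reduced = dice_pat.sub(roll, expr)                         # NdM -> total
        reduced = paren_pat.sub(lambda m: str(eval(m.group(1))), reduced)
        if reduced == expr:
            raise ValueError('cannot reduce %r' % expr)
        expr = reduced
    return eval(expr)

This reduces "(5+(6d6))d(7-2*(1d4))" from the inside out; it assumes dice counts and sides stay positive integers and, like the eval-based answers above, trusts eval with the remaining arithmetic.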
How to evaluate a custom math expression in Python
I'm writing a custom dice rolling parser (snicker if you must) in python. Basically, I want to use standard math evaluation but add the 'd' operator:

#xdy
sum = 0
for each in range(x):
    sum += randInt(1, y)
return sum

So that, for example, 1d6+2d6+2d6-72+4d100 = (5)+(1+1)+(6+2)-72+(5+39+38+59) = 84
I was using regex to replace all 'd's with the sum and then using eval, but my regex fell apart when dealing with parentheses on either side. Is there a faster way to go about this than implementing my own recursive parsing? Perhaps adding an operator to eval?
Edit: I seem to have given a bad example, as the above example works with my current version. What I'm looking for is some way to evaluate, say, (5+(6d6))d(7-2*(1d4)).
By "fell apart", I just meant that my current regex expression failed. I have been too vague about my failure, sorry for the confusion. Here's my current code:

def evalDice(roll_matchgroup):
    roll_split = roll_matchgroup.group('roll').split('d')
    print roll_split
    roll_list = []
    for die in range(int(roll_split[0])):
        roll = random.randint(1,int(roll_split[1]))
        roll_list.append(roll)

def EvalRoll(roll):
    if not roll:
        return 0
    rollPattern = re.compile('(?P<roll>\d*d\d+)')
    roll_string = rollPattern.sub(evalDice, roll.lower())

for this, "1d6+4d100" works just fine, but "(1d6+4)d100" or even "1d6+4d(100)" fails.
[ "You could use a callback function with re.sub. When you follow the link, search down to the paragraph beginning with \"If repl is a function...\"\nimport re\nimport random\n\ndef xdy(matchobj):\n x,y=map(int,matchobj.groups())\n s = 0\n for each in range(x):\n s += random.randint(1, y)\n return str(s)\ns='1d6+2d6+2d6-72+4d100'\nt=re.sub('(\\d+)d(\\d+)',xdy,s)\nprint(t)\n# 5+10+8-72+197\nprint(eval(t))\n# 148\n\n", "Python doesn't let you write brand new operators, and you can't do parentheses with a regular language. You'll have to write a recursive descent parser. This should be pretty simple for your dice-rolling language though.\nAlternatively, you could coopt an existing Python operator and use Pythons parsing tools to convert the text into an AST.\n", "Take a look at the PyParsing library. In particular, the examples page has a sample fairly close to what you want:\n\ndice2.py\nA dice roll parser and evaluator for evaluating strings such as \"4d20+5.5+4d6.takeHighest(3)\".\n\n", "This uses eval, which is pretty awful really, but here you go\n>>> x = '1d6+2d6+2d6-72+4d100'\n>>> eval(re.sub(r'(\\d+)d(\\d+)',r'sum((random.randint(1,x) for x in \\1 * [\\2]))', x))\n\n\nSome quick notes:\nThis replaces, say, 4d6 with sum((random.randint(1,x) for x in 4 * [6])).\n4 * [6] yields the list [6,6,6,6].\n((random.randint(1,x) for x in [6,6,6,6])) is the generator equivalent of a list comprehension; this particular one will return four random numbers between 1 and 6.\n", "In my Supybot dice plugin I parse the expression with\nr'(?P<sign>[+-])((?P<dice>\\d*)d(?P<sides>\\d+)|(?P<mod>\\d+))'\n\nthen get total numbers of each dice and a total modifier, roll them and get total result (I wanted to show total numbers of each dice).\n" ]
[ 6, 5, 2, 0, 0 ]
[]
[]
[ "eval", "math", "python" ]
stackoverflow_0002642650_eval_math_python.txt
Q: Java's equivalent to Python's "Got value: %s" % variable?
Java's equivalent to Python's "Got value: %s" % variable?

A: String.format("Got value: %s", variable);

A: System.out.format("%s", aString)
See Format and all its various incarnations.
Java's equivalent to Python's "Got value: %s" % variable?
Java's equivalent to Python's "Got value: %s" % variable?
[ "String.format(\"Got value: %s\", variable);\n\n", "System.out.format(\"%s\", aString)\n\nSee Format and all its various incarnations.\n" ]
[ 6, 3 ]
[]
[]
[ "formatting", "java", "python", "string" ]
stackoverflow_0002642908_formatting_java_python_string.txt
Q: What is the best way to create a running integer id on the AppEngine data storage?
For various reasons, I need a unique running integer id for my entities stored on the Google AppEngine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on AppEngine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go..?
Edit: I haven't yet implemented this, but to clarify on the memcache idea. I know memcache is unreliable, but in practice it probably won't lose data "too often" to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity and use that id. In the case of memcache not having a value for this entry, I'd get the highest id so far by doing a query on my entities sorted by the id and update memcache (unless someone else had already done so). The only problem I can see with this right now would be atomicity of the operation as a whole if the save of my new entity was also part of a transaction. Thoughts..?

A: If you could live without the integer part, try the uuid module (from the standard library): http://docs.python.org/library/uuid.html
That would typically give you a 36-character string, though, maybe that's too far from what you need. But it's unique.
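A sketch of the transactional-counter idea the question circles around (hypothetical names, not from the answer above), using the old google.appengine.ext.db API: keep the counter in its own entity, increment it inside a transaction, and use the result as the id for entities in any entity group. The trade-offs: every allocation serializes on this one entity (datastore contention limits it to a few writes per second), and it cannot share a transaction with the save of your new entity, since they live in different entity groups:

from google.appengine.ext import db

class Counter(db.Model):
    value = db.IntegerProperty(default=0)

def next_id(name='global'):
    def txn():
        counter = Counter.get_by_key_name(name)
        if counter is None:
            counter = Counter(key_name=name)
        counter.value += 1
        counter.put()
        return counter.value
    return db.run_in_transaction(txn)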
What is the best way to create a running integer id on the AppEngine data storage?
For various reasons, I need a unique running integer id for my entities stored on the Google AppEngine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on AppEngine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go..?
Edit: I haven't yet implemented this, but to clarify on the memcache idea. I know memcache is unreliable, but in practice it probably won't lose data "too often" to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity and use that id. In the case of memcache not having a value for this entry, I'd get the highest id so far by doing a query on my entities sorted by the id and update memcache (unless someone else had already done so). The only problem I can see with this right now would be atomicity of the operation as a whole if the save of my new entity was also part of a transaction. Thoughts..?
[ "If you could live without the integer part, try the uuid module (from\nthe standard library): http://docs.python.org/library/uuid.html\nThat would typically give you a 36-character string, though, maybee\nthat's to far from what you need. But it's unique.\n" ]
[ 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0002640907_google_app_engine_python.txt
Q: Extending a series of nonuniform netcdf data in a numpy array
I am new to python, apologies if this has been asked already.
Using python and numpy, I am trying to gather data across many netcdf files into a single array by iteratively calling append(). Naively, I am trying to do something like this:

from numpy import *
from pupynere import netcdf_file

x = array([])
y = [...some list of files...]

for file in y:
    ncfile = netcdf_file(file,'r')
    xFragment = ncfile.variables["varname"][:]
    ncfile.close()
    x = append(x, xFragment)

I know that under normal circumstances this is a bad idea, since it reallocates new memory on each append() call. But two things discourage preallocation of x:
1) The files are not necessarily the same size along axis 0 (but should be the same size along subsequent axes), so I would need to read the array sizes from each file beforehand to precalculate the final size of x. However...
2) From what I can tell, pupynere (and other netcdf modules) load the entire file into memory upon opening the file, rather than just a reference (such as many netcdf modules in other environments). So to preallocate, I'd have to open the files twice. There are many (>100) large (>1GB) files, so overallocating and reshaping is not practical, from what I can tell.
My first question is whether I am missing some intelligent way to preallocate.
My second question is more serious. The above snippet works for a single-dimension array. But if I try to load in a matrix, then initialisation becomes a problem. I can append a one-dimensional array to an empty array:

append( array([]), array([1, 2, 3]) )

but I cannot append an empty array to a matrix:

append( array([]), array([ [1, 2], [3, 4] ]), axis=0)

Something like x.extend(xFragment) would work, I believe, but I don't think numpy arrays have this functionality. I could also avoid the initialisation problem by treating the first file as a special case, but I'd prefer to avoid that if there's a better way to do it.
If anyone can offer help or a suggestion, or can identify a problem with my approach, then I'd be grateful. Thanks

A: You can solve the two problems by first loading the arrays from the files into a list of arrays, and then using concatenate to join all the arrays. Something like this:

x = [] # a normal python list, not np.array
y = [...some list of files...]

for file in y:
    ncfile = netcdf_file(file,'r')
    xFragment = ncfile.variables["varname"][:]
    ncfile.close()
    x.append(xFragment)

combined_array = concatenate(x, axis=0)
Extending a series of nonuniform netcdf data in a numpy array
I am new to python, apologies if this has been asked already.
Using python and numpy, I am trying to gather data across many netcdf files into a single array by iteratively calling append(). Naively, I am trying to do something like this:

from numpy import *
from pupynere import netcdf_file

x = array([])
y = [...some list of files...]

for file in y:
    ncfile = netcdf_file(file,'r')
    xFragment = ncfile.variables["varname"][:]
    ncfile.close()
    x = append(x, xFragment)

I know that under normal circumstances this is a bad idea, since it reallocates new memory on each append() call. But two things discourage preallocation of x:
1) The files are not necessarily the same size along axis 0 (but should be the same size along subsequent axes), so I would need to read the array sizes from each file beforehand to precalculate the final size of x. However...
2) From what I can tell, pupynere (and other netcdf modules) load the entire file into memory upon opening the file, rather than just a reference (such as many netcdf modules in other environments). So to preallocate, I'd have to open the files twice. There are many (>100) large (>1GB) files, so overallocating and reshaping is not practical, from what I can tell.
My first question is whether I am missing some intelligent way to preallocate.
My second question is more serious. The above snippet works for a single-dimension array. But if I try to load in a matrix, then initialisation becomes a problem. I can append a one-dimensional array to an empty array:

append( array([]), array([1, 2, 3]) )

but I cannot append an empty array to a matrix:

append( array([]), array([ [1, 2], [3, 4] ]), axis=0)

Something like x.extend(xFragment) would work, I believe, but I don't think numpy arrays have this functionality. I could also avoid the initialisation problem by treating the first file as a special case, but I'd prefer to avoid that if there's a better way to do it.
If anyone can offer help or a suggestion, or can identify a problem with my approach, then I'd be grateful. Thanks
[ "You can solve the two problems by first loading the arrays from the files files into a list of arrays, and then using concatenate to join all the arrays. Something like this:\nx = [] # a normal python list, not np.array\ny = [...some list of files...]\n\nfor file in y:\n ncfile = netcdf_file(file,'r')\n xFragment = ncfile.variables[\"varname\"][:]\n ncfile.close()\n x.append(xFragment)\n\ncombined_array = concatenate(x, axis=0)\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "netcdf", "numpy", "python" ]
stackoverflow_0002642951_arrays_netcdf_numpy_python.txt
Q: Python list comprehension to return edge values of a list
If I have a list in python such as:

stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]

with length n (in this case 9) and I am interested in creating lists of length n/2 (in this case 4). I want all possible sets of n/2 values in the original list, for example:

[1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3]

is there some list comprehension code I could use to iterate through the list and retrieve all of those sublists? I don't care about the order of the values within the lists, I am just trying to find a clever method of generating the lists.

A: What you need is combinations function from itertools (EDIT: use permutation if the order is important)
Note that this function is not available at Python 2.5. In that case you can copy the code from the above link:

def combinations(iterable, r):
    # combinations('ABCD', 2) --> AB AC AD BC BD CD
    # combinations(range(4), 3) --> 012 013 023 123
    pool = tuple(iterable)
    n = len(pool)
    if r > n:
        return
    indices = range(r)
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != i + n - r:
                break
        else:
            return
        indices[i] += 1
        for j in range(i+1, r):
            indices[j] = indices[j-1] + 1
        yield tuple(pool[i] for i in indices)

and then

stuff = range(9)
what_i_want = [i for i in combinations(stuff, len(stuff)/2)]

A: >>> stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
>>> n=len(stuff)
>>>
>>> [(stuff+stuff[:n/2-1])[i:i+n/2] for i in range(n)]
[[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8], [6, 7, 8, 9], [7, 8, 9, 1], [8, 9, 1, 2], [9, 1, 2, 3]]
>>>

Note: above code is based on assumption from your example
[1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3]
If you really need all possible values, you need to use itertools.permutations or combinations function as others suggested.

A: Use itertools.permutations() or itertools.combinations() (depending on whether you want, for example, both [1,2,3,4] and [4,3,2,1] or not) with the optional second argument to specify length.

stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]

itertools.permutations(stuff, 4) # will return all possible lists of length 4
itertools.combinations(stuff, 4) # will return all possible choices of 4 elements

This is assuming that you don't only want contiguous elements.
Update
Since you specified that you don't care about order, what you're probably looking for is itertools.combinations().
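If what is actually wanted are the circular, contiguous windows from the example (rather than all combinations), a small generalisation of the slicing answer above (my sketch, names illustrative):

def circular_windows(seq, size):
    # every run of `size` consecutive items, wrapping past the end
    extended = seq + seq[:size - 1]
    return [extended[i:i + size] for i in range(len(seq))]

stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]
windows = circular_windows(stuff, len(stuff) // 2)  # [1,2,3,4] ... [9,1,2,3]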
Python list comprehension to return edge values of a list
If I have a list in python such as: stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9] with length n (in this case 9) and I am interested in creating lists of length n/2 (in this case 4). I want all possible sets of n/2 values in the original list, for example: [1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3] is there some list comprehension code I could use to iterate through the list and retrieve all of those sublists? I don't care about the order of the values within the lists, I am just trying to find a clever method of generating the lists.
[ "What you need is combinations function from itertools\n(EDIT: use permutation if the order is important)\nNote that this function is not available at Python 2.5. In that case you can copy the code from the above link:\ndef combinations(iterable, r):\n # combinations('ABCD', 2) --> AB AC AD BC BD CD\n # combinations(range(4), 3) --> 012 013 023 123\n pool = tuple(iterable)\n n = len(pool)\n if r > n:\n return\n indices = range(r)\n yield tuple(pool[i] for i in indices)\n while True:\n for i in reversed(range(r)):\n if indices[i] != i + n - r:\n break\n else:\n return\n indices[i] += 1\n for j in range(i+1, r):\n indices[j] = indices[j-1] + 1\n yield tuple(pool[i] for i in indices)\n\nand then\nstuff = range(9)\nwhat_i_want = [i for i in combinations(stuff, len(stuff)/2)]\n\n", ">>> stuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]\n>>>\n>>> n=len(stuff)\n>>>\n>>> [(stuff+stuff[:n/2-1])[i:i+n/2] for i in range(n)]\n[[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8], [6, 7, 8, 9], [7, 8, 9, 1], [8, 9, 1, 2], [9, 1, 2, 3]]\n>>>\n\nNote: above code is based on assumption from your example\n[1, 2, 3, 4], [2, 3, 4, 5], ..., [9, 1, 2, 3] \n\nIf you really need all possible values, you need to use itertools.permutations or combinations function as others suggested.\n", "Use itertools.permutations() or itertools.combinations() (depending on whether you want, for example, both [1,2,3,4] and [4,3,2,1] or not) with the optional second argument to specify length.\nstuff = [1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nitertools.permutations(stuff, 4) # will return all possible lists of length 4\nitertools.combinations(stuff, 4) # will return all possible choices of 4 elements\n\nThis is assuming that you don't only want contiguous elements.\nUpdate\nSince you specified that you don't care about order, what you're probably looking for is itertools.combinations().\n" ]
[ 5, 5, 3 ]
[]
[]
[ "data_structures", "iteration", "list_comprehension", "python" ]
stackoverflow_0002643209_data_structures_iteration_list_comprehension_python.txt
Q: How to store an arbitrarily long int to a binary file in python?
How can I store and retrieve the number 2**10000 in a binary file in python without converting it to a string? Can it be stored as 10,000 bits?

A: The built-in pickle module can do it if you pass in protocol version 2 ("new binary protocol"):

import pickle
pickle.dumps(2**10000, 2)

That returns a string of 1259 bytes. Of course, you'd want to write it to a file normally, so use pickle.dump(2**10000, file, 2)

A: It's not clear to me if you are asking in general how to store large integers in a binary file or if the number 2**10000 is significant. If it is significant then using over a kilobyte to store it is obviously very wasteful (I can write it in 8 characters!)
I'll assume the general case, but for starters you'd need 10001 bits to store 2**10000, not 10000, so there's a question over what to do about the extra 7 bits needed to pad to a byte boundary in the file. I'm just going to store it in 10008 bits (1251 bytes). This solution uses the bitstring module.

from bitstring import BitArray
fout = open('bignumber', 'wb')
a = BitArray(uint=2**10000, length=10008)
a.tofile(fout)

and to read it back:

the_number = BitArray(filename='bignumber').uint

This really does just store the number and nothing else in the file.
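Another sketch without third-party modules (mine, and Python 2 specific; Python 3 later added int.to_bytes/int.from_bytes for exactly this): encode the integer as a minimal big-endian byte string via its hex form, which comes to the same 1251 bytes for 2**10000:

def int_to_bytes(n):
    # minimal big-endian byte string for a non-negative integer
    h = '%x' % n
    if len(h) % 2:
        h = '0' + h
    return h.decode('hex')

def bytes_to_int(data):
    return int(data.encode('hex'), 16)

with open('bignumber', 'wb') as f:
    f.write(int_to_bytes(2 ** 10000))

with open('bignumber', 'rb') as f:
    assert bytes_to_int(f.read()) == 2 ** 10000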
How to store an arbitrarily long int to a binary file in python?
How can I store and retrieve the number 2**10000 in a binary file in python without converting it to a string? Can it be stored as 10,000 bits?
[ "The built-in pickle module can do it if you pass in protocol version 2 (\"new binary protocol\"):\nimport pickle\npickle.dumps(2**10000, 2)\n\nThat returns a string of 1259 bytes. Of course, you'd want to write it to a file normally, so use pickle.dump(2**10000, file, 2)\n", "It's not clear to me if you are asking in general how to store large integers in a binary file or if the number 2**10000 is significant. If it is significant then using over a kilobyte to store it is obviously very wasteful (I can write it in 8 characters!)\nI'll assume the general case, but for starters you'd need 10001 bits to store 2**10000, not 10000, so there's a question over what to do about the extra 7 bits needed to pad to a byte boundary in the file. I'm just going to store it in 10008 bits (1251 bytes). This solution uses the bitstring module.\nfrom bitstring import BitArray\nfout = open('bignumber', 'wb')\na = BitArray(uint=2**10000, length=10008)\na.tofile(fout)\n\nand to read it back:\nthe_number = BitArray(filename='bignumber').uint\n\nThis really does just store the number and nothing else in the file.\n" ]
[ 3, 2 ]
[]
[]
[ "python" ]
stackoverflow_0002641695_python.txt
Q: Can Microsoft Visual C++ 2008 Redistributable Package be freely redistributed
I am planning to use py2exe to make an application developed with Python 2.6. It seems that my app needs the VC redistributables: http://www.py2exe.org/index.cgi/Tutorial#Step5
I've read this tutorial and the redistributables license agreement and I am not sure if I can freely redistribute these files with my program. (I don't have a VS2008 license)
Can I bundle the redistribs into an installer and make the installation transparent for the user or do they have to download the files on their own from the Microsoft website?
Thanks in advance

A: I think you should be fine if you simply include the installation of vcredist_x86.exe into your installation procedure (according to the document you linked to):

Either you can instruct your users to download and run this themselves, or you could create an installer for your application (see Step 6 below), that includes vcredist_x86.exe (which is itself redistributable by anyone), and then run that as part of your application installation

Mind you that I'm not a lawyer and thus this is not legal advice.
Can Microsoft Visual C++ 2008 Redistributable Package be freely redistributed
I am planning to use py2exe to make an application developed with Python 2.6. It seems that my app needs the VC redistributables: http://www.py2exe.org/index.cgi/Tutorial#Step5
I've read this tutorial and the redistributables license agreement and I am not sure if I can freely redistribute these files with my program. (I don't have a VS2008 license)
Can I bundle the redistribs into an installer and make the installation transparent for the user or do they have to download the files on their own from the Microsoft website?
Thanks in advance
[ "I think you should be fine if you simply include the installation of vcredist_x86.exe into your installation procedure (according to the document you linked to):\n\nEither you can instruct your users to\n download and run this themselves, or\n you could create an installer for your\n application (see Step 6 below), that\n includes vcredist_x86.exe (which is\n itself redistributable by anyone), and\n then run that as part of your\n application installation\n\nMind you that I'm not a lawyer and thus this is not legal advice.\n" ]
[ 3 ]
[]
[]
[ "py2exe", "python", "redistributable", "visual_studio_2008" ]
stackoverflow_0002643760_py2exe_python_redistributable_visual_studio_2008.txt
Q: Embed a Python persistence layer into a C++ application - good idea?
say I'm about to write an application with a thin GUI layer, a really fat calculation layer (doing computationally heavy calibrations and other long-running stuff) and fairly simple persistence layer. I'm looking at building the GUI + calculation layer in C++ (using Qt for the gui parts). Now - would it be a crazy idea to build the persistence layer in Python, using sqlalchemy, and embed it into the C++ application, letting the layers interface with each other through lightweight data transfer objects (written in C++ but accessible from python)?
(the other alternative I'm leaning towards would probably be to write the app in Python from the start, using the PyQt wrapper, and then calling into C++ for the computational tasks)
Thanks, Rickard

A: I would go with the 'alternative' approach:
Write as much as possible in Python (you can use the GUI bindings PyQt or PySide) and then only write the computationally intensive parts (when proven critical for performance) in C++ (have a look at Boost.Python).
Developing in Python should be faster, easier and less error-prone than in C++ (unless you're a very experienced C++ developer; and then still). Exposing C++ via Boost.Python should be easier than the other way around.
Embed a Python persistence layer into a C++ application - good idea?
say I'm about to write an application with a thin GUI layer, a really fat calculation layer (doing computationally heavy calibrations and other long-running stuff) and fairly simple persistence layer. I'm looking at building the GUI + calculation layer in C++ (using Qt for the gui parts). Now - would it be a crazy idea to build the persistence layer in Python, using sqlalchemy, and embed it into the C++ application, letting the layers interface with each other through lightweight data transfer objects (written in C++ but accessible from python)?
(the other alternative I'm leaning towards would probably be to write the app in Python from the start, using the PyQt wrapper, and then calling into C++ for the computational tasks)
Thanks, Rickard
[ "I would go with the 'alternative' approach:\nWrite as much as possible in Python (you can use the GUI bindings PyQt or PySide) and then only write the computationally intensive parts (when proven critical for performance) in C++ (have a look at Boost.Python).\nDeveloping in Python should be faster, easier and less error-prone then in C++ (unless you're a very experienced C++ developer; and then still). Exposing C++ via Boost.Python should be easier then the other way around.\n" ]
[ 10 ]
[]
[]
[ "c++", "embedded_language", "orm", "python" ]
stackoverflow_0002643863_c++_embedded_language_orm_python.txt
Q: What's the simplest way to get the highest and lowest keys from a dictionary?

self.mood_scale = {
    '-30':"Panic",
    '-20':'Fear',
    '-10':'Concern',
    '0':'Normal',
    '10':'Satisfaction',
    '20':'Happiness',
    '30':'Euphoria'}

I need to set two variables: max_mood and min_mood, so I can put some limits on a ticker. What's the easiest way to get the lowest and the highest keys?

A: >>> min(self.mood_scale, key=int)
'-30'
>>> max(self.mood_scale, key=int)
'30'

A: This should do it:

max_mood = max(self.mood_scale)
min_mood = min(self.mood_scale)

Perhaps not the most efficient (since it has to get and traverse the list of keys twice), but certainly very obvious and clear.
UPDATE: I didn't realize your keys were strings. Since it sounds as if that was a mistake, I'll let this stand as is, but do note that it requires keys to be actual integers.

A: Is that valid Python? I think you mean:

mood_scale = {
    '-30':"Panic",
    '-20':'Fear',
    '-10':'Concern',
    '0':'Normal',
    '10':'Satisfaction',
    '20':'Happiness',
    '30':'Euphoria'}

print mood_scale[str(min(map(int,mood_scale)))]
print mood_scale[str(max(map(int,mood_scale)))]

Outputs
Panic
Euphoria

Much better and faster with ints as keys

mood_scale = {
    -30:"Panic",
    -20:'Fear',
    -10:'Concern',
    0:'Normal',
    10:'Satisfaction',
    20:'Happiness',
    30:'Euphoria'}

print mood_scale[min(mood_scale)]
print mood_scale[max(mood_scale)]

Edit 2: Is much faster using the iterator

print timeit.timeit( lambda: mood_scale[min(mood_scale.keys())])
print timeit.timeit( lambda: mood_scale[min(mood_scale)])
1.05913901329
0.662925004959

Another solution could be to keep track of the max/min values upon insertion and simply do
mood_scale.min() / max()
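A usage note building on the key=int answer (my sketch, with the integer keys the UPDATE above suggests): once min_mood and max_mood exist, clamping the ticker is one expression:

mood_scale = {-30: 'Panic', -20: 'Fear', -10: 'Concern', 0: 'Normal',
              10: 'Satisfaction', 20: 'Happiness', 30: 'Euphoria'}
min_mood, max_mood = min(mood_scale), max(mood_scale)

def clamp(value):
    # keep the ticker inside the scale's limits
    return max(min_mood, min(value, max_mood))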
What's the simplest way to get the highest and lowest keys from a dictionary?
self.mood_scale = { '-30':"Panic", '-20':'Fear', '-10':'Concern', '0':'Normal', '10':'Satisfaction', '20':'Happiness', '30':'Euphoria'} I need to set two variables: max_mood and min_mood, so I can put some limits on a ticker. What's the easiest way to get the lowest and the highest keys?
[ ">>> min(self.mood_scale, key=int)\n'-30'\n>>> max(self.mood_scale, key=int)\n'30'\n\n", "This should do it:\nmax_mood = max(self.mood_scale)\nmin_mood = min(self.mood_scale)\n\nPerhaps not the most efficient (since it has to get and traverse the list of keys twice), but certainly very obvious and clear.\nUPDATE: I didn't realize your keys were strings. Since it sounds as if that was a mistake, I'll let this stand as is, but do note that it requires keys to be actual integers.\n", "Is that valid Python? I think you mean:\nmood_scale = {\n '-30':\"Panic\",\n '-20':'Fear',\n '-10':'Concern',\n '0':'Normal',\n '10':'Satisfaction',\n '20':'Happiness',\n '30':'Euphoria'}\n\nprint mood_scale[str(min(map(int,mood_scale)))]\nprint mood_scale[str(max(map(int,mood_scale)))]\n\nOutputs\n\nPanic \n Euphoria\n\nMuch better and faster with ints as keys\nmood_scale = {\n -30:\"Panic\",\n -20:'Fear',\n -10:'Concern',\n 0:'Normal',\n 10:'Satisfaction',\n 20:'Happiness',\n 30:'Euphoria'}\n\nprint mood_scale[min(mood_scale))]\nprint mood_scale[max(mood_scale))]\n\nEdit 2:\nIs much faster using the iterator\nprint timeit.timeit( lambda: mood_scale[min(mood_scale.keys())])\nprint timeit.timeit( lambda: mood_scale[min(mood_scale)])\n1.05913901329\n0.662925004959\n\nAnother solution could be to keep track of the max/min values upon insertion and simply do\nmood_scale.min() / max()\n" ]
[ 12, 9, 7 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0002644039_dictionary_python.txt
Q: object won't die (still references to it that I can't find)
I'm using parallel-python and start a new job server in a function. after the functions ends it still exists even though I didn't return it out of the function (I used weakref to test this). I guess there's still some references to this object somewhere. My two theories: It starts threads and it logs to root logger.
My questions: can I somehow find out in which namespace there is still a reference to this object. I have the weakref reference. Does anyone know how to detach a logger? What other debug suggestions do people have?
here is my testcode:

def pptester():
    js=pp.Server(ppservers=nodes)
    js.set_ncpus(0)
    fh=file('tmp.tmp.tmp','w')
    tmp=[]
    for i in range(200):
        tmp.append(js.submit(ppworktest,(),(),('os','subprocess')))
    js.print_stats()
    return weakref.ref(js)

thanks in advance Wolfgang

A: You can use gc.get_referrers(obj) to find out what is referencing the object. Because you'll most likely get a bunch of dicts as the response, you'll have to go up a couple of levels to make any sense of it.
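A sketch of how the gc.get_referrers suggestion might be applied to the weakref that pptester() returns (illustrative names, not tested against parallel-python):

import gc

def show_referrers(ref):
    obj = ref()                     # dereference the weakref
    if obj is None:
        print 'already collected'
        return
    for holder in gc.get_referrers(obj):
        # referrers are usually dicts (module or instance namespaces)
        print type(holder), repr(holder)[:120]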
object won't die (still references to it that I can't find)
I'm using parallel-python and start a new job server in a function. after the functions ends it still exists even though I didn't return it out of the function (I used weakref to test this). I guess there's still some references to this object somewhere. My two theories: It starts threads and it logs to root logger.
My questions: can I somehow find out in which namespace there is still a reference to this object. I have the weakref reference. Does anyone know how to detach a logger? What other debug suggestions do people have?
here is my testcode:

def pptester():
    js=pp.Server(ppservers=nodes)
    js.set_ncpus(0)
    fh=file('tmp.tmp.tmp','w')
    tmp=[]
    for i in range(200):
        tmp.append(js.submit(ppworktest,(),(),('os','subprocess')))
    js.print_stats()
    return weakref.ref(js)

thanks in advance Wolfgang
[ "You can use gc.get_referrers(obj) to find out what is referencing the object. Because you'll most likely get a bunch of dicts as the response, you'll have to go up a couple of levels to make any sense of it.\n" ]
[ 1 ]
[]
[]
[ "garbage_collection", "logging", "parallel_python", "python", "reference" ]
stackoverflow_0002644103_garbage_collection_logging_parallel_python_python_reference.txt
Q: Constructing an if statement from the client data in python
I need to construct an if statement from the data coming from the client as below:
conditions: condition1, condition2, condition3, condition4
logical operators: lo1, lo2, lo3 (Possible values: "and" "or")
Eg.

if condition1 lo1 condition2 lo3 condition4:
    # Do something

I can think of eval/exec but not sure how safe they are! Any better approach or alternative? Appreciate your responses :)
PS: Client-side: Flex, Server-side: Python, over internet
Thanks

A: Don't use eval. It's a huge security risk. If your conditions are relatively simple, I would consider giving the user a decent flex GUI in which to enter them, not just a raw text area, but a real expression creation tool. Look at the "advanced search" features in any reasonably sophisticated search application for examples. Then take the data they have entered into the GUI widgets and represent it as objects. You would model your expression as a chain of Expressions (15 "duck" 5.3 etc), Operators (< > = != etc), and Conjunctions (AND OR NOT etc), or something along those lines. Then I would marshal these to JSON, unmarshal them into python objects on the server side python code, and evaluate them with custom python code.
Now, if your set of operators and expressions is very large, consider defining a Domain Specific Language and parsing that, which will be much safer than evaluating raw code. I haven't done a DSL myself, but I'm told python has good libraries for this (PLY might help).

A: Define your own function that takes two conditions and an operator and evaluates:

def my_eval(condition1, lo, condition2):
    return {
        'and': condition1 and condition2,
        'or': condition1 or condition2
    }[lo]

and then evaluate the lot:

condition = conditions[0]
for cond, op in zip(conditions[1:], operators):
    condition = my_eval(condition, op, cond)

Feel free to preprocess condition1 and condition2 in my_eval, you probably don't intend to truth test the strings :-)

A: Ignacio's answer is the way to go. Go through your data, all the way building up your complex condition. But you'll have to use eval for the basic conditions:

import operator

condition = eval(conditions[0])
for cond, op in zip(conditions[1:], operators):
    lop = operator.and_ if op == "and" else operator.or_
    condition = lop(condition, eval(cond))

if condition:
    # Do something

You might want to make sure that there are no "evil" conditions in your condition list, e.g. by assuring that they always contain a comparison operator (==, <=, ....).
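Combining the last two answers into one eval-free sketch (mine; it assumes the client-sent conditions are already reduced to booleans, or that you evaluate them safely yourself). Caveat: it folds strictly left-to-right, with no and-over-or precedence and no short-circuiting:

import operator

LOGICAL = {'and': operator.and_, 'or': operator.or_}

def combine(values, operators):
    # values: booleans for condition1..conditionN; operators: N-1 'and'/'or'
    result = values[0]
    for op, value in zip(operators, values[1:]):
        result = LOGICAL[op](result, value)
    return result

# usage: if combine([cond1, cond2, cond4], [lo1, lo3]): ...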
Constructing an if statement from the client data in python
I need to construct an if statement from the data coming from the client as below: conditions: condition1, condition2, condition3, condition4 logical operators: lo1, lo2, lo3 (Possible values: "and" "or") Eg. if condition1 lo1 condition2 lo3 condition4: # Do something I can think of eval/exec but not sure how safe they are! Any better approach or alternative? Appreciate your responses :) PS: Client-side: Flex, Server-side: Python, over internet Thanks
[ "Don't use eval. It's a huge security risk. If your conditions are relatively simple, I would consider giving the user a decent flex GUI in which to enter them, not just a raw text area, but a real expression creation tool. Look at the \"advanced search\" features in any reasonably sophisticate search application for examples. Then take the data they have entered into the GUI widgets and represent it as objects. You would model your expression as a chain of Expressions (15 \"duck\" 5.3 etc), Operators (< > = != etc), and Conjunctions (AND OR NOT etc), or something along those lines. Then I would marshal these to JSON, unmarshal them into python objects on the server side python code, and evaluate them with custom python code.\nNow, if you set of operators and expressions is very large, consider defining a Domain Specific Language and parsing that, which will be much safer than evaluating raw code. I haven't done a DSL myself, but I'm told python has good libraries for this (PLY might help).\n", "Define your own function that takes two conditions and an operator and evaluates:\ndef my_eval(condition1, lo, condition2)\n return {\n 'and': condition1 and condition2,\n 'or': condition1 or condition2\n }[lo]\n\nand then evaluate the lot:\ncondition = conditions[0]\nfor cond, op in zip(conditions[1:], operators):\n condition = my_eval(condition, op, cond)\n\nFeel free to preprocess condition1 and condition2 in my_eval, you probably don't intend to truth test the strings :-)\n", "Ignacio's answer is the way to go. Go through your data, all the way building up your complex condition. But you'll have to use eval for the basic conditions:\ncondition = eval(conditions[0])\nfor cond, op in zip(conditions[1:], operators):\n lop = operator.and_ if op == \"and\" else operator.or_\n condition = lop(condition, eval(cond))\n\nif condition:\n # Do something\n\nYou might want to make sure that there are no \"evil\" conditions in your condition list, e.g. by assuring that they always contain a comparison operator (==, <=, ....).\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002630493_python.txt
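A minimal eval-free sketch of the approach the answers above converge on, assuming the client has already reduced each condition to a boolean before it reaches the server; the condition values and operator names below are made-up placeholders. Note that the left-to-right fold deliberately ignores Python's usual precedence of and over or, and short-circuiting is lost:

    import operator

    OPS = {'and': operator.and_, 'or': operator.or_}

    def combine(conditions, operators):
        # Fold pre-evaluated booleans left to right; anything other
        # than 'and'/'or' raises KeyError instead of being executed.
        result = conditions[0]
        for cond, op in zip(conditions[1:], operators):
            result = OPS[op](result, cond)
        return result

    print combine([True, False, True], ['and', 'or'])   # prints True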
Q: How can I convert this string to list of lists? If a user types in [[0,0,0], [0,0,1], [1,1,0]] and presses enter, the program should convert this string to several lists; one list holding [0][0][0], another for [0][0][1], and the last list for [1][1][0]. Does Python have a good way to handle this? A: >>> import ast >>> ast.literal_eval('[[0,0,0], [0,0,1], [1,1,0]]') [[0, 0, 0], [0, 0, 1], [1, 1, 0]] For tuples >>> ast.literal_eval('[(0,0,0), (0,0,1), (1,1,0)]') [(0, 0, 0), (0, 0, 1), (1, 1, 0)] A: >>> import json >>> json.loads('[[0,0,0], [0,0,1], [1,1,0]]') [[0, 0, 0], [0, 0, 1], [1, 1, 0]] A: This is a little more flexible than Satoru's, and doesn't use any libraries. Still, it won't work with more deeply nested lists. For that, I think you would need a recursive function (or loop), or eval. str = "[[0,0,0],[0,0,1],[1,1,0]]" strs = str.replace('[','').split('],') lists = [map(int, s.replace(']','').split(',')) for s in strs] lists now contains the list of lists you want. A: [[int(i) for i in x.strip(" []").split(",")] for x in s.strip('[]').split("],")] a list comprehension in a list comprehension... but that will melt your brain A: >>> import re >>> list_strs = re.findall(r'\[\d+\,\d+\,\d+\]', s) >>> [[[int(i)] for i in l[1:-1].split(',')] for l in list_strs]
How can I convert this string to list of lists?
If a user types in [[0,0,0], [0,0,1], [1,1,0]] and presses enter, the program should convert this string to several lists; one list holding [0][0][0], another for [0][0][1], and the last list for [1][1][0]. Does Python have a good way to handle this?
[ ">>> import ast\n>>> ast.literal_eval('[[0,0,0], [0,0,1], [1,1,0]]')\n[[0, 0, 0], [0, 0, 1], [1, 1, 0]]\n\nFor tuples\n>>> ast.literal_eval('[(0,0,0), (0,0,1), (1,1,0)]')\n[(0, 0, 0), (0, 0, 1), (1, 1, 0)]\n\n", ">>> import json\n>>> json.loads('[[0,0,0], [0,0,1], [1,1,0]]')\n[[0, 0, 0], [0, 0, 1], [1, 1, 0]]\n\n", "This is a little more flexible than Satoru's, and doesn't use any libraries. Still, it won't work with more deeply nested lists. For that, I think you would need a recursive function (or loop), or eval.\nstr = \"[[0,0,0],[0,0,1],[1,1,0]]\"\nstrs = str.replace('[','').split('],')\nlists = [map(int, s.replace(']','').split(',')) for s in strs]\n\nlists now contains the list of lists you want.\n", "[[int(i) for i in x.strip(\" []\").split(\",\")] for x in s.strip('[]').split(\"],\")]\na list comprehension in a list comprehension...\nbut that will melt your brain\n", ">>> import re \n>>> list_strs = re.findall(r'\\[\\d+\\,\\d+\\,\\d+\\]', s)\n>>> [[[int(i)] for i in l[1:-1].split(',')] for l in list_str]\n\n" ]
[ 47, 23, 10, 3, 0 ]
[ ">>> string='[[0,0,0], [0,0,1], [1,1,0]]'\n>>> eval(string)\n[[0, 0, 0], [0, 0, 1], [1, 1, 0]]\n>>> a=eval(string)\n>>> a\n[[0, 0, 0], [0, 0, 1], [1, 1, 0]]\n\nbefore passing your string to eval(), do the necessary sanitization first.\n" ]
[ -3 ]
[ "list", "python", "tuples" ]
stackoverflow_0002644221_list_python_tuples.txt
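Since the input comes from a user, a defensive variant of the ast.literal_eval answer might validate the shape after parsing; the expected shape (a list of lists of ints) is an assumption drawn from the question's example, and the ast module needs Python 2.6+:

    import ast

    def parse_nested(text):
        # literal_eval accepts only literals, so no code can be executed
        value = ast.literal_eval(text)
        if not isinstance(value, list) or not all(
                isinstance(row, list) and all(isinstance(n, int) for n in row)
                for row in value):
            raise ValueError("expected a list of lists of ints: %r" % text)
        return value

    print parse_nested("[[0,0,0], [0,0,1], [1,1,0]]")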
Q: Google Bot information? Does anyone know any more details about google's web-crawler (aka GoogleBot)? I was curious about what it was written in (I've made a few crawlers myself and am about to make another) and if it parses images and such. I'm assuming it does somewhere along the line, b/c the images in images.google.com are all resized. It also wouldn't surprise me if it was all written in Python and if they used all their own libraries for most everything, including html/image/pdf parsing. Maybe they don't though. Maybe it's all written in C/C++. Thanks in advance- A: you can find a bit about how googlebot works here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=158587 for example the "fetch as googlebot" tool lets you see a page as Googlebot sees it. A: The crawler is very likely written in C or C++, at least backrub's crawler was written in one of these. Be aware that the crawler only takes a snapshot of the page, then stores it in a temporary database for later processing. The indexing and other attached algorithms will extract the data, for example the image references. A: Officially allowed languages at Google, I think, are Python/C++/Java. The bot likely uses all 3 for different tasks.
Google Bot information?
Does anyone know any more details about google's web-crawler (aka GoogleBot)? I was curious about what it was written in (I've made a few crawlers myself and am about to make another) and if it parses images and such. I'm assuming it does somewhere along the line, b/c the images in images.google.com are all resized. It also wouldn't surprise me if it was all written in Python and if they used all their own libraries for most everything, including html/image/pdf parsing. Maybe they don't though. Maybe it's all written in C/C++. Thanks in advance-
[ "you can find a bit about how googlebot works here:\nhttp://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=158587\nfor example the \"fetch as googlebot\" tool lets you see a page as Googlebot sees it. \n", "The crawler is very likely written in C or C++, at least backrub's crawler was written in one of these.\nBe aware that the crawler only takes a snapshot of the page, then stores it in a temporary database for later processing. The indexing and other attached algorithms will extract the data, for example the image references.\n", "Officially allowed languages at Google, I think, are Python/C++/Java.\nThe bot likely uses all 3 for different tasks.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "c", "c++", "python" ]
stackoverflow_0002633302_c_c++_python.txt
Q: Fastest way to find the closest point to a given point in 3D, in Python So let's say I have 10,000 points in A and 10,000 points in B and want to find out the closest point in A for every B point. Currently, I simply loop through every point in B and A to find which one is closest in distance. i.e. B = [(.5, 1, 1), (1, .1, 1), (1, 1, .2)] A = [(1, 1, .3), (1, 0, 1), (.4, 1, 1)] C = {} for bp in B: closestDist = -1 for ap in A: dist = sum(((bp[0]-ap[0])**2, (bp[1]-ap[1])**2, (bp[2]-ap[2])**2)) if(closestDist > dist or closestDist == -1): C[bp] = ap closestDist = dist print C However, I am sure there is a faster way to do this... any ideas? A: I typically use a kd-tree in such situations. There is a C++ implementation wrapped with SWIG and bundled with BioPython that's easy to use. A: You could use some spatial lookup structure. A simple option is an octree; fancier ones include the BSP tree. A: You could use numpy broadcasting. For example, from numpy import * import numpy as np a=array(A) b=array(B) #using looping for i in b: print sum((a-i)**2,1).argmin() will print 2,1,0 which are the rows in a that are closest to the 1,2,3 rows of B, respectively. Otherwise, you can use broadcasting: z = sum((a[:,:, np.newaxis] - b)**2,1) z.argmin(1) # gives array([2, 1, 0]) I hope that helps.
Fastest way to find the closest point to a given point in 3D, in Python
So let's say I have 10,000 points in A and 10,000 points in B and want to find out the closest point in A for every B point. Currently, I simply loop through every point in B and A to find which one is closest in distance. i.e. B = [(.5, 1, 1), (1, .1, 1), (1, 1, .2)] A = [(1, 1, .3), (1, 0, 1), (.4, 1, 1)] C = {} for bp in B: closestDist = -1 for ap in A: dist = sum(((bp[0]-ap[0])**2, (bp[1]-ap[1])**2, (bp[2]-ap[2])**2)) if(closestDist > dist or closestDist == -1): C[bp] = ap closestDist = dist print C However, I am sure there is a faster way to do this... any ideas?
[ "I typically use a kd-tree in such situations.\nThere is a C++ implementation wrapped with SWIG and bundled with BioPython that's easy to use.\n", "You could use some spatial lookup structure. A simple option is an octree; fancier ones include the BSP tree.\n", "You could use numpy broadcasting. For example,\nfrom numpy import *\nimport numpy as np\n\na=array(A)\nb=array(B)\n#using looping\nfor i in b:\n print sum((a-i)**2,1).argmin()\n\nwill print 2,1,0 which are the rows in a that are closest to the 1,2,3 rows of B, respectively.\nOtherwise, you can use broadcasting:\nz = sum((a[:,:, np.newaxis] - b)**2,1)\nz.argmin(1) # gives array([2, 1, 0])\n\nI hope that helps.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "closest", "distance", "points", "python" ]
stackoverflow_0002641206_closest_distance_points_python.txt
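The kd-tree answer names BioPython's wrapper but shows no code; here is a sketch of the same idea using scipy.spatial.cKDTree instead (a substitution, assuming SciPy is available). The tree is built once over A and then queried once per point in B, replacing the O(N^2) double loop:

    import numpy as np
    from scipy.spatial import cKDTree

    A = np.array([(1, 1, .3), (1, 0, 1), (.4, 1, 1)])
    B = np.array([(.5, 1, 1), (1, .1, 1), (1, 1, .2)])

    tree = cKDTree(A)             # build once
    dists, idx = tree.query(B)    # nearest neighbour in A for every B point
    for b, i in zip(B, idx):
        print b, '->', A[i]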
Q: python extract from switch output I have some info back from a LAN switch as below Vlan 1 is administratively down, line protocol is down Vlan 2 is up, line protocol is up Helper address is 192.168.0.2 Vlan 3 is up, line protocol is up Helper address is not set Vlan 4 is up, line protocol is up Helper address is 192.168.0.2 Vlan 5 is down, line protocol is down Helper address is 192.168.0.2 Vlan 6 is down, line protocol is down Helper address is not set Helper address is not set And the output I'm trying for is Vlan 1,admin down,n/a Vlan 2,up/up, 192.168.0.2 Vlan 3, up/up, not set Vlan 4, up/up, 192.168.0.2 Vlan 5, down/down, 192.168.0.2 Vlan 6, down/down, not set So the helper isn't always there (line 1), sometimes it's set, sometimes it isn't, sometimes there are two lines (last Vlan - I only need 1) and the Vlan can have states of admin down, up/up, up/down (not here) and down/down. So using Python and pexpect I can get the above output, but I'm having difficulty parsing out the consecutive lines. I've tried enumerate and then using key+1 for the next line, but the fact that there can be 0, 1, or 2 lines following the Vlan screws me. Any ideas please? A: import re x=""" Vlan 1 is administratively down, line protocol is down Vlan 2 is up, line protocol is up Helper address is 192.168.0.2 Vlan 3 is up, line protocol is up Helper address is not set Vlan 4 is up, line protocol is up Helper address is 192.168.0.2 Vlan 5 is down, line protocol is down Helper address is 192.168.0.2 Vlan 6 is down, line protocol is down Helper address is not set Helper address is not set """ x=x.replace(" is administratively down, line protocol is down ",", admin down, n/a") x=x.replace(" line protocol is ","") x=x.replace(" is down,",", down/") x=x.replace(" is up,",", up/") x=re.sub("(?:\s*Helper address is (.*))+",", \\1",x) print x Vlan 1, admin down, n/a Vlan 2, up/up, 192.168.0.2 Vlan 3, up/up, not set Vlan 4, up/up, 192.168.0.2 Vlan 5, down/down, 192.168.0.2 Vlan 6, down/down, not set A: Differentiate between lines of interest (which start with 'Vlan'; or not): for line in lines: if line.startswith("Vlan"): # parse Vlanline # ... else: # parse data from helper line # ... A: here's one way, import re data=open("file").read() r=re.split("\n[^ \t]+",data) for i in r: print "-->",i.split("\n") $ ./python.py --> ['Vlan 1 is administratively down, line protocol is down '] --> [' 2 is up, line protocol is up ', ' Helper address is 192.168.0.2 '] --> [' 3 is up, line protocol is up ', ' Helper address is not set '] --> [' 4 is up, line protocol is up ', ' Helper address is 192.168.0.2 '] --> [' 5 is down, line protocol is down ', ' Helper address is 192.168.0.2 '] --> [' 6 is down, line protocol is down ', ' Helper address is not set ', ' Helper address is not set', ''] now you can manipulate each item, since they are already grouped together A: ghostdog gave me a clue to the solution First I enumerated the table into a dictionary Then step through it. If the line began with VLAN I could then test for line+1 etc to see if it was a helper line then output them all as one line and slice it up as I needed Not the cleanest way, but works and thanks all for your help
python extract from switch output
I have some info back from a LAN switch as below Vlan 1 is administratively down, line protocol is down Vlan 2 is up, line protocol is up Helper address is 192.168.0.2 Vlan 3 is up, line protocol is up Helper address is not set Vlan 4 is up, line protocol is up Helper address is 192.168.0.2 Vlan 5 is down, line protocol is down Helper address is 192.168.0.2 Vlan 6 is down, line protocol is down Helper address is not set Helper address is not set And the output I'm trying for is Vlan 1,admin down,n/a Vlan 2,up/up, 192.168.0.2 Vlan 3, up/up, not set Vlan 4, up/up, 192.168.0.2 Vlan 5, down/down, 192.168.0.2 Vlan 6, down/down, not set So the helper isn't always there (line 1), sometimes it's set, sometimes it isn't, sometimes there are two lines (last Vlan - I only need 1) and the Vlan can have states of admin down, up/up, up/down (not here) and down/down. So using Python and pexpect I can get the above output, but I'm having difficulty parsing out the consecutive lines. I've tried enumerate and then using key+1 for the next line, but the fact that there can be 0, 1, or 2 lines following the Vlan screws me. Any ideas please?
[ "import re\n\nx=\"\"\"\nVlan 1 is administratively down, line protocol is down \nVlan 2 is up, line protocol is up \n Helper address is 192.168.0.2 \nVlan 3 is up, line protocol is up \n Helper address is not set \nVlan 4 is up, line protocol is up \n Helper address is 192.168.0.2 \nVlan 5 is down, line protocol is down \n Helper address is 192.168.0.2 \nVlan 6 is down, line protocol is down \n Helper address is not set \n Helper address is not set\n\"\"\"\n\nx=x.replace(\" is administratively down, line protocol is down \",\", admin down, n/a\")\nx=x.replace(\" line protocol is \",\"\")\nx=x.replace(\" is down,\",\", down/\")\nx=x.replace(\" is up,\",\", up/\")\nx=re.sub(\"(?:\\s*Helper address is (.*))+\",\", \\\\1\",x)\n\nprint x\n\n\nVlan 1, admin down, n/a\nVlan 2, up/up, 192.168.0.2\nVlan 3, up/up, not set\nVlan 4, up/up, 192.168.0.2\nVlan 5, down/down, 192.168.0.2\nVlan 6, down/down, not set\n\n", "Differentiate between lines of interest (which start with 'Vlan'; or not):\nfor line in lines:\n if line.startswith(\"Vlan\"):\n # parse Vlanline\n # ...\n else:\n # parse data from helper line\n # ...\n\n", "here's one way, \nimport re \ndata=open(\"file\").read()\nr=re.split(\"\\n[^ \\t]+\",data)\nfor i in r:\n print \"-->\",i.split(\"\\n\")\n\n$ ./python.py\n--> ['Vlan 1 is administratively down, line protocol is down ']\n--> [' 2 is up, line protocol is up ', ' Helper address is 192.168.0.2 ']\n--> [' 3 is up, line protocol is up ', ' Helper address is not set ']\n--> [' 4 is up, line protocol is up ', ' Helper address is 192.168.0.2 ']\n--> [' 5 is down, line protocol is down ', ' Helper address is 192.168.0.2 ']\n--> [' 6 is down, line protocol is down ', ' Helper address is not set ', ' Helper address is not set', '']\n\nnow you can manipulate each item, since they are already grouped together\n", "ghostdog gave me a clue to the solution\nFirst I enumerated the table into a dictionary\nThen step through it. If the line began with VLAN I could then test for line+1 etc to see if it was a helper line\nthen output them all as one line and slice it up as I needed\nNot the cleanest way, but works and thanks all for your help\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "parsing", "python", "text" ]
stackoverflow_0002643532_parsing_python_text.txt
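The approach the asker settled on (stepping through lines and checking whether the next one is a helper) is only described in prose, so here is one possible single-pass sketch; the filename is illustrative, and the rewriting rules ('admin down' plus 'n/a', first helper line wins) are inferred from the sample output in the question:

    import re

    def parse_vlans(lines):
        rows = []
        for line in lines:
            line = line.strip()
            if line.startswith('Vlan'):
                m = re.match(r'Vlan (\d+) is (.+), line protocol is (\w+)', line)
                vlan, admin, proto = m.groups()
                if admin == 'administratively down':
                    rows.append(['Vlan ' + vlan, 'admin down', 'n/a'])
                else:
                    rows.append(['Vlan ' + vlan, '%s/%s' % (admin, proto), None])
            elif line.startswith('Helper') and rows and rows[-1][2] is None:
                # keep only the first helper line for each Vlan
                rows[-1][2] = line.replace('Helper address is ', '')
        return rows

    for row in parse_vlans(open('switch_output.txt')):
        print ', '.join(col if col is not None else 'n/a' for col in row)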
Q: Avoiding nesting two for loops Please have a look at the code below: import string from collections import defaultdict first_complex=open( "residue_a_chain_a_b_backup.txt", "r" ) first_complex_lines=first_complex.readlines() first_complex_lines=map( string.strip, first_complex_lines ) first_complex.close() second_complex=open( "residue_a_chain_a_c_backup.txt", "r" ) second_complex_lines=second_complex.readlines() second_complex_lines=map( string.strip, second_complex_lines ) second_complex.close() list_1=[] list_2=[] for x in first_complex_lines: if x[0]!="d": list_1.append( x ) for y in second_complex_lines: if y[0]!="d": list_2.append( y ) j=0 list_3=[] list_4=[] for a in list_1: pass for b in list_2: pass if a==b: list_3.append( a ) kvmap=defaultdict( int ) for k in list_3: kvmap[k]+=1 print kvmap Normally I use izip or izip_longest to club two for loops, but this time the lengths of the files are different. I don't want a None entry. If I use the above method, the run time becomes incremental and useless. How am I supposed to get the two for loops going? Cheers, Chavanak A: You want to convert list_2 to a set, and check for membership: list_1 = ['a', 'big', 'list'] list_2 = ['another', 'big', 'list'] target_set = set(list_2) for a in list_1: if a in target_set: print a Outputs: big list A set gives you the advantage of O(1) access time to determine membership, so you only have to read all the way through list_2 once (when creating the set). Thereafter, each comparison happens in constant time. A: The following code performs the same tasks as yours with greater conciseness, directness, and speed: with open('residue_a_chain_a_b_backup.txt', 'r') as f: list1 = [line for line in f if line[0] != 'd'] with open('residue_a_chain_a_c_backup.txt', 'r') as f: list2 = [line for line in f if line[0] != 'd'] set2 = set(list2) list3 = [line for line in list1 if line in set2] the following histogramming of list3 into kvmap is already fine in your code. (In Python 2.5, to use the with statement, you need to start your module with from __future__ import with_statement; in 2.6, no need for that "import from the future", though it does no harm if you want to leave it in). A: Is it the intersection of two sets you want? If so, you can use the set intersection operation: list_1 = ['a', 'big', 'list'] list_2 = ['another', 'big', 'list'] intersection = (set(list_1) & set(list_2)) After running this, intersection is a set containing the common items of list_1 and list_2. A: Refining Alex's code very slightly: with open('residue_a_chain_a_c_backup.txt', 'r') as f: set2 = set([line.strip() for line in f if line[0] != 'd']) with open('residue_a_chain_a_b_backup.txt', 'r') as f: list1 = [line.strip() for line in f if line.strip() in set2]
Avoiding nesting two for loops
Please have a look at the code below: import string from collections import defaultdict first_complex=open( "residue_a_chain_a_b_backup.txt", "r" ) first_complex_lines=first_complex.readlines() first_complex_lines=map( string.strip, first_complex_lines ) first_complex.close() second_complex=open( "residue_a_chain_a_c_backup.txt", "r" ) second_complex_lines=second_complex.readlines() second_complex_lines=map( string.strip, second_complex_lines ) second_complex.close() list_1=[] list_2=[] for x in first_complex_lines: if x[0]!="d": list_1.append( x ) for y in second_complex_lines: if y[0]!="d": list_2.append( y ) j=0 list_3=[] list_4=[] for a in list_1: pass for b in list_2: pass if a==b: list_3.append( a ) kvmap=defaultdict( int ) for k in list_3: kvmap[k]+=1 print kvmap Normally I use izip or izip_longest to club two for loops, but this time the lengths of the files are different. I don't want a None entry. If I use the above method, the run time becomes incremental and useless. How am I supposed to get the two for loops going? Cheers, Chavanak
[ "You want to convert list_2 to a set, and check for membership:\nlist_1 = ['a', 'big', 'list']\nlist_2 = ['another', 'big', 'list']\n\ntarget_set = set(list_2)\n\nfor a in list_1:\n if a in target_set:\n print a\n\nOutputs:\nbig\nlist\n\nA set gives you the advantage of O(1) access time to determine membership, so you only have to read all the way through list_2 once (when creating the set). Thereafter, each comparison happens in constant time.\n", "The following code perform the same tasks as yours with greater conciseness, directness, and speed:\nwith open('residue_a_chain_a_b_backup.txt', 'r') as f:\n list1 = [line for line in f if line[0] != 'd']\nwith open('residue_a_chain_a_c_backup.txt', 'r') as f:\n list2 = [line for line in f if line[0] != 'd']\nset2 = set(list2)\nlist3 = [line for line in list1 if line in set2]\n\nthe following histogramming of lint3 into kvmap is already fine in your code. (In Python 2.5, to use the with statement, you need to start your module with from __future__ import with_statement; in 2.6, no need for that \"import from the future\", though it does no harm if you want to leave it in).\n", "Is it the intersection of two set you want, if so you can use the set interaction operation:\nlist_1 = ['a', 'big', 'list']\nlist_2 = ['another', 'big', 'list']\n\nintersection = (set(list_1) & set(list_2))\n\nAfter running this, interaction is a set containing the common items of list_1 and list_2.\n", "Refining Alex's code very slightly:\nwith open('residue_a_chain_a_c_backup.txt', 'r') as f:\n set2 = set([line.strip() for line in f if line[0] != 'd'])\n\nwith open('residue_a_chain_a_b_backup.txt', 'r') as f:\n list1 = [line.strip() for line in f if line.strip() in set2]\n\n" ]
[ 8, 3, 2, 1 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0002364382_for_loop_python.txt
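On Python 2.7+ the final histogram step can also be folded in with collections.Counter, giving a condensed version of the whole pipeline; this is a sketch that reuses the filenames from the question and otherwise follows the set-membership answers above:

    from collections import Counter   # 2.7+; use defaultdict(int) on 2.5/2.6

    with open('residue_a_chain_a_c_backup.txt') as f:
        set2 = set(line.strip() for line in f if line[0] != 'd')

    with open('residue_a_chain_a_b_backup.txt') as f:
        kvmap = Counter(line.strip() for line in f
                        if line[0] != 'd' and line.strip() in set2)

    print kvmap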
Q: Need help running Python app as service in Ubuntu with Upstart I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job: /etc/init/greeenlog.conf # greeenlog description "I log stuff." start on startup stop on shutdown script exec /usr/local/greeenlog/main.pyw end script My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting and running start greeenlog returns: start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")) Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks. A: Thanks to unutbu's help, I have been able to correct my job. Apparently, these are the only environment variables that Upstart sets (retrieved in Python with os.environ): {'TERM': 'linux', 'PWD': '/', 'UPSTART_INSTANCE': '', 'UPSTART_JOB': 'greeenlog', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin'} My program relies on a couple of these variables being set, so here is the revised job with the right environment variables: # greeenlog description "I log stuff." start on startup stop on shutdown env DISPLAY=:0.0 env GTK_RC_FILES=/etc/gtk/gtkrc:/home/greeenguru/.gtkrc-1.2-gnome2 script exec /usr/local/greeenlog/main.pyw > /tmp/greeenlog.out 2>&1 end script Thank you!
Need help running Python app as service in Ubuntu with Upstart
I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job: /etc/init/greeenlog.conf # greeenlog description "I log stuff." start on startup stop on shutdown script exec /usr/local/greeenlog/main.pyw end script My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting and running start greeenlog returns: start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")) Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.
[ "Thanks to unutbu's help, I have been able to correct my job. Apparently, these are the only environment variables that Upstart sets (retrieved in Python with os.environ):\n{'TERM': 'linux', 'PWD': '/', 'UPSTART_INSTANCE': '', 'UPSTART_JOB': 'greeenlog', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin'}\n\nMy program relies on a couple of these variables being set, so here is the revised job with the right environment variables:\n# greeenlog\n\ndescription \"I log stuff.\"\n\nstart on startup\nstop on shutdown\n\nenv DISPLAY=:0.0\nenv GTK_RC_FILES=/etc/gtk/gtkrc:/home/greeenguru/.gtkrc-1.2-gnome2\n\nscript\n exec /usr/local/greeenlog/main.pyw > /tmp/greeenlog.out 2>&1\nend script\n\nThank you!\n" ]
[ 12 ]
[]
[]
[ "python", "ubuntu", "upstart" ]
stackoverflow_0002641136_python_ubuntu_upstart.txt
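The fix above was found by dumping os.environ from inside the job; here is a small sketch of how one might capture that, assuming /tmp is writable — point the Upstart exec stanza at this script temporarily:

    # debug_env.py - run as the Upstart job's exec target to log the
    # environment the daemonized process actually inherits
    import os

    with open('/tmp/upstart_env.log', 'w') as f:
        for key, value in sorted(os.environ.items()):
            f.write('%s=%s\n' % (key, value))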
Q: Why urllib.urlopen doesn't (seem to) work with Stack Overflow? I need to retrieve my info from Stack Overflow. The web page that I want to retrieve is something like this. http://stackoverflow.com/users/260127/prosseek When I run the script, it doesn't seem to return any results. import urllib sock = urllib.urlopen("http://stackoverflow.com/users/260127/prosseek") htmlSource = sock.read() sock.close() print htmlSource whereas I get an almost instant result with this. import urllib sock = urllib.urlopen("http://diveintopython.net/") htmlSource = sock.read() sock.close() print htmlSource What might be wrong? PS. I don't know whether this should be asked at MetaStackOverflow or not. A: It "works" for me - the page returned is a 404 error. Try putting the URL https://stackoverflow.com/users/XXXXX/prosseek into your browser and see for yourself. I think you want to use https://stackoverflow.com/users/260127/prosseek instead.
Why urllib.urlopen doesn't (seem to) work with Stack Overflow?
I need to retrieve my info from Stack Overflow. The web page that I want to retrieve is something like this. http://stackoverflow.com/users/260127/prosseek When I run the script, it doesn't seem to return any results. import urllib sock = urllib.urlopen("http://stackoverflow.com/users/260127/prosseek") htmlSource = sock.read() sock.close() print htmlSource whereas I get an almost instant result with this. import urllib sock = urllib.urlopen("http://diveintopython.net/") htmlSource = sock.read() sock.close() print htmlSource What might be wrong? PS. I don't know whether this should be asked at MetaStackOverflow or not.
[ "It \"works\" for me - the page returned is a 404 error. Try putting the URL https://stackoverflow.com/users/XXXXX/prosseek into your browser and see for yourself.\nI think you want to use https://stackoverflow.com/users/260127/prosseek instead.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0002646880_python.txt
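urllib.urlopen hides the status line, which is why the 404 page looked like an empty result; here is a sketch using urllib2 so HTTP errors surface as exceptions. The custom User-Agent is a speculative extra in case the site filters Python's default one — the answer above only confirms the 404:

    import urllib2

    req = urllib2.Request('https://stackoverflow.com/users/260127/prosseek',
                          headers={'User-Agent': 'Mozilla/5.0 (example)'})
    try:
        html = urllib2.urlopen(req).read()
        print html[:200]
    except urllib2.HTTPError, e:
        print 'HTTP error:', e.code   # 404, 403, ... raised, not hidden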
Q: Check if something is a list What is the easiest way to check if something is a list? A method doSomething has the parameters a and b. In the method, it will loop through the list a and do something. I'd like a way to make sure a is a list, before looping through - thus avoiding an error or the unfortunate circumstance of passing in a string then getting back a letter from each loop. This question must have been asked before - however my googles failed me. Cheers. A: To enable more usecases, but still treat strings as scalars, don't check for a being a list, check that it isn't a string: if not isinstance(a, basestring): ... A: Typechecking hurts the generality, simplicity, and maintainability of your code. It is seldom used in good, idiomatic Python programs. There are two main reasons people want to typecheck: To issue errors if the caller provides the wrong type. This is not worth your time. If the user provides an incompatible type for the operation you are performing, an error will already be raised when the compatibility is hit. It is worrisome that this might not happen immediately, but it typically doesn't take long at all and results in code that is more robust, simple, efficient, and easier to write. Oftentimes people insist on this with the hope they can catch all the dumb things a user can do. If a user is willing to do arbitrarily dumb things, there is nothing you can do to stop him. Typechecking mainly has the potential of keeping a user who comes in with his own types that are drop-in replacements for the ones replaced or when the user recognizes that your function should actually be polymorphic and provides something different that can accept the same operation. If I had a big system where lots of things made by lots of people should fit together right, I would use a system like zope.interface to make testing that everything fits together right. To do different things based on the types of the arguments received. This makes your code worse because your API is inconsistent. A function or method should do one thing, not fundamentally different things. This ends up being a feature not usually worth supporting. One common scenario is to have an argument that can either be a foo or a list of foos. A cleaner solution is simply to accept a list of foos. Your code is simpler and more consistent. If it's an important, common use case only to have one foo, you can consider having another convenience method/function that calls the one that accepts a list of foos and lose nothing. Providing the first API would not only have been more complicated and less consistent, but it would break when the types were not the exact values expected; in Python we distinguish between objects based on their capabilities, not their actual types. It's almost always better to accept an arbitrary iterable or a sequence instead of a list and anything that works like a foo instead of requiring a foo in particular. As you can tell, I do not think either reason is compelling enough to typecheck under normal circumstances. A: I'd like a way to make sure a is a list, before looping through Document the function. A: Usually it's considered not a good style to perform type-check in Python, but try if isinstance(a, list): ... (I think you may also check if a.__iter__ exists.)
Check if something is a list
What is the easiest way to check if something is a list? A method doSomething has the parameters a and b. In the method, it will loop through the list a and do something. I'd like a way to make sure a is a list, before looping through - thus avoiding an error or the unfortunate circumstance of passing in a string then getting back a letter from each loop. This question must have been asked before - however my googles failed me. Cheers.
[ "To enable more usecases, but still treat strings as scalars, don't check for a being a list, check that it isn't a string:\nif not isinstance(a, basestring):\n ...\n\n", "Typechecking hurts the generality, simplicity, and maintainability of your code. It is seldom used in good, idiomatic Python programs.\nThere are two main reasons people want to typecheck: \n\nTo issue errors if the caller provides the wrong type.\nThis is not worth your time. If the user provides an incompatible type for the operation you are performing, an error will already be raised when the compatibility is hit. It is worrisome that this might not happen immediately, but it typically doesn't take long at all and results in code that is more robust, simple, efficient, and easier to write.\nOftentimes people insist on this with the hope they can catch all the dumb things a user can do. If a user is willing to do arbitrarily dumb things, there is nothing you can do to stop him. Typechecking mainly has the potential of keeping a user who comes in with his own types that are drop-in replacements for the ones replaced or when the user recognizes that your function should actually be polymorphic and provides something different that can accept the same operation.\nIf I had a big system where lots of things made by lots of people should fit together right, I would use a system like zope.interface to make testing that everything fits together right.\nTo do different things based on the types of the arguments received.\nThis makes your code worse because your API is inconsistent. A function or method should do one thing, not fundamentally different things. This ends up being a feature not usually worth supporting.\nOne common scenario is to have an argument that can either be a foo or a list of foos. A cleaner solution is simply to accept a list of foos. Your code is simpler and more consistent. If it's an important, common use case only to have one foo, you can consider having another convenience method/function that calls the one that accepts a list of foos and lose nothing. Providing the first API would not only have been more complicated and less consistent, but it would break when the types were not the exact values expected; in Python we distinguish between objects based on their capabilities, not their actual types. It's almost always better to accept an arbitrary iterable or a sequence instead of a list and anything that works like a foo instead of requiring a foo in particular.\n\nAs you can tell, I do not think either reason is compelling enough to typecheck under normal circumstances.\n", "\nI'd like a way to make sure a is a list, before looping through\n\nDocument the function.\n", "Usually it's considered not a good style to perform type-check in Python, but try\nif isinstance(a, list):\n ...\n\n(I think you may also check if a.__iter__ exists.)\n" ]
[ 16, 10, 7, 5 ]
[]
[]
[ "python", "typechecking" ]
stackoverflow_0002645749_python_typechecking.txt
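Combining the first answer with the duck-typing advice, one sketch is to normalize the argument once instead of type-checking throughout; the body of do_something is a stand-in, since the question never shows it:

    def do_something(a, b):
        # Python 2 strings have no __iter__, so they get wrapped like scalars
        if not hasattr(a, '__iter__'):
            a = [a]
        for item in a:
            print item, b   # placeholder for the real per-item work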
Q: Python - import error I've done what I shouldn't have done and written 4 modules (6 hours or so) without running any tests along the way. I have a method inside of /mydir/__init__.py called get_hash(), and a class inside of /mydir/utils.py called SpamClass. /mydir/utils.py imports get_hash() from /mydir/__init__. /mydir/__init__.py imports SpamClass from /mydir/utils.py. Both the class and the method work fine on their own but for some reason if I try to import /mydir/, I get an import error saying "Cannot import name get_hash" from /mydir/__init__.py. The only stack trace is the line saying that __init__.py imported SpamClass. The next line is where the error occurs in SpamClass when trying to import get_hash. Why is this? A: This is a pretty easy problem to encounter. What's happening is that the interpreter evaluates your __init__.py file line by line. When you have the following code: import mydir.utils def get_hash(): return 1 The interpreter will suspend processing __init__.py at the point of import mydir.utils until it has fully executed 'mydir/utils.py'. So when utils.py attempts to import get_hash(), it isn't defined because the interpreter hasn't gotten to its definition yet. A: To add to what the others have said, another good approach to avoiding circular import problems is to avoid from module import stuff. If you just do standard import module at the top of each script, and write module.stuff in your functions, then by the time those functions run, the import will have finished and the module members will all be available. You then also don't have to worry about situations where some modules can update/change one of their members (or have it monkey-patched by a naughty third party). If you'd imported from the module, you'd still have your old, out-of-date copy of the member. Personally, I only use from-import for simple, dependency-free members that I'm likely to refer to a lot: in particular, symbolic constants. A: In absence of more information, I would say you have a circular import that you aren't working around. The simplest, most obvious fix is to not put anything in mydir/__init__.py that you want to use from any module inside mydir. So, move your get_hash function to another module inside the mydir package, and import that module where you need it.
Python - import error
I've done what I shouldn't have done and written 4 modules (6 hours or so) without running any tests along the way. I have a method inside of /mydir/__init__.py called get_hash(), and a class inside of /mydir/utils.py called SpamClass. /mydir/utils.py imports get_hash() from /mydir/__init__. /mydir/__init__.py imports SpamClass from /mydir/utils.py. Both the class and the method work fine on their own but for some reason if I try to import /mydir/, I get an import error saying "Cannot import name get_hash" from /mydir/__init__.py. The only stack trace is the line saying that __init__.py imported SpamClass. The next line is where the error occurs in SpamClass when trying to import get_hash. Why is this?
[ "This is a pretty easy problem to encounter. What's happening is this that the interpreter evaluates your __init__.py file line-by line. When you have the following code:\n import mydir.utils\n def get_hash(): return 1\n\nThe interpreter will suspend processing __init__.py at the point of import mydir.utils until it has fully executed 'mydir/utils.py' So when utils.py attempts to import get_hash(), it isn't defined because the interpreter hasn't gotten to it's definition yet.\n", "To add to what the others have said, another good approach to avoiding circular import problems is to avoid from module import stuff.\nIf you just do standard import module at the top of each script, and write module.stuff in your functions, then by the time those functions run, the import will have finished and the module members will all be available.\nYou then also don't have to worry about situations where some modules can update/change one of their members (or have it monkey-patched by a naughty third party). If you'd imported from the module, you'd still have your old, out-of-date copy of the member.\nPersonally, I only use from-import for simple, dependency-free members that I'm likely to refer to a lot: in particular, symbolic constants.\n", "In absence of more information, I would say you have a circular import that you aren't working around. The simplest, most obvious fix is to not put anything in mydir/__init__.py that you want to use from any module inside mydir. So, move your get_hash function to another module inside the mydir package, and import that module where you need it.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "python", "python_import" ]
stackoverflow_0002647088_python_python_import.txt
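A sketch of the restructure the last answer suggests — move get_hash into its own module so all imports flow one way; the module name hashing.py is illustrative and the bodies are stubs:

    # mydir/hashing.py  (new module; owns get_hash, imports nothing from mydir)
    def get_hash():
        return 1   # stand-in for the real implementation

    # mydir/utils.py
    from mydir.hashing import get_hash

    class SpamClass(object):
        def fingerprint(self):
            return get_hash()

    # mydir/__init__.py  -- no longer circular
    from mydir.hashing import get_hash
    from mydir.utils import SpamClass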
Q: Unique user ID in a Pylons web application What is the best way to create a unique user ID in Python, using UUID? A: I'd go with uuid from uuid import uuid4 def new_user_id(): return uuid4().hex
Unique user ID in a Pylons web application
What is the best way to create a unique user ID in Python, using UUID?
[ "I'd go with uuid\nfrom uuid import uuid4\ndef new_user_id():\n return uuid4().hex\n\n" ]
[ 8 ]
[]
[]
[ "cassandra", "pylons", "python", "uuid" ]
stackoverflow_0002647080_cassandra_pylons_python_uuid.txt
Q: graphviz segmentation fault I'm building a graph with many nodes, around 3000. I wrote a simple python program to do the trick with graphviz, but it gives me segmentation fault and I don't know why, if the graph is too big or if I'm missing something. The code is: #!/usr/bin/env python # Import graphviz import sys sys.path.append('..') sys.path.append('/usr/lib/graphviz') import gv # Import pygraph from pygraph.classes.graph import graph from pygraph.classes.digraph import digraph from pygraph.algorithms.searching import breadth_first_search from pygraph.readwrite.dot import write # Graph creation gr = graph() file = open('nodes.dat', 'r') line = file.readline() while line: gr.add_nodes([line[0:-1]]) line = file.readline() file.close() print 'nodes finished, beginning edges' edges = open('edges_ok.dat', 'r') edge = edges.readline() while edge: gr.add_edge((edge.split()[0], edge.split()[1])) edge = edges.readline() edges.close() print 'edges finished' print 'Drawing' # Draw as PNG dot = write(gr) gvv = gv.readstring(dot) gv.layout(gvv,'dot') gv.render(gvv,'svg','graph.svg') and it crashes at the gv.layout() call. The files are something like: nodes: node1 node2 node3 edges_ok: node1 node2 node2 node3 A: I changed the layout type from dot to neato and that solved the problem. I searched a bit and it seems that the dot layout is a bit faulty on large graphs.
graphviz segmentation fault
I'm building a graph with many nodes, around 3000. I wrote a simple python program to do the trick with graphviz, but it gives me segmentation fault and I don't know why, if the graph is too big or if I'm missing something. The code is: #!/usr/bin/env python # Import graphviz import sys sys.path.append('..') sys.path.append('/usr/lib/graphviz') import gv # Import pygraph from pygraph.classes.graph import graph from pygraph.classes.digraph import digraph from pygraph.algorithms.searching import breadth_first_search from pygraph.readwrite.dot import write # Graph creation gr = graph() file = open('nodes.dat', 'r') line = file.readline() while line: gr.add_nodes([line[0:-1]]) line = file.readline() file.close() print 'nodes finished, beginning edges' edges = open('edges_ok.dat', 'r') edge = edges.readline() while edge: gr.add_edge((edge.split()[0], edge.split()[1])) edge = edges.readline() edges.close() print 'edges finished' print 'Drawing' # Draw as PNG dot = write(gr) gvv = gv.readstring(dot) gv.layout(gvv,'dot') gv.render(gvv,'svg','graph.svg') and it crashes at the gv.layout() call. The files are something like: nodes: node1 node2 node3 edges_ok: node1 node2 node2 node3
[ "I changed the layout type from dot to neato and that solved the problem.\nI searched a bit and it seems that the dot layout is a bit faulty on large graphs.\n" ]
[ 6 ]
[]
[]
[ "graphviz", "python", "segmentation_fault" ]
stackoverflow_0002628972_graphviz_python_segmentation_fault.txt
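The fix amounts to a one-line change in the question's script (reusing its dot string and gv handle); sfdp, Graphviz's engine aimed at large graphs, may also be worth trying if the installed build provides it:

    gvv = gv.readstring(dot)
    gv.layout(gvv, 'neato')   # the 'dot' engine segfaulted at ~3000 nodes
    gv.render(gvv, 'svg', 'graph.svg')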
Q: Python List length as a string Is there a preferred (not ugly) way of outputting a list length as a string? Currently I am nesting function calls like so: print "Length: %s" % str(len(self.listOfThings)) This seems like a hack solution, is there a more graceful way of achieving the same result? A: You don't need the call to str: print "Length: %s" % len(self.listOfThings) Note that using % is being deprecated, and you should prefer to use str.format if you are using Python 2.6 or newer: print "Length: {0}".format(len(self.listOfThings)) A: "Length: %d" % len(self.listOfThings) should work great. The point of string formatting is to make your data into a string, so calling str is not what you want: provide the data itself, in this case an int. An int can be formatted many ways, the most common being %d, which provides a decimal representation of it (the way we're used to looking at numbers). For arbitrary stuff you can use %s, which calls str on the object being represented; calling str yourself should never be necessary. I would also consider "Length: %d" % (len(self.listOfThings),)—some people habitually use tuples as the argument to str.__mod__ because the way it works is sort of funny and they want to provide something more consistent. If I was using print in particular, I might just use print "Length:", len(self.listOfThings). I seldom actually use print, though. A: Well, you can leave out the str() call, but that's about it. How come calling functions is "a hack"?
Python List length as a string
Is there a preferred (not ugly) way of outputting a list length as a string? Currently I am nesting function calls like so: print "Length: %s" % str(len(self.listOfThings)) This seems like a hack solution, is there a more graceful way of achieving the same result?
[ "You don't need the call to str:\nprint \"Length: %s\" % len(self.listOfThings)\n\nNote that using % is being deprecated, and you should prefer to use str.format if you are using Python 2.6 or newer:\nprint \"Length: {0}\".format(len(self.listOfThings)) \n\n", "\"Length: %d\" % len(self.listOfThings) should work great. \nThe point of string formatting is to make your data into a string, so calling str is not what you want: provide the data itself, in this case an int. An int can be formatted many ways, the most common being %d, which provides a decimal representation of it (the way we're used to looking at numbers). For arbitrary stuff you can use %s, which calls str on the object being represented; calling str yourself should never be necessary.\nI would also consider \"Length: %d\" % (len(self.listOfThings),)—some people habitually use tuples as the argument to str.__mod__ because the way it works is sort of funny and they want to provide something more consistent.\nIf I was using print in particular, I might just use print \"Length:\", len(self.listOfThings). I seldom actually use print, though.\n", "Well, you can leave out the str() call, but that's about it. How come is calling functions \"a hack\"?\n" ]
[ 9, 2, 1 ]
[]
[]
[ "conventions", "python", "string" ]
stackoverflow_0002647672_conventions_python_string.txt
Q: customizing Django look and feel in Python I am learning Django and got it to work with wsgi. I'm following the tutorial here: http://docs.djangoproject.com/en/1.1/intro/tutorial01/ My question is: how can I customize the look and feel of Django? Is there a repository of templates that "look good", kind of like there are for Wordpress, that I can start from? I find the tutorial counterintuitive in that it goes immediately toward customizing the admin page of Django, rather than the main pages visible to users of the site. Is there an example of a "typical" Django site, with a decent template, that I can look at and build on/modify? The polls application is again not very representative since it's so specialized. Any references on this would be greatly appreciated. Thanks. A: Search for generic CSS/HTML templates, and add in the Django template language where you need it. Because unless you are trying to skin a particular app (such as the admin system), there is nothing Django-specific about any of your HTML. A: The fact that you're thinking in terms of Wordpress templates, and that you think the tutorial's poll application is highly specialised, are hints that you haven't really grasped what Django is. It isn't a content management system or a blog engine, although it can be used to build those things. There's no such thing as a typical Django site, and it simply doesn't make sense to have pre-packaged templates, because the front end could be absolutely anything at all - like a poll. You write the template like you would write any standalone HTML+CSS page, perhaps with placeholders for the content, then turn those placeholders into actual Django template tags. If you know how to write HTML, then you know how to make a Django template. A: Actually Django does not have a "look and feel". You are probably referring to the built in Django Admin application. That app comes with its own templates. There are third party applications that can change the Admin interface, Django Grappelli is a great example. For any other application you want to build yourself, or download. Most likely you'll have to do the templates yourself. In order to come up with something pretty you need to learn about CSS/HTML/JS and design principles as the Django Templates will quite likely be out of your way. I always recommend HTML Dog for learning the basics of HTML, CSS and JS.
customizing Django look and feel in Python
I am learning Django and got it to work with wsgi. I'm following the tutorial here: http://docs.djangoproject.com/en/1.1/intro/tutorial01/ My question is: how can I customize the look and feel of Django? Is there a repository of templates that "look good", kind of like there are for Wordpress, that I can start from? I find the tutorial counterintuitive in that it goes immediately toward customizing the admin page of Django, rather than the main pages visible to users of the site. Is there an example of a "typical" Django site, with a decent template, that I can look at and build on/modify? The polls application is again not very representative since it's so specialized. Any references on this would be greatly appreciated. Thanks.
[ "Search for generic CSS/HTML templates, and add in the Django template language where you need it. Because unless you are trying to skin a particular app (such as the admin system), there is nothing Django-specific about any of your HTML.\n", "The fact that you're thinking in terms of Wordpress templates, and that you think the tutorial's poll application is highly specialised, are hints that you haven't really grasped what Django is. It isn't a content management system or a blog engine, although it can be used to build those things. \nThere's no such thing as a typical Django site, and it simply doesn't make sense to have pre-packaged templates, because the front end could be absolutely anything at all - like a poll.\nYou write the template like you would write any standalone HTML+CSS page, perhaps with placeholders for the content, then turn those placeholders into actual Django template tags. If you know how to do write HTML, then you know how to make a Django template.\n", "Actually Django does not have a \"look and feel\". You are probably referring to the built in Django Admin application. That app comes with its own templates.\nThere are third party applications that can change the Admin interface, Django Grapelli is a great example.\nFor any other application you want to build yourself, or download. Most likely you'll have to do the templates yourself. In order to come up with something pretty you need to learn about CSS/HTML/JS and design principles as the Django Templates will quite likely be out of your way.\nI always recommend HTML Dog for learning the basics of HTML, CSS and JS.\n" ]
[ 5, 2, 1 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0002647098_django_django_templates_python.txt
Q: How can I tell what directory an imported library comes from in python? I'm trying to modify a python library that I downloaded and am using. But the changes I'm making aren't doing anything. So I suspect that python is importing a different copy of this library from somewhere else on the filesystem. So... When I run import foolib in python, how can I tell where on the filesystem it's getting that library from? A: the correct answer is to use sys.modules... it works on everything, even sys. sys.modules is a dictionary where the keys are the imported names (modules or packages), and the values are their respective locations. here is some usage output from my Mac: $ python Python 2.5.1 (r251:54863, Feb 9 2009, 18:49:36) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import sys, os, django, google >>> sys.modules['sys'] <module 'sys' (built-in)> >>> sys.modules['os'] <module 'os' from '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/os.pyc'> >>> sys.modules['django'] <module 'django' from '/Library/Python/2.5/site-packages/Django-1.1.1-py2.5.egg/django/__init__.pyc'> >>> sys.modules['google'] <module 'google' from '/usr/local/google_appengine/google/__init__.py'> A: import foolib print foolib.__file__ Unfortunately, this only works for some modules. E.g. it works on a module I wrote, but not on sys. A: Look at the foolib.__file__.
How can I tell what directory an imported library comes from in python?
I'm trying to modify a python library that I downloaded and am using. But the changes I'm making aren't doing anything. So I suspect that python is importing a different copy of this library from somewhere else on the filesystem. So... When I run import foolib in python, how can I tell where on the filesystem it's getting that library from?
[ "the correct answer is to use sys.modules... it works on everything, even sys. sys.modules is a dictionary where the keys are the imported names (modules or packages), and the values are their respective locations. here is some usage output from my Mac:\n$ python\nPython 2.5.1 (r251:54863, Feb 9 2009, 18:49:36) \n[GCC 4.0.1 (Apple Inc. build 5465)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys, os, django, google\n>>> sys.modules['sys']\n<module 'sys' (built-in)>\n>>> sys.modules['os']\n<module 'os' from '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/os.pyc'>\n>>> sys.modules['django']\n<module 'django' from '/Library/Python/2.5/site-packages/Django-1.1.1-py2.5.egg/django/__init__.pyc'>\n>>> sys.modules['google']\n<module 'google' from '/usr/local/google_appengine/google/__init__.py'>\n\n", "import foolib\nprint foolib.__file__\n\nUnfortunately, this only works for some modules. E.g. it works on a module I wrote, but not on sys.\n", "Look at the foolib.__file__.\n" ]
[ 7, 6, 2 ]
[]
[]
[ "import", "python" ]
stackoverflow_0002647862_import_python.txt
Q: What are 'len', 'dir', 'vars' named? I was wondering what language to use when talking about a function that takes in a specific object, acts on it and returns something else. Clearly they're functions, but I was wondering if there's a more specific term. A couple examples of Python built-in functions that fit this spec are: 'len', 'dir', 'vars' I thought it was 'predicate', but apparently that's specific to functions that return a boolean value. A: There isn't really a generic term for these kinds of functions, although Python internally uses 'inquiry' for this kind of function. I rarely see them described as anything other than just plain 'function', though. A: Call them functions. That's something everyone will understand. You could also call them subroutines, methods, or procedures, but sometimes those have specific and different meanings in different languages. But "Function" is something most people will understand, no matter the language (even though there may be slight differences from one programming language to the next). A: They're unary functions, since they have a single parameter, but they're nothing more special than that. A: This is the traditional meaning of "function" in the mathematical sense. What you're describing, more than anything else, should be called a function. To distinguish it from a function that has side effects, you can also call it a "pure function" — this means the function will always return the same result given the same argument and won't affect anything else. A: I'd call them intrinsic functions.
What are 'len', 'dir', 'vars' named?
I was wondering what language to use when talking about a function that takes in a specific object, acts on it and returns something else. Clearly they're functions, but I was wondering if there's a more specific term. A couple examples of Python built-in functions that fit this spec are: 'len', 'dir', 'vars' I thought it was 'predicate', but apparently that's specific to functions that return a boolean value.
[ "There isn't really a generic term for these kinds of functions, although Python internally uses 'inquiry' for this kind of function. I rarely see them described as anything other than just plain 'function', though.\n", "Call them functions. That's something everyone will understand. You could also call them subroutines, methods, or procedures, but sometimes those have specific and different meanings in different languages. But \"Function\" is something most people will understand, no matter then language (even though there may be slight differences from one programming language to the next).\n", "They're unary functions, since they have a single parameter, but they're nothing more special than that.\n", "This is the traditional meaning of \"function\" in the mathematical sense. What you're describing, more than anything else, should be called a function. To distinguish it from a function that has side effects, you can also call it a \"pure function\" — this means the function will always return the same result given the same argument and won't affect anything else.\n", "I'd call them intrinsic functions.\n" ]
[ 6, 5, 2, 2, 1 ]
[]
[]
[ "computer_science", "python" ]
stackoverflow_0002648121_computer_science_python.txt
Q: Connecting to Python XML RPC from the Mac I wrote an XML RPC server in python and a simple Test Client for it in python. The Server runs on a linux box. I tested it by running the python client on the same linux machine and it works. I then tried to run the python client on a Mac and I get the following error socket.error: (61, 'Connection Refused') I can ping and ssh into the linux machine from the Mac. So I don't think it's a configuration or firewall error. Does anyone have any idea what could be going wrong? The code for the client is as below: import xmlrpclib s = xmlrpclib.ServerProxy('http://143.252.249.141:8000') print s.GetUsers() print s.system.listMethods() A: "Connection Refused" means the connection was REFUSED - the machine 143.252.249.141 is up, and in the network, but is not accepting connections on port 8000 - it is actively refusing them. So maybe the server software isn't running on the server? Or is running on another port? Or is bound to a different IP address?
Connecting to Python XML RPC from the Mac
I wrote an XML-RPC server in Python and a simple test client for it in Python. The server runs on a Linux box. I tested it by running the Python client on the same Linux machine and it works. I then tried to run the Python client on a Mac and I get the following error: socket.error: (61, 'Connection Refused') I can ping and SSH into the Linux machine from the Mac, so I don't think it's a configuration or firewall error. Does anyone have any idea what could be going wrong? The code for the client is below: import xmlrpclib s = xmlrpclib.ServerProxy('http://143.252.249.141:8000') print s.GetUsers() print s.system.listMethods()
[ "\"Connection Refused\" means the connection was REFUSED - the machine 143.252.249.141 is up, and in the network, but is not accepting connections on port 8000 - it is actively refusing them.\nSo maybe the server software isn't running on the server? Or is running in another port? Or is bound to a different IP address?\n" ]
[ 1 ]
[]
[]
[ "macos", "python", "xml_rpc" ]
stackoverflow_0002648212_macos_python_xml_rpc.txt
Q: How would I manage to install Python's boto library on shared hosting?

How would I manage to install Python's boto library on shared hosting?

A: Why install virtualenv? I would try:

easy_install boto

or

pip install boto

pip and easy_install are Python tools for installing other packages. Who is your hosting service? If they have these utilities, using them would be the easiest route. http://www.saltycrane.com/blog/2010/02/how-install-pip-ubuntu/ That link will tell you how to install pip and easy_install (because easy_install is part of the setuptools package) under a system that uses apt. Otherwise, look up instructions specific to your system.

A: If you can SSH in, then I would install virtualenv and install boto in a virtual Python environment. It's surprisingly easy and fully featured.
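A sketch of the virtualenv route from the second answer, assuming SSH access and a virtualenv binary on the host; the paths and script name are illustrative:

virtualenv ~/env                  # isolated environment in your home directory
~/env/bin/pip install boto        # installs boto without touching system site-packages
~/env/bin/python myscript.py      # run your code with the environment's interpreter

This avoids needing write access to the system Python, which is usually the sticking point on shared hosting.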
How would I manage to install Python's boto library on shared hosting?
How would I manage to install Python's boto library on shared hosting?
[ "Why install virtualenv? I would try:\neasy_install boto\nor\npip install boto\npip and easy_install are python tools for installing other packages. Who is your hosting service? If they have these utilities, using them would be the easiest route.\nhttp://www.saltycrane.com/blog/2010/02/how-install-pip-ubuntu/\nThat link will tell you how to install pip and easy_install (because easy_install is part of the setup tools package) under a system that uses apt. Otherwise, look up instructions specific to your system.\n", "If you can SSH in, then I would install virtualenv and install bobo in a virtual Python environment. It's surprisingly easy and fully featured.\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002646002_python.txt
Q: Python unit test. How to add some sleeping time between test cases?

I am using Python's unit test module. I am wondering whether there is any way to add some delay between every two test cases, because my unit tests just make HTTP requests and I guess the server may block frequent requests from the same IP.

A: Put a sleep inside the tearDown method of your TestCase:

import time

class ExampleTestCase(unittest.TestCase):
    def tearDown(self):
        time.sleep(1)  # sleep time in seconds

tearDown() will be executed after every test within that TestCase class. The module's documentation can be found here.

A:

import time
time.sleep(2.5)  # sleeps for 2.5 seconds

You might want to consider making the delay a random value between x and y.
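A minimal sketch combining both answers: the tearDown hook from the first with the randomized delay suggested by the second (the 0.5 and 2.0 second bounds are illustrative):

import random
import time
import unittest

class ExampleTestCase(unittest.TestCase):
    def tearDown(self):
        # a random interval keeps requests from arriving at a fixed rate
        time.sleep(random.uniform(0.5, 2.0))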
Python unit test. How to add some sleeping time between test cases?
I am using Python's unit test module. I am wondering whether there is any way to add some delay between every two test cases, because my unit tests just make HTTP requests and I guess the server may block frequent requests from the same IP.
[ "Put a sleep inside the tearDown method of your TestCase\nimport time\n\nclass ExampleTestCase(unittest.TestCase):\n def tearDown(self):\n time.sleep(1) # sleep time in seconds\n\ntearDown() will be executed after every test within that TestCase class.\nThe modules documentation can be found here.\n", "import time\ntime.sleep(2.5) # sleeps for 2.5 seconds\n\nYou might want to consider making the delay a random value between x and y.\n" ]
[ 21, 3 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0002648329_python_unit_testing.txt
Q: virtual serial port on Arch linux

I am using Arch Linux and I need to create a virtual serial port on it. I tried everything, but nothing seems to work. All I want is to connect that virtual port to another virtual port over TCP and then use it in my Python application to communicate with a Python application on the other side. Is that possible? Please help me. Thanks.

A: The socat command is the solution. First you need to install socat:

pacman -S socat

Then run this in a console (you should be logged in as root):

socat PTY,link=/dev/ttyVirtualS0,echo=0 PTY,link=/dev/ttyVirtualS1,echo=0

and now we have two virtual serial ports which are virtually connected:

/dev/ttyVirtualS0 <-------> /dev/ttyVirtualS1
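The question also asks about bridging over TCP, which socat can do as well. A sketch under the assumption that one machine listens on port 5555 (the host name and port are illustrative):

# on the listening machine
socat TCP-LISTEN:5555,reuseaddr PTY,link=/dev/ttyVirtualS1,echo=0
# on the connecting machine
socat PTY,link=/dev/ttyVirtualS0,echo=0 TCP:remotehost:5555

To verify a locally connected pair from Python, you can write to one end and read from the other; this assumes the pyserial package is installed and socat is already running as shown in the answer:

import serial  # pyserial

a = serial.Serial('/dev/ttyVirtualS0', timeout=1)
b = serial.Serial('/dev/ttyVirtualS1', timeout=1)
a.write('hello\n')   # bytes written to one end...
print b.readline()   # ...come out of the other: 'hello\n'
a.close()
b.close()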
virtual serial port on Arch linux
I am using Arch Linux and I need to create a virtual serial port on it. I tried everything, but nothing seems to work. All I want is to connect that virtual port to another virtual port over TCP and then use it in my Python application to communicate with a Python application on the other side. Is that possible? Please help me. Thanks.
[ "socat command is solution.\nFirst you need to install socat:\npacman -S socat\nJust insert this in console, but first you should be login as root:\nsocat PTY,link=/dev/ttyVirtualS0,echo=0 PTY,link=/dev/ttyVirtualS1,echo=0\nand now we have two virtual serial ports which are virtualy connected:\n/dev/ttyVirtualS0 <-------> /dev/ttyVirtualS1\n" ]
[ 3 ]
[]
[]
[ "python" ]
stackoverflow_0002119217_python.txt
Q: Python Pre-testing for exceptions when coverage fails

I recently came across a simple but nasty bug. I had a list and I wanted to find the smallest member in it. I used Python's built-in min(). Everything worked great until in some strange scenario the list was empty (due to strange user input I could not have anticipated). My application crashed with a ValueError (BTW - not documented in the official docs). I have very extensive unit tests and I regularly check coverage to avoid surprises like this. I also use Pylint (everything is integrated in PyDev) and I never ignore warnings, yet I failed to catch this bug before my users did. Is there anything I can change in my methodology to avoid this kind of runtime error (which would have been caught at compile time in Java / C#)? I'm looking for something more than wrapping my code with a big try-except. What else can I do? How many other built-in Python functions are hiding nasty surprises like this?

A: The problem here is that malformed external input crashed your program. The solution is to exhaustively unit test possible input scenarios at the boundaries of your code. You say your unit tests are 'extensive', but you clearly hadn't tested for this possibility. Code coverage is a useful tool, but it's important to remember that covering code is not the same as thoroughly testing it. Thorough testing is a combination of covering usage scenarios as well as lines of code. The methodology I use is to trust internal callers, but never to trust external callers or input. So I explicitly don't unit test for the empty list case in any code beyond the first function that receives the external input. But that input function should be exhaustively covered. In this case I think the library's exception is reasonable behaviour - it makes no sense to ask for the min of an empty list. The library can't legitimately set a value such as 0 for you, since you may be dealing with negative numbers, for example. I think the empty list should never have reached the code that asks for the min - it should have been identified at input, and either raised an exception there, or set it to 0 if that works for you, or whatever else it is that does work for you.

A: Even in Java/C#, a class of exceptions, the RuntimeErrors, are unchecked and will not be detected by the compiler (that's why they're called RuntimeError, not CompileError). In Python, certain exceptions such as KeyboardInterrupt are particularly hairy since they can be raised at practically any arbitrary point in the program. "I'm looking for something more than wrapping my code with a big try-except." Anything but that, please. It is much better to let exceptions reach the user and halt the program than to let errors pass around silently (Zen of Python). Unlike Java, Python does not require all exceptions to be caught, because requiring that makes it too easy for programmers to ignore the exception (by writing a blank exception handler). Just relax and let the error halt the program; let the user report it to you, so you can fix it. The other alternative is you stepping into a debugger for forty-two hours because customers' data is getting corrupted everywhere due to a blank mandatory exception handler. So, what you should change in your methodology is the thinking that exceptions are bad; they're not pretty, but they're better than the alternatives.
A: You could have used randomized testing:

#!/usr/bin/env python
import random
from peckcheck import TestCase, an_int, main

def a_seq(generator):
    return lambda size: [generator(size)
                         for _ in xrange(random.randrange(size))]

class TestMin(TestCase):
    def testInputNoThrow(self, x=a_seq(an_int)):
        min(x)

if __name__=="__main__":
    main()

To install peckcheck, type:

$ pip install http://github.com/downloads/zed/peckcheck/peckcheck-0.1.v2.6.tar.gz

Or just grab peckcheck.py

A: I don't know of a direct answer to your question; I, too, would love it if pylint warned about such possibilities. My general practice, given that empty lists cause problems in all sorts of situations, is to check lists for truth before using them; for example:

val = min(vals) if vals else 0

In many cases this is 'free', since you often need to check for None anyway. It can also pay off performance-wise to special-case empty lists, to avoid, e.g., starting a new thread, process or database transaction to process zero items.
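A small sketch tying the first and last answers together: validate the untrusted input once, at the boundary, with either an explicit default or a loud, descriptive failure (the function name and message are illustrative):

def smallest(vals, default=None):
    # Boundary check: an empty input either falls back to an explicit
    # default or fails loudly here, instead of letting min() raise a
    # bare ValueError deep inside the program.
    if not vals:
        if default is not None:
            return default
        raise ValueError("expected a non-empty sequence of values")
    return min(vals)

print smallest([3, 1, 2])       # 1
print smallest([], default=0)   # 0
# smallest([])                  # would raise ValueError with a clear message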
Python Pre-testing for exceptions when coverage fails
I recently came across a simple but nasty bug. I had a list and I wanted to find the smallest member in it. I used Python's built-in min(). Everything worked great until in some strange scenario the list was empty (due to strange user input I could not have anticipated). My application crashed with a ValueError (BTW - not documented in the official docs). I have very extensive unit tests and I regularly check coverage to avoid surprises like this. I also use Pylint (everything is integrated in PyDev) and I never ignore warnings, yet I failed to catch this bug before my users did. Is there anything I can change in my methodology to avoid this kind of runtime error (which would have been caught at compile time in Java / C#)? I'm looking for something more than wrapping my code with a big try-except. What else can I do? How many other built-in Python functions are hiding nasty surprises like this?
[ "The problem here is that malformed external input crashed your program. The solution is to exhaustively unit test possible input scenarios at the boundaries of your code. You say your unit tests are 'extensive', but you clearly hadn't tested for this possibility. Code coverage is a useful tool, but it's important to remember that covering code is not the same as thoroughly testing it. Thorough testing is a combination of covering usage scenarios as well as lines of code.\nThe methodology I use is to trust internal callers, but never to trust external callers or input. So I explicitly don't unit test for the empty list case in any code beyond the first function that receives the external input. But that input function should be exhaustively covered.\nIn this case I think the library's exception is reasonable behaviour - it makes no sense to ask for the min of an empty list. The library can't legitimately set a value such as 0 for you, since you may be dealing with negative numbers, for example.\nI think the empty list should never have reached the code that asks for the min - it should have been identified at input, and either raised an exception there, or set it to 0 if that works for you, or whatever else it is that does work for you.\n", "Even in Java/C#, a class of exceptions the RuntimeError are unchecked and will not be detected by the compiler (that's why they're called RuntimeError not CompileError).\nIn python, certain exceptions such as KeyboardInterrupt are particularly hairy since it can be raised practically at any arbitrary point in the program.\n\nI'm looking for something more than wrapping my code with a big try-except. \n\nAnything but that please. It is much better to let exceptions get to user and halt the program rather than letting error pass around silently (Zen of Python). \nUnlike Java, Python does not require all Exceptions to be caught because requiring all Exceptions to be caught makes it too easy for programmers to ignore the Exception (by writing blank exception handler).\nJust relax, let the error halt; let the user report it to you, so you can fix it. The other alternative is you stepping into a debugger for forty-two hours because customer's data is getting corrupted everywhere due to a blank mandatory exception handler.\nSo, what you should change in your methodology is thinking that exception is bad; they're not pretty, but they're better than the alternatives.\n", "You could have used randomized testing:\n#!/usr/bin/env python\nimport random\nfrom peckcheck import TestCase, an_int, main\n\ndef a_seq(generator):\n return lambda size: [generator(size) \n for _ in xrange(random.randrange(size))] \n\nclass TestMin(TestCase):\n def testInputNoThrow(self, x=a_seq(an_int)):\n min(x)\n\nif __name__==\"__main__\":\n main()\n\nTo install peckcheck, type: \n$ pip install http://github.com/downloads/zed/peckcheck/peckcheck-0.1.v2.6.tar.gz\n\nOr just grub peckcheck.py\n", "I don't know of a direct answer to your question; I, too, would love it if pylint warned about such possibilities. My general practice, given that empty lists cause problems in all sorts of situations, is to check lists for truth before using them; for example:\nval = min(vals) if vals else 0\n\nIn many cases this is 'free', since you often need to check for None anyway. It can also pay off performance-wise to special-case empty lists, to avoid, i.e. starting a new thread, process or database transaction to process zero items.\n" ]
[ 7, 4, 1, 0 ]
[]
[]
[ "code_coverage", "exception_handling", "python", "runtime_error", "unit_testing" ]
stackoverflow_0002647790_code_coverage_exception_handling_python_runtime_error_unit_testing.txt