Q: SPARQL Query gives unexpected result

I hope someone can help me with this probably easy-to-solve problem: I want to run a SPARQL query against the following RDF (written in N3; the RDF/XML version sits here). It describes a journal article, along with the journal, the author and the publisher:

@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix ex: <http://example.org/thesis/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<ex:XY> a bibo:Article;
    dc:creator ex:umstaetter;
    dc:date "2008-11-01";
    dc:isPartOf ex:bibdienst;
    dc:title "DDC in Europa"@de;
    bibo:endPage "1221";
    bibo:issue "11";
    bibo:language "de";
    bibo:pageStart "1194";
    bibo:uri <http://www.zlb.de/Erschliessung020309BD.pdf>;
    bibo:volume "42" .

<ex:bibdienst> a bibo:Journal;
    dc:publisher ex:zlb;
    dc:title "Bibliotheksdienst"@de;
    bibo:issn "00061972" .

<ex:umstaetter> a foaf:person;
    foaf:birthday "1941-06-12";
    foaf:gender "Male";
    foaf:givenName "Walther";
    foaf:homepage <http://www.ib.hu-berlin.de/~wumsta/index.html>;
    foaf:img "http://libreas.eu/ausgabe7/pictures/wumstaetter1.jpg";
    foaf:name "Walther Umstätter";
    foaf:surname "Umstätter";
    foaf:title "Prof. Dr. rer. nat." .

<ex:zlb> a foaf:Organization;
    foaf:homepage <http://www.zlb.de>;
    foaf:name "Zentral- und Landesbibliothek Berlin"@de .

For testing purposes I wanted to read out the foaf:homepage of ex:zlb. The SPARQL I want to run is:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX bibo: <http://purl.org/ontology/bibo/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX ex: <http://example.org/thesis/>

SELECT ?article ?publisher ?publisher_url
WHERE {
    ?article dc:isPartOf ?journal .
    ?journal dc:publisher ?publisher .
    ?publisher foaf:homepage ?publisher_url
}

(Again: this is for testing only, since there is only one article entity.) Running it on my local machine with Python and RDFLib doesn't give me a result. Neither does the online Redland SPARQL query demo. Does anyone out there see a solution? Am I on the right path, or totally wrong?

A: I don't think that you can use a QName in an XML attribute value, e.g. as the value of rdf:about. So consider this line from your RDF/XML file:

<bibo:Journal rdf:about="ex:bibdienst">

I think that this actually says that the subject URI is "ex:bibdienst". That is a syntactically valid URI, but it is not the same URI as the one that appears as the object of the triple corresponding to this line:

<dc:isPartOf rdf:resource="http://example.org/thesis/bibdienst" />

Try replacing the QNames in XML attribute values with the corresponding full URIs and see if that fixes your problem.

A: Yep, Stephen C is totally correct that you can't use QNames in XML attributes. You can use XML entities instead, which you define in a DTD block at the top of your document, like so:

<!DOCTYPE rdf:RDF[
    <!ENTITY rdf 'http://www.w3.org/1999/02/22-rdf-syntax-ns#'>
    <!ENTITY rdfs 'http://www.w3.org/2000/01/rdf-schema#'>
    <!ENTITY xsd 'http://www.w3.org/2001/XMLSchema#'>
    <!ENTITY ex 'http://example.org/thesis/'>
    <!ENTITY dc 'http://purl.org/dc/elements/1.1/'>
    <!ENTITY foaf 'http://xmlns.com/foaf/0.1/'>
    <!ENTITY bibo 'http://purl.org/ontology/bibo/'>
]>

Then you can write the attributes like so:

<bibo:Journal rdf:about="&ex;bibdienst">

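For illustration, a minimal RDFLib sketch of the round trip, assuming the data is first fixed so that subjects use full URIs. Note that the N3 above has the same mismatch as the RDF/XML: <ex:bibdienst> in angle brackets is the literal URI "ex:bibdienst", not the prefixed name ex:bibdienst, so it never matches the ex:bibdienst object of dc:isPartOf. The file name here is hypothetical, and depending on your RDFLib version SPARQL support may require an extra plugin:

import rdflib

graph = rdflib.Graph()
graph.parse("thesis.n3", format="n3")

results = graph.query("""
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?article ?publisher ?publisher_url
    WHERE {
        ?article dc:isPartOf ?journal .
        ?journal dc:publisher ?publisher .
        ?publisher foaf:homepage ?publisher_url
    }
""")

for article, publisher, url in results:
    print article, publisher, url
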
Q: Sending a file with OBEX push in Python

How do I send a file using OBEX push to a device which has an open OBEX port, in Python? In my case it is a Bluetooth device.

A: There is an "obex-data-server" Debian package with a DBus interface which could help you. Accessing DBus through Python is fairly easy.

A: Try http://lightblue.sourceforge.net/
The documentation for the OBEX client is here: http://lightblue.sourceforge.net/doc/lightblue.obex-OBEXClient.html

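For illustration, a minimal sketch using lightblue's simple OBEX helper. The device address and file name are placeholders, and the exact service-discovery call may differ between lightblue versions -- check the docs linked above:

import lightblue

address = "00:11:22:33:44:55"  # the target device's Bluetooth address

# Find the RFCOMM channel of the device's OBEX Object Push service.
services = lightblue.findservices(addr=address, servicetype=lightblue.OBEX)
channel = services[0][1]

# Push the file to the device over OBEX.
lightblue.obex.sendfile(address, channel, "somefile.txt")
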
Q: Implement a listbox

I need to implement a listbox for a mobile. The only relevant controls are the up and down arrow keys. The listbox should display as many rows of items from a list as will fit on the screen (screen_rows), one row should be highlighted (sel_row), and the display should wrap if the user hits the up arrow when the first item is highlighted or the down arrow when the last item is highlighted (that is, the last item should be displayed and highlighted if the user hits up when the first item is highlighted). The up arrow highlights the previous item and the down arrow highlights the next item.

I've put something together, but am concerned I've missed something in testing. There must be a standard way to do this, given the prevalence of listboxes out there.

def up_key(self):
    if self.sel_row > 0:
        self.sel_row -= 1
    elif self.top_item > 0:  # top_item is the index of the first list item
        self.top_item -= 1
    elif self.top_item == 0:
        if self.n_lines >= self.screen_rows:  # n_lines is the number of items in the list
            self.top_item = self.n_lines - self.screen_rows
            self.sel_row = min(self.screen_rows-1, self.n_lines-1)
        else:
            self.top_item = 0
            self.sel_row = self.n_lines-1

def down_key(self):
    if self.sel_row < self.screen_rows-1 and self.sel_row < self.n_lines-1:
        self.sel_row += 1
    elif self.sel_row == self.screen_rows-1:
        bottom_item = self.top_item + self.screen_rows
        if bottom_item == self.n_lines:
            self.top_item = 0
            self.sel_row = 0
        if bottom_item < self.n_lines:
            self.top_item += 1
    elif self.sel_row == self.n_lines-1:
        self.top_item = 0
        self.sel_row = 0

def set_pos(self, pos):
    # display item with index pos
    if pos < 0:
        pos = 0
    elif pos >= self.n_lines:
        pos = self.n_lines - 1
    if pos < self.screen_rows:
        self.top_item = 0
        self.sel_row = pos
    else:
        self.sel_row = min(self.screen_rows, self.n_lines)//2 - 1
        self.top_item = pos - self.sel_row
        if self.top_item >= self.n_lines - self.screen_rows:
            self.top_item = self.n_lines - self.screen_rows - 1
            self.sel_row = pos - self.top_item - 1

EDIT: after each function I call a redraw-screen function, which redraws the screen with top_item at the top and sel_row highlighted. I've added a pseudo-code tag, in case someone has a version in something that's not Python.

A: Few Python programs implement listboxes from scratch -- they're normally just taken from existing toolkits. That may explain why there's no real cross-toolkit "standard"!-)

Coming to your code, I imagine set_pos is meant to be called right after either up_key or down_key is finished (you don't make this entirely clear).

My main worry would be the repetitiousness and asymmetry between your two _key routines. Surely, given that your specs are so similar for the up and down keys, you want to delegate to a single function which takes an "increment" argument, either +1 or -1. That common function could first do self.sel_row += increment, then immediately return in the common case where sel_row is still fine, i.e. if self.top_item <= self.sel_row < self.top_item + self.screen_rows; otherwise deal with the cases where sel_row has exited the currently displayed region, by adjusting self.top_item, exiting if that causes no need to wrap around, and finally dealing with the wraparound cases.

I'd be keen to apply "flat is better than nested" by repeatedly using constructs of the form "do some required state change; if things are now fine, return" rather than the logically more complex "if doing a simple thing will be OK, then do the simple thing; else if something a bit more complicated but not terrible is needed, then do the complicated something; else if we're in a really complicated case, deal with the really complicated problem" -- the latter is far more prone to error and harder to follow in any case.

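For illustration, a minimal sketch of the single-function approach described in the answer. One assumption to flag: here sel_row is treated as an absolute index into the list (matching the answer's inequality), not as a screen-relative row as in the question's code, so a redraw would highlight row sel_row - top_item:

def move(self, increment):
    # increment is +1 for the down key, -1 for the up key.
    self.sel_row += increment
    if self.top_item <= self.sel_row < self.top_item + self.screen_rows:
        return  # still inside the visible window -- nothing else to do
    if 0 <= self.sel_row < self.n_lines:
        # Selection walked off an edge of the window: scroll by one line.
        self.top_item += increment
        return
    # Ran past an end of the list: wrap around and reposition the window.
    self.sel_row %= self.n_lines
    self.top_item = max(0, min(self.sel_row, self.n_lines - self.screen_rows))

def up_key(self):
    self.move(-1)

def down_key(self):
    self.move(+1)
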
Q: Adding and subtracting dates without Standard Library

I am working in a limited environment developing a Python script. My issue is that I must be able to accomplish datetime addition and subtraction. For example, I get the following string:

"09/10/20,09:59:47-16"

which is formatted as year/month/day,hour:minute:second-ms. How would I go about adding 30 seconds to this number in Python? I can't use anything more than basic addition and subtraction and string parsing functions.

A: You are doing math in different bases. You need to parse the string and come up with a list of values, for example (year, month, day, hour, minute, second), and then do other-base math to add and subtract. For example, hours are base-24, so you need to use modulus to perform the calculations. This sounds suspiciously like homework, so I won't go into any more detail :)

A: For completeness, datetime.datetime.strptime and datetime.timedelta are included in the default Python distribution.

from datetime import datetime, timedelta

got = "09/10/20,09:59:47-16"

dt = datetime.strptime(got, '%y/%m/%d,%H:%M:%S-%f')
dt = dt + timedelta(seconds=30)
print dt.strftime('%y/%m/%d,%H:%M:%S-%f')

prints exactly

09/10/20,10:00:17-160000

Docs here.

A: The easiest way to perform date arithmetic is to not actually perform that arithmetic on the dates, but to do it on a simpler quantity.

Typically, that simpler quantity is the number of seconds since a certain epoch. Jan 1, 1970 works out nicely. Knowing the epoch, the number of days in each month, and which years are leap years, you can convert from this "number of seconds" representation to a date string pretty easily (if not slowly, in the naive version). You will also need to convert the date string back to the simpler representation. This is, again, not too hard.

Once you have those two functions, arithmetic is simple. Just add or subtract the amount of time to/from your "number of seconds" representation, then convert back to a date string.

With all that said, I hope this is a homework assignment, because you absolutely should not be writing your own date handling functions in production code.

A: Here's my solution to the problem:

s = "09/10/20,09:59:47-16"  # the input string

year = int(s[0:2])
month = int(s[3:5])
day = int(s[6:8])
hour = int(s[9:11])
minute = int(s[12:14])
second = int(s[15:17])

amount_to_add = 30
second = second + amount_to_add

days_in_month = 30
if month in (1, 3, 5, 7, 8, 10, 12):
    days_in_month = 31
if month == 2:
    days_in_month = 28
if (((year % 4 == 0) and (year % 100 != 0)) or (year % 400 == 0)) and month == 2:
    days_in_month = 29

# Carry overflow upward, digit by digit (seconds -> minutes -> hours -> ...).
if second >= 60:
    minute = minute + second / 60
    second = second % 60
if minute >= 60:
    hour = hour + minute / 60
    minute = minute % 60
if hour >= 24:
    day = day + hour / 24
    hour = hour % 24
if day > days_in_month:
    month = month + day / days_in_month
    day = day % days_in_month
if month > 12:
    year = year + month / 12
    month = month % 12

Kind of a kludge, but it gets the job done.

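For illustration, a minimal sketch of the seconds-since-an-epoch approach outlined in the third answer, using only integer arithmetic. Assumptions are noted in the comments; like the answer says, this is homework-grade code, not production date handling:

# Assumes two-digit years mean 2000-2099, dates never fall before
# 2000-01-01, and the trailing "-16" fraction is left untouched.
DAYS_BEFORE = [0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334]

def is_leap(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def to_days(y, m, d):
    # Whole days between 2000-01-01 and the given date.
    days = d - 1 + DAYS_BEFORE[m - 1]
    if m > 2 and is_leap(y):
        days += 1
    for year in range(2000, y):
        days += 366 if is_leap(year) else 365
    return days

def from_days(days):
    # Inverse of to_days(): walk forward year by year, then month by month.
    y = 2000
    while days >= (366 if is_leap(y) else 365):
        days -= 366 if is_leap(y) else 365
        y += 1
    m = 1
    while True:
        mlen = DAYS_BEFORE[m] - DAYS_BEFORE[m - 1] if m < 12 else 31
        if m == 2 and is_leap(y):
            mlen += 1
        if days < mlen:
            return y, m, days + 1
        days -= mlen
        m += 1

def add_seconds(y, mo, d, h, mi, s, delta):
    total = to_days(y, mo, d) * 86400 + h * 3600 + mi * 60 + s + delta
    days, rem = divmod(total, 86400)
    h, rem = divmod(rem, 3600)
    mi, s = divmod(rem, 60)
    y, mo, d = from_days(days)
    return y, mo, d, h, mi, s

print add_seconds(2009, 10, 20, 9, 59, 47, 30)  # (2009, 10, 20, 10, 0, 17)
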
Q: Convert to UTC Timestamp

# parses some string into that format.
datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S")

# gets the seconds from the above date.
timestamp1 = time.mktime(datetime1.timetuple())

# adds milliseconds to the above seconds.
timeInMillis = int(timestamp1) * 1000

How do I (at any point in that code) turn the date into UTC? I've been ploughing through the API for what seems like a century and cannot find anything that I can get working. Can anyone help? It's currently turning it into Eastern time, I believe (however, I'm in GMT but want UTC).

EDIT: I gave the answer to the guy who came closest to what I finally found out:

datetime1 = datetime.strptime(somestring, someformat)
timeInSeconds = calendar.timegm(datetime1.utctimetuple())
timeInMillis = timeInSeconds * 1000

:)

A: datetime.utcfromtimestamp is probably what you're looking for:

>>> timestamp1 = time.mktime(datetime.now().timetuple())
>>> timestamp1
1256049553.0
>>> datetime.utcfromtimestamp(timestamp1)
datetime.datetime(2009, 10, 20, 14, 39, 13)

A: I think you can use the utcoffset() method:

utc_time = datetime1 - datetime1.utcoffset()

The docs give an example of this using the astimezone() method here.

Additionally, if you're going to be dealing with timezones, you might want to look into the PyTZ library, which has lots of helpful tools for converting datetimes into various timezones (including between EST and UTC).

With PyTZ:

from datetime import datetime
import pytz

utc = pytz.utc
eastern = pytz.timezone('US/Eastern')

# Using datetime1 from the question
datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S")

# First, tell Python what timezone that string was in (you said Eastern)
eastern_time = eastern.localize(datetime1)

# Then convert it from Eastern to UTC
utc_time = eastern_time.astimezone(utc)

A: You probably want one of these two:

import time
import datetime

from email.Utils import formatdate

rightnow = time.time()

utc = datetime.datetime.utcfromtimestamp(rightnow)
print utc

print formatdate(rightnow)

The two outputs look like this:

2009-10-20 14:46:52.725000
Tue, 20 Oct 2009 14:46:52 -0000

A:

def getDateAndTime(seconds=None):
    """
    Converts seconds since the Epoch to a time tuple expressing UTC.
    When 'seconds' is not passed in, convert the current time instead.

    :Parameters:
        - `seconds`: time in seconds from the epoch.

    :Return:
        Time in UTC format.
    """
    return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(seconds))

This converts local time to UTC:

time.mktime(time.localtime(calendar.timegm(utc_time)))

http://feihonghsu.blogspot.com/2008/02/converting-from-local-time-to-utc.html

If converting a struct_time to seconds-since-the-epoch is done using mktime, this conversion is in the local timezone. There's no way to tell it to use any specific timezone, not even just UTC. The standard 'time' package always assumes that a time is in your local timezone.

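For illustration, a minimal end-to-end sketch of the approach from the question's EDIT (string in, UTC milliseconds out), assuming the input string already represents a UTC time:

import calendar
from datetime import datetime

somestring = "2009-10-20T14:39:13"
dt = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S")

# calendar.timegm() interprets the tuple as UTC; time.mktime() would
# interpret it in the local timezone instead, which was the original bug.
timeInSeconds = calendar.timegm(dt.utctimetuple())
timeInMillis = timeInSeconds * 1000
print timeInMillis  # 1256049553000
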
Q: Can I execute an SQL Server DTS package from a Python script?

I currently have a number of Python scripts that help prep a staging area for testing. One thing that the scripts do not handle is executing DTS packages on MS SQL Server. Is there a way to execute these packages using Python?

A: Is calling DTS from the command line an option? If so, here is an example of that:
http://www.mssqltips.com/tip.asp?tip=1007

A: The answer is yes. As mentioned by lansinwd, you'd want to use the command-line tool DTSRun. SQL Server tools will need to be installed on the machine executing the Python script. I'm not sure which packages would be needed, but the MSDN page on DTSRun should help answer that, if needed.

A basic command-line example is this:

DTSRun /S "Server[\Instance]" /N "DTS_Package_Name" /E

To run this from Python, check out: http://docs.python.org/library/os.html#process-management

From the web page:

    These functions may be used to create and manage processes.

    The various exec*() functions take a list of arguments for the new program loaded into the process.

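For illustration, a minimal sketch using the subprocess module (a common alternative to the os.exec* family linked above); the server and package names are placeholders taken from the example command:

import subprocess

# /S = server, /N = package name, /E = trusted (Windows) connection
ret = subprocess.call(['DTSRun', '/S', r'Server\Instance',
                       '/N', 'DTS_Package_Name', '/E'])
if ret != 0:
    raise RuntimeError("DTSRun failed with exit code %d" % ret)
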
Q: Preserving extent from the old image

I am using PIL 1.1.6 with Python 2.5 on Windows. In my program, I am performing a point operation (changing the pixel values) and then saving the new image. When I load the new and old images, they do not have the same extent. How can I impose the extent of the old image on the new image? My code is:

img = Image.open("D:/BTC/dada_72.tif")
out = Image.eval(img, lambda x: x * 5)
out.save("D:/BTC/dada_72_Com.tif")

A: Assuming by "extent" you mean "size" (pixels wide by pixels high), there are several options, depending on what you have as the "new" image.

If "new" is an existing image (and you want to stretch/shrink/grow it):

from PIL import Image
>>> im1 = Image.open('img1.jpg')
>>> im2 = Image.open('img2.jpg').resize(im1.size)

If you want to crop or pad "new", that's a bit more complex...

If "new" is a new blank image:

>>> im1 = Image.open('img1.jpg')
>>> im2 = Image.new(im1.mode, im1.size)

Q: How should I optimize this filesystem I/O bound program?

I have a Python program that does something like this:

1. Read a row from a CSV file.
2. Do some transformations on it.
3. Break it up into the actual rows as they would be written to the database.
4. Write those rows to individual CSV files.
5. Go back to step 1 unless the file has been totally read.
6. Run SQL*Loader and load those files into the database.

Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions, running on a quad-core server with a RAID setup of some kind.

There are a few ideas that I have to solve this:

1. Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
2. Parallelize steps 1, 2 & 3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
3. Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.

Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?

A: Poor man's map-reduce:

Use split to break the file up into as many pieces as you have CPUs.
Use batch to run your muncher in parallel.
Use cat to concatenate the results.

A: If you are I/O bound, the best way I have found to optimize is to read or write the entire file into/out of memory at once, then operate out of RAM from there on.

With extensive testing I found that my runtime ended up bound not by the amount of data I read from/wrote to disk, but by the number of I/O operations I used to do it. That is what you need to optimize.

I don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I/O for each byte, that's what you need to do.

Of course, the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time.

A: Python already does I/O buffering, and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long -- unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode.

If the OS isn't doing the right thing, you can try raising the buffer size (the third parameter to open()). For some guidance on appropriate values: given a 100 MB/s, 10 ms latency I/O system, a 1 MB I/O size will result in approximately 50% latency overhead, while a 10 MB I/O size will result in 9% overhead. If it's still I/O bound, you probably just need more bandwidth. Use your OS-specific tools to check what kind of bandwidth you are getting to/from the disks.

Also useful is to check whether step 4 is taking a lot of time executing or waiting on I/O. If it's executing, you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes.

A: Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so.

A: Use buffered writes for step 4.

Write a simple function that simply appends the output onto a string, checks the string length, and only writes when you have enough, which should be some multiple of 4k bytes. I would say start with 32k buffers and time it. You would have one buffer per file, so that most "writes" won't actually hit the disk.

A: Isn't it possible to collect a few thousand rows in RAM, then go directly to the database server and execute them? This would remove the save to and load from the disk that step 4 entails.

If the database server is transactional, this is also a safe way to do it -- just have the database begin before your first row and commit after the last.

A: The first thing is to be certain of what you should optimize. You seem to not know precisely where your time is going. Before spending more time wondering, use a performance profiler to see exactly where the time is going:
http://docs.python.org/library/profile.html

When you know exactly where the time is going, you'll be in a better position to know where to spend your time optimizing.

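For illustration, a minimal sketch of the per-file buffer described in the "buffered writes" answer above (the 32k starting size is that answer's suggestion; tune it by timing):

class BufferedWriter(object):
    """Accumulate output in memory and hit the disk only in large chunks."""

    def __init__(self, path, buffer_size=32 * 1024):
        self.f = open(path, 'w')
        self.buffer_size = buffer_size
        self.chunks = []
        self.length = 0

    def write(self, text):
        self.chunks.append(text)
        self.length += len(text)
        if self.length >= self.buffer_size:
            self.flush()

    def flush(self):
        # One real write per buffer-full of accumulated text.
        self.f.write(''.join(self.chunks))
        self.chunks = []
        self.length = 0

    def close(self):
        self.flush()
        self.f.close()
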
Q: Multithreaded Downloading Through Proxies In Python

What would be the best library for multithreaded harvesting/downloading with multiple-proxy support? I've looked at Tkinter; it looks good, but there are so many options that I'd welcome a specific recommendation. Many thanks!

A: Twisted

A: Is this something you can't just do by passing a URL to newly spawned threads and calling urllib2.urlopen in each one, or is there a more specific requirement?

A: Also take a look at http://scrapy.org/, which is a scraping framework built on top of Twisted.

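For illustration, a minimal standard-library sketch of the second answer's suggestion -- one thread per URL, with each request routed through its own proxy (all URLs and proxy addresses are placeholders):

import threading
import urllib2

def fetch(url, proxy):
    # Build an opener that sends this request through the given HTTP proxy.
    opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
    data = opener.open(url).read()
    print "%s: %d bytes via %s" % (url, len(data), proxy)

urls = ['http://example.com/a', 'http://example.com/b']
proxies = ['http://proxy1:8080', 'http://proxy2:8080']

threads = [threading.Thread(target=fetch, args=pair)
           for pair in zip(urls, proxies)]
for t in threads:
    t.start()
for t in threads:
    t.join()
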
Q: Fix permissions for rpm/setuptools packaging

I have a project that requires post-install hooks for deployment. My method is to use setuptools to generate the skeleton RPM spec file and tar up the source files. The problem is that I don't know how to control permissions with this method. The spec file looks like:

%install
python setup.py install --single-version-externally-managed --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES

%files -f INSTALLED_FILES
%defattr(755,%{user},%{user})

This works out reasonably well, in that all files get set to the appropriate user and permissions. But the directories don't have the attributes set on them. I can't tell whether this is a problem, but it does seem strange: all the directories are owned by root with 755 permissions.

Does anyone know a good (reasonably standard) way to make the directories owned by the user? I ask because my company tends to prefer packaging applications that will deploy under an application-specific role account. When I use setuptools to put the results in site-packages, .pyc files are copied over. But if I want to create a config file directory off the path, it seems like a good amount of work to get around.

A:

%defattr(755,%{user},%{user})

That line sets the default permissions, user, and group ownership on all files. You can override the default with something like:

%attr(644, <username>, <username>) </path/to/file>

If you want the default to be owned by a user other than root, then you probably need to define the 'user' macro up at the top of the spec:

%define user myusername

Q: How to store dynamically generated HTML form elements from JavaScript in Python?

I have an HTML form that a user can add an arbitrary number of input fields to through jQuery. The user is also able to remove any input field from any position. My current implementation is that each new input box has an id of "field[i]", so when the form is posted it is processed in Python as field1, field2, field3, ... fieldn:

i = 0
while self.request.get("field" + str(i)):
    temp = self.request.get("field" + str(i))
    someList.append(temp)
    i += 1

(Assume the JavaScript handles removal of deleted elements and sorts the field names prior to the POST, for simplicity.)

This approach is working for me, but is there a better way to handle this situation? I feel like this is a very brute-force method.

Platform information: Python 2.5.4; JavaScript; DHTML; jQuery; Google App Engine.

Edit: It appears that self.request.get_all() was the solution: GAE Doc

A: You could serialize the data with JavaScript and pass it in as JSON. Then you would just have a dictionary to work with in Python. You would need something like simplejson, of course.

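For illustration, a minimal sketch of the get_all() approach mentioned in the edit: give every dynamically added input the same name (here "field", a hypothetical name) instead of numbering them, and the webapp request hands back all posted values at once:

from google.appengine.ext import webapp

class FormHandler(webapp.RequestHandler):
    def post(self):
        # All values submitted under the repeated name "field", in document order.
        someList = self.request.get_all("field")
        self.response.out.write("%d fields submitted" % len(someList))
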
Q: Python callback with SWIG wrapped type

I'm trying to add a Python callback to a C++ library, as illustrated:

template<typename T>
void doCallback(shared_ptr<T> data) {
    PyObject* pyfunc; // I have this already
    PyObject* args = Py_BuildValue("(O)", data);
    PyEval_CallObject(pyfunc, args);
}

This fails because data hasn't gone through SWIG and isn't a PyObject. I tried using:

swigData = SWIG_NewPointerObj((void*)data, NULL, 0);

But because it's a template, I don't really know what to use for the second parameter. Even if I do hard-code the 'correct' SWIGTYPE, it usually segfaults on PyEval_CallObject.

So my questions are: What's the best way to invoke SWIG type wrapping? Am I even going in the right direction here? Directors looked promising for implementing a callback, but I couldn't find an example of directors with Python.

Update: The proper wrapping is getting generated. I have other functions that return shared_ptrs and can call those correctly.

A: shared_ptr<T> for unknown T isn't a type, so SWIG can't hope to wrap it. What you need to do is provide a SWIG wrapping for each instance of shared_ptr that you intend to use. So if, for example, you want to be able to call doCallback() with both shared_ptr<Foo> and shared_ptr<Bar>, you will need:

A wrapper for Foo
A wrapper for Bar
Wrappers for shared_ptr<Foo> and shared_ptr<Bar>

You make those like so:

namespace boost {
    template<class T> class shared_ptr
    {
    public:
        T * operator-> () const;
    };
}

%template(FooSharedPtr) boost::shared_ptr<Foo>;
%template(BarSharedPtr) boost::shared_ptr<Bar>;

A: My first answer misunderstood the question completely, so let's try this again.

Your central problem is the free type parameter T in the definition of doCallback. As you point out in your question, there's no way to make a SWIG object out of a shared_ptr<T> without a concrete value for T: shared_ptr<T> isn't really a type.

Thus I think that you have to specialize: for each concrete instantiation of doCallback that the host system uses, provide a template specialization for the target type. With that done, you can generate a Python-friendly wrapping of data and pass it to your Python function. The simplest technique for that is probably:

swigData = SWIG_NewPointerObj((void*)(data.get()), SWIGType_Whatever, 0);

...though this can only work if your Python function doesn't save its argument anywhere, as the shared_ptr itself is not copied.

If you do need to retain a reference to data, you'll need to use whatever mechanism SWIG usually uses to wrap shared_ptr. If there's no special-case smart-pointer magic going on, it's probably something like:

pythonData = new shared_ptr<Whatever>(data);
swigData = SWIG_NewPointerObj(pythonData, SWIGType_shared_ptr_to_Whatever, 1);

Regardless, you then have a Python-friendly SWIG object that's amenable to Py_BuildValue().

Hope this helps.

Q: Where do I start with a web bot?

I simply want to create an automated script that can run (preferably) on a web server and simply 'click' on an object of a web page. I am new to Python, or whatever language this would be used for, so I thought I would go here to ask where to start! This may sound like I want the script to scam advertisements or do something illegal, but it's simply to interact with another website.

A: It doesn't have to be Python; I've seen it done in PHP and Perl, and you can probably do it in many other languages.

The general approach is:

1) You give your app a URL, and it makes an HTTP request to that URL. I think I have seen this done with php/wget. There are probably many other ways to do it.

2) Scan the HTTP response for other URLs that you want to "click" (really, sending HTTP requests to them), and then send requests to those. Parsing the links usually requires some understanding of regular expressions (if you are not familiar with regular expressions, brush up on them -- they're important stuff ;)).

A: I would recommend using the WebBrowser control of the .NET package. You can access all the DOM elements and fully interact with any website. Here is a brief article.

If you still prefer Python, mechanize might be a good way of doing that.

A: I'd recommend the Python mechanize library. It's designed to act as a simulated browser. I've used it to drive several web interfaces from script.

A: I'd probably start with Twill -- you can use its scripting language or Python API.

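For illustration, a minimal mechanize sketch of a scripted "click" (the URL and link text are placeholders):

import mechanize

br = mechanize.Browser()
br.open("http://example.com/")

# "Click" a link by its visible text, then read the page it leads to.
response = br.follow_link(text="Next page")
print response.read()[:200]
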
Where do I start with a web bot?
I simply want to create an automatic script that can run (preferably) on a web-server, and simply 'clicks' on an object of a web page. I am new to Python or whatever language this would be used for so I thought I would go here to ask where to start! This may seem like I want the script to scam advertisements or do something illegal, but it's simply to interact with another website.
[ "It doesn't have to be Python, I've seen it done in PHP and Perl, and you can probably do it in many other languages.\nThe general approach is:\n1) You give your app a URL and it makes an HTTP request to that URL. I think I have seen this done with php/wget. Probably many other ways to do it.\n2) Scan the HTTP response for other URLs that you want to \"click\" (really, sending HTTP requests to them), and then send requests to those. Parsing the links usually requires some understanding of regular expressions (if you are not familiar with regular expressions, brush up on it - it's important stuff ;)).\n", "I would recommend using the WebBrowser control of the .NET package. You can access all the DOM elements and fully interact with any website. Here is a brief article\nIf you still prefer python, mechanize might be a good way of doing that.\n", "I'd recommend the Python mechanize library. It's designed to act as a simulated browser. I've used it to drive several web interfaces from script.\n", "I'd probably start with Twill -- you can use its scripting language or Python API.\n" ]
[ 6, 4, 2, 1 ]
[]
[]
[ "bots", "python" ]
stackoverflow_0001597833_bots_python.txt
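
A minimal mechanize sketch of the link-"clicking" idea from the answers above; the URL and link text are placeholder assumptions, not values from the thread:

    import mechanize  # third-party library suggested in the answers

    br = mechanize.Browser()
    br.set_handle_robots(False)              # optional: skip robots.txt
    br.open('http://example.com/')           # placeholder URL
    response = br.follow_link(text='Next')   # "clicks" the first matching link
    print response.read()[:200]

follow_link also accepts text_regex and url arguments when the anchor text is not known exactly.
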
Q: Is there a better, pythonic way to do this? This is my first python program - Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds. Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way? CODE : import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) Thanks. A: Congratulations, your code is very nice. There are a few little tricks you could use to make it shorter/simpler. There is a nifty object type called defaultdict which is provided by the collections module. Instead of having to check if adDict has an adId key, you can set up a defaultdict which acts like a regular dict, except that it automatically provides you with an empty set() when there is no key. So you can change if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) to simply adDict[adId].add(userId) Also, instead of for row in reader: adId = row[0] userId = row[1] you could shorten that to for adId,userId in reader: Edit: As Parker kindly points out in the comments, for key, value in adDict.iteritems(): is the most efficient way to iterate over a dict, if you are going to use both the key and value in the loop. In Python3, you can use for key, value in adDict.items(): since items() returns an iterator. #!/usr/bin/env python import csv from collections import defaultdict adDict = defaultdict(set) reader = csv.reader(open("some.csv"), delimiter=' ') for adId,userId in reader: adDict[adId].add(userId) for key,value in adDict.iteritems(): print (key, ',' , len(value)) A: the line of code: adDict[adId] = set(userId) is unlikely to do what you want -- it will treat string userId as a sequence of letters, so for example if userId was aleax you'd get a set with four items, just like, say, set(['a', 'l', 'e', 'x']). Later, an .add(userId) when userId is aleax again will add a fifth item, the string 'aleax', because .add (differently from the set initializer, which takes an iterable as its argument) takes a single item as its argument. To make a set with a single item, use set([userId]) instead. This is a reasonably frequent bug so I wanted to explain it clearly. That being said, defaultdict as suggested in other answers is clearly the right approach (avoid setdefault, that was never a good design and doesn't have good performance either, as well as being pretty murky). I would also avoid the kinda-overkill of csv in favor of a simple loop with a .split and .strip on each line... A: You could shorten the for-loop to this: for row in reader: adDict.setdefault(row[0], set()).add(row[1]) A: Instead of: for row in reader: adId = row[0] userId = row[1] Use automatic sequence unpacking: for (adId, userId) in reader: In: if ( adId in adDict ): You don't need parentheses. Instead of: if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) Use defaultdict: from collections import defaultdict adDict = defaultdict(set) # ... 
adDict[adId].add(userId) Or, if you're not allowed to use other modules by your professor, use setdefault(): adDict.setdefault(adId, set()).add(userId) When printing: for key, value in adDict.items(): print (key, ',' , len(value)) Using string formatting might be easier to format: print "%s,%s" % (key, len(value)) Or, if you're using Python 3: print ("{0},{1}".format (key, len(value))) A: Since you only have a space-delimited file, I'd do: from __future__ import with_statement from collections import defaultdict ads = defaultdict(set) with open("some.csv") as f: for ad, user in (line.split(" ") for line in f): ads[ad].add(user) for ad in ads: print "%s, %s" % (ad, len(ads[ad])) A: There are some great answers in here. One trick I particularly like is to make my code easier to reuse in future like so import csv def parse_my_file(file_name): # some existing code goes here return aDict if __name__ == "__main__": #this gets executed if this .py file is run directly, rather than imported aDict = parse_my_file("some.csv") for key, value in aDict.items(): print (key, ',' , len(value)) Now you can import your csv parser from another module and get programmatic access to aDict. A: The only changes I'd make are extracting multiple elements from the reader at once, and using string formatting for print statements. import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') # Can extract multiple elements from a list in the iteration statement: for adId, userId in reader: if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): # I believe this gives you more control over how things are formatted: print ("%s, %d" % (key, len(value))) A: Just a few bits and pieces: For extracting the row list into variables: adId, userId = row The if statement does not need braces: if adId in adDict: You could use exceptions to handle a missing Key in the dict, but both ways work well, e.g.: try: adDict[adId].add(userId) except KeyError: adDict[adId] = set(userId)
Is there a better, pythonic way to do this?
This is my first python program - Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds. Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way? CODE : import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) Thanks.
[ "Congratulations, your code is very nice.\nThere are a few little tricks you could use to make it shorter/simpler.\nThere is a nifty object type called defaultdict which is provided by the collections module. Instead of having to check if adDict has an adId key, you can set up a defaultdict which acts like a regular dict, except that it automatically provides you with an empty set() when there is no key. So you can change\nif ( adId in adDict ):\n adDict[adId].add(userId)\nelse:\n adDict[adId] = set(userId)\n\nto simply\nadDict[adId].add(userId)\n\nAlso, instead of \nfor row in reader:\n adId = row[0]\n userId = row[1]\n\nyou could shorten that to\nfor adId,userId in reader:\n\nEdit: As Parker kindly points out in the comments, \nfor key, value in adDict.iteritems():\n\nis the most efficient way to iterate over a dict, if you are going to use both\nthe key and value in the loop. In Python3, you can use\nfor key, value in adDict.items():\n\nsince items() returns an iterator. \n#!/usr/bin/env python\nimport csv\nfrom collections import defaultdict\n\nadDict = defaultdict(set)\nreader = csv.reader(open(\"some.csv\"), delimiter=' ')\nfor adId,userId in reader:\n adDict[adId].add(userId)\nfor key,value in adDict.iteritems():\n print (key, ',' , len(value))\n\n", "the line of code:\nadDict[adId] = set(userId)\n\nis unlikely to do what you want -- it will treat string userId as a sequence of letters, so for example if userId was aleax you'd get a set with four items, just like, say, set(['a', 'l', 'e', 'x']). Later, an .add(userId) when userId is aleax again will add a fifth item, the string 'aleax', because .add (differently from the set initializer, which takes an iterable as its argument) takes a single item as its argument.\nTo make a set with a single item, use set([userId]) instead.\nThis is a reasonably frequent bug so I wanted to explain it clearly. 
That being said, defaultdict as suggested in other answers is clearly the right approach (avoid setdefault, that was never a good design and doesn't have good performance either, as well as being pretty murky).\nI would also avoid the kinda-overkill of csv in favor of a simple loop with a .split and .strip on each line...\n", "You could shorten the for-loop to this:\nfor row in reader:\n adDict.setdefault(row[0], set()).add(row[1])\n\n", "Instead of:\nfor row in reader:\n adId = row[0]\n userId = row[1]\n\nUse automatic sequence unpacking:\nfor (adId, userId) in reader:\n\nIn:\nif ( adId in adDict ):\n\nYou don't need parentheses.\nInstead of:\nif ( adId in adDict ):\n adDict[adId].add(userId)\nelse:\n adDict[adId] = set(userId)\n\nUse defaultdict:\nfrom collections import defaultdict\nadDict = defaultdict(set)\n\n# ...\n\nadDict[adId].add(userId)\n\nOr, if you're not allowed to use other modules by your professor, use setdefault():\nadDict.setdefault(adId, set()).add(userId)\n\nWhen printing:\nfor key, value in adDict.items():\n print (key, ',' , len(value))\n\nUsing string formatting might be easier to format:\nprint \"%s,%s\" % (key, len(value))\n\nOr, if you're using Python 3:\nprint (\"{0},{1}\".format (key, len(value)))\n\n", "Since you only have a space-delimited file, I'd do:\nfrom __future__ import with_statement\nfrom collections import defaultdict\n\nads = defaultdict(set)\nwith open(\"some.csv\") as f:\n for ad, user in (line.split(\" \") for line in f):\n ads[ad].add(user)\n\nfor ad in ads:\n print \"%s, %s\" % (ad, len(ads[ad]))\n\n", "There are some great answers in here.\nOne trick I particularly like is to make my code easier to reuse in future like so \nimport csv\n\ndef parse_my_file(file_name):\n # some existing code goes here\n return aDict\n\nif __name__ == \"__main__\":\n #this gets executed if this .py file is run directly, rather than imported\n aDict = parse_my_file(\"some.csv\")\n for key, value in aDict.items():\n print (key, ',' , len(value))\n\nNow you can import your csv parser from another module and get programmatic access to aDict. \n", "The only changes I'd make are extracting multiple elements from the reader at once, and using string formatting for print statements.\nimport csv\n\nadDict = {}\nreader = csv.reader(open(\"some.csv\"), delimiter=' ')\n# Can extract multiple elements from a list in the iteration statement:\nfor adId, userId in reader: \n if ( adId in adDict ):\n adDict[adId].add(userId)\n else:\n adDict[adId] = set(userId)\n\nfor key, value in adDict.items():\n # I believe this gives you more control over how things are formatted:\n print (\"%s, %d\" % (key, len(value)))\n\n", "Just a few bits and pieces:\nFor extracting the row list into variables:\nadId, userId = row\n\nThe if statement does not need braces:\nif adId in adDict:\n\nYou could use exceptions to handle a missing Key in the dict, but both ways work well, e.g.:\ntry:\n adDict[adId].add(userId)\nexcept KeyError:\n adDict[adId] = set(userId)\n\n" ]
[ 18, 10, 7, 3, 3, 3, 1, 1 ]
[]
[]
[ "dictionary", "python", "set" ]
stackoverflow_0001597764_dictionary_python_set.txt
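
A quick interpreter demo of the set(userId) pitfall that the second answer warns about (values are illustrative and set ordering may differ):

    >>> set('aleax')      # a string is iterated character by character
    set(['a', 'x', 'e', 'l'])
    >>> set(['aleax'])    # the intended one-element set
    set(['aleax'])
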
Q: Adding row to numpy recarray Is there an easy way to add a record/row to a numpy recarray without creating a new recarray? Let's say I have a recarray that takes 1Gb in memory, I want to be able to add a row to it without having python take up 2Gb of memory temporarily. A: You can call yourrecarray.resize with a shape which has one more row, then assign to that new row. Of course, numpy might still have to allocate completely new memory if it just doesn't have room to grow the array in-place, but at least you stand a chance!-) Since an example was requested, here comes, modified off the canonical example list...: >>> import numpy >>> mydescriptor = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')} >>> a = numpy.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=mydescriptor) >>> print a [('M', 64.0, 75.0) ('F', 25.0, 60.0)] >>> a.shape (2,) >>> a.resize(3) >>> a.shape (3,) >>> print a [('M', 64.0, 75.0) ('F', 25.0, 60.0) ('', 0.0, 0.0)] >>> a[2] = ('X', 17.0, 61.5) >>> print a [('M', 64.0, 75.0) ('F', 25.0, 60.0) ('X', 17.0, 61.5)]
Adding row to numpy recarray
Is there an easy way to add a record/row to a numpy recarray without creating a new recarray? Let's say I have a recarray that takes 1Gb in memory, I want to be able to add a row to it without having python take up 2Gb of memory temporarily.
[ "You can call yourrecarray.resize with a shape which has one more row, then assign to that new row. Of course. numpy might still have to allocate completely new memory if it just doesn't have room to grow the array in-place, but at least you stand a chance!-)\nSince an example was requested, here comes, modified off the canonical example list...:\n>>> import numpy\n>>> mydescriptor = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')} \n>>> a = numpy.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=mydescriptor)\n>>> print a\n[('M', 64.0, 75.0) ('F', 25.0, 60.0)]\n>>> a.shape\n(2,)\n>>> a.resize(3)\n>>> a.shape\n(3,)\n>>> print a\n[('M', 64.0, 75.0) ('F', 25.0, 60.0) ('', 0.0, 0.0)]\n>>> a[2] = ('X', 17.0, 61.5)\n>>> print a\n[('M', 64.0, 75.0) ('F', 25.0, 60.0) ('X', 17.0, 61.5)]\n\n" ]
[ 10 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001598251_numpy_python.txt
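
One hedged caveat on the answer above: newer NumPy releases refuse an in-place resize while other references to the array exist, so the same example may need refcheck=False (verify against your NumPy version):

    import numpy

    mydescriptor = {'names': ('gender', 'age', 'weight'),
                    'formats': ('S1', 'f4', 'f4')}
    a = numpy.array([('M', 64.0, 75.0), ('F', 25.0, 60.0)], dtype=mydescriptor)
    a.resize(3, refcheck=False)   # skip the reference check if plain resize(3) complains
    a[2] = ('X', 17.0, 61.5)
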
Q: Printing basenames by Python How can you print the basenames of files by Python in the main folder and subfolders? My attempt #!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename <dir_name>\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) 1st problem solved with the off-by-one-error 2nd problem: The code gives me the output unsuccessfully man.aux about_8php.tex refman.pdf successful_notice.php ... I expect to get as an output aux tex pdf php ... A: Let me explain the debugging methodology a little bit. As you've encountered the situation in which len(sys.argv) != 1, you should ask yourself: "What is the actual value of len(sys.argv)? Why is it so?". The answers are: >>> len(sys.argv) 2 >>> sys.argv ['/tmp/basename.py', '/path/to/home/Desktop/pgCodes/'] I guess now the problem should become more clear. Edit: To address your second question, things you are interested in are called file extensions or suffixes, not basenames. Here is the complete solution: import sys, os def iflatten(xss): 'Iterable(Iterable(a)) -> Iterable(a)' return (x for xs in xss for x in xs) def allfiles(dir): 'str -> Iterable(str)' return iflatten(files for path, dirs, files in os.walk(dir)) def ext(path): 'str -> str' (root, ext) = os.path.splitext(path) return ext[1:] def main(): assert len(sys.argv) == 2, 'usage: progname DIR' dir = sys.argv[1] exts = (ext(f) for f in allfiles(dir)) for e in exts: print e if __name__ == '__main__': main() A: if len(sys.argv) != 1: I think you mean 2. argv[0] is the name of the script; argv[1] is the first argument, etc. A: As others have noted, the first element of sys.argv is the program:: # argv.py import sys for index, arg in enumerate(sys.argv): print '%(index)s: %(arg)s' % locals() If I run this without parameters:: $ python argv.py 0: argv.py I see that the first and only item in argv is the name of the program/script. If I pass parameters:: $ python argv.py a b c 0: argv.py 1: a 2: b 3: c And so on. The other thing is that you really don't need to use os.path.basename on the items in the third element of the tuple yielded by os.walk:: import os import sys # Imagine some usage check here... # Slice sys.argv to skip the first element... for path in sys.argv[1:]: for root, dirs, files in os.walk(path): for name in files: # No need to use basename, since these are already base'd, so to speak... print name A: The length of sys.argv is 2 because you have an item at index 0 (the program name) and an item at index 1 (the first argument to the program). Changing your program to compare against 2 appears to give the correct results, without making any other changes. A: argv typically includes the name of the program/script invoked as the first element, and thus the length when passing it a single argument is actually 2, not 1.
Printing basenames by Python
How can you print the basenames of files by Python in the main folder and subfolders? My attempt #!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename <dir_name>\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) 1st problem solved with the off-by-one-error 2nd problem: The code gives me the output unsuccessfully man.aux about_8php.tex refman.pdf successful_notice.php ... I expect to get as an output aux tex pdf php ...
[ "Let me explain the debugging methodology a little bit.\nAs you've encountered the situation in which len(sys.argv) != 1, you should ask youself: \"What is the actual value of len(sys.argv)? Why it is so?\". The answers are:\n>>> len(sys.argv)\n2\n>>> sys.argv\n['/tmp/basename.py', '/path/to/home/Desktop/pgCodes/']\n\nI guess now the problem should become more clear.\nEdit: To address your second question, things you are interested in are called file extensions or suffixes, not basenames. Here is the complete solution:\nimport sys, os\n\ndef iflatten(xss):\n 'Iterable(Iterable(a)) -> Iterable(a)'\n return (x for xs in xss for x in xs)\n\ndef allfiles(dir):\n 'str -> Iterable(str)'\n return iflatten(files for path, dirs, files in os.walk(dir))\n\ndef ext(path):\n 'str -> str'\n (root, ext) = os.path.splitext(path)\n return ext[1:]\n\ndef main():\n assert len(sys.argv) == 2, 'usage: progname DIR'\n dir = sys.argv[1]\n\n exts = (ext(f) for f in allfiles(dir))\n for e in exts:\n print e\n\nif __name__ == '__main__':\n main()\n\n", "if len(sys.argv) != 1:\n\nI think you mean 2. argv[0] is the name of the script; argv[1] is the first argument, etc.\n", "As others have noted, the first element of sys.argv is the program::\n# argv.py\nimport sys\n\nfor index, arg in enumerate(sys.argv):\n print '%(index)s: %(arg)s' % locals()\n\nIf I run this without parameters::\n$ python argv.py \n0: argv.py\n\nI see that the first and only item in argv is the name of the program/script. If I pass parameters::\n$ python argv.py a b c\n0: argv.py\n1: a\n2: b\n3: c\n\nAnd so on.\nThe other thing is that you really don't need to use os.path.basename on the items in the third element of the tuple yielded by os.walk::\nimport os\nimport sys\n\n# Imagine some usage check here...\n\n# Slice sys.argv to skip the first element...\nfor path in sys.argv[1:]:\n for root, dirs, files in os.walk(path):\n for name in files:\n # No need to use basename, since these are already base'd, so to speak...\n print name\n\n", "The length of sys.argv is 2 because you have an item at index 0 (the program name) and an item at index 1 (the first argument to the program).\nChanging your program to compare against 2 appears to give the correct results, without making any other changes.\n", "argv typically includes the name of the program/script invokved as the first element, and thus the length when passing it a single argument is actually 2, not 1.\n" ]
[ 8, 3, 2, 1, 1 ]
[]
[]
[ "path", "python" ]
stackoverflow_0001598013_path_python.txt
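
Pulling the accepted answer's pieces together, one compact sketch that prints each extension once (the directory comes from the command line, as in the question):

    import os, sys

    exts = set()
    for dirpath, dirnames, filenames in os.walk(sys.argv[1]):
        for fname in filenames:
            root, ext = os.path.splitext(fname)
            if ext:
                exts.add(ext[1:])   # drop the leading dot
    for e in sorted(exts):
        print e
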
Q: Python graceful fail on int() call? I have to make a rudimentary FSM in a class, and am writing it in Python. The assignment requires we read the transitions for the machine from a text file. So for example, a FSM with 3 states, each of which has 2 possible transitions, with possible inputs 'a' and 'b', would have a text file that looks like this: 2 # first line lists all final states 0 a 1 0 b 2 1 a 0 1 b 2 2 a 0 2 b 1 I am trying to come up with a more pythonic way to read a line at a time and convert the states to ints, while keeping the input vals as strings. Basically this is the idea: self.finalStates = f.readline().strip("\n").split(" ") for line in f: current_state, input_val, next_state = [int(x) for x in line.strip("\n").split(" ")] Of course, when it tries to int("a") it throws a ValueError. I know I could use a traditional loop and just catch the ValueError but I was hoping to have a more Pythonic way of doing this. A: You should really only be trying to parse the tokens that you expect to be integers for line in f: tokens = line.split(" ") current_state, input_val, next_state = int(tokens[0]), tokens[1], int(tokens[2]) Arguably more-readable: for line in f: current_state, input_val, next_state = parseline(line) def parseline(line): tokens = line.split(" ") return (int(tokens[0]), tokens[1], int(tokens[2])) A: This is something very functional, but I'm not sure if it's "pythonic"... And it may cause some people to scratch their heads. You should really have a "lazy" zip() to do it this way if you have a large number of values: types = [int, str, int] for line in f: current_state, input_val, next_state = multi_type(types, line) def multi_type(ts,xs): return [t(x) for (t,x) in zip(ts, xs.strip().split())] Also the arguments you use for strip and split can be omitted, because the defaults will work here. Edit: reformatted - I wouldn't use it as one long line in real code. A: You got excellent answers that match your problem well. However, in other cases, there may indeed be situations where you want to convert some fields to int if feasible (i.e. if they're all digits) and leave them as str otherwise (as the title of your question suggests) without knowing in advance which fields are ints and which ones are not. The traditional Python approach is try/except...: def maybeint(s): try: return int(s) except ValueError: return s ...which you need to wrap into a function as there's no way to do a try/except in an expression (e.g. in a list comprehension). So, you'd use it like: several_fields = [maybeint(x) for x in line.split()] However, it is possible to do this specific task inline, if you prefer: several_fields = [(int(x) if x.isdigit() else x) for x in line.split()] the if/else "ternary operator" looks a bit strange, but one can get used to it;-); and the isdigit method of a string gives True if the string is nonempty and only has digits. To repeat, this is not what you should do in your specific case, where you know the specific int-str-int pattern of input types; but it might be appropriate in a more general situation where you don't have such precise information in advance! A: self.finalStates = [int(state) for state in f.readline().split()] for line in f: words = line.split() current_state, input_val, next_state = int(words[0]), words[1], int(words[2]) # now do something with values Note that you can shorten line.strip("\n").split(" ") down to just line.split(). 
The default behavior of str.split() is to split on any white space, and it will return a set of words that have no leading or trailing white space of any sort. If you are converting the states to int in the loop, I presume you want the finalStates to be int as well.
Python graceful fail on int() call?
I have to make a rudimentary FSM in a class, and am writing it in Python. The assignment requires we read the transitions for the machine from a text file. So for example, a FSM with 3 states, each of which has 2 possible transitions, with possible inputs 'a' and 'b', would have a text file that looks like this: 2 # first line lists all final states 0 a 1 0 b 2 1 a 0 1 b 2 2 a 0 2 b 1 I am trying to come up with a more pythonic way to read a line at a time and convert the states to ints, while keeping the input vals as strings. Basically this is the idea: self.finalStates = f.readline().strip("\n").split(" ") for line in f: current_state, input_val, next_state = [int(x) for x in line.strip("\n").split(" ")] Of course, when it tries to int("a") it throws a ValueError. I know I could use a traditional loop and just catch the ValueError but I was hoping to have a more Pythonic way of doing this.
[ "You should really only be trying to parse the tokens that you expect to be integers\nfor line in f:\n tokens = line.split(\" \")\n current_state, input_val, next_state = int(tokens[0]), tokens[1], int(tokens[2])\n\nArguably more-readable:\nfor line in f:\n current_state, input_val, next_state = parseline(line)\n\ndef parseline(line):\n tokens = line.split(\" \")\n return (int(tokens[0]), tokens[1], int(tokens[2]))\n\n", "This is something very functional, but I'm not sure if it's \"pythonic\"... And it may cause some people to scratch their heads. You should really have a \"lazy\" zip() to do it this way if you have a large number of values:\ntypes = [int, str, int]\nfor line in f:\n current_state, input_val, next_state = multi_type(types, line)\n\ndef multi_type(ts,xs): return [t(x) for (t,x) in zip(ts, xs.strip().split())]\n\nAlso the arguments you use for strip and split can be omitted, because the defaults will work here.\nEdit: reformatted - I wouldn't use it as one long line in real code.\n", "You got excellent answers that match your problem well. However, in other cases, there may indeed be situations where you want to convert some fields to int if feasible (i.e. if they're all digits) and leave them as str otherwise (as the title of your question suggests) without knowing in advance which fields are ints and which ones are not.\nThe traditional Python approach is try/except...:\ndef maybeint(s):\n try: return int(s)\n except ValueError: return s\n\n...which you need to wrap into a function as there's no way to do a try/except in an expression (e.g. in a list comprehension). So, you'd use it like:\nseveral_fields = [maybeint(x) for x in line.split()]\n\nHowever, it is possible to do this specific task inline, if you prefer:\nseveral_fields = [(int(x) if x.isdigit() else x) for x in line.split()]\n\nthe if/else \"ternary operator\" looks a bit strange, but one can get used to it;-); and the isdigit method of a string gives True if the string is nonempty and only has digits.\nTo repeat, this is not what you should do in your specific case, where you know the specific int-str-int pattern of input types; but it might be appropriate in a more general situation where you don't have such precise information in advance!\n", "self.finalStates = [int(state) for state in f.readline().split()]\n\nfor line in f:\n words = line.split()\n current_state, input_val, next_state = int(words[0]), words[1], int(words[2])\n # now do something with values\n\nNote that you can shorten line.strip(\"\\n\").split(\" \") down to just line.split(). The default behavior of str.split() is to split on any white space, and it will return a set of words that have no leading or trailing white space of any sort.\nIf you are converting the states to int in the loop, I presume you want the finalStates to be int as well.\n" ]
[ 12, 4, 1, 0 ]
[]
[]
[ "fsm", "python" ]
stackoverflow_0001597114_fsm_python.txt
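
To carry the parsing through to the machine itself, a possible sketch; the file format is the one shown in the question, while the transition-table layout, start state, and sample input are my own assumptions:

    transitions = {}
    f = open('fsm.txt')                      # placeholder file name
    first = f.readline().split('#')[0]       # tolerate the inline comment
    final_states = set(int(s) for s in first.split())
    for line in f:
        state, symbol, target = line.split()
        transitions[(int(state), symbol)] = int(target)
    f.close()

    state = 0                                # assuming 0 is the start state
    for symbol in 'abba':                    # sample input string
        state = transitions[(state, symbol)]
    print state in final_states              # did the machine accept?
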
Q: How to read continuous HTTP streaming data in Python? How to read binary streams from an HTTP streamer server in python. I did a search and someone said urllib2 can do the job but had blocking issues. Someone suggested Twisted framework. My questions are: If it's just a streaming client that reads data in the background, can I ignore the blocking issues caused by urllib2? What will happen if urllib2 doesn't catch up with the streamer server? Will the data be lost? If the streamer server requires user authentication via GET or by POSTing some parameters to the server before retrieving data, can this be done by urllib2? Where can I find some stream client example of urllib2 and Twisted? Thank you. Jack A: To defeat urllib2's intrinsic buffering, you could do: import socket socket._fileobject.default_bufsize = 0 because it's actually socket._fileobject that buffers underneath. No data will be lost anyway, but with the default buffering (8192 bytes at a time) data may end up overly chunked for real-time streaming purposes (completely removing the buffering might hurt performance, but you could try smaller chunks). For Twisted, see twisted.web2.stream and the many links therefrom.
How to read continuous HTTP streaming data in Python?
How to read binary streams from an HTTP streamer server in python. I did a search and someone said urllib2 can do the job but had blocking issues. Someone suggested Twisted framework. My questions are: If it's just a streaming client that reads data in the background, can I ignore the blocking issues caused by urllib2? What will happen if urllib2 doesn't catch up with the streamer server? Will the data be lost? If the streamer server requires user authentication via GET or by POSTing some parameters to the server before retrieving data, can this be done by urllib2? Where can I find some stream client example of urllib2 and Twisted? Thank you. Jack
[ "To defeat urllib2's intrinsic buffering, you could do:\nimport socket\nsocket._fileobject.default_bufsize = 0\n\nbecause it's actualy socket._fileobject that buffers underneath. No data will be lost anyway, but with the default buffering (8192 bytes at a time) data may end up overly chunked for real-time streaming purposes (completely removing the buffering might hurt performance, but you could try smaller chunks).\nFor Twisted, see twisted.web2.stream and the many links therefrom.\n" ]
[ 6 ]
[]
[]
[ "client", "python", "stream", "streaming" ]
stackoverflow_0001598331_client_python_stream_streaming.txt
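
A minimal urllib2 read loop for the streaming case, combining the answer's buffering tweak with small reads; the URL and chunk size are placeholders to tune. For the authentication sub-question, note that urllib2.urlopen takes an optional data argument, which turns the request into a POST.

    import socket, sys, urllib2

    socket._fileobject.default_bufsize = 0        # per the answer: drop buffering
    resp = urllib2.urlopen('http://example.com/stream')   # placeholder URL
    while True:
        chunk = resp.read(512)                    # small reads reduce latency
        if not chunk:
            break                                 # server closed the stream
        sys.stdout.write(chunk)                   # replace with real handling
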
Q: cElementTree invalid encoding problem I'm encoding challenged, so this is probably simple, but I'm stuck. I'm trying to parse an XML file emailed to the App Engine's new receive mail functionality. First, I just pasted the XML into the body of the message, and it parsed fine with CElementTree. Then I changed to using an attachment, and parsing it with CElementTree produces this error: SyntaxError: not well-formed (invalid token): line 3, column 10 I've output the XML from both emailing in the body and as an attachment, and they look the same to me. I assume pasting it in the box is changing the encoding in a way that attaching the file is not, but I don't know how to fix it. The first few lines look like this: <?xml version="1.0" standalone="yes"?> <gpx xmlns="http://www.topografix.com/GPX/1/0" version="1.0" creator="TopoFusion 2.85" xmlns:TopoFusion="http://www.TopoFusion.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.TopoFusion.com http://www.TopoFusion.com/topofusion.xsd"> <name><![CDATA[Pacific Crest Trail section K hike 4]]></name><desc><![CDATA[Pacific Crest Trail section K hike 4. Five Lakes to Old Highway 40 near Donner. As described in Day Hikes on the PCT California edition by George & Patricia Semb. See pages 150-152 for access and exit trailheads. GPS data provided by the USFS]]></desc><author><![CDATA[MikeOnTheTrail]]></author><email><![CDATA[michaelonthetrail@yahoo.com]]></email><url><![CDATA[http://www.pcta.org]]></url> <urlname><![CDATA[Pacific Crest Trail Association Homepage]]></urlname> <time>2006-07-08T02:16:05Z</time> Edited to add more info: I have a GPX file that's a few thousand lines. If I paste it into the body of the message I can parse it correctly, like so: gpxcontent = message.bodies(content_type='text/plain') for x in gpxcontent: gpxcontent = x[1].decode() for event, elem in ET.iterparse(StringIO.StringIO(gpxcontent), events=("start", "start-ns")): If I attach it to the mail as an attachment using Gmail, and then extract it like so: if isinstance(message.attachments, tuple): attachments = [message.attachments] gpxcontent = attachments[0][3].decode() for event, elem in ET.iterparse(StringIO.StringIO(gpxcontent), events=("start", "start-ns")): I get the error above. Line 3 column 10 seems to be the start of ![CDATA on the third line. A: Ah, nevermind. There's a bug in App Engine that is calling lower() on all attachments when you decode them. This made the CDATA string invalid. Here's a link to the bug report: http://code.google.com/p/googleappengine/issues/detail?id=2289#c2
cElementTree invalid encoding problem
I'm encoding challenged, so this is probably simple, but I'm stuck. I'm trying to parse an XML file emailed to the App Engine's new receive mail functionality. First, I just pasted the XML into the body of the message, and it parsed fine with CElementTree. Then I changed to using an attachment, and parsing it with CElementTree produces this error: SyntaxError: not well-formed (invalid token): line 3, column 10 I've output the XML from both emailing in the body and as an attachment, and they look the same to me. I assume pasting it in the box is changing the encoding in a way that attaching the file is not, but I don't know how to fix it. The first few lines look like this: <?xml version="1.0" standalone="yes"?> <gpx xmlns="http://www.topografix.com/GPX/1/0" version="1.0" creator="TopoFusion 2.85" xmlns:TopoFusion="http://www.TopoFusion.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.TopoFusion.com http://www.TopoFusion.com/topofusion.xsd"> <name><![CDATA[Pacific Crest Trail section K hike 4]]></name><desc><![CDATA[Pacific Crest Trail section K hike 4. Five Lakes to Old Highway 40 near Donner. As described in Day Hikes on the PCT California edition by George & Patricia Semb. See pages 150-152 for access and exit trailheads. GPS data provided by the USFS]]></desc><author><![CDATA[MikeOnTheTrail]]></author><email><![CDATA[michaelonthetrail@yahoo.com]]></email><url><![CDATA[http://www.pcta.org]]></url> <urlname><![CDATA[Pacific Crest Trail Association Homepage]]></urlname> <time>2006-07-08T02:16:05Z</time> Edited to add more info: I have a GPX file that's a few thousand lines. If I paste it into the body of the message I can parse it correctly, like so: gpxcontent = message.bodies(content_type='text/plain') for x in gpxcontent: gpxcontent = x[1].decode() for event, elem in ET.iterparse(StringIO.StringIO(gpxcontent), events=("start", "start-ns")): If I attach it to the mail as an attachment using Gmail, and then extract it like so: if isinstance(message.attachments, tuple): attachments = [message.attachments] gpxcontent = attachments[0][3].decode() for event, elem in ET.iterparse(StringIO.StringIO(gpxcontent), events=("start", "start-ns")): I get the error above. Line 3 column 10 seems to be the start of ![CDATA on the third line.
[ "Ah, nevermind. There's a bug in App Engine that is calling lower() on all attachments when you decode them. This made the CDATA string invalid. \nHere's a link to the bug report: http://code.google.com/p/googleappengine/issues/detail?id=2289#c2\n" ]
[ 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001597604_python_xml.txt
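
A tiny reproduction of why the lower() bug in the linked report yields exactly the "not well-formed" error above: XML requires the CDATA marker in upper case.

    import xml.etree.cElementTree as ET

    good = '<root><![CDATA[hello]]></root>'
    ET.fromstring(good)           # parses fine
    ET.fromstring(good.lower())   # '<![cdata[' is invalid and raises SyntaxError
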
Q: Encoding of arguments to subprocess.Popen I have a Python extension to the Nautilus file browser (AFAIK this runs exclusively on GNU/Linux/Unix/etc environments). I decided to split out an expensive computation and run it as a subprocess, pickle the result and send it back over a pipe. My question concerns the arguments to the script. Since the computation requires a path argument and a boolean argument I figured I could do this in two ways: send the args in a pickled tuple over a pipe, or give them on the command line. I found that the pickled tuple approach is noticeably slower than just giving arguments, so I went with the subprocess argument approach. However, I'm worried about localisation issues that might arise. At present, in the caller I have: subprocess.Popen( [sys.executable, path_to_script, path.encode("utf-8"), str(recurse)], stdin = None, stdout = subprocess.PIPE) In the script: path = unicode(sys.argv[1], "utf-8") My concern is that encoding the path argument as UTF-8 is a mistake, but I don't know for sure. I want to avoid a "it works on my machine" syndrome. Will this fail if a user has, say, latin1 as their default character encoding? Or does it not matter? A: It does not matter: as long as your script knows to expect a utf-8 encoding for the argument, it can decode it properly. utf-8 is the correct choice because it will let you encode ANY Unicode string -- not just those for some languages but not others, as choices such as Latin-1 would entail! A: Use sys.getfilesystemencoding() if file names should be readable by the user. However this can cause problems when there are characters not supported by the system encoding. To avoid this you can substitute missing characters with some character sequences (e.g. by registering your own error handling function with codecs.register_error()).
Encoding of arguments to subprocess.Popen
I have a Python extension to the Nautilus file browser (AFAIK this runs exclusively on GNU/Linux/Unix/etc environments). I decided to split out an expensive computation and run it as a subprocess, pickle the result and send it back over a pipe. My question concerns the arguments to the script. Since the computation requires a path argument and a boolean argument I figured I could do this in two ways: send the args in a pickled tuple over a pipe, or give them on the command line. I found that the pickled tuple approach is noticeably slower than just giving arguments, so I went with the subprocess argument approach. However, I'm worried about localisation issues that might arise. At present, in the caller I have: subprocess.Popen( [sys.executable, path_to_script, path.encode("utf-8"), str(recurse)], stdin = None, stdout = subprocess.PIPE) In the script: path = unicode(sys.argv[1], "utf-8") My concern is that encoding the path argument as UTF-8 is a mistake, but I don't know for sure. I want to avoid a "it works on my machine" syndrome. Will this fail if a user has, say, latin1 as their default character encoding? Or does it not matter?
[ "It does not matter: as long as your script knows to expect a utf-8 encoding for the argument, it can decode it properly. utf-8 is the correct choice because it will let you encode ANY Unicode string -- not just those for some languages but not others, as choices such as Latin-1 would entail!\n", "Use sys.getfilesystemencoding() if file names should be readable by user. However this can cause problems when there are characters not supported by the system encoding. To avoid this you can substitute missing characters with some character sequences (e.g. by registering you own error handling function with codecs.register_error()).\n" ]
[ 4, 2 ]
[]
[]
[ "localization", "python", "subprocess" ]
stackoverflow_0001598334_localization_python_subprocess.txt
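
A sketch of the second answer's alternative, using the system encoding instead of hard-coded utf-8 on both sides of the pipe; the path, flag, and script name are placeholders:

    import sys, subprocess

    path = u'/some/unicode/path'     # placeholder argument
    recurse = True
    enc = sys.getfilesystemencoding() or 'utf-8'
    proc = subprocess.Popen(
        [sys.executable, 'child_script.py', path.encode(enc), str(recurse)],
        stdout=subprocess.PIPE)

    # ...and on the receiving side, in child_script.py:
    # path = sys.argv[1].decode(sys.getfilesystemencoding() or 'utf-8')
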
Q: Specific use case for Django admin I have a couple special use cases for Django admin, and I'm curious about other peoples' opinions: I'd like to use a customized version of the admin to allow users to edit certain objects on the site (customized to look more like the rest of the site). At this point users can only edit objects they own, but I'll eventually open this up to something more wiki-style where any user can edit any of the objects. In other words, I'd be designating all users as 'staff' and granting them permission to edit those objects. I was considering also doing this for other objects where not all users would be able to edit all objects. I'd use a custom view to make sure users only edit their own objects. The benefits are that I would have a starting point for the editing interface (as the admin creates it automatically) that I could just customize with ModelAdmin since the admin functionality is already pretty close to what I'd like. I feel like the first suggestion would be considered acceptable, while the second might not be. After checking out a few other resources (Valid use case for django admin? and the quote from the Django Book in that question) it seems like some Django developers feel like this is the wrong idea. My question is: why? Are there any good reasons not to use customized admin views to grant per-object permissions from a performance, stability, security, usability, etc. standpoint? It seems to me like it could save a lot of time for certain applications (and I may end up doing it anyway) but I wanted to understand the reasons for making such a distinction between the admin and everything else. A: You are free to do whatever you want. If you want to customize the Django admin, go for it, but you will likely not be as well supported by the mailing list and IRC once you deviate from the typical admin modifications path. While customizing the admin might seem like the easy solution right now, more than likely it is going to be more work than just recreating the necessary forms yourself once you really try to tweak how things work. Look into the generic create/edit/delete and generic details/list views--they will expose the basic functionality you need very quickly, and are going to be easier to extend than the admin. I believe the view that the "admin is not your app" comes from the fact that it is easier to use other mechanisms than hacking up the admin (plus, leaving the admin untouched makes forward compatibility much easier for the Django developers). A: I've previously made a django app do precisely this without modifying the actual admin code. Rather by creating a subclass of admin.ModelAdmin with several of its methods extended with queryset filters. This will display only records that are owned by the user (in this case business is the AUTH_PROFILE_MODEL). There are various blogs on the web on how to achieve this. You can use this technique to filter lists, form select boxes, Form Fields validating saves etc. So far it's survived from NFA to 1.0 to 1.1 but this method is susceptible to api changes. In practice I've found this far quicker to generate new row level access level admin forms for new models in the app as I have added them. You just create a new model with a user fk, subclass the AdminFilterByBusiness or just admin.site.register(NewModel,AdminFilterByBusiness) if it doesn't need anything custom. It works and is very DRY. You do however run the risk of not being able to leverage other published django apps. 
So consider this technique carefully for the project you are building. Example Filter admin Class below inspired by http://code.djangoproject.co/wiki/NewformsHOWTO #AdminFilterByBusiness {{{2 class AdminFilterByBusiness(admin.ModelAdmin): """ Used By News Items to show only objects a business user is related to """ def has_change_permission(self,request,obj=None): self.request = request if request.user.is_superuser: return True if obj == None: return super(AdminFilterByBusiness,self).has_change_permission(request,obj) if obj.business.user == request.user: return True return False def has_delete_permission(self,request,obj=None): self.request = request if request.user.is_superuser: return True if obj == None: return super(AdminFilterByBusiness,self).has_delete_permission(request,obj) if obj.business.user == request.user: return True return False def has_add_permission(self, request): self.request = request return super(AdminFilterByBusiness,self).has_add_permission(request) def queryset(self, request): # get the default queryset, pre-filter qs = super(AdminFilterByBusiness, self).queryset(request) # if not (request.user.is_superuser): # filter only shows blogs mapped to currently logged-in user try: qs = qs.filter(business=request.user.business_set.all()[0]) except: raise ValueError('Operator has not been created. Please Contact Admins') return qs def formfield_for_dbfield(self, db_field, **kwargs): """ Fix drop down lists to populate as per user request """ #regular return for superuser if self.request.user.is_superuser: return super(AdminFilterByBusiness, self).formfield_for_dbfield( db_field, **kwargs) if db_field.name == "business": return forms.ModelChoiceField( queryset = self.request.user.business_set.all() ) #default return super(AdminFilterByBusiness, self).formfield_for_dbfield(db_field, **kwargs) A: We limit the Django Admin -- unmodified -- for "back-office" access by our admins and support people. Not by users or customers. Some stylesheet changes to make the colors consistent with the rest of the site, but that's it. For the users (our customers), we provide proper view functions to do the various transactions. Even with heavily tailored forms, there are still a few things that we need to check and control. Django update transactions are very simple to write, and trying to customize admin seems more work than writing the transaction itself. Our transactions are not much more complex than shown in http://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view. Generally, our pages that have transactions almost always include workflow elements (or related content) that make them slightly more complex than the built-in admin interface. We'll have a half-dozen or so additional lines of code beyond the boilerplate. Our use cases aren't simple add/change/delete, so we need more functionality than the default admin app provides.
Specific use case for Django admin
I have a couple special use cases for Django admin, and I'm curious about other peoples' opinions: I'd like to use a customized version of the admin to allow users to edit certain objects on the site (customized to look more like the rest of the site). At this point users can only edit objects they own, but I'll eventually open this up to something more wiki-style where any user can edit any of the objects. In other words, I'd be designating all users as 'staff' and granting them permission to edit those objects. I was considering also doing this for other objects where not all users would be able to edit all objects. I'd use a custom view to make sure users only edit their own objects. The benefits are that I would have a starting point for the editing interface (as the admin creates it automatically) that I could just customize with ModelAdmin since the admin functionality is already pretty close to what I'd like. I feel like the first suggestion would be considered acceptable, while the second might not be. After checking out a few other resources (Valid use case for django admin? and the quote from the Django Book in that question) it seems like some Django developers feel like this is the wrong idea. My question is: why? Are there any good reasons not to use customized admin views to grant per-object permissions from a performance, stability, security, usability, etc. standpoint? It seems to me like it could save a lot of time for certain applications (and I may end up doing it anyway) but I wanted to understand the reasons for making such a distinction between the admin and everything else.
[ "You are free to do whatever you want. If you want to customize the Django admin, go for it, but you will likely not be as well supported by the mailing list and IRC once you deviate from the typical admin modifications path.\nWhile customizing the admin might seem like the easy solution right now, more than likely it is going to be more work than just recreating the necessary forms yourself once you really try to tweak how things work. Look into the generic create/edit/delete and generic details/list views--they will expose the basic functionality you need very quickly, and are going to be easier to extend than the admin. \nI believe the view that the \"admin is not your app\" comes from the fact that it is easier to use other mechanisms than hacking up the admin (plus, leaving the admin untouched makes forward compatibility much easier for the Django developers). \n", "I've previously made a django app do precisely this without modifying the actual admin code. Rather by creating a subclass of admin.ModelAdmin with several of it's methods extended with queryset filters. This will display only records that are owned by the user (in this case business is the AUTH_PROFILE_MODEL). There are various blogs on the web on how to achieve this. \nYou can use this technique to filter lists, form select boxes, Form Fields validating saves etc.\nSo Far it's survived from NFA to 1.0 to 1.1 but this method is susceptible to api changes. \nIn practice I've found this far quicker to generate new row level access level admin forms for new models in the app as I have added them. You just create a new model with a user fk, subclass the AdminFilterByBusiness or just \nadmin.site.register(NewModel,AdminFilterByBusiness)\n\nif it doesnt need anything custom. It works and is very DRY.\nYou do however run the risk of not being able to leverage other published django apps. So consider this technique carefully for the project you are building. \nExample Filter admin Class below inspired by http://code.djangoproject.co/wiki/NewformsHOWTO\n#AdminFilterByBusiness {{{2\nclass AdminFilterByBusiness(admin.ModelAdmin):\n \"\"\"\n Used By News Items to show only objects a business user is related to\n \"\"\"\n def has_change_permission(self,request,obj=None):\n self.request = request\n\n if request.user.is_superuser:\n return True\n\n if obj == None:\n return super(AdminFilterByBusiness,self).has_change_permission(request,obj)\n\n if obj.business.user == request.user:\n return True\n return False\n\n def has_delete_permission(self,request,obj=None):\n\n self.request = request\n\n if request.user.is_superuser:\n return True\n\n if obj == None:\n return super(AdminFilterByBusiness,self).has_delete_permission(request,obj)\n\n if obj.business.user == request.user:\n return True\n return False\n\n def has_add_permission(self, request):\n\n self.request = request\n return super(AdminFilterByBusiness,self).has_add_permission(request)\n\n def queryset(self, request):\n # get the default queryset, pre-filter\n qs = super(AdminFilterByBusiness, self).queryset(request)\n #\n if not (request.user.is_superuser):\n # filter only shows blogs mapped to currently logged-in user\n try:\n qs = qs.filter(business=request.user.business_set.all()[0])\n except:\n raise ValueError('Operator has not been created. 
Please Contact Admins')\n return qs\n\n def formfield_for_dbfield(self, db_field, **kwargs):\n\n \"\"\" Fix drop down lists to populate as per user request \"\"\"\n #regular return for superuser\n if self.request.user.is_superuser:\n return super(AdminFilterByBusiness, self).formfield_for_dbfield(\n db_field, **kwargs)\n\n if db_field.name == \"business\":\n return forms.ModelChoiceField(\n queryset = self.request.user.business_set.all()\n )\n\n #default\n return super(AdminFilterByBusiness, self).formfield_for_dbfield(db_field, **kwargs)\n\n", "We limit the Django Admin -- unmodified -- for \"back-office\" access by our admins and support people. Not by users or customers. Some stylesheet changes to make the colors consistent with the rest of the site, but that's it.\nFor the users (our customers), we provide proper view functions to do the various transactions. Even with heavily tailored forms, there are still a few things that we need to check and control.\nDjango update transactions are very simple to write, and trying to customize admin seems more work than writing the transaction itself.\nOur transactions are not much more complex than shown in http://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view. \nGenerally, our pages that have transactions almost always include workflow elements (or related content) that make them slightly more complex than the built-in admin interface. We'll have a half-dozen or so additional lines of code beyond the boilerplate. \nOur use cases aren't simple add/change/delete, so we need more functionality than the default admin app provides.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "django", "django_admin", "django_views", "python" ]
stackoverflow_0001598248_django_django_admin_django_views_python.txt
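
For scale, the ownership check that the last answer alludes to takes only a few lines in a plain Django view; the model, form, and template names here are invented for illustration:

    from django.shortcuts import get_object_or_404, render_to_response
    from myapp.models import Page      # hypothetical model with an owner FK
    from myapp.forms import PageForm   # hypothetical ModelForm for Page

    def edit_page(request, page_id):
        # 404 unless the object exists *and* belongs to the current user
        page = get_object_or_404(Page, pk=page_id, owner=request.user)
        form = PageForm(request.POST or None, instance=page)
        if request.method == 'POST' and form.is_valid():
            form.save()
        return render_to_response('edit_page.html', {'form': form})
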
Q: Configuring Python's default exception handling For an uncaught exception, Python by default prints a stack trace, the exception itself, and terminates. Is anybody aware of a way to tailor this behaviour on the program level (other than establishing my own global, catch-all exception handler), so that the stack trace is omitted? I would like to toggle in my app whether the stack trace is printed or not. A: You are looking for sys.excepthook: sys.excepthook(type, value, traceback) This function prints out a given traceback and exception to sys.stderr. When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook.
Configuring Python's default exception handling
For an uncaught exception, Python by default prints a stack trace, the exception itself, and terminates. Is anybody aware of a way to tailor this behaviour on the program level (other than establishing my own global, catch-all exception handler), so that the stack trace is omitted? I would like to toggle in my app whether the stack trace is printed or not.
[ "You are looking for sys.excepthook:\nsys.excepthook(type, value, traceback) \nThis function prints out a given traceback and exception to sys.stderr.\nWhen an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook.\n" ]
[ 27 ]
[]
[]
[ "python" ]
stackoverflow_0001599962_python.txt
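
A sketch of the toggle the question asks for, built on sys.excepthook as the answer describes (the flag name is my own invention):

    import sys

    SHOW_TRACEBACK = False   # flip this wherever your app keeps settings

    def quiet_hook(exc_type, exc_value, tb):
        if SHOW_TRACEBACK:
            sys.__excepthook__(exc_type, exc_value, tb)   # stock behaviour
        else:
            print >> sys.stderr, '%s: %s' % (exc_type.__name__, exc_value)

    sys.excepthook = quiet_hook
    raise ValueError('demo')   # prints just "ValueError: demo" and exits
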
Q: How to read a csv file with python I'm trying to read a csv file but it doesn't work. I can read my csv file but when I see what I read, there was white space between values. Here is my code # -*- coding: iso-8859-1 -*- import sql_db, tmpl_macros, os import security, form, common import csv class windows_dialect(csv.Dialect): """Describe the usual properties of unix-generated CSV files.""" delimiter = ',' quotechar = '"' doublequote = 1 skipinitialspace = 0 lineterminator = 'n' quoting = csv.QUOTE_MINIMAL def reco(d): cars = {210:'"', 211:'"', 213:"'", 136:'à', 143:'è', 142:'é'} for c in cars: d = d.replace(chr(c),cars[c]) return d def page_process(ctx): if ctx.req_equals('catalog_send'): if 'catalog_file' in ctx.locals.__dict__: contenu = ctx.locals.catalog_file[0].file.read() #contenu.encode('') p = csv.reader(contenu, delimiter=',') inserted = 0 modified = 0 (cr,db) = sql_db.cursor_get() for line in p: if line: logfile = open('/tmp/test.log', 'a') logfile.write(line[0]) logfile.write('\n') logfile.write('-----------------------------\n') logfile.close() A: I prefer to use numpy's genfromtxt rather than the standard csv library, because it generates numpy's recarray, which are clean data structures to store data in a table-like object. >>> from numpy import genfromtxt >>> data = genfromtxt(csvfile, delimiter=',', dtype=None) # data is a table-like structure (a numpy recarray) in which you can access columns and rows easily >>> data['firstcolumn'] <content of the first column> EDIT: This answer is quite old. While numpy.genfromtxt still works, nowadays most people would use pandas: >>> import pandas as pd >>> pd.read_csv(csvfile) This has the advantage of creating pandas.DataFrame, which is a better structure for data analysis. A: If you have control over the data, use tab-delimited instead:: import csv import string writer = open('junk.txt', 'wb') for x in range(10): writer.write('\t'.join(string.letters[:5])) writer.write('\r\n') writer.close() reader = csv.reader(open('junk.txt', 'r'), dialect='excel-tab') for line in reader: print line This produces expected results. A tip for getting more useful feedback: Demonstrate your problem through self-contained and complete example code that doesn't contain extraneous and unimportant artifacts. A: You don't do anything with the dialect you've defined. Did you mean to do this: csv.register_dialect('windows_dialect', windows_dialect) p = csv.reader(contenu, dialect='windows_dialect') Also not sure what the reco function is for.
How to read a csv file with python
I'm trying to read a csv file but it doesn't work. I can read my csv file but when I see what I read, there was white space between values. Here is my code # -*- coding: iso-8859-1 -*- import sql_db, tmpl_macros, os import security, form, common import csv class windows_dialect(csv.Dialect): """Describe the usual properties of unix-generated CSV files.""" delimiter = ',' quotechar = '"' doublequote = 1 skipinitialspace = 0 lineterminator = 'n' quoting = csv.QUOTE_MINIMAL def reco(d): cars = {210:'"', 211:'"', 213:"'", 136:'à', 143:'è', 142:'é'} for c in cars: d = d.replace(chr(c),cars[c]) return d def page_process(ctx): if ctx.req_equals('catalog_send'): if 'catalog_file' in ctx.locals.__dict__: contenu = ctx.locals.catalog_file[0].file.read() #contenu.encode('') p = csv.reader(contenu, delimiter=',') inserted = 0 modified = 0 (cr,db) = sql_db.cursor_get() for line in p: if line: logfile = open('/tmp/test.log', 'a') logfile.write(line[0]) logfile.write('\n') logfile.write('-----------------------------\n') logfile.close()
[ "I prefer to use numpy's genfromtxt rather than the standard csv library, because it generates numpy's recarray, which is a clean data structure for storing data in a table-like object.\n>>> from numpy import genfromtxt\n>>> data = genfromtxt(csvfile, delimiter=',', dtype=None)\n# data is a table-like structure (a numpy recarray) in which you can access columns and rows easily\n>>> data['firstcolumn']\n<content of the first column>\n\n\nEDIT: This answer is quite old. While numpy.genfromtxt still works, nowadays most people would use pandas:\n>>> import pandas as pd\n>>> pd.read_csv(csvfile)\n\nThis has the advantage of creating a pandas.DataFrame, which is a better structure for data analysis.\n", "If you have control over the data, use tab-delimited instead:\nimport csv\nimport string\n\nwriter = open('junk.txt', 'wb')\nfor x in range(10):\n writer.write('\\t'.join(string.letters[:5]))\n writer.write('\\r\\n')\nwriter.close()\nreader = csv.reader(open('junk.txt', 'r'), dialect='excel-tab')\nfor line in reader:\n print line\n\nThis produces expected results.\nA tip for getting more useful feedback: Demonstrate your problem through self-contained and complete example code that doesn't contain extraneous and unimportant artifacts.\n", "You don't do anything with the dialect you've defined. Did you mean to do this:\ncsv.register_dialect('windows_dialect', windows_dialect)\np = csv.reader(contenu, dialect='windows_dialect')\n\nAlso not sure what the reco function is for.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0001593318_csv_python.txt
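A minimal sketch tying the answers above together. Note that csv.reader expects an iterable of lines, so passing the file content as one big string makes it iterate character by character, which would explain the stray output; splitting the content into lines first is one plausible fix (the file path here is a hypothetical stand-in, since the original runs inside a web framework):
import csv

class windows_dialect(csv.Dialect):
    delimiter = ','
    quotechar = '"'
    doublequote = True
    skipinitialspace = False
    lineterminator = '\r\n'
    quoting = csv.QUOTE_MINIMAL

csv.register_dialect('windows_dialect', windows_dialect)

contenu = open('/tmp/catalog.csv', 'rb').read()  # hypothetical stand-in for the uploaded file
for line in csv.reader(contenu.splitlines(), dialect='windows_dialect'):
    if line:
        print line[0]  # first field of each row, with no stray characters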
Q: setTrace() in Python Is there a way to use the setTrace() function in a script that has no method definitions? i.e. for i in range(1, 100): print i def traceit(frame, event, arg): if event == "line": lineno = frame.f_lineno print "line", lineno return traceit sys.settrace(traceit) so ideally I would want the trace function to be called upon every iteration / line of code executed in the loop. I've done this with scripts that have had method definitions before, but am not sure how to get it to work in this instance. A: settrace() is really only intended for implementing debuggers. If you are using it to debug this program, you may be better off using PDB. According to the documentation, settrace() will not do what you want. If you really want to do this line-by-line tracing, have a look at the compiler package which allows you to access and modify the AST (Abstract Syntax Tree) produced by the Python compiler. You should be able to use that to insert calls to a function which tracks the execution. A: I only use one simple syntax line to rule them all: import pdb; pdb.set_trace() Put it wherever you want to break execution and start debugging. Use pdb commands (n for next, l for list, etc). Cheers, H.
setTrace() in Python
Is there a way to use the setTrace() function in a script that has no method definitions? i.e. for i in range(1, 100): print i def traceit(frame, event, arg): if event == "line": lineno = frame.f_lineno print "line", lineno return traceit sys.settrace(traceit) so ideally I would want the trace function to be called upon every iteration / line of code executed in the loop. I've done this with scripts that have had method definitions before, but am not sure how to get it to work in this instance.
[ "settrace() is really only intended for implementing debuggers. If you are using it to debug this program, you may be better off using PDB.\nAccording to the documentation, settrace() will not do what you want.\nIf you really want to do this line-by-line tracing, have a look at the compiler package which allows you to access and modify the AST (Abstract Syntax Tree) produced by the Python compiler. You should be able to use that to insert calls to a function which tracks the execution.\n", "I only use one simple syntax line to rule them all:\nimport pdb; pdb.set_trace()\n\nPut it wherever you want to break execution and start debugging. Use pdb commands (n for next, l for list, etc).\nCheers,\nH.\n" ]
[ 2, 2 ]
[]
[]
[ "python", "trace" ]
stackoverflow_0001600726_python_trace.txt
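To make the point about scopes concrete: 'line' events are only reported for code running in a local scope, so wrapping the loop in a function is enough to get a callback per executed line. A minimal sketch:
import sys

def traceit(frame, event, arg):
    if event == "line":
        print "line", frame.f_lineno
    return traceit

def main():
    for i in range(1, 100):
        print i

sys.settrace(traceit)
main()              # line events fire inside main()'s local scope
sys.settrace(None)  # stop tracing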
Q: setTrace() in Python (redux) Apologies for reposting but I had to edit this question when I got to work and realized I needed to have an account to do so. So here it goes again (with a little more context). I'm trying to time how long a script takes to execute, and I am thinking of doing that by checking the elapsed time after every line of code is executed. I've done this before when the script has contained method definitions, but am not sure how it would work in this instance. So my question is: Is there a way to use the setTrace() function in a script that has no method definitions? i.e. for i in range(1, 100): print i def traceit(frame, event, arg): if event == "line": lineno = frame.f_lineno print "line", lineno return traceit sys.settrace(traceit) A: No, as the docs say, "The trace function is invoked (with event set to 'call') whenever a new local scope is entered" -- if you never enter a local scope (and only execute in global scope), the trace function will never be called. Note that settrace is too invasive anyway for the purpose of timing "how long a script takes to execute" as it will alter what it's measuring too much; if what you say is actually what you want, just take the time at start of execution and register with atexit a function that gets the time again and prints the difference. If what you want is different, i.e., profiling, see cProfile . Further note that the code example you give couldn't possibly do anything useful (even though I've edited it to fix an indent error): first it loops, then it defs a function, finally it calls settrace... then immediately ends because there's no more code after that! If you want anything to happen before that loop start, and you want to have everything at module top level (bad idea, but, whatever), you have to put the "anything" lexically before the loop, not after...;-)
setTrace() in Python (redux)
Apologies for reposting but I had to edit this question when I got to work and realized I needed to have an account to do so. So here it goes again (with a little more context). I'm trying to time how long a script takes to execute, and I am thinking of doing that by checking the elapsed time after every line of code is executed. I've done this before when the script has contained method definitions, but am not sure how it would work in this instance. So my question is: Is there a way to use the setTrace() function in a script that has no method definitions? i.e. for i in range(1, 100): print i def traceit(frame, event, arg): if event == "line": lineno = frame.f_lineno print "line", lineno return traceit sys.settrace(traceit)
[ "No, as the docs say, \"The trace function is invoked (with event set to 'call') whenever a new local scope is entered\" -- if you never enter a local scope (and only execute in global scope), the trace function will never be called. Note that settrace is too invasive anyway for the purpose of timing \"how long a script takes to execute\" as it will alter what it's measuring too much; if what you say is actually what you want, just take the time at start of execution and register with atexit a function that gets the time again and prints the difference. If what you want is different, i.e., profiling, see cProfile .\nFurther note that the code example you give couldn't possibly do anything useful (even though I've edited it to fix an indent error): first it loops, then it defs a function, finally it calls settrace... then immediately ends because there's no more code after that! If you want anything to happen before that loop start, and you want to have everything at module top level (bad idea, but, whatever), you have to put the \"anything\" lexically before the loop, not after...;-)\n" ]
[ 2 ]
[]
[]
[ "python", "trace" ]
stackoverflow_0001601217_python_trace.txt
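A sketch of the timing approach suggested in the answer, which avoids per-line tracing overhead entirely; the registered function runs when the interpreter exits normally:
import atexit, time

_start = time.time()

def _report():
    print "elapsed: %.3f seconds" % (time.time() - _start)

atexit.register(_report)

for i in range(1, 100):
    print i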
Q: Is there a way to tell whether a function is getting executed in a unittest? I'm using a config file to get the info for my database. It always gets the hostname and then figures out what database options to use from this config file. I want to be able to tell if I'm inside a unittest here and use the in memory sqlite database instead. Is there a way to tell at that point whether I'm inside a unittest, or will I have to find a different way? A: Inject your database dependency into your class using IoC. This should be done by handing it a repository object in the constructor of your class. Note that you don't necessarily need an IoC container to do this. You just need a repository interface and two implementations of your repository. Note: In Python IoC works a little differently. See http://code.activestate.com/recipes/413268/ for more information. A: Use some sort of database configuration and configure which database to use, and configure the in-memory database during unit tests. A: This is kind of brute force but it works. Have an environmental variable UNIT_TEST that your code checks, and set it inside your unit test driver. A: As Robert Harvey points out, it would be better if you could hand the settings to the component instead of having the component looking it up itself. This allows you to swap the settings simply by providing another object, i.e TestComponent(TestSettings()) instead of TestComponent(LiveSettings()) or, if you want to use config files, TestComponent(Settings("test.conf")) instead of TestComponent(Settings("live.conf")) A: I recommend using a mocking library such as mocker. And FYI, writing your code to check and see if you're in a unit test is a bad code smell. Your unit tests should be executing the code that's being tested as you wrote it.
Is there a way to tell whether a function is getting executed in a unittest?
I'm using a config file to get the info for my database. It always gets the hostname and then figures out what database options to use from this config file. I want to be able to tell if I'm inside a unittest here and use the in memory sqlite database instead. Is there a way to tell at that point whether I'm inside a unittest, or will I have to find a different way?
[ "Inject your database dependency into your class using IoC. This should be done by handing it a repository object in the constructor of your class. Note that you don't necessarily need an IoC container to do this. You just need a repository interface and two implementations of your repository.\nNote: In Python IoC works a little differently. See http://code.activestate.com/recipes/413268/ for more information.\n", "Use some sort of database configuration and configure which database to use, and configure the in-memory database during unit tests.\n", "This is kind of brute force but it works. Have an environmental variable UNIT_TEST that your code checks, and set it inside your unit test driver.\n", "As Robert Harvey points out, it would be better if you could hand the settings to the component instead of having the component looking it up itself. This allows you to swap the settings simply by providing another object, i.e\nTestComponent(TestSettings())\n\ninstead of\nTestComponent(LiveSettings())\n\nor, if you want to use config files,\nTestComponent(Settings(\"test.conf\"))\n\ninstead of\nTestComponent(Settings(\"live.conf\"))\n\n", "I recommend using a mocking library such as mocker. And FYI, writing your code to check and see if you're in a unit test is a bad code smell. Your unit tests should be executing the code that's being tested as you wrote it.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "python", "sqlite", "unit_testing" ]
stackoverflow_0001601308_python_sqlite_unit_testing.txt
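A minimal sketch of the injection idea from the answers, with hypothetical class names; the component receives its database settings instead of looking them up, so the test simply hands it different ones:
class Repository(object):
    def __init__(self, db_url):
        self.db_url = db_url  # no hostname/config lookup inside the class

# production wiring, reading from the config file
repo = Repository(read_config()['db_url'])  # read_config() is an assumed helper

# test wiring, no config file involved
test_repo = Repository('sqlite:///:memory:')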
Q: How can I closely achieve ?: from C++/C# in Python? In C# I could easily write the following: string stringValue = string.IsNullOrEmpty( otherString ) ? defaultString : otherString; Is there a quick way of doing the same thing in Python or am I stuck with an 'if' statement? A: In Python 2.5, there is A if C else B which behaves a lot like ?: in C. However, it's frowned upon for two reasons: readability, and the fact that there's usually a simpler way to approach the problem. For instance, in your case: stringValue = otherString or defaultString A: @Dan if otherString: stringValue = otherString else: stringValue = defaultString This type of code is longer and more expressive, but also more readable Well yes, it's longer. Not so sure about “more expressive” and “more readable”. At the very least, your claim is disputable. I would even go as far as saying it's downright wrong, for two reasons. First, your code emphasizes the decision-making (rather extremely). On the other hand, the conditional operator emphasizes something else, namely the value (resp. the assignment of said value). And this is exactly what the writer of this code wants. The decision-making is really rather a by-product of the code. The important part here is the assignment operation. Your code hides this assignment in a lot of syntactic noise: the branching. Your code is less expressive because it shifts the emphasis from the important part. Even then your code would probably trump some obscure ASCII art like ?:. An inline-if would be preferable. Personally, I don't like the variant introduced with Python 2.5 because it's backwards. I would prefer something that reads in the same flow (direction) as the C ternary operator but uses words instead of ASCII characters: C = if cond then A else B This wins hands down. C and C# unfortunately don't have such an expressive statement. But (and this is the second argument), the ternary conditional operator of C languages is so long established that it has become an idiom in itself. The ternary operator is as much part of the language as the “conventional” if statement. Because it's an idiom, anybody who knows the language immediately reads this code right. Furthermore, it's an extremely short, concise way of expressing these semantics. In fact, it's the shortest imaginable way. It's extremely expressive because it doesn't obscure the essence with needless noise. Finally, Jeff Atwood has written the perfect conclusion to this: The best code is no code at all. A: It's never a bad thing to write readable, expressive code. if otherString: stringValue = otherString else: stringValue = defaultString This type of code is longer and more expressive, but also more readable and less likely to get tripped over or mis-edited down the road. Don't be afraid to write expressively - readable code should be a goal, not a byproduct. A: There are a few duplicates of this question, e.g. Does Python have a ternary conditional operator? What's the best way to replace the ternary operator in Python? In essence, in a general setting pre-2.5 code should use this: (condExp and [thenExp] or [elseExp])[0] (given condExp, thenExp and elseExp are arbitrary expressions), as it avoids wrong results if thenExp evaluates to boolean False, while maintaining short-circuit evaluation. A: By the way, j0rd4n, you don't (please don't!) write code like this in C#. Apart from the fact that the IsDefaultOrNull is actually called IsNullOrEmpty, this is pure code bloat.
C# offers the coalesce operator for situations like these: string stringValue = otherString ?? defaultString; It's true that this only works if otherString is null (rather than empty) but if this can be ensured beforehand (and often it can) it makes the code much more readable. A: I also discovered that just using the "or" operator does pretty well. For instance: finalString = get_override() or defaultString If get_override() returns "" or None, it will always use defaultString. A: Chapter 4 of diveintopython.net has the answer. It's called the and-or trick in Python.
How can I closely achieve ?: from C++/C# in Python?
In C# I could easily write the following: string stringValue = string.IsNullOrEmpty( otherString ) ? defaultString : otherString; Is there a quick way of doing the same thing in Python or am I stuck with an 'if' statement?
[ "In Python 2.5, there is\nA if C else B\n\nwhich behaves a lot like ?: in C. However, it's frowned upon for two reasons: readability, and the fact that there's usually a simpler way to approach the problem. For instance, in your case:\nstringValue = otherString or defaultString\n\n", "@Dan\n\nif otherString:\n stringValue = otherString\nelse:\n stringValue = defaultString\n\nThis type of code is longer and more expressive, but also more readable\n\nWell yes, it's longer. Not so sure about “more expressive” and “more readable”. At the very least, your claim is disputable. I would even go as far as saying it's downright wrong, for two reasons.\nFirst, your code emphasizes the decision-making (rather extremely). On the other hand, the conditional operator emphasizes something else, namely the value (resp. the assignment of said value). And this is exactly what the writer of this code wants. The decision-making is really rather a by-product of the code. The important part here is the assignment operation. Your code hides this assignment in a lot of syntactic noise: the branching.\nYour code is less expressive because it shifts the emphasis from the important part.\nEven then your code would probably trump some obscure ASCII art like ?:. An inline-if would be preferable. Personally, I don't like the variant introduced with Python 2.5 because it's backwards. I would prefer something that reads in the same flow (direction) as the C ternary operator but uses words instead of ASCII characters:\nC = if cond then A else B\n\nThis wins hands down.\nC and C# unfortunately don't have such an expressive statement. But (and this is the second argument), the ternary conditional operator of C languages is so long established that it has become an idiom in itself. The ternary operator is as much part of the language as the “conventional” if statement. Because it's an idiom, anybody who knows the language immediately reads this code right. Furthermore, it's an extremely short, concise way of expressing these semantics. In fact, it's the shortest imaginable way. It's extremely expressive because it doesn't obscure the essence with needless noise.\nFinally, Jeff Atwood has written the perfect conclusion to this: The best code is no code at all.\n", "It's never a bad thing to write readable, expressive code.\nif otherString:\n stringValue = otherString\nelse:\n stringValue = defaultString\n\nThis type of code is longer and more expressive, but also more readable and less likely to get tripped over or mis-edited down the road. Don't be afraid to write expressively - readable code should be a goal, not a byproduct.\n", "There are a few duplicates of this question, e.g.\n\nDoes Python have a ternary conditional operator?\nWhat's the best way to replace the ternary operator in Python?\n\nIn essence, in a general setting pre-2.5 code should use this:\n (condExp and [thenExp] or [elseExp])[0]\n\n(given condExp, thenExp and elseExp are arbitrary expressions), as it avoids wrong results if thenExp evaluates to boolean False, while maintaining short-circuit evaluation.\n", "By the way, j0rd4n, you don't (please don't!) write code like this in C#. Apart from the fact that the IsDefaultOrNull is actually called IsNullOrEmpty, this is pure code bloat. C# offers the coalesce operator for situations like these:\nstring stringValue = otherString ?? 
defaultString;\n\nIt's true that this only works if otherString is null (rather than empty) but if this can be ensured beforehand (and often it can) it makes the code much more readable.\n", "I also discovered that just using the \"or\" operator does pretty well. For instance:\nfinalString = get_override() or defaultString\n\nIf get_override() returns \"\" or None, it will always use defaultString.\n", "Chapter 4 of diveintopython.net has the answer. It's called the and-or trick in Python.\n" ]
[ 24, 5, 1, 1, 0, 0, 0 ]
[ "You can take advantage of the fact that logical expressions return their value, and not just true or false status. For example, you can always use:\nresult = question and firstanswer or secondanswer\n\nWith the caveat that it doesn't work like the ternary operator if firstanswer is false. This is because question is evaluated first, assuming it's true firstanswer is returned unless firstanswer is false, so this usage fails to act like the ternary operator. If you know the values, however, there is usually no problem. An example would be:\nresult = choice == 7 and \"Seven\" or \"Another Choice\"\n\n", "If you used ruby, you could write\nstringValue = otherString.blank? ? defaultString : otherString;\n\nthe built in blank? method means null or empty.\nCome over to the dark side...\n" ]
[ -1, -1 ]
[ "python", "syntax", "syntax_rules", "ternary_operator" ]
stackoverflow_0000135303_python_syntax_syntax_rules_ternary_operator.txt
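To see the boolean-False caveat from the answers in action, compare the bare and-or trick with the list-wrapped form and the 2.5 conditional:
cond = True
print cond and "" or "default"            # prints "default" -- wrong, "" is falsy
print (cond and [""] or ["default"])[0]   # prints "" -- correct
print "" if cond else "default"           # prints "" -- Python 2.5+ conditional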
Q: Cherrypy server unavailable from anything but localhost I am having an issue with cherrypy that looks solved, but doesn't work. I can only bind on localhost or 127.0.0.1. Windows XP Home and Mac OS X (linux untested), cherrypy 3.1.2, python 2.5.4. This is the end of my app: global_conf = { 'global': { 'server.environment= "production"' 'engine.autoreload_on : True' 'engine.autoreload_frequency = 5 ' 'server.socket_host': '0.0.0.0', 'server.socket_port': 8080} } cherrypy.config.update(global_conf) cherrypy.tree.mount(home, '/', config = application_conf) cherrypy.engine.start() A: huh, you're doing something wrong with your dict: >>> global_conf = { ... 'global': { 'server.environment= "production"' ... 'engine.autoreload_on : True' ... 'engine.autoreload_frequency = 5 ' ... 'server.socket_host': '0.0.0.0', ... 'server.socket_port': 8080} ... } >>> print global_conf {'global': {'server.environment= "production"engine.autoreload_on : Trueengine.autoreload_frequency = 5 server.socket_host': '0.0.0.0', 'server.socket_port': 8080} } More specifically, there are commas and colons missing from your dict definition. Each key/value pair must have a colon, and they are separated with commas. Something like this might work: global_conf = { 'global': { 'server.environment': 'production', 'engine.autoreload_on': True, 'engine.autoreload_frequency': 5, 'server.socket_host': '0.0.0.0', 'server.socket_port': 8080, } } Check python dictionary documentation for more info. A: If you're using a dual-stack OS, it may be that localhost is resolving to ::1 (the IPv6 localhost) and not 127.0.0.1 (the IPv4 localhost). Try accessing the server using http://127.0.0.1:8080. Also, if you're using a dual-stack capable OS, you can set server.socket_host to '::', and it will listen on all addresses in IPv6 and IPv4.
Cherrypy server unavailable from anything but localhost
I am having an issue with cherrypy that looks solved, but doesn't work. I can only bind on localhost or 127.0.0.1. Windows XP Home and Mac OS X (linux untested), cherrypy 3.1.2, python 2.5.4. This is the end of my app: global_conf = { 'global': { 'server.environment= "production"' 'engine.autoreload_on : True' 'engine.autoreload_frequency = 5 ' 'server.socket_host': '0.0.0.0', 'server.socket_port': 8080} } cherrypy.config.update(global_conf) cherrypy.tree.mount(home, '/', config = application_conf) cherrypy.engine.start()
[ "huh, you're doing something wrong with your dict:\n>>> global_conf = {\n... 'global': { 'server.environment= \"production\"'\n... 'engine.autoreload_on : True'\n... 'engine.autoreload_frequency = 5 '\n... 'server.socket_host': '0.0.0.0',\n... 'server.socket_port': 8080}\n... }\n>>> print global_conf\n{'global': \n {'server.environment= \"production\"engine.autoreload_on : Trueengine.autoreload_frequency = 5 server.socket_host': '0.0.0.0',\n 'server.socket_port': 8080}\n}\n\nMore specifically, there are commas and colons missing from your dict definition. Each key/value pair must have a colon, and they are separated with commas. Something like this might work:\nglobal_conf = {\n 'global': { 'server.environment': 'production',\n 'engine.autoreload_on': True,\n 'engine.autoreload_frequency': 5,\n 'server.socket_host': '0.0.0.0',\n 'server.socket_port': 8080,\n }\n }\n\nCheck python dictionary documentation for more info.\n", "If you're using a dual-stack OS, it may be that localhost is resolving to ::1 (the IPv6 localhost) and not 127.0.0.1 (the IPv4 localhost). Try accessing the server using http://127.0.0.1:8080.\nAlso, if you're using a dual-stack capable OS, you can set server.socket_host to '::', and it will listen on all addresses in IPv6 and IPv4.\n" ]
[ 7, 3 ]
[]
[]
[ "cherrypy", "python" ]
stackoverflow_0001555319_cherrypy_python.txt
Q: How to make a completely unshared copy of a complicated list? (Deep copy is not enough) Have a look at this Python code: a = [1, 2, 3] b = [4, 5, 6] c = [[a, b], [b, a]] # [[[1, 2, 3], [4, 5, 6]], [[4, 5, 6], [1, 2, 3]]] c[0][0].append(99) # [[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3, 99]]] Notice how modifying one element of c modifies that everywhere. That is, if 99 is appended to c[0][0], it is also appended to c[1][1]. I am guessing this is because Python is cleverly referring to the same object for c[0][0] and c[1][1]. (That is their id() is the same.) Question: Is there something that can be done to c so that its list elements can be safely locally modified? Above is just an example, my real problem has a list much more complicated, but having a similar problem. (Sorry for the poorly formed question above. Python gurus please feel free to modify the question or tags to better express this query.) A: When you want a copy, you explicitly make a copy - the cryptical [:] "slice it all" form is idiomatic, but my favorite is the much-more-readable approach of explicitly calling list. If c is constructed in the wrong way (with references instead of shallow copies to lists you want to be able to modify independently) the best thing would be to fix the way it's built (why build it wrong and then labor to fix it?!), but if that's outside your control it IS possible to undo the damage -- just loop on c (recursively if needed), with an index, reassigning the relevant sublists to their copies. For example, if you know for sure that c's structure is two-level as you indicate, you can save yourself without recursion: def fixthewronglymadelist(c): for topsublist in c: for i, L in enumerate(topsublist): topsublist[i] = list(L) Despite what other answers suggest, copy.deepcopy would be hard to bend to this peculiar purpose, if all you're given is the wrongly made c: doing just copy.deepcopy(c) would carefully replicate whatever c's topology is, including multiple references to the same sublists! :-) A: To convert an existing list of lists to one where nothing is shared, you could recursively copy the list. deepcopy will not be sufficient, as it will copy the structure as-is, keeping internal references as references, not copies. def unshared_copy(inList): if isinstance(inList, list): return list( map(unshared_copy, inList) ) return inList alist = unshared_copy(your_function_returning_lists()) Note that this assumes the data is returned as a list of lists (arbitrarily nested). If the containers are of different types (eg. numpy arrays, dicts, or user classes), you may need to alter this. A: Use [:]: >>> a = [1, 2] >>> b = a[:] >>> b.append(9) >>> a [1, 2] Alternatively, use copy or deepcopy: >>> import copy >>> a = [1, 2] >>> b = copy.copy(a) >>> b.append(9) >>> a [1, 2] copy works on objects other than lists. For lists, it has the same effect as a[:]. deepcopy attempts to recursively copy nested elements, and is thus a more "thorough" operation than copy. A: Depending on your situation, you might want to work against a deep copy of this list. A: To see Stephan's suggestion at work, compare the two outputs below: a = [1, 2, 3] b = [4, 5, 6] c = [[a, b], [b, a]] c[0][0].append(99) print c print "-------------------" a = [1, 2, 3] b = [4, 5, 6] c = [[a[:], b[:]], [b[:], a[:]]] c[0][0].append(99) print c The output is as follows: [[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3, 99]]] ------------------- [[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3]]]
How to make a completely unshared copy of a complicated list? (Deep copy is not enough)
Have a look at this Python code: a = [1, 2, 3] b = [4, 5, 6] c = [[a, b], [b, a]] # [[[1, 2, 3], [4, 5, 6]], [[4, 5, 6], [1, 2, 3]]] c[0][0].append(99) # [[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3, 99]]] Notice how modifying one element of c modifies that everywhere. That is, if 99 is appended to c[0][0], it is also appended to c[1][1]. I am guessing this is because Python is cleverly referring to the same object for c[0][0] and c[1][1]. (That is their id() is the same.) Question: Is there something that can be done to c so that its list elements can be safely locally modified? Above is just an example, my real problem has a list much more complicated, but having a similar problem. (Sorry for the poorly formed question above. Python gurus please feel free to modify the question or tags to better express this query.)
[ "When you want a copy, you explicitly make a copy - the cryptical [:] \"slice it all\" form is idiomatic, but my favorite is the much-more-readable approach of explicitly calling list.\nIf c is constructed in the wrong way (with references instead of shallow copies to lists you want to be able to modify independently) the best thing would be to fix the way it's built (why build it wrong and then labor to fix it?!), but if that's outside your control it IS possible to undo the damage -- just loop on c (recursively if needed), with an index, reassigning the relevant sublists to their copies. For example, if you know for sure that c's structure is two-level as you indicate, you can save yourself without recursion:\ndef fixthewronglymadelist(c):\n for topsublist in c:\n for i, L in enumerate(topsublist):\n topsublist[i] = list(L)\n\nDespite what other answers suggest, copy.deepcopy would be hard to bend to this peculiar purpose, if all you're given is the wrongly made c: doing just copy.deepcopy(c) would carefully replicate whatever c's topology is, including multiple references to the same sublists! :-)\n", "To convert an existing list of lists to one where nothing is shared, you could recursively copy the list.\ndeepcopy will not be sufficient, as it will copy the structure as-is, keeping internal references as references, not copies.\ndef unshared_copy(inList):\n if isinstance(inList, list):\n return list( map(unshared_copy, inList) )\n return inList\n\nalist = unshared_copy(your_function_returning_lists())\n\nNote that this assumes the data is returned as a list of lists (arbitrarily nested).\nIf the containers are of different types (eg. numpy arrays, dicts, or user classes), you may need to alter this.\n", "Use [:]:\n>>> a = [1, 2]\n>>> b = a[:]\n>>> b.append(9)\n>>> a\n[1, 2]\n\nAlternatively, use copy or deepcopy:\n>>> import copy\n>>> a = [1, 2]\n>>> b = copy.copy(a)\n>>> b.append(9)\n>>> a\n[1, 2]\n\ncopy works on objects other than lists. For lists, it has the same effect as a[:]. deepcopy attempts to recursively copy nested elements, and is thus a more \"thorough\" operation than copy.\n", "Depending on your situation, you might want to work against a deep copy of this list.\n", "To see Stephan's suggestion at work, compare the two outputs below:\na = [1, 2, 3]\nb = [4, 5, 6]\nc = [[a, b], [b, a]]\nc[0][0].append(99)\nprint c\nprint \"-------------------\"\na = [1, 2, 3]\nb = [4, 5, 6]\nc = [[a[:], b[:]], [b[:], a[:]]]\nc[0][0].append(99)\nprint c\n\nThe output is as follows:\n[[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3, 99]]]\n-------------------\n[[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3]]]\n\n" ]
[ 8, 8, 5, 5, 1 ]
[]
[]
[ "copy", "list", "python" ]
stackoverflow_0001601269_copy_list_python.txt
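A quick demonstration of why copy.deepcopy alone cannot fix a wrongly built list: it faithfully reproduces the sharing, while an explicit per-sublist copy does not:
import copy

a = [1, 2, 3]
c = [[a, a]]                  # both slots share one list
d = copy.deepcopy(c)
d[0][0].append(99)
print d                       # [[[1, 2, 3, 99], [1, 2, 3, 99]]] -- still shared
e = [[list(x) for x in row] for row in c]
e[0][0].append(99)
print e                       # [[[1, 2, 3, 99], [1, 2, 3]]] -- fully unshared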
Q: Recursion - Python, return value question I realize that this may sound like a silly question, but the last time I programmed it was in assembler so my thinking may be off: A recursive function as so: def fac(n): if n == 0: return 1 else: return n * fac(n - 1) Why is it that when the function reaches n == 0 it does not return 1, but rather the answer, which is the factorial? I am thinking something like in assembler it would be when n == 0: mov eax, 1 ret Why does the code above work? I suppose Python returns the last value on the stack before that condition? A: Think about it like this, for fac(5) for example: return 5 * fac(4) return 4 * fac(3) return 3 * fac(2) return 2 * fac(1) return 1 * fac(0) 1 So 1 will be the first returned value but it will be returned to fac(1) and fac(1) will be returned to fac(2) and so on. A: It does return 1 when n == 0. That return value is popped off the stack from the calling site, which was the invocation at n * fac(n - 1). That 1 is multiplied by n and returned, etc. A: If you call fac(0) it will return 1 (not 0, but I suppose that's just a typo in your question). If you call fac(1), it will go in the else clause, and there it will call fac(0). This will return 1. It will then calculate n*1, which is 1, and return that. If you call fac(2) it will also go in the else clause, where it will call fac(1) which as mentioned above will return 1, so n*fac(n-1) will be 2 and that's the return value of fac(2). And so on. I hope that explained it for you. A: Nothing's being implicitly returned - when n=0, the function is entering the if statement, and returning 1 directly from the return 1 statement. However, this isn't the point at which the "answer which is the factorial" is returned to the user. Instead, it may be returning this value to the calling function invoked by fac(1), which is in the middle of the n * fac(n - 1) branch. Thus it will take the "1" returned and return n*1, which is 1 to its caller. If that's fac(2), it'll return n * 1, or 2 to its caller and so on. Thus fac(5) gets translated like: fac(5) = 5 * fac(4) = 5 * (4 * fac(3)) = 5 * (4 * (3 * fac(2))) = 5 * (4 * (3 * (2 * fac(1)))) = 5 * (4 * (3 * (2 * (1 * fac(0))))) = 5*4*3*2*1*1 Only after the 1 value gets returned through each upper layer does it get back to the first caller, and the multiplication at each stage gives you the answer. A: James, when the final call to your function (when n==0) returns, it's just one of several instances of fac(n) on the call stack. If you say print(fac(4)), the stack is essentially: fac(0) fac(1) fac(2) fac(3) fac(4) print() The final call to fac(0) appropriately returns 1, however in Python you've requested the return value of the first call to fac(n), fac(4). Don't think of it as a loop wherein 'ret' will break out, the return simply concludes one of several pending executions.
Recursion - Python, return value question
I realize that this may sound like a silly question, but the last time I programmed it was in assembler so my thinking may be off: A recursive function as so: def fac(n): if n == 0: return 1 else: return n * fac(n - 1) Why is it that when the function reaches n == 0 it does not return 1, but rather the answer, which is the factorial? I am thinking something like in assembler it would be when n == 0: mov eax, 1 ret Why does the code above work? I suppose Python returns the last value on the stack before that condition?
[ "Think about it like this, for fac(5) for example:\nreturn 5 * fac(4)\n return 4 * fac(3)\n return 3 * fac(2)\n return 2 * fac(1)\n return 1 * fac(0)\n 1\n\nSo 1 will be the first returned value but it will be returned to fac(1) and fac(1) will be returned to fac(2) and so on.\n", "It does return 1 when n == 0. That return value is popped off the stack from the calling site, which was the invocation at n * fac(n - 1). That 1 is multiplied by n and returned, etc.\n", "If you call fac(0) it will return 1 (not 0, but I suppose that's just a typo in your question). If you call fac(1), it will go in the else clause, and there it will call fac(0). This will return 1. It will then calculate n*1, which is 1, and return that. If you call fac(2) it will also go in the else clause, where it will call fac(1) which as mentioned above will return 1, so n*fac(n-1) will be 2 and that's the return value of fac(2). And so on. I hope that explained it for you.\n", "Nothing's being implicitly returned - when n=0, the function is entering the if statement, and returning 1 directly from the return 1 statement.\nHowever, this isn't the point at which the \"answer which is the factorial\" is returned to the user. Instead, it may be returning this value to the\ncalling function invoked by fac(1), which is in the middle of the n * fac(n - 1) branch. Thus it will take the \"1\" returned and return n*1, which is 1 to its caller. If that's fac(2), it'll return n * 1, or 2 to its caller and so on.\nThus fac(5) gets translated like:\nfac(5) = 5 * fac(4) = 5 * (4 * fac(3)) = 5 * (4 * (3 * fac(2))) = 5 * (4 * (3 * (2 * fac(1)))) = 5 * (4 * (3 * (2 * (1 * fac(0))))) = 5*4*3*2*1*1\n\nOnly after the 1 value gets returned through each upper layer does it get back to the first caller, and the multiplication at each stage gives you the answer.\n", "James, when the final call to your function (when n==0) returns, it's just one of several instances of fac(n) on the call stack. If you say print(fac(4)), the stack is essentially:\nfac(0)\nfac(1)\nfac(2)\nfac(3)\nfac(4)\nprint()\n\nThe final call to fac(0) appropriately returns 1, however in Python you've requested the return value of the first call to fac(n), fac(4).\nDon't think of it as a loop wherein 'ret' will break out, the return simply concludes one of several pending executions.\n" ]
[ 12, 1, 0, 0, 0 ]
[]
[]
[ "python", "recursion", "stack" ]
stackoverflow_0001601757_python_recursion_stack.txt
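An instrumented sketch of fac that makes the unwinding described above visible; the indentation tracks the depth of the pending calls:
def fac(n, depth=0):
    print "  " * depth + "calling fac(%d)" % n
    if n == 0:
        result = 1
    else:
        result = n * fac(n - 1, depth + 1)
    print "  " * depth + "fac(%d) returns %d" % (n, result)
    return result

fac(5)  # prints the calls going down, then the returns bubbling back up to 120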
Q: Asynchronous Stream Processing in Python Let's start with a simple example. An HTTP data stream comes in the following format: MESSAGE_LENGTH, 2 bytes MESSAGE_BODY, REPEAT... Currently, I use urllib2 to retrieve and process streaming data as below: length = response.read(2) while True: data = response.read(length) DO DATA PROCESSING It works, but since all messages are 50-100 bytes in size, the above method limits buffer size each time it reads, so it may hurt performance. Is it possible to use separate threads for data retrieval and processing? A: Yes, it can be done and is not that hard, if your format is essentially fixed. I used it with httplib in Python 2.2.3 and found it had some abysmal performance in the way we hacked it together (basically monkey patching a select() based socket layer into httplib). The trick is to get the socket and do the buffering yourself, so you do not fight over buffering with the intermediate layers (made for horrible performance when we had httplib buffer for chunked http decoding, the socket layer buffer for read()). Then have a state machine that fetches new data from the socket when needed and pushes completed blocks into a Queue.Queue that feeds your processing threads. I use it to transfer files, checksum (zlib.ADLER32) them in an extra thread and write them to the filesystem in a third thread. Makes for about 40 MB/s sustained throughput on my local machine via sockets and with HTTP/chunked overhead. A: Yes, of course, and there are many different techniques to do so. You'll typically end up having a set of processes that only retrieves data, and increase the number of processes in that pool until you run out of bandwidth, more or less. Those processes store the data somewhere, and then you have other processes or threads that pick the data up and process it from wherever it's stored. So the answer to your question is "Yes", your next question is gonna be "How" and then the people who are really good at this stuff will want to know more. :-) If you are doing this on a massive scale it can get very tricky, and you don't want them to step all over each other, and there are modules in Python that help you do all this. What the right way to do it is depends a lot on what scale we are talking, if you want to run this over multiple processors, or maybe even over completely separate machines, and how much data we are talking about. I've only done it once, and on a not very massive scale, but ended up having one process that got a long list of urls that should be processed, and another process that took that list and dispatched it to a set of separate processes simply by putting files with URL's in them in separate directories that worked as "queues". The separate processes that fetched the URLs would look in their own queue-directory, fetch the URL and stick it into another "outqueue" directory, where I had another process that would dispatch those files into another set of queue-directories for the processing processes. That worked fine, could be run off the network with NFS if necessary (although we never tried that) and could be scaled up to loads of processes on loads of machines if needed (although we never did that either). There may be more clever ways.
Asynchronous Stream Processing in Python
Let's start with a simple example. An HTTP data stream comes in the following format: MESSAGE_LENGTH, 2 bytes MESSAGE_BODY, REPEAT... Currently, I use urllib2 to retrieve and process streaming data as below: length = response.read(2) while True: data = response.read(length) DO DATA PROCESSING It works, but since all messages are 50-100 bytes in size, the above method limits buffer size each time it reads, so it may hurt performance. Is it possible to use separate threads for data retrieval and processing?
[ "Yes, it can be done and is not that hard, if your format is essentially fixed.\nI used it with httplib in Python 2.2.3 and found it had some abysmal performance in the way we hacked it together (basically monkey patching a select() based socket layer into httplib).\nThe trick is to get the socket and do the buffering yourself, so you do not fight over buffering with the intermediate layers (made for horrible performance when we had httplib buffer for chunked http decoding, the socket layer buffer for read()).\nThen have a state machine that fetches new data from the socket when needed and pushes completed blocks into a Queue.Queue that feeds your processing threads.\nI use it to transfer files, checksum (zlib.ADLER32) them in an extra thread and write them to the filesystem in a third thread. Makes for about 40 MB/s sustained throughput on my local machine via sockets and with HTTP/chunked overhead.\n", "Yes, of course, and there are many different techniques to do so. You'll typically end up having a set of processes that only retrieves data, and increase the number of processes in that pool until you run out of bandwidth, more or less. Those processes store the data somewhere, and then you have other processes or threads that pick the data up and process it from wherever it's stored.\nSo the answer to your question is \"Yes\", your next question is gonna be \"How\" and then the people who are really good at this stuff will want to know more. :-)\nIf you are doing this on a massive scale it can get very tricky, and you don't want them to step all over each other, and there are modules in Python that help you do all this. What the right way to do it is depends a lot on what scale we are talking, if you want to run this over multiple processors, or maybe even over completely separate machines, and how much data we are talking about.\nI've only done it once, and on a not very massive scale, but ended up having one process that got a long list of urls that should be processed, and another process that took that list and dispatched it to a set of separate processes simply by putting files with URL's in them in separate directories that worked as \"queues\". The separate processes that fetched the URLs would look in their own queue-directory, fetch the URL and stick it into another \"outqueue\" directory, where I had another process that would dispatch those files into another set of queue-directories for the processing processes.\nThat worked fine, could be run off the network with NFS if necessary (although we never tried that) and could be scaled up to loads of processes on loads of machines if needed (although we never did that either).\nThere may be more clever ways.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "stream", "streaming" ]
stackoverflow_0001599540_python_stream_streaming.txt
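A minimal sketch of the reader/worker split described above. Note that the original snippet passes the raw 2-byte string straight to read(), so the length has to be unpacked first; the big-endian format and the process() function are assumptions about the wire protocol:
import struct, threading, Queue

q = Queue.Queue()

def reader(response):
    while True:
        header = response.read(2)
        if len(header) < 2:
            break
        (length,) = struct.unpack('!H', header)  # assumed big-endian length prefix
        q.put(response.read(length))
    q.put(None)  # sentinel: no more messages

def worker():
    while True:
        msg = q.get()
        if msg is None:
            break
        process(msg)  # hypothetical processing function

threading.Thread(target=reader, args=(response,)).start()  # response from urllib2, as in the question
worker()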
Q: What's the difference between casting and coercion in Python? In the Python documentation and on mailing lists I see that values are sometimes "cast", and sometimes "coerced". A: Cast is explicit. Coerce is implicit. The examples in Python would be: cast(2, POINTER(c_float)) #cast 1.0 + 2 #coerce 1.0 + float(2) #conversion Cast really only comes up in the C FFI. What is typically called casting in C or Java is referred to as conversion in Python, though it often gets referred to as casting because of its similarities to those other languages. In pretty much every language that I have experience with (including Python), coercion is implicit type changing. A: I think "casting" shouldn't be used for Python; there are only type conversions, but no casts (in the C sense). A type conversion is done e.g. through int(o) where the object o is converted into an integer (actually, an integer object is constructed out of o). Coercion happens in the case of binary operations: if you do x+y, and x and y have different types, they are coerced into a single type before performing the operation. In 2.x, a special method __coerce__ allows objects to control their coercion.
What's the difference between casting and coercion in Python?
In the Python documentation and on mailing lists I see that values are sometimes "cast", and sometimes "coerced".
[ "Cast is explicit. Coerce is implicit.\nThe examples in Python would be:\ncast(2, POINTER(c_float)) #cast\n1.0 + 2 #coerce \n1.0 + float(2) #conversion\n\nCast really only comes up in the C FFI. What is typically called casting in C or Java is referred to as conversion in Python, though it often gets referred to as casting because of its similarities to those other languages. In pretty much every language that I have experience with (including Python), coercion is implicit type changing.\n", "I think \"casting\" shouldn't be used for Python; there are only type conversions, but no casts (in the C sense). A type conversion is done e.g. through int(o) where the object o is converted into an integer (actually, an integer object is constructed out of o). Coercion happens in the case of binary operations: if you do x+y, and x and y have different types, they are coerced into a single type before performing the operation. In 2.x, a special method __coerce__ allows objects to control their coercion.\n" ]
[ 43, 32 ]
[]
[]
[ "casting", "coercion", "python", "types" ]
stackoverflow_0001602122_casting_coercion_python_types.txt
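A toy example of the 2.x __coerce__ hook mentioned above, using an old-style class since that is where the classic coercion rules apply (the class itself is hypothetical):
class Meters:  # old-style class on purpose
    def __init__(self, value):
        self.value = value
    def __coerce__(self, other):
        # return the pair converted to a common type
        return (self.value, float(other))

print Meters(2.0) + 1  # coerced to (2.0, 1.0) before the add, prints 3.0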
Q: Foreign key needs a value from the key's table to match a column in another table Pardon the excessive amount of code, but I'm not sure if I can explain my question otherwise I have a Django project that I am working on which has the following: class Project(models.Model): name = models.CharField(max_length=100, unique=True) dir = models.CharField(max_length=300, blank=True, unique=True ) def __unicode__(self): return self.name; class ASClass(models.Model): name = models.CharField(max_length=100) project = models.ForeignKey(Project, default=1) def __unicode__(self): return self.name; class Entry(models.Model): project = models.ForeignKey(Project, default=1) asclasses = models.ManyToManyField(ASClass) Here's the question: Is there a way, without overriding the save function of the model, to make it so that entries only allow classes which have the same project ID? ***********************************************************Begin Edit********************************************************** To be clear, I am not opposed to overriding save. I actually already overrode it in this case to provide for a property not listed above. I already know how to answer this question by simply extending that override, so simply saying, "You could override save" won't be helpful. I'm wondering if there isn't a better way to accomplish this, if there is a Django native implementation, and if the key type already exists. ***********************************************************End Edit*********************************************************** Is there a way to do this in Postgresql as well? (For good measure, here is the code to create the tables in the Postgresql) This has created the following tables: CREATE TABLE blog_asclass ( id serial NOT NULL, "name" character varying(100) NOT NULL, project_id integer NOT NULL, CONSTRAINT blog_asclass_pkey PRIMARY KEY (id), CONSTRAINT blog_asclass_project_id_fkey FOREIGN KEY (project_id) REFERENCES blog_project (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ) CREATE TABLE blog_entry ( id serial NOT NULL, project_id integer NOT NULL, build_date timestamp with time zone NOT NULL, CONSTRAINT blog_entry_pkey PRIMARY KEY (id), CONSTRAINT blog_entry_project_id_fkey FOREIGN KEY (project_id) REFERENCES blog_project (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ) CREATE TABLE blog_entry_asclasses ( id serial NOT NULL, entry_id integer NOT NULL, asclass_id integer NOT NULL, CONSTRAINT blog_entry_asclasses_pkey PRIMARY KEY (id), CONSTRAINT blog_entry_asclasses_asclass_id_fkey FOREIGN KEY (asclass_id) REFERENCES blog_asclass (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED, CONSTRAINT blog_entry_asclasses_entry_id_fkey FOREIGN KEY (entry_id) REFERENCES blog_entry (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED, CONSTRAINT blog_entry_asclasses_entry_id_key UNIQUE (entry_id, asclass_id) ) CREATE TABLE blog_project ( id serial NOT NULL, "name" character varying(100) NOT NULL, dir character varying(300) NOT NULL, CONSTRAINT blog_project_pkey PRIMARY KEY (id), CONSTRAINT blog_project_dir_key UNIQUE (dir), CONSTRAINT blog_project_name_key UNIQUE (name) ) A: You could use the pre_save signal and raise an error if they do not match...
The effect would be similar to overriding save (it gets called before save). The problem is creating/deleting/updating the many-to-many relation will not trigger save (or consequently pre_save or post_save). Update: Try using the through argument on your many-to-many relation. That lets you manually define the intermediary table for the m2m relation, which will give you access to the signals, as well as the functions. Then you can choose signals or overloading as you please. A: I'm sure you could do it at the PostgreSQL level with a trigger, which you could add to a Django initial-SQL file so it's automatically created at syncdb. At the Django model level, in order to get a useful answer you'll have to clarify why you're opposed to overriding the save() method, since that is currently the correct (and perhaps only) way to provide this kind of validation. Django 1.2 will (hopefully) include a full model validation framework.
Foreign key needs a value from the key's table to match a column in another table
Pardon the excessive amount of code, but I'm not sure if I can explain my question otherwise I have a Django project that I am working on which has the following: class Project(models.Model): name = models.CharField(max_length=100, unique=True) dir = models.CharField(max_length=300, blank=True, unique=True ) def __unicode__(self): return self.name; class ASClass(models.Model): name = models.CharField(max_length=100) project = models.ForeignKey(Project, default=1) def __unicode__(self): return self.name; class Entry(models.Model): project = models.ForeignKey(Project, default=1) asclasses = models.ManyToManyField(ASClass) Here's the question: Is there a way, without overriding the save function of the model, to make it so that entries only allow classes which have the same project ID? ***********************************************************Begin Edit********************************************************** To be clear, I am not opposed to overriding save. I actually already overrode it in this case to provide for a property not listed above. I already know how to answer this question by simply extending that override, so simply saying, "You could override save" won't be helpful. I'm wondering if there isn't a better way to accomplish this, if there is a Django native implementation, and if the key type already exists. ***********************************************************End Edit*********************************************************** Is there a way to do this in Postgresql as well? (For good measure, here is the code to create the tables in the Postgresql) This has created the following tables: CREATE TABLE blog_asclass ( id serial NOT NULL, "name" character varying(100) NOT NULL, project_id integer NOT NULL, CONSTRAINT blog_asclass_pkey PRIMARY KEY (id), CONSTRAINT blog_asclass_project_id_fkey FOREIGN KEY (project_id) REFERENCES blog_project (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ) CREATE TABLE blog_entry ( id serial NOT NULL, project_id integer NOT NULL, build_date timestamp with time zone NOT NULL, CONSTRAINT blog_entry_pkey PRIMARY KEY (id), CONSTRAINT blog_entry_project_id_fkey FOREIGN KEY (project_id) REFERENCES blog_project (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED ) CREATE TABLE blog_entry_asclasses ( id serial NOT NULL, entry_id integer NOT NULL, asclass_id integer NOT NULL, CONSTRAINT blog_entry_asclasses_pkey PRIMARY KEY (id), CONSTRAINT blog_entry_asclasses_asclass_id_fkey FOREIGN KEY (asclass_id) REFERENCES blog_asclass (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED, CONSTRAINT blog_entry_asclasses_entry_id_fkey FOREIGN KEY (entry_id) REFERENCES blog_entry (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED, CONSTRAINT blog_entry_asclasses_entry_id_key UNIQUE (entry_id, asclass_id) ) CREATE TABLE blog_project ( id serial NOT NULL, "name" character varying(100) NOT NULL, dir character varying(300) NOT NULL, CONSTRAINT blog_project_pkey PRIMARY KEY (id), CONSTRAINT blog_project_dir_key UNIQUE (dir), CONSTRAINT blog_project_name_key UNIQUE (name) )
[ "You could use the pre_save signal and raise an error if they do not match... The effect would be similar to overriding save (it gets called before save)\nThe problem is creating/deleting/updating the many-to-many relation will not trigger save (or consequently pre_save or post_save)\nUpdate\nTry using the through argument on your many-to-many relation\nThat lets you manually define the intermediary table for the m2m relation, which will give you access to the signals, as well as the functions. \nThen you can choose signals or overloading as you please\n", "I'm sure you could do it at the PostgreSQL level with a trigger, which you could add to a Django initial-SQL file so it's automatically created at syncdb.\nAt the Django model level, in order to get a useful answer you'll have to clarify why you're opposed to overriding the save() method, since that is currently the correct (and perhaps only) way to provide this kind of validation.\nDjango 1.2 will (hopefully) include a full model validation framework.\n" ]
[ 2, 1 ]
[]
[]
[ "database", "django", "python" ]
stackoverflow_0001601586_database_django_python.txt
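A sketch of the through-table suggestion, reusing the models from the question; with an explicit intermediary model, each link row has a save() of its own where both sides are visible (the intermediary model name and the error type are illustrative):
class Entry(models.Model):
    project = models.ForeignKey(Project, default=1)
    asclasses = models.ManyToManyField(ASClass, through='EntryClass')

class EntryClass(models.Model):
    entry = models.ForeignKey('Entry')
    asclass = models.ForeignKey(ASClass)

    def save(self, *args, **kwargs):
        # reject links that cross project boundaries
        if self.entry.project_id != self.asclass.project_id:
            raise ValueError("Entry and ASClass belong to different projects")
        super(EntryClass, self).save(*args, **kwargs)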
Q: Problems scripting Unison with Python I am trying to make a simple script to automate and log synchronization via Unison. I am also using subprocess.Popen rather than the usual os.system call as it's deprecated. I've spent the past 2 days looking at docs and such trying to figure out what I'm doing wrong, but for some reason if I call unison from a terminal it works no problem, yet when I make the same call from Python it tries to do user interaction, and I'm only capturing about half of the output; the rest is still printing to the terminal. Here is my code I am trying to use: sync = Popen(["unison", "sync"], shell = True, stdout = PIPE) for line in sync.stdout: logFile.write(line) sync.wait() if sync.returncode == 0 or sync.returncode == None: logFile.write("Sync Completed Successfully\n") else: logFile.write("!! Sync Failed with a returncode of: " + str(sync.returncode) + "\n") Here is my Unison config file: root = /home/zephyrxero/temp/ root = /home/zephyrxero/test/ auto = true batch = true prefer = newer times = true owner = true group = true retry = 2 What am I doing wrong? Why isn't all output from Unison getting saved to my logfile, and why is it asking me for confirmation when the script runs, but not when I run it plainly from a terminal? UPDATE: Ok, thanks to Emil I'm now capturing all the output, but I still can't figure out why typing "unison sync" into a terminal is getting different results than when calling it from my script. A: The most likely culprit is that unison is sending some output to stderr instead of just stdout. Popen takes an additional stderr argument so you can try capturing that instead of (or in addition to) stdout. For a quick reference on standard streams see Wikipedia. A: Changed ["unison", "sync"] to simply ["unison sync"]...appears to be working without need for user interaction now, not sure why that would be any different...but works for me.
Problems scripting Unison with Python
I am trying to make a simple script to automate and log synchronization via Unison. I am also using subprocess.Popen rather than the usual os.system call as it's deprecated. I've spent the past 2 days looking at docs and such trying to figure out what I'm doing wrong, but for some reason if I call unison from a terminal it works no problem, yet when I make the same call from Python it tries to do user interaction, and in addition I'm only capturing about half of the output; the other half is still printing to the terminal. Here is the code I am trying to use: sync = Popen(["unison", "sync"], shell = True, stdout = PIPE) for line in sync.stdout: logFile.write(line) sync.wait() if sync.returncode == 0 or sync.returncode == None: logFile.write("Sync Completed Successfully\n") else: logFile.write("!! Sync Failed with a returncode of: " + str(sync.returncode) + "\n") Here is my Unison config file: root = /home/zephyrxero/temp/ root = /home/zephyrxero/test/ auto = true batch = true prefer = newer times = true owner = true group = true retry = 2 What am I doing wrong? Why isn't all output from Unison getting saved to my logfile, and why is it asking me for confirmation when the script runs, but not when I run it plainly from a terminal? UPDATE: Ok, thanks to Emil I'm now capturing all the output, but I still can't figure out why typing "unison sync" into a terminal is getting different results than when calling it from my script.
[ "The most likely culprit is that unison is sending some output to stderr instead of just stdout. Popen takes an additional stderr argument so you can try capturing that instead of (or in addition to) stdout).\nFor a quick reference on standard streams see Wikipedia.\n", "Changed [\"unison\", \"sync\"] to simply [\"unison sync\"]...appears to be working without need for user interaction now, not sure why that would be any different...but works for me.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "unison" ]
stackoverflow_0001602529_python_unison.txt
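To make the accepted fix concrete: on Unix, Popen with shell=True hands only the first element of an argument list to the shell, so ["unison", "sync"] effectively ran plain unison, which then prompted interactively. A sketch of two equivalent, safer calls, with stderr folded into the captured stream (logFile is assumed to be the question's open log file):

    from subprocess import Popen, PIPE, STDOUT

    # Either pass one string to the shell...
    sync = Popen("unison sync", shell=True, stdout=PIPE, stderr=STDOUT)
    # ...or skip the shell entirely and pass an argument list:
    # sync = Popen(["unison", "sync"], stdout=PIPE, stderr=STDOUT)

    for line in sync.stdout:
        logFile.write(line)
    sync.wait()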
Q: Python problem executing popen in cron I use popen to execute commands in a Python script, and I call it via cron. Cron calls out this script but the behavior isn't the same if I call it by hand. Source: from subprocess import Popen, PIPE pp = Popen('/usr/bin/which iptables', shell=True, stdout=PIPE) data = '' for ln in pp.stdout: data = data+ln if data == '': print 'ko' else: print 'ok : '+data By hand: # python /home/user/test.py > : /sbin/iptables By cron (in /tmp/err_cron): * * * * * /usr/bin/python /home/user/test.py >> /tmp/err_cron ko ko ko Why does cron not run this script normally? A: Normally when processes are run from cron, the PATH is set to a very restrictive value (the man page for my crontab says /usr/bin:/bin). You may need to add: PATH=/usr/bin:/bin:/sbin to the top of your crontab file.
Python problem executing popen in cron
I use popen to execute commands in a Python script, and I call it via cron. Cron calls out this script but the behavior isn't the same if I call it by hand. Source: from subprocess import Popen, PIPE pp = Popen('/usr/bin/which iptables', shell=True, stdout=PIPE) data = '' for ln in pp.stdout: data = data+ln if data == '': print 'ko' else: print 'ok : '+data By hand: # python /home/user/test.py > : /sbin/iptables By cron (in /tmp/err_cron): * * * * * /usr/bin/python /home/user/test.py >> /tmp/err_cron ko ko ko Why does cron not run this script normally?
[ "Normally when processes are run from cron, the PATH is set to a very restrictive value (the man page for my crontab says /usr/bin:/bin). You may need to add:\n\nPATH=/usr/bin:/bin:/sbin\n\nto the top of your crontab file.\n" ]
[ 20 ]
[]
[]
[ "console", "cron", "python" ]
stackoverflow_0001602830_console_cron_python.txt
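For reference, both sides of the fix look like this; the exact directories are system-dependent, so treat the paths below as placeholders:

    # In the crontab, before the job line:
    PATH=/usr/bin:/bin:/sbin:/usr/sbin
    * * * * * /usr/bin/python /home/user/test.py >> /tmp/err_cron

    # Or from Python, hand Popen an explicit environment:
    from subprocess import Popen, PIPE
    pp = Popen('which iptables', shell=True, stdout=PIPE,
               env={'PATH': '/usr/bin:/bin:/sbin:/usr/sbin'})

Since iptables lives in /sbin, the restrictive cron PATH is exactly why which found nothing under cron but succeeded in an interactive shell.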
Q: How do I wrangle python lookups: make.up.a.dot.separated.name.and.use.it.until.destroyed = 777 I'm a Python newbie with a very particular itch to experiment with Python's dot-name-lookup process. How do I code either a class or function in "make.py" so that these assignment statements work succesfully? import make make.a.dot.separated.name = 666 make.something.else.up = 123 make.anything.i.want = 777 A: #!/usr/bin/env python class Make: def __getattr__(self, name): self.__dict__[name] = Make() return self.__dict__[name] make = Make() make.a.dot.separated.name = 666 make.anything.i.want = 777 print make.a.dot.separated.name print make.anything.i.want The special __getattr__ method is called when a named value isn't found. The line make.anything.i.want ends up doing the equivalent of: m1 = make.anything # calls make.__getattr__("anything") m2 = m1.i # calls m1.__getattr__("i") m2.want = 777 The above implementation uses these calls to __getattr__ to create a chain of Make objects each time an unknown property is accessed. This allows the dot accesses to be nested arbitrarily deep until the final assignment at which point a real value is assigned. Python documentation - customizing attribute access: object.__getattr__(self, name) Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception. Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().) This is done both for efficiency reasons and because otherwise __getattr__() would have no way to access other attributes of the instance. Note that at least for instance variables, you can fake total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control in new-style classes. object.__setattr__(self, name, value) Called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it. If __setattr__() wants to assign to an instance attribute, it should not simply execute self.name = value — this would cause a recursive call to itself. Instead, it should insert the value in the dictionary of instance attributes, e.g., self.__dict__[name] = value. For new-style classes, rather than accessing the instance dictionary, it should call the base class method with the same name, for example, object.__setattr__(self, name, value).
How do I wrangle python lookups: make.up.a.dot.separated.name.and.use.it.until.destroyed = 777
I'm a Python newbie with a very particular itch to experiment with Python's dot-name-lookup process. How do I code either a class or function in "make.py" so that these assignment statements work successfully? import make make.a.dot.separated.name = 666 make.something.else.up = 123 make.anything.i.want = 777
[ "#!/usr/bin/env python\n\nclass Make:\n def __getattr__(self, name):\n self.__dict__[name] = Make()\n return self.__dict__[name]\n\nmake = Make()\n\nmake.a.dot.separated.name = 666\nmake.anything.i.want = 777\n\nprint make.a.dot.separated.name\nprint make.anything.i.want\n\nThe special __getattr__ method is called when a named value isn't found. The line make.anything.i.want ends up doing the equivalent of:\nm1 = make.anything # calls make.__getattr__(\"anything\")\nm2 = m1.i # calls m1.__getattr__(\"i\")\nm2.want = 777\n\nThe above implementation uses these calls to __getattr__ to create a chain of Make objects each time an unknown property is accessed. This allows the dot accesses to be nested arbitrarily deep until the final assignment at which point a real value is assigned.\nPython documentation - customizing attribute access:\n\nobject.__getattr__(self, name)\nCalled when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception.\nNote that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().) This is done both for efficiency reasons and because otherwise __getattr__() would have no way to access other attributes of the instance. Note that at least for instance variables, you can fake total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control in new-style classes.\nobject.__setattr__(self, name, value)\nCalled when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.\nIf __setattr__() wants to assign to an instance attribute, it should not simply execute self.name = value — this would cause a recursive call to itself. Instead, it should insert the value in the dictionary of instance attributes, e.g., self.__dict__[name] = value. For new-style classes, rather than accessing the instance dictionary, it should call the base class method with the same name, for example, object.__setattr__(self, name, value).\n\n" ]
[ 19 ]
[]
[]
[ "lookup", "namespaces", "python" ]
stackoverflow_0001602745_lookup_namespaces_python.txt
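One side effect of the accepted answer worth knowing: because __getattr__ creates a node for every miss, merely reading a misspelled name silently grows the tree instead of raising AttributeError. A tiny demo against the Make class above:

    m = Make()
    m.config.depth = 3
    print m.confg.depth   # typo: quietly creates m.confg and prints a fresh
                          # Make node instead of raising AttributeError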
Q: Django raw id field lookup has the wrong link I have a django app, and on the backend I've got a many to many field that I've set in the 'raw_id_fields' property in the ModelAdmin class. When running it locally, everything is fine, but when I test on the live site, the link to the lookup popout window doesn't work. The django app resides at example.com/djangoapp/ and the admin is example.com/djangoapp/admin/ The link that the admin is generating for the lookup is example.com/admin/lookup_url/ rather than example.com/djangoapp/admin/lookup_url/ Any ideas why this is happening? Other links within the admin work fine, it just seems to be these raw id lookups. Thanks for the help. Edit: In the source for the page when rendered, the breadcrumbs have the following: <div class="breadcrumbs"> <a href="../../../">Home</a> &rsaquo; This link works fine, going back to the root of the admin (example.com/djangoapp/admin/) The HTML for the broken lookup link is: <a href="../../../auth/user/?t=id" class="related-lookup" id="lookup_id_user" onclick="return showRelatedObjectLookupPopup(this);"> Looks like it might have something to do with the JS instead of the link itself. A: This sounds like a bug in Django, I've seen a few of this kind. I'm pretty sure it has to do with the fact that you placed your admin at example.com/djangoapp/admin/ instead of example.com/admin/ which is the default. I have a hunch that if you change the admin url, it will work.
Django raw id field lookup has the wrong link
I have a django app, and on the backend I've got a many to many field that I've set in the 'raw_id_fields' property in the ModelAdmin class. When running it locally, everything is fine, but when I test on the live site, the link to the lookup popout window doesn't work. The django app resides at example.com/djangoapp/ and the admin is example.com/djangoapp/admin/ The link that the admin is generating for the lookup is example.com/admin/lookup_url/ rather than example.com/djangoapp/admin/lookup_url/ Any ideas why this is happening? Other links within the admin work fine, it just seems to be these raw id lookups. Thanks for the help. Edit: In the source for the page when rendered, the breadcrumbs have the following: <div class="breadcrumbs"> <a href="../../../">Home</a> &rsaquo; This link works fine, going back to the root of the admin (example.com/djangoapp/admin/) The HTML for the broken lookup link is: <a href="../../../auth/user/?t=id" class="related-lookup" id="lookup_id_user" onclick="return showRelatedObjectLookupPopup(this);"> Looks like it might have something to do with the JS instead of the link itself.
[ "This sounds like a bug in Django, I've seen a few of this kind. I'm pretty sure it has to do with the fact that you placed your admin at example.com/djangoapp/admin/ instead of example.com/admin/ which is the default. I have a hunch that if you change the admin url, it will work.\n" ]
[ 1 ]
[]
[]
[ "django", "django_admin", "python" ]
stackoverflow_0001602607_django_django_admin_python.txt
Q: Exit a process while threads are sleeping In a python script, I started a bunch of threads, each of which pulls some resource at an interval using time.sleep(interval). I have another thread running, which uses the cmd module to monitor user inputs. When the user enters 'q', I call sys.exit(0) However, when the script is running and I enter 'q', the user input monitoring thread quits, but the sleeping threads are still alive. (meaning the program does not exit) Am I doing this the right way? A: sys.exit will only stop the thread it executes from. If you have other non-daemon threads in your program they will continue to execute. Section 17.2.1 of the Python library docs contains: A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property. See also Why does sys.exit() not exit when called inside a thread in Python?.
Exit a process while threads are sleeping
In a python script, I started a bunch of threads, each of which pulls some resource at an interval using time.sleep(interval). I have another thread running, which uses the cmd module to monitor user inputs. When the user enters 'q', I call sys.exit(0) However, when the script is running and I enter 'q', the user input monitoring thread quits, but the sleeping threads are still alive. (meaning the program does not exit) Am I doing this the right way?
[ "sys.exit will only stop the thread it executes from. If you have other non-daemon thread in your program they will continue to execute. Section 17.2.1 of the Python library docs contains:\n\nA thread can be flagged as a “daemon\n thread”. The significance of this flag\n is that the entire Python program\n exits when only daemon threads are\n left. The initial value is inherited\n from the creating thread. The flag can\n be set through the daemon property.\n\nSee also Why does sys.exit() not exit when called inside a thread in Python?.\n" ]
[ 5 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0001602743_multithreading_python.txt
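A minimal sketch of the daemon-thread route from the answer, using the Python 2.x threading API; the polling function here stands in for the question's resource pullers:

    import threading, time

    def poll(interval):
        while True:
            time.sleep(interval)     # stands in for "pull some resource"

    t = threading.Thread(target=poll, args=(5,))
    t.setDaemon(True)                # must be set before start()
    t.start()

With every worker marked as a daemon, sys.exit(0) in the thread watching for 'q' ends the whole program instead of leaving sleepers behind.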
Q: I want to create a "CGI script" in python that stays resident in memory and services multiple requests I have a website that right now, runs by creating static html pages from a cron job that runs nightly. I'd like to add some search and filtering features using a CGI type script, but my script will have enough of a startup time (maybe a few seconds?) that I'd like it to stay resident and serve multiple requests. This is a side-project I'm doing for fun, and it's not going to be super complex. I don't mind using something like Pylons, but I don't feel like I need or want an ORM layer. What would be a reasonable approach here? EDIT: I wanted to point out that for the load I'm expecting and processing I need to do on a request, I'm confident that a single python script in a single process could handle all requests without any slowdowns, especially since my dataset would be memory-resident. A: That's exactly what WSGI is for ;) I don't know off hand what the simplest way to turn a CGI script into a WSGI application is, though (I've always had that managed by a framework). It shouldn't be too tricky, though. That said, An Introduction to the Python Web Server Gateway Interface (WSGI) seems to be a reasonable introduction, and you'll also want to take a look at mod_wsgi (assuming you're using Apache…)
I want to create a "CGI script" in python that stays resident in memory and services multiple requests
I have a website that right now, runs by creating static html pages from a cron job that runs nightly. I'd like to add some search and filtering features using a CGI type script, but my script will have enough of a startup time (maybe a few seconds?) that I'd like it to stay resident and serve multiple requests. This is a side-project I'm doing for fun, and it's not going to be super complex. I don't mind using something like Pylons, but I don't feel like I need or want an ORM layer. What would be a reasonable approach here? EDIT: I wanted to point out that for the load I'm expecting and processing I need to do on a request, I'm confident that a single python script in a single process could handle all requests without any slowdowns, especially since my dataset would be memory-resident.
[ "That's exactly what WSGI is for ;)\nI don't know off hand what the simplest way to turn a CGI script into a WSGI application is, though (I've always had that managed by a framework). It shouldn't be too tricky, though.\nThat said, An Introduction to the Python Web Server Gateway Interface (WSGI) seems to be a reasonable introduction, and you'll also want to take a look at mod_wsgi (assuming you're using Apache…)\n" ]
[ 4 ]
[ "maybe you should direct your search towards inter process commmunication and make a search process that returns the results to the web server. This search process will be running all the time assuming you have your own server.\n" ]
[ -1 ]
[ "cgi", "frameworks", "pylons", "python" ]
stackoverflow_0001602516_cgi_frameworks_pylons_python.txt
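A sketch of what the long-lived WSGI version could look like with only the standard library (Python 2.5+). load_index and render_search are hypothetical stand-ins for the poster's expensive startup work and per-request filtering:

    from wsgiref.simple_server import make_server

    DATASET = load_index()           # hypothetical: paid once, at startup

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/html')])
        return [render_search(environ['QUERY_STRING'], DATASET)]  # hypothetical

    if __name__ == '__main__':
        make_server('', 8000, app).serve_forever()

Because the process stays resident, the few seconds of startup cost are paid once rather than per request, which was the original motivation.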
Q: Setting value for a node in XML document in Python I have an XML document "abc.xml": I need to write a function replace(name, newvalue) which can replace the value of the node with tag 'name' with the new value and write it back to the disk. Is this possible in python? How should I do this? A: Sure it is possible. The xml.etree.ElementTree module will help you with parsing XML, finding tags and replacing values. If you know a little bit more about the XML file you want to change, you can probably make the task a bit easier than if you need to write a generic function that will handle any XML file. If you are already familiar with DOM parsing, there's an xml.dom package to use instead of the ElementTree one. A: import xml.dom.minidom filename='abc.xml' doc = xml.dom.minidom.parse(filename) print doc.toxml() c = doc.getElementsByTagName("c") print c[0].toxml() c[0].childNodes[0].nodeValue = 'zip' print doc.toxml() def replace(tagname, newvalue): '''doc is global, first occurrence of tagname gets it!''' doc.getElementsByTagName(tagname)[0].childNodes[0].nodeValue = newvalue replace('c', 'zit') print doc.toxml() See minidom primer and API Reference. # cat abc.xml <root> <a> <c>zap</c> </a> <b> </b> </root>
Setting value for a node in XML document in Python
I have an XML document "abc.xml": I need to write a function replace(name, newvalue) which can replace the value of the node with tag 'name' with the new value and write it back to the disk. Is this possible in python? How should I do this?
[ "Sure it is possible. \nThe xml.etree.ElementTree module will help you with parsing XML, finding tags and replacing values.\nIf you know a little bit more about the XML file you want to change, you can probably make the task a bit easier than if you need to write a generic function that will handle any XML file.\nIf you are already familiar with DOM parsing, there's a xml.dom package to use instead of the ElementTree one.\n", "import xml.dom.minidom\nfilename='abc.xml'\ndoc = xml.dom.minidom.parse(filename)\nprint doc.toxml()\n\nc = doc.getElementsByTagName(\"c\")\nprint c[0].toxml()\nc[0].childNodes[0].nodeValue = 'zip'\nprint doc.toxml()\n\ndef replace(tagname, newvalue):\n '''doc is global, first occurrence of tagname gets it!'''\n doc.getElementsByTagName(tagname)[0].childNodes[0].nodeValue = newvalue\nreplace('c', 'zit')\n\nprint doc.toxml()\n\nSee minidom primer and API Reference.\n# cat abc.xml\n<root>\n <a>\n <c>zap</c>\n </a>\n <b>\n </b>\n</root>\n\n" ]
[ 2, 2 ]
[]
[]
[ "python", "python_3.x", "xml" ]
stackoverflow_0001602919_python_python_3.x_xml.txt
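A sketch of the requested replace(name, newvalue) on top of xml.etree.ElementTree, matching the abc.xml shown in the second answer; it rewrites the file in place and only touches the first element with that tag:

    from xml.etree import ElementTree

    def replace(name, newvalue, filename='abc.xml'):
        tree = ElementTree.parse(filename)
        node = tree.find('.//' + name)   # first matching element, at any depth
        if node is not None:
            node.text = newvalue
            tree.write(filename)

    replace('c', 'zip')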
Q: mysqldb on python 2.6+ (win32) I am currently using python 2.6 and I would like to use the win32 mysqldb module. Unfortunately it seems it needs the 2.5 version of Python. Is there any way to get rid of this mismatch in the version numbers and install mysqldb with python 2.6? A: There are versions of mysqldb for python 2.6, they're just not available on the official site. It took me a while (and unfortunately I lost the link) but you can search google and find people who have compiled and released 2.6 versions of mysqldb for windows x64 and x32. EDIT: http://sourceforge.net/forum/forum.php?thread_id=3108914&forum_id=70460 http://sourceforge.net/forum/forum.php?thread_id=2316047&forum_id=70460 That forum has a link to versions of mysqldb for Python 2.6 A: This one has both 32 and 64 versions for 2.6: http://www.codegood.com/archives/4
mysqldb on python 2.6+ (win32)
I am currently using python 2.6 and I would like to use the win32 mysqldb module. Unfortunately it seems it needs the 2.5 version of Python. Is there any way to get rid of this mismatch in the version numbers and install mysqldb with python 2.6?
[ "There are versions of mysqldb for python 2.6, they're just not available on the official site. It took me a while (and unfortunately I lost the link) but you can search google and find people who have compiled and released 2.6 versions of mysqldb for windows x64 and x32.\nEDIT:\nhttp://sourceforge.net/forum/forum.php?thread_id=3108914&forum_id=70460\nhttp://sourceforge.net/forum/forum.php?thread_id=2316047&forum_id=70460\nThat fourm has a link to versions of mysqldb for Python 2.6\n", "This one has both 32 and 64 versions for 2.6:\nhttp://www.codegood.com/archives/4\n" ]
[ 11, 7 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0000685869_mysql_python.txt
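Once one of those builds is installed, a quick smoke test looks like this; the connection details are placeholders for your own setup:

    import MySQLdb
    conn = MySQLdb.connect(host='localhost', user='root',
                           passwd='secret', db='test')
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")
    print cur.fetchone()
    conn.close()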
Q: Why can I not import the Python module 'signal' using Jython, in Linux? I can't find any reference to the 'signal' class being left out in Jython. Using Jython 2.1. Thanks A: I would imagine Unix-style signals are difficult to do on the JVM, since the JVM has no notion of signals, and it is likely some JNI magic would be required to get this to work. In Jython 2.5, the module exists, but seems to throw NotImplementedError for most functions.
Why can I not import the Python module 'signal' using Jython, in Linux?
I can't find any reference to the 'signal' class being left out in Jython. Using Jython 2.1. Thanks
[ "I would imagine Unix-style signals are difficult to do on the JVM, since the JVM has no notion of signals, and it is likely some JNI magic would be required to get this to work.\nIn Jython 2.5, the module exists, but seems to throw NotImplementedError for most functions.\n" ]
[ 2 ]
[]
[]
[ "jython", "python", "signals" ]
stackoverflow_0001603189_jython_python_signals.txt
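If the underlying goal was cleanup-on-termination, one JVM-side workaround is a shutdown hook, since Jython can subclass Java classes directly. This is only a sketch of a partial substitute, not a real signal handler:

    from java.lang import Runtime, Thread

    class Cleanup(Thread):
        def run(self):
            print "cleaning up before the JVM exits"

    Runtime.getRuntime().addShutdownHook(Cleanup())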
Q: Python idiom for 'Try until no exception is raised' I want my code to automatically try multiple ways to create a database connection. As soon as one works, the code needs to move on (i.e. it shouldn't try the other ways anymore). If they all fail, well, then the script can just blow up. So in - what I thought was, but most likely isn't - a stroke of genius I tried this: import psycopg2 from getpass import getpass # ouch, global variable, ooh well, it's just a simple script eh CURSOR = None def get_cursor(): """Create database connection and return standard cursor.""" global CURSOR if not CURSOR: # try to connect and get a cursor try: # first try the bog standard way: db postgres, user postgres and local socket conn = psycopg2.connect(database='postgres', user='postgres') except psycopg2.OperationalError: # maybe user pgsql? conn = psycopg2.connect(database='postgres', user='pgsql') except psycopg2.OperationalError: # maybe it was postgres, but on localhost? prolly need password then conn = psycopg2.connect(database='postgres', user='postgres', host='localhost', password=getpass()) except psycopg2.OperationalError: # or maybe it was pgsql and on localhost conn = psycopg2.connect(database='postgres', user='pgsql', host='localhost', password=getpass()) # alright, nothing blew up, so we have a connection # now make a cursor CURSOR = conn.cursor() # return existing or new cursor return CURSOR But it seems that the second and subsequent except statements aren't catching the OperationalErrors anymore. Probably because Python only catches an exception once in a try...except statement? Is that so? If not: is there anything else I'm doing wrong? If so: how do you do something like this then? Is there a standard idiom? (I know there are ways around this problem, like having the user specify the connection parameters on the command line, but that's not my question ok :) ) EDIT: I accepted retracile's excellent answer and I took in gnibbler's comment for using the for..else construct. The final code became (sorry, I'm not really following the max characters per line limit from pep8): EDIT 2: As you can see from the comment on the Cursor class: I don't really know how to call this kind of class. It's not really a singleton (I can have multiple different instances of Cursor) but when calling get_cursor I do get the same cursor object every time. So it's like a singleton factory? :)
import psycopg2 from getpass import getpass import sys class UnableToConnectError(Exception): pass class Cursor: """Cursor singleton factory?""" def __init__(self): self.CURSOR = None def __call__(self): if self.CURSOR is None: # try to connect and get a cursor attempts = [ {'database': 'postgres', 'user': 'postgres'}, {'database': 'postgres', 'user': 'pgsql'}, {'database': 'postgres', 'user': 'postgres', 'host': 'localhost', 'password': None}, {'database': 'postgres', 'user': 'pgsql', 'host': 'localhost', 'password': None}, ] for attempt in attempts: if 'password' in attempt: attempt['password'] = getpass(stream=sys.stderr) # tty and stderr are default in 2.6, but 2.5 uses sys.stdout, which I don't want try: conn = psycopg2.connect(**attempt) attempt.pop('password', None) sys.stderr.write("Successfully connected using: %s\n\n" % attempt) break # no exception raised, we have a connection, break out of for loop except psycopg2.OperationalError: pass else: raise UnableToConnectError("Unable to connect: exhausted standard permutations of connection dsn.") # alright, nothing blew up, so we have a connection # now make a cursor self.CURSOR = conn.cursor() # return existing or new cursor return self.CURSOR get_cursor = Cursor() A: Approximately: attempts = [ { 'database': 'postgres', 'user': 'pgsql', ...}, { 'database': 'postgres', 'user': 'postgres', 'host': 'localhost', 'password': getpass()}, ... ] conn = None for attempt in attempts: try: conn = psycopg2.connect(**attempt) break except psycopg2.OperationalError: pass if conn is None: raise a ruckus CURSOR = conn.cursor() Now, if you don't want to call getpass() unless it is necessary, you'd want to check if 'password' in attempt: attempt['password'] = getpass() or so. Now about that global.... class MyCursor: def __init__(self): self.CURSOR = None def __call__(self): if self.CURSOR is None: <insert logic here> return self.CURSOR get_cursor = MyCursor() ... though I think there are a couple of other ways to accomplish the same thing. Bringing it all together: class MyCursor: def __init__(self): self.CURSOR = None def __call__(self): if self.CURSOR is None: attempts = [ {'database': 'postgres', 'user': 'postgres'}, {'database': 'postgres', 'user': 'pgsql'}, {'database': 'postgres', 'user': 'postgres', 'host': 'localhost', 'password': True}, {'database': 'postgres', 'user': 'pgsql', 'host': 'localhost', 'password': True}, ] conn = None for attempt in attempts: if 'password' in attempt: attempt['password'] = getpass() try: conn = psycopg2.connect(**attempt) break # that didn't throw an exception, we're done except psycopg2.OperationalError: pass if conn is None: raise a ruckus # nothin' worked self.CURSOR = conn.cursor() return self.CURSOR get_cursor = MyCursor() Note: completely untested
Python idiom for 'Try until no exception is raised'
I want my code to automatically try multiple ways to create a database connection. As soon as one works, the code needs to move on (i.e. it shouldn't try the other ways anymore). If they all fail, well, then the script can just blow up. So in - what I thought was, but most likely isn't - a stroke of genius I tried this: import psycopg2 from getpass import getpass # ouch, global variable, ooh well, it's just a simple script eh CURSOR = None def get_cursor(): """Create database connection and return standard cursor.""" global CURSOR if not CURSOR: # try to connect and get a cursor try: # first try the bog standard way: db postgres, user postgres and local socket conn = psycopg2.connect(database='postgres', user='postgres') except psycopg2.OperationalError: # maybe user pgsql? conn = psycopg2.connect(database='postgres', user='pgsql') except psycopg2.OperationalError: # maybe it was postgres, but on localhost? prolly need password then conn = psycopg2.connect(database='postgres', user='postgres', host='localhost', password=getpass()) except psycopg2.OperationalError: # or maybe it was pgsql and on localhost conn = psycopg2.connect(database='postgres', user='pgsql', host='localhost', password=getpass()) # alright, nothing blew up, so we have a connection # now make a cursor CURSOR = conn.cursor() # return existing or new cursor return CURSOR But it seems that the second and subsequent except statements aren't catching the OperationalErrors anymore. Probably because Python only catches an exception once in a try...except statement? Is that so? If not: is there anything else I'm doing wrong? If so: how do you do something like this then? Is there a standard idiom? (I know there are ways around this problem, like having the user specify the connection parameters on the command line, but that's not my question ok :) ) EDIT: I accepted retracile's excellent answer and I took in gnibbler's comment for using the for..else construct. The final code became (sorry, I'm not really following the max characters per line limit from pep8): EDIT 2: As you can see from the comment on the Cursor class: I don't really know how to call this kind of class. It's not really a singleton (I can have multiple different instances of Cursor) but when calling get_cursor I do get the same cursor object every time. So it's like a singleton factory? :)
import psycopg2 from getpass import getpass import sys class UnableToConnectError(Exception): pass class Cursor: """Cursor singleton factory?""" def __init__(self): self.CURSOR = None def __call__(self): if self.CURSOR is None: # try to connect and get a cursor attempts = [ {'database': 'postgres', 'user': 'postgres'}, {'database': 'postgres', 'user': 'pgsql'}, {'database': 'postgres', 'user': 'postgres', 'host': 'localhost', 'password': None}, {'database': 'postgres', 'user': 'pgsql', 'host': 'localhost', 'password': None}, ] for attempt in attempts: if 'password' in attempt: attempt['password'] = getpass(stream=sys.stderr) # tty and stderr are default in 2.6, but 2.5 uses sys.stdout, which I don't want try: conn = psycopg2.connect(**attempt) attempt.pop('password', None) sys.stderr.write("Successfully connected using: %s\n\n" % attempt) break # no exception raised, we have a connection, break out of for loop except psycopg2.OperationalError: pass else: raise UnableToConnectError("Unable to connect: exhausted standard permutations of connection dsn.") # alright, nothing blew up, so we have a connection # now make a cursor self.CURSOR = conn.cursor() # return existing or new cursor return self.CURSOR get_cursor = Cursor()
[ "Approximately:\nattempts = [\n { 'database'='postgres', 'user'='pgsql', ...},\n { 'database'='postgres', 'user'='postgres', 'host'='localhost', 'password'=getpass()},\n ...\n]\nconn = None\nfor attempt in attempts:\n try:\n conn = psycopg2.connect(**attempt)\n break\n except psycopg2.OperationalError:\n pass\nif conn is None:\n raise a ruckus\nCURSOR = conn.cursor()\n\nNow, if you don't want to call getpass() unless it is necessary, you'd want to check if 'password' in attempt: attempt['password'] = getpass() or so.\nNow about that global....\nclass MyCursor:\n def __init__(self):\n self.CURSOR = None\n def __call__(self):\n if self.CURSOR is None:\n <insert logic here>\n return self.CURSOR\n\nget_cursor = MyCursor()\n\n... though I think there are a couple of other ways to accomplish the same thing.\nBringing it all together:\nclass MyCursor:\n def __init__(self):\n self.CURSOR = None\n def __call__(self):\n if self.CURSOR is None:\n attempts = [\n {'database'='postgres', 'user'='postgres'},\n {'database'='postgres', 'user'='pgsql'},\n {'database'='postgres', 'user'='postgres', 'host'='localhost', 'password'=True},\n {'database'='postgres', 'user'='pgsql', 'host'='localhost', 'password'=True},\n ]\n conn = None\n for attempt in attempts:\n if 'password' in attempt:\n attempt['password'] = getpass()\n try:\n conn = psycopg2.connect(**attempt)\n break # that didn't throw an exception, we're done\n except psycopg2.OperationalError:\n pass\n if conn is None:\n raise a ruckus # nothin' worked\n self.CURSOR = conn.cursor()\n return self.CURSOR\nget_cursor = MyCursor()\n\nNote: completely untested\n" ]
[ 14 ]
[ "You're close. Probably the best thing to do in this case is nesting the second and subsequent attempts in the except block. Thus the critical part of your code would look like:\nif not CURSOR:\n # try to connect and get a cursor\n try:\n # first try the bog standard way: db postgres, user postgres and local socket\n conn = psycopg2.connect(database='postgres', user='postgres')\n except psycopg2.OperationalError:\n # maybe user pgsql?\n try:\n conn = psycopg2.connect(database='postgres', user='pgsql')\n except psycopg2.OperationalError:\n # maybe it was postgres, but on localhost? prolly need password then\n try:\n conn = psycopg2.connect(database='postgres', user='postgres', host='localhost', password=getpass())\n except psycopg2.OperationalError:\n # or maybe it was pgsql and on localhost\n conn = psycopg2.connect(database='postgres', user='pgsql', host='localhost', password=getpass())\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0001603578_python.txt
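The for...else pattern in the accepted code generalizes nicely. A small helper version of the same idiom, reusing UnableToConnectError from the question's final code:

    def first_success(func, candidates, catch):
        """Call func(**kwargs) for each kwargs in candidates and return the
        first result that does not raise an exception listed in catch."""
        for kwargs in candidates:
            try:
                return func(**kwargs)
            except catch:
                pass
        raise UnableToConnectError("all candidates failed")

    # conn = first_success(psycopg2.connect, attempts, psycopg2.OperationalError)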
Q: Why is this simple python class not working? I'm trying to make a class that will get a list of numbers then print them out when I need. I need to be able to make 2 objects from the class to get two different lists. Here's what I have so far class getlist: def newlist(self,*number): lst=[] self.number=number lst.append(number) def printlist(self): return lst Sorry I'm not very clear, I'm a bit new to oop, can you please help me cos I don't know what I'm doing wrong. Thanks. A: In Python, when you are writing methods inside an object, you need to prefix all references to variables belonging to that object with self. - like so: class getlist: def newlist(self,*number): self.lst=[] self.lst += number #I changed this to add all args to the list def printlist(self): return self.lst The code you had before was creating and modifying a local variable called lst, so it would appear to "disappear" between calls. Also, it is usual to make a constructor, which has the special name __init__ : class getlist: #Init constructor def __init__(self,*number): self.lst=[] self.lst += number #I changed this to add all args to the list def printlist(self): return self.lst Finally, use like so >>> newlist=getlist(1,2,3, [4,5]) >>> newlist.printlist() [1, 2, 3, [4,5]] A: You should use "self.lst" instead of "lst". Without the "self", it's just internal variable to current method.
Why is this simple python class not working?
I'm trying to make a class that will get a list of numbers then print them out when I need. I need to be able to make 2 objects from the class to get two different lists. Here's what I have so far class getlist: def newlist(self,*number): lst=[] self.number=number lst.append(number) def printlist(self): return lst Sorry I'm not very clear, I'm a bit new to oop, can you please help me cos I don't know what I'm doing wrong. Thanks.
[ "In Python, when you are writing methods inside an object, you need to prefix all references to variables belonging to that object with self. - like so:\nclass getlist: \n def newlist(self,*number):\n self.lst=[]\n self.lst += number #I changed this to add all args to the list\n\n def printlist(self):\n return self.lst\n\nThe code you had before was creating and modifying a local variable called lst, so it would appear to \"disappear\" between calls.\nAlso, it is usual to make a constructor, which has the special name __init__ : \nclass getlist: \n #Init constructor\n def __init__(self,*number):\n self.lst=[]\n self.lst += number #I changed this to add all args to the list\n\n def printlist(self):\n return self.lst\n\nFinally, use like so\n>>> newlist=getlist(1,2,3, [4,5])\n>>> newlist.printlist()\n[1, 2, 3, [4,5]] \n\n", "You should use \"self.lst\" instead of \"lst\". Without the \"self\", it's just internal variable to current method.\n" ]
[ 7, 3 ]
[]
[]
[ "class", "list", "python" ]
stackoverflow_0001603696_class_list_python.txt
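With the corrected class from the first answer, the asker's stated goal, two objects holding two independent lists, works as expected:

    a = getlist(1, 2, 3)
    b = getlist(9, 8)
    print a.printlist()   # [1, 2, 3]
    print b.printlist()   # [9, 8]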
Q: CherryPy, load image from matplotlib, or in general I am not sure what I am doing wrong. It would be great if you could point me toward what to read. I have taken the first CherryPy tutorial "hello world" and added a little matplotlib plot. Question 1: how do I know where the file will be saved? It happens to be where I am running the file. Question 2: I don't seem to be able to get the image to open/view in my browser. When I view source in the browser everything looks right but no luck, even when I am including the full image path. I think my problem is with the path but I'm not sure about the mechanics of what is happening. Thanks for the help. Vincent import cherrypy import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt class HelloWorld: def index(self): fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1,2,3]) fig.savefig('test.png') return ''' <img src="test.png" width="640" height="480" border="0" /> ''' index.exposed = True import os.path tutconf = os.path.join(os.path.dirname(__file__), 'tutorial.conf') if __name__ == '__main__': cherrypy.quickstart(HelloWorld(), config=tutconf) else: cherrypy.tree.mount(HelloWorld(), config=tutconf) A: Below are some things that have worked for me, but before you proceed further I recommend that you read this page about how to configure directories which contain static content. Question 1: How do I know where the file will be saved? If you dictate where the file should be saved, the process of finding it should become easier. For example, you could save image files to a subdirectory called "img" within your CherryPy application directory like this: fig.savefig('img/test.png') # note: *no* forward slash before "img" And then display like this: return '<img src="/img/test.png" />' # note: forward slash before "img" Question 2: I don't seem to be able to get the image to open/view in my browser. Here is one way I've used to make static images available to a CherryPy application: if __name__ == '__main__': import os.path currdir = os.path.dirname(os.path.abspath(__file__)) conf = {'/css/style.css':{'tools.staticfile.on':True, 'tools.staticfile.filename':os.path.join(currdir,'css','style.css')}, '/img':{'tools.staticdir.on':True, 'tools.staticdir.dir':os.path.join(currdir,'img')}} cherrypy.quickstart(root, "/", config=conf)
CherryPy, load image from matplotlib, or in general
I am not sure what I am doing wrong. It would be great if you could point me toward what to read. I have taken the first CherryPy tutorial "hello world" and added a little matplotlib plot. Question 1: how do I know where the file will be saved? It happens to be where I am running the file. Question 2: I don't seem to be able to get the image to open/view in my browser. When I view source in the browser everything looks right but no luck, even when I am including the full image path. I think my problem is with the path but I'm not sure about the mechanics of what is happening. Thanks for the help. Vincent import cherrypy import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt class HelloWorld: def index(self): fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1,2,3]) fig.savefig('test.png') return ''' <img src="test.png" width="640" height="480" border="0" /> ''' index.exposed = True import os.path tutconf = os.path.join(os.path.dirname(__file__), 'tutorial.conf') if __name__ == '__main__': cherrypy.quickstart(HelloWorld(), config=tutconf) else: cherrypy.tree.mount(HelloWorld(), config=tutconf)
[ "Below are some things that have worked for me, but before you proceed further I recommend that you read this page about how to configure directories which contain static content.\nQuestion 1: How do I know where the file will be saved?\nIf you dictate where the file should be saved, the process of finding it should become easier.\nFor example, you could save image files to a subdirectory called \"img\" within your CherryPy application directory like this:\nfig.savefig('img/test.png') # note: *no* forward slash before \"img\"\n\nAnd then display like this:\nreturn '<img src=\"/img/test.png\" />' # note: forward slash before \"img\"\n\nQuestion 2: I don't seem to be [able to] get the image to open/view in my browser.\nHere is one way I've used to make static images available to a CherryPy application:\nif __name__ == '__main__':\n import os.path\n currdir = os.path.dirname(os.path.abspath(__file__))\n conf = {'/css/style.css':{'tools.staticfile.on':True,\n 'tools.staticfile.filename':os.path.join(currdir,'css','style.css')},\n '/img':{'tools.staticdir.on':True,\n 'tools.staticdir.dir':os.path.join(currdir,'img')}}\n cherrypy.quickstart(root, \"/\", config=conf)\n\n" ]
[ 5 ]
[]
[]
[ "cherrypy", "matplotlib", "python" ]
stackoverflow_0001603669_cherrypy_matplotlib_python.txt
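An alternative that sidesteps the static-file question entirely is to serve the PNG straight from memory. A sketch, assuming the same Agg setup and HelloWorld class as the question:

    import cStringIO

    class HelloWorld:
        def plot(self):
            fig = plt.figure()
            fig.add_subplot(111).plot([1, 2, 3])
            buf = cStringIO.StringIO()
            fig.canvas.print_png(buf)    # Agg canvas renders straight to the buffer
            cherrypy.response.headers['Content-Type'] = 'image/png'
            return buf.getvalue()
        plot.exposed = True

The index page can then embed it with <img src="/plot" /> and nothing is ever written to disk.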
Q: What is the right way to design an adventure game with PyGame? I have started development on a small 2d adventure side view game together with a couple of people. The game will consist of the regular elements: A room, a main character, an inventory, npcs, items and puzzles. We've chosen PyGame since we all are familiar with python from before. My question is quite theoretical, but how would we design this in a good way? Would every object on the screen talk to some main loop that blits everything to the screen? (Hope this question isn't too discussion-y) Thanks A: Python Adventure Writing System - http://home.fuse.net/wolfonenet/PAWS.htm - might be useful http://proquestcombo.safaribooksonline.com/1592000770 may also be useful
What is the right way to design an adventure game with PyGame?
I have started development on a small 2d adventure side view game together with a couple of people. The game will consist of the regular elements: A room, a main character, an inventory, npcs, items and puzzles. We've chosen PyGame since we all are familiar with python from before. My question is quite theoretical, but how would we design this in a good way? Would every object on the screen talk to some main loop that blits everything to the screen? (Hope this question isn't too discussion-y) Thanks
[ "Python Adventure Writing System - http://home.fuse.net/wolfonenet/PAWS.htm - might be useful\nhttp://proquestcombo.safaribooksonline.com/1592000770 may also be useful\n" ]
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0001603928_pygame_python.txt
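On the closing question: yes, the usual shape is exactly that — one main loop that asks every game object to update itself and then blits everything. A bare sketch, where Room and Player are hypothetical classes standing in for your own:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    room, player = Room(), Player()          # hypothetical game objects

    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                raise SystemExit
            player.handle(event)             # hypothetical input hook
        player.update()
        screen.blit(room.background, (0, 0))
        screen.blit(player.image, player.rect)
        pygame.display.flip()
        clock.tick(30)                       # cap at 30 frames per second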
Q: Deploying a web service to my Google App Engine application We made a simple application and using GoogleAppEngineLauncher (GAEL) ran that locally. Then we deployed, using GAEL again, to our appid. It works fine. Now, we made a web service. We ran that locally using GAEL and a very thin local python client. It works fine. We deployed that, and we get this message when we try to visit our default page: "Move along people, there is nothing to see here" We modified our local client and tried to run that against our google site and we got an error that looked like: Response is "text/plain", not "text/xml" Any ideas where we are falling down in our deployment or config for using a web service with google app engine? Any help appreciated! Thanks // :) A: Looks like you're not setting the Content-Type header correctly in your service (assuming you ARE actually trying to send XML -- e.g. SOAP, XML-RPC, &c). What code are you using to set that header? Without some indication about what protocol you're implementing and via what framework, it's impossible to help in detail...! A: Looks like we aren't going to get to the bottom of this one. Just not enough information available at debug time. We've managed to effect a fix on the service, although I hate to admit it, we never found out what was causing this bug.
Deploying a web service to my Google App Engine application
We made a simple application and using GoogleAppEngineLauncher (GAEL) ran that locally. Then we deployed, using GAEL again, to our appid. It works fine. Now, we made a web service. We ran that locally using GAEL and a very thin local python client. It works fine. We deployed that, and we get this message when we try to visit our default page: "Move along people, there is nothing to see here" We modified our local client and tried to run that against our google site and we got an error that looked like: Response is "text/plain", not "text/xml" Any ideas where we are falling down in our deployment or config for using a web service with google app engine? Any help appreciated! Thanks // :)
[ "Looks like you're not setting the Content-Type header correctly in your service (assuming you ARE actually trying to send XML -- e.g. SOAP, XML-RPC, &c). What code are you using to set that header? Without some indication about what protocol you're implementing and via what framework, it's impossible to help in detail...!\n", "Looks like we aren't going to get to the bottom of this one. Just not enough information available at debug time. We've managed to affect a fix on the service, although I hate ot admit it we never found out what was causing this bug.\n" ]
[ 1, 0 ]
[]
[]
[ "google_app_engine", "iphone", "python", "web_services" ]
stackoverflow_0001513038_google_app_engine_iphone_python_web_services.txt
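Given the symptom, the usual fix on the App Engine side is to set the Content-Type explicitly before writing the body. A sketch using the webapp framework of that era; build_response_xml is a placeholder for whatever produces the service's reply:

    from google.appengine.ext import webapp

    class ServiceHandler(webapp.RequestHandler):
        def post(self):
            self.response.headers['Content-Type'] = 'text/xml'
            self.response.out.write(build_response_xml(self.request))  # placeholder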
Q: Parallel SSH in Python I wonder what is the best way to handle parallel SSH connections in python. I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way. Is this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection. Thanks. A: It might be worth checking out what options are available in Twisted. For example, the Twisted.Conch page reports: http://twistedmatrix.com/users/z3p/files/conch-talk.html Unlike OpenSSH, the Conch server does not fork a process for each incoming connection. Instead, it uses the Twisted reactor to multiplex the connections. A: Yes, you can do this with paramiko. If you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect. I haven't looked into twisted conch in a while, but it looks like it getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko is easier to get going. It takes a little more effort to get into twisted, but it could be well worth it if you're doing other network programming. A: You can simply use subprocess.Popen for that purpose, without any problems. However, you might want to simply install cronjobs on the remote machines. :-) A: Reading the paramiko API docs, it looks like it is possible to open one ssh connection, and multiplex as many ssh tunnels on top of that as are wished. Common ssh clients (openssh) often do things like this automatically behind the scene if there is already a connection open. A: I've tried clusterssh, and I don't like the multiwindow model. Too confusing in the common case when everything works. I've tried pssh, and it has a few problems with quotation escaping and password prompting. The best I've used is dsh: Description: dancer's shell, or distributed shell Executes specified command on a group of computers using remote shell methods such as rsh or ssh. . dsh can parallelise job submission using several algorithms, such as using fan-out method or opening as much connections as possible, or using a window of connections at one time. It also supports "interactive mode" for interactive maintenance of remote hosts. . This tool is handy for administration of PC clusters, and multiple hosts. Its very flexible in scheduling and topology: you can request something close to a calling tree if need be. But the default is a simple topology of one command node to many leaf nodes. http://www.netfort.gr.jp/~dancer/software/dsh.html
Parallel SSH in Python
I wonder what is the best way to handle parallel SSH connections in python. I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way. Is this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection. Thanks.
[ "It might be worth checking out what options are available in Twisted. For example, the Twisted.Conch page reports:\n\nhttp://twistedmatrix.com/users/z3p/files/conch-talk.html\nUnlike OpenSSH, the Conch server does not fork a process for each incoming connection. Instead, it uses the Twisted reactor to multiplex the connections.\n\n", "Yes, you can do this with paramiko.\nIf you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect.\nI haven't looked into twisted conch in a while, but it looks like it getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko is easier to get going. It takes a little more effort to get into twisted, but it could be well worth it if you're doing other network programming.\n", "You can simply use subprocess.Popen for that purpose, without any problems.\nHowever, you might want to simply install cronjobs on the remote machines. :-)\n", "Reading the paramiko API docs, it looks like it is possible to open one ssh connection, and multiplex as many ssh tunnels on top of that as are wished. Common ssh clients (openssh) often do things like this automatically behind the scene if there is already a connection open.\n", "I've tried clusterssh, and I don't like the multiwindow model. Too confusing in the common case when everything works. \nI've tried pssh, and it has a few problems with quotation escaping and password prompting.\nThe best I've used is dsh:\n\n Description: dancer's shell, or distributed shell\n Executes specified command on a group of computers using remote shell\n methods such as rsh or ssh.\n .\n dsh can parallelise job submission using several algorithms, such as using\n fan-out method or opening as much connections as possible, or\n using a window of connections at one time.\n It also supports \"interactive mode\" for interactive maintenance of\n remote hosts.\n .\n This tool is handy for administration of PC clusters, and multiple hosts.\n \n\nIts very flexible in scheduling and topology: you can request something close to a calling tree if need be. But the default is a simple topology of one command node to many leaf nodes.\nhttp://www.netfort.gr.jp/~dancer/software/dsh.html\n" ]
[ 3, 3, 1, 1, 1 ]
[ "This might not be relevant to your question. But there are tools like pssh, clusterssh etc. that can parallely spawn connections. You can couple Expect with pssh to control them too.\n" ]
[ -1 ]
[ "parallel_processing", "python", "ssh" ]
stackoverflow_0001185855_parallel_processing_python_ssh.txt
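A sketch of the paramiko-plus-threads suggestion from the second answer; host names and the command are placeholders, and host-key/auth handling is reduced to the bare minimum:

    import threading
    import paramiko

    def run(host, cmd, results):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host)                  # assumes keys or an agent are set up
        stdin, stdout, stderr = client.exec_command(cmd)
        results[host] = stdout.read()
        client.close()

    results = {}
    threads = [threading.Thread(target=run, args=(h, 'uptime', results))
               for h in ('host1.example.com', 'host2.example.com')]
    for t in threads: t.start()
    for t in threads: t.join()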
Q: Do dictionaries in Python have a single repr value? In this question, it was suggested that calling repr on a dictionary would be a good way to store it in another dictionary. This would depend on repr being the same regardless of how the keys are ordered. Is this the case? PS. the most elegant solution to the original problem was actually using frozenset A: No, the order that keys are added to a dictionary can affect the internal data structure. When two items have the same hash value and end up in the same bucket then the order they are added to the dictionary matters. >>> (1).__hash__() 1 >>> (1 << 32).__hash__() 1 >>> repr({1: 'one', 1 << 32: 'not one'}) "{1: 'one', 4294967296L: 'not one'}" >>> repr({1 << 32: 'not one', 1: 'one'}) "{4294967296L: 'not one', 1: 'one'}" A: That's not the case -- key ordering is arbitrary. If you'd like to use a dictionary as a key, it should be converted into a fixed form (such as a sorted tuple). Of course, this won't work for dictionaries with non-hashable values. A: If you want to store a dictionary in another dictionary, there is no need to do any transformations first. If you want to use a dictionary as the key to another dictionary, then you need to transform it, ideally into a sorted tuple of key/value tuples.
Do dictionaries in Python have a single repr value?
In this question, it was suggested that calling repr on a dictionary would be a good way to store it in another dictionary. This would depend on repr being the same regardless of how the keys are ordered. Is this the case? PS. the most elegant solution to the original problem was actually using frozenset
[ "No, the order that keys are added to a dictionary can affect the internal data structure. When two items have the same hash value and end up in the same bucket then the order they are added to the dictionary matters.\n>>> (1).__hash__()\n1\n>>> (1 << 32).__hash__()\n1\n>>> repr({1: 'one', 1 << 32: 'not one'})\n\"{1: 'one', 4294967296L: 'not one'}\"\n>>> repr({1 << 32: 'not one', 1: 'one'})\n\"{4294967296L: 'not one', 1: 'one'}\"\n\n", "That's not the case -- key ordering is arbitrary.\nIf you'd like to use a dictionary as a key, it should be converted into a fixed form (such as a sorted tuple). Of course, this won't work for dictionaries with non-hashable values.\n", "If you want to store a dictionary in another dictionary, there is no need to do any transformations first. If you want to use a dictionary as the key to another dictionary, then you need to transform it, ideally into a sorted tuple of key/value tuples.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001604281_python.txt
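The frozenset trick the question alludes to, spelled out — it is order-independent and hashable, but only works when all of the dictionary's values are hashable themselves:

    d = {'a': 1, 'b': 2}
    outer = {frozenset(d.items()): 'payload'}
    # Any insertion order produces the same key:
    print outer[frozenset({'b': 2, 'a': 1}.items())]   # 'payload'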
Q: How to use JQuery and Django (ajax + HttpResponse)? Suppose I have an AJAX function: function callpage() { $.ajax({ method:"get", url:"/abc/", data:"x="+3, beforeSend:function() {}, success:function(html){ IF HTTPRESPONSE = "1" , ALERT SUCCESS! } }); return false; } When my "View" executes in Django, I want to return HttpResponse('1') or '0'. How can I know if it was successful, and then make that alert? A: The typical workflow is to have the server return a JSON object as text, and then interpret that object in the javascript. In your case you could return the text {"httpresponse":1} from the server, or use the python json library to generate that for you. JQuery has a nice json-reader (I just read the docs, so there might be mistakes in my examples) Javascript: $.getJSON("/abc/?x="+3, function(data){ if (data["HTTPRESPONSE"] == 1) { alert("success") } }); Django #you might need to easy_install this import json def your_view(request): # You can dump a lot of structured data into a json object, such as # lists and tuples json_data = json.dumps({"HTTPRESPONSE":1}) # json data is just a JSON string now. return HttpResponse(json_data, mimetype="application/json") An alternative View suggested by Issy (cute because it follows the DRY principle) def updates_after_t(request, id): response = HttpResponse() response['Content-Type'] = "text/javascript" response.write(serializers.serialize("json", TSearch.objects.filter(pk__gt=id))) return response A: Rather than do all this messy, low-level ajax and JSON stuff, consider using the taconite plugin for jQuery. You just make the call to the backend and it does the rest. It's well-documented and easy to debug -- especially if you are using Firebug with FF.
How to use JQuery and Django (ajax + HttpResponse)?
Suppose I have an AJAX function: function callpage{ $.ajax({ method:"get", url:"/abc/", data:"x="+3 beforeSend:function() {}, success:function(html){ IF HTTPRESPONSE = "1" , ALERT SUCCESS! } }); return false; } } When my "View" executes in Django, I want to return HttpResponse('1') or '0'. How can I know if it was successful, and then make that alert?
[ "The typical workflow is to have the server return a JSON object as text, and then interpret that object in the javascript. In your case you could return the text {\"httpresponse\":1} from the server, or use the python json library to generate that for you. \nJQuery has a nice json-reader (I just read the docs, so there might be mistakes in my examples)\nJavascript:\n$.getJSON(\"/abc/?x=\"+3,\n function(data){\n if (data[\"HTTPRESPONSE\"] == 1)\n {\n alert(\"success\")\n }\n });\n\nDjango\n#you might need to easy_install this\nimport json \n\ndef your_view(request):\n # You can dump a lot of structured data into a json object, such as \n # lists and tuples\n json_data = json.dumps({\"HTTPRESPONSE\":1})\n # json data is just a JSON string now. \n return HttpResponse(json_data, mimetype=\"application/json\")\n\nAn alternative View suggested by Issy (cute because it follows the DRY principle)\ndef updates_after_t(request, id): \n response = HttpResponse() \n response['Content-Type'] = \"text/javascript\" \n response.write(serializers.serialize(\"json\", \n TSearch.objects.filter(pk__gt=id))) \n return response \n\n", "Rather than do all this messy, low-level ajax and JSON stuff, consider using the taconite plugin for jQuery. You just make the call to the backend and it does the rest. It's well-documented and easy to debug -- especially if you are using Firebug with FF.\n" ]
[ 16, 2 ]
[]
[]
[ "django", "jquery", "python" ]
stackoverflow_0001527641_django_jquery_python.txt
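A sketch of the server half of that JSON round trip, with hypothetical names (the /abc/ URL and the HTTPRESPONSE key come from the answer above); the mimetype keyword matches Django releases of this era:

    import json
    from django.http import HttpResponse

    def abc(request):
        x = request.GET.get('x')          # "3" from the query string
        ok = 1 if x is not None else 0    # whatever "success" means here
        return HttpResponse(json.dumps({'HTTPRESPONSE': ok}),
                            mimetype='application/json')

The $.getJSON call shown in the first answer then reads data['HTTPRESPONSE'] and fires the alert.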
Q: Object oriented design? I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design. My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xzy]. Each account has an associated list of dicts. I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.) The user sees a list of accounts, then can choose an account and see the underlying transactions, with ability to add, delete, edit, etc. the accounts and transactions. I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals. In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some oop books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself. Any suggestions would be appreciated. A: You don't have to throw out structured programming to do object-oriented programming. The code is still structured, it just belongs to the objects rather than being separate from them. In classical programming, code is the driving force that operates on data, leading to a dichotomy (and the possibility that code can operate on the wrong data). In OO, data and code are inextricably entwined - an object contains both data and the code to operate on that data (although technically the code (and sometimes some data) belongs to the class rather than an individual object). Any client code that wants to use those objects should do so only by using the code within that object. This prevents the code/data mismatch problem. For a bookkeeping system, I'd approach it as follows: Low-level objects are accounts and categories (actually, in accounting, there's no difference between these, this is a false separation only exacerbated by Quicken et al to separate balance sheet items from P&L - I'll refer to them as accounts only). An account object consists of (for example) an account code, name and starting balance, although in the accounting systems I've worked on, starting balance is always zero - I've always used a "startup" transaction to set the balances initially. Transactions are a balanced object which consist of a group of accounts/categories with associated movements (changes in dollar value). By balanced, I mean they must sum to zero (this is the crux of double entry accounting). This means it's a date, description and an array or vector of elements, each containing an account code and value. The overall accounting "object" (the ledger) is then simply the list of all accounts and transactions. Keep in mind that this is the "back-end" of the system (the data model). You will hopefully have separate classes for viewing the data (the view) which will allow you to easily change it, depending on user preferences. For example, you may want the whole ledger, just the balance sheet or just the P&L. Or you may want different date ranges. One thing I'd stress to make a good accounting system: you do need to think like a bookkeeper. By that I mean lose the artificial difference between "accounts" and "categories" since it will make your system a lot cleaner (you need to be able to have transactions between two asset-class accounts (such as a bank transfer) and this won't work if every transaction needs a "category"). The data model should reflect the data, not the view. The only difficulty there is remembering that asset-class accounts have the opposite sign from which you expect (negative values for your cash-at-bank mean you have money in the bank and your very high positive value loan for that company sports car is a debt, for example). This will make the double-entry aspect work perfectly but you have to remember to reverse the signs of asset-class accounts (assets, liabilities and equity) when showing or printing the balance sheet. A: Not a direct answer to your question but O'Reilly's Head First Object-Oriented Analysis and Design is an excellent place to start. Followed by Head First Design Patterns A: "My data is basically a list of accounts" Account is a class. "dicts that represent transactions" Transaction appears to be a class. You happen to have elected to represent this as a dict. That's your first pass at OO design. Focus on the Responsibilities and Collaborators. You have at least two classes of objects. A: There are many 'mindsets' that you could adopt to help in the design process (some of which point towards OO and some that don't). I think it is often better to start with questions rather than answers (i.e. rather than say, 'how can I apply inheritance to this' you should ask how this system might expect to change over time). Here are a few questions to answer that might point you towards design principles: Are others going to use this API? Are they likely to break it? (info hiding) do I need to deploy this across many machines? (state management, lifecycle management) do I need to interoperate with other systems, runtimes, languages? (abstraction and standards) what are my performance constraints? (state management, lifecycle management) what kind of security environment does this component live in? (abstraction, info hiding, interoperability) how would I construct my objects, assuming I used some? (configuration, inversion of control, object decoupling, hiding implementation details) These aren't direct answers to your question, but they might put you in the right frame of mind to answer it yourself. :)
Object oriented design?
I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design. My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xzy]. Each account has an associated list of dicts. I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.) The user sees a list of accounts, then can choose an account and see the underlying transactions, with ability to add, delete, edit, etc. the accounts and transactions. I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals. In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some oop books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself. Any suggestions would be appreciated.
[ "You don't have to throw out structured programming to do object-oriented programming. The code is still structured, it just belongs to the objects rather than being separate from them.\nIn classical programming, code is the driving force that operates on data, leading to a dichotomy (and the possibility that code can operate on the wrong data).\nIn OO, data and code are inextricably entwined - an object contains both data and the code to operate on that data (although technically the code (and sometimes some data) belongs to the class rather than an individual object). Any client code that wants to use those objects should do so only by using the code within that object. This prevents the code/data mismatch problem.\nFor a bookkeeping system, I'd approach it as follows:\n\nLow-level objects are accounts and categories (actually, in accounting, there's no difference between these, this is a false separation only exacerbated by Quicken et al to separate balance sheet items from P&L - I'll refer to them as accounts only). An account object consists of (for example) an account code, name and starting balance, although in the accounting systems I've worked on, starting balance is always zero - I've always used a \"startup\" transaction to set the balances initially.\nTransactions are a balanced object which consist of a group of accounts/categories with associated movements (changes in dollar value). By balanced, I mean they must sum to zero (this is the crux of double entry accounting). This means it's a date, description and an array or vector of elements, each containing an account code and value.\nThe overall accounting \"object\" (the ledger) is then simply the list of all accounts and transactions.\n\nKeep in mind that this is the \"back-end\" of the system (the data model). You will hopefully have separate classes for viewing the data (the view) which will allow you to easily change it, depending on user preferences. For example, you may want the whole ledger, just the balance sheet or just the P&L. Or you may want different date ranges.\nOne thing I'd stress to make a good accounting system: you do need to think like a bookkeeper. By that I mean lose the artificial difference between \"accounts\" and \"categories\" since it will make your system a lot cleaner (you need to be able to have transactions between two asset-class accounts (such as a bank transfer) and this won't work if every transaction needs a \"category\"). The data model should reflect the data, not the view.\nThe only difficulty there is remembering that asset-class accounts have the opposite sign from which you expect (negative values for your cash-at-bank mean you have money in the bank and your very high positive value loan for that company sports car is a debt, for example). This will make the double-entry aspect work perfectly but you have to remember to reverse the signs of asset-class accounts (assets, liabilities and equity) when showing or printing the balance sheet.\n", "Not a direct answer to your question but O'Reilly's Head First Object-Oriented Analysis and Design is an excellent place to start.\nFollowed by Head First Design Patterns\n", "\"My data is basically a list of accounts\"\nAccount is a class.\n\"dicts that represent transactions\"\nTransaction appears to be a class. You happen to have elected to represent this as a dict.\nThat's your first pass at OO design. \nFocus on the Responsibilities and Collaborators.\nYou have at least two classes of objects.\n", "There are many 'mindsets' that you could adopt to help in the design process (some of which point towards OO and some that don't). I think it is often better to start with questions rather than answers (i.e. rather than say, 'how can I apply inheritance to this' you should ask how this system might expect to change over time).\nHere are a few questions to answer that might point you towards design principles:\n\nAre others going to use this API? Are they likely to break it? (info hiding)\ndo I need to deploy this across many machines? (state management, lifecycle management)\ndo I need to interoperate with other systems, runtimes, languages? (abstraction and standards)\nwhat are my performance constraints? (state management, lifecycle management)\nwhat kind of security environment does this component live in? (abstraction, info hiding, interoperability)\nhow would I construct my objects, assuming I used some? (configuration, inversion of control, object decoupling, hiding implementation details)\n\nThese aren't direct answers to your question, but they might put you in the right frame of mind to answer it yourself. :)\n" ]
[ 7, 5, 3, 1 ]
[ "Rather than using dicts to represent your transactions, a better container would be a namedtuple from the collections module. A namedtuple is a subclass of tuple which allows you to reference it's items by name as well as index number.\nSince you may possibly have thousands of transactions in your journal lists, it pays to keep these items as small and light-weight as possible so that processing, sorting, searching, etc. is as fast and responsive as possible. A dict is a fairly heavy-weight object compared to a namedtuple which takes up no more memory than an ordinary tuple. A namedtuple also has the added advantage of keeping it's items in order, unlike a dict.\n>>> import sys\n>>> from collections import namedtuple\n>>> sys.getsizeof((1,2,3,4,5,6,7,8))\n60\n>>> ntc = namedtuple('ntc', 'one two three four five six seven eight')\n>>> xnt = ntc(1,2,3,4,5,6,7,8)\n>>> sys.getsizeof(xnt)\n60\n>>> xdic = dict(one=1, two=2, three=3, four=4, five=5, six=6, seven=7, eight=8)\n>>> sys.getsizeof(xdic)\n524\n\nSo you see that's almost 9 times saving in memory for an eight item transaction.\nI'm using Python 3.1, so your milage may vary.\n" ]
[ -1 ]
[ "oop", "python" ]
stackoverflow_0001604391_oop_python.txt
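A bare-bones sketch of the ledger design the first answer describes; the names and the zero-sum check are illustrative, not a complete accounting system:

    class Account(object):
        def __init__(self, code, name):
            self.code, self.name = code, name

    class Transaction(object):
        def __init__(self, date, description, movements):
            # movements: list of (account, amount) pairs; double entry
            # means they must balance to zero.
            if sum(amount for _, amount in movements) != 0:
                raise ValueError('unbalanced transaction')
            self.date, self.description, self.movements = date, description, movements

    class Ledger(object):
        def __init__(self):
            self.accounts, self.transactions = [], []
        def balance(self, account):
            return sum(amount for t in self.transactions
                       for acct, amount in t.movements if acct is account)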
Q: How to get output? I am using the Python/C API with my app and am wondering how you can get console output with a gui app. When there is a script error, it is displayed via printf but this obviously has no effect with a gui app. I want to be able to obtain the output without creating a console. Can this be done? Edit - I'm using Windows, btw. Edit - The Python/C library internally calls printf and does so before any script can be loaded and run. If there is an error I want to be able to get it. A: Use the logging package instead of printf. You can use something similar if you need to log output from a C function. A: If by printf you mean exactly that call from C code, you need to redirect (and un-buffer) your standard output (file descriptor 1) to somewhere you can pick up the data from -- far from trivial, esp. in Windows, although maybe doable. But why not just change that call in your C code to something more sensible? (Worst case, a geprintf function of your own devising that mimics printf to build a string then directs that string appropriately). If you actually mean print statements in Python code, it's much easier -- just set sys.stdout to an object with a write method accepting a string, and you can have that method do whatever you want, including logging, writing on a GUI window, whatever you wish. Ah were it that simple at the C level!-)
How to get output?
I am using the Python/C API with my app and am wondering how you can get console output with a gui app. When there is a script error, it is displayed via printf but this obviously has no effect with a gui app. I want to be able to obtain the output without creating a console. Can this be done? Edit - I'm using Windows, btw. Edit - The Python/C library internally calls printf and does so before any script can be loaded and run. If there is an error I want to be able to get it.
[ "Use the logging package instead of printf. You can use something similar if you need to log output from a C function.\n", "If by printf you mean exactly that call from C code, you need to redirect (and un-buffer) your standard output (file descriptor 1) to somewhere you can pick up the data from -- far from trivial, esp. in Windows, although maybe doable. But why not just change that call in your C code to something more sensible? (Worst case, a geprintf function of your own devising that mimics printf to build a string then directs that string appropriately).\nIf you actually mean print statements in Python code, it's much easier -- just set sys.stdout to an object with a write method accepting a string, and you can have that method do whatever you want, including logging, writing on a GUI window, whatever you wish. Ah were it that simple at the C level!-)\n" ]
[ 1, 1 ]
[]
[]
[ "python", "user_interface" ]
stackoverflow_0001604811_python_user_interface.txt
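A sketch of the sys.stdout redirect suggested in the second answer, written as Python the embedding C++ app could run (for instance via PyRun_SimpleString) before loading any script; the collected text is then reachable from C through the sys module. It only catches Python-level writes and tracebacks, not raw printf calls made inside the library itself:

    import sys

    class _Catcher(object):
        def __init__(self):
            self.data = []
        def write(self, text):
            # file-like: print statements and traceback output land here
            self.data.append(text)

    sys.stdout = sys.stderr = _Catcher()
    # later, fetch ''.join(sys.stdout.data) and show it in the GUI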
Q: Parsing a text file with Python? I have to do an assignment where I have a .txt file that contains something like this p There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain... h1 this is another example of what this text file looks like I am supposed to write Python code that parses this text file and creates an XHTML file. I need to find a starting point for this project because I am very new to Python and not familiar with a lot of this stuff. This Python code is supposed to take each of these "tags" from this text file and put them into an XHTML file. I hope that what I ask makes sense to you. Any help is greatly appreciated, Thanks in advance! -bojan A: You say you're very new to Python, so I'll start at the very low-level. You can iterate over the lines in a file very simply in Python fyle = open("contents.txt") for lyne in fyle : # Do string processing here fyle.close() Now how to parse it. If each formatting directive (e.g. p, h1), is on a separate line, you can check that easily. I'd build up a dictionary of handlers and get the handler like so: handlers= {"p": # p tag handler "h1": # h1 tag handler } # ... in the loop if lyne.rstrip() in handlers : # strip to remove trailing whitespace # close current handler? # start new handler? else : # pass string to current handler You could do what Daniel Pryden suggested and create an in-memory data structure first, and then serialize that to XHTML. In that case, the handlers would know how to build the objects corresponding to each tag. But I think the simpler solution, especially if you don't have lots of time, is just to go straight to XHTML, keeping a stack of the current enclosed tags. In that case your "handler" may just be some simple logic to write the tags to the output file/string. I can't say more without knowing the specifics of your problem. And besides, I don't want to do all your homework for you. This should give you a good start. A: Rather than going directly from the text file you describe to an XHTML file, I would transform it into an intermediate in-memory representation first. So I would build classes to represent the p and h1 tags, and then go through the text file and build those objects and put them into a list (or even a more complex object, but from the looks of your file a list should be sufficient). Then I would pass the list to another function that would loop through the p and h1 objects and output them as XHTML. As an added bonus, I would make each tag object (say, Paragraph and Heading1 classes) implement an as_xhtml() method, and delegate the actual formatting to that method. Then the XHTML output loop could be something like: for tag in input_tags: xhtml_file.write(tag.as_xhtml())
Parsing a text file with Python?
I have to do an assignment where I have a .txt file that contains something like this p There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain... h1 this is another example of what this text file looks like I am supposed to write Python code that parses this text file and creates an XHTML file. I need to find a starting point for this project because I am very new to Python and not familiar with a lot of this stuff. This Python code is supposed to take each of these "tags" from this text file and put them into an XHTML file. I hope that what I ask makes sense to you. Any help is greatly appreciated, Thanks in advance! -bojan
[ "You say you're very new to Python, so I'll start at the very low-level. You can iterate over the lines in a file very simply in Python\nfyle = open(\"contents.txt\")\nfor lyne in fyle :\n # Do string processing here\nfyle.close()\n\nNow how to parse it. If each formatting directive (e.g. p, h1), is on a separate line, you can check that easily. I'd build up a dictionary of handlers and get the handler like so:\nhandlers= {\"p\": # p tag handler\n \"h1\": # h1 tag handler\n }\n\n# ... in the loop\n if lyne.rstrip() in handlers : # strip to remove trailing whitespace\n # close current handler?\n # start new handler?\n else :\n # pass string to current handler\n\nYou could do what Daniel Pryden suggested and create an in-memory data structure first, and then serialize that to XHTML. In that case, the handlers would know how to build the objects corresponding to each tag. But I think the simpler solution, especially if you don't have lots of time, is just to go straight to XHTML, keeping a stack of the current enclosed tags. In that case your \"handler\" may just be some simple logic to write the tags to the output file/string.\nI can't say more without knowing the specifics of your problem. And besides, I don't want to do all your homework for you. This should give you a good start.\n", "Rather than going directly from the text file you describe to an XHTML file, I would transform it into an intermediate in-memory representation first.\nSo I would build classes to represent the p and h1 tags, and then go through the text file and build those objects and put them into a list (or even a more complex object, but from the looks of your file a list should be sufficient). Then I would pass the list to another function that would loop through the p and h1 objects and output them as XHTML.\nAs an added bonus, I would make each tag object (say, Paragraph and Heading1 classes) implement an as_xhtml() method, and delegate the actual formatting to that method. Then the XHTML output loop could be something like:\nfor tag in input_tags:\n xhtml_file.write(tag.as_xhtml())\n\n" ]
[ 10, 1 ]
[]
[]
[ "parsing", "python", "string", "text_files" ]
stackoverflow_0001604074_parsing_python_string_text_files.txt
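Pulling the two answers together, a minimal sketch; it assumes each tag name (p, h1, ...) sits alone on a line with its text on the lines that follow, which may not match the real assignment format exactly:

    TAGS = ('p', 'h1')

    def txt_to_xhtml(inname, outname):
        fyle = open(inname)
        out = open(outname, 'w')
        current = None
        for lyne in fyle:
            word = lyne.strip()
            if word in TAGS:
                if current:
                    out.write('</%s>\n' % current)   # close the previous element
                current = word
                out.write('<%s>' % current)
            else:
                out.write(lyne.strip() + ' ')        # body text for the open tag
        if current:
            out.write('</%s>\n' % current)
        fyle.close()
        out.close()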
Q: How can I work out the class hierarchy given an object instance in Python? Is there any way to discover the base class of a class in Python? Given the following class definitions: class A: def speak(self): print "Hi" class B(A): def getName(self): return "Bob" If I received an instance of an object I can easily work out that it is a B by doing the following: instance = B() print B.__class__.__name__ Which prints the class name 'B' as expected. Is there any way to discover that the instance of an object inherits from a base class as well as the actual class? Or is that just not how objects in Python work? A: The inspect module is really powerful also: >>> import inspect >>> inst = B() >>> inspect.getmro(inst.__class__) (<class __main__.B at 0x012B42A0>, <class __main__.A at 0x012B4210>) A: b = B() b.__class__ b.__class__.__base__ b.__class__.__bases__ b.__class__.__base__.__subclasses__() I strongly recommend checking out ipython and use the tab completion :-) A: Another way to get the class hierarchy is to access the __mro__ attribute: class A(object): pass class B(A): pass instance = B() print(instance.__class__.__mro__) # (<class '__main__.B'>, <class '__main__.A'>, <type 'object'>) Note that in Python 2.x, you must use "new-style" objects to ensure they have the __mro__ attribute. You do this by declaring class A(object): instead of class A(): See http://www.python.org/doc/newstyle/ and http://www.python.org/download/releases/2.3/mro/ for more info on new-style objects and __mro__ (method resolution order). In Python 3.x, all objects are new-style objects, so you can use __mro__ and simply declare objects this way: class A(): A: If instead of discovery of the baseclass (i.e reflection) you know the desired class in advance you could use the following built in functions eg: # With classes from original Question defined >>> instance = A() >>> B_instance = B() >>> isinstance(instance, A) True >>> isinstance(instance, B) False >>> isinstance(B_instance, A) # Note it returns true if instance is a subclass True >>> isinstance(B_instance, B) True >>> issubclass(B, A) True isinstance( object, classinfo) Return true if the object argument is an instance of the classinfo argument, or of a (direct or indirect) subclass thereof. Also return true if classinfo is a type object and object is an object of that type. If object is not a class instance or an object of the given type, the function always returns false. If classinfo is neither a class object nor a type object, it may be a tuple of class or type objects, or may recursively contain other such tuples (other sequence types are not accepted). If classinfo is not a class, type, or tuple of classes, types, and such tuples, a TypeError exception is raised. Changed in version 2.2: Support for a tuple of type information was added. issubclass( class, classinfo) Return true if class is a subclass (direct or indirect) of classinfo. A class is considered a subclass of itself. classinfo may be a tuple of class objects, in which case every entry in classinfo will be checked. In any other case, a TypeError exception is raised. Changed in version 2.3: Support for a tuple of type information was added.
How can I work out the class hierarchy given an object instance in Python?
Is there any way to discover the base class of a class in Python? Given the following class definitions: class A: def speak(self): print "Hi" class B(A): def getName(self): return "Bob" If I received an instance of an object I can easily work out that it is a B by doing the following: instance = B() print B.__class__.__name__ Which prints the class name 'B' as expected. Is there any way to discover that the instance of an object inherits from a base class as well as the actual class? Or is that just not how objects in Python work?
[ "The inspect module is really powerful also:\n>>> import inspect\n\n>>> inst = B()\n>>> inspect.getmro(inst.__class__)\n(<class __main__.B at 0x012B42A0>, <class __main__.A at 0x012B4210>)\n\n", "b = B()\nb.__class__\nb.__class__.__base__\nb.__class__.__bases__\nb.__class__.__base__.__subclasses__()\n\nI strongly recommend checking out ipython and use the tab completion :-)\n", "Another way to get the class hierarchy is to access the __mro__ attribute:\nclass A(object):\n pass\nclass B(A):\n pass\ninstance = B()\nprint(instance.__class__.__mro__)\n# (<class '__main__.B'>, <class '__main__.A'>, <type 'object'>)\n\nNote that in Python 2.x, you must use \"new-style\" objects to ensure they have the\n__mro__ attribute. You do this by declaring \nclass A(object):\n\ninstead of \nclass A():\n\nSee http://www.python.org/doc/newstyle/ and http://www.python.org/download/releases/2.3/mro/ for more info on new-style objects and __mro__ (method resolution order).\nIn Python 3.x, all objects are new-style objects, so you can use __mro__ and simply declare objects this way:\nclass A():\n\n", "If instead of discovery of the baseclass (i.e reflection) you know the desired class in advance you could use the following built in functions\neg:\n# With classes from original Question defined\n>>> instance = A()\n>>> B_instance = B()\n>>> isinstance(instance, A)\nTrue\n>>> isinstance(instance, B)\nFalse\n>>> isinstance(B_instance, A) # Note it returns true if instance is a subclass\nTrue\n>>> isinstance(B_instance, B)\nTrue\n>>> issubclass(B, A)\nTrue\n\n\nisinstance( object, classinfo)\nReturn true if the object argument is an instance of the classinfo\nargument, or of a (direct or indirect)\nsubclass thereof. Also return true if\nclassinfo is a type object and object\nis an object of that type. If object\nis not a class instance or an object\nof the given type, the function always\nreturns false. If classinfo is neither\na class object nor a type object, it\nmay be a tuple of class or type\nobjects, or may recursively contain\nother such tuples (other sequence\ntypes are not accepted). If classinfo\nis not a class, type, or tuple of\nclasses, types, and such tuples, a\nTypeError exception is raised. Changed\nin version 2.2: Support for a tuple of\ntype information was added.\nissubclass( class, classinfo)\nReturn true if class is a subclass (direct or indirect) of classinfo. A\nclass is considered a subclass of\nitself. classinfo may be a tuple of\nclass objects, in which case every\nentry in classinfo will be checked. In\nany other case, a TypeError exception\nis raised. Changed in version 2.3:\nSupport for a tuple of type\ninformation was added.\n\n" ]
[ 6, 5, 4, 0 ]
[]
[]
[ "inheritance", "oop", "python" ]
stackoverflow_0001603964_inheritance_oop_python.txt
Q: Emulate processing with python? I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options? A: "Similar to processing except in python" screams "NodeBox" to me. NodeBox is OSX-only, and I don't know if it allows pixel-level manipulation, but much of its command set was derived directly from processing. You can find it at the NodeBox site. A: Well, this is as close as it gets: http://code.google.com/p/pyprocessing/ A: You could get pretty close to processing with vpython: http://vpython.org/ The primitives are very easy to work with, and it is adept at animation. I am not sure what kind of pixel manipulation you are looking for, but there may be something for that as well. A: I prefer pyglet to pygame (but I'm not sure exactly what your needs will be): http://www.pyglet.org/ If you need a 3d engine: http://www.panda3d.org/ http://www.pysoy.org/ Someone's already mentioned Shoebot, which is probably the closest in spirit to Processing: http://tinkerhouse.net/shoebot/ A: There's a quite recent C++ library, SFML, which is a good alternative to SDL. Thus its Python bindings should be a good alternative to Pygame.
Emulate processing with python?
I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options?
[ "\"Similar to processing except in python\" screams \"NodeBox\" to me. NodeBox is OSX-only, and I don't know if it allows pixel-level manipulation, but much of its command set was derived directly from processing. You can find it at the NodeBox site.\n", "Well, this is as close as it gets: http://code.google.com/p/pyprocessing/\n", "You could get pretty close to processing with vpython:\nhttp://vpython.org/\nThe primitives are very easy to work with, and it is adept at animation. \nI am not sure what kind of pixel manipulation you are looking for, but there may be something for that as well.\n", "I prefer pyglet to pygame (but I'm not sure exactly what your needs will be):\n\nhttp://www.pyglet.org/\n\nIf you need a 3d engine:\n\nhttp://www.panda3d.org/\nhttp://www.pysoy.org/\n\nSomeone's already mentioned Shoebot, which is probably the closest in spirit to Processing:\n\nhttp://tinkerhouse.net/shoebot/\n\n", "There's a quite recent C++ library, SFML, which is a good alternative to SDL.\nThus its Python bindings should be a good alternative to Pygame.\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "processing", "pygame", "python" ]
stackoverflow_0001150897_processing_pygame_python.txt
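For scale, the sort of per-pixel animation loop the question is after looks like this in pygame (window size, frame rate and colours are arbitrary):

    import pygame, random

    pygame.init()
    screen = pygame.display.set_mode((320, 240))
    clock = pygame.time.Clock()
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # direct pixel manipulation: one random white dot per frame
        x, y = random.randrange(320), random.randrange(240)
        screen.set_at((x, y), (255, 255, 255))
        pygame.display.flip()
        clock.tick(30)
    pygame.quit()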
Q: Pure python solution to convert XHTML to PDF I am after a pure Python solution (for the GAE) to convert webpages to pdf. I had a look at reportlab but the documentation focuses on generating pdfs from scratch, rather than converting from HTML. What do you recommend? - pisa? Edit: My use case is I have a HTML report that I want to make available in PDF too. I will make updates to this report structure so I don't want to maintain a separate PDF version, but (hopefully) convert automatically. Also because I generate the report HTML I can ensure it is well formed XHTML to make the PDF conversion easier. A: Pisa claims to support what I want to do: pisa is a html2pdf converter using the ReportLab Toolkit, the HTML5lib and pyPdf. It supports HTML 5 and CSS 2.1 (and some of CSS 3). It is completely written in pure Python so it is platform independent. The main benefit of this tool that a user with Web skills like HTML and CSS is able to generate PDF templates very quickly without learning new technologies. Easy integration into Python frameworks like CherryPy, KID Templating, TurboGears, Django, Zope, Plone, Google AppEngine (GAE) etc. So I will investigate it further A: Have you considered pyPdf? I doubt it has anywhere like the functional richness you require, but, it IS a start, and is in pure Python. The PdfFileWriter class would be the one to generate PDF output, unfortunately it requires PageObject instances and doesn't provide real ways to put those together, except extracting them from existing PDF documents. Unfortunately all richer pdf page-generation packages I can find do appear to depend on reportlab or other non-pure-Python libraries:-(. A: What you're asking for is a pure Python HTML renderer, which is a big task to say the least ('real' renderers like webkit are the product of thousands of hours of work). As far as I'm aware, there aren't any. Instead of looking for an HTML to PDF converter, what I'd suggest is building your report in a format that's easily converted to both - for example, you could build it as a DOM (a set of linked objects), and write converters for both HTML and PDF output. This is a much more limited problem than converting HTML to PDF, and hence much easier to implement.
Pure python solution to convert XHTML to PDF
I am after a pure Python solution (for the GAE) to convert webpages to pdf. I had a look at reportlab but the documentation focuses on generating pdfs from scratch, rather than converting from HTML. What do you recommend? - pisa? Edit: My use case is I have a HTML report that I want to make available in PDF too. I will make updates to this report structure so I don't want to maintain a separate PDF version, but (hopefully) convert automatically. Also because I generate the report HTML I can ensure it is well formed XHTML to make the PDF conversion easier.
[ "Pisa claims to support what I want to do:\n\npisa is a html2pdf converter using the\n ReportLab Toolkit, the HTML5lib and\n pyPdf. It supports HTML 5 and CSS 2.1\n (and some of CSS 3). It is completely\n written in pure Python so it is\n platform independent. The main benefit\n of this tool that a user with Web\n skills like HTML and CSS is able to\n generate PDF templates very quickly\n without learning new technologies.\n Easy integration into Python\n frameworks like CherryPy, KID\n Templating, TurboGears, Django, Zope,\n Plone, Google AppEngine (GAE) etc.\n\nSo I will investigate it further\n", "Have you considered pyPdf? I doubt it has anywhere like the functional richness you require, but, it IS a start, and is in pure Python. The PdfFileWriter class would be the one to generate PDF output, unfortunately it requires PageObject instances and doesn't provide real ways to put those together, except extracting them from existing PDF documents. Unfortunately all richer pdf page-generation packages I can find do appear to depend on reportlab or other non-pure-Python libraries:-(.\n", "What you're asking for is a pure Python HTML renderer, which is a big task to say the least ('real' renderers like webkit are the product of thousands of hours of work). As far as I'm aware, there aren't any.\nInstead of looking for an HTML to PDF converter, what I'd suggest is building your report in a format that's easily converted to both - for example, you could build it as a DOM (a set of linked objects), and write converters for both HTML and PDF output. This is a much more limited problem than converting HTML to PDF, and hence much easier to implement.\n" ]
[ 8, 4, 4 ]
[]
[]
[ "google_app_engine", "pdf", "python" ]
stackoverflow_0001598715_google_app_engine_pdf_python.txt
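A sketch of the pisa call for that report use case; the import path moved between pisa/xhtml2pdf releases, so treat the module names as approximate:

    import ho.pisa as pisa        # newer releases: from xhtml2pdf import pisa
    from StringIO import StringIO

    def report_to_pdf(xhtml, pdf_path):
        result = open(pdf_path, 'wb')
        ctx = pisa.CreatePDF(StringIO(xhtml), result)
        result.close()
        return not ctx.err        # pisa flags conversion errors on the context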
Q: What files do I need to include with my Python app? I have an app that uses the python/c api and I was wondering what files I need to distribute with it? The app runs on Windows and links with libpython31.a Are there any other files? I tried the app on a separate Win2k system and it said that python31.dll was needed so there's at least one. Edit - My app is written in C++ and uses the Python/C api as noted below. A: The best way to tell is to try it on 'clean' installations of windows and see what it complains about. Virtual machines are a good way to do that. A: You'll need at least Python's own DLL (release-specific) and the wincrt DLL version it requires, also Python version dependent (if you want to run on releases of Windows that don't come with that DLL). The popular py2exe, the not-widely-known but hugely powerful Pyinstaller (NOTE: use the svn version, not the released version which is aeons behind), and similar package-makers, do a good job of identifying and solving all such dependencies, so there's no case for doing it by hand!
What files do I need to include with my Python app?
I have an app that uses the python/c api and I was wondering what files I need to distribute with it? The app runs on Windows and links with libpython31.a Are there any other files? I tried the app on a separate Win2k system and it said that python31.dll was needed so there's at least one. Edit - My app is written in C++ and uses the Python/C api as noted below.
[ "The best way to tell is to try it on 'clean' installations of windows and see what it complains about. Virtual machines are a good way to do that.\n", "You'll need at least Python's own DLL (release-specific) and the wincrt DLL version it requires, also Python version dependent (if you want to run on releases of Windows that don't come with that DLL). The popular py2exe, the not-widely-known but hugely powerful Pyinstaller (NOTE: use the svn version, not the released version which is aeons behind), and similar package-makers, do a good job of identifying and solving all such dependencies, so there's no case for doing it by hand!\n" ]
[ 2, 1 ]
[]
[]
[ "file", "python", "runtime" ]
stackoverflow_0001605022_file_python_runtime.txt
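If the entry point were a Python script rather than a C++ executable, py2exe (Python 2.x era; 3.1 support came much later) would collect pythonXY.dll and the matching C runtime automatically. A minimal setup.py for reference:

    from distutils.core import setup
    import py2exe

    setup(console=['myapp.py'])   # run: python setup.py py2exe

For an embedding C++ app, the dependency list those tools compute is still a useful checklist of what to ship.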
Q: Python - Using the Multiply Operator to Create Copies of Objects in Lists In Python, if I multiply a list of objects by an integer, I get a list of references to that object, e.g.: >>> a = [[]] * 3 >>> a [[], [], []] >>> a[0].append(1) >>> a [[1], [1], [1]] If my desired behavior is to create a list of copies of the original object (e.g. copies created by the "copy.copy()" method or something sort of standard), is there an elegant way to do this with the same multiplication operator? Or should I just stick with a list comprehension or something? E.g. [[] for x in range(0,3)] Any version of Python is fine. A: This is a good usage of list comprehension - it's also the most readable way to do it IMO. So the [[] for x in range(0,3)] you suggest isn't the multiplication operator, but gets the result you want. A: The multiplication operator on a sequence means repetition of the item(s) -- NOT creation of copies (shallow or deep ones) of the items. Nothing stops you from going crazy, a la: import copy class Crazy(object): def __init__(self, body, weird=copy.copy): self.gomez = body self.cousinitt = weird def __mul__(self, n): return [self.cousinitt(x) for x in (self.gomez * n)] a = Crazy([[]]) * 3 ...except your sanity and common sense, if any. Checking on those, how DID you dream operator * could be made to mean something utterly different than it's intended to mean, except by defining another class overloading __mul__ in weird ways...?-) A: The list comprehension is the best way to do this. If you define a new class and overload the * operator, it will seriously confuse the next person to read the code.
Python - Using the Multiply Operator to Create Copies of Objects in Lists
In Python, if I multiply a list of objects by an integer, I get a list of references to that object, e.g.: >>> a = [[]] * 3 >>> a [[], [], []] >>> a[0].append(1) >>> a [[1], [1], [1]] If my desired behavior is to create a list of copies of the original object (e.g. copies created by the "copy.copy()" method or something sort of standard), is there an elegant way to do this with the same multiplication operator? Or should I just stick with a list comprehension or something? E.g. [[] for x in range(0,3)] Any version of Python is fine.
[ "This is a good usage of list comprehension - it's also the most readable way to do it IMO.\nSo the [[] for x in range(0,3)] you suggest isn't the multiplication operator, but gets the result you want.\n", "The multiplication operator on a sequence means repetition of the item(s) -- NOT creation of copies (shallow or deep ones) of the items. Nothing stops you from going crazy, a la:\nimport copy\n\nclass Crazy(object):\n def __init__(self, body, weird=copy.copy):\n self.gomez = body\n self.cousinitt = weird\n def __mul__(self, n):\n return [self.cousinitt(x) for x in (self.gomez * n)]\n\na = Crazy([[]]) * 3\n\n...except your sanity and common sense, if any. Checking on those, how DID you dream operator * could be made to mean something utterly different than it's intended to mean, except by defining another class overloading __mul__ in weird ways...?-)\n", "The list comprehension is the best way to do this. If you define a new class and overload the * operator, it will seriously confuse the next person to read the code.\n" ]
[ 17, 4, 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001605024_list_python.txt
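For completeness, the comprehension-with-copies version the question hints at, using copy.deepcopy so nested objects come out independent too:

    import copy

    proto = {'hits': []}
    a = [copy.deepcopy(proto) for _ in range(3)]
    a[0]['hits'].append(1)
    print a    # [{'hits': [1]}, {'hits': []}, {'hits': []}]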
Q: python fileinput changes permission In my python code, I use the fileinput module for inplace replacing: import fileinput for line in fileinput.FileInput("permission.txt",inplace=1): line = line.strip() if not 'def' in line: print line else: line=line.replace(line,'zzz') print line fileinput.close() However, once it is done, permission.txt's permissions are now changed to root-only access. I can no longer edit the file. I can only delete it. I did some googling and it mentioned that this could be caused because fileinput creates a temporary file for this read/write replace interaction. However, I would have thought there would be a fix for this since the bug was reported in 1999. Is there something I have to do in my code to keep the permissions the same, or is it an operating system level issue? I'm using Python 2.6.2 on Ubuntu 9.04 A: If you can help it, don't run your script as root. EDIT Well, the answer has been accepted, but it's not really much of an answer. In case you must run the script as root (or indeed as any other user), you can use os.stat() to determine the user id and group id of the file's owner before processing the file, and then restore the file ownership after processing. import fileinput import os # save original file ownership details stat = os.stat('permission.txt') uid, gid = stat[4], stat[5] for line in fileinput.FileInput("permission.txt",inplace=1): line = line.strip() if not 'def' in line: print line else: line=line.replace(line,'zzz') print line fileinput.close() # restore original file ownership os.chown("permission.txt", uid, gid)
python fileinput changes permission
In my python code, I use the fileinput module for inplace replacing: import fileinput for line in fileinput.FileInput("permission.txt",inplace=1): line = line.strip() if not 'def' in line: print line else: line=line.replace(line,'zzz') print line fileinput.close() However, once it is done, permission.txt's permissions are now changed to root-only access. I can no longer edit the file. I can only delete it. I did some googling and it mentioned that this could be caused because fileinput creates a temporary file for this read/write replace interaction. However, I would have thought there would be a fix for this since the bug was reported in 1999. Is there something I have to do in my code to keep the permissions the same, or is it an operating system level issue? I'm using Python 2.6.2 on Ubuntu 9.04
[ "If you can help it, don't run your script as root.\nEDIT\nWell, the answer has been accepted, but it's not really much of an answer. In case you must run the script as root (or indeed as any other user), you can use os.stat() to determine the user id and group id of the file's owner before processing the file, and then restore the file ownership after processing.\nimport fileinput\nimport os\n\n# save original file ownership details\nstat = os.stat('permission.txt')\nuid, gid = stat[4], stat[5]\n\nfor line in fileinput.FileInput(\"permission.txt\",inplace=1):\n line = line.strip()\n if not 'def' in line:\n print line\n else:\n line=line.replace(line,'zzz')\n print line\n\n\nfileinput.close()\n\n# restore original file ownership\nos.chown(\"permission.txt\", uid, gid)\n\n" ]
[ 2 ]
[]
[]
[ "file_io", "file_permissions", "python" ]
stackoverflow_0001605288_file_io_file_permissions_python.txt
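The same trick extends to the permission bits; an untested sketch combining os.chmod with the os.chown from the answer:

    import os, stat

    st = os.stat('permission.txt')
    uid, gid, mode = st.st_uid, st.st_gid, stat.S_IMODE(st.st_mode)

    # ... the fileinput.FileInput(..., inplace=1) loop from above ...

    os.chown('permission.txt', uid, gid)
    os.chmod('permission.txt', mode)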
Q: How to launch and run external script in background? I tried these two methods: os.system("python test.py") subprocess.Popen("python test.py", shell=True) Both approaches need to wait until test.py finishes, which blocks the main process. I know "nohup" can do the job. Is there a Python way to launch test.py or any other shell script and leave it running in the background? Suppose test.py is like this: for i in range(0, 1000000): print i Both os.system() and subprocess.Popen() will block the main program until 1000000 lines of output are displayed. What I want is to let test.py run silently and display the main program's output only. The main program may quit while test.py is still running. A: subprocess.Popen(["python", "test.py"]) should work. Note that the job might still die when your main script exits. In this case, try subprocess.Popen(["nohup", "python", "test.py"]) A: os.spawnlp(os.P_NOWAIT, "path_to_test.py", "test.py")
How to launch and run external script in background?
I tried these two methods: os.system("python test.py") subprocess.Popen("python test.py", shell=True) Both approaches need to wait until test.py finishes, which blocks the main process. I know "nohup" can do the job. Is there a Python way to launch test.py or any other shell script and leave it running in the background? Suppose test.py is like this: for i in range(0, 1000000): print i Both os.system() and subprocess.Popen() will block the main program until 1000000 lines of output are displayed. What I want is to let test.py run silently and display the main program's output only. The main program may quit while test.py is still running.
[ "subprocess.Popen([\"python\", \"test.py\"]) should work.\nNote that the job might still die when your main script exits. In this case, try subprocess.Popen([\"nohup\", \"python\", \"test.py\"])\n", "os.spawnlp(os.P_NOWAIT, \"path_to_test.py\", \"test.py\")\n\n" ]
[ 39, 1 ]
[]
[]
[ "background_process", "external_process", "python" ]
stackoverflow_0001605520_background_process_external_process_python.txt
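To get the "runs silently" part as well, point the child's output at os.devnull when launching it; a sketch:

    import os, subprocess

    devnull = open(os.devnull, 'w')
    subprocess.Popen(['python', 'test.py'], stdout=devnull, stderr=devnull)
    # the main program carries on; test.py's million prints go nowhere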
Q: how to put in transaction I have a class: class AccountTransaction(db.Model): account = db.ReferenceProperty(reference_class=Account) tran_date = db.DateProperty() debit_credit = db.IntegerProperty() ## -1, 1 amount = db.FloatProperty() comment = db.StringProperty() pair = db.SelfReferenceProperty() so, what I want is to make a Save() method which would run the following steps in the transaction: to save AccountTransaction to save paired AccountTransaction (the pair of paired transaction is self - circular reference) to update balances of each of the two Accounts - the account of primary & the account of paired transaction It is possible that the parents of transactions be their Accounts, but yet it seems impossible to make an entity group of these entities. Described in terms of RDBMS, this means that I want one table to have two foreign keys (one entity - two parents). What to do? At first, I tried not to manage the balances, but it seems too slow to calculate it every time ... What to do? A: Because your Account entities can't all be in the same entity group, you can't perform an update in a single transaction. There are techniques to do this, particularly in the 'money transfer' case you've encountered - I wrote a blog post about this exact subject, in fact.
how to put in transaction
I have a class: class AccountTransaction(db.Model): account = db.ReferenceProperty(reference_class=Account) tran_date = db.DateProperty() debit_credit = db.IntegerProperty() ## -1, 1 amount = db.FloatProperty() comment = db.StringProperty() pair = db.SelfReferenceProperty() so, what I want is to make a Save() method which would run the following steps in the transaction: to save AccountTransaction to save paired AccountTransaction (the pair of paired transaction is self - circular reference) to update balances of each of the two Accounts - the account of primary & the account of paired transaction It is possible that the parents of transactions be their Accounts, but yet it seems impossible to make an entity group of these entities. Described in terms of RDBMS, this means that I want one table to have two foreign keys (one entity - two parents). What to do? At first, I tried not to manage the balances, but it seems too slow to calculate it every time ... What to do?
[ "Because your Account entities can't all be in the same entity group, you can't perform an update in a single transaction. There are techniques to do this, particularly in the 'money transfer' case you've encountered - I wrote a blog post about this exact subject, in fact.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001604021_google_app_engine_python.txt
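Within one entity group the update itself is straightforward. A sketch with db.run_in_transaction, assuming (hypothetically) that both AccountTransaction entities were created with the same Account as parent and that Account carries a balance property; the two-account transfer still needs the technique from the linked post:

    from google.appengine.ext import db

    def post_pair(account_key, tran, pair):
        tran.put()
        pair.put()
        account = db.get(account_key)            # account_key: the Account's db.Key
        account.balance += tran.debit_credit * tran.amount   # balance is an assumed property
        account.put()

    db.run_in_transaction(post_pair, account_key, tran, pair)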
Q: Pythonic way to print a table I'm using this simple function: def print_players(players): tot = 1 for p in players: print '%2d: %15s \t (%d|%d) \t was: %s' % (tot, p['nick'], p['x'], p['y'], p['oldnick']) tot += 1 and I'm supposing nicks are no longer than 15 characters. I'd like to keep each "column" aligned, is there some syntactic sugar allowing me to do the same but keeping the nicknames column left-aligned instead of right-aligned, without breaking the column on the right? The equivalent, uglier, code would be: def print_players(players): tot = 1 for p in players: print '%2d: %s \t (%d|%d) \t was: %s' % (tot, p['nick']+' '*(15-len(p['nick'])), p['x'], p['y'], p['oldnick']) tot += 1 Thanks to all, here is the final version: def print_players(players): for tot, p in enumerate(players, start=1): print '%2d:'%tot, '%(nick)-12s (%(x)d|%(y)d) \t was %(oldnick)s'%p A: To left-align instead of right-align, use %-15s instead of %15s. A: Slightly off topic, but you can avoid performing explicit addition on tot using enumerate: for tot, p in enumerate(players, start=1): print '...' A: Or if you're using Python 2.6 you can use the format method of the string: This defines a dictionary of values, and uses them for display: >>> values = {'total':93, 'name':'john', 'x':33, 'y':993, 'oldname':'rodger'} >>> '{total:2}: {name:15} \t ({x}|{y}\t was: {oldname}'.format(**values) '93: john \t (33|993\t was: rodger' A: Seeing that p seems to be a dict, how about: print '%2d' % tot + ': %(nick)-15s \t (%(x)d|%(y)d) \t was: %(oldnick)15s' % p
Pythonic way to print a table
I'm using this simple function: def print_players(players): tot = 1 for p in players: print '%2d: %15s \t (%d|%d) \t was: %s' % (tot, p['nick'], p['x'], p['y'], p['oldnick']) tot += 1 and I'm supposing nicks are no longer than 15 characters. I'd like to keep each "column" aligned, is there some syntactic sugar allowing me to do the same but keeping the nicknames column left-aligned instead of right-aligned, without breaking the column on the right? The equivalent, uglier, code would be: def print_players(players): tot = 1 for p in players: print '%2d: %s \t (%d|%d) \t was: %s' % (tot, p['nick']+' '*(15-len(p['nick'])), p['x'], p['y'], p['oldnick']) tot += 1 Thanks to all, here is the final version: def print_players(players): for tot, p in enumerate(players, start=1): print '%2d:'%tot, '%(nick)-12s (%(x)d|%(y)d) \t was %(oldnick)s'%p
[ "To left-align instead of right-align, use %-15s instead of %15s.\n", "Slightly off topic, but you can avoid performing explicit addition on tot using enumerate:\nfor tot, p in enumerate(players, start=1):\n print '...'\n\n", "Or if you're using Python 2.6 you can use the format method of the string:\nThis defines a dictionary of values, and uses them for display:\n>>> values = {'total':93, 'name':'john', 'x':33, 'y':993, 'oldname':'rodger'}\n>>> '{total:2}: {name:15} \\t ({x}|{y}\\t was: {oldname}'.format(**values)\n'93: john \\t (33|993\\t was: rodger'\n\n", "Seeing that p seems to be a dict, how about:\nprint '%2d' % tot + ': %(nick)-15s \\t (%(x)d|%(y)d) \\t was: %(oldnick)15s' % p\n\n" ]
[ 4, 4, 3, 2 ]
[]
[]
[ "printing", "python", "string_formatting" ]
stackoverflow_0001605861_printing_python_string_formatting.txt
Q: Wait for an event, but don't take it off the queue Is there any way to make the program sleep until an event occurs, but to not take it off the queue? Similarly to http://www.pygame.org/docs/ref/event.html#pygame.event.wait Or will I need to use pygame.event.wait, and then put that event back onto the queue? Just to clarify, I do not need to know what that event is when it occurs, just that an event has occurred. A: You will need to do what you suggest and post it back onto the queue. If the ordering is important (which it often is), then just keep your own queue of already retrieved events, and whenever you want to start processing events normally, just handle your own list first before draining pygame's queue. I'm at a loss as to why you would want to know an event came in but not to handle it, however.
Wait for an event, but don't take it off the queue
Is there any way to make the program sleep until an event occurs, but to not take it off the queue? Similarly to http://www.pygame.org/docs/ref/event.html#pygame.event.wait Or will I need to use pygame.event.wait, and then put that event back onto the queue? Just to clarify, I do not need to know what that event is when it occurs, just that an event has occurred.
[ "You will need to do what you suggest and post it back onto the queue. If the ordering is important (which it often is), then just keep your own queue of already retrieved events, and whenever you want to start processing events normally, just handle your own list first before draining pygame's queue.\nI'm at a loss as to why you would want to know an event came in but not to handle it, however.\n" ]
[ 1 ]
[]
[]
[ "event_handling", "pygame", "python" ]
stackoverflow_0001603537_event_handling_pygame_python.txt
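A minimal sketch of the post-it-back approach from the answer, using only documented pygame calls (pygame.event.wait blocks until something arrives, pygame.event.post pushes it back):

import pygame

def sleep_until_any_event():
    # Block until an event arrives, then re-post it so the normal
    # event loop still gets to process it.
    event = pygame.event.wait()
    pygame.event.post(event)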
Q: Using multiple databases with Elixir I would like to provide a database for my program that uses Elixir for ORM. Right now the database file (I am using SQLite) must be hardcoded in metadata, but I would like to be able to pass this in argv. Is there any way to do this nicely? The only thing I thought of is to: from sys import argv metadata.bind = argv[1] Can I set this in the main script and have it be used in all modules that define any Entities? A: I have some code that does this in a slightly nicer fashion than just using argv from optparse import OptionParser parser = OptionParser() parser.add_option("-u", "--user", dest="user", help="Database username") parser.add_option("-p", "--password", dest="password", help="Database password") parser.add_option("-D", "--database", dest="database", default="myDatabase", help="Database name") parser.add_option("-e", "--engine", dest="engine", default="mysql", help="Database engine") parser.add_option("-H", "--host", dest="host", default="localhost", help="Database host") (options, args) = parser.parse_args() def opt_hash(name): global options return getattr(options, name) options.__getitem__ = opt_hash metadata.bind = '%(engine)s://%(user)s:%(password)s@%(host)s/%(database)s' % options Note that the part using opt_hash is a bit of a hack. I use it because OptionParser doesn't return a normal hash, which is what is really needed for the niceness of the bind string I use in the last line. A: Your question seems to be more related to general argument parsing in python than with elixir. Anyway, I had a similar problem, and I have solved it by using different configuration files and parsing them with the configparse module in python. For example, I have two config files, and each of them describes the db url, username, password, etc.. of one database. When I want to switch to another configuration, I pass an option like --configfile guest to the script (I use argparse for the command line interface), then the script looks for a config file called guest.txt, and reads all the information there. This is a lot safer, because if you pass a metadata string as a command line argument you can have some security issues, and moreover it is a lot longer to type. By the way, you can also find useful to write a Makefile to store the most common options. e.g. cat >Makefile debug_db: ipython connect_db.py -config guest -i connect_root: ipython connect_db.py -config db1_root -i connect_db1: ipython connect_db.py -config db1 -i and on the command line, you only have to type 'make debug_db' or 'make connect_db1' to execute a rule.
Using multiple databases with Elixir
I would like to provide a database for my program that uses Elixir for ORM. Right now the database file (I am using SQLite) must be hardcoded in metadata, but I would like to be able to pass this in argv. Is there any way to do this nicely? The only thing I thought of is to: from sys import argv metadata.bind = argv[1] Can I set this in the main script and have it be used in all modules that define any Entities?
[ "I have some code that does this in a slightly nicer fashion than just using argv\nfrom optparse import OptionParser\n\nparser = OptionParser()\nparser.add_option(\"-u\", \"--user\", dest=\"user\",\n help=\"Database username\")\nparser.add_option(\"-p\", \"--password\", dest=\"password\",\n help=\"Database password\")\nparser.add_option(\"-D\", \"--database\", dest=\"database\", default=\"myDatabase\",\n help=\"Database name\")\nparser.add_option(\"-e\", \"--engine\", dest=\"engine\", default=\"mysql\",\n help=\"Database engine\")\nparser.add_option(\"-H\", \"--host\", dest=\"host\", default=\"localhost\",\n help=\"Database host\")\n\n(options, args) = parser.parse_args()\n\ndef opt_hash(name):\n global options\n return getattr(options, name)\n\noptions.__getitem__ = opt_hash\n\nmetadata.bind = '%(engine)s://%(user)s:%(password)s@%(host)s/%(database)s' % options\n\nNote that the part using opt_hash is a bit of a hack. I use it because OptionParser doesn't return a normal hash, which is what is really needed for the niceness of the bind string I use in the last line.\n", "Your question seems to be more related to general argument parsing in python than with elixir.\nAnyway, I had a similar problem, and I have solved it by using different configuration files and parsing them with the configparse module in python.\nFor example, I have two config files, and each of them describes the db url, username, password, etc.. of one database. When I want to switch to another configuration, I pass an option like --configfile guest to the script (I use argparse for the command line interface), then the script looks for a config file called guest.txt, and reads all the information there.\nThis is a lot safer, because if you pass a metadata string as a command line argument you can have some security issues, and moreover it is a lot longer to type.\nBy the way, you can also find useful to write a Makefile to store the most common options.\ne.g. cat >Makefile\ndebug_db:\n ipython connect_db.py -config guest -i\n\nconnect_root:\n ipython connect_db.py -config db1_root -i\n\nconnect_db1: \n ipython connect_db.py -config db1 -i\n\nand on the command line, you only have to type 'make debug_db' or 'make connect_db1' to execute a rule.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_elixir" ]
stackoverflow_0001606341_python_python_elixir.txt
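A hedged sketch of the argv approach the question proposes: because elixir exposes a single shared metadata object, binding it in the main script before any entity modules are imported should make every Entity use that database. The "mymodels" module name is hypothetical:

import sys
from elixir import metadata, setup_all

# Bind before touching any entity definitions; every module that does
# "from elixir import metadata" shares this one object.
metadata.bind = sys.argv[1]   # e.g. "sqlite:///mydata.db"

import mymodels   # hypothetical module defining the Entities
setup_all()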
Q: Programmatically fetching contacts from Yahoo! Address Book Is there a way to programmatically log into Yahoo!, providing email id and password as inputs, and fetch the user's contacts? I've achieved the same thing with Gmail, thanks to its ClientLogin interface. The Yahoo Address Book API provides BBAuth, which requires the user to be redirected to the Yahoo login page. But I'm looking for a way to authenticate the user with Yahoo without the redirection. The way Ning.com handles it. Code samples in Python will be much appreciated. Thanks. A: You may want to look into the Contacts API provided by Yahoo! A: I just found a python script that solves my problem http://pypi.python.org/pypi/ContactGrabber/0.1 It's not complete though, fetches only a portion of the address book. A: you can try this http://developer.yahoo.com/social/contacts/
Programmatically fetching contacts from Yahoo! Address Book
Is there a way to programmatically log into Yahoo!, providing email id and password as inputs, and fetch the user's contacts? I've achieved the same thing with Gmail, thanks to its ClientLogin interface. The Yahoo Address Book API provides BBAuth, which requires the user to be redirected to the Yahoo login page. But I'm looking for a way to authenticate the user with Yahoo without the redirection. The way Ning.com handles it. Code samples in Python will be much appreciated. Thanks.
[ "You may want to look into the Contacts API provided by Yahoo!\n", "I just found a python script that solves my problem \nhttp://pypi.python.org/pypi/ContactGrabber/0.1\nIt's not complete though, fetches only a portion of the address book.\n", "you can try this\nhttp://developer.yahoo.com/social/contacts/\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "yahoo" ]
stackoverflow_0000909508_python_yahoo.txt
Q: What is the best way to store integers mapped to strings so that the keys can be ranges in python? What would be the best way to store (non-mutable) data that is of this format: doodahs = { 0-256: "FOO", 257: "BAR", 258: "FISH", 279: "MOOSE", 280-65534: "Darth Vader", 65535: "Death to all newbies" } I have a relatively large number of data sets of this type, so I want something that I can define the way I define dictionaries (or close to it) and access via indexes. Oh, and this is on Python 2.4, so please give a really, really good reason for upgrading if you want me to use a newer version (and I'll go for 3 :) A: I'd split the range into a tuple and then inside your class, keep the items in an ordered list. You can use the bisect module to make inserts O(n) and lookup O(logn). If you are converting a dict to your new class, you can build an unordered list and sort it at the end doodahs = [ (0, 256, "FOO"), (257, 257, "BAR"), (258, 258, "FISH"), (279, 279, "MOOSE"), (280, 65534, "Darth Vader"), (65535, 65535, "Death to all newbies")] Your __getitem__ might work something like this: def __getitem__(self, key): return self.doodahs[bisect.bisect(self.doodahs, (key,))] __setitem__ might be something like this: def __setitem__(self,range,value): bisect.insort(self.doodahs, range+(value,)) A: If your integers have an upper bound that's less than a few million, then you simply expand the ranges into individual values. class RangedDict( dict ): def addRange( self, iterator, value ): for k in iterator: self[k]= value d= RangedDict() d.addRange( xrange(0,257), "FOO" ) d[257]= "BAR" d[258]= "FISH" d[279]= "MOOSE" d.addRange( xrange(280,65535), "Darth Vader" ) d[65535]= "Death to all newbies" All your lookups work, and are instant. A: Given that you don't have any "gaps" in your keys, why you just don't store the beginning of each segment and then lookup with bisect like suggested? doodahs = ( (0, "FOO"), (257, "BAR"), (258, "FISH"), (279, "MOOSE"), (280, "Darth Vader"), (65535, "Death to all newbies") ) A: class dict2(dict): def __init__(self,obj): dict.__init__(self,obj) def __getitem__(self,key): if self.has_key(key): return super(dict2,self).__getitem__(key) else: for k in self: if '-' in k: [x,y] = str(k).split('-') if key in range(int(x),int(y)+1): return super(dict2,self).__getitem__(k) return None d = {"0-10":"HELLO"} d2 = dict2(d) print d,d2,d2["0-10"], d2[1]
What is the best way to store integers mapped to strings so that the keys can be ranges in python?
What would be the best way to store (non-mutable) data that is of this format: doodahs = { 0-256: "FOO", 257: "BAR", 258: "FISH", 279: "MOOSE", 280-65534: "Darth Vader", 65535: "Death to all newbies" } I have a relatively large number of data sets of this type, so I want something that I can define the way I define dictionaries (or close to it) and access via indexes. Oh, and this is on Python 2.4, so please give a really, really good reason for upgrading if you want me to use a newer version (and I'll go for 3 :)
[ "I'd split the range into a tuple and then inside your class, keep the items in an ordered list. You can use the bisect module to make inserts O(n) and lookup O(logn).\nIf you are converting a dict to your new class, you can build an unordered list and sort it at the end\ndoodahs = [\n (0, 256, \"FOO\"),\n (257, 257, \"BAR\"),\n (258, 258, \"FISH\"),\n (279, 279, \"MOOSE\"),\n (280, 65534, \"Darth Vader\"),\n (65535, 65535, \"Death to all newbies\")]\n\nYour __getitem__ might work something like this:\ndef __getitem__(self, key):\n return self.doodahs[bisect.bisect(self.doodahs, (key,))]\n\n__setitem__ might be something like this:\ndef __setitem__(self,range,value):\n bisect.insort(self.doodahs, range+(value,))\n\n", "If your integers have an upper bound that's less than a few million, then you simply expand the ranges into individual values.\nclass RangedDict( dict ):\n def addRange( self, iterator, value ):\n for k in iterator:\n self[k]= value\n\nd= RangedDict()\nd.addRange( xrange(0,257), \"FOO\" )\nd[257]= \"BAR\"\nd[258]= \"FISH\"\nd[279]= \"MOOSE\"\nd.addRange( xrange(280,65535), \"Darth Vader\" )\nd[65535]= \"Death to all newbies\"\n\nAll your lookups work, and are instant.\n", "Given that you don't have any \"gaps\" in your keys, why you just don't store the beginning of each segment and then lookup with bisect like suggested?\ndoodahs = (\n (0, \"FOO\"),\n (257, \"BAR\"),\n (258, \"FISH\"),\n (279, \"MOOSE\"),\n (280, \"Darth Vader\"),\n (65535, \"Death to all newbies\")\n)\n\n", "class dict2(dict):\n def __init__(self,obj):\n dict.__init__(self,obj)\n def __getitem__(self,key):\n if self.has_key(key):\n return super(dict2,self).__getitem__(key)\n else:\n for k in self:\n if '-' in k:\n [x,y] = str(k).split('-')\n if key in range(int(x),int(y)+1):\n return super(dict2,self).__getitem__(k)\n return None\n\n\nd = {\"0-10\":\"HELLO\"}\nd2 = dict2(d)\nprint d,d2,d2[\"0-10\"], d2[1]\n\n" ]
[ 5, 2, 1, 0 ]
[]
[]
[ "python", "types" ]
stackoverflow_0001606150_python_types.txt
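A runnable sketch of the bisect-on-segment-starts idea from the last two answers; since the segments are contiguous, the rightmost start that is <= key identifies the value (assumes non-negative keys, and runs on Python 2.4):

import bisect

starts = [0, 257, 258, 279, 280, 65535]
values = ["FOO", "BAR", "FISH", "MOOSE", "Darth Vader", "Death to all newbies"]

def lookup(key):
    # Index of the rightmost segment start that is <= key.
    return values[bisect.bisect_right(starts, key) - 1]

print lookup(100)    # FOO
print lookup(257)    # BAR
print lookup(9999)   # Darth Vader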
Q: Timer in Python I am writing a python app using Tkinter for buttons and graphics and having trouble getting a timer working. What I need is a sample app that has three buttons and a label. [start timer] [stop timer] [quit] When I press the start button, a function allows the label to count up from zero every 5 seconds; the stop button stops the timer and the quit button quits the app. I need to be able to press stop timer and quit at any time, and the time.sleep(5) function locks everything up so I can't use that. Currently I'm using threading.Timer(5, do_count_function) and getting nowhere! I'm a vb.net programmer, so python is a bit new to me, but hey, I'm trying. A: Check the .after method of your Tk() object. This allows you to use Tk's timer to fire events within the gui's own loop by giving it a length of time and a callback method.
Timer in Python
I am writing a python app using Tkinter for buttons and graphics and having trouble getting a timer working. What I need is a sample app that has three buttons and a label. [start timer] [stop timer] [quit] When I press the start button, a function allows the label to count up from zero every 5 seconds; the stop button stops the timer and the quit button quits the app. I need to be able to press stop timer and quit at any time, and the time.sleep(5) function locks everything up so I can't use that. Currently I'm using threading.Timer(5, do_count_function) and getting nowhere! I'm a vb.net programmer, so python is a bit new to me, but hey, I'm trying.
[ "Check the .after method of your Tk() object. This allows you to use Tk's timer to fire events within the gui's own loop by giving it a length of time and a callback method.\n" ]
[ 2 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0001606700_python_tkinter.txt
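A minimal sketch of the .after approach from the answer: the Tk event loop re-arms a 5-second callback, so stop and quit stay responsive (widget layout here is illustrative):

import Tkinter as tk

root = tk.Tk()
label = tk.Label(root, text='0')
label.pack()
state = {'count': 0, 'job': None}

def tick():
    state['count'] += 1
    label.config(text=str(state['count']))
    state['job'] = root.after(5000, tick)   # re-arm the 5-second timer

def start():
    if state['job'] is None:
        state['job'] = root.after(5000, tick)

def stop():
    if state['job'] is not None:
        root.after_cancel(state['job'])
        state['job'] = None

tk.Button(root, text='start timer', command=start).pack()
tk.Button(root, text='stop timer', command=stop).pack()
tk.Button(root, text='quit', command=root.destroy).pack()
root.mainloop()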
Q: Function parameters hint Eclipse with PyDev Seems like a newbie's question, but I just can't find the answer. Can I somehow see a function parameters hint, like in Visual Studio, by pressing Ctrl+Shift+Space when the cursor is in a function call like this: someObj.doSomething("Test", "hello, world", 4|) where | is my cursor position. Ctrl+Space shows me that information when I start typing the function name, but I want to see a function's parameter list at any time I want. I'm using the latest available Eclipse and PyDev A: Try "CTRL+space" after a ',', not after a parameter. The function parameters are displayed just after the '(' or after a ',' + "CTRL+space". A: Parameters will show up just after a '(', which is how I re-displayed mine for ages. I recently discovered that 'CTRL + Shift + Space' will show the parameters anywhere in a function call. Saved me some valuable seconds.
Function parameters hint Eclipse with PyDev
Seems like a newbie's question, but I just can't find the answer. Can I somehow see a function parameters hint, like in Visual Studio, by pressing Ctrl+Shift+Space when the cursor is in a function call like this: someObj.doSomething("Test", "hello, world", 4|) where | is my cursor position. Ctrl+Space shows me that information when I start typing the function name, but I want to see a function's parameter list at any time I want. I'm using the latest available Eclipse and PyDev
[ "Try \"CTRL+space\" after a ',', not after a parameter.\nThe function parameters are displayed just after the '(' or after a ',' + \"CTRL+space\".\n", "Parameters will show up just after a '(', which is how I re-displayed mine for ages. I recently discovered that 'CTRL + Shift + Space' will show the parameters anywhere in a function call. Saved me some valuable seconds.\n" ]
[ 11, 3 ]
[]
[]
[ "eclipse", "pydev", "python" ]
stackoverflow_0000969466_eclipse_pydev_python.txt
Q: How to get all unique IDs from the list of dicts? Say you have a list of dicts like this {'id': 1, 'other_value':5} So maybe: items = [{'id': 1, 'other_value':5}, {'id': 1, 'other_value2':6}, {'id': 2, 'other_value':4}, {'id': 2, 'other_value2':3}] Now, you can assume this is a small subset of the data. There are maybe thousands. Also the structure isn't specified by me, it's given to me. If I just want to get the IDs out I could do something like this: ids = [i['id'] for i in items] However, you'll notice there are duplicate ids in the original data. So the question is: how can you tidily get the unique IDs? I was hoping for something like: ids = [i['id'] for i in items if not in LIST] but as far as I know there isn't a way to access the list in the generator. Of course I can do a for loop and easily do it that way. I was just curious to know if there was a more concise way of doing this. A: if you want unique id's you can use a set: set(i['id'] for i in items) A: set(i['id'] for i in items) but you might consider another data structure altogether, for example dict of lists: items = {1: [5, 6], 2: [2, 4]}
How to get all unique IDs from the list of dicts?
Say you have a list of dicts like this {'id': 1, 'other_value':5} So maybe: items = [{'id': 1, 'other_value':5}, {'id': 1, 'other_value2':6}, {'id': 2, 'other_value':4}, {'id': 2, 'other_value2':3}] Now, you can assume this is a small subset of the data. There are maybe thousands. Also the structure isn't specified by me, it's given to me. If I just want to get the IDs out I could do something like this: ids = [i['id'] for i in items] However, you'll notice there are duplicate ids in the original data. So the question is: how can you tidily get the unique IDs? I was hoping for something like: ids = [i['id'] for i in items if not in LIST] but as far as I know there isn't a way to access the list in the generator. Of course I can do a for loop and easily do it that way. I was just curious to know if there was a more concise way of doing this.
[ "if you want unique id's you can use a set:\nset(i['id'] for i in items)\n\n", "set(i['id'] for i in items)\n\nbut you might consider another data structure altogether, for example dict of lists:\nitems = {1: [5, 6], 2: [2, 4]}\n\n" ]
[ 5, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001606915_python.txt
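A quick runnable demo of the accepted set approach against the question's own data (sorted() added only to get a stable order back out):

items = [{'id': 1, 'other_value': 5}, {'id': 1, 'other_value2': 6},
         {'id': 2, 'other_value': 4}, {'id': 2, 'other_value2': 3}]

ids = set(i['id'] for i in items)
print sorted(ids)   # [1, 2]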
Q: Best method to determine which of a set of keys exist in the datastore I have a few hundred keys, all of the same Model, which I have pre-computed: candidate_keys = [db.Key(...), db.Key(...), db.Key(...), ...] Some of these keys refer to actual entities in the datastore, and some do not. I wish to determine which keys do correspond to entities. It is not necessary to know the data within the entities, just whether they exist. One solution would be to use db.get(): keys_with_entities = set() for entity in db.get(candidate_keys): if entity: keys_with_entities.add(entity.key()) However this procedure would fetch all entity data from the store which is unnecessary and costly. A second idea is to use a Query with an IN filter on key_name, manually fetching in chunks of 30 to fit the requirements of the IN pseudo-filter. However keys-only queries are not allowed with the IN filter. Is there a better way? A: IN filters are not supported directly by the App Engine datastore; they're a convenience that's implemented in the client library. An IN query with 30 values is translated into 30 equality queries on one value each, resulting in 30 regular queries! Due to round-trip times and the expense of even keys-only queries, I suspect you'll find that simply attempting to fetch all the entities in one batch fetch is the most efficient. If your entities are large, however, you can make a further optimization: For every entity you insert, insert an empty 'presence' entity as a child of that entity, and use that in queries. For example: foo = AnEntity(...) foo.put() presence = PresenceEntity(key_name='x', parent=foo) presence.put() ... def exists(keys): test_keys = [db.Key.from_path('PresenceEntity', 'x', parent=x) for x in keys] return [x is not None for x in db.get(test_keys)] A: At this point, the only solution I have is to manually query by key with keys_only=True, once per key. for key in candidate_keys: if MyModel.all(keys_only=True).filter('__key__ =', key).count(): keys_with_entities.add(key) This may in fact be slower than just loading the entities in batch and discarding them, although the batch load also hammers the Data Received from API quota. A: How not to do it (update based on Nick Johnson's answer): I am also considering adding a parameter specifically for the purpose of being able to scan for it with an IN filter. class MyModel(db.Model): """Some model""" # ... all the old stuff the_key = db.StringProperty(required=True) # just a duplicate of the key_name #... meanwhile back in the example for key_batch in batches_of_30(candidate_keys): key_names = [x.name() for x in key_batch] found_keys = MyModel.all(keys_only=True).filter('the_key IN', key_names) keys_with_entities.update(found_keys) The reason this should be avoided is that the IN filter on a property sequentially performs an index scan, plus lookup once per item in your IN set. Each lookup takes 160-200ms so that very quickly becomes a very slow operation.
Best method to determine which of a set of keys exist in the datastore
I have a few hundred keys, all of the same Model, which I have pre-computed: candidate_keys = [db.Key(...), db.Key(...), db.Key(...), ...] Some of these keys refer to actual entities in the datastore, and some do not. I wish to determine which keys do correspond to entities. It is not necessary to know the data within the entities, just whether they exist. One solution would be to use db.get(): keys_with_entities = set() for entity in db.get(candidate_keys): if entity: keys_with_entities.add(entity.key()) However this procedure would fetch all entity data from the store which is unnecessary and costly. A second idea is to use a Query with an IN filter on key_name, manually fetching in chunks of 30 to fit the requirements of the IN pseudo-filter. However keys-only queries are not allowed with the IN filter. Is there a better way?
[ "IN filters are not supported directly by the App Engine datastore; they're a convenience that's implemented in the client library. An IN query with 30 values is translated into 30 equality queries on one value each, resulting in 30 regular queries!\nDue to round-trip times and the expense of even keys-only queries, I suspect you'll find that simply attempting to fetch all the entities in one batch fetch is the most efficient. If your entities are large, however, you can make a further optimization: For every entity you insert, insert an empty 'presence' entity as a child of that entity, and use that in queries. For example:\nfoo = AnEntity(...)\nfoo.put()\npresence = PresenceEntity(key_name='x', parent=foo)\npresence.put()\n...\ndef exists(keys):\n test_keys = [db.Key.from_path('PresenceEntity', 'x', parent=x) for x in keys)\n return [x is not None for x in db.get(test_keys)]\n\n", "At this point, the only solution I have is to manually query by key with keys_only=True, once per key.\nfor key in candidate_keys:\n if MyModel.all(keys_only=True).filter('__key__ =', key).count():\n keys_with_entities.add(key)\n\nThis may in fact be slower then just loading the entities in batch and discarding them, although the batch load also hammers the Data Received from API quota.\n", "How not to do it (update based on Nick Johnson's answer):\nI am also considering adding a parameter specifically for the purpose of being able to scan for it with an IN filter.\nclass MyModel(db.Model):\n \"\"\"Some model\"\"\"\n # ... all the old stuff\n the_key = db.StringProperty(required=True) # just a duplicate of the key_name\n\n#... meanwhile back in the example\n\nfor key_batch in batches_of_30(candidate_keys):\n key_names = [x.name() for x in key_batch]\n found_keys = MyModel.all(keys_only=True).filter('the_key IN', key_names)\n keys_with_entities.update(found_keys)\n\nThe reason this should be avoided is that the IN filter on a property sequentially performs an index scan, plus lookup once per item in your IN set. Each lookup takes 160-200ms so that very quickly becomes a very slow operation.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001607126_google_app_engine_google_cloud_datastore_python.txt
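A small helper along the lines of the accepted answer's advice to just batch-fetch: db.get on a list returns entities positionally, with None where no entity exists, so one RPC answers the existence question for every candidate key:

from google.appengine.ext import db

def existing_keys(candidate_keys):
    entities = db.get(candidate_keys)   # single batch fetch
    return set(k for k, e in zip(candidate_keys, entities) if e is not None)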
Q: PyQt custom widget in c++ Can I write a custom Qt widget in pure C++, compile it, and use it in PyQt? I'm trying to use ctypes-opencv with Qt, and I have performance problems with the Python code for displaying OpenCV's image in a Qt form. A: You will have to write a Python wrapper for the widget, using the sip library (which is used by PyQt). There is a simple example for a Qt/C++ widget in the documentation.
PyQt custom widget in c++
Can I write a custom Qt widget in pure C++, compile it, and use it in PyQt? I'm trying to use ctypes-opencv with Qt, and I have performance problems with the Python code for displaying OpenCV's image in a Qt form.
[ "You will have to write a Python wrapper for the widget, using the sip library (which is used by PyQt). There is a simple example for a Qt/C++ widget in the documentation.\n" ]
[ 5 ]
[]
[]
[ "c++", "pyqt", "python", "python_sip", "qt" ]
stackoverflow_0001607515_c++_pyqt_python_python_sip_qt.txt
Q: How to create a file one directory up? How can I create a file in python one directory up, without using the full path? I would like a way that works on both Windows and Linux. Thanks. A: Use os.pardir (which is probably always "..") import os fobj = open(os.path.join(os.pardir, "filename"), "w") A: People don't seem to realize this, but Python is happy to accept forward slash even on Windows. This works fine on all platforms: fobj = open("../filename", "w") A: Depends whether you are working in a unix or windows environment. On windows: ..\foo.txt On unix like OS: ../foo.txt you need to make sure the os sets the current path correctly when your application launches. Take the appropriate path and simply create a file there.
How to create a file one directory up?
How can I create a file in python one directory up, without using the full path? I would like a way that works on both Windows and Linux. Thanks.
[ "Use os.pardir (which is probably always \"..\")\nimport os\nfobj = open(os.path.join(os.pardir, \"filename\"), \"w\")\n\n", "People don't seem to realize this, but Python is happy to accept forward slash even on Windows. This works fine on all platforms:\nfobj = open(\"../filename\", \"w\")\n\n", "Depends whether you are working in a unix or windows environment.\nOn windows:\n..\\foo.txt\n\nOn unix like OS:\n../foo.txt\n\nyou need to make sure the os sets the current path correctly when your application launches.\nTake the appropriate path and simply create a file there. \n" ]
[ 33, 19, 2 ]
[]
[]
[ "io", "python" ]
stackoverflow_0001607751_io_python.txt
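One caveat worth a sketch: os.pardir and "../" are resolved against the process's current working directory, which (as the last answer hints) may not be where the script lives. Anchoring on __file__ avoids that:

import os

# Resolve relative to this script's location instead of the current
# working directory, then go one level up.
here = os.path.dirname(os.path.abspath(__file__))
path = os.path.join(here, os.pardir, "filename.txt")

f = open(path, "w")
f.write("hello\n")
f.close()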
Q: os.walk and some tests I'm not sure if I understand properly how os.walk stores its results. I'm trying to do the following: I'm checking a root folder for subsequent folders. There are several hundred of them, and they are nested in a somewhat uniform way. I'm trying to check each subfolder, and if it ends with a four digit number, store it in a list. I used highly procedural code and got it working, but the code is using os.listdir, meaning that I need to execute the function for each folder I want. Is there a better way? def ListadorPastas(pasta): resultado = [] regex = "^[0-9]{4}" padrao = re.compile(regex) for p in os.listdir(pasta): regexObject = re.match(padrao,p[-4:]) if (regexObject!=None): resultado.append(regexObject.string) else: pass return resultado Also, I have a regex problem: this regex is matching the last four sliced digits of an expression. Sometimes I have folders with 5 digits in the end, which ALSO will match. I tried using "$[0-9]{4}" but it returns me nothing. Any ideas why? Thanks in advance. George A: using "$[0-9]{4}" but it returns me nothing. Any ideas why? $ means end-of-(line or string) in a regex pattern, so I wonder how you expected "end of string then four digits" to ever possibly match anything...? By definition of "end" it won't be followed by 4 digits! r'(^|\D)\d{4}$' should work better if I understand what you want, to match strings that are just 4 digits, or end with exactly 4 digits, not 5 or more (\D means non-digit, just like \d means digit -- no reason to use [0-9] or [^0-9]!). os.walk does not need to store much -- a couple pointers on the implicit tree it's walking -- but why do you care how it's implemented internally? Just use it...: def ListadorPastas(pasta): resultado = [] for root, dirs, files in os.walk(pasta): for d in dirs: if (len(d)==4 or len(d)>4 and not d[-5].isdigit() ) and d[-4:].isdigit(): resultado.append(d) return resultado where I'm also taking the opportunity to show a non-regex way to do the checks you want on the subdirectory's name. A: regex you should be using is: pattern = re.compile(r'(?<!\d)\d{4}$') re.search(pattern, p) as for os.walk your explanation is not entirely clear. A: About the regex: If you use p[-4:], you'll always look at the last four characters of p, so you don't get a chance to see if there really are five. So instead, use regex = "(?<![0-9])[0-9]{4}$" padrao = re.compile(regex) regexObject = re.search(padrao, p) re.search will also match parts of the string.
os.walk and some tests
I'm not sure if I understand properly how os.walk stores its results. I'm trying to do the following: I'm checking a root folder for subsequent folders. There are several hundred of them, and they are nested in a somewhat uniform way. I'm trying to check each subfolder, and if it ends with a four digit number, store it in a list. I used highly procedural code and got it working, but the code is using os.listdir, meaning that I need to execute the function for each folder I want. Is there a better way? def ListadorPastas(pasta): resultado = [] regex = "^[0-9]{4}" padrao = re.compile(regex) for p in os.listdir(pasta): regexObject = re.match(padrao,p[-4:]) if (regexObject!=None): resultado.append(regexObject.string) else: pass return resultado Also, I have a regex problem: this regex is matching the last four sliced digits of an expression. Sometimes I have folders with 5 digits in the end, which ALSO will match. I tried using "$[0-9]{4}" but it returns me nothing. Any ideas why? Thanks in advance. George
[ "\nusing \"$[0-9]{4}\" but it returns me\n nothing. Any ideas why?\n\n$ means end-of-(line or string) in a regex pattern, so I wonder how you expected \"end of string then four digits\" to ever possibly match anything...? By definition of \"end\" it won't be followed by 4 digits! r'(^|\\D)\\d{4}$' should work better if I understand what you want, to match strings that are just 4 digits, or end with exactly 4 digits, not 5 or more (\\D means non-digit, just like \\d means digit -- no reason to use [0-9] or [^0-9]!).\nos.walk does not need to store much -- a couple pointers on the implicit tree it's walking -- but why do you care how it's implemented internally? Just use it...:\ndef ListadorPastas(pasta):\n resultado = []\n for root, dirs, files in os.walk(pasta):\n for d in dirs:\n if (len(d)==4 or len(d)>4 and not d[-5].isdigit()\n ) and d[-4:].isdigit():\n resultado.append(d)\n return resultado\n\nwhere I'm also taking the opportunity to show a non-regex way to do the checks you want on the subdirectory's name.\n", "regex you should be using is:\npattern = re.compile(r'(?<!\\d)\\d{4}$')\nre.search(pattern, p)\n\nas for os.walk your explanation is not entirely clear.\n", "About the regex: If you use p[-4:], you'll always look at the last four characters of p, so you don't get a chance to see if there really are five. \nSo instead, use\nregex = \"(?<![0-9])[0-9]{4}$\"\npadrao = re.compile(regex)\n\nregexObject = re.search(padrao, p)\n\nre.search will also match parts of the string.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "directory", "python", "regex" ]
stackoverflow_0001608090_directory_python_regex.txt
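Putting the corrected lookbehind regex together with os.walk gives a single pass over the whole tree; this is a sketch, not the poster's exact code:

import os
import re

# Exactly four digits at the end of the name, not preceded by a fifth digit.
padrao = re.compile(r'(?<!\d)\d{4}$')

def listador_pastas(raiz):
    resultado = []
    for root, dirs, files in os.walk(raiz):
        resultado.extend(d for d in dirs if padrao.search(d))
    return resultado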
Q: How to check which part of app is consuming CPU? I have a wxPython app which has many worker threads, idle event cycles, and much other event handling code which can consume CPU; for now, when the app is not being interacted with, it consumes about 8-10% CPU. Question: Is there a tool which can tell which parts/threads of my app are consuming the most CPU? If there are no such generic tools, I am willing to know the approaches you usually take to tackle such scenarios, e.g. disabling part of the app, trace, etc. Edit: Maybe my question's language is ambiguous. I do not want to know which function or code block in my code takes up most resources; for that I can use a profiler. What I want to know is: when I run my app and see the CPU usage is 8-10%, is there a way to know what different parts/threads of my app are using up that 10% CPU? Basically, at that instant I want to know which part(s) of code is running? A: If all your threads have unique start methods you could use the profiler that comes with Python. If you're on a Mac you should check out the Instruments app. You could also use dtrace for Linux. A: This isn't very practical at a language-agnostic level. Take away the language and all you have left is a load of machine code instructions sprinkled around with system calls. You could use strace on Linux or ProcessExplorer on Windows to try and guess what is going on from those system calls, but it would make far more sense to just use a profiler. If you do have access to the language, then there are a variety of things you could do (extra logging, random pausing in the debugger) but in that situation the profiler is still your best tool. A: I am able to solve my problem by writing a modified version of the Python trace module, which can be enabled and disabled; basically modify the Trace class something like this import sys import trace class MyTrace(trace.Trace): def __init__(self, *args, **kwargs): trace.Trace.__init__(self, *args, **kwargs) self.enabled = False def localtrace_trace_and_count(self, *args, **kwargs): if not self.enabled: return None return trace.Trace.localtrace_trace_and_count(self, *args, **kwargs) tracer = MyTrace(ignoredirs=[sys.prefix, sys.exec_prefix],) def main(): a = 1 tracer.enabled = True a = 2 tracer.enabled = False a = 3 # run the new command using the given tracer tracer.run('main()') Output: --- modulename: untitled-2, funcname: main untitled-2.py(19): a = 2 untitled-2.py(20): tracer.enabled = False Enabling it at the critical points helps me to trace line by line which code statements are executing most. A: On Windows XP and higher, Process Explorer will show all processes, and you can view the properties of each process and see open threads. It shows thread ID, start time, state, kernel time, user time, and more.
How to check which part of app is consuming CPU?
I have a wxPython app which has many worker threads, idle event cycles, and much other event handling code which can consume CPU; for now, when the app is not being interacted with, it consumes about 8-10% CPU. Question: Is there a tool which can tell which parts/threads of my app are consuming the most CPU? If there are no such generic tools, I am willing to know the approaches you usually take to tackle such scenarios, e.g. disabling part of the app, trace, etc. Edit: Maybe my question's language is ambiguous. I do not want to know which function or code block in my code takes up most resources; for that I can use a profiler. What I want to know is: when I run my app and see the CPU usage is 8-10%, is there a way to know what different parts/threads of my app are using up that 10% CPU? Basically, at that instant I want to know which part(s) of code is running?
[ "If all your threads have unique start methods you could use the profiler that comes with Python.\nIf you're on a Mac you should check out the Instruments app. You could also use dtrace for Linux.\n", "This isn't very practical at a language-agnostic level. Take away the language and all you have left is a load of machine code instructions sprinkled around with system calls. You could use strace on Linux or ProcessExplorer on Windows to try and guess what is going on from those system calls, but it would make far more sense to just use a profiler. If you do have access to the language, then there are a variety of things you could do (extra logging, random pausing in the debugger) but in that situation the profiler is still your best tool.\n", "I am able to solve my problem by writing a modifed version of python trace module , which can be enabled disabled, basically modify Trace class something like this\nimport sys\nimport trace\n\nclass MyTrace(trace.Trace):\n def __init__(self, *args, **kwargs):\n trace.Trace.__init__(self, *args, **kwargs)\n self.enabled = False\n\n def localtrace_trace_and_count(self, *args, **kwargs):\n if not self.enabled:\n return None \n return trace.Trace.localtrace_trace_and_count(self, *args, **kwargs)\n\ntracer = MyTrace(ignoredirs=[sys.prefix, sys.exec_prefix],)\n\ndef main():\n a = 1\n tracer.enabled = True\n a = 2\n tracer.enabled = False\n a = 3\n\n# run the new command using the given tracer\ntracer.run('main()')\n\nOutput:\n --- modulename: untitled-2, funcname: main\nuntitled-2.py(19): a = 2\nuntitled-2.py(20): tracer.enabled = False\n\nEnabling it at the critical points helps me to trace line by line which code statements are executing most.\n", "On Windows XP and higher, Process Explorer will show all processes, and you can view the properties of each process and see open threads. It shows thread ID, start time, state, kernel time, user time, and more.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "cpu_usage", "python", "wxpython" ]
stackoverflow_0001470453_cpu_usage_python_wxpython.txt
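For the first answer's stock-profiler route, a minimal sketch; main() here is a stand-in workload, not the poster's real entry point:

import cProfile
import pstats

def main():
    # Stand-in workload; replace with the app's real entry point.
    return sum(i * i for i in xrange(10 ** 6))

cProfile.run('main()', 'app.prof')                 # write stats to a file
pstats.Stats('app.prof').sort_stats('cumulative').print_stats(15)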
Q: XML-RPC server with better error reporting Standard libraries (xmlrpclib+SimpleXMLRPCServer in Python 2 and xmlrpc.server in Python 3) report all errors (including usage errors) as Python exceptions, which is not suitable for public services: exception strings are often not easily understandable without Python knowledge and might expose some sensitive information. It's not hard to fix this, but I prefer to avoid reinventing the wheel. Is there a third party library with better error reporting? I'm interested in good fault messages for all usage errors and hiding internals when reporting internal errors (this is better done with logging). xmlrpclib already has constants for such errors: NOT_WELLFORMED_ERROR, UNSUPPORTED_ENCODING, INVALID_ENCODING_CHAR, INVALID_XMLRPC, METHOD_NOT_FOUND, INVALID_METHOD_PARAMS, INTERNAL_ERROR. A: I don't think you have a library specific problem. When using any library or framework you typically want to trap all errors, log them somewhere, and throw up "Oops, we're having problems. You may want to contact us at x@x.com with error number 100 and tell us what you did." So wrap your failable entry points in try/catches, create a generic logger and off you go... A: It looks like there is no ready library meeting my requirements, so I ended up with my own implementation: class ApplicationError(Fault): def __init__(self, exc_info): Fault.__init__(self, xmlrpclib.APPLICATION_ERROR, u'Application internal error') class NotWellformedError(Fault): def __init__(self, exc): Fault.__init__(self, xmlrpclib.NOT_WELLFORMED_ERROR, str(exc)) class UnsupportedEncoding(Fault): def __init__(self, exc): Fault.__init__(self, xmlrpclib.UNSUPPORTED_ENCODING, str(exc)) # XXX INVALID_ENCODING_CHAR is masked by xmlrpclib, so the error code will be # INVALID_XMLRPC. class InvalidRequest(Fault): def __init__(self, message): Fault.__init__(self, xmlrpclib.INVALID_XMLRPC, message) class MethodNotFound(Fault): def __init__(self, name): Fault.__init__(self, xmlrpclib.METHOD_NOT_FOUND, u'Method %r is not supported' % name) class WrongMethodUsage(Fault): def __init__(self, message): Fault.__init__(self, xmlrpclib.INVALID_METHOD_PARAMS, message) class WrongType(Fault): def __init__(self, arg_name, type_name): Fault.__init__(self, xmlrpclib.INVALID_METHOD_PARAMS, u'Parameter %s must be %s' % (arg_name, type_name)) class XMLRPCDispatcher(SimpleXMLRPCDispatcher, XMLRPCDocGenerator): server_name = server_title = 'Personalization center RPC interface' server_documentation = 'Available methods' def __init__(self, methods): SimpleXMLRPCDispatcher.__init__(self, allow_none=True, encoding=None) self.register_instance(methods) self.register_multicall_functions() #self.register_introspection_functions() def _dispatch(self, method_name, args): if self.funcs.has_key(method_name): method = self.funcs[method_name] else: method = self.instance._getMethod(method_name) arg_names, args_name, kwargs_name, defaults = \ inspect.getargspec(method) assert arg_names[0]=='self' arg_names = arg_names[1:] n_args = len(args) if not (args_name or defaults): if n_args!=len(arg_names): raise WrongMethodUsage( u'Method %s takes exactly %d parameters (%d given)' % \ (method_name, len(arg_names), n_args)) else: min_args = len(arg_names)-len(defaults) if len(args)<min_args: raise WrongMethodUsage( u'Method %s requires at least %d parameters (%d given)' % \ (method_name, min_args, n_args)) if not args_name and n_args>len(arg_names): raise WrongMethodUsage( u'Method %s requires at most %d parameters (%d given)' % \ (method_name, len(arg_names), n_args)) try: return method(*args) except Fault: raise except: logger.exception('Application internal error for %s%r', method_name, args) raise ApplicationError(sys.exc_info()) def dispatch(self, data): try: try: args, method_name = xmlrpclib.loads(data) except ExpatError, exc: raise NotWellformedError(exc) except LookupError, exc: raise UnsupportedEncoding(exc) except xmlrpclib.ResponseError: raise InvalidRequest('Request structure is invalid') method_name = method_name.encode('ascii', 'replace') result = self._dispatch(method_name, args) except Fault, exc: logger.warning('Fault %s: %s', exc.faultCode, exc.faultString) return xmlrpclib.dumps(exc) else: try: return xmlrpclib.dumps((result,), methodresponse=1) except: logger.exception('Application internal error when marshalling'\ ' result for %s%r', method_name, args) return xmlrpclib.dumps(ApplicationError(sys.exc_info())) class InterfaceMethods: def _getMethod(self, name): if name.startswith('_'): raise MethodNotFound(name) try: method = getattr(self, name) except AttributeError: raise MethodNotFound(name) if not inspect.ismethod(method): raise MethodNotFound(name) return method
XML-RPC server with better error reporting
Standard libraries (xmlrpclib+SimpleXMLRPCServer in Python 2 and xmlrpc.server in Python 3) report all errors (including usage errors) as Python exceptions, which is not suitable for public services: exception strings are often not easily understandable without Python knowledge and might expose some sensitive information. It's not hard to fix this, but I prefer to avoid reinventing the wheel. Is there a third party library with better error reporting? I'm interested in good fault messages for all usage errors and hiding internals when reporting internal errors (this is better done with logging). xmlrpclib already has constants for such errors: NOT_WELLFORMED_ERROR, UNSUPPORTED_ENCODING, INVALID_ENCODING_CHAR, INVALID_XMLRPC, METHOD_NOT_FOUND, INVALID_METHOD_PARAMS, INTERNAL_ERROR.
[ "I don't think you have a library specific problem. When using any library or framework you typically want to trap all errors, log them somewhere, and throw up \"Oops, we're having problems. You may want to contact us at x@x.com with error number 100 and tell us what you did.\" So wrap your failable entry points in try/catches, create a generic logger and off you go...\n", "It look like there is no ready library with my requirements, so a ended up with own implementation:\nclass ApplicationError(Fault):\n\n def __init__(self, exc_info):\n Fault.__init__(self, xmlrpclib.APPLICATION_ERROR,\n u'Application internal error')\n\n\nclass NotWellformedError(Fault):\n\n def __init__(self, exc):\n Fault.__init__(self, xmlrpclib.NOT_WELLFORMED_ERROR, str(exc))\n\n\nclass UnsupportedEncoding(Fault):\n\n def __init__(self, exc):\n Fault.__init__(self, xmlrpclib.UNSUPPORTED_ENCODING, str(exc))\n\n\n# XXX INVALID_ENCODING_CHAR is masked by xmlrpclib, so the error code will be\n# INVALID_XMLRPC.\nclass InvalidRequest(Fault):\n\n def __init__(self, message):\n ault.__init__(self, xmlrpclib.INVALID_XMLRPC, message)\n\n\nclass MethodNotFound(Fault):\n\n def __init__(self, name):\n Fault.__init__(self, xmlrpclib.METHOD_NOT_FOUND,\n u'Method %r is not supported' % name)\n\n\nclass WrongMethodUsage(Fault):\n\n def __init__(self, message):\n Fault.__init__(self, xmlrpclib.INVALID_METHOD_PARAMS, message)\n\n\nclass WrongType(Fault):\n\n def __init__(self, arg_name, type_name):\n Fault.__init__(self, xmlrpclib.INVALID_METHOD_PARAMS,\n u'Parameter %s must be %s' % (arg_name, type_name))\n\n\nclass XMLRPCDispatcher(SimpleXMLRPCDispatcher, XMLRPCDocGenerator):\n\n server_name = server_title = 'Personalization center RPC interface'\n server_documentation = 'Available methods'\n\n def __init__(self, methods):\n SimpleXMLRPCDispatcher.__init__(self, allow_none=True, encoding=None)\n self.register_instance(methods)\n self.register_multicall_functions()\n #self.register_introspection_functions()\n\n def _dispatch(self, method_name, args):\n if self.funcs.has_key(method_name):\n method = self.funcs[method_name]\n else:\n method = self.instance._getMethod(method_name)\n arg_names, args_name, kwargs_name, defaults = \\\n inspect.getargspec(method)\n assert arg_names[0]=='self'\n arg_names = arg_names[1:]\n n_args = len(args)\n if not (args_name or defaults):\n if n_args!=len(arg_names):\n raise WrongMethodUsage(\n u'Method %s takes exactly %d parameters (%d given)' % \\\n (method_name, len(arg_names), n_args))\n else:\n min_args = len(arg_names)-len(defaults)\n if len(args)<min_args:\n raise WrongMethodUsage(\n u'Method %s requires at least %d parameters (%d given)' % \\\n (method_name, min_args, n_args))\n if not args_name and n_args>len(arg_names):\n raise WrongMethodUsage(\n u'Method %s requires at most %d parameters (%d given)' % \\\n (method_name, len(arg_names), n_args))\n try:\n return method(*args)\n except Fault:\n raise\n except:\n logger.exception('Application internal error for %s%r',\n method_name, args)\n raise ApplicationError(sys.exc_info())\n\n def dispatch(self, data):\n try:\n try:\n args, method_name = xmlrpclib.loads(data)\n except ExpatError, exc:\n raise NotWellformedError(exc)\n except LookupError, exc:\n raise UnsupportedEncoding(exc)\n except xmlrpclib.ResponseError:\n raise InvalidRequest('Request structure is invalid')\n method_name = method_name.encode('ascii', 'replace')\n result = self._dispatch(method_name, args)\n except Fault, exc:\n logger.warning('Fault %s: %s', exc.faultCode, 
exc.faultString)\n return xmlrpclib.dumps(exc)\n else:\n try:\n return xmlrpclib.dumps((result,), methodresponse=1)\n except:\n logger.exception('Application internal error when marshalling'\\\n ' result for %s%r', method_name, args)\n return xmlrpclib.dumps(ApplicationError(sys.exc_info()))\n\n\nclass InterfaceMethods:\n\n def _getMethod(self, name):\n if name.startswith('_'):\n raise MethodNotFound(name)\n try:\n method = getattr(self, name)\n except AttributeError:\n raise MethodNotFound(name)\n if not inspect.ismethod(method):\n raise MethodNotFound(name)\n return method\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "xml_rpc" ]
stackoverflow_0001571598_python_xml_rpc.txt
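For comparison, a minimal sketch of reporting clean faults straight from SimpleXMLRPCServer handlers, using the xmlrpclib error constants the question lists; no custom dispatcher, and the divide() method is purely illustrative:

import xmlrpclib
from SimpleXMLRPCServer import SimpleXMLRPCServer

def divide(a, b):
    if not isinstance(a, int) or not isinstance(b, int):
        # Usage error: tell the caller exactly what was wrong, nothing more.
        raise xmlrpclib.Fault(xmlrpclib.INVALID_METHOD_PARAMS,
                              'divide() takes two integers')
    try:
        return a // b
    except ZeroDivisionError:
        # Internal error: hide details from the caller, log server-side instead.
        raise xmlrpclib.Fault(xmlrpclib.APPLICATION_ERROR,
                              'internal error; see the server log')

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(divide)
server.serve_forever()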
Q: What's new in Python 3.x? http://docs.python.org/3.0/whatsnew/3.0.html says it lists what's new, but in my opinion it only lists differences, so does anybody know of any completely new Python features introduced in release 3.x? To avoid confusion, I will define a completely new feature as something that has never been used in any other code before, something you walk up to and go "Ooh, shiny!". E.g. a function to make aliens invade, etc. A: Many of the completely new features introduced in 3.0 were also backported to 2.6, a deliberate choice. However, this was not practical in all cases, so some of the new features remained Python 3 - only. How metaclasses work, is probably the biggest single new feature. The syntax is clearly better than 2.*'s __metaclass__ assignment...: class X(abase, metaclass=Y): but more importantly, the new syntax means the compiler knows the metaclass to use before it processes the class body, and so the metaclass can finally influence the way the class body is processed -- this was not possible in 2.*. Specifically, the metaclass's new __prepare__ method can return any writable mapping, and if so then that's used instead of a regular dict to record the assignments (and assigning keywords such as def) performed in the class body. In particular, this lets the order of the class body finally get preserved exactly as it's written down, as well as allowing the metaclass, if it so chooses, to record multiple assignments/definitions for any name in the class body, rather than just the last assignment or definition performed for that name. This hugely broadens the applicability of classes with appropriate custom metaclasses, compared to what was feasible in 2.*. Another syntax biggie is annotations -- see the PEP I'm pointing to for details. Python's standard library gives no special semantics to annotations, but exactly because of that third-party frameworks and tools are empowered to apply any semantics they wish -- such tasks as type-checking for function arguments are hereby allowed, though not directly performed by the standard Python library. There are of course many others (the new "views" concept embodied by such methods as dict's .keys &c in 3.*, keyword-only arguments, better sequence unpacking, nonlocal for more powerful closures, ...), of varying heft, but all pretty useful and well-designed. A: The section New Syntax lists, well, the new syntax in Python 3.x. I think it's debatable sometimes whether stuff is new or changed. E.g. exception chaining (PEP 3134): is that a new feature, or is it a change to the exception machinery? In general, I recommend looking at all the PEPs listed in the document. They are the major changes/new features.
What's new in Python 3.x?
http://docs.python.org/3.0/whatsnew/3.0.html says it lists what's new, but in my opinion it only lists differences, so does anybody know of any completely new Python features introduced in release 3.x? To avoid confusion, I will define a completely new feature as something that has never been used in any other code before, something you walk up to and go "Ooh, shiny!". E.g. a function to make aliens invade, etc.
[ "Many of the completely new features introduced in 3.0 were also backported to 2.6, a deliberate choice. However, this was not practical in all cases, so some of the new features remained Python 3 - only.\nHow metaclasses work, is probably the biggest single new feature. The syntax is clearly better than 2.*'s __metaclass__ assignment...:\nclass X(abase, metaclass=Y):\n\nbut more importantly, the new syntax means the compiler knows the metaclass to use before it processes the class body, and so the metaclass can finally influence the way the class body is processed -- this was not possible in 2.*. Specifically, the metaclass's new __prepare__ method can return any writable mapping, and if so then that's used instead of a regular dict to record the assignments (and assigning keywords such as def) performed in the class body. In particular, this lets the order of the class body finally get preserved exactly as it's written down, as well as allowing the metaclass, if it so chooses, to record multiple assignments/definitions for any name in the class body, rather than just the last assignment or definition performed for that name. This hugely broadens the applicability of classes with appropriate custom metaclasses, compared to what was feasible in 2.*.\nAnother syntax biggie is annotations -- see the PEP I'm pointing to for details. Python's standard library gives no special semantics to annotations, but exactly because of that third-party frameworks and tools are empowered to apply any semantics they wish -- such tasks as type-checking for function arguments are hereby allowed, though not directly performed by the standard Python library.\nThere are of course many others (the new \"views\" concept embodied by such methods as dict's .keys &c in 3.*, keyword-only arguments, better sequence unpacking, nonlocal for more powerful closures, ...), of varying heft, but all pretty useful and well-designed.\n", "The section New Syntax lists, well, the new syntax in Python 3.x. I think it's debatable sometimes whether stuff is new or changed. E.g. exception chaining (PEP 3134): is that a new feature, or is it a change to the exception machinery?\nIn general, I recommend looking at all the PEPs listed in the document. They are the major changes/new features.\n" ]
[ 9, 4 ]
[]
[]
[ "python" ]
stackoverflow_0001608731_python.txt
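A small, self-contained illustration of the __prepare__ mechanism the first answer describes (Python 3 only); the metaclass hands the compiler an OrderedDict, so the class body's definition order survives:

from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        # Called before the class body runs -- new in Python 3.
        return OrderedDict()

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        cls.member_order = [k for k in namespace if not k.startswith('__')]
        return cls

class X(metaclass=OrderedMeta):
    b = 1
    a = 2

print(X.member_order)   # ['b', 'a'] -- source order preserved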
Q: How to use PIL to resize and apply rotation EXIF information to the file? I am trying to use Python to resize picture. With my camera, files are all written is landscape way. The exif information handle a tag to ask the image viewer to rotate in a way or another. Since most of the browser doesn't understand this information, I want to rotate the image using this EXIF information and keeping every other EXIF information. Do you know how I can do that using Python ? Reading the EXIF.py source code, I found something like that : 0x0112: ('Orientation', {1: 'Horizontal (normal)', 2: 'Mirrored horizontal', 3: 'Rotated 180', 4: 'Mirrored vertical', 5: 'Mirrored horizontal then rotated 90 CCW', 6: 'Rotated 90 CW', 7: 'Mirrored horizontal then rotated 90 CW', 8: 'Rotated 90 CCW'}) How can I use this information and PIL to apply it ? A: I finally used pyexiv2, but it is a bit tricky to install on other platforms than GNU. #!/usr/bin/python # -*- coding: utf-8 -*- # Copyright (C) 2008-2009 Rémy HUBSCHER <natim@users.sf.net> - http://www.trunat.fr/portfolio/python.html # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License along # with this program; if not, write to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. # Using : # - Python Imaging Library PIL http://www.pythonware.com/products/pil/index.htm # - pyexiv2 http://tilloy.net/dev/pyexiv2/ ### # What is doing this script ? # # 1. Take a directory of picture from a Reflex Camera (Nikon D90 for example) # 2. Use the EXIF Orientation information to turn the image # 3. Remove the thumbnail from the EXIF Information # 4. Create 2 image one maxi map in 600x600, one mini map in 200x200 # 5. Add a comment with the name of the Author and his Website # 6. Copy the EXIF information to the maxi and mini image # 7. Name the image files with a meanful name (Date of picture) import os, sys try: import Image except: print "To use this program, you need to install Python Imaging Library - http://www.pythonware.com/products/pil/" sys.exit(1) try: import pyexiv2 except: print "To use this program, you need to install pyexiv2 - http://tilloy.net/dev/pyexiv2/" sys.exit(1) ############# Configuration ############## size_mini = 200, 200 size_maxi = 1024, 1024 # Information about the Photograph should be in ASCII COPYRIGHT="Remy Hubscher - http://www.trunat.fr/" ARTIST="Remy Hubscher" ########################################## def listJPEG(directory): "Retourn a list of the JPEG files in the directory" fileList = [os.path.normcase(f) for f in os.listdir(directory)] fileList = [f for f in fileList if os.path.splitext(f)[1] in ('.jpg', '.JPG')] fileList.sort() return fileList def _mkdir(newdir): """ works the way a good mkdir should :) - already exists, silently complete - regular file in the way, raise an exception - parent directory(ies) does not exist, make them as well """ if os.path.isdir(newdir): pass elif os.path.isfile(newdir): raise OSError("a file with the same name as the desired " \ "dir, '%s', already exists." 
% newdir) else: head, tail = os.path.split(newdir) if head and not os.path.isdir(head): _mkdir(head) if tail: os.mkdir(newdir) if len(sys.argv) < 3: print "USAGE : python %s indir outdir [comment]" % sys.argv[0] exit indir = sys.argv[1] outdir = sys.argv[2] if len(sys.argv) == 4: comment = sys.argv[1] else: comment = COPYRIGHT agrandie = os.path.join(outdir, 'agrandie') miniature = os.path.join(outdir, 'miniature') print agrandie, miniature _mkdir(agrandie) _mkdir(miniature) for infile in listJPEG(indir): mini = os.path.join(miniature, infile) grand = os.path.join(agrandie, infile) file_path = os.path.join(indir, infile) image = pyexiv2.Image(file_path) image.readMetadata() # We clean the file and add some information image.deleteThumbnail() image['Exif.Image.Artist'] = ARTIST image['Exif.Image.Copyright'] = COPYRIGHT image.setComment(comment) # I prefer not to modify the input file # image.writeMetadata() # We look for a meanful name if 'Exif.Image.DateTime' in image.exifKeys(): filename = image['Exif.Image.DateTime'].strftime('%Y-%m-%d_%H-%M-%S.jpg') mini = os.path.join(miniature, filename) grand = os.path.join(agrandie, filename) else: # If no exif information, leave the old name mini = os.path.join(miniature, infile) grand = os.path.join(agrandie, infile) # We create the thumbnail #try: im = Image.open(file_path) im.thumbnail(size_maxi, Image.ANTIALIAS) # We rotate regarding to the EXIF orientation information if 'Exif.Image.Orientation' in image.exifKeys(): orientation = image['Exif.Image.Orientation'] if orientation == 1: # Nothing mirror = im.copy() elif orientation == 2: # Vertical Mirror mirror = im.transpose(Image.FLIP_LEFT_RIGHT) elif orientation == 3: # Rotation 180° mirror = im.transpose(Image.ROTATE_180) elif orientation == 4: # Horizontal Mirror mirror = im.transpose(Image.FLIP_TOP_BOTTOM) elif orientation == 5: # Horizontal Mirror + Rotation 90° CCW mirror = im.transpose(Image.FLIP_TOP_BOTTOM).transpose(Image.ROTATE_90) elif orientation == 6: # Rotation 270° mirror = im.transpose(Image.ROTATE_270) elif orientation == 7: # Horizontal Mirror + Rotation 270° mirror = im.transpose(Image.FLIP_TOP_BOTTOM).transpose(Image.ROTATE_270) elif orientation == 8: # Rotation 90° mirror = im.transpose(Image.ROTATE_90) # No more Orientation information image['Exif.Image.Orientation'] = 1 else: # No EXIF information, the user has to do it mirror = im.copy() mirror.save(grand, "JPEG", quality=85) img_grand = pyexiv2.Image(grand) img_grand.readMetadata() image.copyMetadataTo(img_grand) img_grand.writeMetadata() print grand mirror.thumbnail(size_mini, Image.ANTIALIAS) mirror.save(mini, "JPEG", quality=85) img_mini = pyexiv2.Image(mini) img_mini.readMetadata() image.copyMetadataTo(img_mini) img_mini.writeMetadata() print mini print If you see something that could be improved (except the fact that it is still for Python 2.5) then please let me know. A: Although PIL can read EXIF metadata, it doesn't have the ability to change it and write it back to an image file. A better choice is the pyexiv2 library. With this library it's quite simple flip the photo's orientation. Here's an example: import sys import pyexiv2 image = pyexiv2.Image(sys.argv[1]) image.readMetadata() image['Exif.Image.Orientation'] = 6 image.writeMetadata() This sets the orientation to 6, corresponding to "Rotated 90 CW". A: First you have to make sure that your camera actually has a rotation sensor. Most camera models without sensor simply set the Orientation Tag to 1 (Horizontal) for ALL pictures. 
Then I recommend using pyexiv2 and pyjpegtran in your case. PIL doesn't support lossless rotation, which is the domain of pyjpegtran. pyexiv2 is a library that allows you to copy metadata from one image to another (I think the method name is copyMetadata). Are you sure that you don't want to resize your photos before displaying them in the browser? An 8-megapixel JPEG is much too big for the browser window and will cause endless loading times.
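The orientation handling above can be condensed into a small reusable helper. This is just a sketch: it assumes only PIL's transpose constants and that the orientation value (1-8) has already been read from the EXIF data, for example with pyexiv2 or EXIF.py, and the mapping transcribes the if/elif chain from the first answer:

import Image  # "from PIL import Image" on newer installs

# EXIF Orientation value (tag 0x0112) -> PIL transpose operations that
# undo it; the keys follow the table quoted in the question.
ORIENTATION_OPS = {
    2: [Image.FLIP_LEFT_RIGHT],
    3: [Image.ROTATE_180],
    4: [Image.FLIP_TOP_BOTTOM],
    5: [Image.FLIP_TOP_BOTTOM, Image.ROTATE_90],
    6: [Image.ROTATE_270],
    7: [Image.FLIP_TOP_BOTTOM, Image.ROTATE_270],
    8: [Image.ROTATE_90],
}

def apply_orientation(im, orientation):
    "Return im turned upright; 1 or an unknown value returns it unchanged."
    for op in ORIENTATION_OPS.get(orientation, []):
        im = im.transpose(op)
    return im

Note that PIL's ROTATE_* constants rotate counter-clockwise, which is why EXIF value 6 maps to ROTATE_270 here, exactly as in the script above.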
How to use PIL to resize and apply rotation EXIF information to the file?
I am trying to use Python to resize picture. With my camera, files are all written is landscape way. The exif information handle a tag to ask the image viewer to rotate in a way or another. Since most of the browser doesn't understand this information, I want to rotate the image using this EXIF information and keeping every other EXIF information. Do you know how I can do that using Python ? Reading the EXIF.py source code, I found something like that : 0x0112: ('Orientation', {1: 'Horizontal (normal)', 2: 'Mirrored horizontal', 3: 'Rotated 180', 4: 'Mirrored vertical', 5: 'Mirrored horizontal then rotated 90 CCW', 6: 'Rotated 90 CW', 7: 'Mirrored horizontal then rotated 90 CW', 8: 'Rotated 90 CCW'}) How can I use this information and PIL to apply it ?
[ "I finally used pyexiv2, but it is a bit tricky to install on other platforms than GNU.\n#!/usr/bin/python\n# -*- coding: utf-8 -*-\n# Copyright (C) 2008-2009 Rémy HUBSCHER <natim@users.sf.net> - http://www.trunat.fr/portfolio/python.html\n\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\n# Using :\n# - Python Imaging Library PIL http://www.pythonware.com/products/pil/index.htm\n# - pyexiv2 http://tilloy.net/dev/pyexiv2/\n\n###\n# What is doing this script ?\n#\n# 1. Take a directory of picture from a Reflex Camera (Nikon D90 for example)\n# 2. Use the EXIF Orientation information to turn the image\n# 3. Remove the thumbnail from the EXIF Information\n# 4. Create 2 image one maxi map in 600x600, one mini map in 200x200\n# 5. Add a comment with the name of the Author and his Website\n# 6. Copy the EXIF information to the maxi and mini image\n# 7. Name the image files with a meanful name (Date of picture)\n\nimport os, sys\ntry:\n import Image\nexcept:\n print \"To use this program, you need to install Python Imaging Library - http://www.pythonware.com/products/pil/\"\n sys.exit(1)\n\ntry:\n import pyexiv2\nexcept:\n print \"To use this program, you need to install pyexiv2 - http://tilloy.net/dev/pyexiv2/\"\n sys.exit(1)\n\n############# Configuration ##############\nsize_mini = 200, 200\nsize_maxi = 1024, 1024\n\n# Information about the Photograph should be in ASCII\nCOPYRIGHT=\"Remy Hubscher - http://www.trunat.fr/\"\nARTIST=\"Remy Hubscher\"\n##########################################\n\ndef listJPEG(directory):\n \"Retourn a list of the JPEG files in the directory\"\n fileList = [os.path.normcase(f) for f in os.listdir(directory)]\n fileList = [f for f in fileList if os.path.splitext(f)[1] in ('.jpg', '.JPG')]\n fileList.sort()\n return fileList\n\ndef _mkdir(newdir):\n \"\"\"\n works the way a good mkdir should :)\n - already exists, silently complete\n - regular file in the way, raise an exception\n - parent directory(ies) does not exist, make them as well\n \"\"\"\n if os.path.isdir(newdir):\n pass\n elif os.path.isfile(newdir):\n raise OSError(\"a file with the same name as the desired \" \\\n \"dir, '%s', already exists.\" % newdir)\n else:\n head, tail = os.path.split(newdir)\n if head and not os.path.isdir(head):\n _mkdir(head)\n if tail:\n os.mkdir(newdir)\n\nif len(sys.argv) < 3:\n print \"USAGE : python %s indir outdir [comment]\" % sys.argv[0]\n exit\n\nindir = sys.argv[1]\noutdir = sys.argv[2]\n\nif len(sys.argv) == 4:\n comment = sys.argv[1]\nelse:\n comment = COPYRIGHT\n\nagrandie = os.path.join(outdir, 'agrandie')\nminiature = os.path.join(outdir, 'miniature')\n\nprint agrandie, miniature\n\n_mkdir(agrandie)\n_mkdir(miniature)\n\nfor infile in listJPEG(indir):\n mini = os.path.join(miniature, infile)\n grand = os.path.join(agrandie, infile)\n file_path = os.path.join(indir, infile)\n\n image = 
pyexiv2.Image(file_path)\n image.readMetadata()\n\n # We clean the file and add some information\n image.deleteThumbnail()\n\n image['Exif.Image.Artist'] = ARTIST\n image['Exif.Image.Copyright'] = COPYRIGHT\n\n image.setComment(comment)\n\n # I prefer not to modify the input file\n # image.writeMetadata()\n\n # We look for a meanful name\n if 'Exif.Image.DateTime' in image.exifKeys():\n filename = image['Exif.Image.DateTime'].strftime('%Y-%m-%d_%H-%M-%S.jpg')\n mini = os.path.join(miniature, filename)\n grand = os.path.join(agrandie, filename)\n else:\n # If no exif information, leave the old name\n mini = os.path.join(miniature, infile)\n grand = os.path.join(agrandie, infile)\n\n # We create the thumbnail\n #try:\n im = Image.open(file_path)\n im.thumbnail(size_maxi, Image.ANTIALIAS)\n\n # We rotate regarding to the EXIF orientation information\n if 'Exif.Image.Orientation' in image.exifKeys():\n orientation = image['Exif.Image.Orientation']\n if orientation == 1:\n # Nothing\n mirror = im.copy()\n elif orientation == 2:\n # Vertical Mirror\n mirror = im.transpose(Image.FLIP_LEFT_RIGHT)\n elif orientation == 3:\n # Rotation 180°\n mirror = im.transpose(Image.ROTATE_180)\n elif orientation == 4:\n # Horizontal Mirror\n mirror = im.transpose(Image.FLIP_TOP_BOTTOM)\n elif orientation == 5:\n # Horizontal Mirror + Rotation 90° CCW\n mirror = im.transpose(Image.FLIP_TOP_BOTTOM).transpose(Image.ROTATE_90)\n elif orientation == 6:\n # Rotation 270°\n mirror = im.transpose(Image.ROTATE_270)\n elif orientation == 7:\n # Horizontal Mirror + Rotation 270°\n mirror = im.transpose(Image.FLIP_TOP_BOTTOM).transpose(Image.ROTATE_270)\n elif orientation == 8:\n # Rotation 90°\n mirror = im.transpose(Image.ROTATE_90)\n\n # No more Orientation information\n image['Exif.Image.Orientation'] = 1\n else:\n # No EXIF information, the user has to do it\n mirror = im.copy()\n\n mirror.save(grand, \"JPEG\", quality=85)\n img_grand = pyexiv2.Image(grand)\n img_grand.readMetadata()\n image.copyMetadataTo(img_grand)\n img_grand.writeMetadata()\n print grand\n\n mirror.thumbnail(size_mini, Image.ANTIALIAS)\n mirror.save(mini, \"JPEG\", quality=85)\n img_mini = pyexiv2.Image(mini)\n img_mini.readMetadata()\n image.copyMetadataTo(img_mini)\n img_mini.writeMetadata()\n print mini\n\n print\n\nIf you see something that could be improved (except the fact that it is still for Python 2.5) then please let me know.\n", "Although PIL can read EXIF metadata, it doesn't have the ability to change it and write it back to an image file.\nA better choice is the pyexiv2 library. With this library it's quite simple flip the photo's orientation. Here's an example:\nimport sys\nimport pyexiv2\n\nimage = pyexiv2.Image(sys.argv[1])\nimage.readMetadata()\n\nimage['Exif.Image.Orientation'] = 6\nimage.writeMetadata()\n\nThis sets the orientation to 6, corresponding to \"Rotated 90 CW\".\n", "First you have to make sure that your camera actually has a rotation sensor. Most camera models without sensor simply set the Orientation Tag to 1 (Horizontal) for ALL pictures.\nThen I recommend to use pyexiv2 and pyjpegtran in your case. PIL doesn't support lossless rotation, which is the domain of pyjpegtran. pyexiv2 is a library that allows you to copy metadata from one image to another (i think the method name is copyMetadata).\nAre you sure that you don't want to resize your photos before displaying them in the browser? A 8 Megapixel JPEG is much too big for the browser window and will cause endless loading times.\n" ]
[ 15, 6, 2 ]
[]
[]
[ "exif", "jpeg", "python", "python_imaging_library", "rotation" ]
stackoverflow_0001606587_exif_jpeg_python_python_imaging_library_rotation.txt
Q: Load different modules without changing the logic file Suppose I've got 2 different modules which have uniform (identical) interfaces. The files are laid out like this: root/ logic.py sns_api/ __init__.py facebook/ pyfacebook.py __init__.py myspace/ pymyspace.py __init__.py And pyfacebook.py and pymyspace.py have the same interfaces, which means: # in pyfacebook.py class Facebook: def __init__(self, a, b): # do the init def method1(self, a, b, ...): # do the logic # in pymyspace.py class Myspace: def __init__(self, a, b): # do the init def method1(self, a, b, ...): # do the logic Now I have a question. I want to do the logic in logic.py without duplicating the code, so I'm wondering how I can just set a flag to show which module I use so that Python will load the right code automatically, which means: # in logic.py PLATFORM = "facebook" # import the right modules in, complete the logic with the current platform # create the right instance and invoke the right methods Then when I change PLATFORM = 'myspace', the logic will work automatically. So how can I do this? I'm wondering whether dynamic importing will work, or eval of raw Python code, but neither seems like a good solution. Or maybe I can make a uniform wrapper in sns_api/__init__.py. Can anyone help? A: With just two I'd do if platform == 'facebook': from pyfacebook import Facebook as Platform elif platform == 'myspace': from pymyspace import Myspace as Platform else: raise RuntimeError, "not a valid platform" and use Platform in the rest of the code. It's done like this in the library, see the os module. You can do really dynamic imports using name = __import__('module'), but you probably don't need this. A: Have a "factory" function in each module, do a dynamic import and call the factory for the loaded module. At least, that's one way to do it. Remember, the pythonic way is "duck typing", so the factory returns an object and the client uses it through "duck calls" :-) A: You can also use exec: exec "from sns_api.%s import Platform" % PLATFORM Then in your implementation files, assign something to Platform: # in pyfacebook.py class Facebook: def __init__(self, a, b): # do the init def method1(self, a, b, ...): # do the logic Platform = Facebook
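A sketch of the dynamic-import variant the first answer mentions, with the module paths assumed from the directory layout in the question:

PLATFORM = 'facebook'

# flag -> (module path, class name); extend this as platforms are added
_PLATFORMS = {
    'facebook': ('sns_api.facebook.pyfacebook', 'Facebook'),
    'myspace': ('sns_api.myspace.pymyspace', 'Myspace'),
}

def load_platform(name):
    "Import the right module at run time and return its interface class."
    module_path, class_name = _PLATFORMS[name]
    # the fromlist argument makes __import__ return the leaf module
    # instead of the top-level sns_api package
    module = __import__(module_path, {}, {}, [class_name])
    return getattr(module, class_name)

Platform = load_platform(PLATFORM)

logic.py can then write Platform(a, b).method1(...) without caring which backend is behind it, and switching backends is a one-word change to PLATFORM.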
Load different modules without changing the logic file
Suppose I've got 2 different modules which have the uniform(same) interfaces. The files list like this: root/ logic.py sns_api/ __init__.py facebook/ pyfacebook.py __init__.py myspace/ pymyspace.py __init__.py And pyfacebook.py and pymyspace.py have the same interfaces, which means: # in pyfacebook.py class Facebook: def __init__(self, a, b): # do the init def method1(self, a, b, ...): # do the logic # in pymyspace.py class Myspace: def __init__(self, a, b): # do the init def method1(self, a, b, ...): # do the logic Now I have a question. I want to do the logic in logic.py without duplicating the codes, so I'm wondering how can I just set a flag to show which module I use and python will load the right codes automatically, which means: # in logic.py PLATFORM = "facebook" # import the right modules in, complete the logic with the current platform # create the right instance and invoke the right methods Then I change PLATFORM = 'myspace', the logic will work automatically. So how can I do this? I'm wondering whether using the dynamic importing will work, or eval raw python codes, but seems not a good solution. Or if I can make a uniform wrapper in sns_api/__init__.py Anyone can help?
[ "With just two i'd do\nif platform == 'facebook':\n from pyfacebook import FaceBook as Platform\nelif platform == 'myspace':\n from pymyspace import Myspace as Platform\nelse:\n raise RuntimeError, \"not a valid platform\"\n\nand use Platform in the rest of the code. It's done like this in the library, see the os module.\nYou can do really dynamic imports using name =__import__('module'), but you probably don't need this.\n", "Have a \"factory\" function in each module, do dynamic import and call the factory for the loaded module. At least, that's one way to do it. Remember, the pythonic way is \"duck typing\" so that factory returns an object and the client uses it through \"duck calls\" :-)\n", "You can also use exec:\nexec \"from sns_api.%s import Platform\" % PLATFORM\n\nThen in your implementation files, assign something to Platform:\n# in pyfacebook.py\nclass Facebook:\n def __init__(self, a, b):\n # do the init\n def method1(self, a, b, ...):\n # do the logic\n\nPlatform = Facebook\n\n" ]
[ 6, 0, 0 ]
[]
[]
[ "dynamic_import", "interface", "python" ]
stackoverflow_0001606960_dynamic_import_interface_python.txt
Q: Types that define `__eq__` are unhashable? I had a strange bug when porting a feature to the Python 3.1 fork of my program. I narrowed it down to the following hypothesis: In contrast to Python 2.x, in Python 3.x if an object has an __eq__ method it is automatically unhashable. Is this true? Here's what happens in Python 3.1: >>> class O(object): ... def __eq__(self, other): ... return 'whatever' ... >>> o = O() >>> d = {o: 0} Traceback (most recent call last): File "<pyshell#16>", line 1, in <module> d = {o: 0} TypeError: unhashable type: 'O' The follow-up question is, how do I solve my personal problem? I have an object ChangeTracker which stores a WeakKeyDictionary that points to several objects, giving for each the value of their pickle dump at a certain time point in the past. Whenever an existing object is checked in, the change tracker says whether its new pickle is identical to its old one, therefore saying whether the object has changed in the meantime. Problem is, now I can't even check if the given object is in the library, because it raises an exception about the object being unhashable. (Because it has an __eq__ method.) How can I work around this? A: Yes, if you define __eq__, the default __hash__ (namely, hashing the address of the object in memory) goes away. This is important because hashing needs to be consistent with equality: equal objects need to hash the same. The solution is simple: just define __hash__ along with defining __eq__. A: This paragraph from http://docs.python.org/3.1/reference/datamodel.html#object.hash explains it: If a class that overrides __eq__() needs to retain the implementation of __hash__() from a parent class, the interpreter must be told this explicitly by setting __hash__ = <ParentClass>.__hash__. Otherwise the inheritance of __hash__() will be blocked, just as if __hash__ had been explicitly set to None. A: Check the Python 3 manual on object.__hash__: If a class does not define an __eq__() method it should not define a __hash__() operation either; if it defines __eq__() but not __hash__(), its instances will not be usable as items in hashable collections. Emphasis is mine. If you want to be lazy, it sounds like you can just define __hash__(self) to return id(self): User-defined classes have __eq__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns id(x). A: I'm no Python expert, but wouldn't it make sense that, when you define an eq-method, you also have to define a hash-method as well (which calculates the hash value for an object)? Otherwise, the hashing mechanism wouldn't know if it hit the same object, or a different object with just the same hash-value. Actually, it's the other way around: it'd probably end up computing different hash values for objects considered equal by your __eq__ method. I have no idea what that hash function is called though, __hash__ perhaps? :)
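Applied to the class from the question, the fix suggested in the first answer looks like this; the key attribute is hypothetical, since the original class compares on nothing in particular, but it shows the hash/equality pairing:

class O(object):
    def __init__(self, key):
        self.key = key
    def __eq__(self, other):
        return isinstance(other, O) and self.key == other.key
    def __hash__(self):
        return hash(self.key)  # equal objects now hash equal

d = {O(1): 0}     # no TypeError in Python 3.x
print(O(1) in d)  # True

For the ChangeTracker case, where the WeakKeyDictionary lookups should keep working by object identity, the __hash__ = object.__hash__ trick from the docs quoted above is probably the smallest workaround.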
Types that define `__eq__` are unhashable?
I had a strange bug when porting a feature to the Python 3.1 fork of my program. I narrowed it down to the following hypothesis: In contrast to Python 2.x, in Python 3.x if an object has an __eq__ method it is automatically unhashable. Is this true? Here's what happens in Python 3.1: >>> class O(object): ... def __eq__(self, other): ... return 'whatever' ... >>> o = O() >>> d = {o: 0} Traceback (most recent call last): File "<pyshell#16>", line 1, in <module> d = {o: 0} TypeError: unhashable type: 'O' The follow-up question is, how do I solve my personal problem? I have an object ChangeTracker which stores a WeakKeyDictionary that points to several objects, giving for each the value of their pickle dump at a certain time point in the past. Whenever an existing object is checked in, the change tracker says whether its new pickle is identical to its old one, therefore saying whether the object has changed in the meantime. Problem is, now I can't even check if the given object is in the library, because it makes it raise an exception about the object being unhashable. (Cause it has a __eq__ method.) How can I work around this?
[ "Yes, if you define __eq__, the default __hash__ (namely, hashing the address of the object in memory) goes away. This is important because hashing needs to be consistent with equality: equal objects need to hash the same.\nThe solution is simple: just define __hash__ along with defining __eq__.\n", "This paragraph from http://docs.python.org/3.1/reference/datamodel.html#object.hash\n\nIf a class that overrides __eq__()\n needs to retain the implementation of\n __hash__() from a parent class, the interpreter must be told this\n explicitly by setting __hash__ =\n <ParentClass>.__hash__. Otherwise the\n inheritance of __hash__() will be\n blocked, just as if __hash__ had been\n explicitly set to None.\n\n", "Check the Python 3 manual on object.__hash__:\n\nIf a class does not define an __eq__() method it should not define a __hash__() operation either; if it defines __eq__() but not __hash__(), its instances will not be usable as items in hashable collections.\n\nEmphasis is mine.\nIf you want to be lazy, it sounds like you can just define __hash__(self) to return id(self):\n\nUser-defined classes have __eq__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns id(x).\n\n", "I'm no python expert, but wouldn't it make sense that, when you define a eq-method, you also have to define a hash-method as well (which calculates the hash value for an object) Otherwise, the hashing mechanism wouldn't know if it hit the same object, or a different object with just the same hash-value. Actually, it's the other way around, it'd probably end up computing different hash values for objects considered equal by your __eq__ method.\nI have no idea what that hash function is called though, __hash__ perhaps? :)\n" ]
[ 96, 33, 6, 1 ]
[]
[]
[ "hash", "python", "python_3.x" ]
stackoverflow_0001608842_hash_python_python_3.x.txt
Q: Abstract base class inheritance in Django with foreignkey I am attempting model inheritance on my Django powered site in order to adhere to DRY. My goal is to use an abstract base class called BasicCompany to supply the common info for three child classes: Butcher, Baker, CandlestickMaker (they are located in their own apps under their respective names). Each of the child classes has a need for a variable number of things like email addresses, phone numbers, URLs, etc, ranging in number from 0 and up. So I want a many-to-one/ForeignKey relationship between these classes and the company they refer to. Here is roughly what I imagine BasicCompany/models.py looking like: from django.contrib.auth.models import User from django.db import models class BasicCompany(models.Model): owner = models.ForeignKey(User) name = models.CharField() street_address = models.CharField() #etc... class Meta: abstract = True class EmailAddress(models.Model): email = models.EmailField() basiccompany = models.ForeignKey(BasicCompany, related_name="email_addresses") #etc for URLs, PhoneNumbers, PaymentTypes. What I don't know how to do is inherit EmailAddress, URLs, PhoneNumbers (etc) into the child classes. Can it be done, and if so, how? If not, I would appreciate your advice on workarounds. A: I suspect you'll be better off with generic relations for the links, rather than trying to tie everything to a base class. Generic relations allow you to link a model such as EmailAddress to any other class, which would seem to be a good fit with your use case.
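A sketch of what the generic-relation approach could look like for one of the satellite models, assuming the contenttypes layout of Django 1.x, where GenericForeignKey lived in django.contrib.contenttypes.generic:

from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic
from django.db import models

class EmailAddress(models.Model):
    email = models.EmailField()
    # generic link: can point at a Butcher, a Baker or a CandlestickMaker
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    company = generic.GenericForeignKey('content_type', 'object_id')

Each concrete company model can then declare, say, email_addresses = generic.GenericRelation(EmailAddress) so that butcher.email_addresses.all() works much like the related_name in the question would have.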
Abstract base class inheritance in Django with foreignkey
I am attempting model inheritance on my Django powered site in order to adhere to DRY. My goal is to use an abstract base class called BasicCompany to supply the common info for three child classes: Butcher, Baker, CandlestickMaker (they are located in their own apps under their respective names). Each of the child classes has a need for a variable number of things like email addresses, phone numbers, URLs, etc, ranging in number from 0 and up. So I want a many-to-one/ForeignKey relationship between these classes and the company they refer to. Here is roughly what I imagine BasicCompany/models.py looking like: from django.contrib.auth.models import User from django.db import models class BasicCompany(models.Models) owner = models.ForeignKey(User) name = models.CharField() street_address = models.CharField() #etc... class Meta: abstract = True class EmailAddress(models.model) email = models.EmailField() basiccompany = models.ForeignKey(BasicCompany, related_name="email_addresses") #etc for URLs, PhoneNumbers, PaymentTypes. What I don't know how to do is inherit EmailAddress, URLs, PhoneNumbers (etc) into the child classes. Can it be done, and if so, how? If not, I would appreciate your advice on workarounds.
[ "I suspect you'll be better off with generic relations for the links, rather than trying to tie everything to a base class. Generic relations allow you to link a model such as EmailAddress to any other class, which would seem to be a good fit with your use case.\n" ]
[ 6 ]
[]
[]
[ "abstract_class", "django", "django_models", "inheritance", "python" ]
stackoverflow_0001608975_abstract_class_django_django_models_inheritance_python.txt
Q: Site wide caching with Django - problems with password protected pages on logout I've recently implemented site-wide caching using memcached on my Django application. I've set the TTL to about 500 seconds and implemented per-view caches on other parts of the web application. The problem I have is that when a user logs out, because it's a form post the site behaves as expected; however, if they then go to a password-protected part of the site, the application behaves as if they are still logged in, unless they hit "refresh". I'm new to caching and am wondering if I can do anything smart to prevent this? A: I ran into similar issues. The standard Django way is to disable the cache for authenticated users. #settings.py CACHE_MIDDLEWARE_ANONYMOUS_ONLY=True It works fine if different users see different pages (for example, with their user name on them), so you can't serve one cached version to all of them. But if there are only 2 versions of a page (one for authenticated users and one for everyone else), then it is not good to completely disable the cache for authenticated users. I wrote an app that, among other things, makes it possible to fine-tune the cache in this case. Update. BTW: you mentioned that when you click 'refresh' the correct version of the page is received. That means the problem is the client-side cache (Expires header or E-tag), not the server cache. To prevent client-side caching (you have to do that if you have several versions of a page under the same URL) use the @cache_control(must_revalidate=True) decorator. A: In the view of a password-protected part of the site, do you check whether the user is registered or anonymous before fetching the data (and perhaps bringing data from the cache)? You should. Django helps you with a login-required decorator you can place on the view. Take a look at this: http://docs.djangoproject.com/en/dev/topics/auth/#the-login-required-decorator
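Putting the two answers together, a minimal setup could look like the sketch below; the setting name is the Django 1.x one and the view name is just an example:

# settings.py -- keep the site-wide cache away from authenticated users
CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True

# views.py
from django.contrib.auth.decorators import login_required
from django.views.decorators.cache import cache_control

@login_required
@cache_control(must_revalidate=True)  # stop stale client-side copies
def account_page(request):
    ...  # render the password-protected page

The must_revalidate directive addresses the "works after refresh" symptom: the browser is forced to check back with the server instead of reusing the copy it cached while the user was still logged in.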
Site wide caching with Django - problems with password protected pages on logout
I've recently implemented sitewide caching using memcached on my Django application, I've set the TTL to about 500 seconds, and implement per view caches on other parts of the web application. The problem I have is that when a user logs out, because it's a form post the site behaves as expected, however if they then go to a password protected part of the site, the application behaves as if they have still logged in, unless they hit "refresh". I'm new to caching, and wondering if I can do anything smart to prevent this?
[ "I ran into similar issues. The standard Django way is to disable cache for authenticated users. \n#settings.py\nCACHE_MIDDLEWARE_ANONYMOUS_ONLY=True\n\nIt works fine if different users see different pages (example: their user name on them) and you can't have one version for them.\nBut if there are only 2 versions of page: for authenticated users and for others then it is not good to completely disable cache for authenticated users. I wrote an app that, besides all, make it possible to fine-tune cache in this case.\nUpdate.\nBTW: you mentioned that when you click 'refresh' correct version of page is received. It means that problem is client-side cache (Expires header or E-tag), not the server cache. \nTo prevent client-side caching (you have to do that if you have several versions of page under the same URL) use @cache_control(must_revalidate=True) decorator.\n", "In the view of a password protected part of the site, do you check whether the user is registered or anonymous before fetching the data (and perhaps bringing data from cache)?\nYou should. Django helps you, with a login required decorator you can place on the view.\nTake a look at this:\nhttp://docs.djangoproject.com/en/dev/topics/auth/#the-login-required-decorator\n" ]
[ 7, 1 ]
[]
[]
[ "django", "memcached", "python" ]
stackoverflow_0001608521_django_memcached_python.txt
Q: I need __closure__ I just checked out this very interesting mindmap: http://www.mindmeister.com/10510492/python-underscore And I was wondering what some of the new ones mean, like __code__ and __closure__. I googled around but found nothing concrete. Does anyone know? A: From What's New in Python 3.0: The function attributes named func_X have been renamed to use the __X__ form, freeing up these names in the function attribute namespace for user-defined attributes. To wit, func_closure, func_code, func_defaults, func_dict, func_doc, func_globals, func_name were renamed to __closure__, __code__, __defaults__, __dict__, __doc__, __globals__, __name__, respectively. Basically, same old Python 2 stuff, fancy new Python 3000 names. You can learn more about most of these in PEP 232. A: You actually have analogous fields in CPython 2.x: >>> first = lambda x: lambda y: x >>> f = first(2) >>> type(f.func_code) <type 'code'> >>> map(type, f.func_closure) [<type 'cell'>] Edit: For more details on the meaning of these constructs please read about "user defined functions" and "code objects" explained in the Data Model chapter of the Python Reference. A: They used to be called func_closure (now __closure__) and func_code (now __code__) (that should help googling). A short explanation from the docs is below. func_closure: None or a tuple of cells that contain bindings for the function's free variables (read-only) func_code: The code object representing the compiled function body (writable) A: These are Python's special attributes.
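The same inspection as the CPython 2.x snippet above, written with the Python 3 names:

def make_adder(n):
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5.__code__.co_freevars)           # ('n',) -- the free variable
print(add5.__closure__[0].cell_contents)   # 5 -- its captured value

So __code__ is the compiled function body and __closure__ is the tuple of cells binding the function's free variables, exactly as func_code and func_closure were in Python 2.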
I need __closure__
I just checked out this very interesting mindmap: http://www.mindmeister.com/10510492/python-underscore And I was wondering what some of the new ones mean, like __code__ and __closure__. I googled around but nothing concrete. Does anyone know?
[ "From What's New in Python 3.0\nThe function attributes named func_X have been renamed to use the __X__ form, freeing up these names in the function attribute namespace for user-defined attributes. To wit, func_closure, func_code, func_defaults, func_dict, func_doc, func_globals, func_name were renamed to __closure__, __code__, __defaults__, __dict__, __doc__, __globals__, __name__, respectively.\nBasically, same old Python 2 stuff, fancy new Python 3000 name.\nYou can learn more about most of these in PEP 232\n", "You actually have analogous fields in CPython 2.x:\n>>> first = lambda x: lambda y: x\n>>> f = first(2)\n>>> type(f.func_code)\n<type 'code'>\n>>> map(type, f.func_closure)\n[<type 'cell'>]\n\nEdit: For more details on the meaning of these constructs please read about \"user defined functions\" and \"code objects\" explained in the Data Model chapter of the Python Reference.\n", "They used to be called \nfunc_closure (now __closure__), func_code (now __code__)\n\n(that should help googling).\nA short explanation from here below.\n\nfunc_closure: None or a tuple of cells that contain bindings for the function’s free variables (read-only)\nfunc_code: The code object\nrepresenting the compiled function\nbody (writable)\n\n", "These are Python's special methods.\n" ]
[ 7, 6, 4, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0001609716_python_python_3.x.txt
Q: Unable to get WAF to run I am trying to build the Monotooth library on Ubuntu and there is a native component which needs to be compiled. The distro from github has a wscript file and requires WAF to build. However, whenever I try to execute waf configure I get: Checking for program gcc : ok /usr/bin/gcc Checking for program cpp : ok /usr/bin/cpp Checking for program ar : ok /usr/bin/ar Checking for program ranlib : ok /usr/bin/ranlib error: No such method 'create_library_configurator' I don't know Python and I'm not sure what this is actually telling me. Am I missing a library (module) or what? A: I found that I needed to use a version of waf from http://waf.googlecode.com/files/waf-1.3.2.tar.bz2. I had version 1.5.9, which must have removed the create_library_configurator method.
Unable to get WAF to run
I am trying to build the Monotooth library on Ubuntu and there is a native component which needs to be compiled. The distro from github has a wscript file and requires WAF to build. However, whenever I try to execute waf configure I get: Checking for program gcc : ok /usr/bin/gcc Checking for program cpp : ok /usr/bin/cpp Checking for program ar : ok /usr/bin/ar Checking for program ranlib : ok /usr/bin/ranlib error: No such method 'create_library_configurator' I don't know python and I'm not sure what this is actually telling me. Am I missing a library (module) or what?
[ "I found that I needed to use a version of waf from http://waf.googlecode.com/files/waf-1.3.2.tar.bz2\nI had version 1.5.9 which must have deprecated the create_library_configurator method.\n" ]
[ 2 ]
[]
[]
[ "python", "waf", "wsh" ]
stackoverflow_0001608646_python_waf_wsh.txt
Q: sqlite3 Operation Error when doing many commits rapidly I get sqlite3.OperationalError: SQL logic error or missing database when I run an application I've been working on. What follows is a narrowed-down but complete sample that exhibits the problem for me. This sample uses two tables; one to store users and one to record whether user information is up-to-date in an external directory system. (As you can imagine, the tables are a fair bit longer in my real application). The sample creates a bunch of random users, and then goes through a list of (random) users and adds them to the second table. #!/usr/bin/env python import sqlite3 import random def random_username(): # Returns one of 10 000 four-letter placeholders for a username seq = 'abcdefghij' return random.choice(seq) + random.choice(seq) + \ random.choice(seq) + random.choice(seq) connection = sqlite3.connect("test.sqlite") connection.execute('''CREATE TABLE IF NOT EXISTS "users" ( "entry_id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL , "user_id" INTEGER NOT NULL , "obfuscated_name" TEXT NOT NULL)''') connection.execute('''CREATE TABLE IF NOT EXISTS "dir_x_user" ( "user_id" INTEGER PRIMARY KEY NOT NULL)''') # Create a bunch of random users random.seed(0) # get the same results every time for i in xrange(1500): connection.execute('''INSERT INTO users (user_id, obfuscated_name) VALUES (?, ?)''', (i, random_username())) connection.commit() #random.seed() for i in xrange(4000): username = random_username() result = connection.execute( 'SELECT user_id FROM users WHERE obfuscated_name = ?', (username, )) row = result.fetchone() if row is not None: user_id = row[0] print " %4d %s" % (user_id, username) connection.execute( 'INSERT OR IGNORE INTO dir_x_user (user_id) VALUES(?)', (user_id, )) else: print " ? %s" % username if i % 10 == 0: print "i = %s; committing" % i connection.commit() connection.commit() Of particular note is the line near the end that says, if i % 10 == 0: In the real application, I'm querying the data from a network resource, and want to commit the users every now and then. Changing that line changes when the error occurs; it seems that when I commit, there is a non-zero chance of the OperationalError. It seems to be somewhat related to the data I'm putting in the database, but I can't determine what the problem is. Most of the time if I read all the data and then commit only once, an error does not occur. [Yes, there is an obvious work-around there, but a latent problem remains.] Here is the end of a sample run on my computer: ? cgha i = 530; committing ? gegh ? aabd ? efhe ? jhji ? hejd ? biei ? eiaa ? eiib ? bgbf 759 bedd i = 540; committing Traceback (most recent call last): File "sqlitetest.py", line 46, in <module> connection.commit() sqlite3.OperationalError: SQL logic error or missing database I'm using Mac OS X 10.5.8 with the built-in Python 2.5.1 and Sqlite3 3.4.0. A: As the "lite" part of the name implies, sqlite3 is meant for light-weight database use, not massive scalable concurrency like some of the Big Boys. 
Seems to me that what's happening here is that sqlite hasn't finished writing the last change you requested when you make another request. So, some options I see for you are: You could spend a lot of time learning about file locking, concurrency, and transactions in sqlite3 You could add some more error-proofing simply by having your app retry the action after the first failure, as suggested by some on this Reddit post, which includes tips such as "If the code has an effective mechanism for simply trying again, most of sqlite's concurrency problems go away" and "Passing isolation_level=None to connect seems to fix it". You could switch to using a more scalable database like PostgreSQL. (For my money, #2 or #3 are the way to go.)
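A sketch of option 2, wrapping the commit in a plain retry loop; the attempt count and delay here are arbitrary:

import sqlite3
import time

def commit_with_retry(conn, attempts=5, delay=0.1):
    "Retry the commit a few times before giving up for good."
    for attempt in range(attempts):
        try:
            conn.commit()
            return
        except sqlite3.OperationalError:
            if attempt == attempts - 1:
                raise             # still failing, re-raise the error
            time.sleep(delay)     # give sqlite time to finish writing

This combines naturally with opening the database as sqlite3.connect("test.sqlite", isolation_level=None) if you want to try the autocommit behaviour mentioned in the quoted tips.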
sqlite3 Operation Error when doing many commits rapidly
I get sqlite3.OperationalError: SQL logic error or missing database when I run an application I've been working on. What follows is a narrowed-down but complete sample that exhibits the problem for me. This sample uses two tables; one to store users and one to record whether user information is up-to-date in an external directory system. (As you can imagine, the tables are a fair bit longer in my real application). The sample creates a bunch of random users, and then goes through a list of (random) users and adds them to the second table. #!/usr/bin/env python import sqlite3 import random def random_username(): # Returns one of 10 000 four-letter placeholders for a username seq = 'abcdefghij' return random.choice(seq) + random.choice(seq) + \ random.choice(seq) + random.choice(seq) connection = sqlite3.connect("test.sqlite") connection.execute('''CREATE TABLE IF NOT EXISTS "users" ( "entry_id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL , "user_id" INTEGER NOT NULL , "obfuscated_name" TEXT NOT NULL)''') connection.execute('''CREATE TABLE IF NOT EXISTS "dir_x_user" ( "user_id" INTEGER PRIMARY KEY NOT NULL)''') # Create a bunch of random users random.seed(0) # get the same results every time for i in xrange(1500): connection.execute('''INSERT INTO users (user_id, obfuscated_name) VALUES (?, ?)''', (i, random_username())) connection.commit() #random.seed() for i in xrange(4000): username = random_username() result = connection.execute( 'SELECT user_id FROM users WHERE obfuscated_name = ?', (username, )) row = result.fetchone() if row is not None: user_id = row[0] print " %4d %s" % (user_id, username) connection.execute( 'INSERT OR IGNORE INTO dir_x_user (user_id) VALUES(?)', (user_id, )) else: print " ? %s" % username if i % 10 == 0: print "i = %s; committing" % i connection.commit() connection.commit() Of particular note is the line near the end that says, if i % 10 == 0: In the real application, I'm querying the data from a network resource, and want to commit the users every now and then. Changing that line changes when the error occurs; it seems that when I commit, there is a non-zero chance of the OperationalError. It seems to be somewhat related to the data I'm putting in the database, but I can't determine what the problem is. Most of the time if I read all the data and then commit only once, an error does not occur. [Yes, there is an obvious work-around there, but a latent problem remains.] Here is the end of a sample run on my computer: ? cgha i = 530; committing ? gegh ? aabd ? efhe ? jhji ? hejd ? biei ? eiaa ? eiib ? bgbf 759 bedd i = 540; committing Traceback (most recent call last): File "sqlitetest.py", line 46, in <module> connection.commit() sqlite3.OperationalError: SQL logic error or missing database I'm using Mac OS X 10.5.8 with the built-in Python 2.5.1 and Sqlite3 3.4.0.
[ "As the \"lite\" part of the name implies, sqlite3 is meant for light-weight database use, not massive scalable concurrency like some of the Big Boys. Seems to me that what's happening here is that sqlite hasn't finished writing the last change you requested when you make another request\nSo, some options I see for you are:\n\nYou could spend a lot of time learning about file locking, concurrency, and transaction in sqlite3\nYou could add some more error-proofing simply by having your app retry the action after the first failure, as suggested by some on this Reddit post, which includes tips such as \"If the code has an effective mechanism for simply trying again, most of sqlite's concurrency problems go away\" and \"Passing isolation_level=None to connect seems to fix it\".\nYou could switch to using a more scalable database like PostgreSQL\n\n(For my money, #2 or #3 are the way to go.)\n" ]
[ 2 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0001610154_python_sqlite.txt
Q: Run time of a subprocess.Popen instance Is there an easy way to find out the current (real or cpu) run time of a subprocess.Popen instance? A: No, but you can simply subclass and extend the Popen class to store the time it was created. A: Not in a platform-independent manner. On Linux, you can read /proc/<pid>/stat, in particular the columns utime, stime, and starttime (as described in proc(5)). A: On a Windows machine, you can use win32 APIs along with proc.pid, like so: import subprocess def print_times(proc): import win32process, win32api, win32con hproc = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION, False, proc.pid) times = win32process.GetProcessTimes(hproc) win32api.CloseHandle(hproc) # GetProcessTimes returns values in 100-ns intervals print 'kernel', times['KernelTime'] * 1e-7 print 'user', times['UserTime'] * 1e-7 proc = subprocess.Popen('cmd /c sleep 1') proc.wait() print_times(proc)
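A sketch of the subclassing idea from the first answer; note it only measures wall-clock time, not CPU time:

import subprocess
import time

class TimedPopen(subprocess.Popen):
    "Popen that remembers when it was spawned."
    def __init__(self, *args, **kwargs):
        self._started = time.time()
        subprocess.Popen.__init__(self, *args, **kwargs)

    def elapsed(self):
        "Seconds of real time since the process was started."
        return time.time() - self._started

proc = TimedPopen(['sleep', '1'])
proc.wait()
print proc.elapsed()   # roughly 1.0

For CPU time you still need the platform-specific routes shown in the other answers: /proc/<pid>/stat on Linux or GetProcessTimes on Windows.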
Run time of a subprocess.Popen instance
Is there an easy way to find out the current (real or cpu) run time of a subprocess.Popen instance?
[ "No, but you can simply subclass and extend the Popen class to store the time it was created.\n", "Not in a platform-independent manner. On Linux, you can read /proc/<pid>/stat, in particular the columns utime, stime, and starttime (as described in proc(5)).\n", "On a Windows machine, you can use win32 APIs along with proc.pid, like so:\nimport subprocess\n\ndef print_times(proc):\n import win32process, win32api, win32con\n hproc = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION, False, proc.pid)\n times = win32process.GetProcessTimes(hproc)\n win32api.CloseHandle(hproc)\n # GetProcessTimes returns values in 100-ns intervals\n print 'kernel', times['KernelTime'] * 1e-7\n print 'user', times['UserTime'] * 1e-7\n\nproc = subprocess.Popen('cmd /c sleep 1')\nproc.wait()\nprint_times(proc)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001608724_python_subprocess.txt
Q: C# equivalence of python maketrans and translate Where could I find code of a C# equivalence of Python maketrans and translate? Thanks! A: This should take you most of the way there: public class MakeTrans { private readonly Dictionary<char, char> d; public MakeTrans(string intab, string outab) { d = Enumerable.Range(0, intab.Length).ToDictionary(i => intab[i], i => outab[i]); //d = new Dictionary<char, char>(); //for (int i = 0; i < intab.Length; i++) // d[intab[i]] = outab[i]; } public string Translate(string src) { System.Text.StringBuilder sb = new StringBuilder(src.Length); foreach (char src_c in src) sb.Append(d.ContainsKey(src_c) ? d[src_c] : src_c); return sb.ToString(); } } You're responsible for making sure that intab and outtab are the same length. You can add functionality for dropping letters, etc. The dictionary construction is done in a cool LINQ-y way. It's a bit non-obvious, so the commented out code is provided and does the same thing. Here's how it looks in python (example lifted from here): >>> from string import maketrans # Required to call maketrans function. >>> >>> intab = "aeiou" >>> outtab = "12345" >>> trantab = maketrans(intab, outtab) >>> >>> str = "this is string example....wow!!!"; >>> print str.translate(trantab); th3s 3s str3ng 2x1mpl2....w4w!!! Here is C# test code: static void Main(string[] args) { MakeTrans.MakeTrans mt = new MakeTrans.MakeTrans("aeiou", "12345"); Console.WriteLine("{0}", mt.Translate("this is string example....wow!!!")); } And here is the output: th3s 3s str3ng 2x1mpl2....w4w!!!
C# equivalence of python maketrans and translate
Where could I find code of a C# equivalence of Python maketrans and translate? Thanks!
[ "This should take you most of the way there:\npublic class MakeTrans\n{\n private readonly Dictionary<char, char> d;\n public MakeTrans(string intab, string outab)\n {\n d = Enumerable.Range(0, intab.Length).ToDictionary(i => intab[i], i => outab[i]);\n //d = new Dictionary<char, char>();\n //for (int i = 0; i < intab.Length; i++)\n // d[intab[i]] = outab[i];\n }\n public string Translate(string src)\n {\n System.Text.StringBuilder sb = new StringBuilder(src.Length);\n foreach (char src_c in src)\n sb.Append(d.ContainsKey(src_c) ? d[src_c] : src_c);\n return sb.ToString();\n }\n}\n\nYou're responsible for making sure that intab and outtab are the same length. You can add functionality for dropping letters, etc.\nThe dictionary construction is done in a cool LINQ-y way. It's a bit non-obvious, so the commented out code is provided and does the same thing.\nHere's how it looks in python (example lifted from here):\n>>> from string import maketrans # Required to call maketrans function.\n>>>\n>>> intab = \"aeiou\"\n>>> outtab = \"12345\"\n>>> trantab = maketrans(intab, outtab)\n>>>\n>>> str = \"this is string example....wow!!!\";\n>>> print str.translate(trantab);\nth3s 3s str3ng 2x1mpl2....w4w!!!\n\nHere is C# test code:\n static void Main(string[] args)\n {\n MakeTrans.MakeTrans mt = new MakeTrans.MakeTrans(\"aeiou\", \"12345\");\n Console.WriteLine(\"{0}\", mt.Translate(\"this is string example....wow!!!\"));\n }\n\nAnd here is the output:\nth3s 3s str3ng 2x1mpl2....w4w!!!\n\n" ]
[ 3 ]
[]
[]
[ "c#", "python" ]
stackoverflow_0001610217_c#_python.txt
Q: How does __iter__ work? Despite reading up on it, I still don't quite understand how __iter__ works. What would be a simple explanation? I've seen def __iter__(self): return self. I don't see how this works or the steps on how this works. A: As simply as I can put it: __iter__ defines a method on a class which will return an iterator (an object that successively yields the next item contained by your object). The iterator object that __iter__() returns can be pretty much any object, as long as it defines a next() method. The next method will be called by statements like for ... in ... to yield the next item, and next() should raise the StopIteration exception when there are no more items. What's great about this is it lets you define how your object is iterated, and __iter__ provides a common interface that every other Python function knows how to work with. A: An iterator needs to define two methods: __iter__() and __next__() (next() in python2). Usually, the object itself defines the __next__() or next() method, so it just returns itself as the iterator. This creates an iterable that is also itself an iterator. These methods are used by for and in statements. Python 3 docs: docs.python.org/3/library/stdtypes.html#iterator-types Python 2 docs: docs.python.org/2/library/stdtypes.html#iterator-types A: The specs for def __iter__(self): are: it returns an iterator. So, if self is an iterator, return self is clearly appropriate. "Being an iterator" means "having a __next__(self) method" (in Python 3; in Python 2, the name of the method in question is unfortunately plain next instead, clearly a name design glitch for a special method). In Python 2.6 and higher, the best way to implement an iterator is generally to use the appropriate abstract base class from the collections standard library module -- in Python 2.6, the code might be (remember to call the method __next__ instead in Python 3): import collections class infinite23s(collections.Iterator): def next(self): return 23 an instance of this class will return infinitely many copies of 23 when iterated on (like itertools.repeat(23)) so the loop must be terminated otherwise. The point is that subclassing collections.Iterator adds the right __iter__ method on your behalf -- not a big deal here, but a good general principle (avoid repetitive, boilerplate code like iterators' standard one-line __iter__ -- in repetition, there's no added value and a lot of subtracted value!-). A: A class supporting the __iter__ method will return an iterator object instance: an object supporting the next() method. This object will be usable in the statements "for" and "in". A: In Python, an iterator is any object that supports the iterator protocol. Part of that protocol is that the object must have an __iter__() method that returns the iterator object. I suppose this gives you some flexibility so that an object can pass on the iterator responsibilities to an internal class, or create some special object. In any case, the __iter__() method usually has only one line and that line is often simply return self. The other part of the protocol is the next() method, and this is where the real work is done. This method has to figure out or create or get the next thing, and return it. It may need to keep track of where it is so that the next time it is called, it really does return the next thing. 
Once you have an object that returns the next thing in a sequence, you can collapse a for loop that looks like this: myname = "Fredericus" x = [] for i in [1,2,3,4,5,6,7,8,9,10]: x.append(myname[i-1]) i = i + 1 # get the next i print x into this: myname = "Fredericus" x = [myname[i] for i in range(10)] print x Notice that there is nowhere where we have code that gets the next value of i, because range(10) is an object that FOLLOWS the iterator protocol, and the list comprehension is a construct that USES the iterator protocol. You can also USE the iterator protocol directly. For instance, when writing scripts to process CSV files, I often write this: mydata = csv.reader(open('stuff.csv')) mydata.next() for row in mydata: # do something with the row. I am using the iterator directly by calling next() to skip the header row, then using it indirectly via the builtin in operator in the for statement.
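A tiny complete example tying the two protocol methods together (Python 2 spelling; in Python 3 the method is named __next__):

class Countdown(object):
    "An iterable that is its own iterator."
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self        # we already support next(), so hand back self

    def next(self):        # __next__ in Python 3
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

for n in Countdown(3):
    print n                # prints 3, 2, 1

The one-line __iter__ here is exactly the def __iter__(self): return self from the question: it simply promises that this object already knows how to produce the next item.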
How does __iter__ work?
Despite reading up on it, I still dont quite understand how __iter__ works. What would be a simple explaination? I've seen def__iter__(self): return self. I don't see how this works or the steps on how this works.
[ "As simply as I can put it:\n__iter__ defines a method on a class which will return an iterator (an object that successively yields the next item contained by your object).\nThe iterator object that __iter__() returns can be pretty much any object, as long as it defines a next() method.\nThe next method will be called by statements like for ... in ... to yield the next item, and next() should raise the StopIteration exception when there are no more items.\nWhat's great about this is it lets you define how your object is iterated, and __iter__ provides a common interface that every other python function knows how to work with.\n", "An iterator needs to define two methods: __iter__() and __next__() (next() in python2). Usually, the object itself defines the __next__() or next() method, so it just returns itself as the iterator. This creates an iterable that is also itself an iterator. These methods are used by for and in statements.\n\nPython 3 docs: docs.python.org/3/library/stdtypes.html#iterator-types\nPython 2 docs: docs.python.org/2/library/stdtypes.html#iterator-types\n\n", "The specs for def __iter__(self): are: it returns an iterator. So, if self is an iterator, return self is clearly appropriate.\n\"Being an iterator\" means \"having a __next__(self) method\" (in Python 3; in Python 2, the name of the method in question is unfortunately plain next instead, clearly a name design glitch for a special method).\nIn Python 2.6 and higher, the best way to implement an iterator is generally to use the appropriate abstract base class from the collections standard library module -- in Python 2.6, the code might be (remember to call the method __next__ instead in Python 3):\nimport collections\n\nclass infinite23s(collections.Iterator):\n def next(self): return 23\n\nan instance of this class will return infinitely many copies of 23 when iterated on (like itertools.repeat(23)) so the loop must be terminated otherwise. The point is that subclassing collections.Iterator adds the right __iter__ method on your behalf -- not a big deal here, but a good general principle (avoid repetitive, boilerplate code like iterators' standard one-line __iter__ -- in repetition, there's no added value and a lot of subtracted value!-).\n", "A class supporting the __iter__ method will return an iterator object instance: an object supporting the next() method. This object will be usuable in the statements \"for\" and \"in\".\n", "In Python, an iterator is any object that supports the iterator protocol. Part of that protocol is that the object must have an __iter__() method that returns the iterator object. I suppose this gives you some flexibility so that an object can pass on the iterator responsibilities to an internal class, or create some special object. In any case, the __iter__() method usually has only one line and that line is often simply return self\nThe other part of the protocol is the next() method, and this is where the real work is done. This method has to figure out or create or get the next thing, and return it. 
It may need to keep track of where it is so that the next time it is called, it really does return the next thing.\nOnce you have an object that returns the next thing in a sequence, you can collapse a for loop that looks like this:\nmyname = \"Fredericus\"\nx = []\nfor i in [1,2,3,4,5,6,7,8,9,10]:\n x.append(myname[i-1])\n i = i + 1 # get the next i\nprint x\n\ninto this:\nmyname = \"Fredericus\"\nx = [myname[i] for i in range(10)]\nprint x\n\nNotice that there is nowhere where we have code that gets the next value of i because range(10) is an object that FOLLOWS the iterator protocol, and the list comprehension is a construct that USES the iterator protocol.\nYou can also USE the iterator protocol directly. For instance, when writing scripts to process CSV files, I often write this:\nmydata = csv.reader(open('stuff.csv')\nmydata.next()\nfor row in mydata:\n # do something with the row.\n\nI am using the iterator directly by calling next() to skip the header row, then using it indirectly via the builtin in operator in the for statement.\n" ]
[ 28, 9, 6, 3, 3 ]
[]
[]
[ "iterator", "python" ]
stackoverflow_0001610371_iterator_python.txt
Q: How to impose a time limit on a whole script in Python The user is entering a Python script in a Java GUI Python editor and can run it from the editor. Is there a way to take the user's script and impose a time limit on the total script? I'm familiar with how to do this with functions / signal.alarm (but I'm on Windows and Unix Jython), but the only solution I have come up with is to put that script in a method in another script where I use the setTrace() function, and that removes the "feature" that the values of global variables in it persist, i.e. try: i+=1 except NameError: i=0 The value of 'i' increments by 1 with every execution. A: Use a threading.Timer to run a function in a separate thread after a specified delay (the max duration you want for your program), and in that function use thread.interrupt_main (note it's in module thread, not in module threading!) to raise a KeyboardInterrupt exception in the main thread. A more solid approach (in case the script gets wedged into some non-interruptible non-Python code, so that it would ignore keyboard interrupts) is to spawn a "watchdog process" to kill the errant script very forcefully if needed (do that as well as the above approach, and a little while later than the delay you use above, to give the script a chance to run its destructors, atexit functions, etc, if at all feasible). A: This is just a guess, but maybe wrap it with Threading or Multiprocessing? Have a timer thread that kills it when it times out.
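A sketch of the Timer/interrupt_main approach from the first answer, assuming thread.interrupt_main behaves the same on your Jython version and keeping one globals dictionary alive between runs so values like 'i' persist:

import thread
import threading

user_globals = {}   # reused across runs, so the script's globals persist

def run_with_limit(source, seconds):
    "Exec the user's script, interrupting it after the given limit."
    timer = threading.Timer(seconds, thread.interrupt_main)
    timer.start()
    try:
        exec source in user_globals
    except KeyboardInterrupt:
        print "script exceeded the %s second limit" % seconds
    finally:
        timer.cancel()   # don't let the timer fire during a later run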
How to impose a time limit on a whole script in Python
The user is entering a python script in a Java GUI python-editor and can run it from the editor. Is there a way to take the user's script and impose a time limit on the total script? I'm familiar with how to this with functions / signal.alarm(but I'm on windows & unix Jython) but the only solution I have come up with is to put that script in a method in another script where I use the setTrace() function but that removes the "feature" that the value of global variables in it persist. ie. try: i+=1 except NameError: i=0 The value of 'i' increments by 1 with every execution.
[ "Use a threading.Timer to run a function in a separate thread after a specified delay (the max duration you want for your program), and in that function use thread.interrupt_main (note it's in module thread, not in module threading!) to raise a KeyboardInterrupt exception in the main thread.\nA more solid approach (in case the script gets wedged into some non-interruptible non-Python code, so that it would ignore keyboard interrupts) is to spawn a \"watchdog process\" to kill the errant script very forcefully if needed (do that as well as the above approach, and a little while later than the delay you use above, to give the script a chance to run its destructors, atexit functions, etc, if at all feasible).\n", "This is just a guess, but maybe wrap it with Threading or Multiprocessing? Have a timer thread that kills it when it times out. \n" ]
[ 4, 0 ]
[]
[]
[ "limit", "python", "scripting", "time" ]
stackoverflow_0001609869_limit_python_scripting_time.txt
Q: when to use an alternative Python distribution? I have been programming in Python for a few years now and have always used CPython without thinking about it. The books and documentation I have read always refer to CPython too. When does it make sense to use an alternative distribution (PyPy, Stackless, etc)? Thanks! A: If you need native interfacing with the JVM, use Jython. When you need native interfacing with the .Net platform, or want to use Winforms, use IronPython. If you need the latest version, cross-OS support, or existing C-based modules that exist only for CPython, then use CPython. If you are thinking of proposing a functional PEP, going the PyPy route could be useful. If you need to do something that Python makes hard (e.g. microthreading), you could go the Stackless way, or use another language entirely (Haskell, etc.). The alternative implementations are always behind CPython; most now target 2.5. Both Jython and IronPython are good ways to sneak Python into MS-only or Java-only shops, generally through their use for unittests. A: I think it's clear when to use IronPython or Jython. If you want to use Python on CLR/JVM, either as the main language or a scripting language for your C#/Java application.
when to use an alternative Python distribution?
I have been programming in Python for a few years now and have always used CPython without thinking about it. The books and documentation I have read always refer to CPython too. When does it make sense to use an alternative distribution (PyPy, Stackless, etc)? Thanks!
[ "If you need native interfacing with the JVM, use Jython.\nWhen you need native interfacing with the .Net platform, or want to use Winforms, use IronPython.\nIf you need the latest version, cross-OS support, make use of existing C-based modules existing only for CPython, the use it.\nIf you are thinking into proposing a functional PEP, going the Pypy route could be useful.\nIf you need to do something that Python makes hard (i.e. microthreading), you could go the Stackless way, or any other language (Haskel, etc.).\n\nThe alternative implementations are always behind CPython, most now target 2.5.\nBoth Jython and IronPython are good ways to sneak in Python into MS-only or Java-only shops, generally through their use for unittests.\n", "I think it's clear when to use IronPython or Jython. If you want to use Python on CLR/JVM, either as the main language or a scripting language for your C#/Java application.\n" ]
[ 8, 0 ]
[]
[]
[ "cpython", "distribution", "pypy", "python" ]
stackoverflow_0001610822_cpython_distribution_pypy_python.txt
Q: CherryPy variables in HTML I have a CherryPy program that returns a page that has an image (plot) in a table. I would also like to have variables in the table that describe the plot. I am not using any templating; I am just trying to keep it really simple. In the example below I have the variable numberofapplicants where I want it, but it does not output the value of the variable in the table. I have not been able to find any examples of how to do this. Thanks for your help, Vincent return ''' <html> <body> <table width="400" border="1"> <tr> <td>numberofapplicants</td> </tr> <tr> <td width="400" height="400"><img src="img/atest.png" width="400" height="400" /></td> </tr> </table> </body> </html> ''' A: Assuming you're using Python 2.x, just use regular string formatting. return ''' <html> <body> <table width="400" border="1"> <tr> <td>%(numberofapplicants)s</td> </tr> <tr> <td width="400" height="400"><img src="img/atest.png" width="400" height="400" /></td> </tr> </table> </body> </html> ''' % {"numberofapplicants": numberofapplicants} A: Your variable 'numberofapplicants' just looks like the rest of the string to Python. If you want to put 'numberofapplicants' into that spot, use the % syntax, e.g. return '''big long string of stuff... and on... <td>%i</td> and done.''' % numberofapplicants
CherryPy variables in HTML
I have a CherryPy program that returns a page that has an image (plot) in a table. I would also like to have variables in the table that describe the plot. I am not using any templating; I am just trying to keep it really simple. In the example below I have the variable numberofapplicants where I want it, but it does not output the value of the variable in the table. I have not been able to find any examples of how to do this. Thanks for your help, Vincent return ''' <html> <body> <table width="400" border="1"> <tr> <td>numberofapplicants</td> </tr> <tr> <td width="400" height="400"><img src="img/atest.png" width="400" height="400" /></td> </tr> </table> </body> </html> '''
[ "Assuming you're using Python 2.x, just use regular string formatting.\nreturn '''\n <html>\n <body>\n <table width=\"400\" border=\"1\">\n <tr>\n <td>%(numberofapplicants)s</td>\n </tr>\n <tr>\n <td width=\"400\" height=\"400\"><img src=\"img/atest.png\" width=\"400\" height=\"400\" /></td>\n </tr>\n </table>\n </body>\n </html>\n ''' % {\"numberofapplicants\": numberofapplicants}\n\n", "Your variable 'numberofapplicants' just looks like the rest of the string to Python. If you want to put 'numberofapplicants' into that spot, use the % syntax, e.g.\n return '''big long string of stuff...\n and on...\n <td>%i</td>\n and done.''' % numberofapplicants\n\n" ]
[ 3, 2 ]
[]
[]
[ "cherrypy", "python" ]
stackoverflow_0001610995_cherrypy_python.txt
Q: AppEngine: Will the items stored in a StringListProperty always remain in the same order? After looking through the docs on App Engine and the StringListProperty or the ListProperty I can't seem to find whether there is a guarantee on the order of the items in the list. That is, I'd like to be certain that the list order stays the same despite putting and getting from the DataStore: instance = MyModel() instance.list_property = ['a', 'b', 'c'] instance.put() # Retrieve the model again instance2 = MyModel.get_by_id(instance.key().id()) assert instance2.list_property == ['a', 'b', 'c'] Does AppEngine make any guarantees about the order of items in a List or StringList? A: Yep, per the docs, Order is preserved, so when entities are returned by queries and get(), list properties will have values in the same order as when they were stored. It's the last sentence of the first paragraph at the URL I gave. A: I can't find it explicitly said in the documentation, but according to a Google I/O talk about the DataStore, StringListProperty will be sorted the same way you put it there.
AppEngine: Will the items stored in a StringListProperty always remain in the same order?
After looking through the docs on App Engine and the StringListProperty or the ListProperty I can't seem to find whether there is a guarantee on the order of the items in the list. That is, I'd like to be certain that the list order stays the same despite putting and getting from the DataStore: instance = MyModel() instance.list_property = ['a', 'b', 'c'] instance.put() # Retrieve the model again instance2 = MyModel.get_by_id(instance.key().id()) assert instance2.list_property == ['a', 'b', 'c'] Does AppEngine make any guarantees about the order of items in a List or StringList?
[ "Yep, per the docs, \n\nOrder is preserved, so when entities\n are returned by queries and get(),\n list properties will have values in\n the same order as when they were\n stored.\n\nIt's the last sentence of the first paragraph at the URL I gave.\n", "I can't find it explicitly said in the documentation, but according to a Google I/O talk about the DataStore, StringListProperty will be sorted the same way you put it there.\n" ]
[ 3, 2 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001610687_google_app_engine_python.txt
Q: Determine if a script is running in pythonw? I would like to redirect stderr and stdout to files when run inside of pythonw. How can I determine whether a script is running in pythonw or in python? A: sys.executable -- "A string giving the name of the executable binary for the Python interpreter, on systems where this makes sense."
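A short sketch of that check in practice (the log-file location here is just an illustrative choice):

import os
import sys

# pythonw.exe runs without a console, so anything written to stdout/stderr
# is lost unless it is redirected somewhere.
if os.path.basename(sys.executable).lower().startswith('pythonw'):
    logfile = open(os.path.join(os.path.dirname(sys.argv[0]) or '.', 'script.log'), 'w')
    sys.stdout = sys.stderr = logfile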
Determine if a script is running in pythonw?
I would like to redirect stderr and stdout to files when run inside of pythonw. How can I determine whether a script is running in pythonw or in python?
[ "sys.executable -- \"A string giving the name of the executable binary for the Python interpreter, on systems where this makes sense.\"\n" ]
[ 5 ]
[]
[]
[ "python", "pythonw" ]
stackoverflow_0001611543_python_pythonw.txt
Q: Why does Psyco use a lot of memory? Psyco is a specialising compiler for Python. The documentation states Psyco can and will use large amounts of memory. What are the main reasons for this memory usage? Is substantial memory overhead a feature of JIT compilers in general? Edit: Thanks for the answers so far. There are three likely contenders: writing multiple specialised blocks, each of which requires memory; overhead due to compiling source on the fly; and overhead due to capturing enough data to do dynamic profiling. The question is: which one is the dominant factor in memory usage? I have my own opinion. But I'm adding a bounty, because I'd like to accept the answer that's actually correct! If anyone can demonstrate or prove where the majority of the memory is used, I'll accept it. Otherwise whoever the community votes for will be auto-accepted at the end of the bounty. A: From the psyco website: "The difference with the traditional approach to JIT compilers is that Psyco writes several versions of the same blocks (a block is a bit of a function), which are optimized by being specialized to some kinds of variables (a "kind" can mean a type, but it is more general)" A: "Psyco uses the actual run-time data that your program manipulates to write potentially several versions of the machine code, each differently specialized for different kinds of data." http://psyco.sourceforge.net/introduction.html Many JIT compilers work with statically typed languages, so they know what the types are and can create machine code for just the known types. The better ones do dynamic profiling if the types are polymorphic and optimise the more commonly encountered paths; this is also commonly done with languages featuring dynamic types†. Psyco appears to hedge its bets in order to avoid doing a full program analysis to decide what the types could be, or profiling to find which types are in use. † I've never gone deep enough into Python to work out whether it does or doesn't have dynamic types (types whose structure can be changed at runtime after objects have been created with that type), or whether the common implementations only check types at runtime; most of the articles just rave about dynamic typing without actually defining it in the context of Python. A: The memory overhead of Psyco is currently large. It has been reduced a bit over time, but it is still an overhead. This overhead is proportional to the amount of Python code that Psyco rewrites; thus if your application has a few algorithmic "core" functions, these are the ones you will want Psyco to accelerate --- not the whole program. So I would think the large memory requirements are due to the fact that it's loading source into memory and then compiling it as it goes. The more source you try to compile, the more it's going to need. I'd guess that if it's trying to optimise on top of that, it'll look at multiple possible solutions to try and identify the best case. A: Psyco's memory usage definitely comes from compiled assembler blocks. Psyco sometimes suffers from overspecialization of functions, which means there are multiple versions of assembler blocks. Also, and this is very important, psyco never frees assembler blocks once they are allocated, even if the code associated with them is dead. If you run your program under Linux you can look at /proc/xxx/smaps to see a growing block of anonymous memory, which is in a different region than the heap. That's the anonymously mmap'ed part used for writing down assembler, which of course disappears when running without psyco.
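Since the overhead is proportional to how much code Psyco rewrites, the practical mitigation the third answer points at is to bind only the hot functions instead of calling psyco.full(); a small sketch (psyco.bind and psyco.profile are part of the real API, but the function here is just a stand-in):

import psyco

def hot_inner_loop(data):
    total = 0
    for x in data:
        total += x * x
    return total

# psyco.full() specializes every function it sees and keeps all of the
# generated machine code around; binding only the hot spot keeps the
# memory cost roughly proportional to the code you actually accelerate.
psyco.bind(hot_inner_loop)

# Alternative: psyco.profile() uses run-time profiling to pick candidates.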
Why does Psyco use a lot of memory?
Psyco is a specialising compiler for Python. The documentation states Psyco can and will use large amounts of memory. What are the main reasons for this memory usage? Is substantial memory overhead a feature of JIT compilers in general? Edit: Thanks for the answers so far. There are three likely contenders: writing multiple specialised blocks, each of which requires memory; overhead due to compiling source on the fly; and overhead due to capturing enough data to do dynamic profiling. The question is: which one is the dominant factor in memory usage? I have my own opinion. But I'm adding a bounty, because I'd like to accept the answer that's actually correct! If anyone can demonstrate or prove where the majority of the memory is used, I'll accept it. Otherwise whoever the community votes for will be auto-accepted at the end of the bounty.
[ "From psyco website \"The difference with the traditional approach to JIT compilers is that Psyco writes several version of the same blocks (a block is a bit of a function), which are optimized by being specialized to some kinds of variables (a \"kind\" can mean a type, but it is more general)\"\n", "\n\"Psyco uses the actual run-time data that your program manipulates to write potentially several versions of the machine code, each differently specialized for different kinds of data.\" http://psyco.sourceforge.net/introduction.html\n\nMany JIT compilers work with statically typed languages, so they know what the types are so can create machine code for just the known types. The better ones do dynamic profiling if the types are polymorphic and optimise the more commonly encountered paths; this is also commonly done with languages featuring dynamic types†. Psyco appears to hedge its bets in order to avoid doing a full program analysis to decide what the types could be, or profiling to find what the types in use are.\n\n† I've never gone deep enough into Python to work out whether it does or doesn't have dynamic types or not ( types whose structure can be changed at runtime after objects have been created with that type ), or just the common implementations only check types at runtime; most of the articles just rave about dynamic typing without actually defining it in the context of Python.\n", "\nThe memory overhead of Psyco is currently large. I has been reduced a bit over time, but it is still an overhead. This overhead is proportional to the amount of Python code that Psyco rewrites; thus if your application has a few algorithmic \"core\" functions, these are the ones you will want Psyco to accelerate --- not the whole program.\n\nSo I would think the large memory requirements are due to the fact that it's loading source into memory and then compiling it as it goes. The more source you try and compile the more it's going to need. I'd guess that if it's trying to optomise it on top of that, it'll look at multiple possible solutions to try and identify the best case. \n", "Definitely psyco memory usage comes from compiled assembler blocks. Psyco suffers sometimes from overspecialization of functions, which means there are multiple versions of assembler\nblocks. Also, which is also very important, psyco never frees once allocated assembler blocks\neven if the code assosciated with it is dead.\nIf you run your program under linux you can look at /proc/xxx/smaps to see a growing block of anonymous memory, which is in different region than heap. That's anonymously mmap'ed part for writing down assembler, which of course disappears when running without psyco.\n" ]
[ 10, 5, 2, 2 ]
[]
[]
[ "compiler_construction", "jit", "memory", "psyco", "python" ]
stackoverflow_0001438220_compiler_construction_jit_memory_psyco_python.txt
Q: Automatically restart program when an error occurs The program is like this: HEADER CODE urllib2.initialization() try: while True: urllib2.read(somebytes) urllib2.read(somebytes) urllib2.read(somebytes) ... except Exception, e: print e FOOTER CODE My question is: when an error occurs (timeout, connection reset by peer, etc.), how can I restart from urllib2.initialization() instead of exiting the main program and restarting from HEADER CODE again? A: You could wrap your code in a "while not done" loop: #!/usr/bin/env python HEADER CODE done=False while not done: try: urllib2.initialization() while True: # I assume you have code to break out of this loop urllib2.read(somebytes) urllib2.read(somebytes) urllib2.read(somebytes) ... except Exception, e: # Try to be more specific about the exceptions # you wish to catch here print e else: # This block is only executed if the try-block executes without # raising an exception done=True FOOTER CODE A: How about just wrap it in another loop? HEADER CODE restart = True while restart == True: urllib2.initialization() try: while True: restart = False urllib2.read(somebytes) urllib2.read(somebytes) urllib2.read(somebytes) ... except Exception, e: restart = True print e FOOTER CODE A: Simple way with attempts restrictions HEADER CODE attempts = 5 for attempt in xrange(attempts): urllib2.initialization() try: while True: urllib2.read(somebytes) urllib2.read(somebytes) urllib2.read(somebytes) ... except Exception, e: print e else: break FOOTER CODE
Automatically restart program when an error occurs
The program is like this: HEADER CODE urllib2.initialization() try: while True: urllib2.read(somebytes) urllib2.read(somebytes) urllib2.read(somebytes) ... except Exception, e: print e FOOTER CODE My question is: when an error occurs (timeout, connection reset by peer, etc.), how can I restart from urllib2.initialization() instead of exiting the main program and restarting from HEADER CODE again?
[ "You could wrap your code in a \"while not done\" loop:\n#!/usr/bin/env python\n\nHEADER CODE\ndone=False\nwhile not done:\n try:\n urllib2.initialization()\n while True:\n # I assume you have code to break out of this loop\n urllib2.read(somebytes)\n urllib2.read(somebytes)\n urllib2.read(somebytes)\n ...\n except Exception, e: # Try to be more specific about the execeptions \n # you wish to catch here\n print e\n else:\n # This block is only executed if the try-block executes without\n # raising an exception\n done=True\nFOOTER CODE\n\n", "How about just wrap it in another loop?\nHEADER CODE\nrestart = True\nwhile restart == True:\n urllib2.initialization()\n try:\n while True:\n restart = False\n urllib2.read(somebytes)\n urllib2.read(somebytes)\n urllib2.read(somebytes)\n ...\n except Exception, e:\n restart = True\n print e\nFOOTER CODE\n\n", "Simple way with attempts restrictions\nHEADER CODE\nattempts = 5\nfor attempt in xrange(attempts):\n urllib2.initialization()\n try:\n while True:\n urllib2.read(somebytes)\n urllib2.read(somebytes)\n urllib2.read(somebytes)\n ...\n except Exception, e:\n print e\n else:\n break\nFOOTER CODE\n\n" ]
[ 5, 4, 2 ]
[]
[]
[ "python", "restart" ]
stackoverflow_0001611256_python_restart.txt
Q: How can I make HTML safe for a web browser with Python? How can I make HTML from email safe to display in a web browser with Python? Any external references shouldn't be followed when displayed. In other words, all displayed content should come from the email and nothing from the internet. Other than spam, emails should be displayed as closely as possible to what the writer intended. I would like to avoid coding this myself. Solutions requiring the latest browser (Firefox) version are also acceptable. A: html5lib contains an HTML+CSS sanitizer. It allows too much currently, but it shouldn't be too hard to modify it to match the use case. Found it here. A: I'm not quite clear on what exactly you mean by "safe". It's a pretty big topic... but, for what it's worth: In my opinion, the stripping parser from the ActiveState Cookbook is one of the easiest solutions. You can pretty much copy/paste the class and start using it. Have a look at the comments as well. The last one states that it doesn't work anymore, but I also have this running in an application somewhere and it works fine. From work, I don't have access to that box, so I'll have to look it up over the weekend. A: Use the HTMLParser module, or install BeautifulSoup, and use those to parse the HTML and disable or remove the tags. This will leave whatever link text was there, but it will not be highlighted and it will not be clickable, since you are displaying it with a web browser component. You could make it clearer what was done by replacing the <A></A> with a <SPAN></SPAN> and changing the text decoration to show where the link used to be. Maybe a different shade of blue than normal and a dashed underscore to indicate brokenness. That way you are a little closer to displaying it as intended without actually misleading people into clicking on something that is not clickable. You could even add a hover in Javascript or pure CSS that pops up a tooltip explaining that links have been disabled for security reasons. Similar things could be done with <IMG></IMG> tags including replacing them with a blank rectangle to ensure that the page layout is close to the original. I've done stuff like this with Beautiful Soup, but HTMLParser is included with Python. In older Python distribs, there was an htmllib which is now deprecated. Since the HTML in an email message might not be fully correct, use Beautiful Soup 3.0.7a which is better at making sense of broken HTML.
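As a concrete (if simplified) illustration of the tag-neutering idea from the last answer, assuming BeautifulSoup 3.x ('blank.png' is a hypothetical local placeholder image):

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3.x import path

def neuter_html(html):
    soup = BeautifulSoup(html)
    for tag in soup.findAll(['script', 'iframe', 'object', 'embed']):
        tag.extract()               # drop active content outright
    for a in soup.findAll('a'):
        a.name = 'span'             # keep the link text, kill the link
        a.attrs = []
    for img in soup.findAll('img'):
        img['src'] = 'blank.png'    # never fetch anything remote
    return unicode(soup)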
How can I make HTML safe for a web browser with Python?
How can I make HTML from email safe to display in a web browser with Python? Any external references shouldn't be followed when displayed. In other words, all displayed content should come from the email and nothing from the internet. Other than spam, emails should be displayed as closely as possible to what the writer intended. I would like to avoid coding this myself. Solutions requiring the latest browser (Firefox) version are also acceptable.
[ "html5lib contains an HTML+CSS sanitizer. It allows too much currently, but it shouldn't be too hard to modify it to match the use case.\nFound it from here.\n", "I'm not quite clear with what exactly you mean with \"safe\". It's a pretty big topic... but, for what it's worth:\nIn my opinion, the stripping parser from the ActiveState Cookbook is one of the easiest solutions. You can pretty much copy/paste the class and start using it.\nHave a look at the comments as well. The last one states that it doesn't work anymore, but I also have this running in an application somewhere and it works fine. From work, I don't have access to that box, so I'll have to look it up over the weekend.\n", "Use the HTMLparser module, or install BeautifulSoup, and use those to parse the HTML and disable or remove the tags. This will leave whatever link text was there, but it will not be highlighted and it will not be clickable, since you are displaying it with a web browser component.\nYou could make it clearer what was done by replacing the <A></A> with a <SPAN></SPAN> and changing the text decoration to show where the link used to be. Maybe a different shade of blue than normal and a dashed underscore to indicate brokenness. That way you are a little closer to displaying it as intended without actually misleading people into clicking on something that is not clickable. You could even add a hover in Javascript or pure CSS that pops up a tooltip explaining that links have been disabled for security reasons.\nSimilar things could be done with <IMG></IMG> tags including replacing them with a blank rectangle to ensure that the page layout is close to the original. \nI've done stuff like this with Beautiful Soup, but HTMLparser is included with Python. In older Python distribs, there was an htmllib which is now deprecated. Since the HTML in an email message might not be fully correct, use Beautiful Soup 3.0.7a which is better at making sense of broken HTML.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "browser", "email", "html", "html_sanitizing", "python" ]
stackoverflow_0001606201_browser_email_html_html_sanitizing_python.txt
Q: Getting a list of child entities in App Engine using get_by_key_name (Python) My adventures with entity groups continue after a slightly embarrassing beginning (see Under some circumstances an App Engine get_by_key_name call using an existing key_name returns None). I now see that I can't do a normal get_by_key_name call over a list of entities for child entities that have more than one parent entity. As the Model docs say, Multiple entities requested by one (get_by_key_name) call must all have the same parent. I've gotten into the habit of doing something like the following: # Model just has the basic properties entities = Model.get_by_key_name(key_names) # ContentModel has all the text and blob properties for Model content_entities = ContentModel.get_by_key_name(content_key_names) for entity, content_entity in zip(entities, content_entities): # do some stuff Now that ContentModel entities are children of Model entities, this won't work because of the single-parent requirement. An easy way to enable the above scenario with entity groups is to be able to pass a list of parents to a get_by_key_name call, but I'm guessing that there's a good reason why this isn't currently possible. I'm wondering if this is a hard rule (as in there is absolutely no way such a call could ever work) or if perhaps the db module could be modified so that this type of call would work, even if it meant a greater CPU expense. I'd also really like to see how others accomplish this sort of task. I can think of a bunch of ways of handling it, like using GQL queries, but none I can think of approach the performance of a get_by_key_name call. A: Just create a key list and do a get on it. entities = Model.get_by_key_name(key_names) content_keys = [db.Key.from_path('Model', name, 'ContentModel', name) for name in key_names] content_entities = ContentModel.get(content_keys) Note that I assume the key_name for each ContentModel entity is the same as its parent Model. (For a 1:1 relationship, it makes sense to reuse the key_name.) A: I'm embarrassed to say that the restriction ('must be in the same entity group') actually no longer applies in this case. Please do feel free to file a documentation bug! In any case, get_by_key_name is only syntactic sugar for get, as Bill Katz illustrates. You can go a step further, even, and use db.get on a list of keys to get everything in one go - db.get doesn't care about model type.
Getting a list of child entities in App Engine using get_by_key_name (Python)
My adventures with entity groups continue after a slightly embarrassing beginning (see Under some circumstances an App Engine get_by_key_name call using an existing key_name returns None). I now see that I can't do a normal get_by_key_name call over a list of entities for child entities that have more than one parent entity. As the Model docs say, Multiple entities requested by one (get_by_key_name) call must all have the same parent. I've gotten into the habit of doing something like the following: # Model just has the basic properties entities = Model.get_by_key_name(key_names) # ContentModel has all the text and blob properties for Model content_entities = ContentModel.get_by_key_name(content_key_names) for entity, content_entity in zip(entities, content_entities): # do some stuff Now that ContentModel entities are children of Model entities, this won't work because of the single-parent requirement. An easy way to enable the above scenario with entity groups is to be able to pass a list of parents to a get_by_key_name call, but I'm guessing that there's a good reason why this isn't currently possible. I'm wondering if this is a hard rule (as in there is absolutely no way such a call could ever work) or if perhaps the db module could be modified so that this type of call would work, even if it meant a greater CPU expense. I'd also really like to see how others accomplish this sort of task. I can think of a bunch of ways of handling it, like using GQL queries, but none I can think of approach the performance of a get_by_key_name call.
[ "Just create a key list and do a get on it.\nentities = Model.get_by_key_name(key_names)\ncontent_keys = [db.Key.from_path('Model', name, 'ContentModel', name) \n for name in key_names]\ncontent_entities = ContentModel.get(content_keys)\n\nNote that I assume the key_name for each ContentModel entity is the same as its parent Model. (For a 1:1 relationship, it makes sense to reuse the key_name.)\n", "I'm embarrassed to say that the restriction ('must be in the same entity group') actually no longer applies in this case. Please do feel free to file a documentation bug!\nIn any case, get_by_key_name is only syntactic sugar for get, as Bill Katz illustrates. You can go a step further, even, and use db.get on a list of keys to get everything in one go - db.get doesn't care about model type.\n" ]
[ 4, 1 ]
[]
[]
[ "google_app_engine", "model", "performance", "python" ]
stackoverflow_0001611148_google_app_engine_model_performance_python.txt
Q: PAMIE and python-win32: PAMIE 3 not working I am currently writing a web-scraping script and chose PAMIE for it. I am new to Python and programming, so I don't know whether PAMIE is really the right tool for scripting with win32-python. While writing the script I ran into two problems. First, I want my script to use Beautiful Soup together with PAMIE (working with the native Internet Explorer interface directly would also be fine), but I could not get that to work. I am using the PAMIE 3 version; even after switching to the PAMIE 2b version I couldn't make it work. Second, I think I sometimes need the plain IE interface. Is it possible to switch from PAMIE's IE interface to the plain IE interface (InternetExplorer.Application)? I don't want to open a new IE window to work with the plain interface; I want to keep working with the current PAMIE IE window. Sorry for my bad English. Paul A: PAMIE might be getting a bit dated. You could take a look at Selenium, which will also automate a web browser, but is more current. http://jimmyg.org/blog/2009/getting-started-with-selenium-and-python.html
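If you go the Selenium route the answer suggests, a rough sketch using the Selenium RC Python client of that era might look like this (it assumes the Selenium server is already running on port 4444, and the URL and locators are purely illustrative):

from selenium import selenium  # Selenium RC client, not the later WebDriver API

sel = selenium("localhost", 4444, "*iexplore", "http://example.com/")
sel.start()
sel.open("/login")
sel.type("id=username", "paul")    # hypothetical form field locators
sel.click("id=submit")
sel.wait_for_page_to_load("30000")
html = sel.get_html_source()       # hand this off to Beautiful Soup for scraping
sel.stop()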
PAMIE and python-win32: PAMIE 3 not working
I am currently writing a web-scraping script and chose PAMIE for it. I am new to Python and programming, so I don't know whether PAMIE is really the right tool for scripting with win32-python. While writing the script I ran into two problems. First, I want my script to use Beautiful Soup together with PAMIE (working with the native Internet Explorer interface directly would also be fine), but I could not get that to work. I am using the PAMIE 3 version; even after switching to the PAMIE 2b version I couldn't make it work. Second, I think I sometimes need the plain IE interface. Is it possible to switch from PAMIE's IE interface to the plain IE interface (InternetExplorer.Application)? I don't want to open a new IE window to work with the plain interface; I want to keep working with the current PAMIE IE window. Sorry for my bad English. Paul
[ "PAMIE might be getting a bit dated. You could take a look at Selenium which will also automate a web browser, but is more current. \nhttp://jimmyg.org/blog/2009/getting-started-with-selenium-and-python.html\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "pamie", "python", "winapi" ]
stackoverflow_0001611852_beautifulsoup_pamie_python_winapi.txt
Q: Performance of Python worth the cost? I'm looking at implementing a fuzzy logic controller based on either the PyFuzzy (Python) or FFLL (C++) libraries. I'd prefer to work with Python but am unsure whether the performance will be acceptable in the embedded environment it will run in (either an ARM or an embedded x86 processor, both with ~64 MB of RAM). The main concern is that response times are as fast as possible (an update rate of 5 Hz or more would be ideal; at least 2 Hz is required). The system would read from multiple (probably 5) sensors on an RS232 port and provide 2 or 3 outputs based on the results of the fuzzy evaluation. Should I be concerned that Python will be too slow for this task? A: In general, you shouldn't obsess over performance until you've actually seen it become a problem. Since we don't know the details of your app, we can't say how it'd perform if implemented in Python. And since you haven't implemented it yet, neither can you. Implement the version you're most comfortable with, and can implement fastest, first. Then benchmark it. And if it is too slow, you have three options which should be done in order: First, optimize your Python code If that's not enough, write the most performance-critical functions in C/C++, and call that from your Python code And finally, if you really need top performance, you might have to rewrite the whole thing in C++. But then at least you'll have a working prototype in Python, and you'll have a much clearer idea of how it should be implemented. You'll know what pitfalls to avoid, and you'll have an already correct implementation to test against and compare results to. A: Python is very slow at handling large amounts of non-string data. For some operations, you may see that it is 1000 times slower than C/C++, so yes, you should investigate into this and do necessary benchmarks before you make time-critical algorithms in Python. However, you can extend python with modules in C/C++ code, so that time-critical things are fast, while still being able to use python for the main code. A: Make it work, then make it work fast. A: If most of your runtime is spent in C libraries, the language you use to call these libraries isn't important. What language are your time-eating libraries written in ? A: From your description, speed should not be much of a concern (and you can use C, cython, whatever you want to make it faster), but memory would be. For environments with 64 Mb max (where the OS and all should fit as well, right ?), I think there is a good chance that python may not be the right tool for target deployment. If you have non trivial logic to handle, I would still prototype in python, though. A: I never really measured the performance of pyfuzzy's examples, but as the new version 0.1.0 can read FCL files as FFLL does. Just describe your fuzzy system in this format, write some wrappers, and check the performance of both variants. For reading FCL with pyfuzzy you need the antlr python runtime, but after reading you should be able to pickle the read object, so you don't need the antlr overhead on the target.
Performance of Python worth the cost?
I'm looking at implementing a fuzzy logic controller based on either the PyFuzzy (Python) or FFLL (C++) libraries. I'd prefer to work with Python but am unsure whether the performance will be acceptable in the embedded environment it will run in (either an ARM or an embedded x86 processor, both with ~64 MB of RAM). The main concern is that response times are as fast as possible (an update rate of 5 Hz or more would be ideal; at least 2 Hz is required). The system would read from multiple (probably 5) sensors on an RS232 port and provide 2 or 3 outputs based on the results of the fuzzy evaluation. Should I be concerned that Python will be too slow for this task?
[ "In general, you shouldn't obsess over performance until you've actually seen it become a problem. Since we don't know the details of your app, we can't say how it'd perform if implemented in Python. And since you haven't implemented it yet, neither can you.\nImplement the version you're most comfortable with, and can implement fastest, first. Then benchmark it. And if it is too slow, you have three options which should be done in order:\n\nFirst, optimize your Python code\nIf that's not enough, write the most performance-critical functions in C/C++, and call that from your Python code\nAnd finally, if you really need top performance, you might have to rewrite the whole thing in C++. But then at least you'll have a working prototype in Python, and you'll have a much clearer idea of how it should be implemented. You'll know what pitfalls to avoid, and you'll have an already correct implementation to test against and compare results to.\n\n", "Python is very slow at handling large amounts of non-string data. For some operations, you may see that it is 1000 times slower than C/C++, so yes, you should investigate into this and do necessary benchmarks before you make time-critical algorithms in Python.\nHowever, you can extend python with modules in C/C++ code, so that time-critical things are fast, while still being able to use python for the main code.\n", "Make it work, then make it work fast.\n", "If most of your runtime is spent in C libraries, the language you use to call these libraries isn't important. What language are your time-eating libraries written in ?\n", "From your description, speed should not be much of a concern (and you can use C, cython, whatever you want to make it faster), but memory would be. For environments with 64 Mb max (where the OS and all should fit as well, right ?), I think there is a good chance that python may not be the right tool for target deployment.\nIf you have non trivial logic to handle, I would still prototype in python, though.\n", "I never really measured the performance of pyfuzzy's examples, but as the new version 0.1.0 can read FCL files as FFLL does. Just describe your fuzzy system in this format, write some wrappers, and check the performance of both variants.\nFor reading FCL with pyfuzzy you need the antlr python runtime, but after reading you should be able to pickle the read object, so you don't need the antlr overhead on the target.\n" ]
[ 35, 12, 5, 1, 0, 0 ]
[]
[]
[ "c", "embedded", "fuzzy_logic", "python" ]
stackoverflow_0001498155_c_embedded_fuzzy_logic_python.txt
Q: What profiling tools exist for Python on Linux beyond the ones included in the standard library? I've been using Python's built-in cProfile tool with some pretty good success. But I'd like to be able to access more information such as how long I'm waiting for I/O (and what kind of I/O I'm waiting on) or how many cache misses I have. Are there any Linux tools to help with this beyond your basic time command? A: I'm not sure if python will provide the low level information you are looking for. You might want to look at oprofile and latencytop though. A: If you want to know exactly what you are waiting for, and approximately what percentage of the time, this will tell you. It won't tell you other things though, like cache misses or memory leaks.
What profiling tools exist for Python on Linux beyond the ones included in the standard library?
I've been using Python's built-in cProfile tool with some pretty good success. But I'd like to be able to access more information such as how long I'm waiting for I/O (and what kind of I/O I'm waiting on) or how many cache misses I have. Are there any Linux tools to help with this beyond your basic time command?
[ "I'm not sure if python will provide the low level information you are looking for. You might want to look at oprofile and latencytop though.\n", "If you want to know exactly what you are waiting for, and approximately what percentage of the time, this will tell you. It won't tell you other things though, like cache misses or memory leaks.\n" ]
[ 2, 1 ]
[]
[]
[ "linux", "profiling", "python" ]
stackoverflow_0001607641_linux_profiling_python.txt
Q: Rolling my own __repr__ I want to write my own __repr__ for some class that I define. I want it to be similar to the default <__main__.O object at 0x00D229D0>, except have a few other details in there. How do I reproduce that <__main__.O object at 0x00D229D0> thing? A: See http://docs.python.org/reference/datamodel.html#object.repr #!/usr/bin/env python class O(object): def __repr__(self): return '<%s.%s object at 0x%x>'%(self.__module__,self.__class__.__name__,id(self)) o=O() print(repr(o)) # <__main__.O object at 0xb7e7d0cc> A: You can write your own repr like this: class Test (object): def __repr__(self): t = type(self) return "<Instance of %s.%s at %x>" % (t.__module__, t.__name__, id(self))
Rolling my own __repr__
I want to write my own __repr__ for some class that I define. I want it to be similar to the default <__main__.O object at 0x00D229D0>, except have a few other details in there. How do I reproduce that <__main__.O object at 0x00D229D0> thing?
[ "See http://docs.python.org/reference/datamodel.html#object.repr\n#!/usr/bin/env python\nclass O(object):\n def __repr__(self):\n return '<%s.%s object at 0x%x>'%(self.__module__,self.__class__.__name__,id(self))\no=O()\nprint(repr(o))\n\n# <__main__.O object at 0xb7e7d0cc>\n\n", "You can write your own repr like this:\nclass Test (object):\n def __repr__(self):\n t = type(self)\n return \"<Instance of %s.%s at %x>\" % (t.__module__, t.__name__, id(self))\n\n" ]
[ 5, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001613037_python.txt