content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: How to find the url using the referer and the href in Python? Suppose I have window_location = 'http://stackoverflow.com/questions/ask' href = '/users/48465/jader-dias' I want to obtain link = 'http://stackoverflow.com/users/48465/jader-dias' How do I do it in Python? It has to work just as it does in the browser A: >>> import urlparse >>> urlparse.urljoin('http://stackoverflow.com/questions/ask', ... '/users/48465/jader-dias') 'http://stackoverflow.com/users/48465/jader-dias' From the doc page of urlparse.urljoin: urlparse.urljoin(base, url[, allow_fragments]) Construct a full (“absolute”) URL by combining a “base URL” (base) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. If url is an absolute URL (that is, starting with // or scheme://), the url‘s host name and/or scheme will be present in the result.
How to find the url using the referer and the href in Python?
Suppose I have window_location = 'http://stackoverflow.com/questions/ask' href = '/users/48465/jader-dias' I want to obtain link = 'http://stackoverflow.com/users/48465/jader-dias' How do I do it in Python? It has to work just as it does in the browser
[ ">>> import urlparse\n>>> urlparse.urljoin('http://stackoverflow.com/questions/ask',\n... '/users/48465/jader-dias')\n'http://stackoverflow.com/users/48465/jader-dias'\n\nFrom the doc page of urlparse.urljoin:\n\nurlparse.urljoin(base, url[,\n allow_fragments])\nConstruct a full (“absolute”) URL by combining a “base URL” (base) with\n another URL (url). Informally, this\n uses components of the base URL, in\n particular the addressing scheme, the\n network location and (part of) the\n path, to provide missing components in\n the relative URL.\nIf url is an absolute URL (that is,\n starting with // or scheme://), the\n url‘s host name and/or scheme will be\n present in the result.\n\n" ]
[ 6 ]
[]
[]
[ "href", "python", "regex", "string", "url" ]
stackoverflow_0001250371_href_python_regex_string_url.txt
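A note on the accepted answer above: it targets Python 2, where the function lives in the urlparse module. On Python 3 the same function is urllib.parse.urljoin; a minimal sketch, assuming Python 3:

from urllib.parse import urljoin

# The referring page acts as the base URL; the href may be absolute or relative.
base = 'http://stackoverflow.com/questions/ask'
href = '/users/48465/jader-dias'

print(urljoin(base, href))    # http://stackoverflow.com/users/48465/jader-dias

# A path-relative href resolves against the base path, just as a browser would:
print(urljoin(base, 'edit'))  # http://stackoverflow.com/questions/edit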
Q: @Rails users: have you tried web2py? Pros? Cons? web2py is a Python framework but shares the "convention over configuration" design that Ruby on Rails has. On the plus side it packages a lot more functionality with its standard distribution and we claim it is faster and easier to use. Has any Rails user tried it? What is your impression? No rants please. Just technical comments. A: c'mon guys... your only argument is "Technical differences are rather irrelevant." and "it don't matter what web framework you use"? I disagree. The size of the user base has more to do with marketing and how long a framework has been around. By that argument ASP and PHP are better than Rails. Has anybody here used both Rails and web2py? web2py runs on webfaction and any hosting provider that supports mod_proxy or mod_wsgi or mod_fcgi, and runs on Google App Engine (rails does not). There is also a dedicated web2py hosting provider (star-nix.com). A: I found web2py much easier to learn... there are fewer scripts to run and abstractions. On the other hand, web2py's database layer isn't a real ORM... it's almost like writing raw SQL. Simple things end up taking many lines of code, just like SQL. A: I would say the biggest "con" of using web2py over Rails is that Rails has a lot of Rails-specific hosting services around, and a huge community based around it (there are Rails plugins and tools for.. everything). The same cannot be said for web2py. It depends what you want to do with it - if it's something to write your personal site with, and you already have a server to host it on, use whatever you prefer. If it's something to distribute for others to run, Rails has more options for hosting, and a bigger community, so it may be a better choice. Technical differences are rather irrelevant. Every framework can basically do the same (generate web-pages). What is important is community, ease of use, useful feature-sets, ability to host it and so on - and those are all really subjective. I still use PHP quite often, not because "it's better", but because I can host it on a huge majority of web-hosts. I also use Rails because it has a good, and very active community. The actual technicalities of the framework weren't ever a consideration, really.. I could probably put together a list of why web2py is "better"/"worse" than Rails - Rails may be 0.04sec/request slower at generating templates containing loops, or web2py may have a good DB model generator, or some other technical reason - but those may not be relevant to you at all
@Rails users: have you tried web2py? Pros? Cons?
web2py is a Python framework but shares the "convention over configuration" design that Ruby on Rails has. On the plus side it packages a lot more functionality with its standard distribution and we claim it is faster and easier to use. Has any Rails user tried it? What is your impression? No rants please. Just technical comments.
[ "c'mon guys... your only argument is \"Technical differences are rather irrelevant.\" and \"it don't matter what web framework you use\"? I disagree. The size of the users base has more to do with marketing and how long a framework has been around. By that argument ASP and PHP are better than Rails.\nHas anybody here used both Rails and web2py?\nweb2py runs on webfaction and any hosting provider that supports mod_proxy or mod_wsgi or mod_fcgi, and runs on Google App Engine (rails does not). There is also a dedicated web2py hosting provider (star-nix.com).\n", "I found web2py much easier to learn... there are fewer scripts to run and abstractions. On the other hand, web2py's database layer isn't a real ORM... it's almost like writing raw SQL. Simple things end up taking many lines of code, just like SQL.\n", "I would say the biggest \"con\" of using webpy over Rails is that there are not a lot of Rails-specific hosting services around, and the huge community based around it (there are Rails plugins and tools for.. everything). The same cannot be said for web2py.\nIt depends what you want to do with it - if it's something to write your personal site with, and you already have a server to host it on, use whatever you prefer. If it's something to distribute for others to run, Rails has more options for hosting, and a bigger community, so it may be a better choice.\nTechnical differences are rather irrelevant. Every framework can basically do the same (generate web-pages). What is important is community, ease of use, useful feature-sets, ability to host it and so on - and those are all really subjective.\nI still use PHP quite often, not because \"it's better\", but because I can host it on a huge majority of web-hosts. I also use Rails because as it has a good, and very active community. The actually technicalities of the framework wasn't ever a consideration, really..\nI could probably put together a list of why web2py is \"better\"/\"worse\" than Rails - Rails may be 0.04sec/request slower at generating templates containing loops, or web2py may have a good DB model generator, or some other technical reason - but those may not be relevant to you at all\n" ]
[ 11, 1, 0 ]
[]
[]
[ "python", "ruby_on_rails", "web2py" ]
stackoverflow_0000327101_python_ruby_on_rails_web2py.txt
Q: python urllib, how to watch messages? How can I watch the messages being sent back and forth on urllib https requests? If it were simple http I would just watch the socket traffic but of course that won't work for https. Is there a debug flag I can set that will do this? import urllib params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) f = urllib.urlopen("https://example.com/cgi-bin/query", params) A: You can always do a little bit of monkeypatching import httplib # override the HTTPS request class class DebugHTTPS(httplib.HTTPS): real_putheader = httplib.HTTPS.putheader def putheader(self, *args, **kwargs): print 'putheader(%s,%s)' % (args, kwargs) result = self.real_putheader(self, *args, **kwargs) return result httplib.HTTPS = DebugHTTPS # set a new default urlopener import urllib class DebugOpener(urllib.FancyURLopener): def open(self, *args, **kwargs): result = urllib.FancyURLopener.open(self, *args, **kwargs) print 'response:' print result.headers return result urllib._urlopener = DebugOpener() params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) f = urllib.urlopen("https://www.google.com/", params) gives the output putheader(('Content-Type', 'application/x-www-form-urlencoded'),{}) putheader(('Content-Length', '21'),{}) putheader(('Host', 'www.google.com'),{}) putheader(('User-Agent', 'Python-urllib/1.17'),{}) response: Content-Type: text/html; charset=UTF-8 Content-Length: 1363 Date: Sun, 09 Aug 2009 12:49:59 GMT Server: GFE/2.0 A: No, there's no debug flag to watch this. You can use your favorite debugger. It is the easiest option. Just add a breakpoint in the urlopen function and you're done. Another option would be to write your own download function: def graburl(url, **params): print "LOG: Going to %s with %r" % (url, params) params = urllib.urlencode(params) return urllib.urlopen(url, params) And use it like this: f = graburl("https://example.com/cgi-bin/query", spam=1, eggs=2, bacon=0)
python urllib, how to watch messages?
How can I watch the messages being sent back and forth on urllib https requests? If it were simple http I would just watch the socket traffic but of course that won't work for https. Is there a debug flag I can set that will do this? import urllib params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) f = urllib.urlopen("https://example.com/cgi-bin/query", params)
[ "You can always do a little bit of mokeypatching\nimport httplib\n\n# override the HTTPS request class\n\nclass DebugHTTPS(httplib.HTTPS):\n real_putheader = httplib.HTTPS.putheader\n def putheader(self, *args, **kwargs):\n print 'putheader(%s,%s)' % (args, kwargs)\n result = self.real_putheader(self, *args, **kwargs)\n return result\n\nhttplib.HTTPS = DebugHTTPS\n\n\n\n# set a new default urlopener\n\nimport urllib\n\nclass DebugOpener(urllib.FancyURLopener):\n def open(self, *args, **kwargs):\n result = urllib.FancyURLopener.open(self, *args, **kwargs)\n print 'response:'\n print result.headers\n return result\n\nurllib._urlopener = DebugOpener()\n\n\nparams = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) \nf = urllib.urlopen(\"https://www.google.com/\", params)\n\ngives the output\nputheader(('Content-Type', 'application/x-www-form-urlencoded'),{})\nputheader(('Content-Length', '21'),{})\nputheader(('Host', 'www.google.com'),{})\nputheader(('User-Agent', 'Python-urllib/1.17'),{})\nresponse:\nContent-Type: text/html; charset=UTF-8\nContent-Length: 1363\nDate: Sun, 09 Aug 2009 12:49:59 GMT\nServer: GFE/2.0\n\n", "No, there's no debug flag to watch this.\nYou can use your favorite debugger. It is the easiest option. Just add a breakpoint in the urlopen function and you're done.\nAnother option would be to write your own download function:\ndef graburl(url, **params):\n print \"LOG: Going to %s with %r\" % (url, params)\n params = urllib.urlencode(params)\n return urllib.urlopen(url, params)\n\nAnd use it like this:\nf = graburl(\"https://example.com/cgi-bin/query\", spam=1, eggs=2, bacon=0)\n\n" ]
[ 2, 1 ]
[]
[]
[ "https", "python", "urllib" ]
stackoverflow_0001250965_https_python_urllib.txt
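The monkeypatching answer above is written for Python 2's httplib/urllib. On Python 3 the same visibility is available without patching anything, through the debuglevel argument of urllib.request.HTTPSHandler, which prints request and response headers to stdout even for TLS connections. A minimal sketch, assuming Python 3:

import urllib.request

handler = urllib.request.HTTPSHandler(debuglevel=1)  # echo headers to stdout
opener = urllib.request.build_opener(handler)

response = opener.open('https://www.google.com/')
print(response.status)

POST data can be passed as a second, bytes-encoded argument to opener.open(), exactly as in the question's urlopen() call.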
Q: Creating alternative login to Google Users for Google app engine How does one handle logging in and out/creating users, without using Google Users? I'd like a few more options than just email and password. Is it just a case of making a user model with the fields I need? Is that secure enough? Alternatively, is there a way to get the user to log in using the Google ID, but without being redirected to the actual Google page? A: I recommend using OpenID, see here for more -- just like Stack Overflow does!-) A: If you roll your own user model, you're going to need to do your own session handling as well; the App Engine Users API creates login sessions for you behind the scenes. Also, while this should be obvious, you shouldn't store the user's password in plaintext; store an SHA-1 hash and compare it to a hash of the user's submitted password when they login.
Creating alternative login to Google Users for Google app engine
How does one handle logging in and out/creating users, without using Google Users? I'd like a few more options than just email and password. Is it just a case of making a user model with the fields I need? Is that secure enough? Alternatively, is there a way to get the user to log in using the Google ID, but without being redirected to the actual Google page?
[ "I recommend using OpenID, see here for more -- just like Stack Overflow does!-)\n", "If you roll your own user model, you're going to need to do your own session handling as well; the App Engine Users API creates login sessions for you behind the scenes. \nAlso, while this should be obvious, you shouldn't store the user's password in plaintext; store an SHA-1 hash and compare it to a hash of the user's submitted password when they login.\n" ]
[ 8, 1 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "model_view_controller", "python" ]
stackoverflow_0001250437_google_app_engine_google_cloud_datastore_model_view_controller_python.txt
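The hashing advice in the second answer can be made concrete with only the standard library. A minimal sketch, assuming Python 3; note that it adds a per-user random salt on top of the answer's bare SHA-1 suggestion, since an unsalted hash is vulnerable to rainbow-table lookups, and that the record format here is illustrative, not part of any App Engine API:

import hashlib
import hmac
import os

def make_password_record(password):
    # Store salt and digest together; never store the password itself.
    salt = os.urandom(16)
    digest = hashlib.sha1(salt + password.encode()).hexdigest()
    return salt.hex() + '$' + digest

def check_password(password, record):
    salt_hex, digest = record.split('$', 1)
    candidate = hashlib.sha1(bytes.fromhex(salt_hex) + password.encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

record = make_password_record('s3cret')
print(check_password('s3cret', record))  # True
print(check_password('wrong', record))   # False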
Q: How to store regular expressions in the Google App Engine datastore? Regular Expressions are usually expressed as strings, but they also have properties (i.e. single-line, multi-line, ignore case). How would you store them? And for compiled regular expressions, how would you store them? Please note that we can write custom property classes: http://googleappengine.blogspot.com/2009/07/writing-custom-property-classes.html As I don't understand Python enough, my first try to write a custom property which stores a compiled regular expression failed. A: I'm not sure if Python supports it, but in .net regex, you can specify these options within the regex itself: (?si)^a.*z$ would specify single-line, ignore case. Indeed, the Python docs describe such a mechanism here: http://docs.python.org/library/re.html To recap: (cut'n'paste from link above) (?iLmsux) (One or more letters from the set 'i', 'L', 'm', 's', 'u', 'x'.) The group matches the empty string; the letters set the corresponding flags: re.I (ignore case), re.L (locale dependent), re.M (multi-line), re.S (dot matches all), re.U (Unicode dependent), and re.X (verbose), for the entire regular expression. (The flags are described in Module Contents.) This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the compile() function. Note that the (?x) flag changes how the expression is parsed. It should be used first in the expression string, or after one or more whitespace characters. If there are non-whitespace characters before the flag, the results are undefined. A: I wouldn't try to store the compiled regex. The data in a compiled regex is not designed to be stored, and is not guaranteed to be picklable or serializable. Just store the string and re-compile (the re module will do this for you behind the scenes anyway). A: You can either store the text, as suggested above, or you can pickle and unpickle the compiled RE. For example, see PickledProperty on the cookbook. Due to the (lack of) speed of Pickle, particularly on App Engine where cPickle is unavailable, you'll probably find that storing the text of the regex is the faster option. In fact, it appears that when pickled, a re simply stores the original text anyway.
How to store regular expressions in the Google App Engine datastore?
Regular Expressions are usually expressed as strings, but they also have properties (i.e. single-line, multi-line, ignore case). How would you store them? And for compiled regular expressions, how would you store them? Please note that we can write custom property classes: http://googleappengine.blogspot.com/2009/07/writing-custom-property-classes.html As I don't understand Python enough, my first try to write a custom property which stores a compiled regular expression failed.
[ "I'm not sure if Python supprts it, but in .net regex, you can specify these options within the regex itself:\n(?si)^a.*z$\n\nwould specify single-line, ignore case.\nIndeed, the Python docs describe such a mechanism here: http://docs.python.org/library/re.html\nTo recap: (cut'n'paste from link above)\n(?iLmsux)\n(One or more letters from the set 'i', 'L', 'm', 's', 'u', 'x'.) The group matches the empty string; the letters set the corresponding flags: re.I (ignore case), re.L (locale dependent), re.M (multi-line), re.S (dot matches all), re.U (Unicode dependent), and re.X (verbose), for the entire regular expression. (The flags are described in Module Contents.) This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the compile() function.\nNote that the (?x) flag changes how the expression is parsed. It should be used first in the expression string, or after one or more whitespace characters. If there are non-whitespace characters before the flag, the results are undefined.\n", "I wouldn't try to store the compiled regex. The data in a compiled regex is not designed to be stored, and is not guaranteed to be picklable or serializable. Just store the string and re-compile (the re module will do this for you behind the scenes anyway).\n", "You can either store the text, as suggested above, or you can pickle and unpickle the compiled RE. For example, see PickledProperty on the cookbook.\nDue to the (lack of) speed of Pickle, particularly on App Engine where cPickle is unavailable, you'll probably find that storing the text of the regex is the faster option. In fact, it appears that when pickled, a re simply stores the original text anyway.\n" ]
[ 3, 3, 2 ]
[]
[]
[ "customproperty", "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001250313_customproperty_google_app_engine_google_cloud_datastore_python.txt
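Following the second answer's advice — store the pattern text and recompile — the flags can be kept alongside the pattern so nothing is lost in the round trip. A minimal sketch in plain Python; the two-field record stands in for whatever datastore properties you would actually use:

import re

def regex_to_record(compiled):
    # A compiled pattern exposes its source text and flags as plain data.
    return {'pattern': compiled.pattern, 'flags': compiled.flags}

def record_to_regex(record):
    return re.compile(record['pattern'], record['flags'])

rx = re.compile(r'^a.*z$', re.IGNORECASE | re.MULTILINE)
stored = regex_to_record(rx)         # persist these two fields
restored = record_to_regex(stored)
print(bool(restored.match('Abcz')))  # True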
Q: Any reason why socket.send() hangs? I'm writing an mini FTP server in Python that exposes an underlying database as if it was FTP. The flow is something like this: sock.send("150 Here's the file you wanted\r\n") proc = Popen2(...) for parts in data: data_sock.send(parts) proc.kill() sock.send("226 There's the file you wanted\r\n") data_sock.shutdown(0) data_sock.close() data_sock is the PASV socket that's up and working, confirmed by Wireshark. What's actually happening is after the 163,328th byte has been sent over the data_sock, the data_sock.send() line just hangs. I suspect the send buffer is full, but it's a mystery to me why the FTP clients wouldn't be reading from the PASV socket. I've included the Popen2(...) line because I've managed to reproduce http://bugs.python.org/issue3006 on OS X--sockets don't close until the Popen process is killed. Not sure if this is somehow related. A: Hard to say from this code fragment and not knowing the client, but is it possible that your sending of 150 (indicating a new data channel), not 125 (indicating use of existing data channel) confuses the client and it simply does not start reading the data? Have you had a look of pyftpdlib as an alternative for rolling your own server? A: I've encountered similar issues on the client side on uploads, which seem to trace to the modem/router choking -- the only workround I have at the moment is to throttle the transmission rate (send 128 bytes, sleep ~50ms, repeat). A: One reason a client could stop reading the data is that somebody unplugged the client (or disconnected its Ethernet cable) during the transfer. In that case, TCP will keep (unsuccessfully) resending packets for several minutes, getting no response, until it gives up. There are other possible reasons as well. Since the above possibilities are something you'll have to deal with if you want a robust server, the real question is not necessarily why it happens but rather what you should do when it does happen. Some possible things to do are: Make sure the clients don't have any bugs that cause them to stop reading even though data is available Make sure that the server doesn't block even if a particular connection's send() call does block (you can do this via select()/poll() and non-blocking sockets, or possibly via multithreading... I recommend the former if possible) Add some timeout logic to select() so that if more than (N) seconds go by where a socket has data ready to send but isn't actually sending it, the server gives up and closes the socket. (TCP does this itself, but TCP's timeout period may be too long for your taste)
Any reason why socket.send() hangs?
I'm writing a mini FTP server in Python that exposes an underlying database as if it was FTP. The flow is something like this: sock.send("150 Here's the file you wanted\r\n") proc = Popen2(...) for parts in data: data_sock.send(parts) proc.kill() sock.send("226 There's the file you wanted\r\n") data_sock.shutdown(0) data_sock.close() data_sock is the PASV socket that's up and working, confirmed by Wireshark. What's actually happening is after the 163,328th byte has been sent over the data_sock, the data_sock.send() line just hangs. I suspect the send buffer is full, but it's a mystery to me why the FTP clients wouldn't be reading from the PASV socket. I've included the Popen2(...) line because I've managed to reproduce http://bugs.python.org/issue3006 on OS X--sockets don't close until the Popen process is killed. Not sure if this is somehow related.
[ "Hard to say from this code fragment and not knowing the client, but is it possible that your sending of 150 (indicating a new data channel), not 125 (indicating use of existing data channel) confuses the client and it simply does not start reading the data?\nHave you had a look of pyftpdlib as an alternative for rolling your own server?\n", "I've encountered similar issues on the client side on uploads, which seem to trace to the modem/router choking -- the only workround I have at the moment is to throttle the transmission rate (send 128 bytes, sleep ~50ms, repeat).\n", "One reason a client could stop reading the data is that somebody unplugged the client (or disconnected its Ethernet cable) during the transfer. In that case, TCP will keep (unsuccessfully) resending packets for several minutes, getting no response, until it gives up. There are other possible reasons as well.\nSince the above possibilities are something you'll have to deal with if you want a robust server, the real question is not necessarily why it happens but rather what you should do when it does happen. Some possible things to do are:\n\nMake sure the clients don't have any bugs that cause them to stop reading even though data is available\nMake sure that the server doesn't block even if a particular connection's send() call does block (you can do this via select()/poll() and non-blocking sockets, or possibly via multithreading... I recommend the former if possible)\nAdd some timeout logic to select() so that if more than (N) seconds go by where a socket has data ready to send but isn't actually sending it, the server gives up and closes the socket. (TCP does this itself, but TCP's timeout period may be too long for your taste)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "ftp", "python", "sockets" ]
stackoverflow_0001250979_ftp_python_sockets.txt
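The select()-with-timeout suggestion in the last answer looks roughly like this in practice. A minimal sketch, assuming an already-connected TCP socket:

import select
import socket

def send_with_timeout(sock, data, timeout=30.0):
    # Send all of data, giving up if the peer stops reading for `timeout` seconds.
    sock.setblocking(False)
    while data:
        # Block until there is room in the kernel send buffer, or time out.
        _, writable, _ = select.select([], [sock], [], timeout)
        if not writable:
            sock.close()
            raise socket.timeout('peer stopped reading; connection dropped')
        sent = sock.send(data)
        data = data[sent:]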
Q: Python imports: Will changing a variable in "child" change variable in "parent"/other children? Suppose you have 3 modules, a.py, b.py, and c.py: a.py: v1 = 1 v2 = 2 etc. b.py: from a import * c.py: from a import * v1 = 0 Will c.py change v1 in a.py and b.py? If not, is there a way to do it? A: All that a statement like: v1 = 0 can do is bind the name v1 to the object 0. It can't affect a different module. If I'm using unfamiliar terms there, and I guess I probably am, I strongly recommend you read Fredrik Lundh's excellent article Python Objects: Reset your brain. A: The from ... import * form is basically intended for handy interactive use at the interpreter prompt: you'd be well advised to never use it in other situations, as it will give you nothing but problems. In fact, the in-house style guide at my employer goes further, recommending to always import a module, never contents from within a module (a module from within a package is OK and in fact recommended). As a result, in our codebase, references to imported things are always qualified names (themod.thething) and never barenames (which always refer to builtin, globals of this same module, or locals); this makes the code much clearer and more readable and avoids all kinds of subtle anomalies. Of course, if a module's name is too long, an as clause in the import, to give it a shorter and handier alias for the purposes of the importing module, is fine. But, with your one-letter module names, that won't be needed;-). So, if you follow the guideline and always import the module (and not things from inside it), c.v1 will always be referring to the same thing as a.v1 and b.v1, both for getting AND setting: here's one potential subtle anomaly avoided right off the bat!-) Remember the very last bit of the Zen of Python (do import this at the interpreter prompt to see it all): Namespaces are one honking great idea -- let's do more of those! Importing the whole module (not bits and pieces from within it) preserves its integrity as a namespace, as does always referring to things inside the imported module by qualified (dotted) names. It's one honking great idea: do more of that!-) A: Yes, you just need to access it correctly (and don't use import *, it's evil) c.py: import a print a.v1 # prints 1 a.v1 = 0 print a.v1 # prints 0
Python imports: Will changing a variable in "child" change variable in "parent"/other children?
Suppose you have 3 modules, a.py, b.py, and c.py: a.py: v1 = 1 v2 = 2 etc. b.py: from a import * c.py: from a import * v1 = 0 Will c.py change v1 in a.py and b.py? If not, is there a way to do it?
[ "All that a statement like:\nv1 = 0\n\ncan do is bind the name v1 to the object 0. It can't affect a different module.\nIf I'm using unfamiliar terms there, and I guess I probably am, I strongly recommend you read Fredrik Lundh's excellent article Python Objects: Reset your brain.\n", "The from ... import * form is basically intended for handy interactive use at the interpreter prompt: you'd be well advised to never use it in other situations, as it will give you nothing but problems.\nIn fact, the in-house style guide at my employer goes further, recommending to always import a module, never contents from within a module (a module from within a package is OK and in fact recommended). As a result, in our codebase, references to imported things are always qualified names (themod.thething) and never barenames (which always refer to builtin, globals of this same module, or locals); this makes the code much clearer and more readable and avoids all kinds of subtle anomalies.\nOf course, if a module's name is too long, an as clause in the import, to give it a shorter and handier alias for the purposes of the importing module, is fine. But, with your one-letter module names, that won't be needed;-).\nSo, if you follow the guideline and always import the module (and not things from inside it), c.v1 will always be referring to the same thing as a.v1 and b.v1, both for getting AND setting: here's one potential subtle anomaly avoided right off the bat!-)\nRemember the very last bit of the Zen of Python (do import this at the interpreter prompt to see it all):\nNamespaces are one honking great idea -- let's do more of those!\n\nImporting the whole module (not bits and pieces from within it) preserves its integrity as a namespace, as does always referring to things inside the imported module by qualified (dotted) names. It's one honking great idea: do more of that!-)\n", "Yes, you just need to access it correctly (and don't use import *, it's evil)\nc.py:\nimport a\nprint a.v1 # prints 1\na.v1 = 0\nprint a.v1 # prints 0\n\n" ]
[ 5, 2, 1 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001251611_import_python.txt
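To make the binding rules in the answers concrete, here is a minimal sketch of the three modules, following the advice to import the module itself rather than names out of it (Python 3 print syntax):

# a.py
v1 = 1

# c.py
import a

v1 = 0       # rebinds the name v1 inside module c only; a.v1 is untouched
print(a.v1)  # 1
a.v1 = 0     # assigns through the module object, so every importer sees it
print(a.v1)  # 0

# b.py, imported afterwards in the same process
import a
print(a.v1)  # 0 -- all importers share the single module object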
Q: Python Class vs. Module Attributes I'm interested in hearing some discussion about class attributes in Python. For example, what is a good use case for class attributes? For the most part, I can not come up with a case where a class attribute is preferable to using a module level attribute. If this is true, then why have them around? The problem I have with them, is that it is almost too easy to clobber a class attribute value by mistake, and then your "global" value has turned into a local instance attribute. Feel free to comment on how you would handle the following situations: Constant values used by a class and/or sub-classes. This may include "magic number" dictionary keys or list indexes that will never change, but possibly need one-time initialization. Default class attribute that on rare occasions is updated for a special instance of the class. Global data structure used to represent an internal state of a class shared between all instances. A class that initializes a number of default attributes, not influenced by constructor arguments. Some Related Posts: Difference Between Class and Instance Attributes A: #4: I never use class attributes to initialize default instance attributes (the ones you normally put in __init__). For example: class Obj(object): def __init__(self): self.users = 0 and never: class Obj(object): users = 0 Why? Because it's inconsistent: it doesn't do what you want when you assign anything but an invariant object: class Obj(object): users = [] causes the users list to be shared across all objects, which in this case isn't wanted. It's confusing to split these into class attributes and assignments in __init__ depending on their type, so I always put them all in __init__, which I find clearer anyway. As for the rest, I generally put class-specific values inside the class. This isn't so much because globals are "evil"--they're not so big a deal as in some languages, because they're still scoped to the module, unless the module itself is too big--but if external code wants to access them, it's handy to have all of the relevant values in one place. For example, in module.py: class Obj(object): class Exception(Exception): pass ... and then: from module import Obj try: o = Obj() o.go() except o.Exception: print "error" Aside from allowing subclasses to change the value (which isn't always wanted anyway), it means I don't have to laboriously import exception names and a bunch of other stuff needed to use Obj. "from module import Obj, ObjException, ..." gets tiresome quickly. A: what is a good use case for class attributes Case 0. Class methods are just class attributes. This is not just a technical similarity - you can access and modify class methods at runtime by assigning callables to them. Case 1. A module can easily define several classes. It's reasonable to encapsulate everything about class A into A... and everything about class B into B.... For example, # module xxx class X: MAX_THREADS = 100 ... # main program from xxx import X if nthreads < X.MAX_THREADS: ... Case 2. This class has lots of default attributes which can be modified in an instance. Here the ability to leave attribute to be a 'global default' is a feature, not bug. class NiceDiff: """Formats time difference given in seconds into a form '15 minutes ago'.""" magic = .249 pattern = 'in {0}', 'right now', '{0} ago' divisions = 1 # there are more default attributes One creates instance of NiceDiff to use the existing or slightly modified formatting, but a localizer to a different language subclasses the class to implement some functions in a fundamentally different way and redefine constants: class Разница(NiceDiff): # NiceDiff localized to Russian '''Из разницы во времени, типа -300, делает конкретно '5 минут назад'.''' pattern = 'через {0}', 'прям щас', '{0} назад' Your cases: constants -- yes, I put them to class. It's strange to say self.CONSTANT = ..., so I don't see a big risk for clobbering them. Default attribute -- mixed, as above may go to class, but may also go to __init__ depending on the semantics. Global data structure --- goes to class if used only by the class, but may also go to module, in either case must be very well-documented. A: Class attributes are often used to allow overriding defaults in subclasses. For example, BaseHTTPRequestHandler has class constants sys_version and server_version, the latter defaulting to "BaseHTTP/" + __version__. SimpleHTTPRequestHandler overrides server_version to "SimpleHTTP/" + __version__. A: Encapsulation is a good principle: when an attribute is inside the class it pertains to instead of being in the global scope, this gives additional information to people reading the code. In your situations 1-4, I would thus avoid globals as much as I can, and prefer using class attributes, which allow one to benefit from encapsulation.
Python Class vs. Module Attributes
I'm interested in hearing some discussion about class attributes in Python. For example, what is a good use case for class attributes? For the most part, I can not come up with a case where a class attribute is preferable to using a module level attribute. If this is true, then why have them around? The problem I have with them, is that it is almost too easy to clobber a class attribute value by mistake, and then your "global" value has turned into a local instance attribute. Feel free to comment on how you would handle the following situations: Constant values used by a class and/or sub-classes. This may include "magic number" dictionary keys or list indexes that will never change, but possibly need one-time initialization. Default class attribute that on rare occasions is updated for a special instance of the class. Global data structure used to represent an internal state of a class shared between all instances. A class that initializes a number of default attributes, not influenced by constructor arguments. Some Related Posts: Difference Between Class and Instance Attributes
[ "#4: \nI never use class attributes to initialize default instance attributes (the ones you normally put in __init__). For example:\nclass Obj(object):\n def __init__(self):\n self.users = 0\n\nand never:\nclass Obj(object):\n users = 0\n\nWhy? Because it's inconsistent: it doesn't do what you want when you assign anything but an invariant object:\nclass Obj(object):\n users = []\n\ncauses the users list to be shared across all objects, which in this case isn't wanted. It's confusing to split these into class attributes and assignments in __init__ depending on their type, so I always put them all in __init__, which I find clearer anyway.\n\nAs for the rest, I generally put class-specific values inside the class. This isn't so much because globals are \"evil\"--they're not so big a deal as in some languages, because they're still scoped to the module, unless the module itself is too big--but if external code wants to access them, it's handy to have all of the relevant values in one place. For example, in module.py:\nclass Obj(object):\n class Exception(Exception): pass\n ...\n\nand then:\nfrom module import Obj\n\ntry:\n o = Obj()\n o.go()\nexcept o.Exception:\n print \"error\"\n\nAside from allowing subclasses to change the value (which isn't always wanted anyway), it means I don't have to laboriously import exception names and a bunch of other stuff needed to use Obj. \"from module import Obj, ObjException, ...\" gets tiresome quickly.\n", "\nwhat is a good use case for class attributes\n\nCase 0. Class methods are just class attributes. This is not just a technical similarity - you can access and modify class methods at runtime by assigning callables to them.\nCase 1. A module can easily define several classes. It's reasonable to encapsulate everything about class A into A... and everything about class B into B.... For example, \n# module xxx\nclass X:\n MAX_THREADS = 100\n ...\n\n# main program\nfrom xxx import X\n\nif nthreads < X.MAX_THREADS: ...\n\nCase 2. This class has lots of default attributes which can be modified in an instance. Here the ability to leave attribute to be a 'global default' is a feature, not bug. \nclass NiceDiff:\n \"\"\"Formats time difference given in seconds into a form '15 minutes ago'.\"\"\"\n\n magic = .249\n pattern = 'in {0}', 'right now', '{0} ago'\n\n divisions = 1\n\n # there are more default attributes\n\nOne creates instance of NiceDiff to use the existing or slightly modified formatting, but a localizer to a different language subclasses the class to implement some functions in a fundamentally different way and redefine constants:\nclass Разница(NiceDiff): # NiceDiff localized to Russian\n '''Из разницы во времени, типа -300, делает конкретно '5 минут назад'.'''\n\n pattern = 'через {0}', 'прям щас', '{0} назад'\n\nYour cases: \n\nconstants -- yes, I put them to class. It's strange to say self.CONSTANT = ..., so I don't see a big risk for clobbering them. \nDefault attribute -- mixed, as above may go to class, but may also go to __init__ depending on the semantics. \nGlobal data structure --- goes to class if used only by the class, but may also go to module, in either case must be very well-documented. \n\n", "Class attributes are often used to allow overriding defaults in subclasses. For example, BaseHTTPRequestHandler has class constants sys_version and server_version, the latter defaulting to \"BaseHTTP/\" + __version__. 
SimpleHTTPRequestHandler overrides server_version to \"SimpleHTTP/\" + __version__.\n", "Encapsulation is a good principle: when an attribute is inside the class it pertains to instead of being in the global scope, this gives additional information to people reading the code.\nIn your situations 1-4, I would thus avoid globals as much as I can, and prefer using class attributes, which allow one to benefit from encapsulation.\n" ]
[ 7, 4, 2, 1 ]
[]
[]
[ "attributes", "class_design", "module", "python" ]
stackoverflow_0001250779_attributes_class_design_module_python.txt
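The 'clobbering' worry in the question — accidentally turning a class-wide value into an instance attribute — takes only a few lines to demonstrate. A minimal sketch:

class Config:
    retries = 3          # class attribute, shared default

a = Config()
b = Config()

a.retries = 5            # creates an instance attribute that shadows the class one
print(a.retries, b.retries)  # 5 3 -- only a is affected

Config.retries = 10      # updating the class attribute reaches non-shadowed lookups
print(a.retries, b.retries)  # 5 10 -- a is still shadowed by its own attribute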
Q: Django + Jquery, expanding AJAX div How can I, when a user clicks a link, open a div right underneath the link which loads its content via AJAX? Thanks for the help; I cannot figure out how. Just statically filling the div on the server side while loading the page works fine, but it's too much content for that. I'm kind of looking for a specific Django version of the solution if anyone has one? A: jQuery.load does exactly that: $("div#my-container").load("/url/to/content/ #content-id") this fetches the content from /url/to/content/, filters it by #content-id and injects the result into div#my-container. edit: there's really nothing Django-specific about this, since it's all client-side. But if you insist... templates/base.html <html> <head> <title>My funky example</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script> {% block extrahead %}{% endblock %} </head> <body> {% block content %}{% endblock %} </body> </html> templates/page.html {% extends "base.html" %} {% block extrahead %} <script type="text/javascript"> $(function(){ $('a.extendable').click(function(){ $(this).after($('<div class="external-content"></div>').load($(this).attr('href') + ' #content')); return false; }); }); </script> {% endblock extrahead %} {% block content %} <p>Hi! <a href="/external/content/a/" class="extendable">Click here</a> and wait for something funny to happen!</p> <p><a href="/external/content/b/" class="extendable">This link</a> is cool, too!</p> {% endblock content %} templates/a.html {% extends "base.html" %} {% block content %} <div id="content">so long and thanks for all the fish</div> {% endblock %} templates/b.html {% extends "base.html" %} {% block content %} <div id="content">Don't panic</div> {% endblock %} urls.py from django.conf.urls.defaults import * urlpatterns = patterns('django.views.generic.simple', (r'^$', 'direct_to_template', {'template': 'page.html'}), (r'^external/content/a/$', 'direct_to_template', {'template': 'a.html'}), (r'^external/content/b/$', 'direct_to_template', {'template': 'b.html'}), ) You can download all the code here. A: Something like this will work <html> <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script> <script type="text/javascript"> function loadDiv() { $.get("test.php", function(data){ $('#thediv').html(data); }); } </script> </head> <body> <a href="javascript:loadDiv();">Load Div</a> <div id="thediv"></div> </body> </html>
Django + Jquery, expanding AJAX div
How can I, when a user clicks a link, open a div right underneath the link which loads its content via AJAX? Thanks for the help; I cannot figure out how. Just statically filling the div on the server side while loading the page works fine, but it's too much content for that. I'm kind of looking for a specific Django version of the solution if anyone has one?
[ "jQuery.load does exactly that:\n$(\"div#my-container\").load(\"/url/to/content/ #content-id\")\n\nthis fetches the content from /url/to/content/, filters it by #content-id and injects the result into div#my-container.\nedit: there's really nothing Django-specific about this, since it's all client-side. But if you insist...\ntemplates/base.html\n<html>\n <head>\n <title>My funky example</title>\n <script type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js\"></script>\n {% block extrahead %}{% endblock %}\n </head>\n <body>\n {% block content %}{% endblock %}\n </body>\n</html>\n\ntemplates/page.html\n{% extends \"base.html\" %}\n{% block extrahead %}\n <script type=\"text/javascript\">\n $(function(){\n $('a.extendable').click(function(){\n $(this).after($('<div class=\"external-content\"></div>').load($(this).attr('href') + ' #content'));\n return false;\n });\n });\n </script>\n{% endblock extrahead %}\n{% block content %}\n <p>Hi! <a href=\"/external/content/a/\" class=\"extendable\">Click here</a> and wait for something funny to happen!</p>\n <p><a href=\"/external/content/b/\" class=\"extendable\">This link</a> is cool, too!</p>\n{% endblock content %}\n\ntemplates/a.html\n{% extends \"base.html\" %}\n{% block content %}\n <div id=\"content\">so long and thanks for all the fish</div>\n{% endblock %}\n\ntemplates/b.html\n{% extends \"base.html\" %}\n{% block content %}\n <div id=\"content\">Don't panic</div>\n{% endblock %}\n\nurls.py\nfrom django.conf.urls.defaults import *\nurlpatterns = patterns('django.views.generic.simple',\n (r'^$', 'direct_to_template', {'template': 'page.html'}),\n (r'^external/content/a/$', 'direct_to_template', {'template': 'a.html'}),\n (r'^external/content/b/$', 'direct_to_template', {'template': 'b.html'}),\n)\n\nYou can download all the code here.\n", "Something like this will work\n<html>\n<head>\n <script type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js\"></script>\n <script type=\"text/javascript\">\n function loadDiv() {\n $.get(\"test.php\", function(data){\n $('#thediv').html(data);\n });\n }\n\n</script>\n</head>\n<body>\n<a href=\"javascript:loadDiv();\">Load Div</a>\n<div id=\"thediv\"></div>\n\n</body>\n</html>\n\n" ]
[ 13, 1 ]
[]
[]
[ "django", "jquery", "python" ]
stackoverflow_0001252275_django_jquery_python.txt
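On the server side, the Django view behind such a link only needs to return the HTML fragment that $.load() or $.get() injects. A minimal sketch with hypothetical view and template names (the answers above use the old django.views.generic.simple helpers, which modern Django has since removed):

# views.py
from django.http import HttpResponse
from django.template.loader import render_to_string

def entry_fragment(request, entry_id):
    # Render only the partial template; the client-side JavaScript drops the
    # returned markup into the div beneath the clicked link.
    html = render_to_string('fragments/entry.html', {'entry_id': entry_id})
    return HttpResponse(html)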
Q: How to check null value for UserProperty in Google App Engine In Google App Engine, datastore modelling, I would like to ask how I can check for a null value of a property of class UserProperty? for example: I have this code: class Entry(db.Model): title = db.StringProperty() description = db.StringProperty() author = db.UserProperty() editor = db.UserProperty() creationdate = db.DateTimeProperty() When I want to check those entries whose editor is not null, I cannot use this kind of GqlQuery query = db.GqlQuery("SELECT * FROM Entry " + "WHERE editor IS NOT NULL" + "ORDER BY creationdate DESC") entries = query.fetch(5) I am wondering if there is any method for checking the existence of a variable with UserProperty? Thank you! A: query = db.GqlQuery("SELECT * FROM Entry WHERE editor > :1",None) However, you can't ORDER BY one column and have an inequality condition on another column: that's a well-known GAE limitation and has nothing to do with the property being a UserProperty nor with the inequality check you're doing being with None. Edit: I had a != before, but as @Nick pointed out, anything that's != None is > None, and > is about twice as fast on GAE (since != is synthesized by union of < and >), so using > here is a worthwhile optimization.
How to check null value for UserProperty in Google App Engine
In Google App Engine, datastore modelling, I would like to ask how I can check for a null value of a property of class UserProperty? for example: I have this code: class Entry(db.Model): title = db.StringProperty() description = db.StringProperty() author = db.UserProperty() editor = db.UserProperty() creationdate = db.DateTimeProperty() When I want to check those entries whose editor is not null, I cannot use this kind of GqlQuery query = db.GqlQuery("SELECT * FROM Entry " + "WHERE editor IS NOT NULL" + "ORDER BY creationdate DESC") entries = query.fetch(5) I am wondering if there is any method for checking the existence of a variable with UserProperty? Thank you!
[ "query = db.GqlQuery(\"SELECT * FROM Entry WHERE editor > :1\",None)\n\nHowever, you can't ORDER BY one column and have an inequality condition on another column: that's a well-known GAE limitation and has nothing to do with the property being a UserProperty nor with the inequality check you're doing being with None.\nEdit: I had a != before, but as @Nick pointed out, anything that's != None is > None, and > is about twice as fast on GAE (since != is synthesized by union of < and >), so using > here is a worthwhile optimization.\n" ]
[ 13 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001252196_google_app_engine_google_cloud_datastore_python.txt
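The same inequality filter can be written through the query-object interface instead of GQL. A short sketch, assuming the old google.appengine.ext.db API used in the question:

# Equivalent to "SELECT * FROM Entry WHERE editor > :1" with None bound:
query = Entry.all().filter('editor >', None)
entries = query.fetch(5)

# As the answer notes, ordering by a *different* property cannot be combined
# with this inequality; ordering by editor itself is allowed:
query = Entry.all().filter('editor >', None).order('editor')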
Q: Controlling getter and setter for a Python class Consider the following class: class Token: def __init__(self): self.d_dict = {} def __setattr__(self, s_name, value): self.d_dict[s_name] = value def __getattr__(self, s_name): if s_name in self.d_dict.keys(): return self.d_dict[s_name] else: raise AttributeError('No attribute {0} found !'.format(s_name)) In my code Token has some other functions (like get_all() which returns d_dict, has(s_name) which tells me if my token has a particular attribute). Anyway, I think there is a flaw in my plan since it doesn't work: when I create a new instance, python tries to call __setattr__('d_dict', '{}'). How can I achieve a similar behaviour (maybe in a more pythonic way ?) without having to write something like Token.set(name, value) and get(name) each time I want to set or get an attribute for a token. Criticism about design flaws and/or stupidity welcome :) Thanks! A: You need to special-case d_dict. Although of course, in the above code, all you do is replicate what any object does with __dict__ already, so it's pretty pointless. Do I guess correctly if you intended to special case some attributes and actually use methods for those? In that case, you can use properties. class C(object): def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x A: The special-casing of __dict__ works like this: def __init__(self): self.__dict__['d_dict'] = {} There is no need to use a new-style class for that. A: A solution, not very pythonic but works. As Lennart Regebro pointed out, you have to use a special case for d_dict. class Token(object): def __init__(self): super(Token,self).__setattr__('d_dict', {}) def __getattr__(self,name): return self.d_dict[name] def __setattr__(self,name,value): self.d_dict[name] = value You need to use new style classes. A: the problem seems to be in time of evaluation of your code in __init__ method. You could define __new__ method and initialize d_dict variable there instead of __init__. That's a bit hackish but it works, remember though to comment it as after few months it'll be total magic. >>> class Foo(object): ... def __new__(cls, *args): ... my_cls = super(Foo, cls).__new__(cls, *args) ... my_cls.d_dict = {} ... return my_cls >>> f = Foo() >>> id(f.d_dict) 3077948796L >>> d = Foo() >>> id(d.d_dict) 3078142804L Word of explanation why I consider that hackish: call to __new__ returns new instance of class so then d_dict initialised in there is kind of static, but it's initialised with new instance of dictionary each time class is "created" so everything works as you need. A: It's worth remembering that __getattr__ is only called if the attribute doesn't exist in the object, whereas __setattr__ is always called. A: I think we'll be able to say something about the overall design of your class if you explain its purpose. For example, # This is a class that serves as a dictionary but also has user-defined methods class mydict(dict): pass # This is a class that allows setting x.attr = value or getting x.attr: class mysetget: pass # This is a class that allows setting x.attr = value or getting x.attr: class mygetsethas: def has(self, key): return key in self.__dict__ x = mygetsethas() x.a = 5 print(x.has('a'), x.a) I think the last class is closest to what you meant, and I also like to play with syntax and get lots of joy from it, but unfortunately this is not a good thing. Reasons why it's not advisable to use object attributes to re-implement dictionary: you can't use x.3, you conflict with x.has(), you have to put quotes in has('a') and many more.
Controlling getter and setter for a Python class
Consider the following class: class Token: def __init__(self): self.d_dict = {} def __setattr__(self, s_name, value): self.d_dict[s_name] = value def __getattr__(self, s_name): if s_name in self.d_dict.keys(): return self.d_dict[s_name] else: raise AttributeError('No attribute {0} found !'.format(s_name)) In my code Token has some other functions (like get_all() which returns d_dict, has(s_name) which tells me if my token has a particular attribute). Anyway, I think there is a flaw in my plan since it doesn't work: when I create a new instance, python tries to call __setattr__('d_dict', '{}'). How can I achieve a similar behaviour (maybe in a more pythonic way ?) without having to write something like Token.set(name, value) and get(name) each time I want to set or get an attribute for a token. Criticism about design flaws and/or stupidity welcome :) Thanks!
[ "You need to special-case d_dict.\nAlthough of course, in the above code, all you do is replicate what any object does with __dict__ already, so it's pretty pointless. Do I guess correctly if you intended to special case some attributes and actally use methods for those?\nIn that case, you can use properties.\nclass C(object):\n def __init__(self):\n self._x = None\n\n @property\n def x(self):\n \"\"\"I'm the 'x' property.\"\"\"\n return self._x\n\n @x.setter\n def x(self, value):\n self._x = value\n\n @x.deleter\n def x(self):\n del self._x\n\n", "The special-casing of __dict__ works like this:\ndef __init__(self):\n self.__dict__['d_dict'] = {}\n\nThere is no need to use a new-style class for that.\n", "A solution, not very pythonic but works. As Lennart Regebro pointed, you have to use a special case for d_dict.\nclass Token(object):\n\n def __init__(self):\n super(Token,self).__setattr__('d_dict', {})\n\n def __getattr__(self,name):\n return self.a[name]\n\n def __setattr__(self,name,value):\n self.a[name] = value\n\nYou need to use new style classes.\n", "the problem seems to be in time of evaluation of your code in __init__ method.\nYou could define __new__ method and initialize d_dict variable there instead of __init__.\nThats a bit hackish but it works, remember though to comment it as after few months it'll be total magic.\n>>> class Foo(object):\n... def __new__(cls, *args):\n... my_cls = super(Foo, cls).__new__(cls, *args)\n... my_cls.d_dict = {}\n... return my_cls\n\n>>> f = Foo()\n>>> id(f.d_dict)\n3077948796L\n>>> d = Foo()\n>>> id(d.d_dict)\n3078142804L\n\nWord of explanation why I consider that hackish: call to __new__ returns new instance of class so then d_dict initialised in there is kind of static, but it's initialised with new instance of dictionary each time class is \"created\" so everything works as you need.\n", "It's worth remembering that __getattr__ is only called if the attribute doesn't exist in the object, whereas __setattr__ is always called.\n", "I think we'll be able to say something about the overall design of your class if you explain its purpose. For example, \n# This is a class that serves as a dictionary but also has user-defined methods\nclass mydict(dict): pass\n\n# This is a class that allows setting x.attr = value or getting x.attr:\nclass mysetget: pass\n\n# This is a class that allows setting x.attr = value or getting x.attr:\nclass mygetsethas: \n def has(self, key):\n return key in self.__dict__\n\nx = mygetsethas()\nx.a = 5\nprint(x.has('a'), x.a)\n\nI think the last class is closest to what you meant, and I also like to play with syntax and get lots of joy from it, but unfortunately this is not a good thing. Reasons why it's not advisable to use object attributes to re-implement dictionary: you can't use x.3, you conflict with x.has(), you have to put quotes in has('a') and many more.\n" ]
[ 3, 3, 2, 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001241703_python.txt
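Putting the special-casing advice from the answers together, a minimal working version of the question's Token class looks like this (Python 3 syntax; the original targets Python 2):

class Token:
    def __init__(self):
        # Bypass our own __setattr__ so d_dict itself lands in __dict__.
        object.__setattr__(self, 'd_dict', {})

    def __setattr__(self, name, value):
        self.d_dict[name] = value

    def __getattr__(self, name):
        # Only called when normal lookup fails, so d_dict resolves normally.
        try:
            return self.d_dict[name]
        except KeyError:
            raise AttributeError('No attribute {0} found!'.format(name))

t = Token()
t.colour = 'red'
print(t.colour)  # red
print(t.d_dict)  # {'colour': 'red'}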
Q: new django apps from 1.1 causing 500 error I'm running on wsgi on centos 5... I've recently updated locally from 1.0 to 1.1 I updated the server using svn update now when I apply a new app developed locally to the server it returns with a 500 error. all I'm doing is python manage.py startapp appname adding the app into installed_apps in the settings file and uploading this then causes a 500 error... Any idea what would be causing this? A: Check also the list at http://code.djangoproject.com/wiki/BackwardsIncompatibleChanges. A: Didn't we solve this for you in IRC the other day? If not, there was someone with the same OS and vague problem description. Turned out to be a third-party app causing the problem, not the newly added one (which a review of the traceback proved if read carefully)
new django apps from 1.1 causing 500 error
I'm running on wsgi on centos 5... I've recently updated locally from 1.0 to 1.1 I updated the server using svn update now when I apply a new app developed locally to the server it returns with a 500 error. all I'm doing is python manage.py startapp appname adding the app into installed_apps in the settings file and uploading this then causes a 500 error... Any idea what would be causing this?
[ "Check also the list at http://code.djangoproject.com/wiki/BackwardsIncompatibleChanges.\n", "Didn't we solve this for you in IRC the other day? If not, there was someone with the same OS and vague problem description.\nTurned out to be a third-party app causing the problem, not the newly added one (which a review of the trackback proved if read carefully)\n" ]
[ 1, 0 ]
[]
[]
[ "centos", "django", "python" ]
stackoverflow_0001235777_centos_django_python.txt
Q: Portable command execution syntax implemented in Python Python is not a pretty good language in defining a set of commands to run. Bash is. But Bash does not run naively on Windows. Background: I am trying to build a set of programs - with established dependency relationships between them - on mac/win/linux. Something like macports but should work on all the three platforms listed. This begets the necessary to have a shell-like language that can be used to define the command sequence to run for building/patching/etc.. a particular program, for instance: post-destroot { xinstall -d ${destroot}${prefix}/share/emacs/${version}/leim delete ${destroot}${prefix}/bin/ctags delete ${destroot}${prefix}/share/man/man1/ctags.1 if {[variant_isset carbon]} { global version delete ${destroot}${prefix}/bin/emacs ${destroot}${prefix}/bin/emacs-${version} } } The above is from the emacs Portfile in macports. Note the use of variables, conditionals, functions .. besides specifying a simple list of commands to execute in that order. Although the syntax is not Python, the actual execution semantics has to be deferred to Python using subprocess or whatever. In short, I should be able to 'run' such a script .. but for each command a specified hook function gets called that actually runs any command passed as argument. I hope you get the idea. A: Sounds like you need some combination of PyParsing and Python Subprocess. I find subprocess a little confusing, despite the MOTW about it, so I use this kind of wrapper code a lot. from subprocess import Popen, PIPE def shell(args, input=None): p = Popen(args, stdin=PIPE, stdout=PIPE, stderr=PIPE) stdout, stderr = p.communicate(input=input) return stdout, stderr A: Ned Batchelder, in this post, lists several dozens of different tools you might choose to perform parsing, in Python, of some arbitrary language (though I don't quite understand why you don't want to use Python itself as your "language to define a set of commands to run", I'm sure you have your own reasons). One of these tools Ned lists is pyparsing, as Gregg mentions, but there are more than thirty others so you may want to have a look before you pick one that's to your taste. Once you have transformed your input source language into a syntax tree or other convenient in-memory representation, you can just walk the tree and execute as you go (inserting variable values, branching on conditionals and loops, etc). Besides the ability to run external processes (e.g. via subprocess, whether wrapped up as Gregg suggests, or not), don't forget that Python itself may be able to execute some of your elementary commands without breaking a sweat, and, when feasible, that will be significantly faster than delegating execution to a child process (indeed, one early motivation for Perl's success was that it could do a lot of things within one process, while sh was forking away like mad; modern descendants of sh like bash and ksh took the lesson, and now implement a lot of built-ins that can execute within the same process as the main script;-). For example, that delete command in your example can be implemented "internally" via os.unlink (can't link to the Python online docs right now as python.org is currently down due to HW problems;-). 
A: Here's my trivial suggestion: parse it via regexp to Python so it becomes def post_destroot(): xinstall ('-d', '${destroot}${prefix}/share/emacs/${version}/leim') delete ('${destroot}${prefix}/bin/ctags') delete ('${destroot}${prefix}/share/man/man1/ctags.1') if test('variant_isset', 'carbon'): global('version') delete('${destroot}${prefix}/bin/emacs', '${destroot}${prefix}/bin/emacs-${version}') I think it's not hard to write xinstall(), delete() and test() functions, especially since Python already has a built-in method to format strings: "{destroot}".format(**dictionary). But why bother? Try looking into the distutils module from the standard library.
Portable command execution syntax implemented in Python
Python is not a particularly good language for defining a set of commands to run. Bash is. But Bash does not run natively on Windows. Background: I am trying to build a set of programs - with established dependency relationships between them - on mac/win/linux. Something like macports but should work on all three platforms listed. This begets the necessity of having a shell-like language that can be used to define the command sequence to run for building/patching/etc.. a particular program, for instance: post-destroot { xinstall -d ${destroot}${prefix}/share/emacs/${version}/leim delete ${destroot}${prefix}/bin/ctags delete ${destroot}${prefix}/share/man/man1/ctags.1 if {[variant_isset carbon]} { global version delete ${destroot}${prefix}/bin/emacs ${destroot}${prefix}/bin/emacs-${version} } } The above is from the emacs Portfile in macports. Note the use of variables, conditionals, functions .. besides specifying a simple list of commands to execute in that order. Although the syntax is not Python, the actual execution semantics has to be deferred to Python using subprocess or whatever. In short, I should be able to 'run' such a script .. but for each command a specified hook function gets called that actually runs any command passed as argument. I hope you get the idea.
[ "Sounds like you need some combination of PyParsing and Python Subprocess.\nI find subprocess a little confusing, despite the MOTW about it, so I use this kind of wrapper code a lot.\nfrom subprocess import Popen, PIPE\n\ndef shell(args, input=None):\n p = Popen(args, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n stdout, stderr = p.communicate(input=input)\n return stdout, stderr\n\n", "Ned Batchelder, in this post, lists several dozens of different tools you might choose to perform parsing, in Python, of some arbitrary language (though I don't quite understand why you don't want to use Python itself as your \"language to define a set of commands to run\", I'm sure you have your own reasons). One of these tools Ned lists is pyparsing, as Gregg mentions, but there are more than thirty others so you may want to have a look before you pick one that's to your taste.\nOnce you have transformed your input source language into a syntax tree or other convenient in-memory representation, you can just walk the tree and execute as you go (inserting variable values, branching on conditionals and loops, etc). Besides the ability to run external processes (e.g. via subprocess, whether wrapped up as Gregg suggests, or not), don't forget that Python itself may be able to execute some of your elementary commands without breaking a sweat, and, when feasible, that will be significantly faster than delegating execution to a child process (indeed, one early motivation for Perl's success was that it could do a lot of things within one process, while sh was forking away like mad; modern descendants of sh like bash and ksh took the lesson, and now implement a lot of built-ins that can execute within the same process as the main script;-).\nFor example, that delete command in your example can be implemented \"internally\" via os.unlink (can't link to the Python online docs right now as python.org is currently down due to HW problems;-).\n", "Here's my trivial suggestion: parse it via regexp to Python so it becomes\ndef post_destroot():\n xinstall ('-d', '${destroot}${prefix}/share/emacs/${version}/leim')\n delete ('${destroot}${prefix}/bin/ctags')\n delete ('${destroot}${prefix}/share/man/man1/ctags.1')\n if test('variant_isset', 'carbon'):\n global('version')\n delete('${destroot}${prefix}/bin/emacs', '${destroot}${prefix}/bin/emacs-${version}')\n\nI think it's not hard to write xinstall(), delete() and test() functions, especially since Python already has built-in function to format strings \"{destroot}\".format(dictionary).\nBut why bother? Try looking into distutils module from the standard library.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "command", "cross_platform", "python", "shell" ]
stackoverflow_0001249165_command_cross_platform_python_shell.txt
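A minimal sketch of the hook-dispatch idea the question asks for, combining Gregg's wrapper approach with Alex's point about keeping simple commands in-process. Every name here (HOOKS, run_line, the fallback to an external install binary) is a hypothetical illustration, not part of macports or any real tool:

import os
import shlex
import subprocess

def delete(path):
    os.unlink(path)                     # simple commands can stay in-process

def xinstall(*args):
    subprocess.check_call(['install'] + list(args))   # defer to a child process

HOOKS = {'delete': delete, 'xinstall': xinstall}

def run_line(line, variables):
    # Expand ${name} placeholders, then dispatch to the matching hook.
    for name, value in variables.items():
        line = line.replace('${%s}' % name, value)
    parts = shlex.split(line)
    HOOKS[parts[0]](*parts[1:])

# Example: run_line('delete ${prefix}/bin/ctags', {'prefix': '/tmp/stage'})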
Q: sort dictionary by another dictionary I've been having a problem with making sorted lists from dictionaries. I have this list list = [ d = {'file_name':'thisfile.flt', 'item_name':'box', 'item_height':'8.7', 'item_width':'10.5', 'item_depth':'2.2', 'texture_file': 'red.jpg'}, d = {'file_name':'thatfile.flt', 'item_name':'teapot', 'item_height':'6.0', 'item_width':'12.4', 'item_depth':'3.0' 'texture_file': 'blue.jpg'}, etc. ] I'm trying to loop through the list and from each dictionary create a new list containing items from the dictionary. (It varies which items and how many items need to be appended to the list as the user makes that choice) sort the list When I say sort, I imagine creating a new dictionary like this order = { 'file_name': 0, 'item_name': 1, 'item_height': 2, 'item_width': 3, 'item_depth': 4, 'texture_file': 5 } and it sorts each list by the values in the order dictionary. During one execution of the script all the lists might look like this ['thisfile.flt', 'box', '8.7', '10.5', '2.2'] ['thatfile.flt', 'teapot', '6.0', '12.4', '3.0'] on the other hand they might look like this ['thisfile.flt', 'box', '8.7', '10.5', 'red.jpg'] ['thatfile.flt', 'teapot', '6.0', '12.4', 'blue.jpg'] I guess my question is how would I go about making a list from specific values from a dictionary and sorting it by the values in another dictionary which has the same keys as the first dictionary? Appreciate any ideas/suggestions, sorry for noobish behaviour - I am still learning python/programming A: The first code box has invalid Python syntax (I suspect the d = parts are extraneous...?) as well as unwisely trampling on the built-in name list. Anyway, given for example: d = {'file_name':'thisfile.flt', 'item_name':'box', 'item_height':'8.7', 'item_width':'10.5', 'item_depth':'2.2', 'texture_file': 'red.jpg'} order = { 'file_name': 0, 'item_name': 1, 'item_height': 2, 'item_width': 3, 'item_depth': 4, 'texture_file': 5 } one nifty way to get the desired result ['thisfile.flt', 'box', '8.7', '10.5', '2.2', "red.jpg'] would be: def doit(d, order): return [d[k] for k in sorted(order, key=order.get)]
sort dictionary by another dictionary
I've been having a problem with making sorted lists from dictionaries. I have this list list = [ d = {'file_name':'thisfile.flt', 'item_name':'box', 'item_height':'8.7', 'item_width':'10.5', 'item_depth':'2.2', 'texture_file': 'red.jpg'}, d = {'file_name':'thatfile.flt', 'item_name':'teapot', 'item_height':'6.0', 'item_width':'12.4', 'item_depth':'3.0' 'texture_file': 'blue.jpg'}, etc. ] I'm trying to loop through the list and from each dictionary create a new list containing items from the dictionary. (It varies which items and how many items need to be appended to the list as the user makes that choice) sort the list When I say sort, I imagine creating a new dictionary like this order = { 'file_name': 0, 'item_name': 1, 'item_height': 2, 'item_width': 3, 'item_depth': 4, 'texture_file': 5 } and it sorts each list by the values in the order dictionary. During one execution of the script all the lists might look like this ['thisfile.flt', 'box', '8.7', '10.5', '2.2'] ['thatfile.flt', 'teapot', '6.0', '12.4', '3.0'] on the other hand they might look like this ['thisfile.flt', 'box', '8.7', '10.5', 'red.jpg'] ['thatfile.flt', 'teapot', '6.0', '12.4', 'blue.jpg'] I guess my question is how would I go about making a list from specific values from a dictionary and sorting it by the values in another dictionary which has the same keys as the first dictionary? Appreciate any ideas/suggestions, sorry for noobish behaviour - I am still learning python/programming
[ "The first code box has invalid Python syntax (I suspect the d = parts are extraneous...?) as well as unwisely trampling on the built-in name list.\nAnyway, given for example:\nd = {'file_name':'thisfile.flt', 'item_name':'box', 'item_height':'8.7', \n 'item_width':'10.5', 'item_depth':'2.2', 'texture_file': 'red.jpg'}\n\norder = {\n 'file_name': 0,\n 'item_name': 1, \n 'item_height': 2,\n 'item_width': 3,\n 'item_depth': 4,\n 'texture_file': 5\n}\n\none nifty way to get the desired result ['thisfile.flt', 'box', '8.7', '10.5', '2.2', \"red.jpg'] would be:\ndef doit(d, order):\n return [d[k] for k in sorted(order, key=order.get)]\n\n" ]
[ 11 ]
[]
[]
[ "dictionary", "python", "sorting" ]
stackoverflow_0001252481_dictionary_python_sorting.txt
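For what it's worth, a quick sanity check of the doit() helper from the answer above, using one of the question's records as made-up input:

d = {'file_name': 'thisfile.flt', 'item_name': 'box', 'item_height': '8.7',
     'item_width': '10.5', 'item_depth': '2.2', 'texture_file': 'red.jpg'}
order = {'file_name': 0, 'item_name': 1, 'item_height': 2,
         'item_width': 3, 'item_depth': 4, 'texture_file': 5}

def doit(d, order):
    # Sort the keys of `order` by their rank, then pull values from `d`.
    return [d[k] for k in sorted(order, key=order.get)]

print doit(d, order)
# -> ['thisfile.flt', 'box', '8.7', '10.5', '2.2', 'red.jpg']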
Q: Python: Zope's BTree OOSet, IISet, etc... Effective for this requirement? I asked another question: https://stackoverflow.com/questions/1180240/best-way-to-sort-1m-records-in-python where I was trying to determine the best approach for sorting 1 million records. In my case I need to be able to add additional items to the collection and have them resorted. It was suggested that I try using Zope's BTrees for this task. After doing some reading I am a little stumped as to what data I would put in a set. Basically, for each record I have two pieces of data. 1. A unique ID which maps to a user and 2. a value of interest for sorting on. I see that I can add the items to an OOSet as tuples, where the value for sorting on is at index 0. So, (200, 'id1'),(120, 'id2'),(400, 'id3') and the resulting set would be sorted with id2, id1 and id3 in order. However, part of the requirement for this is that each id appear only once in the set. I will be adding additional data to the set periodically and the new data may or may not include duplicated 'ids'. If they are duplicated I want to update the value and not add an additional entry. So, based on the tuples above, I might add (405, 'id1'),(10, 'id4') to the set and would want the output to have id4, id2, id3, id1 in order. Any suggestions on how to accomplish this? Sorry for my newbness on the subject. * EDIT - additional info * Here is some actual code from the project: for field in lb_fields: t = time.time() self.data[field] = [ (v[field], k) for k, v in self.foreign_keys.iteritems() ] self.data[field].sort(reverse=True) print "Added %s: %03.5f seconds" %(field, (time.time() - t)) foreign_keys is the original data in a dictionary with each id as the key and a dictionary of the additional data as the value. data is a dictionary containing the lists of sorted data. As a side note, as each iteration of the for field in lb_fields runs, the time to sort increases - not by much... but it is noticeable. After 1 million records have been sorted for each of the 16 fields it is using about 4 Gigs of RAM. Eventually this will run on a machine with 48 Gigs. A: I don't think BTrees or other traditional sorted data structures (red-black trees, etc) will help you, because they keep order by key, not by corresponding value -- in other words, the field they guarantee as unique is the same one they order by. Your requirements are different, because you want uniqueness along one field, but sortedness by the other. What are your performance requirements? With a rather simple pure Python implementation based on Python dicts for uniqueness and Python sorts, on a not-blazingly-fast laptop, I get 5 seconds for the original construction (essentially a sort over the million elements, starting with them as a dict), and about 9 seconds for the "update" with 20,000 new id/value pairs of which half "overlap" (thus overwrite) an existing id and half are new (I can implement the update in a faster way, about 6.5 seconds, but that implementation has an anomaly: if one of the "new" pairs is exactly identical to one of the "old" ones, both id and value, it's duplicated -- warding against such "duplication of identicals" is what pushes me from 6.5 seconds to 9, and I imagine you would need the same kind of precaution). How far are these 5-and-9 seconds times from your requirements (taking into account the actual speed of the machine you'll be running on vs the 2.4 GHz Core Duo, 2GB of RAM, and typical laptop performance issues of this laptop I'm using)? 
IOW, is it close enough to "striking distance" to be worth tinkering and trying to squeeze a last few cycles out of, or do you need orders of magnitude faster performance? I've tried several other approaches (with a SQL DB, with C++ and its std::sort &c, ...) but they're all slower, so if you need much higher performance I'm not sure what you could do. Edit: since the OP says this performance would be fine but he can't achieve anywhere near it, I guess I'd best show the script I used to measure these times...: import gc import operator import random import time nk = 1000 def popcon(d): for x in xrange(nk*1000): d['id%s' % x] = random.randrange(100*1000) def sorted_container(): ctr = dict() popcon(ctr) start = time.time() ctr_sorted = ctr.items() ctr_sorted.sort(key=operator.itemgetter(1)) stend = time.time() return stend-start, ctr_sorted def do_update(ctr, newones): start = time.time() dicol = dict(ctr) ctr.extend((k,v) for (k,v) in newones if v!=dicol.get(k,None)) dicnu = dict(newones) ctr.sort(key=operator.itemgetter(1)) newctr = [(k,v) for (k,v) in ctr if v==dicnu.get(k,v)] stend = time.time() return stend-start, newctr def main(): random.seed(12345) for x in range(3): duration, ctr = sorted_container() print 'dict-to-sorted, %d: %.2f sec, len=%d' % (x, duration, len(ctr)) newones = [('id%s' % y, random.randrange(nk*100)) for y in xrange(nk*990,nk*1010)] duration, ctr = do_update(ctr, newones) print 'updt-to-sorted, %d: %.2f sec, len=%d' % (x, duration, len(ctr)) del ctr gc.collect() main() and this is a typical run: $ time python som.py dict-to-sorted, 0: 5.01 sec, len=1000000 updt-to-sorted, 0: 9.78 sec, len=1010000 dict-to-sorted, 1: 5.02 sec, len=1000000 updt-to-sorted, 1: 9.12 sec, len=1010000 dict-to-sorted, 2: 5.03 sec, len=1000000 updt-to-sorted, 2: 9.12 sec, len=1010000 real 0m54.073s user 0m52.464s sys 0m1.258s the overall elapsed time being a few seconds more than the totals I'm measuring, obviously, because it includes the time needed to populate the container with random numbers, generate the "new data" also randomly, destroy and garbage-collect things at the end of each run, and so forth. This is with the system-supplied Python 2.5.2 on a Macbook with Mac OS X 10.5.7, 2.4 GHz Intel Core Duo, and 2GB of RAM (times don't change much when I use different versions of Python). A: It is perfectly possible to solve your problem. For this you should just note that the container types in Python always compare objects by calling their methods. Therefore you should do something like: class Record: 'Combination of unique part and sort part.' def __init__(self, unique, sort): self.unique = unique self.sort = sort def __hash__(self): # Hash should be implemented if __eq__ is implemented. return hash(self.unique) def __eq__(self, other): return self.unique == other.unique def __lt__(self, other): return self.sort < other.sort records = btree((Record(u, s) for u, s in zip(unique_data, sort_data))) print(records.pop()) Notes: depending on how your favorite container type is implemented, you might need to add methods for !=, <=, >, >= as well this will not break the relationship between == and <= as long as x.unique == y.unique ==> x.sort == y.sort
Python: Zope's BTree OOSet, IISet, etc... Effective for this requirement?
I asked another question: https://stackoverflow.com/questions/1180240/best-way-to-sort-1m-records-in-python where I was trying to determine the best approach for sorting 1 million records. In my case I need to be able to add additional items to the collection and have them resorted. It was suggested that I try using Zope's BTrees for this task. After doing some reading I am a little stumped as to what data I would put in a set. Basically, for each record I have two pieces of data. 1. A unique ID which maps to a user and 2. a value of interest for sorting on. I see that I can add the items to an OOSet as tuples, where the value for sorting on is at index 0. So, (200, 'id1'),(120, 'id2'),(400, 'id3') and the resulting set would be sorted with id2, id1 and id3 in order. However, part of the requirement for this is that each id appear only once in the set. I will be adding additional data to the set periodically and the new data may or may not include duplicated 'ids'. If they are duplicated I want to update the value and not add an additional entry. So, based on the tuples above, I might add (405, 'id1'),(10, 'id4') to the set and would want the output to have id4, id2, id3, id1 in order. Any suggestions on how to accomplish this? Sorry for my newbness on the subject. * EDIT - additional info * Here is some actual code from the project: for field in lb_fields: t = time.time() self.data[field] = [ (v[field], k) for k, v in self.foreign_keys.iteritems() ] self.data[field].sort(reverse=True) print "Added %s: %03.5f seconds" %(field, (time.time() - t)) foreign_keys is the original data in a dictionary with each id as the key and a dictionary of the additional data as the value. data is a dictionary containing the lists of sorted data. As a side note, as each iteration of the for field in lb_fields runs, the time to sort increases - not by much... but it is noticeable. After 1 million records have been sorted for each of the 16 fields it is using about 4 Gigs of RAM. Eventually this will run on a machine with 48 Gigs.
[ "I don't think BTrees or other traditional sorted data structures (red-black trees, etc) will help you, because they keep order by key, not by corresponding value -- in other words, the field they guarantee as unique is the same one they order by. Your requirements are different, because you want uniqueness along one field, but sortedness by the other.\nWhat are your performance requirements? With a rather simple pure Python implementation based on Python dicts for uniqueness and Python sorts, on a not-blazingly-fast laptop, I get 5 seconds for the original construction (essentially a sort over the million elements, starting with them as a dict), and about 9 seconds for the \"update\" with 20,000 new id/value pairs of which half \"overlap\" (thus overwrite) an existing id and half are new (I can implement the update in a faster way, about 6.5 seconds, but that implementation has an anomaly: if one of the \"new\" pairs is exactly identical to one of the \"old\" ones, both id and value, it's duplicated -- warding against such \"duplication of identicals\" is what pushes me from 6.5 seconds to 9, and I imagine you would need the same kind of precaution).\nHow far are these 5-and-9 seconds times from your requirements (taking into account the actual speed of the machine you'll be running on vs the 2.4 GHz Core Duo, 2GB of RAM, and typical laptop performance issues of this laptop I'm using)? IOW, is it close enough to \"striking distance\" to be worth tinkering and trying to squeeze a last few cycles out of, or do you need orders of magnitude faster performance?\nI've tried several other approaches (with a SQL DB, with C++ and its std::sort &c, ...) but they're all slower, so if you need much higher performance I'm not sure what you could do.\nEdit: since the OP says this performance would be fine but he can't achieve anywhere near it, I guess I'd best show the script I used to measure these times...:\nimport gc\nimport operator\nimport random\nimport time\n\n\nnk = 1000\n\ndef popcon(d):\n for x in xrange(nk*1000):\n d['id%s' % x] = random.randrange(100*1000)\n\ndef sorted_container():\n ctr = dict()\n popcon(ctr)\n start = time.time()\n ctr_sorted = ctr.items()\n ctr_sorted.sort(key=operator.itemgetter(1))\n stend = time.time()\n return stend-start, ctr_sorted\n\ndef do_update(ctr, newones):\n start = time.time()\n dicol = dict(ctr)\n ctr.extend((k,v) for (k,v) in newones if v!=dicol.get(k,None))\n dicnu = dict(newones)\n ctr.sort(key=operator.itemgetter(1))\n newctr = [(k,v) for (k,v) in ctr if v==dicnu.get(k,v)]\n stend = time.time()\n return stend-start, newctr\n\ndef main():\n random.seed(12345)\n for x in range(3):\n duration, ctr = sorted_container()\n print 'dict-to-sorted, %d: %.2f sec, len=%d' % (x, duration, len(ctr))\n newones = [('id%s' % y, random.randrange(nk*100))\n for y in xrange(nk*990,nk*1010)]\n duration, ctr = do_update(ctr, newones)\n print 'updt-to-sorted, %d: %.2f sec, len=%d' % (x, duration, len(ctr))\n del ctr\n gc.collect()\n\nmain()\n\nand this is a typical run:\n$ time python som.py\ndict-to-sorted, 0: 5.01 sec, len=1000000\nupdt-to-sorted, 0: 9.78 sec, len=1010000\ndict-to-sorted, 1: 5.02 sec, len=1000000\nupdt-to-sorted, 1: 9.12 sec, len=1010000\ndict-to-sorted, 2: 5.03 sec, len=1000000\nupdt-to-sorted, 2: 9.12 sec, len=1010000\n\nreal 0m54.073s\nuser 0m52.464s\nsys 0m1.258s\n\nthe overall elapsed time being a few seconds more than the totals I'm measuring, obviously, because it includes the time needed to populate the container with random numbers, generate 
the \"new data\" also randomly, destroy and garbage-collect things at the end of each run, and so forth.\nThis is with the system-supplied Python 2.5.2 on a Macbook with Mac OS X 10.5.7, 2.4 GHz Intel Core Duo, and 2GB of RAM (times don't change much when I use different versions of Python).\n", "It is perfectly possible to solve your problem. For this you should just note that the container types in Python always compare objects by calling their methods. Therefore you should do something like:\nclass Record:\n 'Combination of unique part and sort part.'\n def __init__(self, unique, sort):\n self.unique = unique\n self.sort = sort\n\n def __hash__(self):\n # Hash should be implemented if __eq__ is implemented.\n return hash(self.unique)\n\n def __eq__(self, other):\n return self.unique == other.unique\n\n def __lt__(self, other):\n return self.sort < other.sort\n\n records = btree((Record(u, s) for u, s in zip(unique_data, sort_data)))\n\n print(records.pop())\n\nNotes:\n\ndepending on how your favorite container type is implemented, you might need to add methods for !=, <=, >, >= as well\nthis will not break the relationship between == and <= as long as x.unique == y.unique ==> x.sort == y.sort\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "zope" ]
stackoverflow_0001183428_python_zope.txt
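A compact standard-library restatement of the pattern both answers circle around (unique by id, ordered by value, duplicates overwrite); the ids and numbers come from the question, the function itself is only a sketch:

import operator

scores = {'id1': 200, 'id2': 120, 'id3': 400}     # id -> sort value

def update_and_sort(scores, new_pairs):
    scores.update(new_pairs)        # a duplicated id replaces the old value
    return sorted(scores.iteritems(), key=operator.itemgetter(1))

print update_and_sort(scores, [('id1', 405), ('id4', 10)])
# -> [('id4', 10), ('id2', 120), ('id3', 400), ('id1', 405)]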
Q: How to make translucent sprites in pygame I've just started working with pygame and I'm trying to make a semi-transparent sprite, and the sprite's source file is a non-transparent bitmap file loaded from the disk. I don't want to edit the source image if I can help it. I'm sure there's a way to do this with pygame code, but Google is of no help to me. A: After loading the image, you will need to enable an alpha channel on the Surface. That will look a little like this: background = pygame.display.set_mode() myimage = pygame.image.load("path/to/image.bmp").convert_alpha(background) This will load the image and immediately convert it to a pixel format suitable for alpha blending onto the display surface. You could use some other surface if you need to blit to some other, off screen buffer in another format. You can set per-pixel alpha simply enough, say you have a function which takes a 3-tuple for rgb color value and returns some desired 4-tuple of rgba color+alpha, you could alter the surface per pixel: def set_alphas(color): if color == (255,0,255): # magenta means clear return (0,0,0,0) if color == (0,255,255): # cyan means shadow return (0,0,0,128) r,g,b = color return (r,g,b,255) # otherwise use the solid color from the image. for row in range(myimage.get_height()): for col in range(myimage.get_width()): myimage.set_at((row, col), set_alphas(myimage.get_at((row, col))[:3])) There are other, more useful ways to do this, but this gives you the idea, I hope. A: If your image has a solid color background that you want to become transparent, you can set it as the colorkey value, and pygame will make it transparent when blitting the image. eg: color = image.get_at((0,0)) #we get the color of the upper-left corner pixel image.set_colorkey(color) A: I may have not been clear in my original question, but I think I figured it out on my own. What I was looking for turns out to be Surface's set_alpha() method, so all I had to do was make sure that translucent images were on their own surface. Here's an example with my stripped down code: import pygame, os.path from pygame.locals import * class TranslucentSprite(pygame.sprite.Sprite): def __init__(self): pygame.sprite.Sprite.__init__(self, TranslucentSprite.container) self.image = pygame.image.load(os.path.join('data', 'image.bmp')) self.image = self.image.convert() self.image.set_colorkey(-1, RLEACCEL) self.rect = self.image.get_rect() self.rect.center = (320,240) def main(): pygame.init() screen = pygame.display.set_mode((640,480)) background = pygame.Surface(screen.get_size()) background = background.convert() background.fill((250,250,250)) clock = pygame.time.Clock() transgroups = pygame.sprite.Group() TranslucentSprite.container = transgroups """Here's the Translucency Code""" transsurface = pygame.display.set_mode(screen.get_size()) transsurface = transsurface.convert(screen) transsurface.fill((255,0,255)) transsurface.set_colorkey((255,0,255)) transsurface.set_alpha(50) TranslucentSprite() while 1: clock.tick(60) for event in pygame.event.get(): if event.type == QUIT: return elif event.type == KEYDOWN and event.key == K_ESCAPE: return transgroups.draw(transsurface) screen.blit(background,(0,0)) screen.blit(transsurface,(0,0)) pygame.display.flip() if __name__ == '__main__' : main() Is this the best technique? It seems to be the most simple and straightforward. A: You might consider switching to using PNG images where you can do any kind of transparency you want directly in the image.
How to make translucent sprites in pygame
I've just started working with pygame and I'm trying to make a semi-transparent sprite, and the sprite's source file is a non-transparent bitmap file loaded from the disk. I don't want to edit the source image if I can help it. I'm sure there's a way to do this with pygame code, but Google is of no help to me.
[ "After loading the image, you will need to enable an alpha channel on the Surface. that will look a little like this:\nbackground = pygame.Display.set_mode()\nmyimage = pygame.image.load(\"path/to/image.bmp\").convert_alpha(background)\n\nThis will load the image and immediately convert it to a pixel format suitable for alpha blending onto the display surface. You could use some other surface if you need to blit to some other, off screen buffer in another format.\nYou can set per-pixel alpha simply enough, say you have a function which takes a 3-tuple for rgb color value and returns some desired 4tuple of rgba color+alpha, you could alter the surface per pixel:\ndef set_alphas(color):\n if color == (255,255,0): # magenta means clear\n return (0,0,0,0)\n if color == (0,255,255): # cyan means shadow\n return (0,0,0,128)\n r,g,b = color\n return (r,g,b,255) # otherwise use the solid color from the image.\n\nfor row in range(myimage.get_height()):\n for col in range(myimage,get_width()):\n myimage.set_at((row, col), set_alphas(myimage.get_at((row, col))[:3]))\n\nThere are other, more useful ways to do this, but this gives you the idea, I hope.\n", "If your image has a solid color background that you want it to became transparent you can set it as color_key value, and pygame will make it transparent when blitting the image.\neg:\ncolor = image.get_at((0,0)) #we get the color of the upper-left corner pixel\nimage.set_colorkey(color)\n\n", "I may have not been clear in my original question, but I think I figured it out on my own. What I was looking for turns out to be Surface's set_alpha() method, so all I had to do was make sure that translucent images were on their own surface.\nHere's an example with my stripped down code:\nimport pygame, os.path\nfrom pygame.locals import *\n\nclass TranslucentSprite(pygame.sprite.Sprite):\n def __init__(self):\n pygame.sprite.Sprite.__init__(self, TranslucentSprite.container)\n self.image = pygame.image.load(os.path.join('data', 'image.bmp'))\n self.image = self.image.convert()\n self.image.set_colorkey(-1, RLEACCEL)\n self.rect = self.image.get_rect()\n self.rect.center = (320,240)\n\ndef main():\n pygame.init()\n screen = pygame.display.set_mode((640,480))\n background = pygame.Surface(screen.get_size())\n background = background.convert()\n background.fill((250,250,250))\n clock = pygame.time.Clock()\n transgroups = pygame.sprite.Group()\n TranslucentSprite.container = transgroups\n\n \"\"\"Here's the Translucency Code\"\"\"\n transsurface = pygame.display.set_mode(screen.get_size())\n transsurface = transsurface.convert(screen)\n transsurface.fill((255,0,255))\n transsurface.set_colorkey((255,0,255))\n transsurface.set_alpha(50)\n\n TranslucentSprite()\n while 1:\n clock.tick(60)\n for event in pygame.event.get():\n if event.type == QUIT:\n return\n elif event.type == KEYDOWN and event.key == K_ESCAPE:\n return\n transgroups.draw(transsurface)\n screen.blit(background,(0,0))\n screen.blit(transsurface,(0,0))\n pygame.display.flip()\n\nif __name__ == '__main__' : main()\n\nIs this the best technique? It seems to be the most simple and straightforward.\n", "you might consider switching to using png images where you can do any kind of transparency you want directly in the image.\n" ]
[ 4, 2, 2, 1 ]
[]
[]
[ "graphics", "pygame", "python", "sprite" ]
stackoverflow_0001247921_graphics_pygame_python_sprite.txt
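For reference, the shortest route to per-surface translucency via set_alpha(), assuming a 640x480 window and an image file on disk (both assumptions, not taken from the thread):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
sprite = pygame.image.load('image.bmp').convert()
sprite.set_alpha(128)                  # 0 = fully transparent, 255 = opaque

screen.fill((250, 250, 250))
screen.blit(sprite, (100, 100))        # blended at roughly 50% over the fill
pygame.display.flip()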
Q: Writing Windows GUI applications with embedded Python scripts What would be the optimal way to develop a basic graphical application for Windows based on a Python console script? It would be great if the solution could be distributed as a standalone directory, containing the .exe file. A: As far as I understand your question, you want to write a graphical windows application in Python, to do this I suggest using wxPython and then py2exe to create a standalone exe that can run on any machine without requiring python to be installed The following tutorial shows everything step by step: Quickly Creating Professional Looking Application Using wxPython, py2exe and InnoSetup A: I would recommend that you use IronPython, which is Microsoft's implementation of Python for the .NET framework. A: Tkinter is quick and easy to use. Tkinter is in the Python standard library.
Writing Windows GUI applications with embedded Python scripts
What would be the optimal way to develop a basic graphical application for Windows based on a Python console script? It would be great if the solution could be distributed as a standalone directory, containing the .exe file.
[ "As far as I understand your question, you want to write a graphical windows application in Python, to do this I suggest using wxPython and then py2exe to create a standalone exe that can run on any machine without requiring python to be installed\nThe following tutorial shows everything step by step: Quickly Creating Professional\nLooking Application Using wxPython, py2exe and InnoSetup\n", "I would recommend that you use IronPython, which is Microsoft's implementation of Python for the .NET framework.\n", "Tkinter is quick and easy to use. Tkinter is in the Python standard library.\n" ]
[ 8, 3, 2 ]
[]
[]
[ "python", "user_interface", "windows" ]
stackoverflow_0001251260_python_user_interface_windows.txt
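To make the first answer concrete, a minimal py2exe setup script might look like this; 'mytool.py' is a placeholder for the actual script name:

from distutils.core import setup
import py2exe

setup(windows=['mytool.py'])    # 'windows' builds a GUI exe with no console

Running python setup.py py2exe then leaves a dist/ directory that can be shipped as the standalone folder the question asks for.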
Q: Is it possible to update an entry on Google App Engine datastore through the object's dictionary? I tried the following code and it didn't work: class SourceUpdate(webapp.RequestHandler): def post(self): id = int(self.request.get('id')) source = Source.get_by_id(id) for property in self.request.arguments(): if property != 'id': source.__dict__[property] = self.request.get(property) source.put() self.redirect('/source') I am posting all the necessary properties but the entry isn't updated, and no error is shown. How to fix it? BTW class Source(db.Model): #some string properties A: You're bypassing the __setattr__-like functionality that the models' metaclass (type(type(source))) is normally using to deal with attribute-setting properly. Change your inner loop to: for property in self.request.arguments(): if property != 'id': setattr(source, property, self.request.get(property)) and everything should work (if all the types of properties can be properly set from a string, since that's what you'll get from request.get). A: Instead of directly setting model values from the request, you might want to look into using Django forms. They're bundled with App Engine, and facilitate validating form data and storing it in the datastore, as well as generating the form HTML. There's an article on how to use them with the App Engine datastore here. Also, don't forget that making changes based on a GET request is almost always a bad idea, and leads to XSRF vulnerabilities and other problems!
Is it possible to update an entry on Google App Engine datastore through the object's dictionary?
I tried the following code and it didn't work: class SourceUpdate(webapp.RequestHandler): def post(self): id = int(self.request.get('id')) source = Source.get_by_id(id) for property in self.request.arguments(): if property != 'id': source.__dict__[property] = self.request.get(property) source.put() self.redirect('/source') I am posting all the necessary properties but the entry isn't updated, and no error is shown. How to fix it? BTW class Source(db.Model): #some string properties
[ "You're bypassing the __setattr__-like functionality that the models' metaclass (type(type(source))) is normally using to deal with attribute-setting properly. Change your inner loop to:\nfor property in self.request.arguments():\n if property != 'id':\n setattr(source, property, self.request.get(property))\n\nand everything should work (if all the types of properties can be properly set from a string, since that's what you'll get from request.get).\n", "Instead of directly setting model values from the request, you might want to look into using Django forms. They're bundled with App Engine, and facilitate validating form data and storing it in the datastore, as well as generating the form HTML. There's an article on how to use them with the App Engine datastore here.\nAlso, don't forget that making changes based on a GET request is almost always a bad idea, and leads to XSRF vulnerabilities and other problems!\n" ]
[ 2, 2 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001252092_google_app_engine_google_cloud_datastore_python.txt
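A slightly more defensive variant of the accepted fix, setting only names that are actual datastore properties (Model.properties() is part of the db API; the whitelisting itself is just a suggestion, not from the thread):

def update_from_request(entity, request):
    allowed = type(entity).properties()    # dict of property name -> Property
    for name in request.arguments():
        if name != 'id' and name in allowed:
            setattr(entity, name, request.get(name))
    entity.put()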
Q: Python leaking memory while using PyQt and matplotlib I've created a small PyQt based utility in Python that creates PNG graphs using matplotlib when a user clicks a button. Everything works well during the first few clicks, however each time an image is created, the application's memory footprint grows about 120 MB, eventually crashing Python altogether. How can I recover this memory after a graph is created? I've included a simplified version of my code here: import datetime as dt from datetime import datetime import os import gc # For Graphing import matplotlib from pylab import figure, show, savefig from matplotlib import figure as matfigure from matplotlib.dates import MonthLocator, WeekdayLocator, DateFormatter, DayLocator from matplotlib.ticker import MultipleLocator import matplotlib.pyplot as plot import matplotlib.ticker as ticker # For GUI import sys from PyQt4 import QtGui, QtCore class HyperGraph(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setWindowTitle('Title') self.create_widgets() def create_widgets(self): grid = QtGui.QGridLayout() self.generate_button = QtGui.QPushButton("Generate Graph", self) grid.addWidget(self.generate_button, 1, 1) QtCore.QObject.connect(self.generate_button, QtCore.SIGNAL("clicked()"), self.generate_graph) def generate_graph(self): try: fig = figure() ax = fig.add_axes([1,1,1,1]) # set title ax.set_title('Title') # configure x axis plot.xlim(dt.date.today() - dt.timedelta(days=180), dt.date.today()) ax.set_xlabel('Date') fig.set_figwidth(100) # configure y axis plot.ylim(0, 200) ax.set_ylabel('Price') fig.set_figheight(30) # export the graph to a png file plot.savefig('graph.png') except: print 'Error' plot.close(fig) gc.collect() app = QtGui.QApplication(sys.argv) hyper_graph = HyperGraph() hyper_graph.show() sys.exit(app.exec_()) The plot.savefig('graph.png') command seems to be what's gobbling up the memory. I'd greatly appreciate any help! A: It seems that some backends are leaking memory. Try setting your backend explicitly, e.g. import matplotlib matplotlib.use('Agg') # before import pylab import pylab A: The pyplot interface is meant for easy interactive use, but for embedding in an application the object-oriented API is better. For example, pyplot keeps track of all figures you have created. Your plot.close(figure) should get rid of them, but maybe it doesn't get executed -- try putting it inside finally or reusing the same figure object. See this example of embedding matplotlib in a PyQt4 application using the object-oriented API. It's more work, but since everything is explicit, you shouldn't get memory leaks from the behind-the-scenes automation that pyplot does.
Python leaking memory while using PyQt and matplotlib
I've created a small PyQt based utility in Python that creates PNG graphs using matplotlib when a user clicks a button. Everything works well during the first few clicks, however each time an image is created, the application's memory footprint grows about 120 MB, eventually crashing Python altogether. How can I recover this memory after a graph is created? I've included a simplified version of my code here: import datetime as dt from datetime import datetime import os import gc # For Graphing import matplotlib from pylab import figure, show, savefig from matplotlib import figure as matfigure from matplotlib.dates import MonthLocator, WeekdayLocator, DateFormatter, DayLocator from matplotlib.ticker import MultipleLocator import matplotlib.pyplot as plot import matplotlib.ticker as ticker # For GUI import sys from PyQt4 import QtGui, QtCore class HyperGraph(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setWindowTitle('Title') self.create_widgets() def create_widgets(self): grid = QtGui.QGridLayout() self.generate_button = QtGui.QPushButton("Generate Graph", self) grid.addWidget(self.generate_button, 1, 1) QtCore.QObject.connect(self.generate_button, QtCore.SIGNAL("clicked()"), self.generate_graph) def generate_graph(self): try: fig = figure() ax = fig.add_axes([1,1,1,1]) # set title ax.set_title('Title') # configure x axis plot.xlim(dt.date.today() - dt.timedelta(days=180), dt.date.today()) ax.set_xlabel('Date') fig.set_figwidth(100) # configure y axis plot.ylim(0, 200) ax.set_ylabel('Price') fig.set_figheight(30) # export the graph to a png file plot.savefig('graph.png') except: print 'Error' plot.close(fig) gc.collect() app = QtGui.QApplication(sys.argv) hyper_graph = HyperGraph() hyper_graph.show() sys.exit(app.exec_()) The plot.savefig('graph.png') command seems to be what's gobbling up the memory. I'd greatly appreciate any help!
[ "It seems that some backends are leaking memory. Try setting your backend explicitly, e.g.\nimport matplotlib\nmatplotlib.use('Agg') # before import pylab\nimport pylab\n\n", "The pyplot interface is meant for easy interactive use, but for embedding in an application the object-oriented API is better. For example, pyplot keeps track of all figures you have created. Your plot.close(figure) should get rid of them, but maybe it doesn't get executed -- try putting it inside finally or reusing the same figure object.\nSee this example of embedding matplotlib in a PyQt4 application using the object-oriented API. It's more work, but since everything is explicit, you shouldn't get memory leaks from the behind-the-scenes automation that pyplot does.\n" ]
[ 7, 6 ]
[]
[]
[ "matplotlib", "memory_leaks", "pyqt", "python" ]
stackoverflow_0001249182_matplotlib_memory_leaks_pyqt_python.txt
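Putting the two answers together, the leak-free save path looks roughly like this with the Agg backend and the object-oriented API (labels copied from the question, sizes arbitrary):

import matplotlib
matplotlib.use('Agg')                   # must run before any pyplot/pylab import
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg

fig = Figure(figsize=(8, 6))
canvas = FigureCanvasAgg(fig)
ax = fig.add_subplot(111)
ax.set_title('Title')
ax.set_xlabel('Date')
ax.set_ylabel('Price')
canvas.print_png('graph.png')           # nothing is ever registered with pyplot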
Q: Avoid program exit on I/O error I have a Python script using shutil.copy2 extensively. Since I use it to copy files over the network, I get too frequent I/O errors, which lead to the abortion of my program's execution: Traceback (most recent call last): File "run_model.py", line 46, in <module> main() File "run_model.py", line 41, in main tracerconfigfile=OPT.tracerconfig) File "ModelRun.py", line 517, in run self.copy_data() File "ModelRun.py", line 604, in copy_ecmwf_data shutil.copy2(remotefilename, localfilename) File "/usr/lib64/python2.6/shutil.py", line 99, in copy2 copyfile(src, dst) File "/usr/lib64/python2.6/shutil.py", line 54, in copyfile copyfileobj(fsrc, fdst) File "/usr/lib64/python2.6/shutil.py", line 27, in copyfileobj buf = fsrc.read(length) IOError: [Errno 5] Input/output error How can I avoid abortion of my program's execution and have it retry the copy process instead? The code I'm using already checks that the file is actually copied completely by checking the filesize: def check_file(file, size=0): if not os.path.exists(file): return False if (size != 0 and os.path.getsize(file) != size): return False return True while (check_file(rempdg,self._ndays*130160640) is False): shutil.copy2(locpdg, rempdg) A: Which block is giving the error? Just wrap a try/except around it: def check_file(file, size=0): try: if not os.path.exists(file): return False if (size != 0 and os.path.getsize(file) != size): return False return True except IOError: return False # or True, whatever your default is while (check_file(rempdg,self._ndays*130160640) is False): try: shutil.copy2(locpdg, rempdg) except IOError: pass # ignore the IOError and keep going A: you can use try: ... except IOError as err: ... to catch the errors and treat them Have a look on this
Avoid program exit on I/O error
I have a Python script using shutil.copy2 extensively. Since I use it to copy files over the network, I get too frequent I/O errors, which lead to the abortion of my program's execution: Traceback (most recent call last): File "run_model.py", line 46, in <module> main() File "run_model.py", line 41, in main tracerconfigfile=OPT.tracerconfig) File "ModelRun.py", line 517, in run self.copy_data() File "ModelRun.py", line 604, in copy_ecmwf_data shutil.copy2(remotefilename, localfilename) File "/usr/lib64/python2.6/shutil.py", line 99, in copy2 copyfile(src, dst) File "/usr/lib64/python2.6/shutil.py", line 54, in copyfile copyfileobj(fsrc, fdst) File "/usr/lib64/python2.6/shutil.py", line 27, in copyfileobj buf = fsrc.read(length) IOError: [Errno 5] Input/output error How can I avoid abortion of my program's execution and have it retry the copy process instead? The code I'm using already checks that the file is actually copied completely by checking the filesize: def check_file(file, size=0): if not os.path.exists(file): return False if (size != 0 and os.path.getsize(file) != size): return False return True while (check_file(rempdg,self._ndays*130160640) is False): shutil.copy2(locpdg, rempdg)
[ "Which block is giving the error? Just wrap a try/except around it:\ndef check_file(file, size=0):\n try:\n if not os.path.exists(file):\n return False\n if (size != 0 and os.path.getsize(file) != size):\n return False\n return True\n except IOError:\n return False # or True, whatever your default is\n\nwhile (check_file(rempdg,self._ndays*130160640) is False):\n try:\n shutil.copy2(locpdg, rempdg)\n except IOError:\n pass # ignore the IOError and keep going\n\n", "you can use \ntry:\n ...\nexcept IOError as err:\n ...\n\nto catch the errors and treat them\nHave a look on this\n" ]
[ 8, 6 ]
[]
[]
[ "python", "shutil" ]
stackoverflow_0001254292_python_shutil.txt
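One caveat with the bare while loop in the question: on a dead network share it will spin forever. A bounded variant (the attempt count and delay are arbitrary choices):

import shutil
import time

def copy_with_retries(src, dst, attempts=5, delay=2.0):
    for attempt in range(attempts):
        try:
            shutil.copy2(src, dst)
            return True
        except IOError:
            time.sleep(delay)           # give the network a moment to recover
    return False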
Q: string quoting issues in doctests When I run doctests on different Python versions (2.5 vs 2.6) and different platforms (FreeBSD vs Mac OS) strings get quoted differently: Failed example: decode('{"created_by":"test","guid":123,"num":5.00}') Expected: {'guid': 123, 'num': Decimal("5.00"), 'created_by': 'test'} Got: {'guid': 123, 'num': Decimal('5.00'), 'created_by': 'test'} So on one box repr(decimal.Decimal('5.00')) results in 'Decimal("5.00")', on the other in "Decimal('5.00')". Is there any way to get around the issue without creating more complicated test logic? A: This is actually because the decimal module's source code has changed: In python 2.4 and python2.5 the decimal.Decimal.__repr__ function contains: return 'Decimal("%s")' % str(self) whereas in python2.6 it contains: return "Decimal('%s')" % str(self) So in this case the best thing to do is just to print out str() of the result and check the type separately if necessary... A: Following the hints by David Fraser I found this suggestion by Raymond Hettinger on the Python mailing list. I now use something like this: import sys if sys.version_info[:2] <= (2, 5): # ugly monkeypatch to make doctests work. For the reasons see # See http://mail.python.org/pipermail/python-dev/2008-July/081420.html # It can go away once all our boxes run python > 2.5 decimal.Decimal.__repr__ = lambda s: "Decimal('%s')" % str(s)
string quoting issues in doctests
When I run doctests on different Python versions (2.5 vs 2.6) and different platforms (FreeBSD vs Mac OS) strings get quoted differently: Failed example: decode('{"created_by":"test","guid":123,"num":5.00}') Expected: {'guid': 123, 'num': Decimal("5.00"), 'created_by': 'test'} Got: {'guid': 123, 'num': Decimal('5.00'), 'created_by': 'test'} So on one box repr(decimal.Decimal('5.00')) results in 'Decimal("5.00")', on the other in "Decimal('5.00')". Is there any way to get around the issue without creating more complicated test logic?
[ "This is actually because the decimal module's source code has changed: In python 2.4 and python2.5 the decimal.Decimal.__repr__ function contains:\nreturn 'Decimal(\"%s\")' % str(self)\n\nwhereas in python2.6 it contains:\nreturn \"Decimal('%s')\" % str(self)\n\nSo in this case the best thing to do is just to print out str() of the result and check the type separately if necessary...\n", "Following the hits by David Fraser i found this suggestion by Raymond Hettinger on the Python mailinglist.\nI now use something like this:\nimport sys\nif sys.version_info[:2] <= (2, 5):\n # ugly monkeypatch to make doctests work. For the reasons see\n # See http://mail.python.org/pipermail/python-dev/2008-July/081420.html\n # It can go away once all our boxes run python > 2.5\n decimal.Decimal.__repr__ = lambda s: \"Decimal('%s')\" % str(s)\n\n" ]
[ 4, 0 ]
[]
[]
[ "doctest", "python", "testing" ]
stackoverflow_0001254187_doctest_python_testing.txt
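The first answer's advice to print str() of the result, spelled out as a doctest that passes unchanged on both 2.5 and 2.6 (the function name is made up):

import decimal

def parse_price(text):
    """
    >>> str(parse_price('5.00'))
    '5.00'
    """
    return decimal.Decimal(text)

if __name__ == '__main__':
    import doctest
    doctest.testmod()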
Q: Python API to fetch PGP public key from key server? Is there any Python API which can fetch a PGP public key from the public key server? A: You can use HTTP (urllib2 and beautiful soup would be my choice) if you're querying the MIT PGP keyserver. http://pgp.mit.edu/extracthelp.html
Python API to fetch PGP public key from key server?
Is there any Python API which can fetch a PGP public key from the public key server?
[ "You can use HTTP (urllib2 and beautiful soup would be my choice) if you're querying the MIT PGP keyserver.\nhttp://pgp.mit.edu/extracthelp.html\n" ]
[ 3 ]
[]
[]
[ "pgp", "python" ]
stackoverflow_0001254425_pgp_python.txt
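A rough sketch of that HTTP route using the standard HKP lookup endpoint; the server URL and search term are examples only:

import urllib
import urllib2

def fetch_key(search, server='http://pgp.mit.edu'):
    params = urllib.urlencode({'op': 'get', 'search': search})
    url = '%s/pks/lookup?%s' % (server, params)
    return urllib2.urlopen(url).read()   # an HTML page wrapping the ASCII-armored key

# e.g. print fetch_key('0x12345678')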
Q: Python and web-tags regex I have a webpage's content and need to get some data from it. It looks like: < div class="deg">DATA< /div> As I understand it, I have to use a regex, but I can't choose one. I tried the code below but got no results. Please correct me: regexHandler = re.compile('(<div class="deg">(?P<div class="deg">.*?)</div>)') result = regexHandler.search( pageData ) A: I suggest using a good HTML parser (such as BeautifulSoup -- but for your purposes, i.e. with well-formed HTML as input, the ones that come with the Python standard library, such as HTMLParser, should also work well) rather than raw REs to parse HTML. If you want to persist with the raw RE approach, the pattern: r'<div class="deg">([^<]*)</div>' looks like the simplest way to get the string 'DATA' out of the string '<div class="deg">DATA</div>' -- assuming that's what you're after. You may need to add one or more \s* in spots where you need to tolerate optional whitespace. A: If you want the div tags included in the matched item: regexpHandler = re.compile('(<div class="deg">.*?</div>)') If you don't want the div tags included, only the DATA portion: regexpHandler = re.compile('<div class="deg">(.*?)</div>') Then to run the match and get the result: result = regexHandler.search( pageData ) matchedText = result.groups()[0] A: You can use simple string functions in Python, no need for regex mystr = """< div class="deg">DATA< /div>""" if "div" in mystr and "class" in mystr and "deg" in mystr: s = mystr.split(">") for n,item in enumerate(s): if "deg" in item: print s[n+1][:s[n+1].index("<")] My approach: get something to split on, e.g. in the above I split on ">". Then go through the split items, check for "deg", and get the item after it, since "deg" appears before the data you want to get. Of course, this is not the only approach. A: While it is OK to use a regex for quick and dirty HTML processing, a much better and cleaner way is to use an HTML parser like lxml.html and to query the parsed tree with XPath or CSS Selectors. html = """<html><body><div class="deg">DATA1</div><div class="deg">DATA2</div></body></html>""" import lxml.html page = lxml.html.fromstring(html) #page = lxml.html.parse(url) for element in page.findall('.//div[@class="deg"]'): print element.text #using css selectors from lxml.cssselect import CSSSelector sel = CSSSelector("div.deg") for element in sel(page): print element.text
Python and web-tags regex
I have a webpage's content and need to get some data from it. It looks like: < div class="deg">DATA< /div> As I understand it, I have to use a regex, but I can't choose one. I tried the code below but got no results. Please correct me: regexHandler = re.compile('(<div class="deg">(?P<div class="deg">.*?)</div>)') result = regexHandler.search( pageData )
[ "I suggest using a good HTML parser (such as BeautifulSoup -- but for your purposes, i.e. with well-formed HTML as input, the ones that come with the Python standard library, such as HTMLParser, should also work well) rather than raw REs to parse HTML.\nIf you want to persist with the raw RE approach, the pattern:\nr'<div class=\"deg\">([^<]*)</div>'\n\nlooks like the simplest way to get the string 'DATA' out of the string '<div class=\"deg\">DATA</div>' -- assuming that's what you're after. You may need to add one or more \\s* in spots where you need to tolerate optional whitespace.\n", "If you want the div tags included in the matched item:\nregexpHandler = re.compile('(<div class=\"deg\">.*?</div>)')\n\nIf you don't want the div tags included, only the DATA portion:\nregexpHandler = re.compile('<div class=\"deg\">(.*?)</div>')\n\nThen to run the match and get the result:\nresult = regexHandler.search( pageData )\nmatchedText = result.groups()[0]\n\n", "you can use simple string functions in Python, no need for regex\nmystr = \"\"\"< div class=\"deg\">DATA< /div>\"\"\"\nif \"div\" in mystr and \"class\" in mystr and \"deg\" in mystr:\n s = mystr.split(\">\")\n for n,item in enumerate(s):\n if \"deg\" in item:\n print s[n+1][:s[n+1].index(\"<\")]\n\nmy approach, get something to split on. eg in the above, i split on \">\". Then go through the splitted items, check for \"deg\", and get the item after it, since \"deg\" appears before the data you want to get. of course, this is not the only approach. \n", "While it is ok to use rexex for quick and dirty html processing a much better and cleaner way is to use a html parser like lxml.html and to query the parsed tree with XPath or CSS Selectors.\nhtml = \"\"\"<html><body><div class=\"deg\">DATA1</div><div class=\"deg\">DATA2</div></body></html>\"\"\"\n\nimport lxml.html\n\npage = lxml.html.fromstring(html)\n#page = lxml.html.parse(url)\n\nfor element in page.findall('.//div[@class=\"deg\"]'):\n print element.text\n\n#using css selectors\nfrom lxml.cssselect import CSSSelector\nsel = CSSSelector(\"div.deg\")\n\nfor element in sel(page):\n print element.text\n\n" ]
[ 6, 3, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001252316_python_regex.txt
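Since the top answer recommends BeautifulSoup, here is what the same extraction looks like with BeautifulSoup 3 (the pageData string stands in for the fetched HTML):

from BeautifulSoup import BeautifulSoup     # BeautifulSoup 3.x import path

pageData = '<div class="deg">DATA</div>'
soup = BeautifulSoup(pageData)
for div in soup.findAll('div', {'class': 'deg'}):
    print div.string                        # -> DATA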
Q: Non-editable text box in wxPython How to create a non-editable text box with no cursor in wxPython to dump text in? A: wx.StaticText You could also use a regular TextCtrl with the style TE_READONLY but that shows a cursor and the text looks editable, but it isn't.
Non-editable text box in wxPython
How to create a non-editable text box with no cursor in wxPython to dump text in?
[ "wx.StaticText\nYou could also use a regular TextCtrl with the style TE_READONLY but that shows a cursor and the text looks editable, but it isn't.\n" ]
[ 7 ]
[]
[]
[ "python", "textbox", "wxpython", "wxwidgets" ]
stackoverflow_0001254819_python_textbox_wxpython_wxwidgets.txt
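A minimal version of the answer's TE_READONLY note for dumping text (window title and sample text are placeholders):

import wx

app = wx.App(False)
frame = wx.Frame(None, title='Output', size=(400, 300))
log = wx.TextCtrl(frame, style=wx.TE_MULTILINE | wx.TE_READONLY)
log.AppendText('dumped text goes here\n')
frame.Show()
app.MainLoop()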
Q: Coroutines for game design? I've heard that coroutines are a good way to structure games (e.g., PEP 342: "Coroutines are a natural way of expressing many algorithms, such as simulations, games...") but I'm having a hard time wrapping my head around how this would actually be done. I see from this article that coroutines can represent states in a state machine which transition to each other using a scheduler, but it's not clear to me how this applies to a game where the game state is changing based on moves from multiple players. Is there any simple example of a game written using coroutines available? Or can someone offer a sketch of how it might be done? A: Coroutines allow for creating large amounts of very-lightweight "microthreads" with cooperative multitasking (i.e. microthreads suspending themselves willfully to allow other microthreads to run). Read up in Dave Beazley's article on this subject. Now, it's obvious how such microthreads can be useful for game programming. Consider a realtime strategy game, where you have dozens of units - each with a mind of its own. It may be a convenient abstraction for each unit's AI to run as such a microthread in your simulated multitasking environment. This is just one example, I'm sure there are more. The "coroutine game programming" search on Google seems to bring up interesting results. A: The most prominent case of coroutines is probably old graphical point&click adventure games, where they were used to script cutscenes and other animated sequences in the game. A simple code example would look like this: # script_file.scr bob.walkto(jane) bob.lookat(jane) bob.say("How are you?") wait(2) jane.say("Fine") ... This whole sequence can't be written as normal code, as you want to see bob do his walk animation after you did bob.walkto(jane) instead of jumping right to the next line. For the walk animation to play you however need to give control back to the game engine and that is where coroutines come into play. This whole sequence is executed as a coroutine, meaning you have the ability to suspend and resume it as you like. A command like bob.walkto(jane) thus tells the engine-side bob object its target and then suspends the coroutine, waiting for a wakeup call when bob has reached his target. On the engine side things might look like this (pseudo code): class Script: def __init__(self, filename): self.coroutine = Coroutine(filename) self.is_wokenup = True def wakeup(self): self.is_wokenup = True def update(self): if self.is_wokenup: self.is_wokenup = False self.coroutine.run() class Character: def walkto(self, target, script): self.script = script self.target = target def update(self): if self.target: move_one_step_closer_to(self.target) if close_enough_to(self.target): self.script.wakeup() self.target = None self.script = None objects = [Character("bob"), Character("jane")] scripts = [Script("script_file.scr")] while True: for obj in objects: obj.update() for scr in scripts: scr.update() A little word of warning however: while coroutines make coding these sequences very simple, not every implementation you will find of them will be built with serialisation in mind, so game saving will become quite a troublesome issue if you make heavy use of coroutines. This example is also just the most basic case of a coroutine in a game, coroutines themselves can be used for plenty of other tasks and algorithms as well. A: One way coroutines can be used in games is as lightweight threads in an actor-like model, like in Kamaelia. Each object in your game would be a Kamaelia 'component'. 
A component is an object that can pause execution by yielding when it's allowable to pause. These components also have a messaging system that allows them to safely communicate with each other asynchronously. All the objects would be concurrently doing their own thing, with messages sent to each other when interactions occur. So, it's not really specific to games, but anything where you have a multitude of communicating components acting concurrently could benefit from coroutines. A: Two links that you might find interesting: http://aigamedev.com/open/articles/coroutine-state-machine/ http://www.dabeaz.com/coroutines/ (search for 'task' in this page and pdf) A: It is quite common to want to use coroutines to represent individual actor AI scripts. Unfortunately people tend to forget that coroutines have all the same synchronisation and mutual exclusion problems that threads have, just at a higher level. So you often need to firstly do as much as you can to eliminate local state, and secondly write robust ways to handle the errors in a coroutine that arise when something you were referring to ceases to exist. So in practice they are quite hard to get a benefit from, which is why languages like UnrealScript that use some semblance of coroutines push much of the useful logic out to atomic events. Some people get good utility out of them, e.g. the EVE Online people, but they had to pretty much architect their entire system around the concept. (And how they handle contention over shared resources is not well documented.)
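To make the microthread idea above concrete, here is a minimal sketch using plain Python generators as coroutines with a trivial round-robin scheduler; the unit names and patrol behaviour are illustrative, not taken from any of the projects mentioned:

    def patrol(name, points):
        # A generator-based coroutine: each yield hands control
        # back to the scheduler until the next tick.
        while True:
            for p in points:
                print '%s moving towards %r' % (name, p)
                yield

    def scheduler(tasks, ticks):
        # Round-robin: advance every coroutine once per tick.
        for tick in range(ticks):
            for task in tasks:
                task.next()

    units = [patrol('bob', [(0, 0), (1, 0)]),
             patrol('jane', [(5, 5), (6, 5)])]
    scheduler(units, 3)

Each unit keeps its own loop state between ticks without an explicit state machine, which is the convenience the answers above are describing.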
Coroutines for game design?
I've heard that coroutines are a good way to structure games (e.g., PEP 342: "Coroutines are a natural way of expressing many algorithms, such as simulations, games...") but I'm having a hard time wrapping my head around how this would actually be done. I see from this article that coroutines can represent states in a state machine which transition to each other using a scheduler, but it's not clear to me how this applies to a game where the game state is changing based on moves from multiple players. Is there any simple example of a game written using coroutines available? Or can someone offer a sketch of how it might be done?
[ "Coroutines allow for creating large amounts of very-lightweight \"microthreads\" with cooperative multitasking (i.e. microthreads suspending themselves willfully to allow other microthreads to run). Read up in Dave Beazley's article on this subject.\nNow, it's obvious how such microthreads can be useful for game programming. Consider a realtime strategy game, where you have dozens of units - each with a mind of its own. It may be a convenient abstraction for each unit's AI to run as such a microthread in your simulated multitasking environment. This is just one example, I'm sure there are more.\nThe \"coroutine game programming\" search on Google seems to bring up interesting results.\n", "The most prominent case of coroutines are probally old graphical point&click adventure games, where they where used to script cutscenes and other animated sequences in the game. A simple code example would look like this:\n# script_file.scr\nbob.walkto(jane)\nbob.lookat(jane)\nbob.say(\"How are you?\")\nwait(2)\njane.say(\"Fine\")\n...\n\nThis whole sequence can't be written as normal code, as you want to see bob do his walk animation after you did bob.walkto(jane) instead of jumping right to the next line. For the walk animation to play you however need to give control back to the game engine and that is where coroutines come into play.\nThis whole sequence is executed as a coroutine, meaning you have the ability to suspend and resume it as you like. A command like bob.walkto(jane) thus tells the engine side bob object its target and then suspends the coroutine, waiting for a wakeup call when bob has reached his target.\nOn the engine side things might look like this (pseudo code):\nclass Script:\n def __init__(self, filename):\n self.coroutine = Coroutine(filename)\n self.is_wokenup = True\n\n def wakeup(self):\n self.is_wokenup = False;\n\n def update():\n if self.is_wokenup:\n coroutine.run() \n\n\nclass Character:\n def walkto(self, target, script):\n self.script = script\n self.target = target\n\n def update(self):\n if target:\n move_one_step_closer_to(self.target)\n if close_enough_to(self.target):\n self.script.wakeup()\n\n self.target = None\n self.script = None\n\nobjects = [Character(\"bob\"), Character(\"jane\")]\nscripts = [Script(\"script_file.scr\")]\n\nwhile True:\n for obj in objects:\n obj.update()\n\n for scr in scripts:\n scr.update()\n\nA litte word of warning however, while coroutines make coding these sequences very simple, not every implementations you will find of them will be build with serialisation in mind, so game saving will become quite a troublesome issue if you make heavy use of coroutines.\nThis example is also just the most basic case of a coroutine in a game, coroutines themselves can be used for plenty of other tasks and algorithms as well.\n", "One way coroutines can be used in games is as light weight threads in an actor like model, like in Kamaelia.\nEach object in your game would be a Kamaelia 'component'. A component is an object that can pause execution by yielding when it's allowable to pause. 
These components also have a messaging system that allows them to safely communicate to each other asynchronously.\nAll the objects would be concurrently doing their own thing, with messages sent to each other when interactions occur.\nSo, it's not really specific to games, but anything when you have a multitude of communicating components acting concurrently could benefit from coroutines.\n", "Two links that you might find interesting:\nhttp://aigamedev.com/open/articles/coroutine-state-machine/\nhttp://www.dabeaz.com/coroutines/ (search for 'task' in this page and pdf)\n", "It is quite common to want to use coroutines to represent individual actor AI scripts. Unfortunately people tend to forget that coroutines have all the same synchronisation and mutual exclusion problems that threads have, just at a higher level. So you often need to firstly do as much as you can to eliminate local state, and secondly write robust ways to handle the errors in a coroutine that arise when something you were referring to ceases to exist.\nSo in practice they are quite hard to get a benefit from, which is why languages like UnrealScript that use some semblance of coroutines push much of the useful logic out to atomic events. Some people get good utility out of them, eg. the EVE Online people, but they had to pretty much architect their entire system around the concept. (And how they handle contention over shared resources is not well documented.)\n" ]
[ 10, 10, 7, 2, 1 ]
[]
[]
[ "coroutine", "python" ]
stackoverflow_0001247894_coroutine_python.txt
Q: Python: using threads to call subprocess.Popen multiple times I have a service that is running (Twisted jsonrpc server). When I make a call to "run_procs" the service will look at a bunch of objects and inspect their timestamp property to see if they should run. If they should, they get added to a thread_pool (list) and then every item in the thread_pool gets the start() method called. I have used this setup for several other applications where I wanted to run a function within my class with threading. However, when I am using a subprocess.Popen call in the function called by each thread, the calls run one-at-a-time instead of running concurrently like I would expect. Here is some sample code: class ProcService(jsonrpc.JSONRPC): self.thread_pool = [] self.running_threads = [] self.lock = threading.Lock() def clean_pool(self, thread_pool, join=False): for th in [x for x in thread_pool if not x.isAlive()]: if join: th.join() thread_pool.remove(th) del th return thread_pool def run_threads(self, parallel=10): while len(self.running_threads)+len(self.thread_pool) > 0: self.clean_pool(self.running_threads, join=True) n = min(max(parallel - len(self.running_threads), 0), len(self.thread_pool)) if n > 0: for th in self.thread_pool[0:n]: th.start() self.running_threads.extend(self.thread_pool[0:n]) del self.thread_pool[0:n] time.sleep(.01) for th in self.running_threads+self.thread_pool: th.join() def jsonrpc_run_procs(self): for i, item in enumerate(self.items): if item.should_run(): self.thread_pool.append(threading.Thread(target=self.run_proc, args=tuple([item]))) self.run_threads(5) def run_proc(self, proc): self.lock.acquire() print "\nSubprocess started" p = subprocess.Popen('%s/program_to_run.py %s' %(os.getcwd(), proc.data), shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,) stdout_value = proc.communicate('through stdin to stdout')[0] self.lock.release() Any help/suggestions are appreciated. * EDIT * OK. So now I want to read back the output from the stdout pipe. This works some of the time, but also fails with select.error: (4, 'Interrupted system call') I assume this is because sometimes the process has already terminated before I try to run the communicate method. The code in the run_proc method has been changed to: def run_proc(self, proc): self.lock.acquire() p = subprocess.Popen( #etc self.running_procs.append([p, proc.data.id]) self.lock.release() After I call self.run_threads(5) I call self.check_procs(). The check_procs method iterates the list of running_procs to check whether poll() is not None. How can I get output from the pipe?
I have tried both of the following. Calling check_procs once: def check_procs(self): for proc_details in self.running_procs: proc = proc_details[0] while (proc.poll() == None): time.sleep(0.1) stdout_value = proc.communicate('through stdin to stdout')[0] self.running_procs.remove(proc_details) print proc_details[1], stdout_value del proc_details And calling check_procs in a while loop like: while len(self.running_procs) > 0: self.check_procs() def check_procs(self): for proc_details in self.running_procs: if (proc.poll() is not None): stdout_value = proc.communicate('through stdin to stdout')[0] self.running_procs.remove(proc_details) print proc_details[1], stdout_value del proc_details A: I think the key code is: self.lock.acquire() print "\nSubprocess started" p = subprocess.Popen( # etc stdout_value = proc.communicate('through stdin to stdout')[0] self.lock.release() the explicit calls to acquire and release should guarantee serialization -- don't you observe serialization just as invariably if you do other things in this block instead of the subprocess use? Edit: all silence here, so I'll add the suggestion to remove the locking and instead put each stdout_value on a Queue.Queue() instance -- Queue is intrinsically threadsafe (deals with its own locking) so you can get (or get_nowait, etc etc) results from it once they're ready and have been put there. In general, Queue is the best way to arrange thread communication (and often synchronization too) in Python, any time it can be feasibly arranged to do things that way. Specifically: add import Queue at the start; give up making, acquiring and releasing self.lock (just delete those three lines); add self.q = Queue.Queue() to the __init__; right after the call stdout_value = proc.communicate(... add one statement self.q.put(stdout_value); now e.g. finish the jsonrpc_run_procs method with while not self.q.empty(): result = self.q.get() print 'One result is %r' % result to confirm that all the results are there. (Normally the empty method of queues is not reliable, but in this case all threads putting to the queue are already finished, so you should be fine). A: Your specific problem is probably caused by the line stdout_value = proc.communicate('through stdin to stdout')[0]. Subprocess.communicate will "Wait for process to terminate", which, when used with a lock, will run one at a time. What you can do is simply add the p variable to a list and use the subprocess API to wait for the subprocesses to finish. Periodically poll each subprocess in your main thread. On second look, it looks like you may have an issue on this line as well: for th in self.running_threads+self.thread_pool: th.join(). Thread.join() is another method that will wait for the thread to finish.
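As a self-contained illustration of the Queue-based suggestion above, here is a sketch that runs several subprocesses from threads without any shared lock; the echo commands are stand-ins for the real program_to_run.py calls:

    import subprocess
    import threading
    import Queue

    def run_proc(cmd, q):
        # No lock around Popen/communicate, so the workers overlap.
        p = subprocess.Popen(cmd, shell=True,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
        out = p.communicate('through stdin to stdout')[0]
        q.put((cmd, out))

    q = Queue.Queue()
    cmds = ['echo one', 'echo two', 'echo three']
    workers = [threading.Thread(target=run_proc, args=(c, q))
               for c in cmds]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    while not q.empty():
        print q.get()

Because each result is put on the queue by its own worker, the main thread never has to poll the processes itself.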
Python: using threads to call subprocess.Popen multiple times
I have a service that is running (Twisted jsonrpc server). When I make a call to "run_procs" the service will look at a bunch of objects and inspect their timestamp property to see if they should run. If they should, they get added to a thread_pool (list) and then every item in the thread_pool gets the start() method called. I have used this setup for several other applications where I wanted to run a function within my class with theading. However, when I am using a subprocess.Popen call in the function called by each thread, the calls run one-at-a-time instead of running concurrently like I would expect. Here is some sample code: class ProcService(jsonrpc.JSONRPC): self.thread_pool = [] self.running_threads = [] self.lock = threading.Lock() def clean_pool(self, thread_pool, join=False): for th in [x for x in thread_pool if not x.isAlive()]: if join: th.join() thread_pool.remove(th) del th return thread_pool def run_threads(self, parallel=10): while len(self.running_threads)+len(self.thread_pool) > 0: self.clean_pool(self.running_threads, join=True) n = min(max(parallel - len(self.running_threads), 0), len(self.thread_pool)) if n > 0: for th in self.thread_pool[0:n]: th.start() self.running_threads.extend(self.thread_pool[0:n]) del self.thread_pool[0:n] time.sleep(.01) for th in self.running_threads+self.thread_pool: th.join() def jsonrpc_run_procs(self): for i, item in enumerate(self.items): if item.should_run(): self.thread_pool.append(threading.Thread(target=self.run_proc, args=tuple([item]))) self.run_threads(5) def run_proc(self, proc): self.lock.acquire() print "\nSubprocess started" p = subprocess.Popen('%s/program_to_run.py %s' %(os.getcwd(), proc.data), shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,) stdout_value = proc.communicate('through stdin to stdout')[0] self.lock.release() Any help/suggestions are appreciated. * EDIT * OK. So now I want to read back the output from the stdout pipe. This works some of the time, but also fails with select.error: (4, 'Interrupted system call') I assume this is because sometimes the process has already terminated before I try to run the communicate method. the code in the run_proc method has been changed to: def run_proc(self, proc): self.lock.acquire() p = subprocess.Popen( #etc self.running_procs.append([p, proc.data.id]) self.lock.release() after I call self.run_threads(5) I call self.check_procs() check_procs method iterates the list of running_procs to check for poll() is not None. How can I get output from pipe? I have tried both of the following calling check_procs once: def check_procs(self): for proc_details in self.running_procs: proc = proc_details[0] while (proc.poll() == None): time.sleep(0.1) stdout_value = proc.communicate('through stdin to stdout')[0] self.running_procs.remove(proc_details) print proc_details[1], stdout_value del proc_details calling check_procs in while loop like: while len(self.running_procs) > 0: self.check_procs() def check_procs(self): for proc_details in self.running_procs: if (proc.poll() is not None): stdout_value = proc.communicate('through stdin to stdout')[0] self.running_procs.remove(proc_details) print proc_details[1], stdout_value del proc_details
[ "I think the key code is:\n self.lock.acquire()\n print \"\\nSubprocess started\"\n p = subprocess.Popen( # etc\n stdout_value = proc.communicate('through stdin to stdout')[0]\n self.lock.release()\n\nthe explicit calls to acquire and release should guarantee serialization -- don't you observe serialization just as invariably if you do other things in this block instead of the subprocess use?\nEdit: all silence here, so I'll add the suggestion to remove the locking and instead put each stdout_value on a Queue.Queue() instance -- Queue is intrinsicaly threadsafe (deals with its own locking) so you can get (or get_nowait, etc etc) results from it once they're ready and have been put there. In general, Queue is the best way to arrange thread communication (and often synchronization too) in Python, any time it can be feasibly arranged to do things that way.\nSpecifically: add import Queue at the start; give up making, acquiring and releasing self.lock (just delete those three lines); add self.q = Queue.Queue() to the __init__; right after the call stdout_value = proc.communicate(... add one statement self.q.put(stdout_value); now e.g finish the jsonrpc_run_procs method with\nwhile not self.q.empty():\n result = self.q.get()\n print 'One result is %r' % result\n\nto confirm that all the results are there. (Normally the empty method of queues is not reliable, but in this case all threads putting to the queue are already finished, so you should be fine).\n", "Your specific problem is probably caused by the line stdout_value = proc.communicate('through stdin to stdout')[0]. Subprocess.communicate will \"Wait for process to terminate\", which, when used with a lock, will run one at a time.\nWhat you can do is simply add the p variable to a list and run and use the Subprocess API to wait for the subprocesses to finish. Periodically poll each subprocess in your main thread.\nOn second look, it looks like you may have an issue on this line as well: for th in self.running_threads+self.thread_pool: th.join(). Thread.join() is another method that will wait for the thread to finish.\n" ]
[ 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001255449_python.txt
Q: Catching uncaught exceptions through django development server I am looking for some way in django's development server that will make the server stop at any uncaught exception automatically, as is done with pdb mode in the ipython console. I know I can put import pdb; pdb.set_trace() lines into the code to make the application stop. But this doesn't help me, because the line where the exception is thrown is being called too many times. So I can't find out the exact conditions to define a conditional breakpoint. Is this possible? Thank you... A: You can set sys.excepthook to a function that does import pdb; pdb.pm(), as per this recipe.
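The recipe the answer refers to boils down to a few lines; a sketch of the idea (note that with DEBUG on, Django's development server catches view exceptions itself and renders its error page, so a hook like this only fires for exceptions that escape Django entirely):

    import sys

    def info(exc_type, value, tb):
        import traceback, pdb
        traceback.print_exception(exc_type, value, tb)
        # Drop into the debugger at the frame where the exception occurred.
        pdb.post_mortem(tb)

    sys.excepthook = info

An alternative for view code is a small middleware whose process_exception method starts pdb, which works even while Django is handling the error.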
Catching uncaught exceptions through django development server
I am looking for some way in django's development server that will make the server stop at any uncaught exception automatically, as is done with pdb mode in the ipython console. I know I can put import pdb; pdb.set_trace() lines into the code to make the application stop. But this doesn't help me, because the line where the exception is thrown is being called too many times. So I can't find out the exact conditions to define a conditional breakpoint. Is this possible? Thank you...
[ "You can set sys.excepthook to a function that does import pdb; pdb.pm(), as per this recipe.\n" ]
[ 2 ]
[]
[]
[ "debugging", "django", "python" ]
stackoverflow_0001255467_debugging_django_python.txt
Q: Has anyone succeeded in using Google App Engine with Python version 2.6? Since Python 2.6 is backward compatible with 2.5.2, has anyone succeeded in using it with Google App Engine (which supports 2.5.2 officially)? I know I should try it myself. But I am a Python and web-apps newbie, and for me installation and configuration is the hardest part while getting started with something new in this domain. (... I am trying it myself in the meantime ...) Thanks A: I suppose the logging module crashes if you try to start the dev environment. See the issue and a workaround. After doing that change my code worked in 2.6 without any problems. I suggest using 2.5.x though so there are no other incompatibilities introduced in your code which would make your app fail on the live server. A: There are a few issues with using Python 2.6 with the SDK, mostly related to the SDK's sandboxing, which is designed to imitate the sandbox limitations in production. Note, of course, that even if you get Python 2.6 running with the SDK, your code will still have to run under 2.5 in production.
Has anyone succeeded in using Google App Engine with Python version 2.6?
Since Python 2.6 is backward compatible with 2.5.2, has anyone succeeded in using it with Google App Engine (which supports 2.5.2 officially)? I know I should try it myself. But I am a Python and web-apps newbie, and for me installation and configuration is the hardest part while getting started with something new in this domain. (... I am trying it myself in the meantime ...) Thanks
[ "I suppose logging module crashes if you try to start the dev environment. See the issue and a workaround.\nAfter doing that change my code worked in 2.6 without any problems. I suggest using 2.5.x though so there are no other incompatibilities introduced in your code which would make your app fail on the live server.\n", "There are a few issues with using Python 2.6 with the SDK, mostly related to the SDK's sandboxing, which is designed to imitate the sandbox limitations in production. Note, of course, that even if you get Python 2.6 running with the SDK, your code will still have to run under 2.5 in production.\n" ]
[ 11, 6 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001254028_google_app_engine_python.txt
Q: How can you ensure registered atexit function will run with AppHelper.runEventLoop() in PyObjC? I'm just wondering why my registered atexit function isn't run, e.g. import atexit atexit.register(somefunc) ... AppHelper.runEventLoop() Of course, I know when atexit won't work: when I comment out AppHelper.runEventLoop(), the atexit function gets called. I also browsed my pyobjc egg, and I saw under __init__.py in the objc package the following code: import atexit atexit.register(recycleAutoreleasePool) I looked for any reference within the egg to no avail. I also tried wrapping a try-finally block around AppHelper.runEventLoop(), and the commands in the finally block won't get called. Hope someone could help me out here. P.S. Assuming I don't want to use the Application delegate's applicationShouldTerminate: method... A: I believe you do need delegates, because otherwise the event loop can exit the process rather abruptly (kind of like os._exit) and therefore not give the Python runtime a chance to run termination code such as finally clauses, atexit functions, etc etc.
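For reference, a minimal sketch of the delegate-based route the answer recommends; the cleanup function is a stand-in for whatever the atexit handler was meant to do:

    from AppKit import NSApplication
    from Foundation import NSObject
    from PyObjCTools import AppHelper

    def cleanup():
        print 'cleaning up'

    class AppDelegate(NSObject):
        def applicationWillTerminate_(self, notification):
            # Runs while the event loop is shutting down cleanly,
            # before the process can exit abruptly.
            cleanup()

    app = NSApplication.sharedApplication()
    delegate = AppDelegate.alloc().init()
    app.setDelegate_(delegate)
    AppHelper.runEventLoop()

Note that the delegate must be kept alive (here via the module-level name), since Cocoa does not retain its delegates.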
How can you ensure registered atexit function will run with AppHelper.runEventLoop() in PyObjC?
I'm just wondering why my registered atexit function isn't run, e.g. import atexit atexit.register(somefunc) ... AppHelper.runEventLoop() Of course, I know when atexit won't work: when I comment out AppHelper.runEventLoop(), the atexit function gets called. I also browsed my pyobjc egg, and I saw under __init__.py in the objc package the following code: import atexit atexit.register(recycleAutoreleasePool) I looked for any reference within the egg to no avail. I also tried wrapping a try-finally block around AppHelper.runEventLoop(), and the commands in the finally block won't get called. Hope someone could help me out here. P.S. Assuming I don't want to use the Application delegate's applicationShouldTerminate: method...
[ "I believe you do need delegates, because otherwise the event loop can exit the process rather abruptly (kind of like os._exit) and therefore not give the Python runtime a chance to run termination code such as finally clauses, atexit functions, etc etc.\n" ]
[ 1 ]
[]
[]
[ "atexit", "pyobjc", "python" ]
stackoverflow_0001255025_atexit_pyobjc_python.txt
Q: python tab completion in windows I'm writing a cross-platform shell-like program in Python and I'd like to add custom tab-completion actions. On Unix systems I can use the built-in readline module and use code like the following to specify a list of possible completions when I hit the TAB key: import readline readline.parse_and_bind( 'tab: complete' ) readline.set_completer( ... ) How can I do this on Windows? I'd like to avoid relying on 3rd-party packages if possible. If no solution exists, is it possible to simply trap the TAB key press so that I can implement my own from scratch? A: Have you had a look at PyReadline, a ctypes-based readline for Windows? Although 3rd-party packages are NOT your option, maybe it's useful for building your own, isn't it :). A: You could look at how IPython does it with pyreadline as well, maybe A: Another possibility to check out is readline.py.
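For comparison, here is what a complete Unix-side completer looks like with the standard readline module; pyreadline (mentioned in the answers) aims to provide the same interface on Windows, so a completer written this way should carry over with little change. The command list is just an example:

    import readline

    COMMANDS = ['start', 'stop', 'status', 'restart']

    def complete(text, state):
        # readline calls this with state = 0, 1, 2, ...
        # until it returns None.
        matches = [c for c in COMMANDS if c.startswith(text)]
        if state < len(matches):
            return matches[state]
        return None

    readline.parse_and_bind('tab: complete')
    readline.set_completer(complete)
    line = raw_input('> ')  # TAB now completes against COMMANDS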
python tab completion in windows
I'm writing a cross-platform shell like program in python and I'd like to add custom tab-completion actions. On Unix systems I can use the built-in readline module and use code like the following to specify a list of possible completions when I hit the TAB key: import readline readline.parse_and_bind( 'tab: complete' ) readline.set_completer( ... ) How can I do this on Windows? I'd like to avoid relying on 3rd-party packages if possible. If no solution exists is it possible to simply trap TAB key press so that I can implement my own from scratch?
[ "Do u have a look at PyReadline: a ctypes-based readline for Windows? Although 3rd-party packages is NOT your option, maybe it's useful for build one's own, isn't it:).\n", "you could look at how ipython does it with pyreadline as well, maybe \n", "Another possibility to check out is readline.py.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python", "readline", "tab_completion", "windows" ]
stackoverflow_0001081405_python_readline_tab_completion_windows.txt
Q: Suggestion Needed - Networking in Python - A good idea? I am considering programming the network-related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network. Although the Python socket module seems sufficient and mature, I want to check if there are limitations of the Python module which can be a problem at a later stage of the development. What do you think of the Python socket module: Is it reliable and fast enough for production quality software? Are there any known limitations which can be a problem if my app needs more complex networking other than regular client-server messaging? Thanks in advance, Paul A: Check out Twisted, a Python engine for Networking. Has built-in support for TCP, UDP, SSL/TLS, multicast, Unix sockets, a large number of protocols (including HTTP, NNTP, IMAP, SSH, IRC, FTP, and others) A: Python is a mature language that can do almost anything that you can do in C/C++ (even direct memory access if you really want to hurt yourself). You'll find that you can write beautiful code in it in a very short time, that this code is readable from the start and that it will stay readable (you will still know what it does even after returning one year later). The drawback of Python is that your code will be somewhat slow. "Somewhat" as in "might be too slow for certain cases". So the usual approach is to write as much as possible in Python because it will make your app maintainable. Eventually, you might run into speed issues. That would be the time to consider rewriting a part of your app in C. The main advantages of this approach are: You already have a running application. Translating the code from Python to C is much simpler than writing it from scratch. You already have a running application. After the translation of a small part of Python to C, you just have to test that small part and you can use the rest of the app (that didn't change) to do it. You don't pay a price upfront. If Python is fast enough for you, you'll never have to do the optional optimization. Python is much, much more powerful than C. Every line of Python can do the same as 100 or even 1000 lines of C. A: To answer #1, I know that among other things, EVE Online (the MMO) uses a variant of Python for their server code. A: The Python that EVE Online uses is Stackless Python (http://www.stackless.com/), and as far as I understand they use it for how it implements threading through using tasklets and whatnot. But since Python itself can handle stuff like an MMO with 40k people online, I think it can do anything. This is a bad answer and not really an answer to your question, rather an addition to the previous answer. Alan.
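To give a feel for the framework the first answer recommends, the classic Twisted echo server fits in a few lines (the port number here is arbitrary):

    from twisted.internet import reactor, protocol

    class Echo(protocol.Protocol):
        def dataReceived(self, data):
            # Send every received message straight back to the peer.
            self.transport.write(data)

    factory = protocol.ServerFactory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)
    reactor.run()

For the simple position-message use case described in the question, the plain socket module would also do; Twisted mainly pays off once you need many concurrent connections or several protocols.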
Suggestion Needed - Networking in Python - A good idea?
I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network. Although the python socket modules seems sufficient and mature, I want to check if there are limitations of the python module which can be a problem at a later stage of the development. What do you think of the python socket module : Is it reliable and fast enough for production quality software ? Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ? Thanks in advance, Paul
[ "Check out Twisted, a Python engine for Networking. Has built-in support for TCP, UDP, SSL/TLS, multicast, Unix sockets, a large number of protocols (including HTTP, NNTP, IMAP, SSH, IRC, FTP, and others)\n", "Python is a mature language that can do almost anything that you can do in C/C++ (even direct memory access if you really want to hurt yourself).\nYou'll find that you can write beautiful code in it in a very short time, that this code is readable from the start and that it will stay readable (you will still know what it does even after returning one year later).\nThe drawback of Python is that your code will be somewhat slow. \"Somewhat\" as in \"might be too slow for certain cases\". So the usual approach is to write as much as possible in Python because it will make your app maintainable. Eventually, you might run into speed issues. That would be the time to consider to rewrite a part of your app in C.\nThe main advantages of this approach are:\n\nYou already have a running application. Translating the code from Python to C is much more simple than write it from scratch.\nYou already have a running application. After the translation of a small part of Python to C, you just have to test that small part and you can use the rest of the app (that didn't change) to do it.\nYou don't pay a price upfront. If Python is fast enough for you, you'll never have to do the optional optimization.\nPython is much, much more powerful than C. Every line of Python can do the same as 100 or even 1000 lines of C.\n\n", "To answer #1, I know that among other things, EVE Online (the MMO) uses a variant of Python for their server code.\n", "The python that EVE online uses is StacklessPython (http://www.stackless.com/), and as far as i understand they use it for how it implements threading through using tasklets and whatnot. But since python itself can handle stuff like MMO with 40k people online i think it can do anything.\nThis bad answer and not really an answer to your question, rather addition to previous answer.\nAlan.\n" ]
[ 9, 3, 1, 1 ]
[]
[]
[ "network_programming", "python" ]
stackoverflow_0001253905_network_programming_python.txt
Q: text file format from array I have a number of arrays, and I'd like to write them to a text file in a specific format, e.g., 'present form' a= [1 2 3 4 5 ] b= [ 1 2 3 4 5 6 7 8 ] c= [ 8 9 10 12 23 43 45 56 76 78] d= [ 1 2 3 4 5 6 7 8 45 56 76 78 12 23 43 ] The 'required format' in a txt file, a '\t' b '\t' d '\t' c 1 '\t' 1 2 '\t' 2 3 '\t' 3 4 '\t' 4 5 '\t' 5 6 7 8 '\t'- 1 tab space The problem is, I have the arrays in linear form [a], [b], [c], and [d]; I have to transpose them into the 'required format', reorder them as [a], [b], [d], [c], and write the result as a txt file. A: from __future__ import with_statement import csv import itertools a= [1, 2, 3, 4, 5] b= [1, 2, 3, 4, 5, 6, 7, 8] c= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78] d= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43] with open('destination.txt', 'w') as f: cf = csv.writer(f, delimiter='\t') cf.writerow(['a', 'b', 'd', 'c']) # header cf.writerows(itertools.izip_longest(a, b, d, c)) Results on destination.txt (<tab>s are in fact real tabs on the file): a<tab>b<tab>d<tab>c 1<tab>1<tab>1<tab>8 2<tab>2<tab>2<tab>9 3<tab>3<tab>3<tab>10 4<tab>4<tab>4<tab>12 5<tab>5<tab>5<tab>23 <tab>6<tab>6<tab>43 <tab>7<tab>7<tab>45 <tab>8<tab>8<tab>56 <tab><tab>45<tab>76 <tab><tab>56<tab>78 <tab><tab>76<tab> <tab><tab>78<tab> <tab><tab>12<tab> <tab><tab>23<tab> <tab><tab>43<tab> Here's the izip_longest function, if you have python < 2.6 (keyword-only arguments are Python 3 syntax, so fillvalue has to come in via **kwargs here): def izip_longest(*iterables, **kwargs): fillvalue = kwargs.get('fillvalue') def sentinel(counter=([fillvalue]*(len(iterables)-1)).pop): yield counter() fillers = itertools.repeat(fillvalue) iters = [itertools.chain(it, sentinel(), fillers) for it in iterables] try: for tup in itertools.izip(*iters): yield tup except IndexError: pass A: Have a look at matplotlib.mlab.rec2csv and csv2rec: >>> from matplotlib.mlab import rec2csv,csv2rec # note: these are also imported automatically when you do ipython -pylab >>> rec = csv2rec('csv file.csv') >>> rec2csv(rec, 'copy csv file', delimiter='\t') A: Just for fun with no imports: a= [1, 2, 3, 4, 5] b= [1, 2, 3, 4, 5, 6, 7, 8] c= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78] d= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43] fh = open("out.txt","w") # header line fh.write("a\tb\td\tc\n") # rest of file for i in map(lambda *row: [elem if elem is not None else "" for elem in row], *[a,b,d,c]): fh.write("\t".join(map(str,i))+"\n") fh.close()
text file format from array
I have a number of arrays, and I'd like to write them to a text file in a specific format, e.g., 'present form' a= [1 2 3 4 5 ] b= [ 1 2 3 4 5 6 7 8 ] c= [ 8 9 10 12 23 43 45 56 76 78] d= [ 1 2 3 4 5 6 7 8 45 56 76 78 12 23 43 ] The 'required format' in a txt file, a '\t' b '\t' d '\t' c 1 '\t' 1 2 '\t' 2 3 '\t' 3 4 '\t' 4 5 '\t' 5 6 7 8 '\t'- 1 tab space The problem is, I have the arrays in linear form [a], [b], [c], and [d]; I have to transpose them into the 'required format', reorder them as [a], [b], [d], [c], and write the result as a txt file.
[ "from __future__ import with_statement\nimport csv\nimport itertools\n\n\n\na= [1, 2, 3, 4, 5]\nb= [1, 2, 3, 4, 5, 6, 7, 8]\nc= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78]\nd= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43]\n\nwith open('destination.txt', 'w') as f:\n cf = csv.writer(f, delimiter='\\t')\n cf.writerow(['a', 'b', 'd', 'c']) # header \n cf.writerows(itertools.izip_longest(a, b, d, c))\n\nResults on destination.txt (<tab>s are in fact real tabs on the file):\na<tab>b<tab>d<tab>c\n1<tab>1<tab>1<tab>8\n2<tab>2<tab>2<tab>9\n3<tab>3<tab>3<tab>10\n4<tab>4<tab>4<tab>12\n5<tab>5<tab>5<tab>23\n<tab>6<tab>6<tab>43\n<tab>7<tab>7<tab>45\n<tab>8<tab>8<tab>56\n<tab><tab>45<tab>76\n<tab><tab>56<tab>78\n<tab><tab>76<tab>\n<tab><tab>78<tab>\n<tab><tab>12<tab>\n<tab><tab>23<tab>\n<tab><tab>43<tab>\n\nHere's the izip_longest function, if you have python < 2.6:\ndef izip_longest(*iterables, fillvalue=None):\n def sentinel(counter=([fillvalue]*(len(iterables)-1)).pop):\n yield counter()\n fillers = itertools.repeat(fillvalue)\n iters = [itertools.chain(it, sentinel(), fillers) \n for it in iterables]\n try:\n for tup in itertools.izip(*iters):\n yield tup\n except IndexError:\n pass\n\n", "Have a look at matplotlib.mlab.rec2csv and csv2rec:\n>>> from matplotlib.mlab import rec2csv,csv2rec\n# note: these are also imported automatically when you do ipython -pylab\n\n>>> rec = csv2rec('csv file.csv')\n>>> rec2csv(rec, 'copy csv file', delimiter='\\t')\n\n", "Just for fun with no imports:\na= [1, 2, 3, 4, 5]\nb= [1, 2, 3, 4, 5, 6, 7, 8]\nc= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78]\nd= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43]\n\nfh = open(\"out.txt\",\"w\")\n\n# header line\nfh.write(\"a\\tb\\td\\tc\\n\")\n# rest of file\nfor i in map(lambda *row: [elem or \"\" for elem in row], *[a,b,d,c]):\n fh.write(\"\\t\".join(map(str,i))+\"\\n\")\n\nfh.close()\n\n" ]
[ 6, 1, -1 ]
[]
[]
[ "python" ]
stackoverflow_0001255688_python.txt
Q: How to read String in java that was written using python's struct.pack method I have written information to a file in Python using struct.pack, e.g. out.write( struct.pack(">f", 1.1) ); out.write( struct.pack(">i", 12) ); out.write( struct.pack(">3s", "abc") ); Then I read it in Java using DataInputStream and readInt, readFloat and readUTF. Reading the numbers works but as soon as I call readUTF() I get EOFException. I assume this is because of the differences in the format of the string being written and the way Java reads it, or am I doing something wrong? If they are incompatible, is there another way to read and write strings? A: The format expected by readUTF() is documented here. In short, it expects a 16-bit, big-endian length followed by the bytes of the string. So, I think you could modify your pack call to look something like this: s = "abc" out.write( struct.pack(">H", len(s) )) out.write( struct.pack(">%ds" % len(s), s )) My Python is a little rusty, but I think that's close. It also assumes that a short (the >H) is 16 bits.
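Putting the fix together, a sketch of the full Python write sequence matching readFloat, readInt and readUTF on the Java side; note that readUTF expects Java's modified UTF-8, which is identical to the raw bytes only for ASCII strings, so non-ASCII text would need encoding first:

    import struct

    def pack_utf(s):
        # 16-bit big-endian length, then the bytes -- the layout
        # java.io.DataInputStream.readUTF expects.
        return struct.pack('>H', len(s)) + s

    out = open('data.bin', 'wb')
    out.write(struct.pack('>f', 1.1))   # read with readFloat()
    out.write(struct.pack('>i', 12))    # read with readInt()
    out.write(pack_utf('abc'))          # read with readUTF()
    out.close()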
How to read String in java that was written using python's struct.pack method
I have written information to a file in python using struct.pack eg. out.write( struct.pack(">f", 1.1) ); out.write( struct.pack(">i", 12) ); out.write( struct.pack(">3s", "abc") ); Then I read it in java using DataInputStream and readInt, readFloat and readUTF. Reading the numbers works but as soon as I call readUTF() I get EOFException. I assume this is because of the differences in the format of the string being written and the way java reads it, or am I doing something wrong? If they are incompatible, is there another way to read and write strings?
[ "The format expected by readUTF(), is documented here. In short, it expects a 16-bit, big-endian length followed by the bytes of the string. So, I think you could modify your pack call to look something like this:\ns = \"abc\"\nout.write( struct.pack(\">H\", len(s) ))\nout.write( struct.pack(\">%ds\" % len(s), s ))\n\nMy Python is a little rusty, but I think that's close. It also assume that a short (the >H) is 16 bits.\n" ]
[ 4 ]
[]
[]
[ "java", "python" ]
stackoverflow_0001255918_java_python.txt
Q: Checking whether a command produced output I am using the following call for executing the 'aspell' command on some strings in Python: r,w,e = popen2.popen3("echo " +str(m[i]) + " | aspell -l") I want to test the success of the function by looking at the stdout file object r. If there is no output, the command is successful. What is the best way to test that in Python? Thanks in advance. A: Best is to use the subprocess module of the standard Python library, see here -- popen2 is old and not recommended. Anyway, in your code, if r.read(1): is a fast way to test if there's any content in r (if you don't care about what that content might specifically be). A: Why don't you use aspell -a? You could use subprocess as indicated by Alex, but keep the pipe open. Follow the directions for using the pipe API of aspell, and it should be pretty efficient. The upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.
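A sketch of the subprocess-based version suggested in the first answer, assuming aspell is on the PATH; aspell's list mode reads words on stdin and prints only the unknown ones, so empty output means success:

    from subprocess import Popen, PIPE

    def misspellings(text):
        p = Popen(['aspell', 'list'], stdin=PIPE, stdout=PIPE)
        out = p.communicate(text)[0]
        return out.split()

    bad = misspellings('helo world')
    if not bad:
        print 'all words OK'
    else:
        print 'misspelt:', bad

Passing the text through stdin also avoids the shell quoting problems of building an 'echo ... | aspell' command string.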
Checking whether a command produced output
I am using the following call for executing the 'aspell' command on some strings in Python: r,w,e = popen2.popen3("echo " +str(m[i]) + " | aspell -l") I want to test the success of the function looking at the stdout File Object r. If there is no output the command is successful. What is the best way to test that in Python? Thanks in advance.
[ "Best is to use the subprocess module of the standard Python library, see here -- popen2 is old and not recommended.\nAnyway, in your code, if r.read(1): is a fast way to test if there's any content in r (if you don't care about what that content might specifically be).\n", "Why don't you use aspell -a?\nYou could use subprocess as indicated by Alex, but keep the pipe open. Follow the directions for using the pipe API of aspell, and it should be pretty efficient.\nThe upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.\n" ]
[ 2, 2 ]
[]
[]
[ "python", "scripting", "unix" ]
stackoverflow_0001256424_python_scripting_unix.txt
Q: Finding Functions Defined in a with: Block Here's some code from Richard Jones' Blog: with gui.vertical: text = gui.label('hello!') items = gui.selection(['one', 'two', 'three']) with gui.button('click me!'): def on_click(): text.value = items.value text.foreground = red My question is: how the heck did he do this? How can the context manager access the scope inside the with block? Here's a basic template for trying to figure this out: from __future__ import with_statement class button(object): def __enter__(self): #do some setup pass def __exit__(self, exc_type, exc_value, traceback): #XXX: how can we find the testing() function? pass with button(): def testing(): pass A: Here's one way: from __future__ import with_statement import inspect class button(object): def __enter__(self): # keep track of all that's already defined BEFORE the `with` f = inspect.currentframe(1) self.mustignore = dict(f.f_locals) def __exit__(self, exc_type, exc_value, traceback): f = inspect.currentframe(1) # see what's been bound anew in the body of the `with` interesting = dict() for n in f.f_locals: newf = f.f_locals[n] if n not in self.mustignore: interesting[n] = newf continue anf = self.mustignore[n] if id(newf) != id(anf): interesting[n] = newf if interesting: print 'interesting new things: %s' % ', '.join(sorted(interesting)) for n, v in interesting.items(): if isinstance(v, type(lambda:None)): print 'function %r' % n print v() else: print 'nothing interesting' def main(): for i in (1, 2): def ignorebefore(): pass with button(): def testing(i=i): return i def ignoreafter(): pass main() Edit: stretched code a bit more, added some explanation...: Catching caller's locals at __exit__ is easy -- trickier is avoiding those locals that were already defined before the with block, which is why I added to main two local functions that the with should ignore. I'm not 100% happy with this solution, which looks a bit complicated, but I couldn't get equality testing correct with either == or is, so I resorted to this rather complicated approach. I've also added a loop (to make doubly sure the defs before / within / after are being properly handled) and a type-check and function-call to make sure the right incarnation of testing is the one that's identified (everything seems to work fine) -- of course the code as written only works if the def inside the with is for a function callable without arguments; it's not hard to get the signature with inspect to ward against that (but since I'm doing the call only for the purpose of checking that the right function objects are identified, I didn't bother about this last refinement;-). A: To answer your question, yes, it's frame introspection. But the syntax I would create to do the same thing is with gui.vertical: text = gui.label('hello!') items = gui.selection(['one', 'two', 'three']) @gui.button('click me!') class button: def on_click(): text.value = items.value text.foreground = red Here I would implement gui.button as a decorator that returns a button instance given some parameters and events (though it appears to me now that button = gui.button('click me!', mybutton_onclick) is fine as well). I would also leave gui.vertical as it is since it can be implemented without introspection. I'm not sure about its implementation, but it may involve setting gui.direction = gui.VERTICAL so that gui.label() and others use it in computing their coordinates.
Now when I look at this, I think I'd try the syntax: with gui.vertical: text = gui.label('hello!') items = gui.selection(['one', 'two', 'three']) @gui.button('click me!') def button(): text.value = items.value foreground = red (the idea being that similarly to how label is made out of text, a button is made out of text and a function)
Finding Functions Defined in a with: Block
Here's some code from Richard Jones' Blog: with gui.vertical: text = gui.label('hello!') items = gui.selection(['one', 'two', 'three']) with gui.button('click me!'): def on_click(): text.value = items.value text.foreground = red My question is: how the heck did he do this? How can the context manager access the scope inside the with block? Here's a basic template for trying to figure this out: from __future__ import with_statement class button(object): def __enter__(self): #do some setup pass def __exit__(self, exc_type, exc_value, traceback): #XXX: how can we find the testing() function? pass with button(): def testing(): pass
[ "Here's one way:\nfrom __future__ import with_statement\nimport inspect\n\nclass button(object):\n def __enter__(self):\n # keep track of all that's already defined BEFORE the `with`\n f = inspect.currentframe(1)\n self.mustignore = dict(f.f_locals)\n\n def __exit__(self, exc_type, exc_value, traceback):\n f = inspect.currentframe(1)\n # see what's been bound anew in the body of the `with`\n interesting = dict()\n for n in f.f_locals:\n newf = f.f_locals[n]\n if n not in self.mustignore:\n interesting[n] = newf\n continue\n anf = self.mustignore[n]\n if id(newf) != id(anf):\n interesting[n] = newf\n if interesting:\n print 'interesting new things: %s' % ', '.join(sorted(interesting))\n for n, v in interesting.items():\n if isinstance(v, type(lambda:None)):\n print 'function %r' % n\n print v()\n else:\n print 'nothing interesting'\n\ndef main():\n for i in (1, 2):\n def ignorebefore():\n pass\n with button():\n def testing(i=i):\n return i\n def ignoreafter():\n pass\n\nmain()\n\nEdit: stretched code a bit more, added some explanation...:\nCatching caller's locals at __exit__ is easy -- trickier is avoiding those locals that were already defined before the with block, which is why I added to main two local functions that the with should ignore. I'm not 100% happy with this solution, which looks a bit complicated, but I couldn't get equality testing correct with either == or is, so I resorted to this rather complicated approach.\nI've also added a loop (to make more strongly sure the defs before / within / after are being properly handled) and a type-check and function-call to make sure the right incarnation of testing is the one that's identified (everything seems to work fine) -- of course the code as written only works if the def inside the with is for a function callable without arguments, it's not hard to get the signature with inspect to ward against that (but since I'm doing the call only for the purpose of checking that the right function objects are identified, I didn't bother about this last refinement;-).\n", "To answer your question, yes, it's frame introspection.\nBut the syntax I would create to do the same thing is\nwith gui.vertical:\n text = gui.label('hello!')\n items = gui.selection(['one', 'two', 'three'])\n @gui.button('click me!')\n class button:\n def on_click():\n text.value = items.value\n text.foreground = red\n\nHere I would implement gui.button as a decorator that returns button instance given some parameters and events (though it appears to me now that button = gui.button('click me!', mybutton_onclick is fine as well).\nI would also leave gui.vertical as it is since it can be implemented without introspection. I'm not sure about its implementation, but it may involve setting gui.direction = gui.VERTICAL so that gui.label() and others use it in computing their coordinates.\nNow when I look at this, I think I'd try the syntax:\n with gui.vertical:\n text = gui.label('hello!')\n items = gui.selection(['one', 'two', 'three'])\n\n @gui.button('click me!')\n def button():\n text.value = items.value\n foreground = red\n\n(the idea being that similarly to how label is made out of text, a button is made out of text and function)\n" ]
[ 14, 2 ]
[]
[]
[ "contextmanager", "python", "scope", "with_statement" ]
stackoverflow_0001255914_contextmanager_python_scope_with_statement.txt
Q: Call PHP code from Python I'm trying to integrate an old PHP ad management system into a (Django) Python-based web application. The PHP and the Python code are both installed on the same host; PHP is executed by mod_php5 and Python through mod_wsgi, usually. Now I wonder what's the best way to call this PHP ad management code from within my Python code in the most efficient manner (the ad management code has to be called multiple times for each page)? The solutions I came up with so far are the following: Write a SOAP interface in PHP for the ad management code and write a SOAP client in Python which then calls the appropriate functions. The problem I see is that this will slow down the execution of the Python code considerably, since for each page served, multiple SOAP client requests are necessary in the background. Call the PHP code through os.execvp() or subprocess.Popen() using the PHP command line interface. The problem here is that the PHP code makes use of the Apache environment ($_SERVER vars and other superglobals). I'm not sure if this can be simulated correctly. Rewrite the ad management code in Python. This will probably be the last resort. This ad management code just runs and runs, and there is no one remaining who wrote a piece of code for this :) I'd be quite afraid to do this ;) Any other ideas or hints on how this can be done? Thanks. A: How about using AJAX from the browser to load the ads? For instance (using JQuery): $(document).ready(function() { $("#apageelement").load("/phpapp/getads.php"); }) This allows you to keep your app almost completely separate from the PHP app. A: Best solution is to use server side includes. Most webservers support this. For example this is how it would be done in nginx: <!--# include virtual="http://localhost:8080/phpapp/getads.php" --> Your webserver would then dynamically request from your php backend, and insert it into the response that goes to the client. No javascript necessary, and entirely transparent. You could also use a borderless <iframe> A: I've done this in the past by serving the PHP portions directly via Apache. You could either put them in with your media files (/site_media/php/), or if you prefer to use something more lightweight for your media server (like lighttpd), you can set up another portion of the site that goes through apache with PHP enabled. From there, you can either take the ajax route in your templates, or you can load the PHP from your views using urllib(2) or httplib(2). Better yet, wrap the urllib2 call in a templatetag, and call that in your templates.
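If the last answer's server-side route is taken, the fetch itself is short; the URL and slot parameter here are illustrative, not the real endpoint:

    import urllib2

    def get_ads(slot):
        # Fetch the rendered ad markup from the PHP app over local HTTP.
        url = 'http://localhost/phpapp/getads.php?slot=%s' % slot
        return urllib2.urlopen(url).read()

    banner_html = get_ads('header')

Wrapped in a template tag as suggested, the templates can then drop ads in wherever needed, at the cost of one local HTTP round trip per call.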
Call PHP code from Python
I'm trying to integrate an old PHP ad management system into a (Django) Python-based web application. The PHP and the Python code are both installed on the same hosts, PHP is executed by mod_php5 and Python through mod_wsgi, usually. Now I wonder what's the best way to call this PHP ad management code from within my Python code in a most efficient manner (the ad management code has to be called multiple times for each page)? The solutions I came up with so far, are the following: Write SOAP interface in PHP for the ad management code and write a SOAP client in Python which then calls the appropriate functions. The problem I see is, that will slow down the execution of the Python code considerably, since for each page served, multiple SOAP client requests are necessary in the background. Call the PHP code through os.execvp() or subprocess.Popen() using PHP command line interface. The problem here is that the PHP code makes use of the Apache environment ($_SERVER vars and other superglobals). I'm not sure if this can be simulated correctly. Rewrite the ad management code in Python. This will probably be the last resort. This ad management code just runs and runs, and there is no one remaining who wrote a piece of code for this :) I'd be quite afraid to do this ;) Any other ideas or hints how this can be done? Thanks.
[ "How about using AJAX from the browser to load the ads?\nFor instance (using JQuery):\n$(document).ready(function() { $(\"#apageelement\").load(\"/phpapp/getads.php\"); })\n\nThis allows you to keep you app almost completely separate from the PHP app.\n", "Best solution is to use server side includes. Most webservers support this.\nFor example this is how it would be done in nginx:\n<!--# include virtual=\"http://localhost:8080/phpapp/getads.php\" -->\n\nYour webserver would then dynamically request from your php backend, and insert it into the response that goes to the client. No javascript necessary, and entirely transparent.\nYou could also use a borderless <iframe>\n", "I've done this in the past by serving the PHP portions directly via Apache. You could either put them in with your media files, (/site_media/php/) or if you prefer to use something more lightweight for your media server (like lighttpd), you can set up another portion of the site that goes through apache with PHP enabled.\nFrom there, you can either take the ajax route in your templates, or you can load the PHP from your views using urllib(2) or httplib(2). Better yet, wrap the urllib2 call in a templatetag, and call that in your templates.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "php", "python" ]
stackoverflow_0001254802_php_python.txt
Q: PHP - Print all statements that are executed in a PHP command line script? In Python, one can trace all the statements that are executed by a command line script using the trace module. In bash you can do the same with set -x. We have a PHP script that we're running from the command line, like a normal bash / python / perl / etc script. Nothing web-y is going on. Is there any way to get a trace of all the lines of code that are being executed? A: There is a PECL extension, apd, that will generate a trace file. A: Not in pure-PHP, no -- as far as I know. But you can use a debugger; a nice way to do that is with The extension Xdebug, which can be used as a debugger and some graphical IDE that integrates some debugging tools, like Eclipse PDT Both of those are free, btw. With those, you can do step by step, set up breakpoints, watch the content of variables, view stack traces, ... And it works both for Web and CLI scripts ;-) Of course, it means having Eclipse running on the machine you are executing your script... But if you are executing it on your development machine, you probably have a GUI and all that, so it should be fine... (I know that, for web applications, you can have Eclipse running on a different machine than the one with the PHP webserver -- don't know if it's possible in CLI, though) As a sidenote: maybe you can integrate Xdebug with a CLI-based debugger; see the page I linked to earlier for a list of supported tools.
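For reference, the Python-side tracing the question mentions can be done either from the command line (python -m trace --trace script.py) or programmatically with the standard trace module; a small sketch:

    import trace

    def work():
        total = 0
        for i in range(3):
            total += i
        return total

    # trace=True prints each line as it executes;
    # count=False disables the coverage counting.
    tracer = trace.Trace(count=False, trace=True)
    tracer.runfunc(work)

The PHP answers above (apd, Xdebug) are the closest equivalents on the PHP side.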
PHP - Print all statements that are executed in a PHP command line script?
In Python, one can trace all the statements that are executed by a command line script using the trace module. In bash you can do the same with set -x. We have a PHP script that we're running from the command line, like a normal bash / python / perl / etc script. Nothing web-y is going on. Is there any way to get a trace of all the lines of code that are being executed?
[ "There is a PECL extension, apd, that will generate a trace file. \n", "Not in pure-PHP, no -- as far as i know.\nBut you can use a debugger ; a nice way to do that is with \n\nThe extension Xdebug, which can be used as a debugger\nand some graphical IDE that integrates some debugging tools, like Eclipse PDT\n\nBoth of those are free, btw.\nWith those, you can do step by step, set up breakpoints, watch the content of variables, view stack traces, ... And it works both for Web and CLI scripts ;-)\nOf course, it means having Eclipse running on the machine you are executing your script... But if you are executing it on your development machine, you probably have a GUI and all that, so it should be fine...\n(I know that, for web applications, you can have Eclipse running on a different machine than the one with the PHP webserver -- don't know if it's possible in CLI, though)\n\nAs a sidenote : maybe you can integrate Xdebug with a CLI-based debugger ; see the page I linked to earlier for a list of supported tools.\n" ]
[ 2, 1 ]
[ "I'm kinda blind here but I guess one way you could do it is to write all the relevant code inside custom functions and call debug_backtrace(). debug_print_backtrace may also be useful.\nI hope it helps.\n" ]
[ -1 ]
[ "command_line", "debugging", "php", "python" ]
stackoverflow_0001254215_command_line_debugging_php_python.txt
Q: Selecting related objects in django I have the following problem: my application has 2 models: 1) class ActiveList(models.Model): user = models.ForeignKey(User, unique=True) updatedOn = models.DateTimeField(auto_now=True) def __unicode__(self): return self.user.username ''' GameClaim class, to store game requests. ''' class GameClaim(models.Model): me = models.ForeignKey(ActiveList, related_name='gameclaim_me') opponent = models.ForeignKey(ActiveList, related_name='gameclaim_opponent') In my view I take all ActiveList objects, all = ActiveList.objects.all(), and pass them to the template. In the template I am looping through every item in the ActiveList and creating an xml file which is used by my client application. The question is: how can I query the info about the claims which one user (e.g. test, part of the ActiveList) made against the user currently in the loop? user2, e.g., is taken like this {% for item in activeList %} {% endfor %} user2 is an item in this case
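A sketch of how the first answer's advice translates into the view, so the template only iterates; the models above are assumed imported, and the template name and context layout are illustrative:

    from django.shortcuts import render_to_response

    def active_list(request):
        entries = []
        for item in ActiveList.objects.all():
            # All claims this active user has made.
            claims = GameClaim.objects.filter(me=item)
            entries.append({'user': item.user, 'claims': claims})
        return render_to_response('active_list.xml',
                                  {'activeList': entries})

The template can then loop over activeList and, for each entry, over entry.claims, without issuing queries of its own.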
Selecting related objects in django
I have the following problem: my application has two models: class ActiveList(models.Model): user = models.ForeignKey(User, unique=True) updatedOn = models.DateTimeField(auto_now=True) def __unicode__(self): return self.user.username ''' GameClaim class, to store game requests. ''' class GameClaim(models.Model): me = models.ForeignKey(ActiveList, related_name='gameclaim_me') opponent = models.ForeignKey(ActiveList, related_name='gameclaim_opponent') In my view I fetch all ActiveList objects (all = ActiveList.objects.all()) and pass them to the template. In the template I loop through every item in the ActiveList and create an XML file which is used by my client application. The question is: how can I query the claims that one user (e.g. 'test', part of the ActiveList) made against the user currently being iterated over? That user is the loop variable in {% for item in activeList %} {% endfor %}, so item is the second user in this case.
[ "What you are looking at doing belongs more properly in the view than the template. I think you want something like:\nclaimer = User.objects.get(name='test')\nclaimed_opponents = User.objects.filter(gameclaim_opponent__me__user=claimer)\n\nThen you can pass those into your template, and operate on them directly. \nYou might also look at rethinking how your tables relate to one another. I think claims should probably go directly between users, and whether a given user is active should be external to the relationship. I would think a user should be able to claim a game with an inactive user, even if they have to wait for the user to reactivate before that game can begin.\n", "I'm not sure I entirely understand your question, but I think the information you're looking for might be here: http://docs.djangoproject.com/en/dev/topics/db/queries/\nPerhaps you could clarify the question if you don't find an answer there?\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001256387_django_python.txt
Q: Need help in refactoring my python script I have a Python script which processes a file line by line; if the line matches a regex, it calls a function to handle it. My question is: is there a better way to refactor my script? The script works, but as it is, I need to keep indenting further to the right of the editor as I add more and more regexes for my file. Thank you for any ideas. Right now my code ends up like this: for line in fi.readlines(): result = reg1.match(line) if result: handleReg1(result) else: result = reg2.match(line) if result: handleReg2(result) else: result = reg3.match(line) if result: handleReg3(result) else: result = reg4.match(line) if result: handleReg4(result) else: result = reg5.match(line) if result: handleReg5(result)
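Building on the combined-regex idea mentioned in one of the answers, here is a sketch using named groups so one pattern can dispatch to the right handler; the pattern fragments are placeholders for the real regexes:

    import re
    big = re.compile(r'(?P<r1>cat\d+)|(?P<r2>dog\d+)|(?P<r3>apple\d+)')
    handlers = {'r1': handleReg1, 'r2': handleReg2, 'r3': handleReg3}
    for line in fi:
        m = big.match(line)
        if m:
            handlers[m.lastgroup](m)   # lastgroup names the alternative that matched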
Need help in refactoring my python script
I have a Python script which processes a file line by line; if the line matches a regex, it calls a function to handle it. My question is: is there a better way to refactor my script? The script works, but as it is, I need to keep indenting further to the right of the editor as I add more and more regexes for my file. Thank you for any ideas. Right now my code ends up like this: for line in fi.readlines(): result = reg1.match(line) if result: handleReg1(result) else: result = reg2.match(line) if result: handleReg2(result) else: result = reg3.match(line) if result: handleReg3(result) else: result = reg4.match(line) if result: handleReg4(result) else: result = reg5.match(line) if result: handleReg5(result)
[ "I'd switch to using a data structure mapping regexes to functions. Something like:\nmap = { reg1: handleReg1, reg2: handleReg2, etc }\n\nThen you just loop through them:\nfor reg, handler in map.items():\n result = reg.match(line)\n if result:\n handler(result)\n break\n\nIf you need the matches to happen in a particular order you'll need to use a list instead of a dictionary, but the principal is the same.\n", "Here's a trivial one:\nhandlers = { reg1 : handleReg1, ... }\n\nfor line in fi.readlines():\n for h in handlers:\n x = h.match(line)\n if x:\n handlers[h](x)\n\nIf there could be a line that matches several regexps this code will be different from the code you pasted: it will call several handlers. Adding break won't help, because the regexps will be tried in a different order, so you'll end up calling the wrong one. So if this is the case you should iterate over list:\nhandlers = [ (reg1, handleReg1), (reg2, handleReg2), ... ]\n\nfor line in fi.readlines():\n for reg, handler in handlers:\n x = reg.match(line)\n if x:\n handler(x)\n break\n\n", "An alternate approach that might work for you is to combine all the regexps into one giant regexp and use m.group() to detect which matched. My intuition says this should be faster, but I haven't tested it.\n>>> reg = re.compile('(cat)|(dog)|(apple)')\n>>> m = reg.search('we like dogs')\n>>> print m.group()\ndog\n>>> print m.groups()\n(None, 'dog', None)\n\nThis gets complicated if the regexps you're testing against are themselves complicated or use match groups.\n" ]
[ 12, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001256704_python.txt
Q: Why would traceback.extract_stack() return [] when there is definitely a call stack? I have a class that calls traceback.extract_stack() in its __init__(), but whenever I do that, the value of traceback.extract_stack() is []. What are some reasons that this could be the case? Is there another way to get a traceback that will be more reliable? I think the problem is that the code is running in Pylons. Here is some code for a controller action: def test_tb(self): import traceback return a.lib.htmlencode(traceback.extract_stack()) It generates a webpage that is just [] So, I don't think it has anything to do with being in the constructor of an object or anything like that. Could it have to do with an incompatibility between some kinds of threading and the traceback module or something like that? A: Following shows traceback.extract_stack() working when called from a class's __init__ method. Please post your code showing that it doesn't work. Include the Python version. Don't type from memory; use copy/paste as I have done. Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import traceback as tb >>> tb.extract_stack() [('<stdin>', 1, '<module>', None)] >>> def func(): ... print tb.extract_stack() ... >>> func() [('<stdin>', 1, '<module>', None), ('<stdin>', 2, 'func', None)] >>> class Klass(object): ... def __init__(self): ... print tb.extract_stack() ... >>> k = Klass() [('<stdin>', 1, '<module>', None), ('<stdin>', 3, '__init__', None)] >>> UPDATE Instead of looking at return a.lib.htmlencode(traceback.extract_stack()) and wondering, tap into the pipeline: (1) do tb_stack = repr((traceback.extract_stack()) and write the result to your logfile for checking (2) do return a.lib.htmlencode(some_known_constant_data) and check that the known data shows up correctly where you expect it to show up. A: Looking at the code for the traceback module, one possibility is that you've got sys.tracebacklimit set to zero, though that seems like a longshot... A: The reason turned out to be that someone turned on Pysco on the project, and Psyco doesn't play nice with the traceback module.
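One of the answers mentions sys.tracebacklimit; a quick sketch showing it really can produce an empty result, because extract_stack() falls back to that limit when no explicit limit is passed:

    import sys, traceback
    sys.tracebacklimit = 0
    print traceback.extract_stack()   # prints [] even though a call stack exists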
Why would traceback.extract_stack() return [] when there is definitely a call stack?
I have a class that calls traceback.extract_stack() in its __init__(), but whenever I do that, the value of traceback.extract_stack() is []. What are some reasons that this could be the case? Is there another way to get a traceback that will be more reliable? I think the problem is that the code is running in Pylons. Here is some code for a controller action: def test_tb(self): import traceback return a.lib.htmlencode(traceback.extract_stack()) It generates a webpage that is just [] So, I don't think it has anything to do with being in the constructor of an object or anything like that. Could it have to do with an incompatibility between some kinds of threading and the traceback module or something like that?
[ "Following shows traceback.extract_stack() working when called from a class's __init__ method. Please post your code showing that it doesn't work. Include the Python version. Don't type from memory; use copy/paste as I have done.\nPython 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import traceback as tb\n>>> tb.extract_stack()\n[('<stdin>', 1, '<module>', None)]\n>>> def func():\n... print tb.extract_stack()\n...\n>>> func()\n[('<stdin>', 1, '<module>', None), ('<stdin>', 2, 'func', None)]\n>>> class Klass(object):\n... def __init__(self):\n... print tb.extract_stack()\n...\n>>> k = Klass()\n[('<stdin>', 1, '<module>', None), ('<stdin>', 3, '__init__', None)]\n>>>\n\nUPDATE Instead of looking at return a.lib.htmlencode(traceback.extract_stack()) and wondering, tap into the pipeline:\n(1) do tb_stack = repr((traceback.extract_stack()) and write the result to your logfile for checking\n(2) do return a.lib.htmlencode(some_known_constant_data) and check that the known data shows up correctly where you expect it to show up.\n", "Looking at the code for the traceback module, one possibility is that you've got sys.tracebacklimit set to zero, though that seems like a longshot...\n", "The reason turned out to be that someone turned on Pysco on the project, and Psyco doesn't play nice with the traceback module.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "stack_trace" ]
stackoverflow_0001252823_python_stack_trace.txt
Q: Loading and saving data from m2m relationships in Textarea widgets with ModelForm I have a Model that looks something like this: class Business(models.Model): name = models.CharField('business name', max_length=100) # ... some other fields emails = models.ManyToManyField(Email, null=True) phone_numbers = models.ManyToManyField(PhoneNumber, null=True) urls = models.ManyToManyField(URL, null=True) and a corresponding ModelForm: class BusinessContactForm(forms.ModelForm): emails = forms.CharField(widget=forms.Textarea(attrs={'rows':4,'cols':32})) phone_numbers = forms.CharField(widget=forms.Textarea(attrs={'rows':4,'cols':32})) urls = forms.CharField(widget=forms.Textarea(attrs={'rows':4,'cols':32})) class Meta: model = Business fields = ['emails', 'phone_numbers', 'urls',] My question: What is the best way to load the existing emails, phone_numbers, and urls into the Textarea widgets when presenting the form (one per line in their respective widgets)? Then, after the form has been modified and submitted, what is the best way to make sure to add any new emails, numbers, or urls (m2m relationships) and remove any that are no longer in the list (also making sure not to add duplicates)? A: This is not directly an answer to your question. It is more a suggestion to re-think your data model. It looks like your BusinessContactForm presents textarea widgets to insert multiple rows into the database. I would not use a Textarea widget for multiple items of more restricted type: I'd enter phone numbers with a phone number widget, URLs with a URL widget, and emails with an email widget. A business contact is really a person who works for a company and has an email address and phone number, correct? So why not model the business contact like that and have a foreign key to the business? That's more of the approach I would take. A: This really isn't a good way to do it. Dealing with related items on forms is what formsets are for. Instead of defining the related fields as extra fields on the BusinessForm model, use a standard form for a contact, with email, phone and url. Then pass this into the modelformset_factory to create an inline formset for your BusinessContact form.
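As a concrete illustration of the formset approach suggested in the second answer, a minimal sketch for the emails relation (it assumes the Email model has a single address field, which is a guess; the other contact models would follow the same pattern):

    from django.forms.models import modelformset_factory
    EmailFormSet = modelformset_factory(Email, fields=('address',), extra=1, can_delete=True)

    # in the view:
    formset = EmailFormSet(request.POST or None, queryset=business.emails.all())
    if formset.is_valid():
        for email in formset.save():
            business.emails.add(email)   # formset handles edits and deletions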
Loading and saving data from m2m relationships in Textarea widgets with ModelForm
I have a Model that looks something like this: class Business(models.Model): name = models.CharField('business name', max_length=100) # ... some other fields emails = models.ManyToManyField(Email, null=True) phone_numbers = models.ManyToManyField(PhoneNumber, null=True) urls = models.ManyToManyField(URL, null=True) and a corresponding ModelForm: class BusinessContactForm(forms.ModelForm): emails = forms.CharField(widget=forms.Textarea(attrs={'rows':4,'cols':32})) phone_numbers = forms.CharField(widget=forms.Textarea(attrs={'rows':4,'cols':32})) urls = forms.CharField(widget=forms.Textarea(attrs={'rows':4,'cols':32})) class Meta: model = Business fields = ['emails', 'phone_numbers', 'urls',] My question: What is the best way to load the existing emails, phone_numbers, and urls into the Textarea widgets when presenting the form (one per line in their respective widgets)? Then, after the form has been modified and submitted, what is the best way to make sure to add any new emails, numbers, or urls (m2m relationships) and remove any that are no longer in the list (also making sure not to add duplicates)?
[ "This is not directly an answer to your question. It is more a suggestion to re-think your data model.\nIt looks like your BusinessContactForm presents textarea widgets to insert multiple rows into the database. I would not use a Textarea widget for multiple items of more restricted type: I'd enter phone numbers with a phone number widget, URLs with a URL widget, and emails with an email widget.\nA business contact is really a person who works for a company and has an email address and phone number, correct? So why not model the business contact like that and have a foreign key to the business?\nThat's more of the approach I would take.\n", "This really isn't a good way to do it. Dealing with related items on forms is what formsets are for.\nInstead of defining the related fields as extra fields on the BusinessForm model, use a standard form for a contact, with email, phone and url. Then pass this into the modelformset_factory to create an inline formset for your BusinessContact form.\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_forms", "django_models", "python" ]
stackoverflow_0001257062_django_django_forms_django_models_python.txt
Q: How to update the twisted framework I can see from the latest 8.2 version of Twisted (almost 1200 lines of code in the file below) that I am missing something: http://twistedmatrix.com/trac/browser/trunk/twisted/words/protocols/jabber/xmlstream.py My copy (697 lines, from 3 years ago) is in: /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/twisted/words/protocols/jabber/xmlstream.py I ran the Mac installer found on the website, and everything looked like it installed fine, but I am obviously missing something: http://twistedmatrix.com/trac/wiki/Downloads Can someone tell me how to update Twisted properly on my Mac? A: Try using virtualenv and pip (sudo easy_install virtualenv pip), which are great ways to avoid the dependency hell that you are experiencing. With virtualenv you can create isolated Python environments, and then using pip you can directly install new packages into your virtualenvs. Here is a complete example: #create fresh virtualenv, void of old packages, and install latest Twisted virtualenv --no-site-packages twisted_env pip -E twisted_env install -U twisted #now activate the virtualenv cd twisted_env source bin/activate #test to see you have latest Twisted: python -c "import twisted; print twisted.__version__" A: You can download that file you mentioned by scrolling to the bottom and clicking "Download in other formats". Otherwise just do svn update. A: The answer was hidden away here: http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhyamIgettingImportErrorsforTwistedsubpackagesonOSX10.5 Not really clear on exactly how/where to fix the issue though. After some digging I was able to solve it with this: From the command prompt type: pico ~/.bash_profile Add to the top of that file: export PYTHONPATH=~/Library/Python/2.5/site-packages/ Save and exit the file and you will finally be running the latest and greatest version of Twisted. (assuming that you have already downloaded and installed it from the twisted site)
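A quick diagnostic worth running first: check which copy of Twisted the interpreter actually imports, since the stale system copy may simply be shadowing the new one on sys.path. Nothing here is specific to this setup:

    python -c "import twisted; print twisted.__version__, twisted.__file__"

If the printed path still points into /System/Library/..., the new install is not first on sys.path.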
How to update the twisted framework
I can see from the latest 8.2 version of Twisted (almost 1200 lines of code in the file below) that I am missing something: http://twistedmatrix.com/trac/browser/trunk/twisted/words/protocols/jabber/xmlstream.py My copy (697 lines, from 3 years ago) is in: /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/twisted/words/protocols/jabber/xmlstream.py I ran the Mac installer found on the website, and everything looked like it installed fine, but I am obviously missing something: http://twistedmatrix.com/trac/wiki/Downloads Can someone tell me how to update Twisted properly on my Mac?
[ "Try using virtualenv and pip (sudo easy_install virtualenv pip), which are great ways to avoid the dependency hell that you are experiencing.\nWith virtualenv you can create isolated Python environments, and then using pip you can directly install new packages into you virtualenvs.\nHere is a complete example:\n\n#create fresh virtualenv, void of old packages, and install latest Twisted\nvirtualenv --no-site-packages twisted_env\npip -E twisted_env install -U twisted\n\n#now activate the virtualenv\ncd twisted_env\nsource bin/activate\n\n#test to see you have latest Twisted:\npython -c \"import twisted; print twisted.__version__\"\n\n", "You can download that file you mentioned by scrolling to the bottom and click \"Download in other formats\"\nOtherwise just do svn update.\n", "The answer was hidden away here:\nhttp://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhyamIgettingImportErrorsforTwistedsubpackagesonOSX10.5\nNot really clear on exactly how/where to fix the issue though.\nAfter some digging I was able to solve it with this: \nFrom the command prompt type: pico ~/.bash_profile\nAdd to the top of that file: export PYTHONPATH=~/Library/Python/2.5/site-packages/\nSave and exit the file and you will finally be running the latest and greatest version twisted. (assuming that you have already downloaded and installed it from the twisted site)\n" ]
[ 17, 1, 1 ]
[]
[]
[ "python", "twisted" ]
stackoverflow_0001117255_python_twisted.txt
Q: Access Ruby objects with Python via XML-RPC? I am trying to export a Ruby framework via XML-RPC. However I am having some problems when trying to call a method from a class not directly added as a handler to the XML-RPC server. Please see my example below: I have a test Ruby XML-RPC server as follows: require "xmlrpc/server" class ExampleBar def bar() return "hello world!" end end class ExampleFoo def foo() return ExampleBar.new end def test() return "test!" end end s = XMLRPC::Server.new( 9090 ) s.add_introspection s.add_handler( "example", ExampleFoo.new ) s.serve And I have a test Python XML-RPC Client as follows: import xmlrpclib s = xmlrpclib.Server( "http://127.0.0.1:9090/" ) print s.example.foo().bar() I would expect the python client to print "hello world!" as it is the equivalent of the following ruby code: example = ExampleFoo.new puts example.foo().bar() However it generates an error: "xmlrpclib.ProtocolError: <ProtocolError for 127.0.0.1:9090/: 500 Internal Server Error>". print s.example.test() works fine. I don't expect the new ExampleBar object to go over the wire, but I would expect it to be 'cached' server side and the subsequent call to bar() to be honoured. Can XML-RPC support this kind of usage or is it too basic? So I guess my question really is: how can I get this working, and if not with XML-RPC, then with what? A: Your client (s in your Python code) is a ServerProxy object. It only accepts return values of type boolean, integers, floats, arrays, structures, dates or binary data. However, without you doing the wiring, there is no way for it to return another ServerProxy, which you would need for accessing another class. You could probably implement an object cache on the Ruby side, but it would involve keeping track of active sessions and deciding when to remove objects, how to handle missing objects, etc. Instead I would suggest exposing a thin wrapper on the ruby side that does atomic operations like: def foobar() return ExampleFoo.new().foo().bar() end A: XML-RPC can't pass objects. The set of parameter types is limited (as jakber says). A: Returning a nil inside of a supported data structure will also cause an Internal Server Error message. The stdlib ruby xmlrpc server does not appear to support the xmlrpc extensions which allow nils, even though the python side does. xmlrpc4r supports nils but I haven't tried it yet.
Access Ruby objects with Python via XML-RPC?
I am trying to export a Ruby framework via XML-RPC. However I am having some problems when trying to call a method from a class not directly added as a handler to the XML-RPC server. Please see my example below: I have a test Ruby XML-RPC server as follows: require "xmlrpc/server" class ExampleBar def bar() return "hello world!" end end class ExampleFoo def foo() return ExampleBar.new end def test() return "test!" end end s = XMLRPC::Server.new( 9090 ) s.add_introspection s.add_handler( "example", ExampleFoo.new ) s.serve And I have a test Python XML-RPC Client as follows: import xmlrpclib s = xmlrpclib.Server( "http://127.0.0.1:9090/" ) print s.example.foo().bar() I would expect the python client to print "hello world!" as it is the equivalent of the following ruby code: example = ExampleFoo.new puts example.foo().bar() However it generates an error: "xmlrpclib.ProtocolError: <ProtocolError for 127.0.0.1:9090/: 500 Internal Server Error>". print s.example.test() works fine. I don't expect the new ExampleBar object to go over the wire, but I would expect it to be 'cached' server side and the subsequent call to bar() to be honoured. Can XML-RPC support this kind of usage or is it too basic? So I guess my question really is: how can I get this working, and if not with XML-RPC, then with what?
[ "Your client (s in you Python code) is a ServerProxy object. It only accepts return values of type boolean, integers, floats, arrays, structures, dates or binary data.\nHowever, without you doing the wiring, there is no way for it to return another ServerProxy, which you would need for accessing another class. You could probably implement an object cache on the Ruby side, but it would involve keeping track of active session and deciding when to remove objects, how to handle missing objects, etc.\nInstead I would suggest exposing a thin wrapper on the ruby side that does atomic operations like:\ndef foobar()\n return ExampleFoo.new().foo().bar()\nend\n\n", "XML-RPC can't pass objects. The set of parameter types is limited (as jakber says).\n", "Returning a nil inside of a supported data structure will also cause an Internal Server Error message. The stdlib ruby xmlrpc server does not appear to support the xmlrpc extensions which allow nils, even though the python side does. xmlrpc4r supports nils but I haven't tried it yet.\n" ]
[ 5, 1, 1 ]
[]
[]
[ "interop", "python", "ruby", "xml_rpc" ]
stackoverflow_0000264128_interop_python_ruby_xml_rpc.txt
Q: What's Python's equivalent to Java InputStream's available method? Java's InputStream provides a method named available which returns the number of bytes that can be read without blocking. How can I achieve this in Python? A: You've got to tell us what type of object you're working with. I'm assuming you're talking about a socket read. Either you read the socket with blocking or you read without blocking. You can measure how you have just read in a non-blocking read, if you are interested in that. However, it sounds like you are trying to bend python into a java.io style stream-buffer paradigm that it just doesn't support in detail. A: Maybe the answers to this question will help. Or that link. To summarize, you could use select, which works for sockets in Windows and for sockets and other files (and pipes) in UNIX.
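To make the linked suggestion concrete, a sketch of the usual Python idiom for a non-blocking availability check on a socket; select with a zero timeout just polls (sock is assumed to be an already-connected socket):

    import select
    readable, _, _ = select.select([sock], [], [], 0)
    if readable:
        data = sock.recv(4096)   # will not block for this first read

Note that unlike Java's available(), this only tells you that some bytes are ready, not how many; an exact count needs platform-specific calls (e.g. FIONREAD via ioctl on Unix).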
What's Python's equivalent to Java InputStream's available method?
Java's InputStream provides a method named available which returns the number of bytes that can be read without blocking. How can I achieve this in Python?
[ "You've got to tell us what type of object you're working with. I'm assuming you're talking about a socket read. Either you read the socket with blocking or you read without blocking. You can measure how you have just read in a non-blocking read, if you are interested in that. However, it sounds like you are trying to bend python into a java.io style stream-buffer paradigm that it just doesn't support in detail.\n", "Maybe the answers to this question will help.\nOr that link.\nTo summarize, you could use select, which works for sockets in Windows and for sockets and other files (and pipes) in UNIX.\n" ]
[ 3, 1 ]
[]
[]
[ "java", "python", "sockets" ]
stackoverflow_0001257264_java_python_sockets.txt
Q: python: slow timeit() function When I run the code below outside of timeit(), it appears to complete instantaneously. However when I run it within the timeit() function, it takes much longer. Why? >>> import timeit >>> t = timeit.Timer("3**4**5") >>> t.timeit() 16.55522028637718 Using: Python 3.1 (x86) - AMD Athlon 64 X2 - WinXP (32 bit) A: The timeit() function runs the code many times (default one million) and takes an average of the timings. To run the code only once, do this: t.timeit(1) but that will give you skewed results - it repeats for good reason. To get the per-loop time having let it repeat, divide the result by the number of loops. Use a smaller value for the number of repeats if one million is too many: count = 1000 print t.timeit(count) / count A: Because timeit defaults to running it one million times. The point is to do micro-benchmarks, and the only way to get accurate timings of short events is to repeat them many times. A: According to the docs, Timer.timeit() runs your code one million times by default. Use the "number" parameter to change this default: t.timeit(number=100) for example. A: Timeit runs for one million loops by default. You also may have order of operations issues: (3**4)**5 != 3**4**5.
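A side note on the last answer's point about operator precedence: ** is right-associative in Python, so even a single evaluation of this expression is a genuinely big computation, which compounds the million-repetition default:

    >>> 3**4**5 == 3**(4**5)
    True
    >>> len(str(3**4**5))   # 3**1024 is roughly a 489-digit number
    489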
python: slow timeit() function
When I run the code below outside of timeit(), it appears to complete instantaneously. However when I run it within the timeit() function, it takes much longer. Why? >>> import timeit >>> t = timeit.Timer("3**4**5") >>> t.timeit() 16.55522028637718 Using: Python 3.1 (x86) - AMD Athlon 64 X2 - WinXP (32 bit)
[ "The timeit() function runs the code many times (default one million) and takes an average of the timings.\nTo run the code only once, do this:\nt.timeit(1)\n\nbut that will give you skewed results - it repeats for good reason.\nTo get the per-loop time having let it repeat, divide the result by the number of loops. Use a smaller value for the number of repeats if one million is too many:\ncount = 1000\nprint t.timeit(count) / count\n\n", "Because timeit defaults to running it one million times. The point is to do micro-benchmarks, and the only way to get accurate timings of short events is to repeat them many times.\n", "According to the docs, Timer.timeit() runs your code one million times by default. Use the \"number\" parameter to change this default:\nt.timeit(number=100)\n\nfor example.\n", "Timeit runs for one million loops by default.\nYou also may have order of operations issues: (3**4)**5 != 3**4**5.\n" ]
[ 32, 6, 4, 2 ]
[]
[]
[ "python", "timeit", "timer" ]
stackoverflow_0001257727_python_timeit_timer.txt
Q: get_allowed_auths() in paramiko for authentication types I am trying to get the supported authentication types/methods from a running SSH server in Python. I found the method get_allowed_auths() in the ServerInterface class in Paramiko, but I can't tell whether it is usable in a simple client-like snippet of code (I am writing something that accomplishes only this one task). Can anyone suggest some links with examples, other than the distribution documentation? Or any other ideas on how to do this? Thanks. A: You can try to authenticate using no authentication, which should always fail, but the server will then send back the auth types that can continue. There is an auth_none() method provided by paramiko.Transport to do this. import paramiko import socket s = socket.socket() s.connect(('localhost', 22)) t = paramiko.Transport(s) t.connect() try: t.auth_none('') except paramiko.BadAuthenticationType, err: print err.allowed_types
get_allowed_auths() in paramiko for authentication types
I am trying to get the supported authentication types/methods from a running SSH server in Python. I found the method get_allowed_auths() in the ServerInterface class in Paramiko, but I can't tell whether it is usable in a simple client-like snippet of code (I am writing something that accomplishes only this one task). Can anyone suggest some links with examples, other than the distribution documentation? Or any other ideas on how to do this? Thanks.
[ "You can try to authenticate using no authentication, which should always fail, but the server will then send back the auth types that can continue. There is an auth_none() method provided by paramiko.Transport to do this.\nimport paramiko\nimport socket\n\ns = socket.socket()\ns.connect(('localhost', 22))\nt = paramiko.Transport(s)\nt.connect()\n\ntry:\n t.auth_none('')\nexcept paramiko.BadAuthenticationType, err:\n print err.allowed_types\n\n" ]
[ 4 ]
[]
[]
[ "authentication", "paramiko", "python", "ssh" ]
stackoverflow_0001253870_authentication_paramiko_python_ssh.txt
Q: create an array from a txt file I'm new in python and I have a problem. I have some measured data saved in a txt file. the data is separated with tabs, it has this structure: 0 0 -11.007001 -14.222319 2.336769 i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point. The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates. I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates. I already started a bit and here is what I have so far: ## read file coords = [x.split('\t') for x in open(f,'r').read().replace('\r','')[:-1].split('\n')] ## extract the information you want simnum = [int(x[0]) for x in coords] npts = [int(x[1]) for x in coords] xyz = array([map(float,x[2:]) for x in coords]) but I don't know how to combine these 2 lists and this one array. in the end i would like to have something like this: array = [simnum][num_dat_point][xyz] thanks for your help. I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this. thanks again A: you can combine them with zip function, like so: for sim, datapoint, x, y, z in zip(simnum, npts, *xyz): # do your thing or you could avoid list comprehensions altogether and just iterate over the lines of the file: for line in open(fname): lst = line.split('\t') sim, datapoint = int(lst[0]), int(lst[1]) x, y, z = [float(i) for i in lst[2:]] # do your thing to parse a single line you could (and should) do the following: coords = [x.split('\t') for x in open(fname)] A: According to the zen of python, flat is better than nested. I'd just use a dict. import csv f = csv.reader(open('thefile.csv'), delimiter='\t', quoting=csv.QUOTE_NONNUMERIC) result = {} for simn, dpoint, c1, c2, c3 in f: result[simn, dpoint] = c1, c2, c3 # pretty-prints the result: from pprint import pprint pprint(result) A: This seems like a good opportunity to use itertools.groupby. import itertools import csv file = open("data.txt") reader = csv.reader(file, delimiter='\t') result = [] for simnumberStr, rows in itertools.groupby(reader, key=lambda t: t[0]): simData = [] for row in rows: simData.append([float(v) for v in row[2:]]) result.append(simData) file.close() This will create a 3 dimensional list named 'result'. The first index is the simulation number, and the second index is the data index within that simulation. The value is a list of integers containing the x, y, and z coordinate. Note that this assumes the data is already sorted on simulation number and data number. A: essentially the difficulty is what happens if different simulations have different numbers of points. You will therefore need to dimension an array to the appropriate sizes first. t should be an array of at least max(simnum) x max(npts) x 3. To eliminate confusion you should initialise with not-a-number, this will allow you to see missing points. then use something like for x in coords: t[int(x[0])][int(x[1])][0]=float(x[3]) t[int(x[0])][int(x[1])][1]=float(x[4]) t[int(x[0])][int(x[1])][2]=float(x[5]) is this what you meant? 
A: You could be using many different kinds of containers for your purposes, but none of them has array as an unqualified name -- Python has a module array which you can import from the standard library, but the array.array type is too limited for your purposes (1-D only and with elementary types as contents); there's a popular third-party extension known as numpy, which does have a powerful numpy.array type, which you could use if you has downloaded and installed the extension -- but as you never even once mention numpy I doubt that's what you mean; the relevant builtin types are list and dict. I'll assume you want any container whatsoever -- but if you could learn to use precise terminology in the future, that will substantially help you AND anybody who's trying to help you (say list when you mean list, array only when you DO mean array, "container" when you're uncertain about what container to use, and so forth). I suggest you look at the csv module in the standard library for a more robust way to reading your data, but that's a separate issue. Let's start from when you have the coords list of lists of 5 strings each, each sublist with strings representing two ints followed by three floats. Two more key aspects need to be specified... One key aspect you don't tell us about: is the list sorted in some significant way? is there, in particular, some significant order you want to keep? As you don't even mention either issue, I will have to assume one way or another, and I'll assume that there isn't any guaranteed nor meaningful order; but, no repetition (each pair of simulation/datapoint numbers is not allowed to occur more than once). Second key aspect: are there the same number of datapoints per simulation, in increasing order (0, 1, 2, ...), or is that not necessarily the case (and btw, are the simulation themselves numbered 0, 1, 2, ...)? Again, no clue from you on this indispensable part of the specs -- note how many assumptions you're forcing would-be helpers to make by just not telling us about such obviously crucial aspects. Don't let people who want to help you stumble in the dark: rather, learn to ask questions the smart way -- this will save untold amounts of time to yourself AND would-be helpers, and give you higher-quality and more relevant help, so, why not do it? Anyway, forced to make yet another assumption, I'll have to assume nothing at all is known about the simulation numbers nor about the numers of datapoints in each simulation. With these assumptions dict emerges as the only sensible structure to use for the outer container: a dictionary whose key is a tuple with two items, simulation number then datapoint number within the simulation. The values may as well be tuple, too (with three floats each), since it does appear that you have exactly 3 coordinates per line. With all of these assumptions...: def make_container(coords): result = dict() for s, d, x, y, z in coords: key = int(s), int(d) value = float(x), float(y), float(z) result[key] = value return result It's always best, and fastest, to have all significant code within def statements (i.e. as functions to be called, possibly with appropriate arguments), so I'm presenting it this way. make_container returns a dictionary which you can address with the simulation number and datapoint number; for example, d = make_container(coords) print d[0, 0] will print the x, y, z for dp 0 of sim 0, assuming one exists (you would get an error if such a sim/dp combination did not exist). dicts have many useful methods, e.g. 
changing the print statement above to print d.get((0, 0)) (yes, you do need double parentheses here -- inner ones to make a tuple, outer ones to call get with that tuple as its single argument), you'd see None, rather than get an exception, if there was no such sim/dp combinarion as (0, 0). If you can edit your question to make your specs more precise (perhaps including some indication of ways you plan to use the resulting container, as well as the various key aspects I've listed above), I might well be able to fine-tune this advice to match your need and circumstances much better (and so might ever other responder, regarding their own advice!), so I strongly recommend you do so -- thanks in advance for helping us help you!-) A: First I'd point out that your first data point appears to be an index, and wonder if the data is therefore important or not, but whichever :-) def parse(line): mch = re.compile('^(\d+)\s+(\d+)\s+([-\d\.]+)\s+([-\d\.]+)\s+([-\d\.]+)$') m = mch.match(line) if m: l = m.groups() (idx,data,xyz) = (int(l[0]),int(l[1]), map(float, l[2:])) return (idx, data, xyz) return None finaldata = [] file = open("data.txt",'r') for line in file: r = parse(line) if r is not None: finaldata.append(r) Final data should have output along the lines of: [(0, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]), (1, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]), (2, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]), (3, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]), (4, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999])] This should be pretty robust about dealing w/ the whitespace issues (tabs spaces whatnot)... I also wonder how big your data files are, mine are usually large so being able to process them in chunks or groups become more important... Anyway this will work in python 2.6. A: Are you sure a 3d array is what you want? It seems more likely that you want a 2d array, where the simulation number is one dimension, the data point is the second, and then the value stored at that location is the coordinates. This code will give you that. data = [] for coord in coords: if coord[0] not in data: data[coord[0]] = [] data[coord[0]][coord[1]] = (coord[2], coord[3], coord[4]) To get the coordinates at simulation 7, data point 13, just do data[7][13]
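Since numpy came up in one of the answers, here is a minimal sketch of that route; it assumes every (simulation, point) pair is present exactly once and the file is sorted, which matches the numbers given in the question (300 simulations x 32 points), and f is the filename from the question:

    import numpy as np
    data = np.loadtxt(f)                      # shape (9600, 5); splits on whitespace/tabs
    arr = data[:, 2:].reshape(300, 32, 3)     # drop the two index columns
    # arr[simnum][num_dat_point] -> array([x, y, z])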
create an array from a txt file
I'm new to Python and I have a problem. I have some measured data saved in a txt file. The data is separated with tabs and has this structure: 0 0 -11.007001 -14.222319 2.336769 I always have 32 data points per simulation (0,1,2,...,31) and I have 300 simulations (0,1,2,...,299), so the data is sorted first by simulation number and then by data point number. The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates. I would like to create a 3D array: the first dimension should be the simulation number, the second the number of the data point and the third the three coordinates. I already started a bit and here is what I have so far: ## read file coords = [x.split('\t') for x in open(f,'r').read().replace('\r','')[:-1].split('\n')] ## extract the information you want simnum = [int(x[0]) for x in coords] npts = [int(x[1]) for x in coords] xyz = array([map(float,x[2:]) for x in coords]) but I don't know how to combine these two lists and this one array. In the end I would like to have something like this: array = [simnum][num_dat_point][xyz] Thanks for your help. I hope you understand my problem; it's my first posting in a Python forum, so if I did anything wrong, I'm sorry. Thanks again.
[ "you can combine them with zip function, like so:\nfor sim, datapoint, x, y, z in zip(simnum, npts, *xyz):\n # do your thing\n\nor you could avoid list comprehensions altogether and just iterate over the lines of the file:\nfor line in open(fname):\n lst = line.split('\\t')\n sim, datapoint = int(lst[0]), int(lst[1])\n x, y, z = [float(i) for i in lst[2:]]\n # do your thing\n\nto parse a single line you could (and should) do the following:\ncoords = [x.split('\\t') for x in open(fname)]\n\n", "According to the zen of python, flat is better than nested. I'd just use a dict.\nimport csv\nf = csv.reader(open('thefile.csv'), delimiter='\\t',\n quoting=csv.QUOTE_NONNUMERIC)\n\nresult = {}\nfor simn, dpoint, c1, c2, c3 in f:\n result[simn, dpoint] = c1, c2, c3\n\n# pretty-prints the result:\nfrom pprint import pprint\npprint(result)\n\n", "This seems like a good opportunity to use itertools.groupby.\nimport itertools\nimport csv\nfile = open(\"data.txt\")\nreader = csv.reader(file, delimiter='\\t')\nresult = []\nfor simnumberStr, rows in itertools.groupby(reader, key=lambda t: t[0]):\n simData = []\n for row in rows:\n simData.append([float(v) for v in row[2:]])\n result.append(simData)\nfile.close()\n\nThis will create a 3 dimensional list named 'result'. The first index is the simulation number, and the second index is the data index within that simulation. The value is a list of integers containing the x, y, and z coordinate.\nNote that this assumes the data is already sorted on simulation number and data number.\n", "essentially the difficulty is what happens if different simulations have different numbers of points.\nYou will therefore need to dimension an array to the appropriate sizes first. \nt should be an array of at least max(simnum) x max(npts) x 3. \nTo eliminate confusion you should initialise with not-a-number,\nthis will allow you to see missing points.\nthen use something like\nfor x in coords:\n t[int(x[0])][int(x[1])][0]=float(x[3])\n t[int(x[0])][int(x[1])][1]=float(x[4])\n t[int(x[0])][int(x[1])][2]=float(x[5])\n\nis this what you meant?\n", "You could be using many different kinds of containers for your purposes, but none of them has array as an unqualified name -- Python has a module array which you can import from the standard library, but the array.array type is too limited for your purposes (1-D only and with elementary types as contents); there's a popular third-party extension known as numpy, which does have a powerful numpy.array type, which you could use if you has downloaded and installed the extension -- but as you never even once mention numpy I doubt that's what you mean; the relevant builtin types are list and dict. I'll assume you want any container whatsoever -- but if you could learn to use precise terminology in the future, that will substantially help you AND anybody who's trying to help you (say list when you mean list, array only when you DO mean array, \"container\" when you're uncertain about what container to use, and so forth).\nI suggest you look at the csv module in the standard library for a more robust way to reading your data, but that's a separate issue. Let's start from when you have the coords list of lists of 5 strings each, each sublist with strings representing two ints followed by three floats. Two more key aspects need to be specified...\nOne key aspect you don't tell us about: is the list sorted in some significant way? is there, in particular, some significant order you want to keep? 
As you don't even mention either issue, I will have to assume one way or another, and I'll assume that there isn't any guaranteed nor meaningful order; but, no repetition (each pair of simulation/datapoint numbers is not allowed to occur more than once).\nSecond key aspect: are there the same number of datapoints per simulation, in increasing order (0, 1, 2, ...), or is that not necessarily the case (and btw, are the simulation themselves numbered 0, 1, 2, ...)? Again, no clue from you on this indispensable part of the specs -- note how many assumptions you're forcing would-be helpers to make by just not telling us about such obviously crucial aspects. Don't let people who want to help you stumble in the dark: rather, learn to ask questions the smart way -- this will save untold amounts of time to yourself AND would-be helpers, and give you higher-quality and more relevant help, so, why not do it? Anyway, forced to make yet another assumption, I'll have to assume nothing at all is known about the simulation numbers nor about the numers of datapoints in each simulation.\nWith these assumptions dict emerges as the only sensible structure to use for the outer container: a dictionary whose key is a tuple with two items, simulation number then datapoint number within the simulation. The values may as well be tuple, too (with three floats each), since it does appear that you have exactly 3 coordinates per line.\nWith all of these assumptions...:\ndef make_container(coords):\n result = dict()\n for s, d, x, y, z in coords:\n key = int(s), int(d)\n value = float(x), float(y), float(z)\n result[key] = value\n return result\n\nIt's always best, and fastest, to have all significant code within def statements (i.e. as functions to be called, possibly with appropriate arguments), so I'm presenting it this way. make_container returns a dictionary which you can address with the simulation number and datapoint number; for example,\nd = make_container(coords)\nprint d[0, 0]\n\nwill print the x, y, z for dp 0 of sim 0, assuming one exists (you would get an error if such a sim/dp combination did not exist). dicts have many useful methods, e.g. 
changing the print statement above to\nprint d.get((0, 0))\n\n(yes, you do need double parentheses here -- inner ones to make a tuple, outer ones to call get with that tuple as its single argument), you'd see None, rather than get an exception, if there was no such sim/dp combinarion as (0, 0).\nIf you can edit your question to make your specs more precise (perhaps including some indication of ways you plan to use the resulting container, as well as the various key aspects I've listed above), I might well be able to fine-tune this advice to match your need and circumstances much better (and so might ever other responder, regarding their own advice!), so I strongly recommend you do so -- thanks in advance for helping us help you!-)\n", "First I'd point out that your first data point appears to be an index, and wonder if the data is therefore important or not, but whichever :-)\ndef parse(line):\n mch = re.compile('^(\\d+)\\s+(\\d+)\\s+([-\\d\\.]+)\\s+([-\\d\\.]+)\\s+([-\\d\\.]+)$')\n m = mch.match(line)\n if m:\n l = m.groups()\n (idx,data,xyz) = (int(l[0]),int(l[1]), map(float, l[2:]))\n return (idx, data, xyz)\n return None\n\nfinaldata = []\nfile = open(\"data.txt\",'r')\nfor line in file:\n r = parse(line)\n if r is not None:\n finaldata.append(r)\n\nFinal data should have output along the lines of:\n[(0, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),\n (1, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),\n (2, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),\n (3, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),\n (4, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999])]\n\nThis should be pretty robust about dealing w/ the whitespace issues (tabs spaces whatnot)... \nI also wonder how big your data files are, mine are usually large so being able to process them in chunks or groups become more important... Anyway this will work in python 2.6.\n", "Are you sure a 3d array is what you want? It seems more likely that you want a 2d array, where the simulation number is one dimension, the data point is the second, and then the value stored at that location is the coordinates.\nThis code will give you that.\ndata = []\nfor coord in coords:\n if coord[0] not in data:\n data[coord[0]] = []\n data[coord[0]][coord[1]] = (coord[2], coord[3], coord[4])\n\nTo get the coordinates at simulation 7, data point 13, just do data[7][13]\n" ]
[ 2, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "arrays", "python", "text" ]
stackoverflow_0001256099_arrays_python_text.txt
Q: How do I get omnicompletion for Python external libraries? I've set up my gVim to have omnicompletion, but only for the standard library at the moment. How do I include other libraries (Django, Pygame, etc.)? Thanks! A: Here's a tutorial on using omnicomplete with Django.
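For reference, a sketch of the usual approach: Vim's pythoncomplete omnifunc completes whatever its embedded Python can import, so exposing extra libraries is mostly a sys.path matter. The paths below are illustrative, not specific to any setup:

    " in .vimrc
    filetype plugin on
    autocmd FileType python set omnifunc=pythoncomplete#Complete
    " make extra libraries importable by Vim's embedded Python
    python import sys; sys.path.append('/path/to/django'); sys.path.append('/path/to/pygame')

Setting $PYTHONPATH before launching gVim achieves the same thing.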
How do I get omnicompletion for Python external libraries?
I've set up my gVim to have omnicompletion, but only for the standard library at the moment. How do I include other libraries (Django, Pygame, etc.)? Thanks!
[ "Here's a tutorial on using omnicomplete with Django.\n" ]
[ 1 ]
[]
[]
[ "python", "vim" ]
stackoverflow_0001257742_python_vim.txt
Q: Problem reading an Integer in java that was written using python’s struct.pack method First I write the integer using python: out.write( struct.pack(">i", int(i)) ); I then read the integer using DataInputStream.readInt() in Java. It works, but when it tries to read the number 10, and probably some other numbers too, it starts to read garbage. Writing the numbers: 0, 4, 5, 0, 5, 13, 10, 1, 5, 6 Java reads: 0, 4, 5, 0, 5, 13, 167772160, 16777216, 83886080 What am I doing wrong? A: Psychic debugging: You're writing the output in text mode on Windows using code like this: f = open("output.dat", "w") f.write(my_data) and that's making your 10 (which is a newline) become carriage return / newline (13, 10). You need to write your output in binary mode: f = open("output.dat", "wb") f.write(my_data)
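For completeness, a sketch of the corrected writer side; the file name and number list are just examples, and the key point is the 'b' flag:

    import struct
    numbers = [0, 4, 5, 0, 5, 13, 10, 1, 5, 6]
    with open("output.dat", "wb") as out:   # binary mode: no newline translation
        for i in numbers:
            out.write(struct.pack(">i", int(i)))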
Problem reading an Integer in java that was written using python’s struct.pack method
First I write the integer using python: out.write( struct.pack(">i", int(i)) ); I then read the integer using DataInputStream.readInt() in Java. It works, but when it tries to read the number 10, and probably some other numbers too, it starts to read garbage. Writing the numbers: 0, 4, 5, 0, 5, 13, 10, 1, 5, 6 Java reads: 0, 4, 5, 0, 5, 13, 167772160, 16777216, 83886080 What am I doing wrong?
[ "Psychic debugging: You're writing the output in text mode on Windows using code like this:\nf = open(\"output.dat\", \"w\")\nf.write(my_data)\n\nand that's making your 13 (which is a newline) become carriage return / newline (10, 13).\nYou need to write your output in binary mode:\nf = open(\"output.dat\", \"wb\")\nf.write(my_data)\n\n" ]
[ 7 ]
[]
[]
[ "java", "python" ]
stackoverflow_0001257856_java_python.txt
Q: Django: Permalinks for Admin I know the link template to reach an object is like the following: "{{ domain }}/{{ admin_dir }}/{{ appname }}/{{ modelname }}/{{ pk }}" Is there a built-in way to get a permalink for an object? from django.contrib import admin def get_admin_permalink(instance, admin_site=admin.site): # returns admin URL for instance change page raise NotImplementedError EDIT It seems in v1.1 the admin has named URLs. Unfortunately it's not yet released. A: 1.1 is out, the doc is right here: http://docs.djangoproject.com/en/dev/ref/contrib/admin/#admin-reverse-urls http://docs.djangoproject.com/en/dev/ref/templates/builtins/#url I have also used it a bit; the admin namespace has to be specified whenever you fetch an existing admin URL. # in urls.py, assuming you have a customized view url(r'foo/$', 'foo', name='foo_index'), # in the template, to get the admin url {% url admin:foo_index %} In 1.1, whenever an admin url is fetched, you'll have to specify the 'admin' namespace.
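With the named admin URLs mentioned in the answer, the stub from the question could be filled in roughly like this; a sketch for Django 1.1, and the _meta attribute names follow that era's API:

    from django.core.urlresolvers import reverse

    def get_admin_permalink(instance):
        opts = instance._meta
        return reverse('admin:%s_%s_change' % (opts.app_label, opts.module_name),
                       args=(instance.pk,))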
Django: Permalinks for Admin
I know the link template to reach an object is like the following: "{{ domain }}/{{ admin_dir }}/{{ appname }}/{{ modelname }}/{{ pk }}" Is there a built-in way to get a permalink for an object? from django.contrib import admin def get_admin_permalink(instance, admin_site=admin.site): # returns admin URL for instance change page raise NotImplementedError EDIT It seems in v1.1 the admin has named URLs. Unfortunately it's not yet released.
[ "1.1 is out, the doc is right here: http://docs.djangoproject.com/en/dev/ref/contrib/admin/#admin-reverse-urls\nhttp://docs.djangoproject.com/en/dev/ref/templates/builtins/#url\nI also used it a bit, the admin namespace will have to be specified whenever you are fetching an existing admin url.\n# in urls.py, assuming you have a customized view\nurl(r'foo/$', 'foo', name='foo_index'),\n\n# in the template, to get the admin url\n{% url admin:foo_index %}\n\nIn 1.1, whenever an admin url is fetched, you'll have to specify the 'admin' namespace.\n" ]
[ 1 ]
[]
[]
[ "django", "django_admin", "permalinks", "python", "reverse" ]
stackoverflow_0000690688_django_django_admin_permalinks_python_reverse.txt
Q: Generic view 'archive_year' produces blank page I am using Django's generic views to create a blog site. The templates I created, entry_archive_day, entry_archive_month, entry_archive, and entry_detail, all work perfectly. But entry_archive_year does not. Instead, it is simply a valid page with no content (not a 404 or other error). It looks like it sees no objects in object_list. I know that archive uses a latest list instead of object_list, but that's not the case with archive_year, correct? Thanks! A: To solve your problem: If you set make_object_list=True when calling archive_year, then the list of objects for that year will be available as object_list. As a quick example, if your url pattern looks like url(r'^(?P<year>\d{4})/$', 'archive_year', info_dict, name="entry_archive_year") where info_dict is a dictionary containing the queryset and date_field, change it to url(r'^(?P<year>\d{4})/$', 'archive_year', dict(info_dict, make_object_list=True), name="entry_archive_year") Explanation: The generic view archive_year has an optional argument make_object_list. By default, it is set to False, and object_list is passed to the template as an empty list. make_object_list: A boolean specifying whether to retrieve the full list of objects for this year and pass those to the template. If True, this list of objects will be made available to the template as object_list. (The name object_list may be different; see the docs for object_list in the "Template context" section below.) By default, this is False. A reason for this is that you might not always want to display the entire object list in the entry_archive_year view. You may have hundreds of blog posts for that year, too many to display on one page. Instead, archive_year adds date_list to the template context. This allows you to create links to the monthly archive pages of that year, for the months which have entries. date_list: A list of datetime.date objects representing all months that have objects available in the given year, according to queryset, in ascending order. There's more info in the Django docs. As requested in the comment below, an example of how to use date_list: To use date_list, your entry_archive_year template would contain something like this: <ul> {% for month in date_list %} <li><a href="/blog/{{month|date:"Y"}}/{{month|date:"b"}}"> {{month|date:"F"}}</a></li> {% endfor %} </ul> Note that I've hardcoded the url - in practice it would be better to use the url template tag. For an example of date_list being used in the wild, look at the Django Weblog 2009 Archive.
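For context, an info_dict of the kind the answer refers to usually looks something like this; Entry and pub_date are placeholders for the blog's actual model and date field:

    info_dict = {
        'queryset': Entry.objects.all(),
        'date_field': 'pub_date',
    }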
Generic view 'archive_year' produces blank page
I am using Django's generic views to create a blog site. The templates I created, entry_archive_day, entry_archive_month, entry_archive, and entry_detail all work perfectly. But entry_archive_year does not. Instead, it is simply a valid page with no content (not a 404 or other error). It looks like it sees no objects in **object_list**. I know that archive uses a latest list instead of object_list, but that's not the case with archive_year, correct? Thanks!
[ "To solve your problem:\nIf you set make_object_list=True when calling archive_year, then the list of objects for that year will be available as object_list.\nAs a quick example, if your url pattern looks like\nurl(r'^(?P<year>\\d{4})/$', 'archive_year', info_dict, name=\"entry_archive_year\")\n\nwhere info_dict is a dictionary containing the queryset and date_field, change it to\nurl(r'^(?P<year>\\d{4}/$', 'archive_year', dict(info_dict,make_object_list=True),\n name=\"entry_archive_year\")\n\nExplanation:\nThe generic view archive_year has an optional argument make_object_list. By default, it is set to false, and object_list is passed to the template as an empty list.\n\nmake_object_list: A boolean specifying whether to retrieve the full list of objects for this year and pass those to the template. If True, this list of objects will be made available to the template as object_list. (The name object_list may be different; see the docs for object_list in the \"Template context\" section below.) By default, this is False.\n\nA reason for this is that you might not always want to display the entire object list in the entry_archive_year view. You may have hundreds of blog posts for that year, too many to display on one page.\nInstead, archive_year adds date_list to the template context. This allows you to create links to the monthly archive pages of that year, for the months which have entries.\n\ndate_list: A list of datetime.date objects representing all months that have objects available in the given year, according to queryset, in ascending order.\n\nThere's more info in the Django docs.\nAs requested in the comment below, an example of how to use date_list:\nTo use date_list, your entry_archive_year template would contain something like this:\n<ul>\n {% for month in date_list %}\n\n <li><a href=\"/blog/{{month|date:\"Y\"}}/{{month|date:\"b\"}}>\n {{month|date:\"F\"}}</a></li>\n {% endfor %}\n</ul>\n\nNote that I've hardcoded the url - in practice it would be better to use the url template tag. For an example of date_list being used in the wild, look at the Django Weblog 2009 Archive.\n" ]
[ 5 ]
[]
[]
[ "django", "generics", "python", "view" ]
stackoverflow_0001257943_django_generics_python_view.txt
Q: Hit a URL on other server from google app engine I want to hit a URL from Python in Google App Engine. Can anyone please tell me how I can hit a URL using Python in Google App Engine? A: You can use the URLFetch API from google.appengine.api import urlfetch url = "http://www.google.com/" result = urlfetch.fetch(url) if result.status_code == 200: doSomethingWithResult(result.content) A: It always depends on POST or GET. urllib can post to a form somewhere else, e.g. if we want the rather tricky thing of validating between different hashes (sha and md5): import urllib data = urllib.urlencode({"id" : str(d), "password" : self.request.POST['passwd'], "edit" : "edit"}) result = urlfetch.fetch(url="http://www..../Login.do", payload=data, method=urlfetch.POST, headers={'Content-Type': 'application/x-www-form-urlencoded'}) if result.content.find('loggedInOrYourInfo') > 0: self.response.out.write("clear") else: self.response.out.write("wrong password ")
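A slightly more defensive GET sketch; deadline and DownloadError are standard urlfetch features, though the ten-second value here is only an illustrative choice:

from google.appengine.api import urlfetch

def fetch_page(url):
    # DownloadError covers timeouts and unreachable hosts; deadline
    # raises the default fetch timeout.
    try:
        result = urlfetch.fetch(url, method=urlfetch.GET, deadline=10)
    except urlfetch.DownloadError:
        return None
    return result.content if result.status_code == 200 else None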
Hit a URL on other server from google app engine
I want to hit a URL from Python in Google App Engine. Can anyone please tell me how I can hit a URL using Python in Google App Engine?
[ "You can use the URLFetch API\nfrom google.appengine.api import urlfetch\n\nurl = \"http://www.google.com/\"\nresult = urlfetch.fetch(url)\nif result.status_code == 200:\n doSomethingWithResult(result.content)\n\n", "It always depends on post or get. urllib can post to a form somewhere else, if we want the rather tricky thing validate between different hashes (sha and md5)\nimport urllib \ndata = urllib.urlencode({\"id\" : str(d), \"password\" : self.request.POST['passwd'], \"edit\" : \"edit\"})\nresult = urlfetch.fetch(url=\"http://www..../Login.do\",\npayload=data,\nmethod=urlfetch.POST,\nheaders={'Content-Type': 'application/x-www-form-urlencoded'})\nif result.content.find('loggedInOrYourInfo') > 0: \n self.response.out.write(\"clear\") \nelse: \n self.response.out.write(\"wrong password \") \n\n" ]
[ 6, 0 ]
[]
[]
[ "google_app_engine", "python", "url" ]
stackoverflow_0001233284_google_app_engine_python_url.txt
Q: Starting semantic image recognition How can I recognize (in)appropriate images? To facilitate and simplify photo and image moderation and administration targeting GAE, I am trying to get started with basic Python image recognition, i.e. extracting basic semantic information about what an image looks like, to hold back doubtful material until a human can judge it, and to approve the majority that are good. A test batch of > 10,000 images had only one or just a very few inappropriate ones, so avoiding false positives is naturally important. I found the following links to follow, and thank you all in advance for any advice, suggestions and recommendations. Very basically, moderation will display a number of images with just an "OK" button, or vice versa a default "OK" and a "Disapprove" button, depending on the default decision (the default is probably to publish everything, with ad hoc (human) disapproval of anything unsuitable, since the absolute majority, > 99 %, of the material is suitably good). link text link text A: In python you could always: import supreme_court Because when it comes to pornography, they know it when they see it. Mediocre jokes aside, I would develop a bunch of fuzzy image recognizers that match easy things (like how much of the image is made up of a skin color tone?). You could probably come up with a good amount of suspicious variables at this point - this is the hard(ish) part. Then use Classification and Regression Trees to implement the actual decision engine. Train it with your training sample, then do cross-sample validation to get a sense of the false positives/negatives. A: I believe you will want to start here http://en.wikipedia.org/wiki/Feature_detection_%28computer_vision%29 and then brush up on your statistical theory, reading any papers on the topic.
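As a concrete starting point for the skin-tone variable the first answer mentions, a hedged sketch using PIL; the RGB bounds are a commonly cited rule of thumb rather than tuned values, and a real filter would feed features like this into a trained classifier instead of thresholding directly:

from PIL import Image

def skin_fraction(path):
    # Crude per-pixel heuristic: count pixels whose channels fall in a
    # commonly cited RGB skin-tone range, then return the fraction.
    img = Image.open(path).convert('RGB')
    pixels = list(img.getdata())
    skin = 0
    for r, g, b in pixels:
        if r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15:
            skin += 1
    return float(skin) / len(pixels)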
Starting semantic image recognition
How can I recognize (in)appropriate images? To facilitate and simplify photo and image moderation and administration targeting GAE, I am trying to get started with basic Python image recognition, i.e. extracting basic semantic information about what an image looks like, to hold back doubtful material until a human can judge it, and to approve the majority that are good. A test batch of > 10,000 images had only one or just a very few inappropriate ones, so avoiding false positives is naturally important. I found the following links to follow, and thank you all in advance for any advice, suggestions and recommendations. Very basically, moderation will display a number of images with just an "OK" button, or vice versa a default "OK" and a "Disapprove" button, depending on the default decision (the default is probably to publish everything, with ad hoc (human) disapproval of anything unsuitable, since the absolute majority, > 99 %, of the material is suitably good). link text link text
[ "In python you could always:\nimport supreme_court\n\nBecause when it comes to pornography, they know it when they see it.\nMediocre jokes aside, I would develop a bunch of fuzzy image recognizers that match easy things (like how much of the image is made up of a skin color tone?). You could probably come up with a good amount of suspicious variables at this point - this is the hard(ish) part. Then use Classification and Regression Trees to implement the actual decision engine. Train it with your training sample, then do cross-sample validation to get a sense of the false positives/negatives. \n", "I believe you will want to start here\nhttp://en.wikipedia.org/wiki/Feature_detection_%28computer_vision%29\nand then brush up on your statistical theory, reading any papers on the topic.\n" ]
[ 2, 2 ]
[]
[]
[ "computer_vision", "image", "image_recognition", "pattern_recognition", "python" ]
stackoverflow_0001257933_computer_vision_image_image_recognition_pattern_recognition_python.txt
Q: Elixir Entity with a list of tuples in it. ex. Cooking Recipe with list of (ingrediant, quantity) tuple I'm trying to build an elixir model in which I have a class with a list (of variable size) of tuples. One example would be a recipe, where I can do something like this: class Recipe(Entity): ingrediants = OneToMany('IngrediantList') cooking_time = Field(Integer) ... class IngrediantList(Entity): ingrediant = ManyToOne('Ingrediant') quantity = Field(Number(3,2)) class Ingrediant(Entity): name = Field(String(30)) ... It has a number of shortcomings. For one, I don't like creating an entity for the ingredient list, which has no meaning with respect to the domain; it takes the fun out of the abstraction. Another is that a query like "which items can I prepare with this ingredient" would get really messy and probably inefficient without adding more relations and/or fields to the model, making it messy in turn. One more example would be a bank deposit slip with a list of denominations and quantities. What is the best way to design such a model? A: This is the correct way to model composite objects. The only thing I'd change is the name of the IngredientList class. Something like RecipeEntry or IngredientQuantity would be more appropriate. Calling it a tuple is just trying to avoid the need to name the fact that a recipe needs some quantity of some ingredient. If for most of the code you don't want to consider the details of the association you can use sqlalchemy's associationproxy extension to create a proxy attribute that hides the details. A: solution using sqlalchemy associationproxy This is from the sqlalchemy docs.
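To flesh out the associationproxy suggestion, a sketch of how it could look on Elixir entities, renamed per the first answer; the exact field types and the Elixir/associationproxy interop here are assumptions I have not verified against a specific Elixir release:

from elixir import Entity, Field, Integer, Numeric, Unicode, OneToMany, ManyToOne
from sqlalchemy.ext.associationproxy import association_proxy

class Recipe(Entity):
    entries = OneToMany('RecipeEntry')
    cooking_time = Field(Integer)
    # Hide the association entity for everyday use.
    ingredients = association_proxy('entries', 'ingredient')

class RecipeEntry(Entity):
    recipe = ManyToOne('Recipe')
    ingredient = ManyToOne('Ingredient')
    quantity = Field(Numeric(3, 2))

class Ingredient(Entity):
    name = Field(Unicode(30))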
Elixir Entity with a list of tuples in it. ex. Cooking Recipe with list of (ingrediant, quantity) tuple
I'm trying to build an elixir model in which I have a class with a list (of variable size) of tuples. One example would be a recipe, where I can do something like this: class Recipe(Entity): ingrediants = OneToMany('IngrediantList') cooking_time = Field(Integer) ... class IngrediantList(Entity): ingrediant = ManyToOne('Ingrediant') quantity = Field(Number(3,2)) class Ingrediant(Entity): name = Field(String(30)) ... It has a number of shortcomings. For one, I don't like creating an entity for the ingredient list, which has no meaning with respect to the domain; it takes the fun out of the abstraction. Another is that a query like "which items can I prepare with this ingredient" would get really messy and probably inefficient without adding more relations and/or fields to the model, making it messy in turn. One more example would be a bank deposit slip with a list of denominations and quantities. What is the best way to design such a model?
[ "This is the correct way to model composite objects. The only thing I'd change is the name of the IngredientList class. Something like RecipeEntry or IngredientQuantity would be more appropriate. Calling it a tuple is just trying to avoid the need to name the fact that a recipe needs some quantity of some ingredient.\nIf for most of the code you don't want to consider the details of the association you can use sqlalchemy associationproxy extension to create a proxy attribute to hide the details.\n", "solution using sqlalchemy associationproxy\nThis is from the sqlalchemy docs.\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_elixir", "sqlalchemy" ]
stackoverflow_0001247343_python_python_elixir_sqlalchemy.txt
Q: Google Web Toolkit like application in Django I'm trying to develop an application that would be perfect for GWT; however, I am using this app as a learning example for Django. Is there some precedent for this type of application in Django? A: Pyjamas is sort of like GWT, but written in Python. From there you can make it work with your django code. A: Lots of people have done this by writing their UI in GWT and having it issue ajax calls back to their python backend. There are basically two ways to go about it. First, you can simply use JSON to communicate between the frontend and the backend. That's the approach you will find here (http://palantar.blogspot.com/2006/06/agad-tutorial-ish-sort-of-post.html). Second, some people want to use GWT's RPC system to talk to python backends. This is a little more involved, but some people have created tools (for example, http://code.google.com/p/python-gwt-rpc/). To be honest, most successful projects just use JSON to communicate between GWT and the python server. GWT's RPC is pretty advanced in that it is able to serialize arbitrary java object graphs to and from the client. It's a tricky problem to get right and I'm pretty doubtful that any of the python tools have it right.
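Since the second answer boils down to "expose JSON views and let the GWT (or Pyjamas) frontend call them", a minimal period-appropriate Django sketch; the view name and payload are placeholders:

from django.http import HttpResponse
from django.utils import simplejson  # bundled with Django at the time

def item_list(request):
    # Any GWT/Pyjamas frontend can hit this with a plain XHR.
    data = {'items': ['foo', 'bar']}  # stand-in payload
    return HttpResponse(simplejson.dumps(data),
                        mimetype='application/json')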
Google Web Toolkit like application in Django
I'm trying to develop an application that would be perfect for GWT; however, I am using this app as a learning example for Django. Is there some precedent for this type of application in Django?
[ "Pyjamas is sort of like GWT which is written with Python. From there you can make it work with your django code.\n", "Lots of people have done this by writing their UI in GWT and having it issue ajax calls back to their python backend. There are basically two ways to go about it. First, you can simply use JSON to communicate between the frontend and the backend. That's the approach you will find here (http://palantar.blogspot.com/2006/06/agad-tutorial-ish-sort-of-post.html). Second, some people want to use GWT's RPC system to talk to python backends. This is a little more involved, but some people have created tools (for example, http://code.google.com/p/python-gwt-rpc/).\nTo be honest, most successful projects just use JSON to communicate between GWT and the python server. GWT's RPC is pretty advanced in that it is able to serialize arbitrary java object graphs to and from the client. It's a tricky problem to get right and I'm pretty doubtful that any of the python tools have it right.\n" ]
[ 7, 3 ]
[]
[]
[ "django", "gwt", "python" ]
stackoverflow_0001253056_django_gwt_python.txt
Q: Can you get more information about the online file? I have an online file: http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe. Please do not download it; I want to determine whether the software version has changed, so I want more information about it. For example, using Python, I can get this: import urllib2,urllib req = urllib2.Request('http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe') response = urllib2.urlopen(req) print response.info() print response.geturl() Content-Length: 16868680 Server: qqdlsrv(1.84 for linux) Connection: close Content-Disposition: attachment; filename=TM2009Beta_chs.exe Accept-Ranges: bytes Content-Type: application/octet-stream http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe Can you get more information to let me determine whether the software version has changed? A: Download the first thousand bytes or so of the file using the Range header. Use pefile to parse the PE header and extract version information. With the data, extract useful information such as the time date stamp and other goodies that let you find changes in files without reading the whole thing. A: You can get all kinds of information about an EXE Windows file if you download it (the easy way, by running external utilities on it, or up to a point the hard way, via APIs and your own code simulating those utilities) -- a lot depends on what info was put into it when it was built. Without downloading, you can get only the info the server is giving you, which in this case seems pretty scarce -- I can't believe that server's configured to NOT tell you latest modified date &c. In your shoes, I'd see what can be done on the server side to remedy that dearth of info, so you don't have to download the EXE just to find out more! A: Configure your server to provide a Last-Modified header, and use If-Modified-Since in your request.
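Putting the first answer's two steps together, a hedged sketch: the server advertises Accept-Ranges: bytes, so a Range request for the first few kilobytes should usually capture the PE headers (not guaranteed), and pefile's fast_load mode can parse them without the rest of the file:

import urllib2
import pefile

def remote_build_timestamp(url, nbytes=4096):
    # Fetch only the first few KB via an HTTP Range request.
    req = urllib2.Request(url, headers={'Range': 'bytes=0-%d' % (nbytes - 1)})
    data = urllib2.urlopen(req).read()
    # fast_load skips section data; the link-time TimeDateStamp changes
    # whenever the EXE is rebuilt, so compare it between checks.
    pe = pefile.PE(data=data, fast_load=True)
    return pe.FILE_HEADER.TimeDateStamp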
Can you get more information about the online file?
I have an online file: http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe. Please do not download it; I want to determine whether the software version has changed, so I want more information about it. For example, using Python, I can get this: import urllib2,urllib req = urllib2.Request('http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe') response = urllib2.urlopen(req) print response.info() print response.geturl() Content-Length: 16868680 Server: qqdlsrv(1.84 for linux) Connection: close Content-Disposition: attachment; filename=TM2009Beta_chs.exe Accept-Ranges: bytes Content-Type: application/octet-stream http://dl_dir.qq.com/qqfile/tm/TM2009Beta_chs.exe Can you get more information to let me determine whether the software version has changed?
[ "\nDownload the first thousand bytes or so of the file using the range header.\nUse pefile to parse the PE header and extract version information.\nWith the data, extract useful information such as the time date stamp and other goodies that let you find changes in files without reading the whole thing.\n\n", "You can get all kinds of information about an EXE Windows file if you download it (the easy way, by running external utilities on it, or up to a point the hard way, via APIs and your own code simulating those utilities) -- a lot depends on what info was put into it when it was built. Without downloading, you can get only the info the server is giving you, which in this case seems pretty scarce -- I can't believe that server's configured to NOT tell you latest modified date &c. In your shoes, I'd see what can be done on the server side to remedy that dearth of info, so you don't have to download the EXE just to find out more!\n", "Configure your server to provide a Last-Modified header, and use If-Modified-Since in your request.\n" ]
[ 4, 2, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001258280_python.txt
Q: How to get webcam info with WMI Which WMI class can we use to obtain webcam info? Thanks. A: I think you may actually need WIA -- though the URL I'm quoting is all about images, not videos, I'm sure WIA also has video functionality, I just can't find good docs on that!-(
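WMI has no dedicated webcam class, so a common workaround is to scan Win32_PnPEntity for video devices. A hedged sketch with the Python wmi module; both match heuristics (the usbvideo UVC driver service and the name test) are assumptions, not a documented webcam API:

import wmi

c = wmi.WMI()
for dev in c.Win32_PnPEntity():
    # usbvideo is the generic UVC webcam driver; the name check is a
    # fallback for cameras that ship vendor drivers.
    if dev.Service == 'usbvideo' or (dev.Name and 'camera' in dev.Name.lower()):
        print dev.Name, dev.DeviceID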
How to get webcam info with WMI
Which WMI class can we use to obtain webcam info? Thanks.
[ "I think you may actually need WIA -- though the URL I'm quoting is all about images, not videos, I'm sure WIA also has video functionality, I just can't find good docs on that!-(\n" ]
[ 1 ]
[]
[]
[ "python", "wmi" ]
stackoverflow_0001258455_python_wmi.txt
Q: How to make python urllib2 follow redirect and keep post method I am using urllib2 to post data to a form. The problem is that the form replies with a 302 redirect. According to Python HTTPRedirectHandler the redirect handler will take the request and convert it from POST to GET and follow the 301 or 302. I would like to preserve the POST method and the data passed to the opener. I made an unsuccessful attempt at a custom HTTPRedirectHandler by simply adding data=req.get_data() to the new Request. I am sure this has been done before so I thought I would make a post. Note: this is similar to this post and this one but I don't want to prevent the redirect I just want to keep the POST data. Here is my HTTPRedirectHandler that does not work class MyHTTPRedirectHandler(urllib2.HTTPRedirectHandler): def redirect_request(self, req, fp, code, msg, headers, newurl): """Return a Request or None in response to a redirect. This is called by the http_error_30x methods when a redirection response is received. If a redirection should take place, return a new Request to allow http_error_30x to perform the redirect. Otherwise, raise HTTPError if no-one else should try to handle this url. Return None if you can't but another Handler might. """ m = req.get_method() if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") or code in (301, 302, 303) and m == "POST"): # Strictly (according to RFC 2616), 301 or 302 in response # to a POST MUST NOT cause a redirection without confirmation # from the user (of urllib2, in this case). In practice, # essentially all clients do redirect in this case, so we # do the same. # be conciliant with URIs containing a space newurl = newurl.replace(' ', '%20') return Request(newurl, headers=req.headers, data=req.get_data(), origin_req_host=req.get_origin_req_host(), unverifiable=True) else: raise HTTPError(req.get_full_url(), code, msg, headers, fp) A: This is actually a really bad thing to do the more I thought about it. For instance, if I submit a form to http://example.com/add (with post data to add a item) and the response is a 302 redirect to http://example.com/add and I post the same data that I posted the first time I will end up in an infinite loop. Not sure why I didn't think of this before. I'll leave the question here just as a warning to anyone else thinking about doing this.
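One likely reason the pasted handler "does not work" as written: it was lifted from urllib2's own source, so Request and HTTPError are referenced unqualified and would raise NameError in user code unless imported. A hedged sketch of a qualified version that keeps the method-preserving branch; urllib2's built-in max_redirections cap still bounds the redirect-to-self loop discussed in the answer, although duplicate POST submissions remain a real risk:

import urllib2

class PostPreservingRedirectHandler(urllib2.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # Re-send the original body and method on redirects of a POST;
        # everything else falls through to the default behaviour.
        if code in (301, 302, 303, 307) and req.get_method() == 'POST':
            newurl = newurl.replace(' ', '%20')
            return urllib2.Request(newurl,
                                   data=req.get_data(),
                                   headers=req.headers,
                                   origin_req_host=req.get_origin_req_host(),
                                   unverifiable=True)
        return urllib2.HTTPRedirectHandler.redirect_request(
            self, req, fp, code, msg, headers, newurl)

opener = urllib2.build_opener(PostPreservingRedirectHandler)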
How to make python urllib2 follow redirect and keep post method
I am using urllib2 to post data to a form. The problem is that the form replies with a 302 redirect. According to Python HTTPRedirectHandler the redirect handler will take the request and convert it from POST to GET and follow the 301 or 302. I would like to preserve the POST method and the data passed to the opener. I made an unsuccessful attempt at a custom HTTPRedirectHandler by simply adding data=req.get_data() to the new Request. I am sure this has been done before so I thought I would make a post. Note: this is similar to this post and this one but I don't want to prevent the redirect I just want to keep the POST data. Here is my HTTPRedirectHandler that does not work class MyHTTPRedirectHandler(urllib2.HTTPRedirectHandler): def redirect_request(self, req, fp, code, msg, headers, newurl): """Return a Request or None in response to a redirect. This is called by the http_error_30x methods when a redirection response is received. If a redirection should take place, return a new Request to allow http_error_30x to perform the redirect. Otherwise, raise HTTPError if no-one else should try to handle this url. Return None if you can't but another Handler might. """ m = req.get_method() if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") or code in (301, 302, 303) and m == "POST"): # Strictly (according to RFC 2616), 301 or 302 in response # to a POST MUST NOT cause a redirection without confirmation # from the user (of urllib2, in this case). In practice, # essentially all clients do redirect in this case, so we # do the same. # be conciliant with URIs containing a space newurl = newurl.replace(' ', '%20') return Request(newurl, headers=req.headers, data=req.get_data(), origin_req_host=req.get_origin_req_host(), unverifiable=True) else: raise HTTPError(req.get_full_url(), code, msg, headers, fp)
[ "This is actually a really bad thing to do the more I thought about it. For instance, if I submit a form to \nhttp://example.com/add (with post data to add a item)\nand the response is a 302 redirect to http://example.com/add and I post the same data that I posted the first time I will end up in an infinite loop. Not sure why I didn't think of this before. I'll leave the question here just as a warning to anyone else thinking about doing this.\n" ]
[ 6 ]
[]
[]
[ "automation", "python", "urllib2" ]
stackoverflow_0001258428_automation_python_urllib2.txt
Q: storing classmethod reference in tuple does not work as in variable #!/usr/bin/python class Bar(object): @staticmethod def ruleOn(rule): if isinstance(rule, tuple): print rule[0] print rule[0].__get__(None, Foo) else: print rule class Foo(object): @classmethod def callRule(cls): Bar.ruleOn(cls.RULE1) Bar.ruleOn(cls.RULE2) @classmethod def check(cls): print "I am check" RULE1 = check RULE2 = (check,) Foo.callRule() Output: <bound method type.check of <class '__main__.Foo'>> <classmethod object at 0xb7d313a4> <bound method type.check of <class '__main__.Foo'>> As you can see I'm trying to store a reference to a classmethod function in a tuple for future use. However, it seems to store the object itself rather than a reference to the bound function. As you can see, it works for a variable reference. The only way to get at it is to use __get__, which requires the name of the class it belongs to, which is not available at the time of the RULE variable assignment. Any ideas anyone?
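A hedged alternative to the fix in the answer that keeps the tuple assignment inside the class body: the raw classmethod object stored in the tuple is a descriptor, so callRule can invoke the descriptor protocol by hand with cls, which is available at call time even though the class name is not available when RULE2 is assigned:

class Foo(object):
    @classmethod
    def check(cls):
        print "I am check"

    RULE2 = (check,)

    @classmethod
    def callRule(cls):
        rule = cls.RULE2[0]        # raw classmethod object, unbound
        rule.__get__(None, cls)()  # bind via the descriptor protocol, then call

Foo.callRule()  # prints "I am check"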
storing classmethod reference in tuple does not work as in variable
#!/usr/bin/python class Bar(object): @staticmethod def ruleOn(rule): if isinstance(rule, tuple): print rule[0] print rule[0].__get__(None, Foo) else: print rule class Foo(object): @classmethod def callRule(cls): Bar.ruleOn(cls.RULE1) Bar.ruleOn(cls.RULE2) @classmethod def check(cls): print "I am check" RULE1 = check RULE2 = (check,) Foo.callRule() Output: <bound method type.check of <class '__main__.Foo'>> <classmethod object at 0xb7d313a4> <bound method type.check of <class '__main__.Foo'>> As you can see I'm trying to store a reference to a classmethod function in a tuple for future use. However, it seems to store the object itself rather than a reference to the bound function. As you can see, it works for a variable reference. The only way to get at it is to use __get__, which requires the name of the class it belongs to, which is not available at the time of the RULE variable assignment. Any ideas anyone?
[ "This is because method are actually functions in Python. They only become bound methods when you look them up on the constructed class instance. See my answer to this question for more details. The non-tuple variant works because it is conceptually the same as accessing a classmethod.\nIf you want to assign bound classmethods to class attributes you'll have to do that after you construct the class:\nclass Foo(object):\n @classmethod\n def callRule(cls):\n Bar.ruleOn(cls.RULE1)\n Bar.ruleOn(cls.RULE2)\n\n @classmethod\n def check(cls):\n print \"I am check\"\n\n Foo.RULE1 = Foo.check\n Foo.RULE2 = (Foo.check,)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0001258690_python.txt
Q: pywikipedia bot with https and http authentication I'm having trouble getting my bot to login to a MediaWiki install on the intranet. I believe it is due to the http authentication protecting the wiki. Facts: The wiki root is: https://local.example.com/mywiki/ When visiting the wiki with a web browser, a popup comes up asking for enterprise credentials (I assume this is basic access authentication) This is what I have in my user-config.py: mylang = 'en' family = 'mywiki' usernames['mywiki']['en'] = u'Bot' authenticate['local.example.com'] = ('user', 'pass') This is what I have in mywiki_family.py: # -*- coding: utf-8 -*- import family, config # The Wikimedia family that is known as mywiki class Family(family.Family): def __init__(self): family.Family.__init__(self) self.name = 'mywiki' self.langs = { 'en' : 'local.example.com'} def scriptpath(self, code): return '/mywiki' def version(self, code): return '1.13.5' def isPublic(self): return False def hostname(self, code): return 'local.example.com' def protocol(self, code): return 'https' def path(self, code): return '/mywiki/index.php' When I execute login.py -v -v, I get this: urllib2.urlopen(urllib2.Request('https://local.example.com/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit', wpSkipCookieCheck=1&wpPassword=XXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=Bot, {'Content-type': 'application/x-www-form-urlencoded', 'User-agent': 'PythonWikipediaBot/1.0'})): (Redundant traceback info here) urllib2.HTTPError: HTTP Error 401: Unauthorized (I'm not sure why it has 'local.example.com/w' instead of '/mywiki'.) I thought it might be trying to authenticate to example.com instead of example.com/wiki, so I changed the authenticate line to: authenticate['local.example.com/mywiki'] = ('user', 'pass') But then I get an HTTP 401.2 error back from IIS: You do not have permission to view this directory or page using the credentials that you supplied because your Web browser is sending a WWW-Authenticate header field that the Web server is not configured to accept. Any help on how to get this working would be appreciated. Update After fixing my family file, it now says: Getting information for site mywiki:en ('http error', 401, 'Unauthorized', ) WARNING: Could not open 'https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook'. Maybe the server or your connection is down. Retrying in 1 minutes... I looked at the HTTP headers on a plain urllib2.urlopen call and it's using WWW-Authenticate: Negotiate WWW-Authenticate: NTLM. I'm guessing urllib2 and thus pywikipedia don't support this? Update Added a tasty bounty for help in getting this to work. I can authenticate using python-ntlm. How do I integrate this into pywikipedia? A: Well the fact that login.py tries accessing '\w' instead of your path shows that there is a family configuration issue. Your code is indented strangely: is scriptpath a member of the new Family class? as in: class Family(family.Family): def __init__(self): family.Family.__init__(self) self.name = 'mywiki' self.langs = { 'en' : 'local.example.com'} def scriptpath(self, code): return '/mywiki' def version(self, code): return '1.13.5' def isPublic(self): return False def hostname(self, code): return 'local.example.com' def protocol(self, code): return 'https' ? I believe that something is wrong with your family file.
A good way to check is to do the following in a python console: import wikipedia site = wikipedia.getSite('en', 'mywiki') print site.login_address() As long as the relative address is wrong, showing '/w' instead of '/mywiki', it means that the family file is still not configured correctly, and that the bot won't work :) Update: how to integrate ntlm in pywikipedia? I just had a look at the basic example here. I would integrate the code before that line in login.py: response = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers)) You want to write something like the following: from ntlm import HTTPNtlmAuthHandler user = 'DOMAIN\User' password = "Password" url = self.site.protocol() + '://' + self.site.hostname() passman = urllib2.HTTPPasswordMgrWithDefaultRealm() passman.add_password(None, url, user, password) # create the NTLM authentication handler auth_NTLM = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman) # create and install the opener opener = urllib2.build_opener(auth_NTLM) urllib2.install_opener(opener) response = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers)) I would test this and integrate it directly into the pywikipedia codebase if only I had an available ntlm setup... Whatever happens, please do not vanish with your solution: we're interested, at pywikipedia, in your solution :) A: I am guessing the problem you have is that the server expects basic authentication and you are not handling that in your client. Michael Foord wrote a good article about handling basic authentication in Python. You did not provide enough information for me to be sure about this, so if that does not work, please provide some additional information, like a network dump of your connection attempt.
pywikipedia bot with https and http authentication
I'm having trouble getting my bot to login to a MediaWiki install on the intranet. I believe it is due to the http authentication protecting the wiki. Facts: The wiki root is: https://local.example.com/mywiki/ When visiting the wiki with a web browser, a popup comes up asking for enterprise credentials (I assume this is basic access authentication) This is what I have in my user-config.py: mylang = 'en' family = 'mywiki' usernames['mywiki']['en'] = u'Bot' authenticate['local.example.com'] = ('user', 'pass') This is what I have in mywiki_family.py: # -*- coding: utf-8 -*- import family, config # The Wikimedia family that is known as mywiki class Family(family.Family): def __init__(self): family.Family.__init__(self) self.name = 'mywiki' self.langs = { 'en' : 'local.example.com'} def scriptpath(self, code): return '/mywiki' def version(self, code): return '1.13.5' def isPublic(self): return False def hostname(self, code): return 'local.example.com' def protocol(self, code): return 'https' def path(self, code): return '/mywiki/index.php' When I execute login.py -v -v, I get this: urllib2.urlopen(urllib2.Request('https://local.example.com/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit', wpSkipCookieCheck=1&wpPassword=XXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=Bot, {'Content-type': 'application/x-www-form-urlencoded', 'User-agent': 'PythonWikipediaBot/1.0'})): (Redundant traceback info here) urllib2.HTTPError: HTTP Error 401: Unauthorized (I'm not sure why it has 'local.example.com/w' instead of '/mywiki'.) I thought it might be trying to authenticate to example.com instead of example.com/wiki, so I changed the authenticate line to: authenticate['local.example.com/mywiki'] = ('user', 'pass') But then I get an HTTP 401.2 error back from IIS: You do not have permission to view this directory or page using the credentials that you supplied because your Web browser is sending a WWW-Authenticate header field that the Web server is not configured to accept. Any help on how to get this working would be appreciated. Update After fixing my family file, it now says: Getting information for site mywiki:en ('http error', 401, 'Unauthorized', ) WARNING: Could not open 'https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook'. Maybe the server or your connection is down. Retrying in 1 minutes... I looked at the HTTP headers on a plan urllib2.ulropen call and it's using WWW-Authenticate: Negotiate WWW-Authenticate: NTLM. I'm guessing urllib2 and thus pywikipedia don't support this? Update Added a tasty bounty for help in getting this to work. I can authenticate using python-ntlm. How do I integrate this into pywikipedia?
[ "Well the fact that login.py tries accessing '\\w' instead of your path shows that there is a family configuration issue.\nYour code is indented strangely: is scriptpath a member of the new Family class? as in:\nclass Family(family.Family):\n def __init__(self):\n family.Family.__init__(self)\n self.name = 'mywiki'\n self.langs = { 'en' : 'local.example.com'}\n\n def scriptpath(self, code):\n return '/mywiki'\n\n def version(self, code):\n return '1.13.5'\n\n def isPublic(self):\n return False\n\n def hostname(self, code):\n return 'local.example.com'\n\n def protocol(self, code):\n return 'https'\n\n?\nI believe that something is wrong with your family file. A good way to check is to do in a python console:\nimport wikipedia\nsite = wikipedia.getSite('en', 'mywiki')\nprint site.login_address()\n\nas long as the relative address is wrong, showing '/w' instead of '/mywiki', it means that the family file is still not configured correctly, and that the bot won't work :)\nUpdate: how to integrate ntlm in pywikipedia?\nI just had a look at the basic example here. I would integrate the code before that line in login.py:\nresponse = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers))\n\nYou want to write something of the like:\nfrom ntlm import HTTPNtlmAuthHandler\n\nuser = 'DOMAIN\\User'\npassword = \"Password\"\nurl = self.site.protocol() + '://' + self.site.hostname()\n\npassman = urllib2.HTTPPasswordMgrWithDefaultRealm()\npassman.add_password(None, url, user, password)\n# create the NTLM authentication handler\nauth_NTLM = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman)\n\n# create and install the opener\nopener = urllib2.build_opener(auth_NTLM)\nurllib2.install_opener(opener)\n\nresponse = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers))\n\nI would test this and integrate it directly into pywikipedia codebase if only I had an available ntlm setup...\nWhatever happens, please do not vanish with your solution: we're interested, at pywikipedia, by your solution :)\n", "I am guessing the problem you have is that the server expects basic authentication and you are not handling that in your client. Michael Foord wrote a good article about handling basic authentication in Python.\nYou did not provide enough information for me to be sure about this, so if that does not work, please provide some additional information, like network dump of you connection attempt.\n" ]
[ 4, 0 ]
[]
[]
[ "http_authentication", "https", "python", "pywikibot", "urllib2" ]
stackoverflow_0001256213_http_authentication_https_python_pywikibot_urllib2.txt
Q: Python Subprocess - Redirect stdout/err to two places I have a small python script which invokes an external process using subprocess. I want to redirect stdout and stderr to both a log file and to the terminal. How can this be done? A: You can do this with subprocess.PIPE. You can find some sample code here.
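Since the linked sample is not reproduced in the answer, a minimal tee-style sketch: merge stderr into stdout, then copy each line to both the terminal and a log file (the function and file names are placeholders):

import subprocess
import sys

def run_and_tee(cmd, logpath):
    log = open(logpath, 'w')
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    # Read line by line so output appears as the child produces it.
    for line in iter(proc.stdout.readline, ''):
        sys.stdout.write(line)
        log.write(line)
    proc.stdout.close()
    log.close()
    return proc.wait()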
Python Subprocess - Redirect stdout/err to two places
I have a small python script which invokes an external process using subprocess. I want to redirect stdout and stderr to both a log file and to the terminal. How can this be done?
[ "You can do this with subprocess.PIPE.\nYou can find some sample code here.\n" ]
[ 8 ]
[]
[]
[ "python", "redirect", "stdout", "subprocess" ]
stackoverflow_0001258863_python_redirect_stdout_subprocess.txt
Q: surprising time shift for python call I'm using the following code in Python 2.5.1 to generate a UTC timestamp from a string representation of a date: time.mktime(time.strptime("2009-06-16", "%Y-%m-%d")) The general result is: 1245103200 (16.6.2009 0:00 UTC or 15.6.09 22:00:00, if you're in my time zone). But now, I found that on some computers running Windows XP, this statement would generate a time shifted by 1 hour, 1 minute and 1 second: 1245099539 (15.6.2009 22:58:59 UTC or 15.6.09 20:58:59 in my time zone). DST and time zone do not seem to be the cause of the problem, because the time shift seems to appear additionally to DST and time zone calculation. Has anybody experienced the same behaviour or is able to describe what happens here? A: Found the answer myself: When executing the following on the python command line interface, I get the result: time.strptime("2009-06-16", "%Y-%m-%d") time.struct_time(tm_year=2009, tm_mon=6, tm_mday=16, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=167, tm_isdst=-1) If I run the same command in an executable built with py2exe, it results in the following time structure: time.struct_time(tm_year=2009, tm_mon=8, tm_mday=11, tm_hour=-1, tm_min=-1, tm_sec=-1, tm_wday=1, tm_yday=224, tm_isdst=-1) The internal time structure apparently initializes differently on command line and when using py2exe. Fixed the problem by adding a midnight time to the command. A: What version of Python are you running? There are two bug descriptions here which sound a lot like what you are experiencing: http://bugs.python.org/issue5582 https://www.mercurial-scm.org/bts/issue1364
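For reference, a sketch of the fix the self-answer describes (pinning the time-of-day explicitly), plus calendar.timegm as a hedged extra suggestion, since time.mktime interprets the struct in local time rather than UTC:

import calendar
import time

# Spelling out midnight prevents the -1 placeholder fields seen
# under py2exe from leaking into the conversion.
st = time.strptime("2009-06-16 00:00:00", "%Y-%m-%d %H:%M:%S")

local_stamp = time.mktime(st)      # epoch seconds, local-time based
utc_stamp = calendar.timegm(st)    # epoch seconds, true UTC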
surprising time shift for python call
I'm using the following code in Python 2.5.1 to generate a UTC timestamp from a string representation of a date: time.mktime(time.strptime("2009-06-16", "%Y-%m-%d")) The general result is: 1245103200 (16.6.2009 0:00 UTC or 15.6.09 22:00:00, if you're in my time zone). But now, I found that on some computers running Windows XP, this statement would generate a time shifted by 1 hour, 1 minute and 1 second: 1245099539 (15.6.2009 22:58:59 UTC or 15.6.09 20:58:59 in my time zone). DST and time zone do not seem to be the cause of the problem, because the time shift seems to appear additionally to DST and time zone calculation. Has anybody experienced the same behaviour or is able to describe what happens here?
[ "Found the answer myself:\nWhen executing the following on the python command line interface, I get the result:\n\ntime.strptime(\"2009-06-16\", \"%Y-%m-%d\")\n time.struct_time(tm_year=2009, tm_mon=6, tm_mday=16, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=167, tm_isdst=-1) \n\nIf I run the same command in an executable built with py2exe, it results in the following time structure:\n\ntime.struct_time(tm_year=2009, tm_mon=8, tm_mday=11, tm_hour=-1, tm_min=-1, tm_sec=-1, tm_wday=1, tm_yday=224, tm_isdst=-1)\n\nThe internal time structure apparently initializes differently on command line and when using py2exe. Fixed the problem by adding a midnight time to the command.\n", "What version of Python are you running? There are two bug descriptions here which sound a lot like what you are experiencing:\n\nhttp://bugs.python.org/issue5582\nhttps://www.mercurial-scm.org/bts/issue1364\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "timestamp", "windows_xp" ]
stackoverflow_0001238648_python_timestamp_windows_xp.txt
Q: Locking a custom dictionary Good day pythonians, I want to make a custom dictionary with two main features: All keys are declared on creation It is impossible to add new keys or modify current ones (values are still modifiable) Right now the code is this: class pick(dict): """This will make delicious toffee when finished""" def __init__(self, *args): dict.__init__(self) for arg in args: self[arg] = None Any help is much appreciated. upd: While the solution is what I was looking for, there is one problem: the dictionary calls __setitem__ to add the items during initialization, and, not finding the keys, it raises the error. cupboard = pick('milk') #raises error upd1: all solved, thank you very much. A: Override the __setitem__ method with your desired behavior, and call dict.__setitem__(self, key, value) to modify the base dictionary without going through your own logic. class ImmutableDict(dict): def __setitem__(self, key, value): if key not in self: raise KeyError("Immutable dict") dict.__setitem__(self, key, value) d = ImmutableDict(foo=1, bar=2) d['foo'] = 3 print(d) d['baz'] = 4 # Raises error You'll also need to override dict.update() and setdefault() to avoid addition of keys. And possibly dict.__delitem__(), dict.clear(), dict.pop() and dict.popitem() to avoid removal of keys. A: Something like class UniqueDict(dict): def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) def __setitem__(self, name, value): if name not in self: raise KeyError("%s not present" % name) dict.__setitem__(self, name, value) A: I suggest you use delegation rather than inheritance. If you inherit, all the existing methods of dict will be inherited, but they could behave in ways you don't expect, silently ruining the assumptions you have about the class. If you delegate, you fully control the methods you provide, and if you try to use your dict in a context requiring a method you have not implemented, you will get an exception, failing fast at the problem.
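For the problem raised in the upd note, a hedged sketch: populate the fixed key set through dict.__setitem__ directly, so the guard in the overridden __setitem__ is bypassed during initialization:

class pick(dict):
    """Keys are fixed at creation; values stay mutable."""
    def __init__(self, *args):
        dict.__init__(self)
        for arg in args:
            dict.__setitem__(self, arg, None)  # bypass our own guard

    def __setitem__(self, key, value):
        if key not in self:
            raise KeyError("%r is not a declared key" % (key,))
        dict.__setitem__(self, key, value)

cupboard = pick('milk')   # no longer raises
cupboard['milk'] = 1      # OK: existing key
# cupboard['eggs'] = 2    # would raise KeyError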
Locking a custom dictionary
Good day pythonians, I want to make a custom dictionary with two main features: All keys are declared on creation It is impossible to add new keys or modify current ones (values are still modifiable) Right now the code is this: class pick(dict): """This will make delicious toffee when finished""" def __init__(self, *args): dict.__init__(self) for arg in args: self[arg] = None Any help is much appreciated. upd: While the solution is what I was looking for, there is one problem: the dictionary calls __setitem__ to add the items during initialization, and, not finding the keys, it raises the error. cupboard = pick('milk') #raises error upd1: all solved, thank you very much.
[ "Override the __setitem__ method with your desired behavior, call dict.__setitem__(self, key, value) to modify the base dictionary without going through your base logic.\nclass ImmutableDict(dict):\n def __setitem__(self, key, value):\n if key not in self:\n raise KeyError(\"Immutable dict\")\n dict.__setitem__(self, key, value)\n\nd = ImmutableDict(foo=1, bar=2)\nd['foo'] = 3\nprint(d)\nd['baz'] = 4 # Raises error\n\nYou'll also need to override dict.update() and setdefault() to avoid addition of keys. And possibly dict.__delitem__(), dict.clear(), dict.pop() and dict.popitem() to avoid removal of keys.\n", "Something like\nclass UniqueDict(dict):\n def __init__(self, *args, **kwargs):\n dict.__init__(self, *args, **kwargs)\n\n def __setitem__(self, name, value):\n if name not in self:\n raise KeyError(\"%s not present\")\n dict.__setitem__(self, name, value)\n\n", "I suggest you to use delegation, rather than inheritance.\nIf you inherit, all the previous methods of the dict will be inherited, but they could behave as you don't expect, thus ruining the assumptions you have on the class, and doing it silently.\nIf you do delegation, you can fully control the methods you provide, and if you try to use your dict in a context requiring a method you have not implemented, you will get an exception, failing fast to the problem.\n" ]
[ 12, 2, 2 ]
[]
[]
[ "dictionary", "locking", "python" ]
stackoverflow_0001260649_dictionary_locking_python.txt
Q: Where is the hidden parameter? I have this function call here: import test_hosts test_hosts.LocalTestHost(mst, port, local_ip, remote_if_mac, remote_if_ip, service_port) and when I run it, the interpreter fails, and says I'm passing 6 parameters to a function that receives 7 parameters. LocalTestHost is a class whose constructor takes a self parameter and six others, resulting in a total of 7 parameters. This is its declaration: class LocalTestHost: def __init__(self, mst, port, local_ip, remote_if_mac, remote_if_ip, service_port): ... I've stared at this code for hours, and I can't find the problem. When I run this as is, it fails because I pass 6 parameters, which is too few; if I call the constructor with an added parameter just to see that I can still count, it says I'm passing 8 parameters, which is too many. A: The snippets of code you pasted look fine. As others correctly said, to find the problem you should find the smallest amount of code that still has the bug. My suggestion would be to (1) check that module test_hosts is written for your version of Python and that it's indeed the file being imported (2) copy the class LocalTestHost: def __init__(... function to your file and try calling it from there. It will raise something like NameError if you get the # of params right. (3) if the above function works for you, check the test_hosts.LocalTestHost.__init__() signature using runtime introspection. Somebody might be changing it by e.g. __init__ = staticmethod(__init__) somewhere (an old method of defining static functions). And please tell us how it's going! A: Another idea: you are inadvertently calling an older version of the code. Make sure you don't have a .pyc file lying around somewhere. A: I've seen these issues before, but it was because of preceding code being crafted in such a way that it was syntactically correct but was not as I had intended. This snippet isn't enough to reproduce the problem for me, at least not on 2.5.1 on OS X. A: My god, I was an idiot. I need to start reading the error messages more thoroughly. The code that actually caused the problem wasn't in here, actually. It was several lines inside the constructor. Here it is: class LocalTestHost: def __init__(self, mst, port, local_ip, remote_if_mac, remote_if_ip, service_port): . . <some initialization code> . # This is the faulty line self.__host_operations = HostOperationsFactory().create( local_ip, port, mst, remote_if_ip) And here is the error message I kept not reading, and foolishly did not post with my question: >>> test_hosts.LocalTestHost(1,2,3,4,5,6) Traceback (most recent call last): File "<stdin>", line 1, in ? File "test_hosts.py", line 709, in __init__ self.__host_operations = HostOperationsFactory().create( File "test_hosts.py", line 339, in create remote_ip) File "test_hosts.py", line 110, in __init__ packet_size, remote_ip) TypeError: __init__() takes exactly 7 arguments (6 given) I've refactored my code a bit, and added parameters to several methods and constructors, but I forgot to update their usage in several places. This create function actually returns another object it instantiates, and its constructor (which incidentally has the same parameters as the constructor I picked on) did not receive all the parameters it should have. I did not read the message thoroughly, and my confusion came from the last line stating I had passed the constructor too few parameters.
Now, I also tried adding too many parameters as a sanity check, and there it actually was the constructor I was picking on. I'm surprised I didn't see that in this case the error trace was significantly shorter. I've learned a valuable lesson today. The problem is I think I've learned it several times already over the years.
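For what it's worth, the first answer's suggestion (3), checking the signature through runtime introspection, is a one-liner with inspect; a quick hedged sketch:

import inspect
import test_hosts

# Prints the parameter names __init__ actually declares; a surprise
# here would point at the wrong class (or a stale .pyc) being used.
args, varargs, varkw, defaults = inspect.getargspec(
    test_hosts.LocalTestHost.__init__)
print args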
Where is the hidden parameter?
I have this function call here: import test_hosts test_hosts.LocalTestHost(mst, port, local_ip, remote_if_mac, remote_if_ip, service_port) and when I run it, the interpreter fails, and says I'm passing 6 parameters to a function that receives 7 parameters. LocalTestHost is a class whose constructor takes a self parameter and six others, resulting in a total of 7 parameters. This is its declaration: class LocalTestHost: def __init__(self, mst, port, local_ip, remote_if_mac, remote_if_ip, service_port): ... I've stared at this code for hours, and I can't find the problem. When I run this as is, it fails because I pass 6 parameters, which is too few; if I call the constructor with an added parameter just to see that I can still count, it says I'm passing 8 parameters, which is too many.
[ "The snippets of code you pasted look fine. As others correctly said, to find the problem you should find the smallest amount of code that still has the bug.\nMy suggestion would be to \n(1) check that module test_hosts is written for your version of Python and that it's indeed the file being imported \n(2) copy the class LocalTestHost: def __init__(... function to your file and try calling it from there. It will raise something like NameError if you get the # of params right.\n(3) if the above function works for you, check the test_hosts.LocalTestHost.__init__() signature using runtime introspection. somebody might be changing it by e.g. __init__ = staticmethod(__init__) somewhere (an old method of defining static functions).\nAnd please tell us how it's going!\n", "Another idea: you are inadvertently calling an older version of the code. Make sure you don't have a .pyc file lying around somewhere.\n", "I've seen these issues before, but it was because of preceding code being crafted in such a way that it syntactically correct but was not as I had intended.\nThis snippet isn't enough to reproduce the problem for me, at least not on 2.5.1 on OS X.\n", "My god, I was an idiot. I need to start reading the error messages more thoroughly.\nThe code that actually caused the problem wasn't in here actually. It was several lines inside the constructor. Here it is:\nclass LocalTestHost:\n\n def __init__(self, mst, port, local_ip, remote_if_mac, remote_if_ip, service_port):\n .\n . <some initialization code>\n .\n\n # This is the faulty line\n self.__host_operations = HostOperationsFactory().create(\n local_ip, port, mst, remote_if_ip)\n\nAnd here is the error message, I kept not reading, and foolishly did not post with my question:\n>>> test_hosts.LocalTestHost(1,2,3,4,5,6)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\n File \"test_hosts.py\", line 709, in __init__\n self.__host_operations = HostOperationsFactory().create(\n File \"test_hosts.py\", line 339, in create\n remote_ip)\n File \"test_hosts.py\", line 110, in __init__\n packet_size, remote_ip)\nTypeError: __init__() takes exactly 7 arguments (6 given)\n\nI've refactored my code a bit, and added parameters to several methods and constructors, but I forgot to update their usage in several places. This create function actually returns another object it instantiates, and it's constructor (incidentally has the same parameters as the constructor I picked on) did not receive all the parameters it should have.\nI did not read the message thoroughly, and my confusion came from the last line stating I have passed the constructor too few parameters. Now, I also tried adding too many parameters as a sanity check, and there it actually was the constructor I was picking on. I'm surprised I didn't see that in this case the error trace was significantly shorter.\nI've learned a valuable lesson today. The problem is I think I've leaned it several times already for some years.\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001251427_python.txt
Q: Python : Assert that variable is instance method? How can one check if a variable is an instance method or not? I'm using python 2.5. Something like this: class Test: def method(self): pass assert is_instance_method(Test().method) A: inspect.ismethod is what you want to find out if you definitely have a method, rather than just something you can call. import inspect def foo(): pass class Test(object): def method(self): pass print inspect.ismethod(foo) # False print inspect.ismethod(Test) # False print inspect.ismethod(Test.method) # True print inspect.ismethod(Test().method) # True print callable(foo) # True print callable(Test) # True print callable(Test.method) # True print callable(Test().method) # True callable is true if the argument if the argument is a method, a function (including lambdas), an instance with __call__ or a class. Methods have different properties than functions (like im_class and im_self). So you want assert inspect.ismethod(Test().method) A: If you want to know if it is precisely an instance method use the following function. (It considers methods that are defined on a metaclass and accessed on a class class methods, although they could also be considered instance methods) import types def is_instance_method(obj): """Checks if an object is a bound method on an instance.""" if not isinstance(obj, types.MethodType): return False # Not a method if obj.im_self is None: return False # Method is not bound if issubclass(obj.im_class, type) or obj.im_class is types.ClassType: return False # Method is a classmethod return True Usually checking for that is a bad idea. It is more flexible to be able to use any callable() interchangeably with methods.
Python : Assert that variable is instance method?
How can one check if a variable is an instance method or not? I'm using python 2.5. Something like this: class Test: def method(self): pass assert is_instance_method(Test().method)
[ "inspect.ismethod is what you want to find out if you definitely have a method, rather than just something you can call.\nimport inspect\n\ndef foo(): pass\n\nclass Test(object):\n def method(self): pass\n\nprint inspect.ismethod(foo) # False\nprint inspect.ismethod(Test) # False\nprint inspect.ismethod(Test.method) # True\nprint inspect.ismethod(Test().method) # True\n\nprint callable(foo) # True\nprint callable(Test) # True\nprint callable(Test.method) # True\nprint callable(Test().method) # True\n\ncallable is true if the argument if the argument is a method, a function (including lambdas), an instance with __call__ or a class. \nMethods have different properties than functions (like im_class and im_self). So you want\nassert inspect.ismethod(Test().method) \n\n", "If you want to know if it is precisely an instance method use the following function. (It considers methods that are defined on a metaclass and accessed on a class class methods, although they could also be considered instance methods)\nimport types\ndef is_instance_method(obj):\n \"\"\"Checks if an object is a bound method on an instance.\"\"\"\n if not isinstance(obj, types.MethodType):\n return False # Not a method\n if obj.im_self is None:\n return False # Method is not bound\n if issubclass(obj.im_class, type) or obj.im_class is types.ClassType:\n return False # Method is a classmethod\n return True\n\nUsually checking for that is a bad idea. It is more flexible to be able to use any callable() interchangeably with methods.\n" ]
[ 53, 8 ]
[]
[]
[ "assert", "instance", "methods", "python" ]
stackoverflow_0001259963_assert_instance_methods_python.txt
Q: Python: How can I choose which module to import when they are named the same Let's say I'm in a file called openid.py and I do: from openid.consumer.discover import discover, DiscoveryFailure I have the openid module on my pythonpath but the interpreter seems to be trying to use my openid.py file. How can I get the library version? (Of course, something other than the obvious 'rename your file' answer would be nice). A: That's the reason absolute imports have been chosen as the new default behaviour. However, they are not yet the default in 2.6 (maybe in 2.7...). You can get their behaviour now by importing them from the future: from __future__ import absolute_import You can find out more about this in the PEP mentioned by Nick or (easier to understand, I think) in the document "What's New in Python 2.5". A: Rename it. This is the idea behind namespaces. Your openid could be a sub-module in your top-level project module. A module of yours named email would clash with the top-level email module in the stdlib. Because your openid is not universal, it provides a special case for your project. A: I won't get into the polemics on renaming and instead focus on showing you how to do what you want (whether it's "good for you" or not;-). The solution is not difficult... Just set __path__! A little demonstration: $ mkdir /tmp/modules /tmp/packages $ mkdir /tmp/packages/openid $ echo 'print "Package!"' > /tmp/packages/openid/__init__.py $ gvim /tmp/modules/openid.py $ PYTHONPATH='/tmp/modules:/tmp/packages' python -c'import openid' Module! Package! this shows a module openid managing to import a homonymous package even though the module's path comes earlier in sys.path, and sys.modules['openid'] is clearly already set at that time. And all the "secret" is in openid.py's simple code...: print "Module!" __path__ = ['/tmp/packages'] import openid without the __path__ assignment, of course, it would only emit Module!. Also works for importing submodules within the package, of course. Do: $ echo 'print "Submod!"' > /tmp/packages/openid/submod.py and change openid.py's last line to from openid import submod and you'll see: $ PYTHONPATH='/tmp/modules:/tmp/packages' python -c'import openid' Module! Package! Submod! $ A: You can use relative or absolute imports (depending on the specifics of your situation), which are covered in PEP 328 most recently. Of course, seriously, you should not be creating naming conflicts like this and should rename your file.
Python: How can I choose which module to import when they are named the same
Let's say I'm in a file called openid.py and I do: from openid.consumer.discover import discover, DiscoveryFailure I have the openid module on my pythonpath but the interpreter seems to be trying to use my openid.py file. How can I get the library version? (Of course, something other than the obvious 'rename your file' answer would be nice).
[ "Thats the reason absolute imports have been chosen as the new default behaviour. However, they are not yet the default in 2.6 (maybe in 2.7...). You can get their behaviour now by importing them from the future:\nfrom __future__ import absolute_import\n\nYou can find out more about this in the PEP metnioned by Nick or (easier to understand, I think) in the document \"What's New in Python 2.5\". \n", "Rename it. This is the idea behind name spaces. your openid could be a sub-module in your top module project. your email will clash with top module email in stdlib.\nbecause your openid is not universal, it provides a special case for your project.\n", "I won't get into the polemics on renaming and instead focus on showing you how to do what you want (whether it's \"good for you\" or not;-). The solution is not difficult...\nJust set __path__! A little demonstration:\n$ mkdir /tmp/modules /tmp/packages\n$ mkdir /tmp/packages/openid\n$ echo 'print \"Package!\"' > /tmp/packages/openid/__init__.py\n$ gvim /tmp/modules/openid.py\n$ PYTHONPATH='/tmp/modules:/tmp/packages' python -c'import openid'\nModule!\nPackage!\n\nthis shows a module openid managing to import a homonymous package even though the module's path comes earlier in sys.path, and sys.modules['openid'] is clearly already set at that time. And all the \"secret\" is in openid.py's simple code...:\nprint \"Module!\"\n__path__ = ['/tmp/packages']\nimport openid\n\nwithout the __path__ assignment, of course, it would only emit Module!.\nAlso works for importing submodules within the package, of course. Do:\n$ echo 'print \"Submod!\"' > /tmp/packages/openid/submod.py\n\nand change openid.py's last line to\nfrom openid import submod\n\nand you'll see:\n$ PYTHONPATH='/tmp/modules:/tmp/packages' python -c'import openid'\nModule!\nPackage!\nSubmod!\n$ \n\n", "You can use relative or absolute imports (depending on the specifics of your situation), which are covered in PEP 328 most recently. Of course, seriously, you should not be creating naming conflicts like this and should rename your file.\n" ]
[ 9, 3, 2, 1 ]
[ "You could try shuffling sys.path, to move the interesting directories to the front before doing the import.\n" ]
[ -1 ]
[ "import", "namespaces", "python" ]
stackoverflow_0001259106_import_namespaces_python.txt
Q: How to get user input during a while loop without blocking I'm trying to write a while loop that constantly updates the screen by using os.system("clear") and then printing out a different text message every few seconds. How do I get user input during the loop? raw_input() just pauses and waits, which is not the functionality I want. import os import time string = "the fox jumped over the lazy dog" words = string.split(" ") i = 0 while 1: os.system("clear") print words[i] time.sleep(1) i += 1 i = i%len(words) I would like to be able to press 'q' or 'p' in the middle to quit and pause respectively. A: The select module in Python's standard library may be what you're looking for -- standard input has FD 0, though you may also need to put a terminal in "raw" (as opposed to "cooked") mode, on unix-y systems, to get single keypresses from it as opposed to whole lines complete with line-end. If on Windows, msvcrt, also in Python standard library, has all the functionality you need -- msvcrt.kbhit() tells you if any keystroke is pending, and, if so, msvcrt.getch() tells you what character it is. A: You can also check one of the recipes available out there, which gives you the functionality you're looking for for both Unix and Windows. A: You can do that with threads, here is a basic example : import threading, os, time, itertools, Queue # setting a cross platform getch like function # thks to the Python Cookbook # why isn't standard on this battery included language ? try : # on windows from msvcrt import getch except ImportError : # on unix like systems import sys, tty, termios def getch() : fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try : tty.setraw(fd) ch = sys.stdin.read(1) finally : termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch # this will allow us to communicate between the two threads # Queue is a FIFO list, the param is the size limit, 0 for infinite commands = Queue.Queue(0) # the thread reading the command from the user input def control(commands) : while 1 : command = getch() commands.put(command) # put the command in the queue so the other thread can read it # don't forget to quit here as well, or you will have memory leaks if command == "q" : break # your function displaying the words in an infinite loop def display(commands): string = "the fox jumped over the lazy dog" words = string.split(" ") pause = False command = "" # we create an infinite generator from you list # much better than using indices word_list = itertools.cycle(words) # BTW, in Python itertools is your best friend while 1 : # parsing the command queue try: # false means "do not block the thread if the queue is empty" # a second parameter can set a millisecond time out command = commands.get(False) except Queue.Empty, e: command = "" # behave according to the command if command == "q" : break if command == "p" : pause = True if command == "r" : pause = False # if pause is set to off, then print the word # your initial code, rewritten with a generator if not pause : os.system("clear") print word_list.next() # getting the next item from the infinite generator # wait anyway for a second, you can tweak that time.sleep(1) # then start the two threads displayer = threading.Thread(None, # always to None since the ThreadGroup class is not implemented yet display, # the function the thread will run None, # doo, don't remember and too lazy to look in the doc (commands,), # *args to pass to the function {}) # **kwargs to pass to the function controler = threading.Thread(None, control, None, (commands,), {}) if __name__ == "__main__" : displayer.start() controler.start() As usual, using threads is tricky, so be sure you understand what you do before coding that. Warning : Queue will be rename in queue in Python 3.
How to get user input during a while loop without blocking
I'm trying to write a while loop that constantly updates the screen by using os.system("clear") and then printing out a different text message every few seconds. How do I get user input during the loop? raw_input() just pauses and waits, which is not the functionality I want. import os import time string = "the fox jumped over the lazy dog" words = string.split(" ") i = 0 while 1: os.system("clear") print words[i] time.sleep(1) i += 1 i = i%len(words) I would like to be able to press 'q' or 'p' in the middle to quit and pause respectively.
[ "The select module in Python's standard library may be what you're looking for -- standard input has FD 0, though you may also need to put a terminal in \"raw\" (as opposed to \"cooked\") mode, on unix-y systems, to get single keypresses from it as opposed to whole lines complete with line-end. If on Windows, msvcrt, also in Python standard library, has all the functionality you need -- msvcrt.kbhit() tells you if any keystroke is pending, and, if so, msvcrt.getch() tells you what character it is.\n", "You can also check one of the recipes available out there, which gives you the functionality you're looking for for both Unix and Windows.\n", "You can do that with threads, here is a basic example :\nimport threading, os, time, itertools, Queue\n\n# setting a cross platform getch like function\n# thks to the Python Cookbook\n# why isn't standard on this battery included language ?\ntry : # on windows\n from msvcrt import getch\nexcept ImportError : # on unix like systems\n import sys, tty, termios\n def getch() :\n fd = sys.stdin.fileno()\n old_settings = termios.tcgetattr(fd)\n try :\n tty.setraw(fd)\n ch = sys.stdin.read(1)\n finally :\n termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)\n return ch\n\n# this will allow us to communicate between the two threads\n# Queue is a FIFO list, the param is the size limit, 0 for infinite\ncommands = Queue.Queue(0)\n\n# the thread reading the command from the user input \ndef control(commands) :\n\n while 1 :\n\n command = getch()\n commands.put(command) # put the command in the queue so the other thread can read it\n\n # don't forget to quit here as well, or you will have memory leaks\n if command == \"q\" :\n break\n\n\n# your function displaying the words in an infinite loop\ndef display(commands):\n\n string = \"the fox jumped over the lazy dog\"\n words = string.split(\" \")\n pause = False \n command = \"\"\n\n # we create an infinite generator from you list\n # much better than using indices\n word_list = itertools.cycle(words) \n\n # BTW, in Python itertools is your best friend\n\n while 1 :\n\n # parsing the command queue\n try:\n # false means \"do not block the thread if the queue is empty\"\n # a second parameter can set a millisecond time out\n command = commands.get(False) \n except Queue.Empty, e:\n command = \"\"\n\n # behave according to the command\n if command == \"q\" :\n break\n\n if command == \"p\" :\n pause = True\n\n if command == \"r\" :\n pause = False\n\n # if pause is set to off, then print the word\n # your initial code, rewritten with a generator\n if not pause :\n os.system(\"clear\")\n print word_list.next() # getting the next item from the infinite generator \n\n # wait anyway for a second, you can tweak that\n time.sleep(1)\n\n\n\n# then start the two threads\ndisplayer = threading.Thread(None, # always to None since the ThreadGroup class is not implemented yet\n display, # the function the thread will run\n None, # doo, don't remember and too lazy to look in the doc\n (commands,), # *args to pass to the function\n {}) # **kwargs to pass to the function\n\ncontroler = threading.Thread(None, control, None, (commands,), {})\n\nif __name__ == \"__main__\" :\n displayer.start()\n controler.start()\n\nAs usual, using threads is tricky, so be sure you understand what you do before coding that.\nWarning : Queue will be rename in queue in Python 3.\n" ]
[ 9, 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001258566_python.txt
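The first answer in the entry above names the select module but shows no code; here is a hedged sketch of that approach for Unix-like systems (select on sys.stdin does not work on Windows, where msvcrt.kbhit/getch is the route). Without putting the terminal in raw mode, input only arrives after Enter:

    import select
    import sys

    def poll_command(timeout):
        # Returns a pending line of input, or None if nothing arrived
        # within `timeout` seconds; never blocks longer than that.
        ready, _, _ = select.select([sys.stdin], [], [], timeout)
        if ready:
            return sys.stdin.readline().strip()
        return None

    while True:
        command = poll_command(1.0)   # doubles as the one-second sleep
        if command == 'q':
            break
        print 'tick'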
Q: Pythonic way of searching for a substring in a list I have a list of strings - something like mytext = ['This is some text','this is yet more text','This is text that contains the substring foobar123','yet more text'] I want to find the first occurrence of anything that starts with foobar. If I were grepping then I would search for foobar*. My current solution looks like this for i in mytext: index = i.find("foobar") if(index!=-1): print i Which works just fine but I am wondering if there is a 'better' (i.e. more pythonic) way of doing this? Cheers, Mike A: You can also use a list comprehension: matches = [s for s in mytext if 'foobar' in s] (and if you were really looking for strings starting with 'foobar' as THC4k noticed, consider the following: matches = [s for s in mytext if s.startswith('foobar')] A: If you really want the FIRST occurrence of a string that STARTS WITH foobar (which is what your words say, though very different from your code, all answers provided, your mention of grep -- how contradictory can you get?-), try: found = next((s for s in mylist if s.startswith('foobar')), '') this gives an empty string as the found result if no item of mylist meets the condition. You could also use itertools, etc, in lieu of the simple genexp, but the key trick is this way of using the next builtin with a default (Python 2.6 and better only). A: for s in lst: if 'foobar' in s: print(s) A: results = [ s for s in lst if 'foobar' in s] print(results) A: in case you're really looking for strings that start with foobar ( not with foobar in them): for s in mylist: if s.startswith( 'foobar' ): print s or found = [ s for s in mylist if s.startswith('foobar') ]
Pythonic way of searching for a substring in a list
I have a list of strings - something like mytext = ['This is some text','this is yet more text','This is text that contains the substring foobar123','yet more text'] I want to find the first occurrence of anything that starts with foobar. If I were grepping then I would search for foobar*. My current solution looks like this for i in mytext: index = i.find("foobar") if(index!=-1): print i Which works just fine but I am wondering if there is a 'better' (i.e. more pythonic) way of doing this? Cheers, Mike
[ "You can also use a list comprehension : \nmatches = [s for s in mytext if 'foobar' in s]\n\n(and if you were really looking for strings starting with 'foobar' as THC4k noticed, consider the following : \nmatches = [s for s in mytext if s.startswith('foobar')]\n\n", "If you really want the FIRST occurrence of a string that STARTS WITH foobar (which is what your words say, though very different from your code, all answers provided, your mention of grep -- how contradictory can you get?-), try:\nfound = next((s for s in mylist if s.startswith('foobar')), '')\n\nthis gives an empty string as the found result if no item of mylist meets the condition. You could also use itertools, etc, in lieu of the simple genexp, but the key trick is this way of using the next builtin with a default (Python 2.6 and better only).\n", "for s in lst:\n if 'foobar' in s:\n print(s)\n\n", "results = [ s for s in lst if 'foobar' in s]\nprint(results)\n\n", "in case you really looking for strings that start with foobar ( not with foobar in them):\nfor s in mylist:\n if s.startswith( 'foobar' ):\n print s\n\nor \nfound = [ s for s in mylist if s.startswith('foobar') ]\n\n" ]
[ 15, 9, 6, 5, 4 ]
[]
[]
[ "list", "python", "string", "substring" ]
stackoverflow_0001260947_list_python_string_substring.txt
Q: Serializing a Python Object to XML (Apple .plist) I need to read and serialize objects from and to XML, Apple's .plist format in particular. What's the most intelligent way to do it in Python? Are there any ready-made solutions? A: Check out plistlib. A: Assuming you are on a Mac, you can use PyObjC. Here is an example of reading from a plist, from Using Python For System Administration, slide 27. from Cocoa import NSDictionary myfile = "/Library/Preferences/com.apple.SoftwareUpdate.plist" mydict = NSDictionary.dictionaryWithContentsOfFile_(myfile) print mydict["LastSuccessfulDate"] # returns: 2009-08-11 08:38:01 -0600 And an example of writing to a plist (that I wrote): #!/usr/bin/env python from Cocoa import NSDictionary, NSString myfile = "~/test.plist" myfile = NSString.stringByExpandingTildeInPath(myfile) mydict = {"Nice Number" : 47, "Universal Sum" : 42} mydict["Vector"] = (10, 20, 30) mydict["Complex"] = [47, "i^2"] mydict["Truth"] = True NSDictionary.dictionaryWithDictionary_(mydict).writeToFile_atomically_(myfile, True) When I then run defaults read ~/test in bash, I get: { Complex = ( 47, "i^2" ); "Nice Number" = 47; Truth = 1; "Universal Sum" = 42; Vector = ( 10, 20, 30 ); } And the file looks very nice when opened in Property List Editor.app.
Serializing a Python Object to XML (Apple .plist)
I need to read and serialize objects from and to XML, Apple's .plist format in particular. What's the most intelligent way to do it in Python? Are there any ready-made solutions?
[ "Check out plistlib.\n", "Assuming you are on a Mac, you can use PyObjC.\nHere is an example of reading from a plist, from Using Python For System Administration, slide 27.\nfrom Cocoa import NSDictionary\n\nmyfile = \"/Library/Preferences/com.apple.SoftwareUpdate.plist\"\nmydict = NSDictionary.dictionaryWithContentsOfFile_(myfile)\n\nprint mydict[\"LastSuccessfulDate\"]\n\n# returns: 2009-08-11 08:38:01 -0600\n\nAnd an example of writing to a plist (that I wrote):\n#!/usr/bin/env python\n\nfrom Cocoa import NSDictionary, NSString\n\nmyfile = \"~/test.plist\"\nmyfile = NSString.stringByExpandingTildeInPath(myfile)\n\nmydict = {\"Nice Number\" : 47, \"Universal Sum\" : 42}\nmydict[\"Vector\"] = (10, 20, 30)\nmydict[\"Complex\"] = [47, \"i^2\"]\nmydict[\"Truth\"] = True\n\nNSDictionary.dictionaryWithDictionary_(mydict).writeToFile_atomically_(myfile, True)\n\nWhen I then run defaults read ~/test in bash, I get:\n{\n Complex = (\n 47,\n \"i^2\"\n );\n \"Nice Number\" = 47;\n Truth = 1;\n \"Universal Sum\" = 42;\n Vector = (\n 10,\n 20,\n 30\n );\n}\n\nAnd the file looks very nice when opened in Property List Editor.app.\n" ]
[ 7, 2 ]
[]
[]
[ "plist", "python", "xml" ]
stackoverflow_0000879212_plist_python_xml.txt
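The one-line plistlib answer above, expanded into a hedged round-trip example using the Python 2 API (writePlist/readPlist; Python 3 later renamed these to dump/load):

    import plistlib

    data = {'Nice Number': 47,
            'Vector': [10, 20, 30],
            'Truth': True}

    # Serialize to Apple's XML plist format, then parse it back.
    plistlib.writePlist(data, 'test.plist')
    roundtrip = plistlib.readPlist('test.plist')
    print roundtrip['Nice Number']   # 47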
Q: Interrupt Python program deadlocked in a DLL How can I ensure a python program can be interrupted via Ctrl-C, or a similar mechanism, when it is deadlocked in code within a DLL? A: Not sure if this is exactly what you are asking, but there are issues when trying to interrupt (via Ctrl-C) a multi-threaded python process. Here is a video of a talk about the python Global Interpreter Lock that also discusses that issue: Mindblowing Python GIL A: You might want to take a look at this mailing list for a couple other suggestions, but there aren't any conclusive answers. I've encountered the issue several times, and I can at least confirm that this happens when using FFI in Haskell. I could have sworn that I once saw something in Haskell's FFI documentation mentioning that DLLs would not return from a ctrl-c signal, but I'm not having any luck finding that document. You can try using ctrl-break, but that's not working to break out of a DLL in Haskell and I'm doubting it will work in Python either. Update: ctrl-break does work for me in Python when ctrl-c does not, during a call to a DLL function in an infinite loop.
Interrupt Python program deadlocked in a DLL
How can I ensure a python program can be interrupted via Ctrl-C, or a similar mechanism, when it is deadlocked in code within a DLL?
[ "Not sure if this is exactly what you are asking, but there are issues when trying to interrupt (via Ctrl-C) a multi-threaded python process. Here is a video of a talk about the python Global Interpreter Lock that also discusses that issue:\nMindblowing Python GIL\n", "You might want to take a look at this mailing list for a couple other suggestions, but there aren't any conclusive answers.\nI've encountered the issue several times, and I can at least confirm that this happens when using FFI in Haskell. I could have sworn that I once saw something in Haskell's FFI documentation mentioning that DLLs would not return from a ctrl-c signal, but I'm not having any luck finding that document.\nYou can try using ctrl-break, but that's not working to break out of a DLL in Haskell and I'm doubting it will work in Python either.\n\nUpdate: ctrl-break does work for me in Python when ctrl-c does not, during a call to a DLL function in an infinite loop.\n" ]
[ 1, 0 ]
[]
[]
[ "dll", "python" ]
stackoverflow_0001260259_dll_python.txt
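Neither answer above gives code, so here is one hedged workaround the thread does not mention: run the DLL call in a child process via multiprocessing (Python 2.6+), so the parent stays responsive to Ctrl-C and can terminate the stuck worker. blocking_dll_call is a hypothetical stand-in for the foreign call that deadlocks:

    import multiprocessing

    def blocking_dll_call():
        # placeholder for the ctypes/DLL call that never returns
        while True:
            pass

    if __name__ == '__main__':
        worker = multiprocessing.Process(target=blocking_dll_call)
        worker.start()
        try:
            worker.join()          # parent waits, but stays interruptible
        except KeyboardInterrupt:
            worker.terminate()     # forcibly end the deadlocked process
            worker.join()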
Q: Is it a good idea to use super() in Python? Or should I just explicitly reference the superclasses whose methods I want to call? It seems brittle to repeat the names of super classes when referencing their constructors, but this page http://fuhm.net/super-harmful/ makes some good arguments against using super(). A: The book Expert Python Programming has discussed the topic of "super pitfalls" in chapter 3. It is worth reading. Below is the book's conclusion: Super usage has to be consistent: In a class hierarchy, super should be used everywhere or nowhere. Mixing super and classic calls is a confusing practice. People tend to avoid super, for their code to be more explicit. Edit: Today I read this part of the book again. I'll copy some more sentences, since super usage is tricky: Avoid multiple inheritance in your code. Be consistent with its usage and don't mix new-style and old-style. Check the class hierarchy before calling its methods in your subclass. A: You can use super, but as the article says, there are drawbacks. As long as you know them, there is no problem with using the feature. It's like people saying "use composition, not inheritance" or "never use global variables". If the feature exists, there is a reason. Just be sure to understand the why and the what and use them wisely. A: super() tries to solve for you the problem of multiple inheritance; it's hard to replicate its semantics and you certainly shouldn't create any new semantics unless you're completely sure. For single inheritance, there's really no difference between class X(Y): def func(self): Y.func(self) and class X(Y): def func(self): super().func() so I guess that's just the question of taste. A: I like super() more because it allows you to change the inherited class (for example when you're refactoring and add an intermediate class) without changing it on all the methods. A: The problem people have with super is more a problem of multiple inheritance. So it is a little unfair to blame super. Without super multiple inheritance is even worse. Michele Simionato nicely wrapped this up in his blog article on super: On the other hand, one may wonder if all super warts aren't hints of some serious problem underlying. It may well be that the problem is not with super, nor with cooperative methods: the problem may be with multiple inheritance itself. So the main lesson is that you should try to avoid multiple inheritance. In the interest of consistency I always use super, even if for single inheritance it does not really matter (apart from the small advantage of not having to know the parent class name). In Python 3+ super is more convenient, so there one should definitely use super. A: Yes, you should use super() over other methods. This is now the standard object inheritance model in Python 3. Just stick to keyword arguments in your __init__ methods and you shouldn't have too many problems. Additionally you can use **kwargs to support additional parameters that are not defined in levels of the inheritance chain. I agree that it is brittle, but no less so than using the name of the inherited class.
Is it a good idea to use super() in Python?
Or should I just explicitly reference the superclasses whose methods I want to call? It seems brittle to repeat the names of super classes when referencing their constructors, but this page http://fuhm.net/super-harmful/ makes some good arguments against using super().
[ "The book Expert Python Programming has discussed the topic of \"super pitfalls\" in chapter 3. It is worth reading. Below is the book's conclusion:\n\nSuper usage has to be consistent: In a class hierarchy, super should be used everywhere or nowhere. Mixing super and classic calls is a confusing practice. People tend to avoid super, for their code to be more explicit. \n\nEdit: Today I read this part of the book again. I'll copy some more sentences, since super usage is tricky:\n\nAvoid multiple inheritance in your code.\nBe consistent with its usage and don't mix new-style and\nold-style.\nCheck the class hierarchy before calling its methods in\nyour subclass.\n\n", "You can use super, but as the article says, there are drawbacks. As long as you know them, there is no problem with using the feature. It's like people saying \"use composition, not inheritance\" or \"never use global variables\". If the feature exists, there is a reason. Just be sure to understand the why and the what and use them wisely.\n", "super() tries to solve for you the problem of multiple inheritance; it's hard to replicate its semantics and you certainly shouldn't create any new semantics unless you're completely sure. \nFor single inheritance, there's really no difference between \nclass X(Y):\n def func(self):\n Y.func(self)\n\nand\nclass X(Y):\n def func(self):\n super().func()\n\nso I guess that's just the question of taste.\n", "I like super() more because it allows you to change the inherited class (for example when you're refactoring and add an intermediate class) without changing it on all the methods.\n", "The problem people have with super is more a problem of multiple inheritance. So it is a little unfair to blame super. Without super multiple inheritance is even worse. Michele Simionato nicely wrapped this up in his blog article on super:\n\nOn the other hand, one may wonder if\n all super warts aren't hints of some\n serious problem underlying. It may\n well be that the problem is not with\n super, nor with cooperative methods:\n the problem may be with multiple\n inheritance itself.\n\nSo the main lesson is that you should try to avoid multiple inheritance.\nIn the interest of consistency I always use super, even if for single inheritance it does not really matter (apart from the small advantage of not having to know the parent class name). In Python 3+ super is more convenient, so there one should definitely use super.\n", "Yes, you should use super() over other methods. This is now the standard object inheritance model in Python 3. \nJust stick to keyword arguments in your __init__ methods and you shouldn't have too many problems. Additionally you can use **kwargs to support additional parameters that are not defined in levels of the inheritance chain. \nI agree that it is brittle, but no less so than using the name of the inherited class.\n" ]
[ 11, 6, 4, 3, 2, 0 ]
[]
[]
[ "oop", "python", "super" ]
stackoverflow_0001259547_oop_python_super.txt
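A sketch of the keyword-argument advice in the last answer above: each __init__ consumes its own keywords and forwards the rest, so every class in a diamond runs exactly once. Written with Python 2's explicit super(Class, self) form to match the rest of the document:

    class Base(object):
        def __init__(self, **kwargs):
            super(Base, self).__init__(**kwargs)

    class A(Base):
        def __init__(self, a=None, **kwargs):
            self.a = a
            super(A, self).__init__(**kwargs)

    class B(Base):
        def __init__(self, b=None, **kwargs):
            self.b = b
            super(B, self).__init__(**kwargs)

    class C(A, B):
        pass

    c = C(a=1, b=2)    # MRO is C, A, B, Base, object
    print c.a, c.b     # 1 2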
Q: Eclipse+Pydev: "cleanup" functions aren't called when pressing "stop""? Trying to run this file in eclipse class Try: def __init__(self): pass def __del__(self): print 1 a=Try() raw_input('waiting to finish') and pressing the stop button without letting the program finish doesn't print "1", i.e the del method is never called. If i try to run the script from the shell and do ctrl-c\sys.exit "1" does get printed i.e del is called. Same thing if I try to use wait(): class A: def __enter__(self): return None def __exit__(self, type, value, traceback): print 3 with A(): print 1 raw_input('Waiting') print 2 If i press "stop" when prompted, "3" isn't printed Why is that? Is there a way around it? Thanks, Noam A: Python docs: __del__(self) Called when the instance is about to be destroyed. This is also called a destructor. If a base class has a __del__() method, the derived class's __del__() method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance. Note that it is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. It may then be called at a later time when this new reference is deleted. It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits. If you want to guarantee that a method is called use the with-statement A: Pressing stop in Eclipse outright kills the interpreter (though it actually fails fairly often). Like using kill/taskkill, the process is unaware of it's demise. Ctrl+C snippet from Wikipedia... Control-C as an abort command was popularized by UNIX and adopted in other systems. In POSIX systems, the sequence causes the active program to receive a SIGINT signal. If the program does not specify how to handle this condition, it is terminated. Typically a program which does handle a SIGINT will still terminate itself, or at least terminate the task running inside it. Ctrl+C is a control signal to interrupt the program, but as you may have noticed in the middle of that paragraph, programs can specify how to handle the signal. In Python, Ctrl+C throws a KeyboardInterrupt exception which is normally caught and then Python exits cleanly. Even if you're killing the interpreter with Ctrl+C it may handle it so that it cleans the environment before exiting. I included the following because you asked "Is there a way around it?" If you are wanting to stop on raw_input(...) calls, you could use Ctrl+Z to send EOF. I looked around, and there seems to be no way to send Ctrl+C/0x03 in Eclipse, unfortunately.
Eclipse+Pydev: "cleanup" functions aren't called when pressing "stop""?
Trying to run this file in eclipse class Try: def __init__(self): pass def __del__(self): print 1 a=Try() raw_input('waiting to finish') and pressing the stop button without letting the program finish doesn't print "1", i.e. the del method is never called. If I try to run the script from the shell and do ctrl-c\sys.exit "1" does get printed i.e. del is called. Same thing if I try to use with: class A: def __enter__(self): return None def __exit__(self, type, value, traceback): print 3 with A(): print 1 raw_input('Waiting') print 2 If I press "stop" when prompted, "3" isn't printed. Why is that? Is there a way around it? Thanks, Noam
[ "Python docs:\n\n__del__(self)\n\nCalled when the instance is about to be destroyed. This is also called a destructor. If a base class has a __del__() method, the derived class's __del__() method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance. Note that it is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. It may then be called at a later time when this new reference is deleted. It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.\n\nIf you want to guarantee that a method is called use the with-statement\n", "Pressing stop in Eclipse outright kills the interpreter (though it actually fails fairly often). Like using kill/taskkill, the process is unaware of it's demise.\nCtrl+C snippet from Wikipedia...\n\nControl-C as an abort command was\n popularized by UNIX and adopted in\n other systems. In POSIX systems, the\n sequence causes the active program to\n receive a SIGINT signal. If the\n program does not specify how to handle\n this condition, it is terminated.\n Typically a program which does handle\n a SIGINT will still terminate itself,\n or at least terminate the task running\n inside it.\n\nCtrl+C is a control signal to interrupt the program, but as you may have noticed in the middle of that paragraph, programs can specify how to handle the signal. In Python, Ctrl+C throws a KeyboardInterrupt exception which is normally caught and then Python exits cleanly. Even if you're killing the interpreter with Ctrl+C it may handle it so that it cleans the environment before exiting.\nI included the following because you asked \"Is there a way around it?\"\nIf you are wanting to stop on raw_input(...) calls, you could use Ctrl+Z to send EOF. I looked around, and there seems to be no way to send Ctrl+C/0x03 in Eclipse, unfortunately.\n" ]
[ 4, 2 ]
[]
[]
[ "del", "eclipse", "pydev", "python" ]
stackoverflow_0001261597_del_eclipse_pydev_python.txt
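A hedged illustration of the distinction the answers above draw: cleanup hooks such as atexit run when the interpreter shuts down normally -- including after an unhandled KeyboardInterrupt from Ctrl-C -- but nothing in the process runs when it is killed outright, which is what Eclipse's stop button does:

    import atexit

    def cleanup():
        print 'cleaning up'

    atexit.register(cleanup)
    raw_input('waiting to finish')
    # Ctrl-C or a normal return prints 'cleaning up';
    # a hard kill (Eclipse stop, kill -9) prints nothing.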
Q: using setuptools with post-install and python dependencies This is somewhat related to this question. Let's say I have a package that I want to deploy via rpm because I need to do some file copying on post-install and I have some non-python dependencies I want to declare. But let's also say I have some python dependencies that are easily available in PyPI. It seems like if I just package as an egg, an unzip followed by python setup.py install will automatically take care of my python dependencies, at the expense of losing any post-install functionality and non-python dependencies. Is there any recommended way of doing this? I suppose I could specify this in a pre-install script, but then I'm getting into information duplication and not really using setuptools for much of anything. (My current setup involves passing install_requires = ['dependency_name'] to setup, which works for python setup.py bdist_egg and unzip my_package.egg; python my_package/setup.py install, but not for python setup.py bdist_rpm --post-install post-install.sh and rpm --install my_package.rpm.) A: I think it would be best if your python dependencies were available as RPMs also, and declared as dependencies in the RPM. If they aren't available elsewhere, create them yourself, and put them in your yum repository. Running PyPI installations as a side effect of RPM installation is evil, as it won't support proper uninstallation (i.e. uninstalling your RPM will remove your package, but leave the dependencies behind, with no proper removal procedure).
using setuptools with post-install and python dependencies
This is somewhat related to this question. Let's say I have a package that I want to deploy via rpm because I need to do some file copying on post-install and I have some non-python dependencies I want to declare. But let's also say I have some python dependencies that are easily available in PyPI. It seems like if I just package as an egg, an unzip followed by python setup.py install will automatically take care of my python dependencies, at the expense of losing any post-install functionality and non-python dependencies. Is there any recommended way of doing this? I suppose I could specify this in a pre-install script, but then I'm getting into information duplication and not really using setuptools for much of anything. (My current setup involves passing install_requires = ['dependency_name'] to setup, which works for python setup.py bdist_egg and unzip my_package.egg; python my_package/setup.py install, but not for python setup.py bdist_rpm --post-install post-install.sh and rpm --install my_package.rpm.)
[ "I think it would be best if your python dependencies were available as RPMs also, and declared as dependencies in the RPM. If they aren't available elsewhere, create them yourself, and put them in your yum repository.\nRunning PyPI installations as a side effect of RPM installation is evil, as it won't support proper uninstallation (i.e. uninstalling your RPM will remove your package, but leave the dependencies behind, with no proper removal procedure).\n" ]
[ 7 ]
[]
[]
[ "packaging", "python", "rpm", "setuptools" ]
stackoverflow_0001262052_packaging_python_rpm_setuptools.txt
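One hedged way to act on the accepted answer above while keeping the OP's bdist_rpm flow: distutils lets you declare RPM-level requirements alongside the post-install script, so the Python dependencies are resolved by yum rather than PyPI. The package name below is an assumption -- it must match an RPM actually present in your repository:

    # setup.cfg
    [bdist_rpm]
    post-install = post-install.sh
    requires = python-dependency-name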
Q: python "block" library I'm looking for a library that let's configure/write special python block, something like ruby block and/or C-macros. A: Looks like you're looking for metapython.
python "block" library
I'm looking for a library that lets me configure/write special Python blocks, something like Ruby blocks and/or C macros.
[ "Looks like you're looking for metapython.\n" ]
[ 0 ]
[]
[]
[ "programming_languages", "python" ]
stackoverflow_0001262568_programming_languages_python.txt
Q: Creating a new input event dispatcher in Pyglet (infra red input) I recently asked this question in the pyglet-users group, but got no response, so I'm trying here instead. I would like to extend Pyglet to be able to use an infra red input device supported by lirc. I've used pyLirc before ( http://pylirc.mccabe.nu/ ) with PyGame and I want to rewrite my application to use Pyglet instead. To see if a button was pressed you would typically poll pyLirc to see if there are any button presses in its queue. My question is, what is the correct way in Pyglet to integrate pyLirc? I would prefer if it works in the same way as the current window keyboard/mouse events, but I'm not sure where to start. I know I can create a new EventDispatcher, in which I can register the new types of events and dispatch them after polling, like so: class pyLircDispatcher(pyglet.event.EventDispatcher): def poll(self): codes = pylirc.nextcode() if codes is not None: for code in codes: self.dispatch_event('on_irbutton', code) def on_irbutton(self, code): pass But how do I integrate that into the application's main loop to keep on calling poll() if I use pyglet.app.run() and how do I attach this eventdispatcher to my window so it works the same as the mouse and keyboard dispatchers? I see that I can set up a scheduler to call poll() at regular intervals with pyglet.clock.schedule_interval, but is this the correct way to do it? A: The correct way is whatever works. You can always change it later if you find a better way. A: It's probably too late for the OP, but I'll reply anyway in case it's helpful to anyone else. Creating the event dispatcher and using pyglet.clock.schedule_interval to call poll() at regular intervals is a good way to do it. To attach the event dispatcher to your window, you need to create an instance of the dispatcher and then call its push_handlers method: dispatcher.push_handlers(window) Then you can treat the events just like any other events coming into the window.
Creating a new input event dispatcher in Pyglet (infra red input)
I recently asked this question in the pyglet-users group, but got no response, so I'm trying here instead. I would like to extend Pyglet to be able to use an infra red input device supported by lirc. I've used pyLirc before ( http://pylirc.mccabe.nu/ ) with PyGame and I want to rewrite my application to use Pyglet instead. To see if a button was pressed you would typically poll pyLirc to see if there are any button presses in its queue. My question is, what is the correct way in Pyglet to integrate pyLirc? I would prefer if it works in the same way as the current window keyboard/mouse events, but I'm not sure where to start. I know I can create a new EventDispatcher, in which I can register the new types of events and dispatch them after polling, like so: class pyLircDispatcher(pyglet.event.EventDispatcher): def poll(self): codes = pylirc.nextcode() if codes is not None: for code in codes: self.dispatch_event('on_irbutton', code) def on_irbutton(self, code): pass But how do I integrate that into the application's main loop to keep on calling poll() if I use pyglet.app.run() and how do I attach this eventdispatcher to my window so it works the same as the mouse and keyboard dispatchers? I see that I can set up a scheduler to call poll() at regular intervals with pyglet.clock.schedule_interval, but is this the correct way to do it?
[ "The correct way is whatever works. You can always change it later if you find a better way.\n", "It's probably too late for the OP, but I'll reply anyway in case it's helpful to anyone else.\nCreating the event dispatcher and using pyglet.clock.schedule_interval to call poll() at regular intervals is a good way to do it.\nTo attach the event dispatcher to your window, you need to create an instance of the dispatcher and then call its push_handlers method:\ndispatcher.push_handlers(window)\n\nThen you can treat the events just like any other events coming into the window.\n" ]
[ 1, 1 ]
[]
[]
[ "pyglet", "python" ]
stackoverflow_0001206628_pyglet_python.txt
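Putting the pieces of the answer above together, a sketch of the wiring (untested; pyLircDispatcher is the class from the question, and the event type must be registered on it before dispatch_event is called):

    import pyglet

    pyLircDispatcher.register_event_type('on_irbutton')
    dispatcher = pyLircDispatcher()

    def on_irbutton(code):
        print 'IR button pressed:', code

    dispatcher.push_handlers(on_irbutton)   # handlers are matched by name

    # Poll lirc ten times a second from pyglet's own event loop.
    pyglet.clock.schedule_interval(lambda dt: dispatcher.poll(), 0.1)

    window = pyglet.window.Window()
    pyglet.app.run()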
Q: How do I pick 2 random items from a Python set? I currently have a Python set of size n where n >= 0. Is there a quick 1- or 2-line Python solution to do it? For example, the set will look like: fruits = set(['apple', 'orange', 'watermelon', 'grape']) The goal is to pick 2 random items from the above and it's possible that the above set can contain 0, 1 or more items. The only way I can think of doing the above is to convert the set to a list (mutable) from where I can access 2 random unique indices within the length of the set. A: Use the random module: http://docs.python.org/library/random.html import random random.sample(set([1, 2, 3, 4, 5, 6]), 2) This samples the two values without replacement (so the two values are different).
How do I pick 2 random items from a Python set?
I currently have a Python set of size n where n >= 0. Is there a quick 1- or 2-line Python solution to do it? For example, the set will look like: fruits = set(['apple', 'orange', 'watermelon', 'grape']) The goal is to pick 2 random items from the above and it's possible that the above set can contain 0, 1 or more items. The only way I can think of doing the above is to convert the set to a list (mutable) from where I can access 2 random unique indices within the length of the set.
[ "Use the random module: http://docs.python.org/library/random.html\nimport random\nrandom.sample(set([1, 2, 3, 4, 5, 6]), 2)\n\nThis samples the two values without replacement (so the two values are different).\n" ]
[ 335 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001262955_python_random.txt
Q: Reverse mapping class attributes to classes in Python I have some code in Python where I'll have a bunch of classes, each of which will have an attribute _internal_attribute. I would like to be able to generate a mapping of those attributes to the original class. Essentially I would like to be able to do this: class A(object): _internal_attribute = 'A attribute' class B(object): _internal_attribute = 'B attribute' a_instance = magic_reverse_mapping['A attribute']() b_instance = magic_reverse_mapping['B attribute']() What I'm missing here is how to generate magic_reverse_mapping dict. I have a gut feeling that having a metaclass generate A and B is the correct way to go about this; does that seem right? A: You can use a meta class to automatically register your classes in magic_reverse_mapping: magic_reverse_mapping = {} class MagicRegister(type): def __new__(meta, name, bases, dict): cls = type.__new__(meta, name, bases, dict) magic_reverse_mapping[dict['_internal_attribute']] = cls return cls class A(object): __metaclass__ = MagicRegister _internal_attribute = 'A attribute' afoo = magic_reverse_mapping['A attribute']() Alternatively you can use a decorator on your classes to register them. I think this is more readable and easier to understand: magic_reverse_mapping = {} def magic_register(cls): magic_reverse_mapping[cls._internal_attribute] = cls return cls @magic_register class A(object): _internal_attribute = 'A attribute' afoo = magic_reverse_mapping['A attribute']() Or you could even do it by hand. It's not that much more work without using any magic: reverse_mapping = {} class A(object): _internal_attribute = 'A attribute' reverse_mapping[A._internal_attribute] = A Looking at the different variants I think the decorator version would be the most pleasant to use. A: You need some data structure to store the list of applicable classes in the first place, but you don't have to generate it in the first place. You can read classes from globals instead. This naturally assumes that your classes extend object, as they do in your first post. def magic_reverse_mapping(attribute_name, attribute_value): classobjects = [val for val in globals().values() if isinstance(val, object)] attrobjects = [cls for cls in classobjects if hasattr(cls, attribute_name)] resultobjects = [cls for cls in attrobjects if object.__getattribute__(cls, attribute_name) == attribute_value] return resultobjects magic_reverse_mapping('_internal_attribute', 'A attribute') #output: [<class '__main__.A'>] Note that this returns a list of classes with that attribute value, because there may be more than one. If you wanted to instantiate the first one: magic_reverse_mapping('_internal_attribute', 'A attribute')[0]() #output: <__main__.A object at 0xb7ce486c> Unlike in sth's answer, you don't have to add a decorator to your classes (neat solution, though). However, there's no way to exclude any classes that are in the global namespace.
Reverse mapping class attributes to classes in Python
I have some code in Python where I'll have a bunch of classes, each of which will have an attribute _internal_attribute. I would like to be able to generate a mapping of those attributes to the original class. Essentially I would like to be able to do this: class A(object): _internal_attribute = 'A attribute' class B(object): _internal_attribute = 'B attribute' a_instance = magic_reverse_mapping['A attribute']() b_instance = magic_reverse_mapping['B attribute']() What I'm missing here is how to generate magic_reverse_mapping dict. I have a gut feeling that having a metaclass generate A and B is the correct way to go about this; does that seem right?
[ "You can use a meta class to automatically register your classes in magic_reverse_mapping:\nmagic_reverse_mapping = {}\n\nclass MagicRegister(type):\n def __new__(meta, name, bases, dict):\n cls = type.__new__(meta, name, bases, dict)\n magic_reverse_mapping[dict['_internal_attribute']] = cls\n return cls\n\nclass A(object):\n __metaclass__ = MagicRegister\n _internal_attribute = 'A attribute'\n\nafoo = magic_reverse_mapping['A attribute']()\n\nAlternatively you can use a decorator on your classes to register them. I think this is more readable and easier to understand:\nmagic_reverse_mapping = {}\n\ndef magic_register(cls):\n magic_reverse_mapping[cls._internal_attribute] = cls\n return cls\n\n@magic_register\nclass A(object):\n _internal_attribute = 'A attribute'\n\nafoo = magic_reverse_mapping['A attribute']()\n\nOr you could even do it by hand. It's not that much more work without using any magic:\nreverse_mapping = {}\n\nclass A(object):\n _internal_attribute = 'A attribute'\n\nreverse_mapping[A._internal_attribute] = A\n\nLooking at the different variants I think the decorator version would be the most pleasant to use.\n", "You need some data structure to store the list of applicable classes in the first place, but you don't have to generate it in the first place. You can read classes from globals instead. This naturally assumes that your classes extend object, as they do in your first post.\ndef magic_reverse_mapping(attribute_name, attribute_value):\n classobjects = [val for val in globals().values() if isinstance(val, object)]\n attrobjects = [cls for cls in classobjects if hasattr(cls, attribute_name)]\n resultobjects = [cls for cls in attrobjects if object.__getattribute__(cls, attribute_name) == attribute_value]\n return resultobjects\n\nmagic_reverse_mapping('_internal_attribute', 'A attribute')\n#output: [<class '__main__.A'>]\n\nNote that this returns a list of classes with that attribute value, because there may be more than one. If you wanted to instantiate the first one:\nmagic_reverse_mapping('_internal_attribute', 'A attribute')[0]()\n#output: <__main__.A object at 0xb7ce486c>\n\nUnlike in sth's answer, you don't have to add a decorator to your classes (neat solution, though). However, there's no way to exclude any classes that are in the global namespace.\n" ]
[ 5, 3 ]
[]
[]
[ "metaclass", "python" ]
stackoverflow_0001263479_metaclass_python.txt
Q: How to catch errors elegantly and keep methods clean? I am in the process of writing a small(er) Python script to automate a semi-frequent, long, and error-prone task. This script is responsible for making various system calls - either through os.system or through os.(mkdir|chdir|etc). Here is an example of my code right now: class AClass: def __init__(self, foo, bar, verbose=False, silent=False): # Sets up variables needed for each instance, etc self.redirect = '' if silent: self.redirect = '> 2>&1' self.verbose = verbose def a_method(self): """ Responsible for running 4-6 things via system calls as described """ if self.verbose and not self.silent: print "Creating a directory" try: os.mkdir('foobar') except OSError, e: raise OSError, "Problem creating directory %s: %s" % (e.filename, e.strerror) if self.verbose and not self.silent: print "Listing a directory" if (os.system('ls foobar %s') % self.redirect) is not 0: raise OSError, "Could not list directory foobar" def b_method(self): """ Looks very similar to a_method() """ def run(self): """ Stitches everything together """ try: a_method() except OSError, e: print "a_method(): %s" % e.strerror sys.exit(-1) try: b_method() except OSError, e: print "b_method(): %s" % e.strerror sys.exit(-1) Obviously writing all the if self.verbose and not self.silent is messy and then the try/catch or if around each call is ugly to look at. I would have liked to use Python's logging class and simply have one logging level (verbose) configurable via command line and then I can simply call logger.debug('My message') but I am using Python 2.2 and I do not have access to the logging class. Summary/Base Questions I am using Python 2.2 and I cannot change this. I am running on an ESX 3.0.2 server and I cannot touch it in any other way for the time being. What is the best way to handle error checking and verbose output without tying this logic to your class (which should only do One Thing)? How can I reduce the clutter with something more simple or elegant to look at? Thanks! A: How to clean up your verbose output Move the verbose/quiet logic into a single function, and then call that function for all of your output. If you make it something nice and short it keeps your mainline code quite tidy. def P(s): if (verbose): print s I have a package that does this in our internal code, it has the following methods: P -- normal print: P('this prints regardless, --quiet does not shut it up') V -- verbose print: V('this only prints if --verbose') D -- debug print: D('this only prints if --debug') A: writing all the if verbose and not silent is messy So, instead, assign sys.stdout to a dummy class whose write is a no-op if you need to be unverbose or silent, then just use print without needing guards. (Do remember to restore sys.stdout to the real thing for prints that aren't so conditioned -- easier to encapsulate in a couple of functions, actually). For error checks, all the blocks like: try: a_method() except OSError, e: print "a_method(): %s" % e.strerror sys.exit(-1) can and should be like docall(a_method) for what I hope is a pretty obvious def docall(acallable):. Similarly, other try/except case and ones where the new exception is raised conditionally can become calls to functions with appropriate arguments (including callables, i.e. higher order functions). I'll be glad to add detailed code if you clarify what parts of this are hard or unclear to you! Python 2.2, while now very old, was a great language in its way, and it's just as feasible to use it neatly, as you wish, as it would be for other great old languages, like, say, MacLisp;-).
How to catch errors elegantly and keep methods clean?
I am in the process of writing a small(er) Python script to automate a semi-frequent, long, and error-prone task. This script is responsible for making various system calls - either through os.system or through os.(mkdir|chdir|etc). Here is an example of my code right now: class AClass: def __init__(self, foo, bar, verbose=False, silent=False): # Sets up variables needed for each instance, etc self.redirect = '' if silent: self.redirect = '> 2>&1' self.verbose = verbose def a_method(self): """ Responsible for running 4-6 things via system calls as described """ if self.verbose and not self.silent: print "Creating a directory" try: os.mkdir('foobar') except OSError, e: raise OSError, "Problem creating directory %s: %s" % (e.filename, e.strerror) if self.verbose and not self.silent: print "Listing a directory" if (os.system('ls foobar %s') % self.redirect) is not 0: raise OSError, "Could not list directory foobar" def b_method(self): """ Looks very similar to a_method() """ def run(self): """ Stitches everything together """ try: a_method() except OSError, e: print "a_method(): %s" % e.strerror sys.exit(-1) try: b_method() except OSError, e: print "b_method(): %s" % e.strerror sys.exit(-1) Obviously writing all the if self.verbose and not self.silent is messy and then the try/catch or if around each call is ugly to look at. I would have liked to use Python's logging class and simply have one logging level (verbose) configurable via command line and then I can simply call logger.debug('My message') but I am using Python 2.2 and I do not have access to the logging class. Summary/Base Questions I am using Python 2.2 and I cannot change this. I am running on an ESX 3.0.2 server and I cannot touch it in any other way for the time being. What is the best way to handle error checking and verbose output without tying this logic to your class (which should only do One Thing)? How can I reduce the clutter with something more simple or elegant to look at? Thanks!
[ "How to clean up your verbose output\nMove the verbose/quiet logic into a single function, and then call that function for all of your output. If you make it something nice and short it keeps your mainline code quite tidy.\ndef P(s):\n if (verbose):\n print s\n\nI have a package that does this in our internal code, it has the following methods:\n\nP -- normal print: P('this prints regardless, --quiet does not shut it up')\nV -- verbose print: V('this only prints if --verbose')\nD -- debug print: D('this only prints if --debug')\n\n", "writing all the if verbose and not silent is messy\n\nSo, instead, assign sys.stdout to a dummy class whose write is a no-op if you need to be unverbose or silent, then just use print without needing guards. (Do remember to restore sys.stdout to the real thing for prints that aren't so conditioned -- easier to encapsulate in a couple of functions, actually).\nFor error checks, all the blocks like:\n try:\n a_method()\n except OSError, e:\n print \"a_method(): %s\" % e.strerror\n sys.exit(-1)\n\ncan and should be like\n docall(a_method)\n\nfor what I hope is a pretty obvious def docall(acallable):.\nSimilarly, other try/except case and ones where the new exception is raised conditionally can become calls to functions with appropriate arguments (including callables, i.e. higher order functions). I'll be glad to add detailed code if you clarify what parts of this are hard or unclear to you!\nPython 2.2, while now very old, was a great language in its way, and it's just as feasible to use it neatly, as you wish, as it would be for other great old languages, like, say, MacLisp;-).\n" ]
[ 4, 4 ]
[]
[]
[ "python" ]
stackoverflow_0001264150_python.txt
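The sys.stdout swap from the second answer above, sketched in Python 2.2-compatible form (the OP's version predates ternary expressions and lacks the logging module; only a write method is needed):

    import sys

    class NullWriter:
        def write(self, text):
            pass    # swallow all output

    real_stdout = sys.stdout

    def set_quiet(quiet):
        # After this, bare print statements are no-ops when quiet;
        # restore real_stdout for messages that must always appear.
        if quiet:
            sys.stdout = NullWriter()
        else:
            sys.stdout = real_stdout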
Q: Mechanize and BeautifulSoup for PHP? I was wondering if there was anything similar like Mechanize or BeautifulSoup for PHP? A: SimpleTest provides you with similar functionality: http://www.simpletest.org/en/browser_documentation.html A: I don't know how powerful BeautifulSoup is, so maybe this won't be as great ; but you could try using DOMDocument::loadHTML : The function parses the HTML contained in the string source . Unlike loading XML, HTML does not have to be well-formed to load. After using this, you should be able to access the HTML document using DOM methods -- including XPath queries.
Mechanize and BeautifulSoup for PHP?
I was wondering if there was anything similar to Mechanize or BeautifulSoup for PHP?
[ "SimpleTest provides you with similar functionality:\nhttp://www.simpletest.org/en/browser_documentation.html\n", "I don't know how powerful BeautifulSoup is, so maybe this won't be as great ; but you could try using DOMDocument::loadHTML :\n\nThe function parses the HTML contained\n in the string source . Unlike loading\n XML, HTML does not have to be\n well-formed to load.\n\nAfter using this, you should be able to access the HTML document using DOM methods -- including XPath queries.\n" ]
[ 8, 8 ]
[]
[]
[ "beautifulsoup", "mechanize", "php", "python" ]
stackoverflow_0001263800_beautifulsoup_mechanize_php_python.txt
Q: Django saving objects - works, but values of objects seem to be cached until I restart server I'm writing an app for tagging photos. One of the views handles adding new tags and without boilerplate for POST/GET and handling field errors it does this: tagName = request.cleaned_attributes['tagName'] t = Tag.objects.create(name = tagName) t.save() Now in a view for another request to retrieve all tags I have: tags = Tag.objects.all() I see the data only after restarting Django development server, which is odd to me. Seems like Tag.objects.all() has some caching mechanism that is not invalidated properly? The data is for sure saved to the database. The database backend is sqlite. I guess I am either missing some configuration or just forgot to do something simple. Ideas? A: Tag.objects.all() is a QuerySet. These do not hit the database until you do something to evaluate them. So, how exactly are you using it in your view? If you are using a generic view and passing the queryset through extra_context, for example, it wouldn't be re-evaluated. Also, as an aside, Tag.objects.create(name = tagName) will automatically save to the db.
Django saving objects - works, but values of objects seem to be cached until I restart server
I'm writing an app for tagging photos. One of the views handles adding new tags and without boilerplate for POST/GET and handling field errors it does this: tagName = request.cleaned_attributes['tagName'] t = Tag.objects.create(name = tagName) t.save() Now in a view for another request to retrieve all tags I have: tags = Tag.objects.all() I see the data only after restarting Django development server, which is odd to me. Seems like Tag.objects.all() has some caching mechanism that is not invalidated properly? The data is for sure saved to the database. The database backend is sqlite. I guess I am either missing some configuration or just forgot to do something simple. Ideas?
[ "Tag.objects.all() is a QuerySet. These do not hit the database until you do something to evaluate them. So, how exactly are you using it in your view? If you are using a generic view and passing the queryset through extra_context, for example, it wouldn't be re-evaluated. \nAlso, as an aside, Tag.objects.create(name = tagName) will automatically save to the db.\n" ]
[ 4 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001264246_django_django_models_python.txt
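A hedged sketch of the point the answer makes — build the QuerySet inside the view so each request re-evaluates it against the database; the view name, template name, and import path are made up.

from django.shortcuts import render_to_response
from myapp.models import Tag   # hypothetical app path

def tag_list(request):
    # Tag.objects.all() is lazy: constructing it here, rather than at
    # module level or via extra_context, re-runs the query per request
    tags = Tag.objects.all()
    return render_to_response('tags.html', {'tags': tags})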
Q: Plone navigation with one-language-per-folder site I am developing a multi-lingual site with Plone. I want to have one language per folder but the Plone navigation UI is causing problems. I have several different folders in my root, such as en, de, nl, etcetera. Inside those folders is the actual content, such as en/news, nl/nieuw, de/nachrichten, etcetera. I have set up Plone Language Tool to pick the language setting from the URL, but the navigation is not showing the correct items. The tabbed navigation is making tabs for the language folders. The path bar is showing "You are here: Home -> en -> news". How can I change the tabbed navigation and the path bar to show the items inside the language specific folder? I want to have a tab for "news", not for "en" on the English site. The path bar should show "You are here: Home -> news". I am using Plone 3.2.3 with Plone Language Tool 3.0.2 and LinguaPlone 2.4. A: Each language folder should implement INavigationRoot. You can set that up by going to the ZMI, finding the folder, and going to the Interfaces tab. There you will find plone.app.layout.navigation.interfaces.INavigationRoot. Click it, and navigation will treat it as the root of the tree. (Note that in Plone 3.3 the support for INavigationRoot has gotten better, so you may want to upgrade. -- edit by Maurits)
Plone navigation with one-language-per-folder site
I am developing a multi-lingual site with Plone. I want to have one language per folder but the Plone navigation UI is causing problems. I have several different folders in my root, such as en, de, nl, etcetera. Inside those folders is the actual content, such as en/news, nl/nieuw, de/nachrichten, etcetera. I have set up Plone Language Tool to pick the language setting from the URL, but the navigation is not showing the correct items. The tabbed navigation is making tabs for the language folders. The path bar is showing "You are here: Home -> en -> news". How can I change the tabbed navigation and the path bar to show the items inside the language specific folder? I want to have a tab for "news", not for "en" on the English site. The path bar should show "You are here: Home -> news". I am using Plone 3.2.3 with Plone Language Tool 3.0.2 and LinguaPlone 2.4.
[ "Each language folder should implement INavigationRoot.\nYou can set that up by going to the ZMI, finding the folder, and going to the Interfaces tab. There you will find plone.app.layout.navigation.interfaces.INavigationRoot. Click it, and navigation will treat it as the root of the tree.\n(Note that in Plone 3.3 the support for INavigationRoot has gotten better, so you may want to upgrade. -- edit by Maurits)\n" ]
[ 4 ]
[]
[]
[ "linguaplone", "localization", "multilingual", "plone", "python" ]
stackoverflow_0001264421_linguaplone_localization_multilingual_plone_python.txt
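If clicking through the ZMI for each language folder is tedious, the same marking can be scripted — a sketch assuming portal is the Plone site object and the folder ids match the language codes.

from zope.interface import alsoProvides
from plone.app.layout.navigation.interfaces import INavigationRoot

for lang in ('en', 'de', 'nl'):
    folder = getattr(portal, lang)
    alsoProvides(folder, INavigationRoot)
    folder.reindexObject(idxs=['object_provides'])   # keep the catalog in sync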
Q: Is there a class library diagram for django? I'm looking for a way to find out the class structure at a glance for django. Is there a link to an overview of it? A: In the app django_extensions on google code. There is GraphModels command A: A class diagram of most of django's class structure is really not very interesting or useful for that matter. The problem is that most classes you use for development with django are standalone in the sense that they don't branch out to child classes. The only thing that comes to mind is the structure of the class-based generic views, but that's not yet committed to trunk. Other than that, there's really not much class structure that you use when developing with django. There are several examples for development for django, but most are transparent to the user (e.g. QuerySet and its children classes). I think a much better source for a better overview is the documentation and the source in general (no pun intended). A: I'm not aware of a reference diagram. But you could probably generate one using a tool like the following: Eclipse-PyUML PyNSource A: Graphviz is solution worth looking at. Personally, I much prefer a graphic representation over UML
Is there a class library diagram for django?
I'm looking for a way to find out the class structure at a glance for django. Is there a link to an overview of it?
[ "In the app django_extensions on google code.\nThere is GraphModels command\n", "A class diagram of most of django's class structure is really not very interesting or useful for that matter. The problem is that most classes you use for development with django are standalone in the sense that they don't branch out to child classes. The only thing that comes to mind is the structure of the class-based generic views, but that's not yet committed to trunk.\nOther than that, there's really not much class structure that you use when developing with django. There are several examples for development for django, but most are transparent to the user (e.g. QuerySet and its children classes). I think a much better source for a better overview is the documentation and the source in general (no pun intended).\n", "I'm not aware of a reference diagram. But you could probably generate one using a tool like the following:\n\nEclipse-PyUML\nPyNSource\n\n", "Graphviz is solution worth looking at. Personally, I much prefer a graphic representation over UML\n" ]
[ 8, 1, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001263677_django_python.txt
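For the django_extensions route, the usual invocation (after adding 'django_extensions' to INSTALLED_APPS) pipes the output through Graphviz; the flags and filenames here are illustrative.

./manage.py graph_models -a > models.dot    # -a = all installed apps
dot -Tpng models.dot -o models.png          # render with Graphviz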
Q: Error with time.strptime() and python-twitter I use python-twitter to get the date of a tweet and try to parse it with the time.strptime() function. When I do it interactively, everything works fine. When I call the program from my bash, I get a ValueError saying (for example): time data u'Wed Aug 12 08:43:35 +0000 2009' does not match format '%a %b %d %H:%M:%S +0000 %Y' Code looks like this: api = twitter.Api(username='username', password='pw') user = api.GetUser(username) latest = user.GetStatus() date = latest.GetCreatedAt() date_struct = time.strptime(date, '%a %b %d %H:%M:%S +0000 %Y') which throws the exception mentioned above. It works on the interactive shell: >>> user = api.GetUser('username') >>> latest = user.GetStatus() >>> date = latest.GetCreatedAt() >>> date u'Wed Aug 12 08:15:10 +0000 2009' >>> struct = time.strptime(date, '%a %b %d %H:%M:%S +0000 %Y') >>> struct time.struct_time(tm_year=2009, tm_mon=8, tm_mday=12, tm_hour=8, tm_min=15, tm_sec=10, tm_wday=2, tm_yday=224, tm_isdst=-1) Anyone have any idea why this is happening? I am using Ubuntu 9.04, Python 2.6.2 and python-twitter 0.6. All files are in Unicode. A: Things to try: (1) Is it possible that your interactive session and your "bash" are using different locales? Put print time.strftime(some known struct_time) into your script and see if the day and month come out in a different language. (2) Put print repr(date) in your script to show unambiguously what you are getting from the latest.GetCreatedAt() call.
Error with time.strptime() and python-twitter
I use python-twitter to get the date of a tweet and try to parse it with the time.strptime() function. When I do it interactively, everything works fine. When I call the program from my bash, I get a ValueError saying (for example): time data u'Wed Aug 12 08:43:35 +0000 2009' does not match format '%a %b %d %H:%M:%S +0000 %Y' Code looks like this: api = twitter.Api(username='username', password='pw') user = api.GetUser(username) latest = user.GetStatus() date = latest.GetCreatedAt() date_struct = time.strptime(date, '%a %b %d %H:%M:%S +0000 %Y') which throws the exception mentioned above. It works on the interactive shell: >>> user = api.GetUser('username') >>> latest = user.GetStatus() >>> date = latest.GetCreatedAt() >>> date u'Wed Aug 12 08:15:10 +0000 2009' >>> struct = time.strptime(date, '%a %b %d %H:%M:%S +0000 %Y') >>> struct time.struct_time(tm_year=2009, tm_mon=8, tm_mday=12, tm_hour=8, tm_min=15, tm_sec=10, tm_wday=2, tm_yday=224, tm_isdst=-1) Anyone have any idea why this is happening? I am using Ubuntu 9.04, Python 2.6.2 and python-twitter 0.6. All files are in Unicode.
[ "Things to try:\n(1) Is it possible that your interactive session and your \"bash\" are using different locales? Put print time.strftime(some known struct_time) into your script and see if the day and month come out in a different language.\n(2) Put print repr(date) in your script to show unambiguously what you are getting from the latest.GetCreatedAt() call.\n" ]
[ 2 ]
[]
[]
[ "python", "twitter" ]
stackoverflow_0001265064_python_twitter.txt
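A sketch of the locale check and fix the answer suggests — pinning LC_TIME to the C locale so %a/%b match English day and month names regardless of how the invoking shell is configured.

import locale
import time

print repr(date)                        # confirm what GetCreatedAt() returned
locale.setlocale(locale.LC_TIME, 'C')   # force English month/day name parsing
struct = time.strptime(date, '%a %b %d %H:%M:%S +0000 %Y')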
Q: xmpp with python: xmpp.protocol.InvalidFrom: (u'invalid-from', '') cl = xmpp.Client('myserver.com') if not cl.connect(server=('mysefver.com',5223)): raise IOError('cannot connect to server') cl.RegisterHandler('message',messageHandler) cl.auth('myemail@myserver.com', 'mypassword', 'statusbot') cl.sendInitPresence() msgtext = formatToDo(cal, 'text') message = xmpp.Message('anotheremail@myserver.com', msgtext) message.setAttr('type', 'chat') cl.send(message) I get the following error message when I try to run it: xmpp.protocol.InvalidFrom: (u'invalid-from', '') Why is this happening :( A: From the XMPP protocol specification: If the value of the 'from' address does not match the hostname represented by the Receiving Server when opening the TCP connection (or any validated domain thereof, such as a validated subdomain of the Receiving Server's hostname or another validated domain hosted by the Receiving Server), then the Authoritative Server MUST generate an stream error condition and terminate both the XML stream and the underlying TCP connection. which basically means, that if the sender is not recognized by the xmpp-server, it'll reply with this message. XMPP supplies a registration mechanism: xmpp.features.register
xmpp with python: xmpp.protocol.InvalidFrom: (u'invalid-from', '')
cl = xmpp.Client('myserver.com') if not cl.connect(server=('mysefver.com',5223)): raise IOError('cannot connect to server') cl.RegisterHandler('message',messageHandler) cl.auth('myemail@myserver.com', 'mypassword', 'statusbot') cl.sendInitPresence() msgtext = formatToDo(cal, 'text') message = xmpp.Message('anotheremail@myserver.com', msgtext) message.setAttr('type', 'chat') cl.send(message) I get the following error message when I try to run it: xmpp.protocol.InvalidFrom: (u'invalid-from', '') Why is this happening :(
[ "From the XMPP protocol specification:\n\nIf the value of the 'from'\n address does not match the hostname represented by the Receiving\n Server when opening the TCP connection (or any validated domain\n thereof, such as a validated subdomain of the Receiving Server's\n hostname or another validated domain hosted by the Receiving Server),\n then the Authoritative Server MUST generate an stream\n error condition and terminate both the XML stream and the underlying\n TCP connection.\n\nwhich basically means, that if the sender is not recognized by the xmpp-server, it'll reply with this message. XMPP supplies a registration mechanism: xmpp.features.register\n" ]
[ 4 ]
[]
[]
[ "bots", "python", "xmpp" ]
stackoverflow_0001265146_bots_python_xmpp.txt
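One frequent cause of this error with xmpppy is passing a full JID to auth(); a hedged sketch of the conventional connect/auth split — whether it cures this particular server's complaint is an assumption.

import xmpp

jid = xmpp.protocol.JID('myemail@myserver.com')
cl = xmpp.Client(jid.getDomain(), debug=[])
if not cl.connect():
    raise IOError('cannot connect to server')
# auth() expects just the node part ('myemail'), not the full JID;
# a mismatched 'from' can make the server reject the stream
cl.auth(jid.getNode(), 'mypassword', 'statusbot')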
Q: Django Forms, Display Error on ModelMultipleChoiceField I'm having an issue getting validation error messages to display for a particular field in a Django form, where the field in question is a ModelMultipleChoiceField. In the clean(self) method for the Form, I try to add the error message to the field like so: msg = 'error' self._errors['field_name'] = ErrorList([msg]) raise forms.ValidationError(msg) This works okay where 'field_name' points to other field types, but for ModelMultipleChoiceField it just won't display. Should this be handled differently? A: Yeah, it sounds like you're doing it wrong. You should be using the clean_<fieldname> method instead. Read through that whole document, in fact - it's very informative. A: Why are you instantiating an ErrorList and writing to self._errors directly? Calling "raise forms.ValidationError(msg)" takes care of all that already. And what does your template look like?
Django Forms, Display Error on ModelMultipleChoiceField
I'm having an issue getting validation error messages to display for a particular field in a Django form, where the field in question is a ModelMultipleChoiceField. In the clean(self) method for the Form, I try to add the error message to the field like so: msg = 'error' self._errors['field_name'] = ErrorList([msg]) raise forms.ValidationError(msg) This works okay where 'field_name' points to other field types, but for ModelMultipleChoiceField it just won't display. Should this be handled differently?
[ "Yeah, it sounds like you're doing it wrong.\nYou should be using the clean_<fieldname> method instead. Read through that whole document, in fact - it's very informative.\n", "Why are you instantiating an ErrorList and writing to self._errors directly? Calling \"raise forms.ValidationError(msg)\" takes care of all that already. \nAnd what does your template look like?\n" ]
[ 2, 0 ]
[]
[]
[ "django", "forms", "python", "validation" ]
stackoverflow_0000265888_django_forms_python_validation.txt
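A minimal sketch of the per-field hook the first answer points at — a ValidationError raised inside clean_<fieldname> is attached to that field automatically, ModelMultipleChoiceField included. The field and message names are placeholders.

from django import forms

class MyForm(forms.ModelForm):
    def clean_field_name(self):
        value = self.cleaned_data['field_name']
        if not value:
            # no manual ErrorList bookkeeping: Django files this
            # message under 'field_name' for you
            raise forms.ValidationError('error')
        return value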
Q: How do I do database transactions with psycopg2/python db api? Im fiddling with psycopg2 , and while there's a .commit() and .rollback() there's no .begin() or similar to start a transaction , or so it seems ? I'd expect to be able to do db.begin() # possible even set the isolation level here curs = db.cursor() cursor.execute('select etc... for update') ... cursor.execute('update ... etc.') db.commit(); So, how do transactions work with psycopg2 ? How would I set/change the isolation level ? A: Use db.set_isolation_level(n), assuming db is your connection object. As Federico wrote here, the meaning of n is: 0 -> autocommit 1 -> read committed 2 -> serialized (but not officially supported by pg) 3 -> serialized As documented here, psycopg2.extensions gives you symbolic constants for the purpose: Setting transaction isolation levels ==================================== psycopg2 connection objects hold informations about the PostgreSQL `transaction isolation level`_. The current transaction level can be read from the `.isolation_level` attribute. The default isolation level is ``READ COMMITTED``. A different isolation level con be set through the `.set_isolation_level()` method. The level can be set to one of the following constants, defined in `psycopg2.extensions`: `ISOLATION_LEVEL_AUTOCOMMIT` No transaction is started when command are issued and no `.commit()`/`.rollback()` is required. Some PostgreSQL command such as ``CREATE DATABASE`` can't run into a transaction: to run such command use `.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)`. `ISOLATION_LEVEL_READ_COMMITTED` This is the default value. A new transaction is started at the first `.execute()` command on a cursor and at each new `.execute()` after a `.commit()` or a `.rollback()`. The transaction runs in the PostgreSQL ``READ COMMITTED`` isolation level. `ISOLATION_LEVEL_SERIALIZABLE` Transactions are run at a ``SERIALIZABLE`` isolation level. .. _transaction isolation level: http://www.postgresql.org/docs/8.1/static/transaction-iso.html A: The BEGIN with python standard DB API is always implicit. When you start working with the database the driver issues a BEGIN and after any COMMIT or ROLLBACK another BEGIN is issued. A python DB API compliant with the specification should always work this way (not only the postgresql). You can change this setting the isolation level to autocommit with db.set_isolation_level(n) as pointed by Alex Martelli. As Tebas said the begin is implicit but not executed until an SQL is executed, so if you don't execute any SQL, the session is not in a transaction. A: I prefer to explicitly see where my transactions are : cursor.execute("BEGIN") cursor.execute("COMMIT")
How do I do database transactions with psycopg2/python db api?
I'm fiddling with psycopg2, and while there's a .commit() and .rollback() there's no .begin() or similar to start a transaction, or so it seems? I'd expect to be able to do db.begin() # possibly even set the isolation level here curs = db.cursor() curs.execute('select etc... for update') ... curs.execute('update ... etc.') db.commit(); So, how do transactions work with psycopg2? How would I set/change the isolation level?
[ "Use db.set_isolation_level(n), assuming db is your connection object. As Federico wrote here, the meaning of n is:\n0 -> autocommit\n1 -> read committed\n2 -> serialized (but not officially supported by pg)\n3 -> serialized\n\nAs documented here, psycopg2.extensions gives you symbolic constants for the purpose:\nSetting transaction isolation levels\n====================================\n\npsycopg2 connection objects hold informations about the PostgreSQL `transaction\nisolation level`_. The current transaction level can be read from the\n`.isolation_level` attribute. The default isolation level is ``READ\nCOMMITTED``. A different isolation level con be set through the\n`.set_isolation_level()` method. The level can be set to one of the following\nconstants, defined in `psycopg2.extensions`:\n\n`ISOLATION_LEVEL_AUTOCOMMIT`\n No transaction is started when command are issued and no\n `.commit()`/`.rollback()` is required. Some PostgreSQL command such as\n ``CREATE DATABASE`` can't run into a transaction: to run such command use\n `.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)`.\n\n`ISOLATION_LEVEL_READ_COMMITTED`\n This is the default value. A new transaction is started at the first\n `.execute()` command on a cursor and at each new `.execute()` after a\n `.commit()` or a `.rollback()`. The transaction runs in the PostgreSQL\n ``READ COMMITTED`` isolation level.\n\n`ISOLATION_LEVEL_SERIALIZABLE`\n Transactions are run at a ``SERIALIZABLE`` isolation level.\n\n\n.. _transaction isolation level: \n http://www.postgresql.org/docs/8.1/static/transaction-iso.html\n\n", "The BEGIN with python standard DB API is always implicit. When you start working with the database the driver issues a BEGIN and after any COMMIT or ROLLBACK another BEGIN is issued. A python DB API compliant with the specification should always work this way (not only the postgresql).\nYou can change this setting the isolation level to autocommit with db.set_isolation_level(n) as pointed by Alex Martelli.\nAs Tebas said the begin is implicit but not executed until an SQL is executed, so if you don't execute any SQL, the session is not in a transaction.\n", "I prefer to explicitly see where my transactions are : \n\ncursor.execute(\"BEGIN\")\ncursor.execute(\"COMMIT\")\n\n" ]
[ 34, 18, 9 ]
[]
[]
[ "database", "postgresql", "python" ]
stackoverflow_0001219326_database_postgresql_python.txt
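Putting the answers together — a sketch of a typical psycopg2 transaction with the isolation level set explicitly; the connection string and SQL statements are placeholders.

import psycopg2
from psycopg2 import extensions

db = psycopg2.connect('dbname=test')
db.set_isolation_level(extensions.ISOLATION_LEVEL_SERIALIZABLE)
curs = db.cursor()
try:
    curs.execute("SELECT ... FOR UPDATE")   # the implicit BEGIN happens here
    curs.execute("UPDATE ...")
    db.commit()
except Exception:
    db.rollback()
    raise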
Q: System standard sound in Python How do I play standard system sounds from a Python script? I'm writing a GUI program in wxPython that needs to beep on events to attract the user's attention, maybe there are functions in wxPython I can utilize? A: on windows you could use winsound and I suppose curses.beep on Unix. A: from the documentation, you could use wx.Bell() function (not tested though) A: From the documentation: wxTopLevelWindow::RequestUserAttention void RequestUserAttention(int flags = wxUSER_ATTENTION_INFO) Use a system-dependent way to attract users attention to the window when it is in background. flags may have the value of either wxUSER_ATTENTION_INFO (default) or wxUSER_ATTENTION_ERROR which results in a more drastic action. When in doubt, use the default value. Note that this function should normally be only used when the application is not already in foreground. This function is currently implemented for Win32 where it flashes the window icon in the taskbar, and for wxGTK with task bars supporting it.
System standard sound in Python
How do I play standard system sounds from a Python script? I'm writing a GUI program in wxPython that needs to beep on events to attract the user's attention, maybe there are functions in wxPython I can utilize?
[ "on windows you could use winsound and I suppose curses.beep on Unix.\n", "from the documentation, you could use wx.Bell() function (not tested though)\n", "From the documentation:\n\nwxTopLevelWindow::RequestUserAttention\nvoid RequestUserAttention(int flags =\n wxUSER_ATTENTION_INFO)\nUse a system-dependent way to attract\n users attention to the window when it\n is in background.\nflags may have the value of either\n wxUSER_ATTENTION_INFO (default) or\n wxUSER_ATTENTION_ERROR which results\n in a more drastic action. When in\n doubt, use the default value.\nNote that this function should\n normally be only used when the\n application is not already in\n foreground.\nThis function is currently implemented\n for Win32 where it flashes the window\n icon in the taskbar, and for wxGTK\n with task bars supporting it.\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "audio", "python", "system_sounds", "wxpython" ]
stackoverflow_0001265599_audio_python_system_sounds_wxpython.txt
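A small sketch combining the suggestions — wx.Bell() when a wx app is already running, with a platform-specific fallback; as the answers themselves hedge, this is untested on every platform.

import sys
import wx

wx.Bell()                      # system default beep (inside a running wx app)

if sys.platform == 'win32':
    import winsound
    winsound.MessageBeep()     # standard Windows alert sound
else:
    sys.stdout.write('\a')     # ASCII bell on most Unix terminals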
Q: Changing height of an object in wxPython How to change only the height of an object in wxPython, leaving its width automatic? In my case it's a TextCtrl. How to make the height of the window available for change and lock the width? A: For the width or height to be automatically determined based on context you use for it the value of -1, for example (-1, 100) for a height of 100 and automatic width. The default size for controls is usually (-1, -1). If a width or height is specified and the sizer item for the control doesn't have wx.EXPAND flag set (note that even if this flag is set some sizers won't expand in both directions by default) you might call it "locked" as it won't change that dimension. Make sure to study the workings of sizers in depth as it is one of the most important things in GUI design.
Changing height of an object in wxPython
How to change only the height of an object in wxPython, leaving its width automatic? In my case it's a TextCtrl. How to make the height of the window available for change and lock the width?
[ "For the width or height to be automatically determined based on context you use for it the value of -1, for example (-1, 100) for a height of 100 and automatic width.\nThe default size for controls is usually (-1, -1).\nIf a width or height is specified and the sizer item for the control doesn't have wx.EXPAND flag set (note that even if this flag is set some sizers won't expand in both directions by default) you might call it \"locked\" as it won't change that dimension.\nMake sure to study the workings of sizers in depth as it is one of the most important things in GUI design.\n" ]
[ 6 ]
[]
[]
[ "python", "size", "wxpython" ]
stackoverflow_0001265821_python_size_wxpython.txt
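Applied to the TextCtrl from the question — a sketch assuming a vertical wx.BoxSizer, where proportion 0 keeps the 100-pixel height fixed and wx.EXPAND stretches only the width; panel and sizer are assumed to exist.

import wx

text = wx.TextCtrl(panel, size=(-1, 100), style=wx.TE_MULTILINE)
sizer.Add(text, 0, wx.EXPAND)   # 0 = no vertical growth; EXPAND fills the width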
Q: Python non-trivial C++ Extension I have fairly large C++ library with several sub-libraries that support it, and I need to turn the whole thing into a python extension. I'm using distutils because it needs to be cross-platform, but if there's a better tool I'm open to suggestions. Is there a way to make distutils first compile the sub-libraries, and link them in when it creates an extension from the main library? A: I do just this with a massive C++ library in our product. There are several tools out there that can help you automate the task of writing bindings: the most popular is SWIG, which has been around a while, is used in lots of projects, and generally works very well. The biggest thing against SWIG (in my opinion) is that the C++ codebase of SWIG itself is really rather crufty to put it mildly. It was written before the STL and has it's own semi-dynamic type system which is just old and creaky now. This won't matter much unless you ever have to get stuck in and make some modifications to the core (I once tried to add doxygen -> docstring conversion) but if you ever do, good luck to you! People also say that SWIG generated code is not that efficient, which may be true but for me I've never found the SWIG calls themselves to be enough of a bottleneck to worry about it. There are other tools you can use if SWIG doesn't float your boat: boost.python is also popular and could be a good option if you already use boost libraries in your C++ code. The downside is that it is heavy on compile times since it is pretty much all c++ template based. Both these tools require you to do some work up-front in order to define what will be exposed and quite how it will be done. For SWIG you provide interface files which are like C++ headers but stripped down, and with some extra directives to tell SWIG how to translate complex types etc. Writing these interfaces can be tedious, so you may want to look at something like pygccxml to help you auto-generate them for you. The author of that package actually wrote another extension which you might like: py++. This package does two things: it can autogenerate binding definitions that can then be fed to boost.python to generate python bindings: basically it is the full solution for most people. You might want to start there if you no particulrly special or difficult requirements to meet. Some other questions that might prove useful as a reference: Extending python - to swig or not to swig SWIG vs CTypes Extending Python with C/C++ You may also find this comparison of binding generation tools for Python handy. As Alex points out in the comments though, its rather old now but at least gives you some idea of the landscape... In terms of how to drive the build, you may want to look at a more advanced built tool that distutils: if you want to stick with Python I would highly recommend Waf as a framework (others will tell you SCons is the way to go, but believe me it's slow as hell: I've been there and back already!)...it takes a little learning, but when you get your head around it is extremely powerful. And since it's pure Python it will integrate perfectly with any other Python code you have as part of your build process (say for example you use Py++ in the end)...
Python non-trivial C++ Extension
I have a fairly large C++ library with several sub-libraries that support it, and I need to turn the whole thing into a Python extension. I'm using distutils because it needs to be cross-platform, but if there's a better tool I'm open to suggestions. Is there a way to make distutils first compile the sub-libraries, and link them in when it creates an extension from the main library?
[ "I do just this with a massive C++ library in our product. There are several tools out there that can help you automate the task of writing bindings: the most popular is SWIG, which has been around a while, is used in lots of projects, and generally works very well. \nThe biggest thing against SWIG (in my opinion) is that the C++ codebase of SWIG itself is really rather crufty to put it mildly. It was written before the STL and has its own semi-dynamic type system which is just old and creaky now. This won't matter much unless you ever have to get stuck in and make some modifications to the core (I once tried to add doxygen -> docstring conversion) but if you ever do, good luck to you! People also say that SWIG generated code is not that efficient, which may be true but for me I've never found the SWIG calls themselves to be enough of a bottleneck to worry about it.\nThere are other tools you can use if SWIG doesn't float your boat: boost.python is also popular and could be a good option if you already use boost libraries in your C++ code. The downside is that it is heavy on compile times since it is pretty much all c++ template based.\nBoth these tools require you to do some work up-front in order to define what will be exposed and quite how it will be done. For SWIG you provide interface files which are like C++ headers but stripped down, and with some extra directives to tell SWIG how to translate complex types etc. Writing these interfaces can be tedious, so you may want to look at something like pygccxml to help you auto-generate them for you.\nThe author of that package actually wrote another extension which you might like: py++. This package does two things: it can autogenerate binding definitions that can then be fed to boost.python to generate python bindings: basically it is the full solution for most people. You might want to start there if you have no particularly special or difficult requirements to meet.\nSome other questions that might prove useful as a reference:\n\nExtending python - to swig or not to swig\nSWIG vs CTypes\nExtending Python with C/C++\n\nYou may also find this comparison of binding generation tools for Python handy. As Alex points out in the comments though, it's rather old now but at least gives you some idea of the landscape...\nIn terms of how to drive the build, you may want to look at a more advanced build tool than distutils: if you want to stick with Python I would highly recommend Waf as a framework (others will tell you SCons is the way to go, but believe me it's slow as hell: I've been there and back already!)...it takes a little learning, but when you get your head around it is extremely powerful. And since it's pure Python it will integrate perfectly with any other Python code you have as part of your build process (say for example you use Py++ in the end)...\n" ]
[ 10 ]
[]
[]
[ "c++", "distutils", "py++", "python", "swig" ]
stackoverflow_0001266570_c++_distutils_py++_python_swig.txt
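On the distutils part of the question: stock distutils can compile helper libraries first through its build_clib command and then link the extension against them. A sketch with made-up names — sources and paths are placeholders, and build_clib builds the sub-libraries as static libs.

from distutils.core import setup, Extension

setup(
    name='mylib',
    # build_clib compiles these first, as static helper libraries...
    libraries=[
        ('sub_a', {'sources': ['sub/a1.cpp', 'sub/a2.cpp']}),
        ('sub_b', {'sources': ['sub/b1.cpp']}),
    ],
    # ...then build_ext links the extension against them by name
    ext_modules=[
        Extension('mylib',
                  sources=['mylib_module.cpp'],
                  libraries=['sub_a', 'sub_b']),
    ],
)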
Q: How to show hidden autofield in django formset A Django autofield when displayed using a formset is hidden by default. What would be the best way to show it? At the moment, the model is declared as, class MyModel: locid = models.AutoField(primary_key=True) ... When this is rendered using Django formsets, class MyModelForm(ModelForm): class Meta: model = MyModel fields = ('locid', 'name') it shows up on the page as, <input id="id_form-0-locid" type="hidden" value="707" name="form-0-locid"/> Thanks. Edit I create the formset like this - LocFormSet = modelformset_factory(MyModel) pformset = LocFormSet(request.POST, request.FILES, queryset=MyModel.objects.order_by('name')) Second Edit Looks like I'm not using the custom form class I defined there, so the question needs slight modification.. How would I create a formset from a custom form (which will show a hidden field), as well as use a custom queryset? At the moment, I can either inherit from a BaseModelFormSet class and use a custom query set, or I can use the ModelForm class to add a custom field to a form. Is there a way to do both with a formset? Third Edit I'm now using, class MyModelForm(ModelForm): def __init__(self, *args, **kwargs): super(MyModelForm, self).__init__(*args, **kwargs) locid = forms.IntegerField(min_value = 1, required=True) self.fields['locid'].widget.attrs["type"] = 'visible' self.queryset = MyModel.objects.order_by('name') class Meta: model = MyModel fields = ('locid', 'name') LocFormSet = modelformset_factory(MyModel, form = MyModelForm) pformset = LocFormSet() But this still doesn't Show locid Use the custom query that was specified. A: Try changing the default field type: from django import forms class MyModelForm(ModelForm): locid = forms.IntegerField(min_value=1, required=True) class Meta: model = MyModel fields = ('locid', 'name') EDIT: Tested and works... A: As you say, you are not using the custom form you have defined. This is because you aren't passing it in anywhere, so Django can't know about it. The solution is simple - just pass the custom form class into modelformset_factory: LocFormSet = modelformset_factory(MyModel, form=MyModelForm) Edit in response to update 3: Firstly, you have the redefinition for locid in the wrong place - it needs to be at the class level, not inside the __init__. Secondly, putting the queryset inside the form won't do anything at all - forms don't know about querysets. You should go back to what you were doing before, passing it in as a parameter when you instantiate the formset. (Alternatively, you could define a custom formset, but that seems like overkill.) class MyModelForm(ModelForm): locid = forms.IntegerField(min_value=1, required=True) def __init__(self, *args, **kwargs): super(MyModelForm, self).__init__(*args, **kwargs) self.fields['locid'].widget.attrs["type"] = 'visible' class Meta: model = MyModel fields = ('locid', 'name') LocFormSet = modelformset_factory(MyModel, form = MyModelForm) pformset = LocFormSet(request.POST, request.FILES, queryset=MyModel.objects.order_by('name')) A: Okay, none of the approaches above worked for me. I solved this issue from the template side, finally. There is a ticket filed (http://code.djangoproject.com/ticket/10427), which adds a "value" option to a template variable for a form. For instance, it allows, {{form.locid.value}} to be shown. This is available as a patch, which can be installed in the SVN version of django using "patch -p0 file.patch" Remember, the {{form.locid.value}} variable will be used in conjunction with the invisible form - otherwise, the submit and save operations for the formset will crash. This is Not the same as {{form.locid.data}} - as is explained in the ticket referred to above. A: The reason that the autofield is hidden, is that both BaseModelFormSet and BaseInlineFormSet override that field in add_field. The way to fix it is to create your own formset and override add_field without calling super. Also you don't have to explicitly define the primary key. You have to pass the formset to modelformset_factory: LocFormSet = modelformset_factory(MyModel, formset=VisiblePrimaryKeyFormset) This is in the formset class: from django.forms.models import BaseInlineFormSet, BaseModelFormSet, IntegerField from django.forms.formsets import BaseFormSet class VisiblePrimaryKeyFormset(BaseModelFormSet): def add_fields(self, form, index): self._pk_field = pk = self.model._meta.pk if form.is_bound: pk_value = form.instance.pk else: try: pk_value = self.get_queryset()[index].pk except IndexError: pk_value = None form.fields[self._pk_field.name] = IntegerField( initial=pk_value, required=True) #or any other field you would like to display the pk in BaseFormSet.add_fields(self, form, index) # call baseformset which does not modify your primary key field
How to show hidden autofield in django formset
A Django autofield when displayed using a formset is hidden by default. What would be the best way to show it? At the moment, the model is declared as, class MyModel: locid = models.AutoField(primary_key=True) ... When this is rendered using Django formsets, class MyModelForm(ModelForm): class Meta: model = MyModel fields = ('locid', 'name') it shows up on the page as, <input id="id_form-0-locid" type="hidden" value="707" name="form-0-locid"/> Thanks. Edit I create the formset like this - LocFormSet = modelformset_factory(MyModel) pformset = LocFormSet(request.POST, request.FILES, queryset=MyModel.objects.order_by('name')) Second Edit Looks like I'm not using the custom form class I defined there, so the question needs slight modification.. How would I create a formset from a custom form (which will show a hidden field), as well as use a custom queryset? At the moment, I can either inherit from a BaseModelFormSet class and use a custom query set, or I can use the ModelForm class to add a custom field to a form. Is there a way to do both with a formset? Third Edit I'm now using, class MyModelForm(ModelForm): def __init__(self, *args, **kwargs): super(MyModelForm, self).__init__(*args, **kwargs) locid = forms.IntegerField(min_value = 1, required=True) self.fields['locid'].widget.attrs["type"] = 'visible' self.queryset = MyModel.objects.order_by('name') class Meta: model = MyModel fields = ('locid', 'name') LocFormSet = modelformset_factory(MyModel, form = MyModelForm) pformset = LocFormSet() But this still doesn't Show locid Use the custom query that was specified.
[ "Try changing the default field type:\nfrom django import forms\nclass MyModelForm(ModelForm):\n locid = forms.IntegerField(min_value=1, required=True)\n class Meta:\n model = MyModel\n fields = ('locid', 'name')\n\nEDIT: Tested and works...\n", "As you say, you are not using the custom form you have defined. This is because you aren't passing it in anywhere, so Django can't know about it.\nThe solution is simple - just pass the custom form class into modelformset_factory:\nLocFormSet = modelformset_factory(MyModel, form=MyModelForm) \n\nEdit in response to update 3:\nFirstly, you have the redefinition for locid in the wrong place - it needs to be at the class level, not inside the __init__.\nSecondly, putting the queryset inside the form won't do anything at all - forms don't know about querysets. You should go back to what you were doing before, passing it in as a parameter when you instantiate the formset. (Alternatively, you could define a custom formset, but that seems like overkill.)\nclass MyModelForm(ModelForm):\n locid = forms.IntegerField(min_value=1, required=True)\n\n def __init__(self, *args, **kwargs):\n super(MyModelForm, self).__init__(*args, **kwargs)\n self.fields['locid'].widget.attrs[\"type\"] = 'visible'\n class Meta:\n model = MyModel\n fields = ('locid', 'name')\n\nLocFormSet = modelformset_factory(MyModel, form = MyModelForm)\npformset = LocFormSet(request.POST, request.FILES,\n queryset=MyModel.objects.order_by('name')))\n\n", "Okay, none of the approaches above worked for me. I solved this issue from the template side, finally.\n\nThere is a ticket filed (http://code.djangoproject.com/ticket/10427), which adds a \"value\" option to a template variable for a form. For instance, it allows,\n{{form.locid.value}}\n\nto be shown. This is available as a patch, which can be installed in the SVN version of django using \"patch -p0 file.patch\"\n\nRemember, the {{form.locid.value}} variable will be used in conjunction with the invisible form - otherwise, the submit and save operations for the formset will crash.\nThis is Not the same as {{form.locid.data}} - as is explained in the ticket referred to above.\n\n", "The reason that the autofield is hidden, is that both BaseModelFormSet and BaseInlineFormSet override that field in add_field. The way to fix it is to create your own formset and override add_field without calling super. Also you don't have to explicitly define the primary key.\nyou have to pass the formset to modelformset_factory:\n LocFormSet = modelformset_factory(MyModel, \n formset=VisiblePrimaryKeyFormSet)\n\nThis is in the formset class:\nfrom django.forms.models import BaseInlineFormSet, BaseModelFormSet, IntegerField\nfrom django.forms.formsets import BaseFormSet\n\nclass VisiblePrimaryKeyFormset(BaseModelFormSet):\n def add_fields(self, form, index):\n self._pk_field = pk = self.model._meta.pk\n if form.is_bound:\n pk_value = form.instance.pk\n else:\n try:\n pk_value = self.get_queryset()[index].pk\n except IndexError:\n pk_value = None\n form.fields[self._pk_field.name] = IntegerField( initial=pk_value,\n required=True) #or any other field you would like to display the pk in\n BaseFormSet.add_fields(self, form, index) # call baseformset which does not modify your primary key field\n\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "django", "formset", "python" ]
stackoverflow_0000896153_django_formset_python.txt
Q: substituting in a file I use python 2.5, i like to replace certain variables in a txt file and write the complete data into new file. i wrote a program to do the above, from scipy import * import numpy from numpy import asarray from string import Template def Dat(Par): Par = numpy.asarray(Par) Par[0] = a1 Par[1] = a2 Par[2] = a3 Par[3] = a4 sTemplate=Template(open('/home/av/W/python/data.txt', 'r').read()).safe_substitute(Par) open('/home/av/W/python/data_new.txt' ,'w').write(sTemplate) Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] Dat(Init) when i executed the above *i obtained the error 'TypeError: 'function' object is unsubscriptable' 'data.txt' is a text file, i have placed $a1, $a2, $a3, $a4, i need to replace $a1 $a2 $a3 $a4 by 10.0 200.0 500.0 10.0 My constraints are i need to pass the values only by array like Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] please help me. is this error due to python 2.5 version? or any mistakes in program A: The error is here: Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] which was probably meant to be Init = numpy.asarray ([10.0, 200.0, 500.0, 10.0]) (note the swapped braces/parens). Since python found a "[" after "asarray" (which is a function), it throws an error, because you cannot subscribe (i.e. do something like x[17]) a function. A: Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] This is your problem. numpy.asarray is a function, and you are trying to use it as a list (hence the exception). Flip the brackets and parentheses and try that. A: The line Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] should almost certainly be Init = numpy.asarray([(10.0, 200.0, 500.0, 10.0)]) I believe that is what is causing your "'function' object is unsubscriptable" error A: from scipy import * import numpy from numpy import asarray from string import Template def Dat(Par): Par = numpy.asarray(Par) ParDict =dict(a1 = Par[0], a2 = Par[1],a3 = Par[2],a4 = Par[3]) sTemplate=Template(open('/home/av/W/python/data.txt', 'r').read()).safe_substitute(ParDict) open('/home/av/W/python/data_new.txt' ,'w').write(sTemplate) Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] Dat(Init) In this way its works fine.
substituting in a file
I use Python 2.5; I'd like to replace certain variables in a txt file and write the complete data into a new file. I wrote a program to do the above, from scipy import * import numpy from numpy import asarray from string import Template def Dat(Par): Par = numpy.asarray(Par) Par[0] = a1 Par[1] = a2 Par[2] = a3 Par[3] = a4 sTemplate=Template(open('/home/av/W/python/data.txt', 'r').read()).safe_substitute(Par) open('/home/av/W/python/data_new.txt' ,'w').write(sTemplate) Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] Dat(Init) When I executed the above I obtained the error 'TypeError: 'function' object is unsubscriptable'. 'data.txt' is a text file in which I have placed $a1, $a2, $a3, $a4; I need to replace $a1 $a2 $a3 $a4 by 10.0 200.0 500.0 10.0. My constraint is that I need to pass the values only by array, like Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)] Please help me. Is this error due to the Python 2.5 version, or are there mistakes in the program?
[ "The error is here:\nInit = numpy.asarray [(10.0, 200.0, 500.0, 10.0)]\n\nwhich was probably meant to be\nInit = numpy.asarray ([10.0, 200.0, 500.0, 10.0])\n\n(note the swapped braces/parens). Since python found a \"[\" after \"asarray\" (which is a function), it throws an error, because you cannot subscribe (i.e. do something like x[17]) a function.\n", "Init = numpy.asarray [(10.0, 200.0, 500.0, 10.0)]\n\nThis is your problem. numpy.asarray is a function, and you are trying to use it as a list (hence the exception). Flip the brackets and parentheses and try that. \n", "The line\nInit = numpy.asarray [(10.0, 200.0, 500.0, 10.0)]\n\nshould almost certainly be\nInit = numpy.asarray([(10.0, 200.0, 500.0, 10.0)])\n\nI believe that is what is causing your \"'function' object is unsubscriptable\" error\n", "from scipy import *\nimport numpy\nfrom numpy import asarray\nfrom string import Template\n\ndef Dat(Par):\n    Par = numpy.asarray(Par)\n    ParDict = dict(a1 = Par[0], a2 = Par[1], a3 = Par[2], a4 = Par[3])\n    sTemplate = Template(open('/home/av/W/python/data.txt', 'r').read()).safe_substitute(ParDict)\n    open('/home/av/W/python/data_new.txt', 'w').write(sTemplate)\n\nInit = numpy.asarray([10.0, 200.0, 500.0, 10.0])\nDat(Init)\n\nIn this way it works fine.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001261578_python.txt
Q: Combining two QMainWindows Good day pythonistas and the rest of the coding crowd, I have two QMainWindows designed and coded separately. I need to: display first on a button-press close the first window construct and display the second window using the arguments from the first I have tried to design a third class to control the flow but it does not understand my signal/slot attempt: QtCore.QObject.connect(self.firstWindow,QtCore.SIGNAL("destroyed()"),self.openSecondWindow) Oh gurus, would you enlighten me on some clever way or a witty hack to solve my hardships. Cheers. A: Answer: I had some trouble with connecting signals recently. I found that it worked when I removed the parentheses from the QtCore.SIGNAL. try changing this: QtCore.SIGNAL("destroyed()") to this: QtCore.SIGNAL("destroyed") Reference: This is because your are using the "old style" signals/slots according to Riverbank. Here's the reference to the docs. Specifically, this is the line you're looking for: QtCore.QObject.connect(a, QtCore.SIGNAL("PySig"), pyFunction) Also: Make sure your this.FirstWindow class has this line before your __init__(self...): __pyqtSignals__ = ( "destroyed" ) A: Well, I have given up on the control class (next time will make the control as the first thing and only after that make the windows) Instead have mated the windows by injecting the seconds' constructor seed into the body of the first one and then self.close() the young mother. So tragic.
Combining two QMainWindows
Good day pythonistas and the rest of the coding crowd, I have two QMainWindows designed and coded separately. I need to: display first on a button-press close the first window construct and display the second window using the arguments from the first I have tried to design a third class to control the flow but it does not understand my signal/slot attempt: QtCore.QObject.connect(self.firstWindow,QtCore.SIGNAL("destroyed()"),self.openSecondWindow) Oh gurus, would you enlighten me on some clever way or a witty hack to solve my hardships. Cheers.
[ "Answer:\nI had some trouble with connecting signals recently. I found that it worked when I removed the parentheses from the QtCore.SIGNAL.\ntry changing this:\nQtCore.SIGNAL(\"destroyed()\")\n\nto this:\nQtCore.SIGNAL(\"destroyed\")\n\nReference:\nThis is because your are using the \"old style\" signals/slots according to Riverbank. Here's the reference to the docs. Specifically, this is the line you're looking for:\nQtCore.QObject.connect(a, QtCore.SIGNAL(\"PySig\"), pyFunction)\n\nAlso:\nMake sure your this.FirstWindow class has this line before your __init__(self...):\n__pyqtSignals__ = ( \"destroyed\" )\n\n", "Well, I have given up on the control class (next time will make the control as the first thing and only after that make the windows)\nInstead have mated the windows by injecting the seconds' constructor seed into the body of the first one and then self.close() the young mother. So tragic.\n" ]
[ 1, 0 ]
[]
[]
[ "pyqt", "python", "signals_slots" ]
stackoverflow_0001265646_pyqt_python_signals_slots.txt
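For the record, the connect in the question is syntactically fine; the likelier snag is that destroyed() only fires when the underlying C++ object is deleted, and closing a QMainWindow does not delete it by default. A hedged sketch of one way to make the original control-class plan work:

from PyQt4 import QtCore

# deleting on close makes destroyed() actually fire
self.firstWindow.setAttribute(QtCore.Qt.WA_DeleteOnClose)
QtCore.QObject.connect(self.firstWindow,
                       QtCore.SIGNAL('destroyed()'),
                       self.openSecondWindow)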
Q: How do I calculate the numeric value of a string with unicode components in python? Along the lines of my previous question, How do I convert unicode characters to floats in Python? , I would like to find a more elegant solution to calculating the value of a string that contains unicode numeric values. For example, take the strings "1⅕" and "1 ⅕". I would like these to resolve to 1.2 I know that I can iterate through the string by character, check for unicodedata.category(x) == "No" on each character, and convert the unicode characters by unicodedata.numeric(x). I would then have to split the string and sum the values. However, this seems rather hacky and unstable. Is there a more elegant solution for this in Python? A: I think this is what you want... import unicodedata def eval_unicode(s): #sum all the unicode fractions u = sum(map(unicodedata.numeric, filter(lambda x: unicodedata.category(x)=="No",s))) #eval the regular digits (with optional dot) as a float, or default to 0 n = float("".join(filter(lambda x:x.isdigit() or x==".", s)) or 0) return n+u or the "comprehensive" solution, for those who prefer that style: import unicodedata def eval_unicode(s): #sum all the unicode fractions u = sum(unicodedata.numeric(i) for i in s if unicodedata.category(i)=="No") #eval the regular digits (with optional dot) as a float, or default to 0 n = float("".join(i for i in s if i.isdigit() or i==".") or 0) return n+u But beware, there are many unicode values that seem to not have a numeric value assigned in python (for example ⅜⅝ don't work... or maybe is just a matter with my keyboard xD). Another note on the implementation: it's "too robust", it will work even will malformed numbers like "123½3 ½" and will eval it to 1234.0... but it won't work if there are more than one dots. A: >>> import unicodedata >>> b = '10 ⅕' >>> int(b[:-1]) + unicodedata.numeric(b[-1]) 10.2 define convert_dubious_strings(s): try: return int(s) except UnicodeEncodeError: return int(b[:-1]) + unicodedata.numeric(b[-1]) and if it might have no integer part than another try-except sub-block needs to be added. A: This might be sufficient for you, depending on the strange edge cases you want to deal with: val = 0 for c in my_unicode_string: if unicodedata.category(unichr(c)) == 'No': cval = unicodedata.numeric(c) elif c.isdigit(): cval = int(c) else: continue if cval == int(cval): val *= 10 val += cval print val Whole digits are assumed to be another digit in the number, fractional characters are assumed to be fractions to add to the number. Doesn't do the right thing with spaces between digits, repeated fractions, etc.
How do I calculate the numeric value of a string with unicode components in python?
Along the lines of my previous question, How do I convert unicode characters to floats in Python? , I would like to find a more elegant solution to calculating the value of a string that contains unicode numeric values. For example, take the strings "1⅕" and "1 ⅕". I would like these to resolve to 1.2 I know that I can iterate through the string by character, check for unicodedata.category(x) == "No" on each character, and convert the unicode characters by unicodedata.numeric(x). I would then have to split the string and sum the values. However, this seems rather hacky and unstable. Is there a more elegant solution for this in Python?
[ "I think this is what you want...\nimport unicodedata\ndef eval_unicode(s):\n #sum all the unicode fractions\n u = sum(map(unicodedata.numeric, filter(lambda x: unicodedata.category(x)==\"No\",s)))\n #eval the regular digits (with optional dot) as a float, or default to 0\n n = float(\"\".join(filter(lambda x:x.isdigit() or x==\".\", s)) or 0)\n return n+u\n\nor the \"comprehensive\" solution, for those who prefer that style:\nimport unicodedata\ndef eval_unicode(s):\n #sum all the unicode fractions\n u = sum(unicodedata.numeric(i) for i in s if unicodedata.category(i)==\"No\")\n #eval the regular digits (with optional dot) as a float, or default to 0\n n = float(\"\".join(i for i in s if i.isdigit() or i==\".\") or 0)\n return n+u\n\nBut beware, there are many unicode values that seem to not have a numeric value assigned in python (for example ⅜⅝ don't work... or maybe is just a matter with my keyboard xD).\nAnother note on the implementation: it's \"too robust\", it will work even will malformed numbers like \"123½3 ½\" and will eval it to 1234.0... but it won't work if there are more than one dots.\n", ">>> import unicodedata\n>>> b = '10 ⅕'\n>>> int(b[:-1]) + unicodedata.numeric(b[-1])\n10.2\n\ndefine convert_dubious_strings(s):\n try:\n return int(s)\n except UnicodeEncodeError:\n return int(b[:-1]) + unicodedata.numeric(b[-1])\n\nand if it might have no integer part than another try-except sub-block needs to be added.\n", "This might be sufficient for you, depending on the strange edge cases you want to deal with:\nval = 0\nfor c in my_unicode_string:\n if unicodedata.category(unichr(c)) == 'No':\n cval = unicodedata.numeric(c)\n elif c.isdigit():\n cval = int(c)\n else:\n continue\n if cval == int(cval):\n val *= 10\n val += cval\nprint val\n\nWhole digits are assumed to be another digit in the number, fractional characters are assumed to be fractions to add to the number. Doesn't do the right thing with spaces between digits, repeated fractions, etc.\n" ]
[ 2, 1, 0 ]
[ "I think you'll need a regular expression, explicitly listing the characters that you want to support. Not all numerical characters are suitable for the kind of composition that you envision - for example, what should be the numerical value of\nu\"4\\N{CIRCLED NUMBER FORTY TWO}2\\N{SUPERSCRIPT SIX}\"\n\n???\nDo \nfor i in range(65536):\n if unicodedata.category(unichr(i)) == 'No':\n print hex(i), unicodedata.name(unichdr(i))\n\nand go through the list defining which ones you really want to support.\n" ]
[ -1 ]
[ "floating_point", "python", "string", "unicode" ]
stackoverflow_0001267314_floating_point_python_string_unicode.txt
Q: What is Python's "built-in method acquire"? How can I speed it up? I'm writing a Python program with a lot of file access. It's running surprisingly slowly, so I used cProfile to find out what was taking the time. It seems there's a lot of time spent in what Python is reporting as "{built-in method acquire}". I have no idea what this method is. What is it, and how can I speed up my program? A: Without seeing your code, it is hard to guess. But to guess I would say that it is the threading.Lock.acquire method. Part of your code is trying to get a threading lock, and it is waiting until it has got it. There may be simple ways of fixing it by restructuring your file access, not locking, using blocking=False, or even not using threads at all. But again, without seeing your code, it is hard to guess. A: Using threads for IO is a bad idea. Threading won't make your program wait faster. You can achieve better results by using asynchronous I/O and an event loop; Post more information about your program, and why you are using threads. A: you want to look for cpu used, not for "total time used" from within that method--that might help. Sorry I don't use python but that's how it is for me in ruby :) -r
What is Python's "built-in method acquire"? How can I speed it up?
I'm writing a Python program with a lot of file access. It's running surprisingly slowly, so I used cProfile to find out what was taking the time. It seems there's a lot of time spent in what Python is reporting as "{built-in method acquire}". I have no idea what this method is. What is it, and how can I speed up my program?
[ "Without seeing your code, it is hard to guess. But to guess I would say that it is the threading.Lock.acquire method. Part of your code is trying to get a threading lock, and it is waiting until it has got it.\nThere may be simple ways of fixing it by\n\nrestructuring your file access,\nnot locking,\nusing blocking=False,\nor even not using threads at all.\n\nBut again, without seeing your code, it is hard to guess.\n", "Using threads for IO is a bad idea. Threading won't make your program wait faster. You can achieve better results by using asynchronous I/O and an event loop; Post more information about your program, and why you are using threads.\n", "you want to look for cpu used, not for \"total time used\" from within that method--that might help. Sorry I don't use python but that's how it is for me in ruby :)\n-r\n" ]
[ 6, 0, 0 ]
[]
[]
[ "optimization", "performance", "profiling", "python" ]
stackoverflow_0000530127_optimization_performance_profiling_python.txt
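If the profile really is dominated by threading.Lock.acquire, sorting the stats by cumulative time usually shows which caller is waiting on the lock. A minimal sketch, where myprogram.main is a hypothetical entry point:

import cProfile
import pstats

# profile the program and dump the raw stats to a file
cProfile.run('import myprogram; myprogram.main()', 'profile.out')

# callers with the largest cumulative time point at the contended code
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(20)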
Q: Python standard module for emulating geometric points Is there a standard class in Python to emulate a geometric point that includes coordinates and a value, including arithmetic operations between the coordinates? A: If you just want standard matrix arithmetic operations for your coordinates, try numpy's array type.
Python standard module for emulating geometric points
Is there a standard class in Python to emulate a geometric point that includes coordinates and a value, including arithmetic operations between the coordinates?
[ "If you just want standard matrix arithmetic operations for your coordinates, try numpy's array type.\n" ]
[ 1 ]
[]
[]
[ "geometry", "python" ]
stackoverflow_0001267968_geometry_python.txt
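A minimal sketch of the numpy suggestion, assuming numpy is installed (the coordinates are made up):

import numpy as np

p = np.array([1.0, 2.0])   # a 2-D point
q = np.array([4.0, 6.0])

print p + q                  # [ 5.  8.] -- elementwise arithmetic between coordinates
print q - p                  # [ 3.  4.]
print np.linalg.norm(q - p)  # 5.0, the distance between the two points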
Q: Standard Solution for Decoding Additive Numbers From the Oracle docs. A number representing one or more statistics class. The following class numbers are additive: 1 - User 2 - Redo 4 - Enqueue 8 - Cache 16 - OS 32 - Real Application Clusters 64 - SQL 128 - Debug Is there a standard solution for taking say 22 and decoding that into 16, 4, and 2? My first guess would be to create an object which holds every possible combination and use that as a lookup. Is there a better solution using binary or something? Preferred solution would be in Python. (disclaimer: This is not homework.) A: Each of those values corresponds to a single bit. So use the binary. 1<<0 - 1 - User 1<<1 - 2 - Redo 1<<2 - 4 - Enqueue 1<<3 - 8 - Cache 1<<4 - 16 - OS 1<<5 - 32 - Real Application Clusters 1<<6 - 64 - SQL 1<<7 - 128 - Debug Use & to test for each bit. def decode(value): readable = [] flags = ['User', 'Redo', 'Enqueue', 'Cache', 'OS', 'Real Application Clusters', 'SQL', 'Debug'] for i, flag in enumerate(flags): if value & (1<<i): readable.append(flags[i]) return readable print decode(22) A: You want to use binary operations to decode the original. The following code actually returns the correct strings: >>> FLAGS = ('User', 'Redo', 'Enqueue', 'Cache', 'OS', ... 'Real Application Clusters', 'SQL', 'Debug') >>> def getFlags(value): ... flags = [] ... for i, flag in enumerate(FLAGS): ... if value & (1 << i): ... flags.append(flag) ... return flags ... >>> print getFlags(22) ['Redo', 'Enqueue', 'OS'] If you really just want the constants: >>> def binaryDecomposition(value): ... return [1 << i for i in xrange(len(FLAGS)) if value & (1 << i)] ... >>> print binaryDecomposition(22) [2, 4, 16]
Standard Solution for Decoding Additive Numbers
From the Oracle docs. A number representing one or more statistics class. The following class numbers are additive: 1 - User 2 - Redo 4 - Enqueue 8 - Cache 16 - OS 32 - Real Application Clusters 64 - SQL 128 - Debug Is there a standard solution for taking say 22 and decoding that into 16, 4, and 2? My first guess would be to create an object which holds every possible combination and use that as a lookup. Is there a better solution using binary or something? Preferred solution would be in Python. (disclaimer: This is not homework.)
[ "Each of those values corresponds to a single bit. So use the binary.\n1<<0 - 1 - User\n1<<1 - 2 - Redo\n1<<2 - 4 - Enqueue\n1<<3 - 8 - Cache\n1<<4 - 16 - OS\n1<<5 - 32 - Real Application Clusters\n1<<6 - 64 - SQL\n1<<7 - 128 - Debug\n\nUse & to test for each bit.\ndef decode(value):\n readable = []\n flags = ['User', 'Redo', 'Enqueue', 'Cache', 'OS',\n 'Real Application Clusters', 'SQL', 'Debug']\n for i, flag in enumerate(flags):\n if value & (1<<i):\n readable.append(flags[i])\n return readable\n\nprint decode(22)\n\n", "You want to use binary operations to decode the original. The following code actually returns the correct strings:\n>>> FLAGS = ('User', 'Redo', 'Enqueue', 'Cache', 'OS',\n... 'Real Application Clusters', 'SQL', 'Debug')\n>>> def getFlags(value):\n... flags = []\n... for i, flag in enumerate(FLAGS):\n... if value & (1 << i):\n... flags.append(flag)\n... return flags\n...\n>>> print getFlags(22)\n['Redo', 'Enqueue', 'OS']\n\nIf you really just want the constants:\n>>> def binaryDecomposition(value):\n... return [1 << i for i in xrange(len(FLAGS)) if value & (1 << i)]\n...\n>>> print binaryDecomposition(22)\n[2, 4, 16]\n\n" ]
[ 4, 4 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001268050_algorithm_python.txt
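Going the other way, the additive class numbers combine with bitwise OR, which makes a handy sanity check for the decode() function from the first answer:

USER, REDO, ENQUEUE, CACHE, OS = 1, 2, 4, 8, 16

value = REDO | ENQUEUE | OS   # 2 + 4 + 16
print value                   # 22
print decode(value)           # ['Redo', 'Enqueue', 'OS']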
Q: Importing Model / Lib Class and calling from controller I'm new to Python and Pylons although experienced in PHP. I'm trying to write a model class which will act as my data access layer to my database (couchdb). My problem is simple. My model looks like this and is called models/BlogModel.py from couchdb import * class BlogModel: def getTitles(self): # code to get titles here def saveTitle(self): # code to save title here My controller is called controllers/main.py import logging from pylons import request, response, session, tmpl_context as c from pylons.controllers.util import abort, redirect_to from billion.lib.base import BaseController, render log = logging.getLogger(__name__) from billion.model import BlogModel class MainController(BaseController): def index(self): return render('/main.mako') In my index action, how do I access the method getTitles() in BlogModel? I've tried x = BlogModel() x.getTitles() But I get TypeError: 'module' object is not callable Also BlogModel.getTitles() results in AttributeError: 'module' object has no attribute 'getTitles' Is this down to the way I'm including the class? Can someone tell me the best way to do this? Thanks A: x = BlogModel.BlogModel() Or, more verbosely: After you did the import, you have an object in your namespace called 'BlogModel'. That object is the BlogModel module. (The module name comes from the filename.) Inside that module, there is a class object called 'BlogModel', which is what you were after. (The class name comes from the source code you wrote.) Instead of: from billion.model import BlogModel You could use: from billion.model.BlogModel import BlogModel then your x = BlogModel() would work.
Importing Model / Lib Class and calling from controller
I'm new to Python and Pylons although experienced in PHP. I'm trying to write a model class which will act as my data access layer to my database (couchdb). My problem is simple. My model looks like this and is called models/BlogModel.py from couchdb import * class BlogModel: def getTitles(self): # code to get titles here def saveTitle(self): # code to save title here My controller is called controllers/main.py import logging from pylons import request, response, session, tmpl_context as c from pylons.controllers.util import abort, redirect_to from billion.lib.base import BaseController, render log = logging.getLogger(__name__) from billion.model import BlogModel class MainController(BaseController): def index(self): return render('/main.mako') In my index action, how do I access the method getTitles() in BlogModel? I've tried x = BlogModel() x.getTitles() But I get TypeError: 'module' object is not callable Also BlogModel.getTitles() results in AttributeError: 'module' object has no attribute 'getTitles' Is this down to the way I'm including the class? Can someone tell me the best way to do this? Thanks
[ "x = BlogModel.BlogModel()\n\nOr, more verbosely:\nAfter you did the import, you have an object in your namespace called 'BlogModel'. That object is the BlogModel module. (The module name comes from the filename.) Inside that module, there is a class object called 'BlogModel', which is what you were after. (The class name comes from the source code you wrote.)\nInstead of:\nfrom billion.model import BlogModel\n\nYou could use:\nfrom billion.model.BlogModel import BlogModel\n\nthen your\nx = BlogModel()\n\nwould work.\n" ]
[ 2 ]
[]
[]
[ "model_view_controller", "pylons", "python" ]
stackoverflow_0001268432_model_view_controller_pylons_python.txt
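With the second import style from the answer, the index action in the controllers/main.py shown above can instantiate the class directly. A sketch only, since the question never shows what getTitles() returns (a list is assumed here):

from billion.model.BlogModel import BlogModel

class MainController(BaseController):
    def index(self):
        model = BlogModel()
        c.titles = model.getTitles()   # expose the titles to the template
        return render('/main.mako')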
Q: Does __str__() call the decode() method behind the scenes? It seems to me that the built-in methods __repr__ and __str__ have an important difference in their base definition. >>> t2 = u'\u0131\u015f\u0131k' >>> print t2 ışık >>> t2 Out[0]: u'\u0131\u015f\u0131k' t2.decode raises an error since t2 is a unicode string. >>> enc = 'utf-8' >>> t2.decode(enc) ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> File "C:\java\python\Python25\Lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128) __str__ raises an error as if the decode() function is being called: >>> t2.__str__() ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128) but __repr__ works without problem: >>> t2.__repr__() Out[0]: "u'\\u0131\\u015f\\u0131k'" Why does __str__ produce an error whereas __repr__ works properly? This small difference seems to cause a bug in one django application that I am working on. A: Basically, __str__ can only output ascii strings. Since t2 contains unicode codepoints above ascii, it cannot be represented with just a string. __repr__, on the other hand, tries to output the python code needed to recreate the object. You'll see that the output from repr(t2) (this syntax is preferred to t2.__repr__()) is exactly what you set t2 equal to up on the first line. The result from repr looks roughly like ['\\', 'u', '0', ...], which are all ascii values, but the output from str is trying to be [chr(0x0131), chr(0x015f), chr(0x0131), 'k'], most of which are above the range of characters acceptable in a python string. Generally, when dealing with django applications, you should use __unicode__ for everything, and never touch __str__. More info in the django documentation on strings. A: In general, calling str.__unicode__() or unicode.__str__() is a very bad idea, because bytes can't be safely converted to Unicode character points and vice versa. The exception is ASCII values, which are generally the same in all single-byte encodings. The problem is that you're using the wrong method for conversion. To convert unicode to str, you should use encode(): >>> t1 = u"\u0131\u015f\u0131k" >>> t1.encode("utf-8") '\xc4\xb1\xc5\x9f\xc4\xb1k' To convert str to unicode, use decode(): >>> t2 = '\xc4\xb1\xc5\x9f\xc4\xb1k' >>> t2.decode("utf-8") u'\u0131\u015f\u0131k' A: To add a bit of support to John's good answer: To understand the naming of the two methods encode() and decode(), you just have to see that Python considers unicode strings of the form u'...' to be in the reference format. You encode going from the reference format into another format (e.g. utf-8), and you decode from some other format to come to the reference format. The unicode format is always considered the "real thing" :-). A: Note that in Python 3, unicode is the default, and __str__() should always give you unicode.
Does __str__() call the decode() method behind the scenes?
It seems to me that the built-in methods __repr__ and __str__ have an important difference in their base definition. >>> t2 = u'\u0131\u015f\u0131k' >>> print t2 ışık >>> t2 Out[0]: u'\u0131\u015f\u0131k' t2.decode raises an error since t2 is a unicode string. >>> enc = 'utf-8' >>> t2.decode(enc) ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> File "C:\java\python\Python25\Lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128) __str__ raises an error as if the decode() function is being called: >>> t2.__str__() ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128) but __repr__ works without problem: >>> t2.__repr__() Out[0]: "u'\\u0131\\u015f\\u0131k'" Why does __str__ produce an error whereas __repr__ works properly? This small difference seems to cause a bug in one django application that I am working on.
[ "Basically, __str__ can only output ascii strings. Since t2 contains unicode codepoints above ascii, it cannot be represented with just a string. __repr__, on the other hand, tries to output the python code needed to recreate the object. You'll see that the output from repr(t2) (this syntax is preferred to t2.__repr_()) is exactly what you set t2 equal to up on the first line. The result from repr looks roughly like ['\\', 'u', '0', ...], which are all ascii values, but the output from str is trying to be [chr(0x0131), chr(0x015f), chr(0x0131), 'k'], most of which are above the range of characters acceptable in a python string. Generally, when dealing with django applications, you should use __unicode__ for everything, and never touch __str__.\nMore info in the django documentation on strings.\n", "In general, calling str.__unicode__() or unicode.__str__() is a very bad idea, because bytes can't be safely converted to Unicode character points and vice versa. The exception is ASCII values, which are generally the same in all single-byte encodings. The problem is that you're using the wrong method for conversion.\nTo convert unicode to str, you should use encode():\n>>> t1 = u\"\\u0131\\u015f\\u0131k\"\n>>> t1.encode(\"utf-8\")\n'\\xc4\\xb1\\xc5\\x9f\\xc4\\xb1k'\n\nTo convert str to unicode, use decode():\n>>> t2 = '\\xc4\\xb1\\xc5\\x9f\\xc4\\xb1k'\n>>> t2.decode(\"utf-8\")\nu'\\u0131\\u015f\\u0131k'\n\n", "To add a bit of support to John's good answer:\nTo understand the naming of the two methods encode() and decode(), you just have to see that Python considers unicode strings of the form u'...' to be in the reference format. You encode going from the reference format into another format (e.g. utf-8), and you decode from some other format to come to the reference format. The unicode format is always considered the \"real thing\" :-).\n", "Note that in Python 3, unicode is the default, and __str__() should always give you unicode.\n" ]
[ 7, 5, 2, 0 ]
[]
[]
[ "django", "python", "string", "unicode" ]
stackoverflow_0001267754_django_python_string_unicode.txt
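For the Django angle mentioned in the first answer, the usual fix is to define __unicode__ on the model and leave __str__ alone, so the implicit ascii encode never happens. A minimal sketch with a hypothetical model:

# -*- coding: utf-8 -*-
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=100)

    def __unicode__(self):   # Python 2 / Django 1.x convention
        return self.title    # already a unicode object, no ascii round-trip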
Q: Regex Matching Error I am new to Python (I don't have any programming training either), so please keep that in mind as I ask my question. I am trying to search a retrieved webpage and find all links using a specified pattern. I have done this successfully in other scripts, but I am getting an error that says raise error, v # invalid expression sre_constants.error: multiple repeat I have to admit I do not know why, but again, I am new to Python and Regular Expressions. However, even when I don't use patterns and use a specific link (just to test the matching), I do not believe I return any matches (nothing is sent to the window when I print match.group(0)). The link I tested is commented out below. Any ideas? It usually is easier for me to learn by example, but any advice you can give is greatly appreciated! Brock import urllib2 from BeautifulSoup import BeautifulSoup import re url = "http://forums.epicgames.com/archive/index.php?f-356-p-164.html" page = urllib2.urlopen(url).read() soup = BeautifulSoup(page) pattern = r'<a href="http://forums.epicgames.com/archive/index.php?t-([0-9]+).html">(.?+)</a> <i>((.?+) replies)' #pattern = r'href="http://forums.epicgames.com/archive/index.php?t-622233.html">Gears of War 2: Horde Gameplay</a> <i>(20 replies)' for match in re.finditer(pattern, page, re.S): print match(0) A: You need to escape the literal '?' and the literal '(' and ')' that you are trying to match. Also, instead of '?+', I think you're looking for the non-greedy matching provided by '+?'. More documentation here. For your case, try this: pattern = r'<a href="http://forums.epicgames.com/archive/index.php\?t-([0-9]+).html"> (.+?)</a> <i>\((.+?) replies\)' A: That means your regular expression has an error. (.?+)</a> <i>((.?+) What does ?+ mean? Both ? and + are meta characters that do not make sense right next to each other. Maybe you forgot to escape the '?' or something. A: As you're discovering, parsing arbitrary HTML is not easy to do correctly. That's what packages like Beautiful Soup do. Note, you're calling it in your script but then not using the results. Refer to its documentation here for examples of how to make your task a lot easier! A: To extend on what others wrote: .? means "one or zero of any character" .+ means "one or more of any character" As you can hopefully see, combining the two makes no sense; they are different and contradictory "repeat" characters. So, your error about "multiple repeats" is because you combined those two "repeat" characters in your regular expression. To fix it, just decide which one you actually meant to use, and delete the other. A: import urllib2 import re from BeautifulSoup import BeautifulSoup url = "http://forums.epicgames.com/archive/index.php?f-356-p-164.html" page = urllib2.urlopen(url).read() soup = BeautifulSoup(page) # Get all the links links = [str(match) for match in soup('a')] s = r'<a href="http://forums.epicgames.com/archive/index.php\?t-\d+.html">(.+?)</a>' r = re.compile(s) for link in links: m = r.match(link) if m: print m.groups(1)[0]
Regex Matching Error
I am new to Python (I don't have any programming training either), so please keep that in mind as I ask my question. I am trying to search a retrieved webpage and find all links using a specified pattern. I have done this successfully in other scripts, but I am getting an error that says raise error, v # invalid expression sre_constants.error: multiple repeat I have to admit I do not know why, but again, I am new to Python and Regular Expressions. However, even when I don't use patterns and use a specific link (just to test the matching), I do not believe I return any matches (nothing is sent to the window when I print match.group(0)). The link I tested is commented out below. Any ideas? It usually is easier for me to learn by example, but any advice you can give is greatly appreciated! Brock import urllib2 from BeautifulSoup import BeautifulSoup import re url = "http://forums.epicgames.com/archive/index.php?f-356-p-164.html" page = urllib2.urlopen(url).read() soup = BeautifulSoup(page) pattern = r'<a href="http://forums.epicgames.com/archive/index.php?t-([0-9]+).html">(.?+)</a> <i>((.?+) replies)' #pattern = r'href="http://forums.epicgames.com/archive/index.php?t-622233.html">Gears of War 2: Horde Gameplay</a> <i>(20 replies)' for match in re.finditer(pattern, page, re.S): print match(0)
[ "You need to escape the literal '?' and the literal '(' and ')' that you are trying to match.\nAlso, instead of '?+', I think you're looking for the non-greedy matching provided by '+?'.\nMore documentation here.\nFor your case, try this:\npattern = r'<a href=\"http://forums.epicgames.com/archive/index.php\\?t-([0-9]+).html\"> (.+?)</a> <i>\\((.+?) replies\\)'\n\n", "That means your regular expression has an error.\n(.?+)</a> <i>((.?+)\n\nWhat does ?+ mean? Both ? and + are meta characters that does not make sense right next to each other. Maybe you forgot to escape the '?' or something.\n", "As you're discovering, parsing arbitrary HTML is not easy to do correctly. That's what packages like Beautiful Soup do. Note, you're calling it in your script but then not using the results. Refer to its documentation here for examples of how to make your task a lot easier!\n", "To extend on what others wrote:\n.? means \"one or zero of any character\"\n.+ means \"one ore more of any character\"\nAs you can hopefully see, combining the two makes no sense; they are different and contradictory \"repeat\" characters. So, your error about \"multiple repeats\" is because you combined those two \"repeat\" characters in your regular expression. To fix it, just decide which one you actually meant to use, and delete the other.\n", "import urllib2\nimport re\nfrom BeautifulSoup import BeautifulSoup\n\nurl = \"http://forums.epicgames.com/archive/index.php?f-356-p-164.html\"\npage = urllib2.urlopen(url).read()\nsoup = BeautifulSoup(page)\n\n# Get all the links\nlinks = [str(match) for match in soup('a')]\n\ns = r'<a href=\"http://forums.epicgames.com/archive/index.php\\?t-\\d+.html\">(.+?)</a>' \nr = re.compile(s)\nfor link in links:\n m = r.match(link)\n if m:\n print m.groups(1)[0]\n\n" ]
[ 1, 1, 1, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001268761_python_regex.txt
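Beautiful Soup can also do the filtering itself, so only the interesting anchors come back at all. A sketch under the same assumptions as the question (Python 2 and BeautifulSoup 3):

import re
import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://forums.epicgames.com/archive/index.php?f-356-p-164.html"
page = urllib2.urlopen(url).read()
soup = BeautifulSoup(page)

# match only the thread links; note the escaped '?' and '.'
thread_re = re.compile(r'index\.php\?t-(\d+)\.html')
for a in soup.findAll('a', href=thread_re):
    print thread_re.search(a['href']).group(1), a.string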
Q: Really long query How do you write a long query? Is there a way to optimize it? I would write a complicated and long query: all_accepted_parts = acceptedFragment.objects.filter(fragmentID = fragment.objects.filter(categories = fragmentCategory.objects.filter(id=1))) but it doesn't work, I get: Error binding parameter 0 - probably unsupported type. I would be thankful for any hint on how I could optimize it, or of course how to solve it :) A: If it's not working, you can't optimize it. First make it work. At first glance, it seems that you have really mixed concepts about fields, relationships and equality/membership. First go through the docs, and build your query piece by piece on the python shell (likely from the inside out). Just a shot in the dark: all_accepted_parts = acceptedFragment.objects.filter(fragment__in = fragment.objects.filter(categories = fragmentCategory.objects.get(id=1))) or maybe: all_accepted_parts = acceptedFragment.objects.filter(fragment__in = fragment.objects.filter(categories = 1)) A: As others have said, we really need the models, and some explanation of what you're actually trying to achieve. But it looks like you want to do a related table lookup. Rather than getting all the related objects in a separate nested query, you should use Django's related model syntax to do the join within your query. Something like: acceptedFragment.objects.filter(fragment__categories__id = 1)
Really long query
How do you write a long query? Is there a way to optimize it? I would write a complicated and long query: all_accepted_parts = acceptedFragment.objects.filter(fragmentID = fragment.objects.filter(categories = fragmentCategory.objects.filter(id=1))) but it doesn't work, I get: Error binding parameter 0 - probably unsupported type. I would be thankful for any hint on how I could optimize it, or of course how to solve it :)
[ "If it's not working, you can't optimize it. First make it work.\nAt first glance, it seems that you have really mixed concepts about fields, relationships and equality/membership. First go thought the docs, and build your query piece by piece on the python shell (likely from the inside out).\nJust a shot in the dark:\nall_accepted_parts = acceptedFragment.objects.filter(fragment__in = fragment.objects.filter(categories = fragmentCategory.objects.get(id=1)))\n\nor maybe:\nall_accepted_parts = acceptedFragment.objects.filter(fragment__in = fragment.objects.filter(categories = 1))\n\n", "As others have said, we really need the models, and some explanation of what you're actually trying to achieve.\nBut it looks like you want to do a related table lookup. Rather than getting all the related objects in a separate nested query, you should use Django's related model syntax to do the join within your query.\nSomething like:\nacceptedFragment.objects.filter(fragment__categories__id = 1)\n\n" ]
[ 4, 4 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001268899_django_django_models_python.txt
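Spelled out with models along these lines (the field names are guesses, since the question does not show the models), the related-lookup answer becomes a single query that the database joins for you:

from django.db import models

# hypothetical models matching the names used in the question
class fragmentCategory(models.Model):
    name = models.CharField(max_length=50)

class fragment(models.Model):
    categories = models.ManyToManyField(fragmentCategory)

class acceptedFragment(models.Model):
    fragment = models.ForeignKey(fragment)

# one joined query instead of a nested queryset comparison
all_accepted_parts = acceptedFragment.objects.filter(fragment__categories__id=1)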