Dataset schema (one record per Stack Overflow question):
  content: string (85 to 101k characters)
  title: string (0 to 150 characters)
  question: string (15 to 48k characters)
  answers: list
  answers_scores: list
  non_answers: list
  non_answers_scores: list
  tags: list
  name: string (35 to 137 characters)
Q: CPython/Jython cross-implementation GUI I need to make some GUIs for a Jython application, but would like to minimize translation time should the project switch over to CPython. HTML or XUL are possibilities, but ones that I'd like to avoid. Any ideas on a cross-implementation pythonic GUI toolkit? A: From the discussion here, it doesn't look as though there is a single GUI toolkit that can be used across both Jython and CPython. There have been attempts like wx4j (wxWindows for Java) but these are not actively maintained.
CPython/Jython cross-implementation GUI
I need to make some GUIs for a Jython application, but would like to minimize translation time should the project switch over to CPython. HTML or XUL are possibilities, but ones that I'd like to avoid. Any ideas on a cross-implementation pythonic GUI toolkit?
[ "From the discussion here, it doesn't look as though there is a single GUI toolkit that can be used across both Jython and CPython. There have been attempts like wx4j (wxWindows for Java) but these are not actively maintained.\n" ]
[ 0 ]
[]
[]
[ "jython", "python", "user_interface" ]
stackoverflow_0001948925_jython_python_user_interface.txt
Q: permutations of two lists in python I have two lists like: list1 = ['square','circle','triangle'] list2 = ['red','green'] How can I create all permutations of these lists, like this: [ 'squarered', 'squaregreen', 'redsquare', 'greensquare', 'circlered', 'circlegreen', 'redcircle', 'greencircle', 'trianglered', 'trianglegreen', 'redtriangle', 'greentriangle' ] Can I use itertools for this? A: You want the itertools.product method, which will give you the Cartesian product of both lists. >>> import itertools >>> a = ['foo', 'bar', 'baz'] >>> b = ['x', 'y', 'z', 'w'] >>> for r in itertools.product(a, b): print r[0] + r[1] foox fooy fooz foow barx bary barz barw bazx bazy bazz bazw Your example asks for the bidirectional product (that is, you want 'xfoo' as well as 'foox'). To get that, just do another product and chain the results: >>> for r in itertools.chain(itertools.product(a, b), itertools.product(b, a)): ... print r[0] + r[1] A: >>> import itertools >>> map(''.join, itertools.chain(itertools.product(list1, list2), itertools.product(list2, list1))) ['squarered', 'squaregreen', 'circlered', 'circlegreen', 'trianglered', 'trianglegreen', 'redsquare', 'redcircle', 'redtriangle', 'greensquare', 'greencircle', 'greentriangle'] A: How about [x + y for x in list1 for y in list2] + [y + x for x in list1 for y in list2] Example IPython interaction: In [3]: list1 = ['square', 'circle', 'triangle'] In [4]: list2 = ['red', 'green'] In [5]: [x + y for x in list1 for y in list2] + [y + x for x in list1 for y in list2] Out[5]: ['squarered', 'squaregreen', 'circlered', 'circlegreen', 'trianglered', 'trianglegreen', 'redsquare', 'greensquare', 'redcircle', 'greencircle', 'redtriangle', 'greentriangle'] A: I think what you are looking for is the product of two lists, not the permutations: #!/usr/bin/env python import itertools list1=['square','circle','triangle'] list2=['red','green'] for shape,color in itertools.product(list1,list2): print(shape+color) yields squarered squaregreen circlered circlegreen trianglered trianglegreen If you'd like both squarered and redsquare, then you could do something like this: for pair in itertools.product(list1,list2): for a,b in itertools.permutations(pair,2): print(a+b) or, to make it into a list: l=[a+b for pair in itertools.product(list1,list2) for a,b in itertools.permutations(pair,2)] print(l) yields ['squarered', 'redsquare', 'squaregreen', 'greensquare', 'circlered', 'redcircle', 'circlegreen', 'greencircle', 'trianglered', 'redtriangle', 'trianglegreen', 'greentriangle'] A: You can in any case do something like: perms = [] for shape in list1: for color in list2: perms.append(shape+color)
permutations of two lists in python
I have two lists like: list1 = ['square','circle','triangle'] list2 = ['red','green'] How can I create all permutations of these lists, like this: [ 'squarered', 'squaregreen', 'redsquare', 'greensquare', 'circlered', 'circlegreen', 'redcircle', 'greencircle', 'trianglered', 'trianglegreen', 'redtriangle', 'greentriangle' ] Can I use itertools for this?
[ "You want the itertools.product method, which will give you the Cartesian product of both lists.\n>>> import itertools\n>>> a = ['foo', 'bar', 'baz']\n>>> b = ['x', 'y', 'z', 'w']\n\n>>> for r in itertools.product(a, b): print r[0] + r[1]\nfoox\nfooy\nfooz\nfoow\nbarx\nbary\nbarz\nbarw\nbazx\nbazy\nbazz\nbazw\n\nYour example asks for the bidirectional product (that is, you want 'xfoo' as well as 'foox'). To get that, just do another product and chain the results:\n>>> for r in itertools.chain(itertools.product(a, b), itertools.product(b, a)):\n... print r[0] + r[1]\n\n", ">>> import itertools\n>>> map(''.join, itertools.chain(itertools.product(list1, list2), itertools.product(list2, list1)))\n['squarered', 'squaregreen', 'circlered',\n'circlegreen', 'trianglered', 'trianglegreen',\n'redsquare', 'redcircle', 'redtriangle', 'greensquare',\n'greencircle', 'greentriangle']\n\n", "How about\n[x + y for x in list1 for y in list2] + [y + x for x in list1 for y in list2]\n\nExample IPython interaction:\nIn [3]: list1 = ['square', 'circle', 'triangle']\n\nIn [4]: list2 = ['red', 'green']\n\nIn [5]: [x + y for x in list1 for y in list2] + [y + x for x in list1 for y in list2]\nOut[5]: \n['squarered',\n 'squaregreen',\n 'circlered',\n 'circlegreen',\n 'trianglered',\n 'trianglegreen',\n 'redsquare',\n 'greensquare',\n 'redcircle',\n 'greencircle',\n 'redtriangle',\n 'greentriangle']\n\n", "I think what you are looking for is the product of two lists, not the permutations:\n#!/usr/bin/env python\nimport itertools\nlist1=['square','circle','triangle'] \nlist2=['red','green']\nfor shape,color in itertools.product(list1,list2):\n print(shape+color)\n\nyields\nsquarered\nsquaregreen\ncirclered\ncirclegreen\ntrianglered\ntrianglegreen\n\nIf you'd like both squarered and redsquare, then you could do something like this:\nfor pair in itertools.product(list1,list2):\n for a,b in itertools.permutations(pair,2):\n print(a+b)\n\nor, to make it into a list: \nl=[a+b for pair in itertools.product(list1,list2)\n for a,b in itertools.permutations(pair,2)]\nprint(l)\n\nyields \n['squarered', 'redsquare', 'squaregreen', 'greensquare', 'circlered', 'redcircle', 'circlegreen', 'greencircle', 'trianglered', 'redtriangle', 'trianglegreen', 'greentriangle']\n\n", "You can in any case do something like:\nperms = []\nfor shape in list1:\n for color in list2:\n perms.append(shape+color)\n\n" ]
[ 103, 45, 17, 11, 6 ]
[]
[]
[ "python" ]
stackoverflow_0001953194_python.txt
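A minimal Python 3 sketch of the approach the top answers describe (the answers above use Python 2 print/map syntax); list1 and list2 are the example lists from the question:

    import itertools

    list1 = ['square', 'circle', 'triangle']
    list2 = ['red', 'green']

    # Cartesian product in both orders, then concatenate each pair
    pairs = itertools.chain(itertools.product(list1, list2),
                            itertools.product(list2, list1))
    combined = [a + b for a, b in pairs]
    # 12 strings: 'squarered', 'squaregreen', ..., 'greentriangle'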
Q: Retrieving doubly raised exceptions original stack trace in python If I have a scenario where an exception is raised, caught, then raised again inside the except: block, is there a way to capture the initial stack frame from which it was raised? The stack-trace that gets printed as python exits describes the place where the exception is raised a second time. Is there a way to raise the exception such that the stack frame that the exception was originally thrown is shown? A: It's a common mistake to re-raise an exception by specifying the exception instance again, like this: except Exception, ex: # do something raise ex This strips the original traceback info and starts a new one. What you should do instead is this, without explicitly specifying the exception (i.e. use a "bare" raise): except Exception, ex: # do something raise This preserves all the original information in the stack trace. See this section in the docs for somewhat helpful background.
Retrieving doubly raised exceptions original stack trace in python
If I have a scenario where an exception is raised, caught, then raised again inside the except: block, is there a way to capture the initial stack frame from which it was raised? The stack-trace that gets printed as python exits describes the place where the exception is raised a second time. Is there a way to raise the exception such that the stack frame that the exception was originally thrown is shown?
[ "It's a common mistake to re-raise an exception by specifying the exception instance again, like this:\nexcept Exception, ex:\n # do something\n raise ex\n\nThis strips the original traceback info and starts a new one. What you should do instead is this, without explicitly specifying the exception (i.e. use a \"bare\" raise):\nexcept Exception, ex:\n # do something\n raise\n\nThis preserves all the original information in the stack trace. See this section in the docs for somewhat helpful background.\n" ]
[ 11 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0001953237_exception_python.txt
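The except syntax in the answer above is Python 2; a small Python 3 sketch of the same point, showing that a bare raise re-raises with the original traceback intact (raise new_exc from original is also available for explicit chaining):

    def inner():
        raise ValueError("original failure")

    def outer():
        try:
            inner()
        except ValueError:
            # log or clean up here, then re-raise
            raise   # bare raise: the traceback still points at inner()

    outer()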
Q: email last lines from a text file in python I am trying to set up an email function that will email the last 15 lines of a results.txt file in python. I am not sure how to do this and was asking do I have to connect to an email server or does python have some other way of sending email. The code below is what i have got so far and any help would be appreciated. Thanks import smtplib # Import the email modules we'll need from email.mime.text import MIMEText # Open a plain text file for reading. For this example, assume that # the text file contains only ASCII characters. fp = open('/home/build/result.txt', 'r') # Create a text/plain message msg = MIMEText(fp.read()) fp.close() me = 'name@server.com' you = 'name@server.com' msg['Subject'] = 'The contents of %s' % '/home/build/result.txt' msg['From'] = me msg['To'] = you # Send the message via our own SMTP server, but don't include the # envelope header. s = smtplib.SMTP() s.sendmail(me, [you], msg.as_string()) s.quit() Hello again, When I'm trying to connect the server will not connect. I know I should not enter the email address. Can anyone suggest what way to write the host information. Thanks smtplib.SMTPServerDisconnected: please run connect() first A: There is no way for your machine to send mail without connecting to a server (otherwise how would the mail get out of your machine?). Most people have a readily available SMTP server provided for them, either by their company (if this is on an intranet) or by their ISP (if a home user). You would need the host name (often something like smtp1.myispdomain.com where of course myispdomain is something else for you) and a port number, usually 25. Sometimes the host is provided as a numeric IP address, like 192.168.0.1. The SMTP() call can take these parameters and will connect to the server automatically. If you don't supply the parameters when you create the SMTP object, you have to call connect() on it later, supplying the same info. See the documentation for more. Note that the default is to connect to localhost and port 25. This works if you're on a Linux box running its own mail forwarder (e.g. Postfix, Sendmail, Exim) but if you're on a Windows machine generally you'll have to use the address supplied by your ISP. A: msg = MIMEText(''.join(fp.readlines()[-15:])) A: msg = MIMEText("\n".join(fp.read().split("\n")[-15:])) Or if you don't need blank lines at end, do like msg = MIMEText("\n".join(fp.read().strip().split("\n")[-15:])) A: You can have a look at this one: http://docs.python.org/library/email-examples.html A: You might want to have a look at my mailer module. It wraps the email modules in the standard library. from mailer import Mailer from mailer import Message message = Message(From="me@example.com", To="you@example.com", charset="utf-8") message.Subject = 'The contents of %s' % '/home/build/result.txt' message.Body = ''.join(fp.readlines()[-15:]) sender = Mailer('smtp.example.com') sender.login('username', 'password') sender.send(message)
email last lines from a text file in python
I am trying to set up an email function that will email the last 15 lines of a results.txt file in python. I am not sure how to do this and was asking do I have to connect to an email server or does python have some other way of sending email. The code below is what i have got so far and any help would be appreciated. Thanks import smtplib # Import the email modules we'll need from email.mime.text import MIMEText # Open a plain text file for reading. For this example, assume that # the text file contains only ASCII characters. fp = open('/home/build/result.txt', 'r') # Create a text/plain message msg = MIMEText(fp.read()) fp.close() me = 'name@server.com' you = 'name@server.com' msg['Subject'] = 'The contents of %s' % '/home/build/result.txt' msg['From'] = me msg['To'] = you # Send the message via our own SMTP server, but don't include the # envelope header. s = smtplib.SMTP() s.sendmail(me, [you], msg.as_string()) s.quit() Hello again, When I'm trying to connect the server will not connect. I know I should not enter the email address. Can anyone suggest what way to write the host information. Thanks smtplib.SMTPServerDisconnected: please run connect() first
[ "There is no way for your machine to send mail without connecting to a server (otherwise how would the mail get out of your machine?). Most people have a readily available SMTP server provided for them, either by their company (if this is on an intranet) or by their ISP (if a home user). You would need the host name (often something like smtp1.myispdomain.com where of course myispdomain is something else for you) and a port number, usually 25. Sometimes the host is provided as a numeric IP address, like 192.168.0.1.\nThe SMTP() call can take these parameters and will connect to the server automatically. If you don't supply the parameters when you create the SMTP object, you have to call connect() on it later, supplying the same info. See the documentation for more.\nNote that the default is to connect to localhost and port 25. This works if you're on a Linux box running its own mail forwarder (e.g. Postfix, Sendmail, Exim) but if you're on a Windows machine generally you'll have to use the address supplied by your ISP.\n", "msg = MIMEText(''.join(fp.readlines()[-15:]))\n\n", "msg = MIMEText(\"\\n\".join(fp.read().split(\"\\n\")[-15:]))\n\nOr if you don't need blank lines at end, do like\nmsg = MIMEText(\"\\n\".join(fp.read().strip().split(\"\\n\")[-15:]))\n\n", "You can have a look at this one:\nhttp://docs.python.org/library/email-examples.html\n", "You might want to have a look at my mailer module. It wraps the email modules in the standard library.\nfrom mailer import Mailer\nfrom mailer import Message\n\nmessage = Message(From=\"me@example.com\",\n To=\"you@example.com\",\n charset=\"utf-8\")\nmessage.Subject = 'The contents of %s' % '/home/build/result.txt'\nmessage.Body = ''.join(fp.readlines()[-15:])\n\nsender = Mailer('smtp.example.com')\nsender.login('username', 'password')\nsender.send(message)\n\n" ]
[ 4, 2, 1, 1, 0 ]
[]
[]
[ "email", "python", "smtp" ]
stackoverflow_0001952734_email_python_smtp.txt
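A hedged sketch combining the answers above; the SMTP host and the addresses are placeholders, and collections.deque keeps only the last 15 lines without holding the whole file in memory:

    import smtplib
    from collections import deque
    from email.mime.text import MIMEText

    with open('/home/build/result.txt') as fp:
        last_lines = deque(fp, maxlen=15)      # only the final 15 lines survive

    msg = MIMEText(''.join(last_lines))
    msg['Subject'] = 'The contents of /home/build/result.txt'
    msg['From'] = 'name@server.com'            # placeholder addresses
    msg['To'] = 'name@server.com'

    s = smtplib.SMTP('smtp.example.com', 25)   # placeholder host: use your ISP's server
    s.sendmail(msg['From'], [msg['To']], msg.as_string())
    s.quit()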
Q: How do I display notifications from `django-notification`? I've been reading the docs for django-notification, and they seem to cover creating notifications just fine, but not how to display them to users. Is there a good reference for this out there, and my Google-fu has just failed me? If not, can someone give me some pointers here? Thanks. A: The answer is you have to build it into your own templates. This can be as simple as the following snippet: <table> <caption>{% trans "Notices" %}</caption> <thead> <tr> <th>{% trans "Type" %}</th> <th>{% trans "Message" %}</th> <th>{% trans "Date of the Notice" %}</th> </tr> </thead> <tbody> {% for notice in notices %} {% if notice.is_unseen %} <tr class="unseen_notice"> {% else %} <tr class="notice"> {% endif %} <td class="notice_type">[{% trans notice.notice_type.display %}]</td> <td class="notice_message">{{ notice.message|safe }}</td> <td class="notice_time">{{ notice.added|timesince }} {% trans "ago" %}</td> </tr> {% endfor %} </tbody> </table> As @googletorp answered, Pinax is the goto place for figuring out how the authors are using django-notification. In particular, there is a notification administration page that can serve as a handy guide. A: Tale a look at Pinax the source can be found on github. They use notifications a lot for their project site http://code.pinaxproject.com . Edit: I just gave it a look. It seems all that Pinax does to make it work is to list it in installed apps before any the other external apps and include it's urls file like you usually would do.
How do I display notifications from `django-notification`?
I've been reading the docs for django-notification, and they seem to cover creating notifications just fine, but not how to display them to users. Is there a good reference for this out there, and my Google-fu has just failed me? If not, can someone give me some pointers here? Thanks.
[ "The answer is you have to build it into your own templates. This can be as simple as the following snippet:\n<table>\n <caption>{% trans \"Notices\" %}</caption> \n <thead>\n <tr>\n <th>{% trans \"Type\" %}</th>\n <th>{% trans \"Message\" %}</th>\n <th>{% trans \"Date of the Notice\" %}</th>\n </tr>\n </thead>\n <tbody>\n {% for notice in notices %}\n {% if notice.is_unseen %}\n <tr class=\"unseen_notice\">\n {% else %}\n <tr class=\"notice\">\n {% endif %}\n <td class=\"notice_type\">[{% trans notice.notice_type.display %}]</td>\n <td class=\"notice_message\">{{ notice.message|safe }}</td>\n <td class=\"notice_time\">{{ notice.added|timesince }} {% trans \"ago\" %}</td>\n </tr>\n {% endfor %}\n </tbody>\n</table>\n\nAs @googletorp answered, Pinax is the goto place for figuring out how the authors are using django-notification. In particular, there is a notification administration page that can serve as a handy guide.\n", "Tale a look at Pinax the source can be found on github. They use notifications a lot for their project site http://code.pinaxproject.com .\nEdit:\nI just gave it a look. It seems all that Pinax does to make it work is to list it in installed apps before any the other external apps and include it's urls file like you usually would do. \n" ]
[ 4, 2 ]
[]
[]
[ "django", "django_notification", "python" ]
stackoverflow_0001609775_django_django_notification_python.txt
Q: Building a weakref cache in python I'm currently coding a project in python where I need a sort of cache of generic objects, I have settled on using WeakValueDictionaries for this. These generic objects are often referenced by many other non-generic objects. My main problem though is that I can't seem to wrap my head around a way of making these WeakValueDictionaries available to many different portions of the program. I would prefer not to use "global" variables if possible. Best regards FrederikNS A: Maybe I'm not understanding your question, but making a dictionary of weakly referenced values available to your code isn't really any different from making a dictionary of anything else available to your code. I would store a reference to the WeakValueDictionary on: each instance (referenced via self) a class (also referenced via self, but shared between instances) a module (global, kind of) depending on what made the most sense given the rest of your code.
Building a weakref cache in python
I'm currently coding a project in python where I need a sort of cache of generic objects, I have settled on using WeakValueDictionaries for this. These generic objects are often referenced by many other non-generic objects. My main problem though is that I can't seem to wrap my head around a way of making these WeakValueDictionaries available to many different portions of the program. I would prefer not to use "global" variables if possible. Best regards FrederikNS
[ "Maybe I'm not understanding your question, but making a dictionary of weakly referenced values available to your code isn't really any different from making a dictionary of anything else available to your code. I would store a reference to the WeakValueDictionary on:\n\neach instance (referenced via self)\na class (also referenced via self, but shared between instances)\na module (global, kind of)\n\ndepending on what made the most sense given the rest of your code.\n" ]
[ 3 ]
[]
[]
[ "caching", "python", "weak_references" ]
stackoverflow_0001953666_caching_python_weak_references.txt
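A minimal sketch of the module-level option from the answer above, assuming the cached objects are instances of ordinary classes (most built-in types cannot be weakly referenced):

    # cache.py -- import this from any module that needs the shared cache
    import weakref

    _cache = weakref.WeakValueDictionary()

    def get_or_create(key, factory):
        """Return the cached object for key, creating it with factory() if absent."""
        obj = _cache.get(key)
        if obj is None:
            obj = factory()
            _cache[key] = obj
        return obj

Entries disappear on their own once the last strong reference to a cached object is gone, which is the point of using WeakValueDictionary rather than a plain dict.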
Q: Are CPython, IronPython, Jython scripts compatible with each other? I am pretty sure that python scripts will work in all three, but I want to make sure. I have read here and there about editors that can write CPython, Jython, IronPython and I am hoping that I am looking to much into the distinction. My situation is I have 3 different api's that I want to test. Each api performs the same functionality code wise, but they are different in implementation. I am writing wrappers around each language's apis. Each wrapper should expose the exact same functionality and implementation to python using Boost::python, Jython, and IronPython. My question is, would a python script written using these exposed methods (that are common for each language) work in all three "flavors" of Python? Like I said I am pretty sure the answer is 'Of course,' but I need to make sure before I spend too much time working on this. A: The short answer is: Sometimes. Some projects built on top of IronPython may not work with CPython, and some CPython modules that are written in C (e.g. NumPy) will not work with IronPython. On a similar note, while Jython implements the language specification, it has several incompatibilities with CPython (for instance, it lacks a few parts of the CPython standard library, and it can import Java standard library packages and classes, like Swing) So, yes, as long as you avoid the incompatibilities.
Are CPython, IronPython, Jython scripts compatible with each other?
I am pretty sure that python scripts will work in all three, but I want to make sure. I have read here and there about editors that can write CPython, Jython, IronPython and I am hoping that I am looking to much into the distinction. My situation is I have 3 different api's that I want to test. Each api performs the same functionality code wise, but they are different in implementation. I am writing wrappers around each language's apis. Each wrapper should expose the exact same functionality and implementation to python using Boost::python, Jython, and IronPython. My question is, would a python script written using these exposed methods (that are common for each language) work in all three "flavors" of Python? Like I said I am pretty sure the answer is 'Of course,' but I need to make sure before I spend too much time working on this.
[ "The short answer is: Sometimes.\nSome projects built on top of IronPython may not work with CPython, and some CPython modules that are written in C (e.g. NumPy) will not work with IronPython.\nOn a similar note, while Jython implements the language specification, it has several incompatibilities with CPython (for instance, it lacks a few parts of the CPython standard library, and it can import Java standard library packages and classes, like Swing)\nSo, yes, as long as you avoid the incompatibilities.\n" ]
[ 10 ]
[]
[]
[ "boost_python", "ironpython", "jython", "python", "testing" ]
stackoverflow_0001953989_boost_python_ironpython_jython_python_testing.txt
Q: PyODBC and Microsoft Access: Inconsistent results from simple query I am using pyodbc, via Microsoft Jet, to access the data in a Microsoft Access 2003 database from a Python program. The Microsoft Access database comes from a third-party; I am only reading the data. I have generally been having success in extracting the data I need, but I recently noticed some discrepancies. I have boiled it down to a simple query, of the form: SELECT field1 FROM table WHERE field1 = 601 AND field2 = 9067 I've obfuscated the field names and values but really, it doesn't get much more trivial than that! When I run the query in Access, it returns one record. Then I run it over pyodbc, with code that looks like this: connection = pyodbc.connect(connectionString) rows = connection.execute(queryString).fetchall() (Again, it doesn't get much more trivial than that!) The value of queryString is cut-and-pasted from the working query in Access, but it returns no records. I expected it to return the same record. When I change the query to search for a different value for field2, bingo, it works. It is only some values it rejects. So, please help me out. Where should I be looking next to explain this discrepancy? If I can't trust the results of trivial queries, I don't have a chance on this project! Update: It gets even simpler! The following query gives different numbers... SELECT COUNT(*) FROM table I ponder if it is related to some form of caching and/or improper transaction management by another application that occasionally to populates the data. A: can you give us an obfuscated database that shows this problem? I've never experienced this. At least give the table definitions -- are any of the columns floats or decimal? A: This might sound stupid. But... Is the path to actual database & connection string (DSN) point to same file location? A: Do you have the same problem with other ODBC tools, for example Query Tool? You can also turn on ODBC tracing in ODBC Connection Manager. I don't have access and don't know whether its sql commands will be traced but sometimes it helps me to solve ODBC problems. A: Are the fields indexed? If so, maybe one of the indexes is corrupted and you need to compact the MDB file. If an index is corrupt, it can lead to major issues. You could lose existing relationships (if the corrupt index is the PK), or you could lose data. So you need to have a backup before you do this. If there is a corrupt index, I think the interactive Access compact operation will tell you, but if not, you can look for the MSysCompactErrors table which will tell you what errors occurred during the compact. This happens only very rarely and can indicate one of two things: bad application design, including obsolete Jet versions (Jet 4 before service pack 6 was very susceptible to this, and that's where I encountered it). unreliable operating environment (networking/hardware/software). Of course, this suggestion is a real long shot, but it is definitely one cause of different results (the most common would be to ORDER BY on the corrupt index and you'll end up with a different record count than with another ORDER BY). A: Problem was resolved somewhere between an upgrade to Access 2007 and downloading a fresh copy of the database from the source. Still don't know what the root cause was, but suspect some form of index corruption. A: I guess the problem may be that you did not commit the query. 
PYODBC starts with autocommit = False and therefore every query like select,insert,update etc will start a transaction that in order to get effect you have to commit. Either call connection.autocommit = True or call cursor.execute("commit") after the query and then fetchall.
PyODBC and Microsoft Access: Inconsistent results from simple query
I am using pyodbc, via Microsoft Jet, to access the data in a Microsoft Access 2003 database from a Python program. The Microsoft Access database comes from a third-party; I am only reading the data. I have generally been having success in extracting the data I need, but I recently noticed some discrepancies. I have boiled it down to a simple query, of the form: SELECT field1 FROM table WHERE field1 = 601 AND field2 = 9067 I've obfuscated the field names and values but really, it doesn't get much more trivial than that! When I run the query in Access, it returns one record. Then I run it over pyodbc, with code that looks like this: connection = pyodbc.connect(connectionString) rows = connection.execute(queryString).fetchall() (Again, it doesn't get much more trivial than that!) The value of queryString is cut-and-pasted from the working query in Access, but it returns no records. I expected it to return the same record. When I change the query to search for a different value for field2, bingo, it works. It is only some values it rejects. So, please help me out. Where should I be looking next to explain this discrepancy? If I can't trust the results of trivial queries, I don't have a chance on this project! Update: It gets even simpler! The following query gives different numbers... SELECT COUNT(*) FROM table I ponder if it is related to some form of caching and/or improper transaction management by another application that occasionally to populates the data.
[ "can you give us an obfuscated database that shows this problem? I've never experienced this. At least give the table definitions -- are any of the columns floats or decimal?\n", "This might sound stupid. But...\nIs the path to actual database & connection string (DSN) point to same file location?\n", "Do you have the same problem with other ODBC tools, for example Query Tool?\nYou can also turn on ODBC tracing in ODBC Connection Manager. I don't have access\nand don't know whether its sql commands will be traced but sometimes it helps me to solve ODBC problems.\n", "Are the fields indexed? If so, maybe one of the indexes is corrupted and you need to compact the MDB file. If an index is corrupt, it can lead to major issues. You could lose existing relationships (if the corrupt index is the PK), or you could lose data. So you need to have a backup before you do this. If there is a corrupt index, I think the interactive Access compact operation will tell you, but if not, you can look for the MSysCompactErrors table which will tell you what errors occurred during the compact.\nThis happens only very rarely and can indicate one of two things:\n\nbad application design, including obsolete Jet versions (Jet 4 before service pack 6 was very susceptible to this, and that's where I encountered it).\nunreliable operating environment (networking/hardware/software).\n\nOf course, this suggestion is a real long shot, but it is definitely one cause of different results (the most common would be to ORDER BY on the corrupt index and you'll end up with a different record count than with another ORDER BY).\n", "Problem was resolved somewhere between an upgrade to Access 2007 and downloading a fresh copy of the database from the source. Still don't know what the root cause was, but suspect some form of index corruption.\n", "I guess the problem may be that you did not commit the query. PYODBC starts with autocommit = False and therefore every query like select,insert,update etc will start a transaction that in order to get effect you have to commit. Either call connection.autocommit = True or call cursor.execute(\"commit\") after the query and then fetchall.\n" ]
[ 1, 1, 1, 1, 1, 1 ]
[]
[]
[ "jet", "ms_access", "odbc", "pyodbc", "python" ]
stackoverflow_0000827502_jet_ms_access_odbc_pyodbc_python.txt
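A hedged sketch of the last answer's autocommit suggestion; the connection string is a placeholder, and the commit can be requested either at connect time or explicitly after the query:

    import pyodbc

    conn_str = r'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\path\to\db.mdb'  # placeholder

    # Option 1: open the connection in autocommit mode
    connection = pyodbc.connect(conn_str, autocommit=True)
    rows = connection.execute("SELECT COUNT(*) FROM some_table").fetchall()

    # Option 2: keep the default (autocommit off) and commit after the query
    connection = pyodbc.connect(conn_str)
    rows = connection.execute("SELECT COUNT(*) FROM some_table").fetchall()
    connection.commit()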
Q: Changing properties of inherited field I want to alter properties of a model field inherited from a base class. The way I try this below does not seem to have any effect. Any ideas? def __init__(self, *args, **kwargs): super(SomeModel, self).__init__(*args, **kwargs) f = self._meta.get_field('some_field') f.blank = True f.help_text = 'This is optional' A: So.. You need to change blank and help_text attributes.. And I assume that you want this feature just so the help_text is displayed in forms, and form does not raise "this field is required" So do this in forms: class MyForm(ModelForm): class Meta: model = YourModel some_field = forms.CharField(required=False, help_text="Whatever you want") A: OK, that's simply not possible, here is why: http://docs.djangoproject.com/en/1.1/topics/db/models/#field-name-hiding-is-not-permitted EDIT: And by the way: don't try to change class properties inside a constructor, it's not a wise thing to do. Basically what you are trying to do, is to change the table, when you are creating a row. You wouldn't do that, if you were just using SQL, would you :)? Completely different thing is changing forms that way - I often dynamically change instance a form, but then I still change only this one instance, not the whole template (a class) of form to be used (for example to dynamically add a field, that is required in this instance of a form).
Changing properties of inherited field
I want to alter properties of a model field inherited from a base class. The way I try this below does not seem to have any effect. Any ideas? def __init__(self, *args, **kwargs): super(SomeModel, self).__init__(*args, **kwargs) f = self._meta.get_field('some_field') f.blank = True f.help_text = 'This is optional'
[ "So.. You need to change blank and help_text attributes.. And I assume that you want this feature just so the help_text is displayed in forms, and form does not raise \"this field is required\"\nSo do this in forms:\nclass MyForm(ModelForm):\n class Meta:\n model = YourModel\n\n some_field = forms.CharField(required=False, help_text=\"Whatever you want\")\n\n", "OK, that's simply not possible, here is why:\nhttp://docs.djangoproject.com/en/1.1/topics/db/models/#field-name-hiding-is-not-permitted\nEDIT:\nAnd by the way: don't try to change class properties inside a constructor, it's not a wise thing to do. Basically what you are trying to do, is to change the table, when you are creating a row. You wouldn't do that, if you were just using SQL, would you :)? Completely different thing is changing forms that way - I often dynamically change instance a form, but then I still change only this one instance, not the whole template (a class) of form to be used (for example to dynamically add a field, that is required in this instance of a form).\n" ]
[ 3, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001940459_django_django_models_python.txt
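A slightly fuller sketch of the accepted answer's form-level workaround; the import path, model name and field name are placeholders taken from the question:

    from django import forms
    from django.forms import ModelForm
    from myapp.models import SomeModel   # placeholder import

    class SomeModelForm(ModelForm):
        # Redeclare the inherited field at form level instead of mutating the model field
        some_field = forms.CharField(required=False, help_text='This is optional')

        class Meta:
            model = SomeModel
            fields = ['some_field']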
Q: How can I debug a problem calling Python's copy.deepcopy() against a custom type? In my code I'm trying to take copies of instances a class using copy.deepcopy. The problem is that under some circumstances it is erroring with the following error: TypeError: 'object.__new__(NotImplementedType) is not safe, use NotImplementedType.__new__()' After much digging I have found that I am able to reproduce the error using the following code: import copy copy.deepcopy(__builtins__) The problem appears to be that at some point it is trying to copy the NotImplementedType builtin. The question is why is it doing this? I have not overridden __deepcopy__ in my class and it doesn't happen all the time. Does anyone have any tips for tracking down where the request to make a copy of this type comes from? I've put some debugging code in the copy module itself to ensure that this is what's happening, but the point at which the problem occurs is so far down a recursive stack it's very hard to make much of what I'm seeing. A: In the end I did some digging in the copy source code and came up with the following solution: from copy import deepcopy, _deepcopy_dispatch from types import ModuleType class MyType(object): def __init__(self): self.module = __builtins__ def copy(self): ''' Patch the deepcopy dispatcher to pass modules back unchanged ''' _deepcopy_dispatch[ModuleType] = lambda x, m: x result = deepcopy(self) del _deepcopy_dispatch[ModuleType] return result MyType().copy() I realise this uses a private API but I couldn't find another clean way of achieving the same thing. I did a quick search on the web and found that other people had used the same API without any bother. If it changes in the future I'll take the hit. I'm also aware that this is not thread-safe (if a thread needed the old behaviour whilst I was doing a copy on another thread I'd be screwed) but again its not a problem for me right now. Hope that helps someone else out at some point. A: you could override the __deepcopy__ method: (python documentation) In order for a class to define its own copy implementation, it can define special methods __copy__() and __deepcopy__(). The former is called to implement the shallow copy operation; no additional arguments are passed. The latter is called to implement the deep copy operation; it is passed one argument, the memo dictionary. If the __deepcopy__() implementation needs to make a deep copy of a component, it should call the deepcopy() function with the component as first argument and the memo dictionary as second argument. Otherwise you could save the modules in a global list or something else. A: You can override the deepcopy behavior of the class that contains a pointer to a module, by using the pickle protocol, which is supported by the copy module, as is stated here. In particular, you can define __getstate__ and __setstate__ for that class. E.g.: >>> class MyClass: ... def __getstate__(self): ... state = self.__dict__.copy() ... del state['some_module'] ... return state ... def __setstate__(self, state): ... self.__dict__.update(state) ... self.some_module = some_module
How can I debug a problem calling Python's copy.deepcopy() against a custom type?
In my code I'm trying to take copies of instances a class using copy.deepcopy. The problem is that under some circumstances it is erroring with the following error: TypeError: 'object.__new__(NotImplementedType) is not safe, use NotImplementedType.__new__()' After much digging I have found that I am able to reproduce the error using the following code: import copy copy.deepcopy(__builtins__) The problem appears to be that at some point it is trying to copy the NotImplementedType builtin. The question is why is it doing this? I have not overridden __deepcopy__ in my class and it doesn't happen all the time. Does anyone have any tips for tracking down where the request to make a copy of this type comes from? I've put some debugging code in the copy module itself to ensure that this is what's happening, but the point at which the problem occurs is so far down a recursive stack it's very hard to make much of what I'm seeing.
[ "In the end I did some digging in the copy source code and came up with the following solution:\nfrom copy import deepcopy, _deepcopy_dispatch\nfrom types import ModuleType\n\nclass MyType(object):\n\n def __init__(self):\n self.module = __builtins__\n\n def copy(self):\n ''' Patch the deepcopy dispatcher to pass modules back unchanged '''\n _deepcopy_dispatch[ModuleType] = lambda x, m: x\n result = deepcopy(self)\n del _deepcopy_dispatch[ModuleType]\n return result\n\nMyType().copy()\n\nI realise this uses a private API but I couldn't find another clean way of achieving the same thing. I did a quick search on the web and found that other people had used the same API without any bother. If it changes in the future I'll take the hit.\nI'm also aware that this is not thread-safe (if a thread needed the old behaviour whilst I was doing a copy on another thread I'd be screwed) but again its not a problem for me right now.\nHope that helps someone else out at some point.\n", "you could override the __deepcopy__ method: (python documentation)\n\nIn order for a class to define its own copy implementation, it can define special methods __copy__() and __deepcopy__(). The former is called to implement the shallow copy operation; no additional arguments are passed. The latter is called to implement the deep copy operation; it is passed one argument, the memo dictionary. If the __deepcopy__() implementation needs to make a deep copy of a component, it should call the deepcopy() function with the component as first argument and the memo dictionary as second argument.\n\nOtherwise you could save the modules in a global list or something else.\n", "You can override the deepcopy behavior of the class that contains a pointer to a module, by using the pickle protocol, which is supported by the copy module, as is stated here. In particular, you can define __getstate__ and __setstate__ for that class. E.g.:\n>>> class MyClass:\n... def __getstate__(self):\n... state = self.__dict__.copy()\n... del state['some_module']\n... return state\n... def __setstate__(self, state):\n... self.__dict__.update(state)\n... self.some_module = some_module\n\n" ]
[ 3, 1, 1 ]
[]
[]
[ "deep_copy", "python" ]
stackoverflow_0001941887_deep_copy_python.txt
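A self-contained sketch of the __getstate__/__setstate__ route from the last answer, using the builtins module as the problematic attribute just as the accepted answer does:

    import copy
    import builtins

    class MyType:
        def __init__(self):
            self.module = builtins     # modules cannot be deep-copied via pickle
            self.data = {'a': 1}

        def __getstate__(self):
            state = self.__dict__.copy()
            del state['module']        # drop the troublesome reference before copying
            return state

        def __setstate__(self, state):
            self.__dict__.update(state)
            self.module = builtins     # restore it on the new copy

    clone = copy.deepcopy(MyType())
    assert clone.module is builtins and clone.data == {'a': 1}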
Q: How can I run 2 servers at once in Python? I need to run 2 servers at once in Python using the threading module, but to call the function run(), the first server is running, but the second server does not run until the end of the first server. This is the source code: import os import sys import threading n_server = 0 n_server_lock = threading.Lock() class ServersThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.start() self.join() def run(self): global n_server, n_server_lock if n_server == 0: n_server_lock.acquire() n_server += 1 n_server_lock.release() print(['MainServer']) # This is the first server class main_server = MainServer() elif n_server == 1: n_server_lock.acquire() n_server += 1 n_server_lock.release() print (['DownloadServer']) # This is the second server class download_server = DownloadServer() if __name__ == "__main__": servers = [] for i in range(2): servers += [ServersThread()] When I call the server class, it automatically runs an infinite while loop. So how can I run 2 servers at once? Thank you very much for your help Fragsworth, I just test the new structure and working perfect. The MainServer and DownloadServer classes, inherit from threading.Thread and run the infinite loop inside run(). Finally I call the servers as you said. A: You don't want to join() in your __init__ function. This is causing the system to block until each thread finishes. I would recommend you restructure your program so your main function looks more like the following: if name == "__main__": servers = [MainServer(), DownloadServer()] for s in servers: s.start() for s in servers: s.join() That is, create a separate thread class for your MainServer and DownloadServer, then have them start asynchronously from the main process, and join afterwards.
How can I run 2 servers at once in Python?
I need to run 2 servers at once in Python using the threading module, but to call the function run(), the first server is running, but the second server does not run until the end of the first server. This is the source code: import os import sys import threading n_server = 0 n_server_lock = threading.Lock() class ServersThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.start() self.join() def run(self): global n_server, n_server_lock if n_server == 0: n_server_lock.acquire() n_server += 1 n_server_lock.release() print(['MainServer']) # This is the first server class main_server = MainServer() elif n_server == 1: n_server_lock.acquire() n_server += 1 n_server_lock.release() print (['DownloadServer']) # This is the second server class download_server = DownloadServer() if __name__ == "__main__": servers = [] for i in range(2): servers += [ServersThread()] When I call the server class, it automatically runs an infinite while loop. So how can I run 2 servers at once? Thank you very much for your help Fragsworth, I just test the new structure and working perfect. The MainServer and DownloadServer classes, inherit from threading.Thread and run the infinite loop inside run(). Finally I call the servers as you said.
[ "You don't want to join() in your __init__ function. This is causing the system to block until each thread finishes.\nI would recommend you restructure your program so your main function looks more like the following:\nif name == \"__main__\":\n servers = [MainServer(), DownloadServer()]\n for s in servers:\n s.start()\n for s in servers:\n s.join() \n\nThat is, create a separate thread class for your MainServer and DownloadServer, then have them start asynchronously from the main process, and join afterwards.\n" ]
[ 6 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0001954549_multithreading_python.txt
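A sketch of the restructure the answer describes, with two placeholder server classes that inherit from threading.Thread and loop inside run():

    import threading
    import time

    class MainServer(threading.Thread):
        def run(self):
            while True:
                # accept and handle main-server requests here
                time.sleep(1)

    class DownloadServer(threading.Thread):
        def run(self):
            while True:
                # accept and handle download requests here
                time.sleep(1)

    if __name__ == "__main__":
        servers = [MainServer(), DownloadServer()]
        for s in servers:
            s.start()      # both loops now run concurrently
        for s in servers:
            s.join()       # block the main thread until the servers exit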
Q: Python instance method in C Consider the following Python (3.x) code: class Foo(object): def bar(self): pass foo = Foo() How to write the same functionality in C? I mean, how do I create an object with a method in C? And then create an instance from it? Edit: Oh, sorry! I meant the same functionality via Python C API. How to create a Python method via its C API? Something like: PyObject *Foo = ?????; PyMethod??? *bar = ????; A: You can't! C does not have "classes", it only has structs. And a struct cannot have code (methods or functions). You can, however, fake it with function pointers: /* struct object has 1 member, namely a pointer to a function */ struct object { int (*class)(void); }; /* create a variable of type `struct object` and call it `new` */ struct object new; /* make its `class` member point to the `rand()` function */ new.class = rand; /* now call the "object method" */ new.class(); A: Here's a simple class (adapted from http://nedbatchelder.com/text/whirlext.html for 3.x): #include "Python.h" #include "structmember.h" // The CountDict type. typedef struct { PyObject_HEAD PyObject * dict; int count; } CountDict; static int CountDict_init(CountDict *self, PyObject *args, PyObject *kwds) { self->dict = PyDict_New(); self->count = 0; return 0; } static void CountDict_dealloc(CountDict *self) { Py_XDECREF(self->dict); self->ob_type->tp_free((PyObject*)self); } static PyObject * CountDict_set(CountDict *self, PyObject *args) { const char *key; PyObject *value; if (!PyArg_ParseTuple(args, "sO:set", &key, &value)) { return NULL; } if (PyDict_SetItemString(self->dict, key, value) < 0) { return NULL; } self->count++; return Py_BuildValue("i", self->count); } static PyMemberDef CountDict_members[] = { { "dict", T_OBJECT, offsetof(CountDict, dict), 0, "The dictionary of values collected so far." }, { "count", T_INT, offsetof(CountDict, count), 0, "The number of times set() has been called." }, { NULL } }; static PyMethodDef CountDict_methods[] = { { "set", (PyCFunction) CountDict_set, METH_VARARGS, "Set a key and increment the count." }, // typically there would be more here... 
{ NULL } }; static PyTypeObject CountDictType = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "CountDict", /* tp_name */ sizeof(CountDict), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)CountDict_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags*/ "CountDict object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ CountDict_methods, /* tp_methods */ CountDict_members, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)CountDict_init, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ }; // Module definition static PyModuleDef moduledef = { PyModuleDef_HEAD_INIT, "countdict", MODULE_DOC, -1, NULL, /* methods */ NULL, NULL, /* traverse */ NULL, /* clear */ NULL }; PyObject * PyInit_countdict(void) { PyObject * mod = PyModule_Create(&moduledef); if (mod == NULL) { return NULL; } CountDictType.tp_new = PyType_GenericNew; if (PyType_Ready(&CountDictType) < 0) { Py_DECREF(mod); return NULL; } Py_INCREF(&CountDictType); PyModule_AddObject(mod, "CountDict", (PyObject *)&CountDictType); return mod; } A: I suggest you start from the example source code here -- it's part of Python 3's sources, and it exists specifically to show you, by example, how to perform what you require (and a few other things besides) -- use the C API to create a module, make a new type in that module, endow that type with methods and attributes. That's basically the first part of the source, culminating in the definition of Xxo_Type -- then you get examples of how to define various kinds of functions, some other types you may not care about, and finally the module object proper and its initialization (you can skip most of that of course, though not the module object and the parts of its initialization that lead up to the definition of the type of interest;-). Most of the questions you might have while studying and adapting that source to your specific needs have good answers in the docs, especially in the section on "Object Implementation Support" -- but of course you can always open a new question here (one per issue would be best -- a "question" with many actual questions is always a bother!-) showing exactly what you're doing, what you were expecting as a result, and what you are seeing instead -- and you'll get answers which tend to include some pretty useful ones;-).
Python instance method in C
Consider the following Python (3.x) code: class Foo(object): def bar(self): pass foo = Foo() How to write the same functionality in C? I mean, how do I create an object with a method in C? And then create an instance from it? Edit: Oh, sorry! I meant the same functionality via Python C API. How to create a Python method via its C API? Something like: PyObject *Foo = ?????; PyMethod??? *bar = ????;
[ "You can't! C does not have \"classes\", it only has structs. And a struct cannot have code (methods or functions).\nYou can, however, fake it with function pointers:\n/* struct object has 1 member, namely a pointer to a function */\nstruct object {\n int (*class)(void);\n};\n\n/* create a variable of type `struct object` and call it `new` */\nstruct object new;\n/* make its `class` member point to the `rand()` function */\nnew.class = rand;\n\n/* now call the \"object method\" */\nnew.class();\n\n", "Here's a simple class (adapted from http://nedbatchelder.com/text/whirlext.html for 3.x):\n#include \"Python.h\"\n#include \"structmember.h\"\n\n// The CountDict type.\n\ntypedef struct {\n PyObject_HEAD\n PyObject * dict;\n int count;\n} CountDict;\n\nstatic int\nCountDict_init(CountDict *self, PyObject *args, PyObject *kwds)\n{\n self->dict = PyDict_New();\n self->count = 0;\n return 0;\n}\n\nstatic void\nCountDict_dealloc(CountDict *self)\n{\n Py_XDECREF(self->dict);\n self->ob_type->tp_free((PyObject*)self);\n}\n\nstatic PyObject *\nCountDict_set(CountDict *self, PyObject *args)\n{\n const char *key;\n PyObject *value;\n\n if (!PyArg_ParseTuple(args, \"sO:set\", &key, &value)) {\n return NULL;\n }\n\n if (PyDict_SetItemString(self->dict, key, value) < 0) {\n return NULL;\n }\n\n self->count++;\n\n return Py_BuildValue(\"i\", self->count);\n}\n\nstatic PyMemberDef\nCountDict_members[] = {\n { \"dict\", T_OBJECT, offsetof(CountDict, dict), 0,\n \"The dictionary of values collected so far.\" },\n\n { \"count\", T_INT, offsetof(CountDict, count), 0,\n \"The number of times set() has been called.\" },\n\n { NULL }\n};\n\nstatic PyMethodDef\nCountDict_methods[] = {\n { \"set\", (PyCFunction) CountDict_set, METH_VARARGS,\n \"Set a key and increment the count.\" },\n // typically there would be more here...\n\n { NULL }\n};\n\nstatic PyTypeObject\nCountDictType = {\n PyObject_HEAD_INIT(NULL)\n 0, /* ob_size */\n \"CountDict\", /* tp_name */\n sizeof(CountDict), /* tp_basicsize */\n 0, /* tp_itemsize */\n (destructor)CountDict_dealloc, /* tp_dealloc */\n 0, /* tp_print */\n 0, /* tp_getattr */\n 0, /* tp_setattr */\n 0, /* tp_compare */\n 0, /* tp_repr */\n 0, /* tp_as_number */\n 0, /* tp_as_sequence */\n 0, /* tp_as_mapping */\n 0, /* tp_hash */\n 0, /* tp_call */\n 0, /* tp_str */\n 0, /* tp_getattro */\n 0, /* tp_setattro */\n 0, /* tp_as_buffer */\n Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags*/\n \"CountDict object\", /* tp_doc */\n 0, /* tp_traverse */\n 0, /* tp_clear */\n 0, /* tp_richcompare */\n 0, /* tp_weaklistoffset */\n 0, /* tp_iter */\n 0, /* tp_iternext */\n CountDict_methods, /* tp_methods */\n CountDict_members, /* tp_members */\n 0, /* tp_getset */\n 0, /* tp_base */\n 0, /* tp_dict */\n 0, /* tp_descr_get */\n 0, /* tp_descr_set */\n 0, /* tp_dictoffset */\n (initproc)CountDict_init, /* tp_init */\n 0, /* tp_alloc */\n 0, /* tp_new */\n};\n\n// Module definition\n\nstatic PyModuleDef\nmoduledef = {\n PyModuleDef_HEAD_INIT,\n \"countdict\",\n MODULE_DOC,\n -1,\n NULL, /* methods */\n NULL,\n NULL, /* traverse */\n NULL, /* clear */\n NULL\n};\n\n\nPyObject *\nPyInit_countdict(void)\n{\n PyObject * mod = PyModule_Create(&moduledef);\n if (mod == NULL) {\n return NULL;\n }\n\n CountDictType.tp_new = PyType_GenericNew;\n if (PyType_Ready(&CountDictType) < 0) {\n Py_DECREF(mod);\n return NULL;\n }\n\n Py_INCREF(&CountDictType);\n PyModule_AddObject(mod, \"CountDict\", (PyObject *)&CountDictType);\n\n return mod;\n}\n\n", "I suggest you start from the example source code 
here -- it's part of Python 3's sources, and it exists specifically to show you, by example, how to perform what you require (and a few other things besides) -- use the C API to create a module, make a new type in that module, endow that type with methods and attributes. That's basically the first part of the source, culminating in the definition of Xxo_Type -- then you get examples of how to define various kinds of functions, some other types you may not care about, and finally the module object proper and its initialization (you can skip most of that of course, though not the module object and the parts of its initialization that lead up to the definition of the type of interest;-).\nMost of the questions you might have while studying and adapting that source to your specific needs have good answers in the docs, especially in the section on \"Object Implementation Support\" -- but of course you can always open a new question here (one per issue would be best -- a \"question\" with many actual questions is always a bother!-) showing exactly what you're doing, what you were expecting as a result, and what you are seeing instead -- and you'll get answers which tend to include some pretty useful ones;-).\n" ]
[ 3, 3, 2 ]
[]
[]
[ "c", "python", "python_c_api" ]
stackoverflow_0001954494_c_python_python_c_api.txt
Q: Can I store a python dictionary in google's BigTable datastore without serializing it explicitly? I have a python dictionary that I would like to store in Google's BigTable datastore (it is an attribute in a db.Model class). Is there an easy way to do this? i.e. using a db.DictionaryProperty? Or do I have to use pickle to serialize my dictionary? My dictionary is relatively straight forward. It consists of strings as keys, but it may also contain sub dictionaries for some keys. For example: { 'myKey' : 100, 'another' : 'aha', 'a sub dictionary' : { 'a': 1, 'b':2 } } PS: I would like to serialize as binary, not text if possible. A: Here's another approach: class DictProperty(db.Property): data_type = dict def get_value_for_datastore(self, model_instance): value = super(DictProperty, self).get_value_for_datastore(model_instance) return db.Blob(pickle.dumps(value)) def make_value_from_datastore(self, value): if value is None: return dict() return pickle.loads(value) def default_value(self): if self.default is None: return dict() else: return super(DictProperty, self).default_value().copy() def validate(self, value): if not isinstance(value, dict): raise db.BadValueError('Property %s needs to be convertible ' 'to a dict instance (%s) of class dict' % (self.name, value)) return super(DictProperty, self).validate(value) def empty(self, value): return value is None A: I think you cannot avoid serializing your objects. I would define the following model to store each key, value pair: class DictModel(db.Model): value = db.TextProperty() To save to the datastore I'd use: def set_value(key, value): key = DictModel(value=pickle.dumps(value), key_name=key) key.save() return key And to retrieve data: def get_value(key): return pickle.loads(DictModel.get_by_key_name(key).value) A: I assume that when you need to be able to reach the dict, it's all-at-once? You don't have to get values from inside the dict while it's in the datastore? If so, you'll have to serialize, but don't have to use pickle; we use simplejson instead. Then retrieving is a simple matter of overriding toBasicType(), sort of like this: class MyModel(db.Model): #define some properties, including "data" which is a TextProperty containing a biggish dict def toBasicType(self): return {'metadata': self.getMetadata(), 'data': simplejson.loads(self.data)} Creation involves calling MyModel(...,simplejson.dumps(data),...). If you're already pickling, that may be your best bet, but simplejson's working pretty well for us.
Can I store a python dictionary in google's BigTable datastore without serializing it explicitly?
I have a python dictionary that I would like to store in Google's BigTable datastore (it is an attribute in a db.Model class). Is there an easy way to do this? i.e. using a db.DictionaryProperty? Or do I have to use pickle to serialize my dictionary? My dictionary is relatively straight forward. It consists of strings as keys, but it may also contain sub dictionaries for some keys. For example: { 'myKey' : 100, 'another' : 'aha', 'a sub dictionary' : { 'a': 1, 'b':2 } } PS: I would like to serialize as binary, not text if possible.
[ "Here's another approach:\nclass DictProperty(db.Property):\n data_type = dict\n\n def get_value_for_datastore(self, model_instance):\n value = super(DictProperty, self).get_value_for_datastore(model_instance)\n return db.Blob(pickle.dumps(value))\n\n def make_value_from_datastore(self, value):\n if value is None:\n return dict()\n return pickle.loads(value)\n\n def default_value(self):\n if self.default is None:\n return dict()\n else:\n return super(DictProperty, self).default_value().copy()\n\n def validate(self, value):\n if not isinstance(value, dict):\n raise db.BadValueError('Property %s needs to be convertible '\n 'to a dict instance (%s) of class dict' % (self.name, value))\n return super(DictProperty, self).validate(value)\n\n def empty(self, value):\n return value is None\n\n", "I think you cannot avoid serializing your objects.\nI would define the following model to store each key, value pair:\nclass DictModel(db.Model):\n value = db.TextProperty()\n\nTo save to the datastore I'd use:\ndef set_value(key, value):\n key = DictModel(value=pickle.dumps(value), key_name=key)\n key.save()\n return key\n\nAnd to retrieve data:\ndef get_value(key):\n return pickle.loads(DictModel.get_by_key_name(key).value)\n\n", "I assume that when you need to be able to reach the dict, it's all-at-once? You don't have to get values from inside the dict while it's in the datastore?\nIf so, you'll have to serialize, but don't have to use pickle; we use simplejson instead. Then retrieving is a simple matter of overriding toBasicType(), sort of like this:\nclass MyModel(db.Model):\n #define some properties, including \"data\" which is a TextProperty containing a biggish dict\n def toBasicType(self):\n return {'metadata': self.getMetadata(),\n 'data': simplejson.loads(self.data)}\nCreation involves calling MyModel(...,simplejson.dumps(data),...).\nIf you're already pickling, that may be your best bet, but simplejson's working pretty well for us.\n" ]
[ 8, 1, 1 ]
[]
[]
[ "google_app_engine", "pickle", "python" ]
stackoverflow_0001953784_google_app_engine_pickle_python.txt
Q: What's the point of a main function and/or __name__ == "__main__" check in Python? I occasionally notice something like the following in Python scripts: if __name__ == "__main__": # do stuff like call main() What's the point of this? A: Having all substantial Python code live inside a function (i.e., not at module top level) is a crucial performance optimization as well as an important factor in good organization of code (the Python compiler can optimize access to local variables in a function much better than it can optimize "local" variables which are actually a module's globals, since the semantics of the latter are more demanding). Making the call to the function conditional on the current module being run as the "main script" (rather than imported from another module) makes for potential reusability of nuggets of functionality contained in the module (since other modules may import it and just call the appropriate functions or classes), and even more importantly it supports solid unit testing (where all sort of mock-ups and fakes for external subsystems may generally need to be set up before the module's functionality is exercised and tested). A: This allows a python script to be imported or run standalone is a sane way. If you run a python file directly, the __name__ variable will contain __main__. If you import the script that will not be the case. Normally, if you import the script you want to call functions or reference classes from the file. If you did not have this check, any code that was not in a class or function would run when you import. A: The sole purpose of this, assuming it is in main.py, is so other files can import main to include classes and functions that are in your "main" program, but without running the source code. Without this condition, code that is in the global scope will be executed when it is imported by other scripts. A: It's a great place to put module tests. This will only run when a module is run directly from the shell but it will not run if imported.
What's the point of a main function and/or __name__ == "__main__" check in Python?
I occasionally notice something like the following in Python scripts: if __name__ == "__main__": # do stuff like call main() What's the point of this?
[ "Having all substantial Python code live inside a function (i.e., not at module top level) is a crucial performance optimization as well as an important factor in good organization of code (the Python compiler can optimize access to local variables in a function much better than it can optimize \"local\" variables which are actually a module's globals, since the semantics of the latter are more demanding).\nMaking the call to the function conditional on the current module being run as the \"main script\" (rather than imported from another module) makes for potential reusability of nuggets of functionality contained in the module (since other modules may import it and just call the appropriate functions or classes), and even more importantly it supports solid unit testing (where all sort of mock-ups and fakes for external subsystems may generally need to be set up before the module's functionality is exercised and tested).\n", "This allows a python script to be imported or run standalone is a sane way.\nIf you run a python file directly, the __name__ variable will contain __main__. If you import the script that will not be the case. Normally, if you import the script you want to call functions or reference classes from the file.\nIf you did not have this check, any code that was not in a class or function would run when you import.\n", "The sole purpose of this, assuming it is in main.py, is so other files can import main to include classes and functions that are in your \"main\" program, but without running the source code.\nWithout this condition, code that is in the global scope will be executed when it is imported by other scripts.\n", "It's a great place to put module tests. This will only run when a module is run directly from the shell but it will not run if imported.\n" ]
[ 26, 8, 7, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001954700_python.txt
Q: How do I access an object's method when the method's name is in a variable? Say I have a class object named test. test has various methods, one of them is whatever() . I have a variable named method = "whatever" How can I access the method using the variable with test? Thanks! A: Get the attribute with getattr: method = "whatever" getattr(test, method) You can also call it: getattr(test, method)() A: To access the method, use getattr(test, method); this way you can bind it to a variable, return it as a function result, pass it as an argument, and so forth. To call it as well, append parenthesized arguments (just parentheses if there are no arguments), for example getattr(test, method)().
How do I access an object's method when the method's name is in a variable?
Say I have a class object named test. test has various methods, one of them is whatever() . I have a variable named method = "whatever" How can I access the method using the variable with test? Thanks!
[ "Get the attribute with getattr:\nmethod = \"whatever\"\ngetattr(test, method)\n\nYou can also call it:\ngetattr(test, method)()\n\n", "To access the method, getattr(test, test.method); this way you can bind it to a variable, return it as a function result, pass it as an argument, and so forth. To call it as well, append parenthesized arguments (just parentheses if there are no arguments), for example getattr(test, test.method)().\n" ]
[ 9, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001954840_python.txt
Q: call python modules from java classes? Is it possible, by using Jython, to call Jython classes from Java code? If yes, how? A: Yes - the Jython FAQ has a specific entry about this. A: Nice article regarding this issue Or just try Jython Jython Project Page
call python modules from java classes?
Is it possible, by using Jython, to call Jython classes from Java code? If yes, how?
[ "Yes - the Jython FAQ has a specific entry about this.\n", "Nice article regarding this issue\nOr just try Jython\nJython Project Page\n" ]
[ 3, 3 ]
[]
[]
[ "class", "java", "jython", "module", "python" ]
stackoverflow_0001954887_class_java_jython_module_python.txt
Q: Basics of SymPy I am just starting to play with SymPy and I am a bit surprised by some of its behavior, for instance this is not the results I would expect: >>> import sympy as s >>> (-1)**s.I == s.E**(-1* s.pi) False >>> s.I**s.I == s.exp(-s.pi/2) False Why are these returning False and is there a way to get it to convert from one way of writing a complex number to another? A: From the FAQ: Why does SymPy say that two equal expressions are unequal? The equality operator (==) tests whether expressions have identical form, not whether they are mathematically equivalent. To make equality testing useful in basic cases, SymPy tries to rewrite mathematically equivalent expressions to a canonical form when evaluating them. For example, SymPy evaluates both x+x and -(-2*x) to 2*x, and x*x to x**2. The simplest example where the default transformations fail to generate a canonical form is for nonlinear polynomials, which can be represented in both factored and expanded form. Although clearly a(1+b) = a+ab mathematically, SymPy gives: >>> bool(a*(1+b) == a + a*b) False Likewise, SymPy fails to detect that the difference is zero: >>> bool(a*(1+b) - (a+a*b) == 0) False If you want to determine the mathematical equivalence of nontrivial expressions, you should apply a more advanced simplification routine to both sides of the equation. In the case of polynomials, expressions can be rewritten in a canonical form by expanding them fully. This is done using the .expand() method in SymPy: >>> A, B = a*(1+b), a + a*b >>> bool(A.expand() == B.expand()) True >>> (A - B).expand() 0 If .expand() does not help, try simplify(), trigsimp(), etc, which attempt more advanced transformations. For example, >>> trigsimp(cos(x)**2 + sin(x)**2) == 1 True A: Because they're not equal. Try this one: s.E**(s.I* s.pi)== s.I*s.I
Basics of SymPy
I am just starting to play with SymPy and I am a bit surprised by some of its behavior, for instance this is not the results I would expect: >>> import sympy as s >>> (-1)**s.I == s.E**(-1* s.pi) False >>> s.I**s.I == s.exp(-s.pi/2) False Why are these returning False and is there a way to get it to convert from one way of writing a complex number to another?
[ "From the FAQ:\nWhy does SymPy say that two equal expressions are unequal?\nThe equality operator (==) tests whether expressions have identical form, not whether they are mathematically equivalent.\nTo make equality testing useful in basic cases, SymPy tries to rewrite mathematically equivalent expressions to a canonical form when evaluating them. For example, SymPy evaluates both x+x and -(-2*x) to 2*x, and x*x to x**2.\nThe simplest example where the default transformations fail to generate a canonical form is for nonlinear polynomials, which can be represented in both factored and expanded form. Although clearly a(1+b) = a+ab mathematically, SymPy gives: \n>>> bool(a*(1+b) == a + a*b) False \n\nLikewise, SymPy fails to detect that the difference is zero: \n >>> bool(a*(1+b) - (a+a*b) == 0) False \n\nIf you want to determine the mathematical equivalence of nontrivial expressions, you should apply a more advanced simplification routine to both sides of the equation. In the case of polynomials, expressions can be rewritten in a canonical form by expanding them fully. This is done using the .expand() method in SymPy: \n>>> A, B = a*(1+b), a + a*b \n>>> bool(A.expand() == B.expand()) True \n>>> (A - B).expand() 0 \n\nIf .expand() does not help, try simplify(), trigsimp(), etc, which attempt more advanced transformations. For example, \n>>> trigsimp(cos(x)**2 + sin(x)**2) == 1 True\n\n", "Because they're not equal. Try this one:\ns.E**(s.I* s.pi)== s.I*s.I\n" ]
[ 9, 0 ]
[]
[]
[ "python", "sympy" ]
stackoverflow_0001954799_python_sympy.txt
Q: Python: which XML parsing library will work out-of-the-box for Python 2.4 and up? How can I make sure that my Python script, which will be doing some XML parsing, will Just Work with Python 2.4, 2.5 and 2.6? Specifically, which (if any) XML parsing library is present in, and compatible between, all those versions? Edit: the working-out-of-the-box requirement is in place because the XML parsing I'm going to need to do is very limited (just grabbing some values) and I'm going to need to run this script on a bunch of different platforms, so I'd rather deal with a crappy XML API then try to get lxml installed on Mac, Linux and Windows. A: minidom is available in Python 2.0 and later. However, if I were you, I would strongly consider using ElementTree which is available in Python 2.5 and later. Its syntax is much more pleasant. 2.4 users can reasonably easily download ElementTree, 2.5+ it will work without any additional dependencies. But I may be spoiled by rarely needing to target pre-2.5, myself. A: So, basically intersect the result of "xml" in these pages: https://docs.python.org/release/2.4/modindex.html https://docs.python.org/release/2.5/modindex.html https://docs.python.org/release/2.6/modindex.html That leaves xml.dom and xml.sax. If you could relieve the "out-of-the-box"-requirement: lxml A: You can use minidom
Python: which XML parsing library will work out-of-the-box for Python 2.4 and up?
How can I make sure that my Python script, which will be doing some XML parsing, will Just Work with Python 2.4, 2.5 and 2.6? Specifically, which (if any) XML parsing library is present in, and compatible between, all those versions? Edit: the working-out-of-the-box requirement is in place because the XML parsing I'm going to need to do is very limited (just grabbing some values) and I'm going to need to run this script on a bunch of different platforms, so I'd rather deal with a crappy XML API then try to get lxml installed on Mac, Linux and Windows.
[ "minidom is available in Python 2.0 and later.\nHowever, if I were you, I would strongly consider using ElementTree which is available in Python 2.5 and later. Its syntax is much more pleasant.\n2.4 users can reasonably easily download ElementTree, 2.5+ it will work without any additional dependencies. But I may be spoiled by rarely needing to target pre-2.5, myself.\n", "So, basically intersect the result of \"xml\" in these pages:\n\nhttps://docs.python.org/release/2.4/modindex.html\nhttps://docs.python.org/release/2.5/modindex.html\nhttps://docs.python.org/release/2.6/modindex.html\n\nThat leaves xml.dom and xml.sax.\nIf you could relieve the \"out-of-the-box\"-requirement: lxml\n", "You can use minidom\n" ]
[ 8, 5, 1 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001954923_python_xml.txt
Q: Get html output from python code I have a dictionary and would like to produce an HTML page that draws a simple HTML table with its keys and values. How can it be done from Python code? A: output = "<html><body><table>" for key in your_dict: output += "<tr><td>%s</td><td>%s</td></tr>" % (key, your_dict[key]) output += "</table></body></html>" print output A: You can use a template engine like Jinja. A list of engines for templating is available here. A: You may be interested in markup; see http://markup.sourceforge.net/ A: You may also consider using the Mako template library. A: Along with the numerous template engines, you might consider using Python's built-in Template. It will give you basic templating functionality without needing to learn (or choose) a template library.
Get html output from python code
I have a dictionary and would like to produce an HTML page that draws a simple HTML table with its keys and values. How can it be done from Python code?
[ "output = \"<html><body><table>\"\nfor key in your_dict:\n output += \"<tr><td>%s</td><td>%s</td></tr>\" % (key, your_dict[key])\noutput += \"</table></body></html>\nprint output\n\n", "You can use a template engine like Jinja. A list of engines for templating is available here.\n", "You maybe interested by markup see http://markup.sourceforge.net/\n", "You may also consider using Mako template library.\n", "Along with the numerous template engines, you might consider using Python's built in Template. It will give you basic templating functionality without needing to learn (or choose) a template library.\n" ]
[ 10, 2, 1, 1, 0 ]
[]
[]
[ "html", "python" ]
stackoverflow_0001953649_html_python.txt
Q: python foreign character in csv I have a little CSV file which contains foreign characters, like Chinese. How can I display them as Chinese characters instead of strings like \xa5\\xa4\? Thanks A: Have you read the docs for the csv module? It includes examples of how to wrap csv to handle Unicode data.
python foreign character in csv
I have a little CSV file which contains foreign characters, like Chinese. How can I display them as Chinese characters instead of strings like \xa5\\xa4\? Thanks
[ "Have you read the docs for the csv module? It includes examples of how to wrap csv to handle Unicode data.\n" ]
[ 3 ]
[]
[]
[ "encode", "python" ]
stackoverflow_0001955301_encode_python.txt
Q: __getattr__ keeps returning None even when I attempt to return values Try running the following code: class Test(object): def func_accepting_args(self,prop,*args): msg = "%s getter/setter got called with args %s" % (prop,args) print msg #this is prented return msg #Why is None returned? def __getattr__(self,name): if name.startswith("get_") or name.startswith("set_"): prop = name[4:] def return_method(*args): self.func_accepting_args(prop,*args) return return_method else: raise AttributeError, name x = Test() x.get_prop(50) #will return None, why?!, I was hoping it would return msg from func_accepting_args Anyone with an explanation as to why None is returned? A: return_method() doesn't return anything. It should return the result of the wrapped func_accepting_args(): def return_method(*args): return self.func_accepting_args(prop,*args) A: Because return_method() doesn't return a value. It just falls out the bottom, hence you get None.
__getattr__ keeps returning None even when I attempt to return values
Try running the following code: class Test(object): def func_accepting_args(self,prop,*args): msg = "%s getter/setter got called with args %s" % (prop,args) print msg #this is prented return msg #Why is None returned? def __getattr__(self,name): if name.startswith("get_") or name.startswith("set_"): prop = name[4:] def return_method(*args): self.func_accepting_args(prop,*args) return return_method else: raise AttributeError, name x = Test() x.get_prop(50) #will return None, why?!, I was hoping it would return msg from func_accepting_args Anyone with an explanation as to why None is returned?
[ "return_method() doesn't return anything. It should return the result of the wrapped func_accepting_args():\ndef return_method(*args):\n return self.func_accepting_args(prop,*args)\n\n", "Because return_method() doesn't return a value. It just falls out the bottom, hence you get None.\n" ]
[ 6, 1 ]
[]
[]
[ "metaprogramming", "python" ]
stackoverflow_0001955363_metaprogramming_python.txt
Q: Why is this a python syntax error during an initialisation? This code: class Todo: def addto(self, list_name="", text=""): """ Adds an item to the specified list. """ if list_name == "": list_name = sys.argv[2] text = ''.join(sys.argv[3:] todo_list = TodoList(getListFilename(list_name)) produces a syntax error with the little arrow pointing to todo_list on the last line. The __init__ method for TodoList is here: def __init__(self, json_location): """ Sets up the list. """ self.json_location = json_location self.load() I am kind of new to Python, so I don't see what I am doing wrong here. A: you need to close this ) text = ''.join(sys.argv[3:]
Why is this a python syntax error during an initialisation?
This code: class Todo: def addto(self, list_name="", text=""): """ Adds an item to the specified list. """ if list_name == "": list_name = sys.argv[2] text = ''.join(sys.argv[3:] todo_list = TodoList(getListFilename(list_name)) produces a syntax error with the little arrow pointing to todo_list on the last line. The __init__ method for TodoList is here: def __init__(self, json_location): """ Sets up the list. """ self.json_location = json_location self.load() I am kind of new to Python, so I don't see what I am doing wrong here.
[ "you need to close this )\ntext = ''.join(sys.argv[3:]\n\n" ]
[ 11 ]
[]
[]
[ "python", "syntax", "syntax_error" ]
stackoverflow_0001955448_python_syntax_syntax_error.txt
Q: Comparison of the multiprocessing module and pyro? I use pyro for basic management of parallel jobs on a compute cluster. I just moved to a cluster where I will be responsible for using all the cores on each compute node. (On previous clusters, each core has been a separate node.) The python multiprocessing module seems like a good fit for this. I notice it can also be used for remote-process communication. If anyone has used both frameworks for remote-process communication, I'd be grateful to hear how they stack up against each other. The obvious benefit of the multiprocessing module is that it's built-in from 2.6. Apart from that, it's hard for me to tell which is better. A: EDIT: I'm changing my answer so you avoid pain. multiprocessing is immature, the docs on BaseManager are INCORRECT, and if you're an object-oriented thinker that wants to create shared objects on the fly at run-time, USE PYRO OR YOU WILL SERIOUSLY REGRET IT! If you are just doing functional programming using a shared queue that you register up front like all the stupid examples GOOD FOR YOU. Short Answer Multiprocessing: Feels awkward doing object-oriented remote objects Easy breezy crypto (authkey) Over a network or just inter-process communication No nameserver extra hassle like in Pyro (there are ways to get around this) Edit: Can't "register" objects once the manager is instantiated!!?? Edit: If a server isn't not started, the client throws some "Invalid argument" exception instead of just saying "Failed to connect" WTF!? Edit: BaseManager documentation is incorrect! There is no "start" method!?! Edit: Very little examples as to how to use it. Pyro: Simple remote objects Network comms only (loopback if local only) Edit: This thing just WORKS, and it likes object-oriented object sharing, which makes me LIKE it Edit: Why isn't THIS a part of the standard library instead of that multiprocessing piece of crap that tried to copy it and failed miserably? Edit: The first time I answered this I had just dived into 2.6 multiprocessing. In the code I show below, the Texture class is registered and shared as a proxy, however the "data" attribute inside of it is NOT. So guess what happens, each process has a separate copy of the "data" attribute inside of the Texture proxy, despite what you might expect. I just spent untold amount of hours trying to figure out how a good pattern to create shared objects during run-time and I kept running in to brick walls. It has been quite confusing and frustrating. Maybe it's just me, but looking around at the scant examples people have attempted it doesn't look like it. I'm having to make the painful decision of dropping multiprocessing library and preferring Pyro until multiprocessing is more mature. While initially I was excited to learn multiprocessing being built into python, I am now thoroughly disgusted with it and would rather install the Pyro package many many times with glee that such a beautiful library exists for python. Long Answer I have used Pyro in past projects and have been very happy with it. I have also started to work with multiprocessing new in 2.6. With multiprocessing I found it a bit awkward to allow shared objects to be created as needed. It seems like, in its youth, the multiprocessing module has been more geared for functional programming as opposed to object-oriented. However this is not entirely true because it is possible to do, I'm just feeling constrained by the "register" calls. 
For example: manager.py: from multiprocessing import Process from multiprocessing.managers import BaseManager class Texture(object): def __init__(self, data): self.data = data def setData(self, data): print "Calling set data %s" % (data) self.data = data def getData(self): return self.data class TextureManager(BaseManager): def __init__(self, address=None, authkey=''): BaseManager.__init__(self, address, authkey) self.textures = {} def addTexture(self, name, texture): self.textures[name] = texture def hasTexture(self, name): return name in self.textures server.py: from multiprocessing import Process from multiprocessing.managers import BaseManager from manager import Texture, TextureManager manager = TextureManager(address=('', 50000), authkey='hello') def getTexture(name): if manager.hasTexture(name): return manager.textures[name] else: texture = Texture([0]*100) manager.addTexture(name, texture) manager.register(name, lambda: texture) TextureManager.register("getTexture", getTexture) if __name__ == "__main__": server = manager.get_server() server.serve_forever() client.py: from multiprocessing import Process from multiprocessing.managers import BaseManager from manager import Texture, TextureManager if __name__ == "__main__": manager = TextureManager(address=('127.0.0.1', 50000), authkey='hello') manager.connect() TextureManager.register("getTexture") texture = manager.getTexture("texture2") data = [2] * 100 texture.setData(data) print "data = %s" % (texture.getData()) The awkwardness I'm describing comes from server.py where I register a getTexture function to retrieve a function of a certain name from the TextureManager. As I'm going over this the awkwardness could probably be removed if I made the TextureManager a shareable object which creates/retrieves shareable textures. Meh I'm still playing, but you get the idea. I don't remember encountering this awkwardness using pyro, but there probably is a solution that's cleaner than the example above.
Comparison of the multiprocessing module and pyro?
I use pyro for basic management of parallel jobs on a compute cluster. I just moved to a cluster where I will be responsible for using all the cores on each compute node. (On previous clusters, each core has been a separate node.) The python multiprocessing module seems like a good fit for this. I notice it can also be used for remote-process communication. If anyone has used both frameworks for remote-process communication, I'd be grateful to hear how they stack up against each other. The obvious benefit of the multiprocessing module is that it's built-in from 2.6. Apart from that, it's hard for me to tell which is better.
[ "EDIT: I'm changing my answer so you avoid pain. multiprocessing is immature, the docs on BaseManager are INCORRECT, and if you're an object-oriented thinker that wants to create shared objects on the fly at run-time, USE PYRO OR YOU WILL SERIOUSLY REGRET IT! If you are just doing functional programming using a shared queue that you register up front like all the stupid examples GOOD FOR YOU.\nShort Answer\nMultiprocessing:\n\nFeels awkward doing object-oriented remote objects\nEasy breezy crypto (authkey)\nOver a network or just inter-process communication\nNo nameserver extra hassle like in Pyro (there are ways to get around this)\nEdit: Can't \"register\" objects once the manager is instantiated!!??\nEdit: If a server isn't not started, the client throws some \"Invalid argument\" exception instead of just saying \"Failed to connect\" WTF!?\nEdit: BaseManager documentation is incorrect! There is no \"start\" method!?!\nEdit: Very little examples as to how to use it.\n\nPyro:\n\nSimple remote objects\nNetwork comms only (loopback if local only)\nEdit: This thing just WORKS, and it likes object-oriented object sharing, which makes me LIKE it\nEdit: Why isn't THIS a part of the standard library instead of that multiprocessing piece of crap that tried to copy it and failed miserably?\n\nEdit: The first time I answered this I had just dived into 2.6 multiprocessing. In the code I show below, the Texture class is registered and shared as a proxy, however the \"data\" attribute inside of it is NOT. So guess what happens, each process has a separate copy of the \"data\" attribute inside of the Texture proxy, despite what you might expect. I just spent untold amount of hours trying to figure out how a good pattern to create shared objects during run-time and I kept running in to brick walls. It has been quite confusing and frustrating. Maybe it's just me, but looking around at the scant examples people have attempted it doesn't look like it.\nI'm having to make the painful decision of dropping multiprocessing library and preferring Pyro until multiprocessing is more mature. While initially I was excited to learn multiprocessing being built into python, I am now thoroughly disgusted with it and would rather install the Pyro package many many times with glee that such a beautiful library exists for python.\nLong Answer\nI have used Pyro in past projects and have been very happy with it. I have also started to work with multiprocessing new in 2.6. \nWith multiprocessing I found it a bit awkward to allow shared objects to be created as needed. It seems like, in its youth, the multiprocessing module has been more geared for functional programming as opposed to object-oriented. 
However this is not entirely true because it is possible to do, I'm just feeling constrained by the \"register\" calls.\nFor example:\nmanager.py:\nfrom multiprocessing import Process\nfrom multiprocessing.managers import BaseManager\n\nclass Texture(object):\n def __init__(self, data):\n self.data = data\n\n def setData(self, data):\n print \"Calling set data %s\" % (data)\n self.data = data\n\n def getData(self):\n return self.data\n\nclass TextureManager(BaseManager):\n def __init__(self, address=None, authkey=''):\n BaseManager.__init__(self, address, authkey)\n self.textures = {}\n\n def addTexture(self, name, texture):\n self.textures[name] = texture\n\n def hasTexture(self, name):\n return name in self.textures\n\nserver.py:\nfrom multiprocessing import Process\nfrom multiprocessing.managers import BaseManager\nfrom manager import Texture, TextureManager\n\nmanager = TextureManager(address=('', 50000), authkey='hello')\n\ndef getTexture(name):\n if manager.hasTexture(name):\n return manager.textures[name]\n else:\n texture = Texture([0]*100)\n manager.addTexture(name, texture)\n manager.register(name, lambda: texture)\n\nTextureManager.register(\"getTexture\", getTexture)\n\n\nif __name__ == \"__main__\":\n server = manager.get_server()\n server.serve_forever()\n\nclient.py:\nfrom multiprocessing import Process\nfrom multiprocessing.managers import BaseManager\nfrom manager import Texture, TextureManager\n\nif __name__ == \"__main__\":\n manager = TextureManager(address=('127.0.0.1', 50000), authkey='hello')\n manager.connect()\n TextureManager.register(\"getTexture\")\n texture = manager.getTexture(\"texture2\")\n data = [2] * 100\n texture.setData(data)\n print \"data = %s\" % (texture.getData())\n\nThe awkwardness I'm describing comes from server.py where I register a getTexture function to retrieve a function of a certain name from the TextureManager. As I'm going over this the awkwardness could probably be removed if I made the TextureManager a shareable object which creates/retrieves shareable textures. Meh I'm still playing, but you get the idea. I don't remember encountering this awkwardness using pyro, but there probably is a solution that's cleaner than the example above.\n" ]
[ 15 ]
[]
[]
[ "multiprocessing", "pyro", "python", "rpc" ]
stackoverflow_0001171767_multiprocessing_pyro_python_rpc.txt
Q: Writing a manager to filter query set results I have the following code: class GroupDepartmentManager(models.Manager): def get_query_set(self): return super(GroupDepartmentManager, self).get_query_set().filter(group='1') class Department(models.Model): name = models.CharField(max_length=128) group = models.ForeignKey(Group) def __str__(self): return self.name objects = GroupDepartmentManager() ... and it works fine. Only thing is that I need to replace group='1' with group=(the group specified by group = models.ForeignKey(Group)). I am having quite a time trying to determine whether that foreign key needs to be passed into the class, or into the get_query_set function, or what. I know that you can accomplish this with group.department_set.filter(group=desired group), but I am writing this model for the admin site, so I need to use a variable and not a constant after the = sign. A: I have a hunch that replacing the default Manager on objects in this manner might not be good idea, especially if you're planning on using the admin site... Even if it helps you with your Employees, it won't help you at all when handling Departments. How about a second property providing a restricted view on Departments alongside the usual objects? Or move the standard Manager from objects to _objects and rename from_same_group to objects if you really prefer your original approach for your app. class Department(models.Model): name = models.CharField(max_length=128) group = models.ForeignKey(Group) def __str__(self): return self.name objects = models.Manager() @property def from_same_group(self): return Department.objects.filter(group__exact=self.group) Also, I understand you know how to set up the admin site to take advantage of the funny Manager; if not (or if I misunderstood your question somehow), leave a comment, I'll try to follow up sometime soon. EDIT: OK, to make this more clear: if you do absolutely insist on replacing objects, you'd probably want to do this: class Department(models.Model): name = models.CharField(max_length=128) group = models.ForeignKey(Group) def __str__(self): return self.name _objects = models.Manager() @property def objects(self): # note the _objects in the next line return Department._objects.filter(group__exact=self.group) A: You might want to reconsider the relationship between the groups and departments if you find that trying to create a custom manager is too complex. Managers excel at simplifying common queries, but flop at displaying complex relationships between models and instances of models. However, I think this article about filtering model objects with a custom manager will point you in the right direction. The author proposes a technique to perform a function call that returns a customized manager class that has the filter parameters you specify saved in the class so they don't get passed to the instance. Do it!
Writing a manager to filter query set results
I have the following code: class GroupDepartmentManager(models.Manager): def get_query_set(self): return super(GroupDepartmentManager, self).get_query_set().filter(group='1') class Department(models.Model): name = models.CharField(max_length=128) group = models.ForeignKey(Group) def __str__(self): return self.name objects = GroupDepartmentManager() ... and it works fine. Only thing is that I need to replace group='1' with group=(the group specified by group = models.ForeignKey(Group)). I am having quite a time trying to determine whether that foreign key needs to be passed into the class, or into the get_query_set function, or what. I know that you can accomplish this with group.department_set.filter(group=desired group), but I am writing this model for the admin site, so I need to use a variable and not a constant after the = sign.
[ "I have a hunch that replacing the default Manager on objects in this manner might not be good idea, especially if you're planning on using the admin site... Even if it helps you with your Employees, it won't help you at all when handling Departments. How about a second property providing a restricted view on Departments alongside the usual objects? Or move the standard Manager from objects to _objects and rename from_same_group to objects if you really prefer your original approach for your app.\nclass Department(models.Model):\n name = models.CharField(max_length=128)\n group = models.ForeignKey(Group)\n def __str__(self):\n return self.name\n objects = models.Manager()\n\n @property\n def from_same_group(self):\n return Department.objects.filter(group__exact=self.group)\n\nAlso, I understand you know how to set up the admin site to take advantage of the funny Manager; if not (or if I misunderstood your question somehow), leave a comment, I'll try to follow up sometime soon.\n\nEDIT: OK, to make this more clear: if you do absolutely insist on replacing objects, you'd probably want to do this:\nclass Department(models.Model):\n name = models.CharField(max_length=128)\n group = models.ForeignKey(Group)\n def __str__(self):\n return self.name\n\n _objects = models.Manager()\n\n @property\n def objects(self):\n # note the _objects in the next line\n return Department._objects.filter(group__exact=self.group)\n\n", "You might want to reconsider the relationship between the groups and departments if you find that trying to create a custom manager is too complex. Managers excel at simplifying common queries, but flop at displaying complex relationships between models and instances of models. \nHowever, I think this article about filtering model objects with a custom manager will point you in the right direction. The author proposes a technique to perform a function call that returns a customized manager class that has the filter parameters you specify saved in the class so they don't get passed to the instance. Do it!\n" ]
[ 4, 2 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001954664_django_django_models_python.txt
Q: i don't know __iter__ in python,who can give me a good code example my code run wrong class a(object): def __iter(self): return 33 b={'a':'aaa','b':'bbb'} c=a() print b.itervalues() print c.itervalues() Please try to use the code, rather than text, because my English is not very good, thank you A: a. Spell it right: not def __iter(self): but: def __iter__(self): with __ before and after iter. b. Make the body right: not return 33 but: yield 33 or return iter([33]) If you return a value from __iter__, return an iterator (an iterable, as in return [33], is almost as good but not quite...); or else, yield 1+ values, making __iter__ into a generator function (so it intrinsically returns a generator iterator). c. Call it right: not a().itervalues() but, e.g.: for x in a(): print x or print list(a()) itervalues is a method of dict, and has nothing to do with __iter__. If you fix all three (!) mistakes, the code works better;-). A: A few things about your code: __iter should be __iter__ You're returning '33' in the __iter__ function. You should actually be returning an iterator object. An iterator is an object which keeps returning different values when it's next() function is called (maybe a sequence of values like [0,1,2,3 etc]). Here's a working example of an iterator: class a(object): def __init__(self,x=10): self.x = x def __iter__(self): return self def next(self): if self.x > 0: self.x-=1 return self.x else: raise StopIteration c=a() for x in c: print x Any object of class a is an iterator object. Calling the __iter__ function is supposed to return the iterator, so it returns itself – as you can see, the a class has a next() function, so this is an iterator object. When the next function is called, it keeps return consecutive values until it hits zero, and then it sends the StopIteration exception, which (appropriately) stops the iteration. If this seems a little hazy, I would suggest experimenting with the code and then checking out the documentation here: http://docs.python.org/library/stdtypes.html A: Here is a code example that implements the xrange builtin: class my_xrange(object): def __init__(self, start, end, skip=1): self.curval = int(start) self.lastval = int(end) self.skip = int(skip) assert(int(skip) != 0) def __iter__(self): return self def next(self): if (self.skip > 0) and (self.curval >= self.lastval): raise StopIteration() elif (self.skip < 0) and (self.curval <= self.lastval): raise StopIteration() else: oldval = self.curval self.curval += self.skip return oldval for i in my_xrange(0, 10): print i A: You are using this language feature incorrectly. http://docs.python.org/library/stdtypes.html#iterator-types This above link will explain what the function should be used for. You can try to see documentation in your native language here: http://wiki.python.org/moin/Languages
I don't know __iter__ in Python; who can give me a good code example?
my code run wrong class a(object): def __iter(self): return 33 b={'a':'aaa','b':'bbb'} c=a() print b.itervalues() print c.itervalues() Please try to use the code, rather than text, because my English is not very good, thank you
[ "a. Spell it right: not\n def __iter(self):\n\nbut:\n def __iter__(self):\n\nwith __ before and after iter.\nb. Make the body right: not\nreturn 33\n\nbut:\nyield 33\n\nor\n return iter([33])\nIf you return a value from __iter__, return an iterator (an iterable, as in return [33], is almost as good but not quite...); or else, yield 1+ values, making __iter__ into a generator function (so it intrinsically returns a generator iterator).\nc. Call it right: not\na().itervalues()\n\nbut, e.g.:\nfor x in a(): print x\n\nor\nprint list(a())\n\nitervalues is a method of dict, and has nothing to do with __iter__.\nIf you fix all three (!) mistakes, the code works better;-).\n", "A few things about your code:\n\n__iter should be __iter__\nYou're returning '33' in the __iter__ function. You should actually be returning an iterator object. An iterator is an object which keeps returning different values when it's next() function is called (maybe a sequence of values like [0,1,2,3 etc]).\n\nHere's a working example of an iterator:\nclass a(object):\n def __init__(self,x=10):\n self.x = x\n def __iter__(self):\n return self\n def next(self):\n if self.x > 0:\n self.x-=1\n return self.x\n else:\n raise StopIteration\n\nc=a()\n\nfor x in c:\n print x\n\nAny object of class a is an iterator object. Calling the __iter__ function is supposed to return the iterator, so it returns itself – as you can see, the a class has a next() function, so this is an iterator object.\nWhen the next function is called, it keeps return consecutive values until it hits zero, and then it sends the StopIteration exception, which (appropriately) stops the iteration.\nIf this seems a little hazy, I would suggest experimenting with the code and then checking out the documentation here: http://docs.python.org/library/stdtypes.html\n", "Here is a code example that implements the xrange builtin:\nclass my_xrange(object):\n def __init__(self, start, end, skip=1):\n self.curval = int(start)\n self.lastval = int(end)\n self.skip = int(skip)\n assert(int(skip) != 0)\n\n def __iter__(self):\n return self\n\n def next(self):\n if (self.skip > 0) and (self.curval >= self.lastval):\n raise StopIteration()\n elif (self.skip < 0) and (self.curval <= self.lastval):\n raise StopIteration()\n else:\n oldval = self.curval\n self.curval += self.skip\n return oldval\n\nfor i in my_xrange(0, 10):\n print i\n\n", "You are using this language feature incorrectly.\nhttp://docs.python.org/library/stdtypes.html#iterator-types\nThis above link will explain what the function should be used for.\nYou can try to see documentation in your native language here: http://wiki.python.org/moin/Languages\n" ]
[ 17, 5, 1, 0 ]
[]
[]
[ "iterator", "python" ]
stackoverflow_0001956623_iterator_python.txt
Q: Adding authentication to beanstalkd from Python (or any UNIX) client So what I like about beanstalkd: small, lightweight, has priorities for messages, has a great set of clients, easy to use. What I dislike about beanstalkd: the lack of authentication menaing if you can connect to the port you can insert messages into it. So my thoughts are to either firewall it to trusted systems (which is a pain to maintain and external to the application adding another layer of stuff to do) or to wrap it in TLS/SSL using something like stunnel (which will incur a good chunk of overhead with respect to establishing connections and whatnot). I did think of maybe signing jobs (MD5 or SHA of job string+time value+secret appended to the job), but if an attacker were to flood the server with bogus jobs I'd still be in trouble. Can anyone think of any other methods to secure the beanstalkd against insertion of bogus messages from an attacker? Especially those that don't incur a lot of overhead computationally or administratively. A: I have to disagree about the practice of just having connections being held open indefinitely, since I use BeanstalkD from a web-scripting language (php) for various events. The overhead of opening a secure connection would be something I would have to think very carefully over. Like Memcached, beanstalkd is designed for use in a trusted environment - behind the firewall. If you don't control the entire private network, then limiting access to a set of machines (by IP address) would be a typical way of controlling that. Putting in a security hash to then throw away invalid jobs is not difficult, and has little work or overhead to check, but wouldn't stop a flood of jobs being sent. The questions to ask are 'How often are your machines likely to be added to (at random IP addresses outside of a given range), and how likely is a third party that is also on the local network would want to inject random jobs to your queues?'. The first part is about how much work is it to firewall the machines off, the latter is about do you need to anyway? A: This question really belongs on the beanstalkd talk list. I added SASL support to memcached recently for a similar reason. The overhead is almost irrelevant in practice since you only authenticate at connect time (and you hold connections open indefinitely). If authentication is something you need, I'd recommend bringing it up there where people are likely to help you solve your problems. A: I do two things that reduce the issue you are refererring to: First I always run beanstalkd on 127.0.0.1 Second, I normally serialize the job structure, and load a "secret key" encrypted base64 digest as the job string. Only workers that can decrypt the job string correctly can parse jobs. I know that this is certainly not a substitute for authentication. But I hope they do minimize to some extent some one hijacking enqueued jobs.
Adding authentication to beanstalkd from Python (or any UNIX) client
So what I like about beanstalkd: small, lightweight, has priorities for messages, has a great set of clients, easy to use. What I dislike about beanstalkd: the lack of authentication menaing if you can connect to the port you can insert messages into it. So my thoughts are to either firewall it to trusted systems (which is a pain to maintain and external to the application adding another layer of stuff to do) or to wrap it in TLS/SSL using something like stunnel (which will incur a good chunk of overhead with respect to establishing connections and whatnot). I did think of maybe signing jobs (MD5 or SHA of job string+time value+secret appended to the job), but if an attacker were to flood the server with bogus jobs I'd still be in trouble. Can anyone think of any other methods to secure the beanstalkd against insertion of bogus messages from an attacker? Especially those that don't incur a lot of overhead computationally or administratively.
[ "I have to disagree about the practice of just having connections being held open indefinitely, since I use BeanstalkD from a web-scripting language (php) for various events. The overhead of opening a secure connection would be something I would have to think very carefully over.\nLike Memcached, beanstalkd is designed for use in a trusted environment - behind the firewall. If you don't control the entire private network, then limiting access to a set of machines (by IP address) would be a typical way of controlling that. Putting in a security hash to then throw away invalid jobs is not difficult, and has little work or overhead to check, but wouldn't stop a flood of jobs being sent. \nThe questions to ask are 'How often are your machines likely to be added to (at random IP addresses outside of a given range), and how likely is a third party that is also on the local network would want to inject random jobs to your queues?'. The first part is about how much work is it to firewall the machines off, the latter is about do you need to anyway?\n", "This question really belongs on the beanstalkd talk list.\nI added SASL support to memcached recently for a similar reason. The overhead is almost irrelevant in practice since you only authenticate at connect time (and you hold connections open indefinitely).\nIf authentication is something you need, I'd recommend bringing it up there where people are likely to help you solve your problems.\n", "I do two things that reduce the issue you are refererring to:\nFirst I always run beanstalkd on 127.0.0.1\nSecond, I normally serialize the job structure, and load a \"secret key\" encrypted base64 digest as the job string. Only workers that can decrypt the job string correctly can parse jobs.\nI know that this is certainly not a substitute for authentication. But I hope they do minimize to some extent some one hijacking enqueued jobs.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "beanstalkd", "python" ]
stackoverflow_0001692346_beanstalkd_python.txt
Q: How to get latest timestamp in a column? I have a timestamp column in my t1 table. The format is: 2009-12-24 06:17:34 There are many such entries. How do we query from views to get the latest timestamp? A: ModelClass.objects.latest(timestamp_field)
How to get latest timestamp in a column?
I have a timestamp column in my t1 table. The format is: 2009-12-24 06:17:34 There are many such entries. How do we query from views to get the latest timestamp?
[ "ModelClass.objects.latest(timestamp_field)\n\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001957021_django_python.txt
Q: why my "eval" function doesn't work ,i think it can be print 'b',but not a='''b="ddd"''' eval(repr(a)) print str(a) print b Please try to use the code, rather than text, because my English is not very good, thank you A: Use: eval(compile(a,'<string>','exec')) instead of: eval(repr(a)) Transcript: >>> a='''b="ddd"''' >>> eval(compile(a,'<string>','exec')) >>> print str(a) b="ddd" >>> print b ddd The problem is that you're actually executing the statement 'b="ddd"' which is not an assignment to b but an evaluation of the string. The eval() built-in, when given a string, evaluates it as an expression (not a statement) and returns the result. You can get eval() to run non-expression code by giving it a code object, which we create with compile() above. In that case it runs the code and returns None. You can see a similar effect if you just enter: >>> 'c=7' 'c=7' >>> c Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'c' is not defined >>> c=7 >>> c 7 >>> '7=d' '7=d' >>> 7=d File "<stdin>", line 1 SyntaxError: can't assign to literal Clearly 7=d is not valid Python, yet '7=d' is, for the reason explained above. Descriptions of the expr(), repr() and compile() built-ins, adequate enough to work this out, were found here. No built-ins were harmed during the making of this answer. A: eval is used to evaluate (get the result of) an expression. What you want is dynamic execution of Python code, which is done with exec: >>> a='''b="ddd"''' >>> exec(a) >>> print b ddd Also note that you should not call repr() before passing the string to either function. You already have a string, calling repr() creates a string representation of a string. A: Reconsider whether you really need to use eval(). For example, you can use globals() like this: >>> globals()['b'] = 'ddd' >>> print b ddd But perhaps what you should probably be using is just a dictionary: >>> my_namespace = dict() >>> my_namespace['b'] = 'ddd' >>> my_namespace {'b': 'ddd'} >>> print my_namespace['b'] ddd
why my "eval" function doesn't work ,i think it can be print 'b',but not
a='''b="ddd"''' eval(repr(a)) print str(a) print b Please try to use the code, rather than text, because my English is not very good, thank you
[ "Use:\neval(compile(a,'<string>','exec'))\n\ninstead of:\neval(repr(a))\n\nTranscript:\n>>> a='''b=\"ddd\"'''\n>>> eval(compile(a,'<string>','exec'))\n>>> print str(a)\nb=\"ddd\"\n>>> print b\nddd\n\nThe problem is that you're actually executing the statement 'b=\"ddd\"' which is not an assignment to b but an evaluation of the string.\nThe eval() built-in, when given a string, evaluates it as an expression (not a statement) and returns the result. You can get eval() to run non-expression code by giving it a code object, which we create with compile() above. In that case it runs the code and returns None.\nYou can see a similar effect if you just enter:\n>>> 'c=7'\n'c=7'\n>>> c\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'c' is not defined\n>>> c=7\n>>> c\n7\n>>> '7=d'\n'7=d'\n>>> 7=d\n File \"<stdin>\", line 1\nSyntaxError: can't assign to literal\n\nClearly 7=d is not valid Python, yet '7=d' is, for the reason explained above.\nDescriptions of the expr(), repr() and compile() built-ins, adequate enough to work this out, were found here. No built-ins were harmed during the making of this answer.\n", "eval is used to evaluate (get the result of) an expression. What you want is dynamic execution of Python code, which is done with exec:\n>>> a='''b=\"ddd\"'''\n>>> exec(a)\n>>> print b\nddd\n\nAlso note that you should not call repr() before passing the string to either function. You already have a string, calling repr() creates a string representation of a string.\n", "Reconsider whether you really need to use eval(). For example, you can use globals() like this:\n>>> globals()['b'] = 'ddd'\n>>> print b\nddd\n\nBut perhaps what you should probably be using is just a dictionary:\n>>> my_namespace = dict()\n>>> my_namespace['b'] = 'ddd'\n>>> my_namespace\n{'b': 'ddd'}\n>>> print my_namespace['b']\nddd\n\n" ]
[ 3, 2, 0 ]
[]
[]
[ "eval", "exec", "python" ]
stackoverflow_0001957086_eval_exec_python.txt
Q: How do I generate a random string (of length X, a-z only) in Python? Possible Duplicate: python random string generation with upper case letters and digits How do I generate a String of length X a-z in Python? A: ''.join(random.choice(string.lowercase) for x in range(X)) A: If you want no repetitions: import string, random ''.join(random.sample(string.ascii_lowercase, X)) If you DO want (potential) repetitions: import string, random ''.join(random.choice(string.ascii_lowercase) for _ in xrange(X)) That's assuming that by a-z you mean "ASCII lowercase characters", otherwise your alphabet might be expressed differently in these expressions (e.g., string.lowercase for "locale dependent lowercase letters" that may include accented or otherwise decorated lowercase letters depending on your current locale).
How do I generate a random string (of length X, a-z only) in Python?
Possible Duplicate: python random string generation with upper case letters and digits How do I generate a String of length X a-z in Python?
[ "''.join(random.choice(string.lowercase) for x in range(X))\n\n", "If you want no repetitions:\nimport string, random\n''.join(random.sample(string.ascii_lowercase, X))\n\nIf you DO want (potential) repetitions:\nimport string, random\n''.join(random.choice(string.ascii_lowercase) for _ in xrange(X)))\n\nThat's assuming that by a-z you mean \"ASCII lowercase characters\", otherwise your alphabet might be expressed differently in these expression (e.g., string.lowercase for \"locale dependent lowercase letters\" that may include accented or otherwise decorated lowercase letters depending on your current locale).\n" ]
[ 64, 31 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001957273_python_string.txt
Q: Why is iter() (not __iter__) used here, and what does iter do in this code?
I don't understand the line self._iterator = iter(self._container) in the following code. In django.http:
class HttpResponse(object):
    def __iter__(self):
        self._iterator = iter(self._container)
        return self

    def next(self):
        chunk = self._iterator.next()
        if isinstance(chunk, unicode):
            chunk = chunk.encode(self._charset)
        return str(chunk)

I read the API documentation:

Return an iterator object. The first argument is interpreted very differently depending on the presence of the second argument. Without a second argument, o must be a collection object which supports the iteration protocol (the __iter__() method), or it must support the sequence protocol (the __getitem__() method with integer arguments starting at 0). If it does not support either of those protocols, TypeError is raised. If the second argument, sentinel, is given, then o must be a callable object. The iterator created in this case will call o with no arguments for each call to its next() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned. One useful application of the second form of iter() is to read lines of a file until a certain line is reached. The following example reads a file until "STOP" is reached:

but I still don't understand what the iter function does. I do understand __iter__:
class a(object):
    def __init__(self, x=10):
        self.x = x
    def __iter__(self):
        return self
    def next(self):
        if self.x > 0:
            self.x -= 1
            return self.x
        else:
            raise StopIteration

Please answer with code rather than text, because my English is not very good. Thank you.

A: An iterator can be iterated:
for item in mylist:
    print item

for key, item in enumerate(mylist):
    print key, ":", item

for i in range(0, 50):
    print i

To use for item in X, X must be iterable. You can make your class iterable by adding next(self) etc., as in your sample. So with
class a(object):
    def __init__(self, x=10):
        self.x = x
    def __iter__(self):
        return self
    def next(self):
        if self.x > 0:
            self.x -= 1
            return self.x
        else:
            raise StopIteration

you can then do
ainst = a()
for item in ainst:
    print item

A: HttpResponse is a class that can store string data. The data is stored in a member variable called _container.
Suppose hr is an instance of HttpResponse with data inside it. When you call iter(hr) you should get back an iterator. This iterator will return data from the _container member variable.
This class "wraps" the _container member so that it can always return non-Unicode text. Because this class has an __iter__() method, when you call iter() you are really calling the special __iter__() method. This method actually does call iter() on the _container member variable to get an iterator for its contents. But then it saves this iterator in the _iterator member variable, and returns self. Now it is ready to iterate.
There is a next() method defined. If the chunk is Unicode, it calls encode() to encode the Unicode in some encoding and return non-Unicode text. It uses another member variable, _charset, to know which charset to use for the encoding. If the chunk is not Unicode, it must be an ordinary string, and the data is simply returned unchanged.
In this way, the object "wrapped" in this class can be iterated and always returns non-Unicode text.
I am surprised by this implementation of the iterator protocol. When it returns an iterator to you, it is just returning self, so if you call iter() twice, you do not actually get two usable iterators back. This seems like it could be dangerous. I guess Django code never does anything like that.
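A short sketch of the pitfall flagged in the last answer above, using a simplified stand-in class (not Django's actual HttpResponse) so it runs on its own:

class Response(object):  # hypothetical stand-in for HttpResponse
    def __init__(self, container):
        self._container = container
    def __iter__(self):
        self._iterator = iter(self._container)  # fresh iterator over the data...
        return self                             # ...but always the same object back
    def next(self):
        return str(self._iterator.next())

r = Response(['a', 'b', 'c'])
it1 = iter(r)
it2 = iter(r)     # this call resets the _iterator that it1 also uses
print it1.next()  # 'a'
print it2.next()  # 'b' -- the two "iterators" share state

Because __iter__ returns self, the two names refer to one object, and consuming one advances the other.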
Why is iter() (not __iter__) used here, and what does iter do in this code?
I don't understand the line self._iterator = iter(self._container) in the following code. In django.http:
class HttpResponse(object):
    def __iter__(self):
        self._iterator = iter(self._container)
        return self

    def next(self):
        chunk = self._iterator.next()
        if isinstance(chunk, unicode):
            chunk = chunk.encode(self._charset)
        return str(chunk)

I read the API documentation:

Return an iterator object. The first argument is interpreted very differently depending on the presence of the second argument. Without a second argument, o must be a collection object which supports the iteration protocol (the __iter__() method), or it must support the sequence protocol (the __getitem__() method with integer arguments starting at 0). If it does not support either of those protocols, TypeError is raised. If the second argument, sentinel, is given, then o must be a callable object. The iterator created in this case will call o with no arguments for each call to its next() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned. One useful application of the second form of iter() is to read lines of a file until a certain line is reached. The following example reads a file until "STOP" is reached:

but I still don't understand what the iter function does. I do understand __iter__:
class a(object):
    def __init__(self, x=10):
        self.x = x
    def __iter__(self):
        return self
    def next(self):
        if self.x > 0:
            self.x -= 1
            return self.x
        else:
            raise StopIteration

Please answer with code rather than text, because my English is not very good. Thank you.
[ "An iterator can be iterated:\nfor item in mylist:\n print item\n\nfor key,item in enumerate(mylist):\n print key,\":\",item\n\nfor i in range(0,50):\n print i\n\nTo use for item in X, X must be iterable.\nYou can make your class iterable by adding next(self) etc, as in your sample. So with\nclass a(object):\n def __init__(self,x=10):\n self.x = x\n def __iter__(self):\n return self\n def next(self):\n if self.x > 0:\n self.x-=1\n return self.x\n else:\n raise StopIteration\n\nThen you can do\n ainst = a()\n for item in ainst:\n print item\n\n", "HttpResponse is a class that can store string data. The data is stored in a member variable called _container.\nSuppose hr is an instance of HttpResponse with data inside it. When you call iter(hr) then you should get back an iterator. This iterator will return data from the _container member variable.\nThis class \"wraps\" the _container member so that it can always return non-Unicode text. Because this class has an __iter__() method function, when you call iter() you are really calling the special __iter__() method function. This method function actually does call iter() on the _container member variable to get an iterator for its contents. But then it saves this iterator in the _iterator member variable, and returns self. Now it is ready to iterate.\nThere is a next() method function defined. If the chunk is Unicode, it calls encode() to encode the Unicode in some encoding and return non-Unicode text. It uses another member variable, _charset, to know which charset to use for the encoding. If the chunk is not Unicode, it must be an ordinary string type, and the data is simply returned unchanged.\nIn this way, the object \"wrapped\" in this class can be iterated and always return non-Unicode text.\nI am surprised by this implementation of the iterator protocol. When it returns an iterator to you, it is just returning self, so if you call iter() twice, you do not actually get two usable iterators back. This seems like it could be dangerous. I guess Django code never does anything like that.\n" ]
[ 1, 0 ]
[]
[]
[ "iterator", "python" ]
stackoverflow_0001957329_iterator_python.txt
Q: Why doesn't Python's super() accept just an instance?
In Python 2.x, super accepts the following cases:
class super(object)
 |  super(type) -> unbound super object
 |  super(type, obj) -> bound super object; requires isinstance(obj, type)
 |  super(type, type2) -> bound super object; requires issubclass(type2, type)
 |  Typical use to call a cooperative superclass method:

As far as I can see, super is a class wrapping the type and (eventually) the instance to resolve the superclass of a class. I'm rather puzzled by a couple of things:

Why is there no super(instance), with typical usage e.g. super(self).__init__()? Technically, you can obtain the type of an object from the object itself, so the current strategy super(ClassType, self).__init__() seems redundant. I assume there are compatibility issues with old-style classes, or multiple inheritance, but I'd like to hear your point.
Why, on the other hand, will Python 3 accept (see Understanding Python super() with __init__() methods) super().__init__()? I see a kind of magic in this, violating the "explicit is better than implicit" Zen. I would have found self.super().__init__() more appropriate.

A: super(ClassType, self).__init__() is not redundant in a cooperative multiple inheritance scheme -- ClassType is not necessarily the type of self, but the class from which you want to do the cooperative call to __init__.
In the class hierarchy C inherits B inherits A, in C.__init__ you want to call the superclass' init from C's perspective, and you call B.__init__; then in B.__init__ you must pass the class type B to super -- since you want to resolve calling superclasses of B (or rather, the next in the MRO after B of the class C).
class A (object):
    def __init__(self):
        pass

class B (A):
    def __init__(self):
        super(B, self).__init__()

class C (B):
    def __init__(self):
        super(C, self).__init__()

If you now instantiate c = C(), you see that the class type is not redundant -- super(self).__init__() inside B.__init__ would not really work! What you do is manually specify in which class the method calling super is defined (and this is solved in Python 3's super by a hidden variable pointing to the method's class).
Two links with examples of super and multiple inheritance:

Things to Know About Python Super (1 of 3)
Python's Super is nifty, but you can't use it

A: I can't provide a specific answer, but have you read the PEPs around the super keyword? I did a quick Google search and it came up with PEP 367 and PEP 3135.
http://www.python.org/dev/peps/pep-0367/
http://www.python.org/dev/peps/pep-3135/#numbering-note
Unlike any other language I know of, most of the time you can find the answers to Python's quirks in the PEPs, along with clear rationale and position statements.
Update:
Having read through 3135, related emails on the Python mailing list and the language reference, it kind of makes sense why it is the way it is for Python 2 vs Python 3.
http://docs.python.org/library/functions.html?highlight=super#super
I think super was implemented to be explicit/redundant just to be on the safe side and to keep the logic involved as simple as possible (no sugar or deep logic to find the parent). Since super is a builtin function, it has to infer the correct return from what is provided without adding more complication to how Python objects are structured.
PEP 3135 changes everything because it presented and won an argument for a DRYer approach to super.
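A hedged sketch of the contrast discussed above (class names are illustrative; the commented-out form only parses under Python 3, where the compiler supplies the enclosing class through a hidden __class__ cell, per PEP 3135):

class A(object):
    def __init__(self):
        print 'A.__init__'

class B(A):
    def __init__(self):
        super(B, self).__init__()  # Python 2: the class is named explicitly

B()  # prints 'A.__init__'

# The Python 3 spelling of the same call needs no arguments:
#     class B(A):
#         def __init__(self):
#             super().__init__()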
Why doesn't Python's super() accept just an instance?
In Python 2.x, super accepts the following cases:
class super(object)
 |  super(type) -> unbound super object
 |  super(type, obj) -> bound super object; requires isinstance(obj, type)
 |  super(type, type2) -> bound super object; requires issubclass(type2, type)
 |  Typical use to call a cooperative superclass method:

As far as I can see, super is a class wrapping the type and (eventually) the instance to resolve the superclass of a class. I'm rather puzzled by a couple of things:

Why is there no super(instance), with typical usage e.g. super(self).__init__()? Technically, you can obtain the type of an object from the object itself, so the current strategy super(ClassType, self).__init__() seems redundant. I assume there are compatibility issues with old-style classes, or multiple inheritance, but I'd like to hear your point.
Why, on the other hand, will Python 3 accept (see Understanding Python super() with __init__() methods) super().__init__()? I see a kind of magic in this, violating the "explicit is better than implicit" Zen. I would have found self.super().__init__() more appropriate.
[ "super(ClassType, self).__init__() is not redundant in a cooperative multiple inheritance scheme -- ClassType is not necessarily the type of self, but the class from which you want to do the cooperative call to __init__.\nIn the class hierarchy C inherits B inherits A, in C.__init__ you want to call superclass' init from C's perspective, and you call B.__init__; then in B.__init__ you must pass the class type B to super -- since you want to resolve calling superclasses of B (or rather, the next in the mro after B of the class C).\nclass A (object):\n def __init__(self):\n pass\n\nclass B (A):\n def __init__(self):\n super(B, self).__init__()\n\nclass C (B):\n def __init__(self):\n super(C, self).__init__()\n\nif you now instantiate c = C(), you see that the class type is not redundant -- super(self).__init__() inside B.__init__ would not really work! What you do is that you manually specify in which class the method calling super is (and this is solved in Python 3's super by a hidden variable pointing to the method's class).\nTwo links with examples of super and multiple inheritance:\n\nThings to Know About Python Super (1 of 3)\nPython's Super is nifty, but you can't use it\n\n", "I can't provide a specific answer, but have you read the PEP's around the super keyword? I did a quick google search and it came up with PEP 367 and PEP 3135.\nhttp://www.python.org/dev/peps/pep-0367/\nhttp://www.python.org/dev/peps/pep-3135/#numbering-note\nUnlike any other language I know of, most of the time you can find the answers to Python's quirks in the PEP's along with clear rational and position statements.\nUpdate:\nHaving read through 3135, related emails in the Python Mailing and the language reference it kind of makes sense why it is the way it is for Python 2 vs Python 3.\nhttp://docs.python.org/library/functions.html?highlight=super#super\nI think super was implemented to be explicit/redundant just to be on the safe side and keep the logic involved as simple as possible ( no sugar or deep logic to find the parent ). Since super is a builtin function, it has to infer the correct return from what is provided without adding more complication to how Python objects are structured.\nPEP 3135 changes everything because it presented and won an argument for a DRY'er approach to super.\n" ]
[ 6, 3 ]
[]
[]
[ "language_design", "python" ]
stackoverflow_0001957251_language_design_python.txt
Q: How to override the [] operator in Python?
What is the name of the method to override the [] operator (subscript notation) for a class in Python?

A: You need to use the __getitem__ method.
class MyClass:
    def __getitem__(self, key):
        return key * 2

myobj = MyClass()
myobj[3]  # Output: 6

And if you're going to be setting values you'll need to implement the __setitem__ method too, otherwise this will happen:
>>> myobj[5] = 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: MyClass instance has no attribute '__setitem__'

A: To fully overload it you also need to implement the __setitem__ and __delitem__ methods.
edit
I almost forgot... if you want to completely emulate a list, you also need __getslice__, __setslice__ and __delslice__.
These are all documented in http://docs.python.org/reference/datamodel.html

A: You are looking for the __getitem__ method. See http://docs.python.org/reference/datamodel.html, section 3.4.6
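A minimal sketch (the class name is illustrative) combining the three methods mentioned in the answers, so that instances support d[key], d[key] = value and del d[key]:

class MyContainer(object):
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):         # d[key]
        return self._data[key]
    def __setitem__(self, key, value):  # d[key] = value
        self._data[key] = value
    def __delitem__(self, key):         # del d[key]
        del self._data[key]

d = MyContainer()
d['a'] = 1
print d['a']  # 1
del d['a']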
How to override the [] operator in Python?
What is the name of the method to override the [] operator (subscript notation) for a class in Python?
[ "You need to use the __getitem__ method.\nclass MyClass:\n def __getitem__(self, key):\n return key * 2\n\nmyobj = MyClass()\nmyobj[3] #Output: 6\n\nAnd if you're going to be setting values you'll need to implement the __setitem__ method too, otherwise this will happen:\n>>> myobj[5] = 1\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: MyClass instance has no attribute '__setitem__'\n\n", "To fully overload it you also need to implement the __setitem__ and __delitem__ methods.\nedit\nI almost forgot... if you want to completely emulate a list, you also need __getslice__, __setslice__ and __delslice__.\nThese are all documented in http://docs.python.org/reference/datamodel.html\n", "You are looking for the __getitem__ method. See http://docs.python.org/reference/datamodel.html, section 3.4.6\n" ]
[ 406, 80, 22 ]
[]
[]
[ "operator_overloading", "python" ]
stackoverflow_0001957780_operator_overloading_python.txt
Q: Deleting old folders with the datetime module
I am trying to delete old folders. Does anyone know how to set up a variable that lets me take the variable 'todaystr' (today's date) and subtract 7 days from it, storing the result in another variable? I want to automatically delete old files after a week. Below is how 'todaystr' is set up:
todaystr = datetime.date.today().isoformat()

I would like to create a variable 'oldfile' that stores the current date minus 7 days so I can delete the file with this date. Thanks for any help.

A: import datetime
import os
import shutil

threshold = datetime.datetime.now() + datetime.timedelta(days=-7)
file_time = datetime.datetime.fromtimestamp(os.path.getmtime('/folder_name'))

if file_time < threshold:
    shutil.rmtree('/folder_name')

A: In relation to the above answer, it works very well; the code I used was different in the end. I create the name of the folder with the current date, so when the nightly build runs it will only delete the folder named with the date from 7 days ago. The code is as follows:
import datetime
import os
import calendar

today = datetime.date.today()
todaystr = datetime.date.today().isoformat()
minus_seven = (today - datetime.timedelta(days=7)).isoformat()

if os.path.exists(minus_seven):
    os.system("sudo rm -rf " + minus_seven)
    print 'Sandboxes from 7 days ago removed'

I used Linux to delete the folder, as I have some Linux incorporated into my code, and it runs well like this.
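A sketch generalizing the first answer to a whole directory (the sandboxes path is hypothetical): it walks the immediate subfolders and removes any whose modification time is more than a week old.

import datetime
import os
import shutil

root = 'sandboxes'  # hypothetical parent directory
threshold = datetime.datetime.now() - datetime.timedelta(days=7)

for name in os.listdir(root):
    path = os.path.join(root, name)
    if os.path.isdir(path):
        mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
        if mtime < threshold:
            shutil.rmtree(path)  # permanently removes the folder tree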
Deleting old folders with the datetime module
I am trying to delete old folders. Does anyone know how to set up a variable that lets me take the variable 'todaystr' (today's date) and subtract 7 days from it, storing the result in another variable? I want to automatically delete old files after a week. Below is how 'todaystr' is set up:
todaystr = datetime.date.today().isoformat()

I would like to create a variable 'oldfile' that stores the current date minus 7 days so I can delete the file with this date. Thanks for any help.
[ "import datetime\nimport os\nimport shutil\n\nthreshold = datetime.datetime.now() + datetime.timedelta(days=-7)\nfile_time = datetime.datetime.fromtimestamp(os.path.getmtime('/folder_name'))\n\nif file_time < threshold:\n shutil.rmtree('/folder_name')\n\n", "In relation to the above answer, it works very well; the code I used was different in the end. I create the name of the folder with the current date, so when the nightly build runs it will only delete the folder named with the date from 7 days ago. The code is as follows:\nimport datetime \nimport os \nimport calendar \n\ntoday = datetime.date.today()\ntodaystr = datetime.date.today().isoformat()\nminus_seven = (today - datetime.timedelta(days=7)).isoformat()\n\n\nif os.path.exists(minus_seven):\n os.system(\"sudo rm -rf \"+minus_seven)\n print 'Sandboxes from 7 days ago removed'\n\nI used Linux to delete the folder, as I have some Linux incorporated into my code, and it runs well like this.\n" ]
[ 4, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001953958_python.txt
Q: Mapping two lists
I have two lists like this:
list1 = [{'id':1, 'name':'foo', 'age':20}, {'id':2, 'name':'foo', 'age':20}]
list2 = [{'id':2, 'created':'2004-12-23'}, {'id':12, 'created':'2004-12-23'}, {'id':1, 'created':'2004-12-23'}]

and I want list1 to end up as:
list1 = [{'id':1, 'name':'foo', 'age':20, 'match':True}, {'id':2, 'name':'foo', 'age':20, 'match':True}]

I want to add 'match' to the corresponding dict in list1 if an id in list1 also appears in list2. How would I do that efficiently?

A: set2 = set(x['id'] for x in list2)
for entry in list1:
    if entry['id'] in set2:
        entry['match'] = True

OR
set2 = set(x['id'] for x in list2)
for entry in list1:
    entry['match'] = entry['id'] in set2
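If the created dates should be copied over as well, a dictionary keyed on id keeps the work to one pass over each list (a sketch, assuming ids are unique within list2):

by_id = dict((d['id'], d) for d in list2)  # id -> list2 entry
for entry in list1:
    match = by_id.get(entry['id'])
    entry['match'] = match is not None
    if match:
        entry['created'] = match['created']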
Mapping two lists
I have two lists like this:
list1 = [{'id':1, 'name':'foo', 'age':20}, {'id':2, 'name':'foo', 'age':20}]
list2 = [{'id':2, 'created':'2004-12-23'}, {'id':12, 'created':'2004-12-23'}, {'id':1, 'created':'2004-12-23'}]

and I want list1 to end up as:
list1 = [{'id':1, 'name':'foo', 'age':20, 'match':True}, {'id':2, 'name':'foo', 'age':20, 'match':True}]

I want to add 'match' to the corresponding dict in list1 if an id in list1 also appears in list2. How would I do that efficiently?
[ "set2 = set(x['id'] for x in list2)\nfor entry in list1:\n if entry['id'] in set2:\n entry['match'] = True\n\nOR\nset2 = set(x['id'] for x in list2)\nfor entry in list1:\n entry['match'] = entry['id'] in set2\n\n" ]
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0001957877_python.txt
Q: char to keycode in python
I want to be able to translate a string to keycodes in order to write it with Xlib (to simulate user actions on Linux). The keycodes are not ASCII but the codes you get when you use xev on Linux:
KeyPress event, serial 33, synthetic NO, window 0x6400001,
    root 0x13c, subw 0x0, time 51212100, (259,9), root:(262,81),
    state 0x0, keycode 24 (keysym 0x61, a), same_screen YES,
    XLookupString gives 1 bytes: (61) "a"
    XmbLookupString gives 1 bytes: (61) "a"
    XFilterEvent returns: False

For example, the keycode for 'a' is 24. I can easily detect if the letter is uppercase and then make a combination ALT+lowercase(letter), but I don't know how to get the keycode. One solution would be a table of every combination (a=24, b=56, c=54, ...), but it would be better if there were a function. I'm using an azerty keyboard. Is the keycode for the same letter different on a qwerty keyboard? Thank you.

A: I've found this code, which does exactly what I wanted.
It uses the function display.keysym_to_keycode(Xlib.XK.string_to_keysym(char)).

A: The keycodes depend not only on the keyboard hardware, but also on the user's preference for keyboard layout -- a user may use a Dvorak layout on a qwerty keyboard, for example.
The best solution would probably be to use python-xlib to find out the information per the user's keyboard preferences. I don't know the details of how to do that.
A crude solution would be to run xmodmap -pke and parse the output.
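A sketch of the python-xlib lookup the accepted answer refers to (assuming python-xlib is installed; the result reflects the user's current layout, which is why azerty and qwerty machines can report different keycodes for the same letter):

from Xlib import XK
from Xlib.display import Display

display = Display()

def char_to_keycode(char):
    keysym = XK.string_to_keysym(char)        # e.g. 'a' -> keysym 0x61
    return display.keysym_to_keycode(keysym)  # keysym -> layout-dependent keycode

print char_to_keycode('a')  # 24 on the asker's azerty layout; may differ elsewhere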
char to keycode in python
I want to be able to translate a string to keycodes in order to write it with Xlib (to simulate user actions on Linux). The keycodes are not ASCII but the codes you get when you use xev on Linux:
KeyPress event, serial 33, synthetic NO, window 0x6400001,
    root 0x13c, subw 0x0, time 51212100, (259,9), root:(262,81),
    state 0x0, keycode 24 (keysym 0x61, a), same_screen YES,
    XLookupString gives 1 bytes: (61) "a"
    XmbLookupString gives 1 bytes: (61) "a"
    XFilterEvent returns: False

For example, the keycode for 'a' is 24. I can easily detect if the letter is uppercase and then make a combination ALT+lowercase(letter), but I don't know how to get the keycode. One solution would be a table of every combination (a=24, b=56, c=54, ...), but it would be better if there were a function. I'm using an azerty keyboard. Is the keycode for the same letter different on a qwerty keyboard? Thank you.
[ "I've found this code which is doing exactly what I wanted.\nIt uses the function display.keysym_to_keycode(Xlib.XK.string_to_keysym(char))\n", "The keycodes depend not only on the keyboard hardware, but also on the user's preference for keyboard layout -- a user may use a dvorak layout on a qwerty keyboard, for example.\nThe best solution would probably be to use python-xlib to find out the information per the user's keyboard preferences. I don't know the details on how to do that.\nA crude solution would be to run xmodmap -pke and parse the output.\n" ]
[ 6, 4 ]
[]
[]
[ "keycode", "linux", "python" ]
stackoverflow_0001957867_keycode_linux_python.txt
Q: Functions not executing in Python
I have a program that ran when the code was not in functions. When I put the code into a function, it does not execute the code it contains. Why? Some of the code is:
def new_directory():
    if not os.path.exists(current_sandbox):
        os.mkdir(current_sandbox)

A: Your code is actually a definition of a new_directory function. It won't be executed unless you make a call to new_directory().
So, when you want to execute the code from your post, just add a function call like this:
def new_directory():
    if not os.path.exists(current_sandbox):
        os.mkdir(current_sandbox)

new_directory()

I am not sure if that's the behavior you expect to get.

A: Problem 1 is that you define a function ("def" is an abbreviation of "define"), but you don't call it.
def new_directory(): # define the function
    if not os.path.exists(current_sandbox):
        os.mkdir(current_sandbox)

new_directory() # call the function

Problem 2 (which hasn't hit you yet) is that you are using a global (current_sandbox) when you should use an argument -- in the latter case your function will be generally useful and even usefully callable from another module. Problem 3 is irregular indentation -- using an indent of 1 will cause anybody who has to read your code (including yourself) to go nuts. Stick to 4 and use spaces, not tabs.
def new_directory(dir_path):
    if not os.path.exists(dir_path):
        os.mkdir(dir_path)

new_directory(current_sandbox)
# much later
new_directory(some_other_path)

A: def new_directory():
    if not os.path.exists(current_sandbox):
        os.mkdir(current_sandbox)

new_directory()
Functions not executing in Python
I have a program that ran when the code was not in functions. When I put the code into a function, it does not execute the code it contains. Why? Some of the code is:
def new_directory():
    if not os.path.exists(current_sandbox):
        os.mkdir(current_sandbox)
[ "Your code is actually a definition of a new_directory function. It won't be executed unless you make a call to new_directory().\nSo, when you want to execute the code from your post, just add a function call like this:\ndef new_directory():\n\n if not os.path.exists(current_sandbox):\n os.mkdir(current_sandbox)\n\nnew_directory()\n\nI am not sure if that's the behavior you expect to get.\n", "Problem 1 is that you define a function (\"def\" is an abbreviation of \"define\"), but you don't call it.\ndef new_directory(): # define the function\n if not os.path.exists(current_sandbox): \n os.mkdir(current_sandbox)\n\nnew_directory() # call the function\n\nProblem 2 (which hasn't hit you yet) is that you are using a global (current_sandbox) when you should use an argument -- in the latter case your function will be generally useful and even usefully callable from another module. Problem 3 is irregular indentation -- using an indent of 1 will cause anybody who has to read your code (including yourself) to go nuts. Stick to 4 and use spaces, not tabs.\ndef new_directory(dir_path):\n if not os.path.exists(dir_path): \n os.mkdir(dir_path)\n\nnew_directory(current_sandbox)\n# much later\nnew_directory(some_other_path)\n\n", "def new_directory(): \n if not os.path.exists(current_sandbox): \n os.mkdir(current_sandbox)\n\nnew_directory() \n\n" ]
[ 4, 4, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001958134_python.txt
Q: django templatetags: combine a {{ }} attribute lookup with a template tag context variable
I'm trying to make the result of a template tag depend on another template tag. The use case is the following: I have a headers list which contains all the columns I want to show in a table, plus the column of the model they are showing, plus whether they are visible or not.
LIST_HEADERS = (
    ('Title', 'title', True),
    ('First Name', 'first_name', True),
    ('Last Name', 'last_name', True),
    ('Modified At', 'modified', False),
)

Now I have a template tag which prints out all the headers. Consequently I wanted to create a template tag which prints out the body of the table. I therefore want to take the headers list, check which header is visible, and show or hide my value accordingly. So I created the template tag template below:
<tr class="{% cycle odd,even %}">
{% for header in headers %}
  {% if header.visible %}
    <td><a href="{{ model_instance.get_absolute_url|escape }}">{{ model_instance.title }}</a></td>
  {% else %}
    <td style="visibility:hidden;"><a href="{{ model_instance.get_absolute_url|escape }}">{{ model_instance.title }}</a></td>
  {% endif %}
{% endfor %}
</tr>

You see the value {{ model_instance.title }} there. I want to change this value at runtime to model_instance.title, model_instance.first_name, model_instance.last_name, ... Thus I'm searching for a way to combine {{ model_instance }} with header.model_column. model_column equals the second entry in LIST_HEADERS, so model_column would be title, first_name, ... The solution would be something like:
[pseudocode] {{ model_instance.header.model_column }} [pseudocode]

So I'm looking for a way to combine a Django template attribute lookup with a template tag variable... huh... sounds crazy :D I hope I explained it well enough! There is probably a much easier solution to my problem, but this seems pretty generic and simple to me, and it would work.

A: I would do this as a filter, as filters provide an easy way to render a result dependent on the value of a variable.
@register.filter
def field_from_name(instance, field_name):
    return getattr(instance, field_name, None)

and then in the template:
{{ model_instance|field_from_name:header.model_column }}

A: Simplify this.
First, read about the things the Django template language actually can do. It's not much. It can dereference variables, lists and dictionaries.
It's simpler if you do all the "work" in your view function.
show = [ ]
for title, field_name, visible in LIST_HEADERS:
    if visible: style = ""
    else: style = "visibility:hidden"
    show.append( (style, title, getattr(object, field_name)) )
render_to_response( "template", { 'show_list': show, ... }, ... )

In your template:
{% for style, name, value in show_list %}
<tr class="{% cycle odd,even %}">
    <td style="{{style}}"><a href="...">{{value}}</a></td>
{% endfor %}

Indeed, I'd suggest dropping LIST_HEADERS from your view function.
show = [
    ("", 'Title', object.title),
    ("", 'First Name', object.first_name),
    ("", 'Last Name', object.last_name),
    ("visibility:hidden", 'Modified At', object.modified),
]
render_to_response( "template", { 'show_list': show, ... }, ... )

I find the above much easier to work with because it's explicit and it's in the view function.
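The filter answer above assumes the usual templatetags boilerplate; a sketch of the complete module (the file and app names are illustrative), which must live in a templatetags package inside an installed app:

# yourapp/templatetags/table_extras.py  -- names are hypothetical
from django import template

register = template.Library()

@register.filter
def field_from_name(instance, field_name):
    # getattr with a default avoids an AttributeError for a bad column name
    return getattr(instance, field_name, None)

The template then needs {% load table_extras %} before the filter can be used.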
django templatetags: combine a {{ }} attribute lookup with a template tag context variable
I'm trying to make the result of a template tag depend on another template tag. The use case is the following: I have a headers list which contains all the columns I want to show in a table, plus the column of the model they are showing, plus whether they are visible or not.
LIST_HEADERS = (
    ('Title', 'title', True),
    ('First Name', 'first_name', True),
    ('Last Name', 'last_name', True),
    ('Modified At', 'modified', False),
)

Now I have a template tag which prints out all the headers. Consequently I wanted to create a template tag which prints out the body of the table. I therefore want to take the headers list, check which header is visible, and show or hide my value accordingly. So I created the template tag template below:
<tr class="{% cycle odd,even %}">
{% for header in headers %}
  {% if header.visible %}
    <td><a href="{{ model_instance.get_absolute_url|escape }}">{{ model_instance.title }}</a></td>
  {% else %}
    <td style="visibility:hidden;"><a href="{{ model_instance.get_absolute_url|escape }}">{{ model_instance.title }}</a></td>
  {% endif %}
{% endfor %}
</tr>

You see the value {{ model_instance.title }} there. I want to change this value at runtime to model_instance.title, model_instance.first_name, model_instance.last_name, ... Thus I'm searching for a way to combine {{ model_instance }} with header.model_column. model_column equals the second entry in LIST_HEADERS, so model_column would be title, first_name, ... The solution would be something like:
[pseudocode] {{ model_instance.header.model_column }} [pseudocode]

So I'm looking for a way to combine a Django template attribute lookup with a template tag variable... huh... sounds crazy :D I hope I explained it well enough! There is probably a much easier solution to my problem, but this seems pretty generic and simple to me, and it would work.
[ "I would do this as a filter, as they provide an easy way to render a result dependent on the value of a variable.\n@register.filter\ndef field_from_name(instance, field_name):\n return getattr(instance, field_name, None)\n\nand then in the template:\n{{ model_instance|field_from_name:header.model_column }} \n\n", "Simplify this.\nFirst, read about the things the Django template language actually can do. It's not much. It can dereference variables, lists and dictionaries.\nIt's simpler if you do all the \"work\" in your view function.\nshow = [ ]\nfor title, field_name, visible in LIST_HEADERS:\n if visible: style= \"\"\n else: style= \"visibility:hidden\"\n show.append( (style, title, getattr(object,field_name)) )\nrender_to_response( \"template\", { 'show_list': show, ... }, ... )\n\nIn your template\n{% for style, name, value in show_list %}\n<tr class=\"{% cycle odd,even %}\">\n <td style=\"{{style}}\"><a href=\"...\">{{value}}</a></td>\n{% endfor %}\n\nIndeed, I'd suggest dropping LIST_HEADERS from your view function.\nshow = [ \n (\"\", 'Title', object.title),\n (\"\",'First Name', object.first_name),\n (\"\",'Last Name', object.last_name),\n (\"visibility:hidden\",'Modified At', object.modified),\n]\nrender_to_response( \"template\", { 'show_list': show, ... }, ... )\n\nI find the above much easier to work with because it's explicit and it's in the view function.\n" ]
[ 2, 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001958286_django_python.txt
Q: Why is my ExtJS combobox not filled dynamically?
Here is my ExtJS onReady function:
var store = new Ext.data.Store({
    proxy: new Ext.data.HttpProxy({
        url: '/loginjson.json'
    }),
    reader: new Ext.data.JsonReader(
        {root: 'row', fields: ['dblist']}
    )
});
store.load();

and here I'm using it in my FormPanel:
renderTo: document.getElementById("loginform"),
title: "Login Form",
items: [{
    xtype: 'combo',
    fieldLabel: 'genre',
    name: 'genre',
    store: store,
    autoLoad: true,
    displayField: 'dblist',
}

The JSON URL from Django returns this:
http://localhost:8000/loginjson.json
{"row": [{"dblist": "datalist"}]}

But my combobox is not filled. I'm missing something in ExtJS but couldn't find it.

A: If you are expecting the ComboBox to behave more like an HTML select field then add to your ComboBox config the property:
triggerAction: 'all'

This will ensure that all items in the store will be displayed when the field's trigger button is clicked.
The ComboBox config will also be needing a valueField property:
valueField: 'dblist'

Also, explicitly calling the store's load method is not necessary. The ComboBox will handle that for you at the appropriate time.

A: I think the fields property of your JSON reader is not configured correctly. Try this:
reader: new Ext.data.JsonReader({
    root: 'row'
    , fields: [{name: "dblist"}]
})
Why is my ExtJS combobox not filled dynamically?
Here is my ExtJS onReady function:
var store = new Ext.data.Store({
    proxy: new Ext.data.HttpProxy({
        url: '/loginjson.json'
    }),
    reader: new Ext.data.JsonReader(
        {root: 'row', fields: ['dblist']}
    )
});
store.load();

and here I'm using it in my FormPanel:
renderTo: document.getElementById("loginform"),
title: "Login Form",
items: [{
    xtype: 'combo',
    fieldLabel: 'genre',
    name: 'genre',
    store: store,
    autoLoad: true,
    displayField: 'dblist',
}

The JSON URL from Django returns this:
http://localhost:8000/loginjson.json
{"row": [{"dblist": "datalist"}]}

But my combobox is not filled. I'm missing something in ExtJS but couldn't find it.
[ "If you are expecting the ComboBox to behave more like an HTML select field then add to your ComboBox config the property:\ntriggerAction: 'all'\n\nThis will ensure that all items in the store will be displayed when the field's trigger button is clicked.\nThe ComboBox config will also be needing a valueField property:\nvalueField: 'dblist'\n\nAlso, explicitly calling the store's load method is not necessary. The ComboBox will handle that for you at the appropriate time.\n", "I think the fields property of your JSON reader is not configured correctly. Try this:\nreader: new Ext.data.JsonReader({\n root: 'row'\n , fields:[{name: \"dblist\"}]\n })\n\n" ]
[ 4, 0 ]
[]
[]
[ "combobox", "django", "extjs", "python" ]
stackoverflow_0001957578_combobox_django_extjs_python.txt
Q: Why are there dummy modules in sys.modules?
Importing the standard "logging" module pollutes sys.modules with a bunch of dummy entries:
Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
>>> import sys
>>> import logging
>>> sorted(x for x in sys.modules.keys() if 'log' in x)
['logging', 'logging.atexit', 'logging.cStringIO', 'logging.codecs',
 'logging.os', 'logging.string', 'logging.sys', 'logging.thread',
 'logging.threading', 'logging.time', 'logging.traceback', 'logging.types']

# and perhaps even more surprising:
>>> import traceback
>>> traceback is sys.modules['logging.traceback']
False
>>> sys.modules['logging.traceback'] is None
True

So importing this package puts extra names into sys.modules, except that they are not modules, just references to None. Other modules (e.g. xml.dom and encodings) have this issue as well. Why?
Edit: Building on bobince's answer, there are pages describing the origin (see section "Dummy Entries in sys.modules") and future of the feature.

A: None values in sys.modules are cached failures of relative lookups.
So when you're in package foo and you import sys, Python looks first for a foo.sys module, and if that fails goes to the top-level sys module. To avoid having to check the filesystem for foo/sys.py again on further relative imports, it stores None in sys.modules to flag that the module didn't exist and a subsequent import shouldn't look there again, but go straight to the loaded sys.
This is a cPython implementation detail you can't usefully rely on, but you will need to know it if you're doing nasty magic import/reload hacking.
It happens to all packages, not just logging. For example, import xml.dom and see xml.dom.xml in the module list as it tries to import xml from inside xml.dom.
As Python moves towards absolute import this ugliness will happen less.
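A quick interpreter check of the behaviour described above, using the xml.dom example from the answer (CPython 2.x; the exact dummy keys can vary by version):

import sys
import xml.dom

print 'xml.dom.xml' in sys.modules        # True: the failed relative lookup was cached
print sys.modules['xml.dom.xml'] is None  # True: it is only a dummy entry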
Why are there dummy modules in sys.modules?
Importing the standard "logging" module pollutes sys.modules with a bunch of dummy entries:
Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
>>> import sys
>>> import logging
>>> sorted(x for x in sys.modules.keys() if 'log' in x)
['logging', 'logging.atexit', 'logging.cStringIO', 'logging.codecs',
 'logging.os', 'logging.string', 'logging.sys', 'logging.thread',
 'logging.threading', 'logging.time', 'logging.traceback', 'logging.types']

# and perhaps even more surprising:
>>> import traceback
>>> traceback is sys.modules['logging.traceback']
False
>>> sys.modules['logging.traceback'] is None
True

So importing this package puts extra names into sys.modules, except that they are not modules, just references to None. Other modules (e.g. xml.dom and encodings) have this issue as well. Why?
Edit: Building on bobince's answer, there are pages describing the origin (see section "Dummy Entries in sys.modules") and future of the feature.
[ "None values in sys.modules are cached failures of relative lookups.\nSo when you're in package foo and you import sys, Python looks first for a foo.sys module, and if that fails goes to the top-level sys module. To avoid having to check the filesystem for foo/sys.py again on further relative imports, it stores None in the sys.modules to flag that the module didn't exist and a subsequent import shouldn't look there again, but go straight to the loaded sys.\nThis is a cPython implementation detail you can't usefully rely on, but you will need to know it if you're doing nasty magic import/reload hacking.\nIt happens to all packages, not just logging. For example, import xml.dom and see xml.dom.xml in the module list as it tries to import xml from inside xml.dom.\nAs Python moves towards absolute import this ugliness will happen less.\n" ]
[ 23 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001958417_import_python.txt
Q: Python 2.4 and 2.6 behave differently for os.path.getmtime() on Windows
Getting two different modification times when calculated from different Python versions on Windows XP.
Python 2.4:
C:\Copy of elisp>c:\python24\python
Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.getmtime("auto-complete-emacs-lisp.el")
1251684178
>>> ^Z

Python 2.6:
C:\Copy of elisp>C:\Python26\python
Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.getmtime("auto-complete-emacs-lisp.el")
1251687778.0
>>>

There is a difference of 3600 seconds between the values reported by Python 2.6 and Python 2.4. What is the reason for this strange behavior?

A: "There is a difference of 3600 seconds ..."
This should be the kicker. It's a timezone problem, pure and simple.
Now all you have to do is find out why 2.4 and 2.6 are using different timezone information :-)

A: It's a bug in Microsoft's implementation of the C standard library. Python 2.4 used to use the stdlib fstat call to get file information, and hence could end up an hour out in locales that use DST.
In Python 2.5 and later, os.stat calls the direct Win32-only API to get file information when running on Windows, resulting in the correct output. See this thread for more.
Python 2.4 and 2.6 behave differently for os.path.getmtime() on Windows
Getting two different modification times when calculated from different Python versions on Windows XP.
Python 2.4:
C:\Copy of elisp>c:\python24\python
Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.getmtime("auto-complete-emacs-lisp.el")
1251684178
>>> ^Z

Python 2.6:
C:\Copy of elisp>C:\Python26\python
Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.getmtime("auto-complete-emacs-lisp.el")
1251687778.0
>>>

There is a difference of 3600 seconds between the values reported by Python 2.6 and Python 2.4. What is the reason for this strange behavior?
[ "\nThere is a difference of 3600 seconds ...\n\nThis should be the kicker. It's a timezone problem, pure and simple.\nNow all you have to do is find out why 2.4 and 2.6 are using different timezone information :-)\n", "It's a bug in Microsoft's implementation of the C standard library. Python 2.4 used to use the stdlib fstat call to get file information, and hence could end up an hour out in locales that use DST.\nIn Python 2.5 and later, os.stat calls the direct Win32-only API to get file information when running on Windows, resulting in the correct output. See this thread for more.\n" ]
[ 2, 2 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0001957866_python_windows.txt
Q: Python - Idiom to check if string is empty, print default
I'm just wondering, is there a Python idiom to check if a string is empty, and then print a default if it is? (The context is Django, for the __unicode__(self) method of UserProfile - basically, I want to print the first name and last name if they exist, and the username if they don't both exist.) Cheers, Victor

A: displayname = firstname+' '+lastname if firstname and lastname else username

A: displayname = firstname + lastname or username

will work if firstname and lastname are zero-length blank strings

A: I think this issue is better handled in the templates with something like:
{{ user.get_full_name|default:user.username }}
That uses Django's included "default" filter. There is also a "default_if_none" filter if you are specifically concerned about a None value, but want to allow a blank value (i.e. ''). The "default" filter will trigger on both a None value and a '' value.
Here's the link to the Django docs on it: http://docs.djangoproject.com/en/dev/ref/templates/builtins/#default

A: Ok, I'm assuming you meant the __unicode__() method. Try something like this (not tested, but real close to being correct):
from django.utils.encoding import smart_unicode
def __unicode__(self):
    u = self.user
    if u.firstname and u.lastname:
        return u"%s %s" % (u.firstname, u.lastname)
    return smart_unicode(u.username)

I just realized you asked for the Python idiom, not the Django code. Oh well.

A: Something like:
name = data.Name or "Default Name"

A: My schema would have None as an unset first or last name, so Frederico's answer wouldn't work. So:
print ("%s %s" % (firstname, lastname)
       if firstname and lastname
       else username)
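A small defensive helper distilled from the answers (a sketch; unlike the one-liners it treats None and u'' the same):

def display_name(first_name, last_name, username):
    if first_name and last_name:
        return u'%s %s' % (first_name, last_name)
    return username

print display_name(u'Ada', u'Lovelace', u'ada')  # Ada Lovelace
print display_name(None, u'Lovelace', u'ada')    # ada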
Python - Idiom to check if string is empty, print default
I'm just wondering, is there a Python idiom to check if a string is empty, and then print a default if it is? (The context is Django, for the __unicode__(self) method of UserProfile - basically, I want to print the first name and last name if they exist, and the username if they don't both exist.) Cheers, Victor
[ "displayname = firstname+' '+lastname if firstname and lastname else username\n\n", "displayname = firstname + lastname or username\n\nwill work if firstname and lastname are zero-length blank strings\n", "I think this issue is better handled in the templates with something like:\n{{ user.get_full_name|default:user.username }}\nThat uses Django's included \"default\" filter. There is also a \"default_if_none\" filter if you are specifically concerned about a None value, but want to allow a blank value (i.e. ''). The \"default\" filter will trigger on both a None value and a '' value.\nHere's the link to the Django docs on it:\nhttp://docs.djangoproject.com/en/dev/ref/templates/builtins/#default\n", "Ok, I'm assuming you meant the __unicode__() method. Try something like this (not tested, but real close to being correct):\nfrom django.utils.encoding import smart_unicode\ndef __unicode__(self):\n u = self.user\n if u.firstname and u.lastname:\n return u\"%s %s\" % (u.firstname, u.lastname)\n return smart_unicode(u.username)\n\nI just realized you asked for the Python idiom, not the Django code. Oh well.\n", "Something like:\nname = data.Name or \"Default Name\"\n\n", "My schema would have None as an unset first- or lastname, so Frederico's answer wouldn't work. So:\nprint (\"%s %s\" % (firstname, lastname)\n if firstname and lastname \n else username )\n\n" ]
[ 6, 4, 4, 2, 1, 0 ]
[]
[]
[ "django", "idioms", "python" ]
stackoverflow_0001956249_django_idioms_python.txt
Q: How to remove blocks surrounded by curly brackets via python
Sample text: String -> content within the rev tag (via lxml). I'm trying to remove the {{BLOCKS}} within the text. I've used the following regex to remove simple, one line blocks:
p = re.compile('\{\{*.*\}\}')
nonBracketedString = p.sub('', bracketedString)

However this does not remove the first multi line bracketed section at the beginning of the content. How can one remove the multi-line, curly bracketed blocks?
EDIT: Solution from answer:
p = re.compile('\{\{*?.*?\}\}', re.DOTALL)
nonBracketedString = p.sub('', bracketedString)

A: Set the dotall flag.
p = re.compile('\{\{*.*?\}\}', re.DOTALL)
nonBracketedString = p.sub('', bracketedString)

In the default mode, . matches any character except a newline. If the DOTALL flag has been specified, this matches any character including a newline.
http://docs.python.org/library/re.html
Also, you'll need a non-greedy match between the brackets: .*?

A: >>> import urllib2
>>> import re
>>> s = "".join(urllib2.urlopen('http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Italian%20War%20of%201542-1546&redirects&rvprop=content&format=xml').readlines())
>>> p = re.compile('\{\{.*?\}\}', re.DOTALL)
>>> re.sub(p, '', s)
'<?xml version="1.0"?><api><query><redirects><r from="Italian War of 1542-1546" to="Italian War of 1542\xe2\x80\x931546" /></redirects><pages><page pageid="3719774" ns="0" title="Italian War of 1542\xe2\x80\x931546"><revisions><rev xml:space="preserve">\n\n\n\nThe \'\'\'Italian War of 1542\xe2\x80\x9346\'\'\' was a conflict late in the [[Italian Wars]], ...

I've truncated the output here, but there's enough to see that it's working.

A: Set the dotall flag-- this allows . to match newlines.
p = re.compile('\{\{*.*\}\}', re.DOTALL)
nonBracketedString = p.sub('', bracketedString)
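A small demonstration of why both pieces of the accepted fix matter (the sample text is made up): without DOTALL the dot stops at a newline, and with DOTALL but a greedy .* the match also swallows the text between blocks.

import re

text = '{{one\nblock}} kept {{two}}'

p1 = re.compile(r'\{\{.*\}\}')             # no DOTALL: . stops at \n
p2 = re.compile(r'\{\{.*\}\}', re.DOTALL)  # greedy: runs to the last }}
p3 = re.compile(r'\{\{.*?\}\}', re.DOTALL) # non-greedy: one block at a time

print repr(p1.sub('', text))  # '{{one\nblock}} kept ' -- multiline block survives
print repr(p2.sub('', text))  # '' -- everything between first {{ and last }} goes
print repr(p3.sub('', text))  # ' kept '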
How to remove blocks surrounded by curly brackets via python
Sample text: String -> content within the rev tag (via lxml). I'm trying to remove the {{BLOCKS}} within the text. I've used the following regex to remove simple, one line blocks:
p = re.compile('\{\{*.*\}\}')
nonBracketedString = p.sub('', bracketedString)

However this does not remove the first multi line bracketed section at the beginning of the content. How can one remove the multi-line, curly bracketed blocks?
EDIT: Solution from answer:
p = re.compile('\{\{*?.*?\}\}', re.DOTALL)
nonBracketedString = p.sub('', bracketedString)
[ "Set the dotall flag.\np = re.compile('\\{\\{*.*?\\}\\}', re.DOTALL)\nnonBracketedString = p.sub('', bracketedString)\n\nIn the default mode, . matches any character except a newline. If the DOTALL flag has been specified, this matches any character including a newline.\nhttp://docs.python.org/library/re.html\nAlso, you'll need a non-greedy match between the brackets: .*?\n", ">>> import urllib2\n>>> import re\n>>> s = \"\".join(urllib2.urlopen('http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Italian%20War%20of%201542-1546&redirects&rvprop=content&format=xml').readlines())\n>>> p = re.compile('\\{\\{.*?\\}\\}', re.DOTALL)\n>>> re.sub(p, '', s)\n'<?xml version=\"1.0\"?><api><query><redirects><r from=\"Italian War of 1542-1546\" to=\"Italian War of 1542\\xe2\\x80\\x931546\" /></redirects><pages><page pageid=\"3719774\" ns=\"0\" title=\"Italian War of 1542\\xe2\\x80\\x931546\"><revisions><rev xml:space=\"preserve\">\\n\\n\\n\\nThe \\'\\'\\'Italian War of 1542\\xe2\\x80\\x9346\\'\\'\\' was a conflict late in the [[Italian Wars]], ...\n\nI've truncated the output here, but there's enough to see that it's working.\n", "Set the dotall flag-- this allows . to match newlines.\np = re.compile('\\{\\{*.*\\}\\}', re.DOTALL)\nnonBracketedString = p.sub('', bracketedString)\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "api", "python", "regex", "wikipedia" ]
stackoverflow_0001956970_api_python_regex_wikipedia.txt
Q: Absolute import failing in subpackage that shadows a stdlib package name
Basically I have a subpackage with the same name as a standard library package ("logging") and I'd like it to be able to absolute-import the standard one no matter how I run it, but this fails when I'm in the parent package. It really looks like either a bug, or an undocumented behaviour of the new "absolute import" support (new as of Python 2.5). Tried with 2.5 and 2.6.
Package layout:
foo/
    __init__.py
    logging/
        __init__.py

In foo/__init__.py we import our own logging subpackage:
from __future__ import absolute_import
from . import logging as rel_logging
print 'top, relative:', rel_logging

In foo/logging/__init__.py we want to import the stdlib logging package:
from __future__ import absolute_import
print 'sub, name:', __name__
import logging as abs_logging
print 'sub, absolute:', abs_logging

Note: The folder containing foo is in sys.path. When imported from outside/above foo, the output is as expected:
c:\> python -c "import foo"
sub, name: foo.logging
sub, absolute: <module 'logging' from 'c:\python26\lib\logging\__init__.pyc'>
top, relative: <module 'foo.logging' from 'foo\logging\__init__.pyc'>

So the absolute import in the subpackage finds the stdlib package as desired. But when we're inside the foo folder, it behaves differently:
c:\foo>\python25\python -c "import foo"
sub, name: foo.logging
sub, name: logging
sub, absolute: <module 'logging' from 'logging\__init__.pyc'>
sub, absolute: <module 'logging' from 'logging\__init__.pyc'>
top, relative: <module 'foo.logging' from 'c:\foo\logging\__init__.pyc'>

The double output for "sub, name" shows that my own subpackage called "logging" is importing itself a second time, and it does not find the stdlib "logging" package even though "absolute_import" is enabled.
The use case is that I'd like to be able to work with, test, etc., this package regardless of what the current directory is. Changing the name from "logging" to something else would be a workaround, but not a desirable one, and in any case this behaviour doesn't seem to fit the description of how absolute imports should work.
Any ideas what is going on, whether this is a bug (mine or Python's), or whether this behaviour is in fact implied by some documentation?
Edit: the answer by gahooa shows clearly what the problem is. A crude work-around that proves it is shown here:
c:\foo>python -c "import sys; del sys.path[0]; import foo"
sub, name: foo.logging
sub, absolute: <module 'logging' from 'c:\python26\lib\logging\__init__.pyc'>
top, relative: <module 'foo.logging' from 'c:\foo\logging\__init__.pyc'>

A: sys.path[0] is by default '', which means "current directory". So if you are sitting in a directory with logging in it, that will be chosen first.
I ran into this recently, until I realized that I was actually sitting in that directory and that sys.path was picking up my current directory FIRST, before looking in the standard library.
Absolute import failing in subpackage that shadows a stdlib package name
Basically I have a subpackage with the same name as a standard library package ("logging") and I'd like it to be able to absolute-import the standard one no matter how I run it, but this fails when I'm in the parent package. It really looks like either a bug, or an undocumented behaviour of the new "absolute import" support (new as of Python 2.5). Tried with 2.5 and 2.6.
Package layout:
foo/
    __init__.py
    logging/
        __init__.py

In foo/__init__.py we import our own logging subpackage:
from __future__ import absolute_import
from . import logging as rel_logging
print 'top, relative:', rel_logging

In foo/logging/__init__.py we want to import the stdlib logging package:
from __future__ import absolute_import
print 'sub, name:', __name__
import logging as abs_logging
print 'sub, absolute:', abs_logging

Note: The folder containing foo is in sys.path. When imported from outside/above foo, the output is as expected:
c:\> python -c "import foo"
sub, name: foo.logging
sub, absolute: <module 'logging' from 'c:\python26\lib\logging\__init__.pyc'>
top, relative: <module 'foo.logging' from 'foo\logging\__init__.pyc'>

So the absolute import in the subpackage finds the stdlib package as desired. But when we're inside the foo folder, it behaves differently:
c:\foo>\python25\python -c "import foo"
sub, name: foo.logging
sub, name: logging
sub, absolute: <module 'logging' from 'logging\__init__.pyc'>
sub, absolute: <module 'logging' from 'logging\__init__.pyc'>
top, relative: <module 'foo.logging' from 'c:\foo\logging\__init__.pyc'>

The double output for "sub, name" shows that my own subpackage called "logging" is importing itself a second time, and it does not find the stdlib "logging" package even though "absolute_import" is enabled.
The use case is that I'd like to be able to work with, test, etc., this package regardless of what the current directory is. Changing the name from "logging" to something else would be a workaround, but not a desirable one, and in any case this behaviour doesn't seem to fit the description of how absolute imports should work.
Any ideas what is going on, whether this is a bug (mine or Python's), or whether this behaviour is in fact implied by some documentation?
Edit: the answer by gahooa shows clearly what the problem is. A crude work-around that proves it is shown here:
c:\foo>python -c "import sys; del sys.path[0]; import foo"
sub, name: foo.logging
sub, absolute: <module 'logging' from 'c:\python26\lib\logging\__init__.pyc'>
top, relative: <module 'foo.logging' from 'c:\foo\logging\__init__.pyc'>
[ "sys.path[0] is by default '', which means \"current directory\". So if you are sitting in a directory with logging in it, that will be chosen first.\nI ran into this recently, until I realized that I was actually sitting in that directory and that sys.path was picking up my current directory FIRST, before looking in the standard library.\n" ]
[ 10 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001959188_import_python.txt
Q: str.format() problem
So I made this class that outputs '{0}' when x=0 or '{1}' for every other value of x.
class offset(str):
    def __init__(self, x):
        self.x = x
    def __repr__(self):
        return repr(str({int(bool(self.x))}))
    def end(self, end_of_loop):  # ignore this def, it works fine
        if self.x == end_of_loop:
            return '{2}'
        else:
            return self

I want to do this:
offset(1).format('first', 'next')

but it will only return the number I give for x as a string. What am I doing wrong?

A: Your subclass of str does not override format, so when you call format on one of its instances it just uses the one inherited from str, which uses self's "intrinsic value as str", i.e., the string form of whatever you passed to offset().
To change that intrinsic value you might override __new__, e.g.:
class offset(str):
    def __init__(self, x):
        self.x = x
    def __new__(cls, x):
        return str.__new__(cls, '{' + str(int(bool(x))) + '}')

for i in (0, 1):
    x = offset(i)
    print x
    print repr(x)
    print x.format('first', 'next')

emits
{0}
'{0}'
first
{1}
'{1}'
next

Note there's no need to also override __repr__ if, by overriding __new__, you're already ensuring that the instance's intrinsic value as str is the format you desire.
str.format() problem
So I made this class that outputs '{0}' when x=0 or '{1}' for every other value of x.
class offset(str):
    def __init__(self, x):
        self.x = x
    def __repr__(self):
        return repr(str({int(bool(self.x))}))
    def end(self, end_of_loop):  # ignore this def, it works fine
        if self.x == end_of_loop:
            return '{2}'
        else:
            return self

I want to do this:
offset(1).format('first', 'next')

but it will only return the number I give for x as a string. What am I doing wrong?
[ "Your subclass of str does not override format, so when you call format on one of its instances it just uses the one inherited from str which uses self's \"intrinsic value as str\", i.e., the string form of whatever you passed to offset().\nTo change that intrinsic value you might override __new__, e.g.:\nclass offset(str):\n def __init__(self, x):\n self.x = x\n def __new__(cls, x):\n return str.__new__(cls, '{' + str(int(bool(x))) + '}')\n\nfor i in (0, 1):\n x = offset(i)\n print x\n print repr(x)\n print x.format('first', 'next')\n\nemits\n{0}\n'{0}'\nfirst\n{1}\n'{1}'\nnext\n\nNote there's no need to also override __repr__ if, by overriding __new__, you're already ensuring that the instance's intrinsic value as str is the format you desire.\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0001959364_python.txt
Q: Get physical map location of object based off user input
I am getting user input of a local landmark into a Python application. With this data I am trying to get the longitude and latitude of the object using Google Maps. I tried working with the Google gdata API for Maps, but I did not find a way to get longitude and latitude data from a search query when working in Python. I am OK using any system that will solve my problem in Python.

A: Are you trying to geocode a street or POI (point of interest)? In that case, geopy is perfect:
In [1]: from geopy import geocoders

In [2]: g = geocoders.Google(GOOGLE_MAPS_API_KEY)

In [3]: (place, point) = g.geocode('Eiffel Tower, Paris')
Fetching http://maps.google.com/maps/geo?q=Eiffel+Tower%2C+Paris&output=kml&key=XYZ...

In [4]: print point
------> print(point)
(48.858204999999998, 2.294359)

A: You can use geopy (http://code.google.com/p/geopy/wiki/GettingStarted). It can query Google and several additional services.

A: I have a jQuery plugin that will allow you to plot multiple addresses from a JSON object. In my demo, I load the JSON object from a PHP file, so I'm sure that you can do the same with Python.
http://grasshopperpebbles.com/ajax/jquery-plugin-imgooglemaps-0-9-multiple-addresses-street-view/
Get physical map location of object based off user input
I am getting user input (a local landmark) into a Python application. With this data I am trying to get the longitude and latitude of that place using Google Maps. I have been trying to work with the Google gdata API for Maps, but I did not find a way to get longitude and latitude data from a search query when working in Python. I am OK with using any system that will solve my problem in Python.
[ "Are you trying to geocode a street or POI (point of interest)? In that case, geopy is perfect:\nIn [1]: from geopy import geocoders\n\nIn [2]: g = geocoders.Google(GOOGLE_MAPS_API_KEY)\n\nIn [3]: (place, point) = g.geocode('Eiffel Tower, Paris')\nFetching http://maps.google.com/maps/geo?q=Eiffel+Tower%2C+Paris&output=kml&key=XYZ...\n\nIn [4]: print point\n------> print(point)\n(48.858204999999998, 2.294359)\n\n", "You can use geopy (http://code.google.com/p/geopy/wiki/GettingStarted). It can query Google and several additional services.\n", "I have a JQuery Plugin that will allow you to plot multiple addresses from a json object. In my demo, I load the json object from a php file, so I'm sure that you can do the same with python.\nhttp://grasshopperpebbles.com/ajax/jquery-plugin-imgooglemaps-0-9-multiple-addresses-street-view/\n" ]
[ 5, 1, 0 ]
[]
[]
[ "gdata_api", "google_maps", "python" ]
stackoverflow_0001851722_gdata_api_google_maps_python.txt
Q: Python list problem python: m=[[0]*3]*2 for i in range(3): m[0][i]=1 print m I expect that this code should print [[1, 1, 1], [0, 0, 0]] but it prints [[1, 1, 1], [1, 1, 1]] A: This is by design. When you use multiplication on elements of a list, you are reproducing the references. See the section "List creation shortcuts" on the Python Programming/Lists wikibook which goes into detail on the issues with list references to mutable objects. Their recommended workaround is a list comprehension: >>> s = [[0]*3 for i in range(2)] >>> s [[0, 0, 0], [0, 0, 0]] >>> s[0][1] = 1 >>> s [[0, 1, 0], [0, 0, 0]] A: This is a bit devilish, but quite obvious when you understand what you're doing. when you're doing the [[0]*3]*2 bit, you're first creating a list with 3 zeros, then you copy that to make two elements. But when you do that copy, you do not create new lists with the same contents, but rather reference the same list several times. So when you change one, they all change. An example to highlight it: In [49]: s = [[]]*2 # Create two empty lists In [50]: s # See: Out[50]: [[], []] In [51]: s[0].append(2) # Alter the first element (or so we think) In [52]: s # OH MY, they both changed! (because they're the same list!) Out[52]: [[2], [2]]
Python list problem
python: m=[[0]*3]*2 for i in range(3): m[0][i]=1 print m I expect that this code should print [[1, 1, 1], [0, 0, 0]] but it prints [[1, 1, 1], [1, 1, 1]]
[ "This is by design. When you use multiplication on elements of a list, you are reproducing the references.\nSee the section \"List creation shortcuts\" on the Python Programming/Lists wikibook which goes into detail on the issues with list references to mutable objects.\nTheir recommended workaround is a list comprehension:\n>>> s = [[0]*3 for i in range(2)]\n>>> s\n[[0, 0, 0], [0, 0, 0]]\n>>> s[0][1] = 1\n>>> s\n[[0, 1, 0], [0, 0, 0]]\n\n", "This is a bit devilish, but quite obvious when you understand what you're doing. when you're doing the [[0]*3]*2 bit, you're first creating a list with 3 zeros, then you copy that to make two elements. But when you do that copy, you do not create new lists with the same contents, but rather reference the same list several times. So when you change one, they all change.\nAn example to highlight it:\nIn [49]: s = [[]]*2 # Create two empty lists\n\nIn [50]: s # See: \nOut[50]: [[], []]\n\nIn [51]: s[0].append(2) # Alter the first element (or so we think)\n\nIn [52]: s # OH MY, they both changed! (because they're the same list!)\nOut[52]: [[2], [2]]\n\n" ]
[ 18, 8 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001959744_list_python.txt
Q: Django uploading image error I'm trying to upload an image using normal form for normal admin for normal model with normal image field. thumb = fields.ThumbnailField(upload_to=make_upload_path, sizes=settings.VIDEO_THUMB_SIZE, blank=True, null=True) But I'm getting an error: Upload a valid image. The file you uploaded was either not an image or a corrupted image. But my images are valid! I've tried at least ten jpegs and getting the error. What can I do? A: You probably have PIL (Python Imaging Library) installed without JPEG support. If you don't have the libjpeg header files it'll happily compile and install, just with no JPEG support. You need to uninstall PIL, make sure you install libjpeg and the libjpeg development header files, and then reinstall PIL. How you do this depends entirely on your platform.
Django uploading image error
I'm trying to upload an image using a normal form, in the normal admin, for a normal model with a normal image field. thumb = fields.ThumbnailField(upload_to=make_upload_path, sizes=settings.VIDEO_THUMB_SIZE, blank=True, null=True) But I'm getting an error: Upload a valid image. The file you uploaded was either not an image or a corrupted image. But my images are valid! I've tried at least ten JPEGs and I keep getting the error. What can I do?
[ "You probably have PIL (Python Imaging Library) installed without JPEG support. If you don't have the libjpeg header files it'll happily compile and install, just with no JPEG support. You need to uninstall PIL, make sure you install libjpeg and the libjpeg development header files, and then reinstall PIL. How you do this depends entirely on your platform.\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001959447_django_python.txt
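A quick way to confirm the diagnosis in the answer above (a PIL build without libjpeg) is to try decoding a known-good JPEG from a Python shell. This is only a sketch: the file path is a placeholder, and it assumes the same Python environment that Django runs under.

try:
    from PIL import Image          # some installs expose it as plain `import Image`
except ImportError:
    import Image

try:
    im = Image.open("known_good.jpg")   # placeholder path to a valid JPEG
    im.load()                           # open() is lazy; load() forces the decoder to run
    print "JPEG decoding works:", im.size, im.mode
except IOError as exc:
    # "decoder jpeg not available" here means PIL was compiled without libjpeg
    print "PIL cannot decode JPEGs:", exc

If the error message mentions a missing jpeg decoder, reinstalling PIL after adding the libjpeg development headers, as the answer describes, should fix the admin upload.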
Q: Dealing with db.Timeout on Google App Engine I'm testing my application (on Google App Engine live servers) and the way I've written it I have about 40 db.GqlQuery() statements in my code (mostly part of classes). I keep getting db.Timeout very often though. How do I deal with this? I was going to surround all my queries with really brutal code like this: querySucceeded = False while not querySucceeded : try : result = db.GqlQuery( """xxx""" ).get() querySucceeded = True #only get here if above line doesn't raise exc except : querySucceeded = False Is this ok? Do you agree? What's a better way to deal with db.Timeouts? Edit: I now use this for any get queries """ Query gets single result """ def queryGet( gql ) : querySucceeded = False while not querySucceeded : try : result = db.GqlQuery( gql ).get() querySucceeded = True #only get here if above line doesn't raise except : querySucceeded = False return result I have similar functions for fetch and count. A: Here's a decorator to retry on db.Timeout, adapted from one from Kay framework: import logging, time from google.appengine.ext import db def retry_on_timeout(retries=3, interval=1.0, exponent=2.0): """A decorator to retry a given function performing db operations.""" def _decorator(func): def _wrapper(*args, **kwargs): count = 0 while True: try: return func(*args, **kwargs) except db.Timeout, e: logging.debug(e) if count >= retries: raise e else: sleep_time = (exponent ** count) * interval logging.warning("Retrying function %r in %d secs" % (func, sleep_time)) time.sleep(sleep_time) count += 1 return _wrapper return _decorator To use it, simply decorate any function that performs db operations and you'd like to do retries: @retry_on_timeout() def do_the_stuff(models): return db.put(models) A: Queries will occasionally fail. You can either show an error message to the user, or retry, as you're doing above. If you retry, however, you should use thread.sleep to add increasing amounts of delay (starting at, say, 50ms) on each retry - retries are more likely to succeed if they're not retried as fast as possible. 40 queries per request is a lot, though. You should consider refactoring your code - it must be possible to eliminate most of those! A: Take a look at this this python Autoretry Datastore Timeouts recipe. Similar to moraes answer, but you only need to call it once at initialization time, rather than decorate functions that perform datastore operations. http://appengine-cookbook.appspot.com/recipe/autoretry-datastore-timeouts A: See the new ROTModel in GAE-Utilities. The second discussion below shows how it does retries. It's a subclass of db.Model, so your classes can inherit from ROTModel instead, and take advantage of it's retries. http://code.google.com/p/gaeutilities http://groups.google.com/group/google-appengine/browse_thread/thread/ac51cc32196d62f8/aa6ccd47f217cb9a?lnk=gst&q=timeout#aa6ccd47f217cb9a
Dealing with db.Timeout on Google App Engine
I'm testing my application (on Google App Engine live servers) and the way I've written it I have about 40 db.GqlQuery() statements in my code (mostly part of classes). I keep getting db.Timeout very often though. How do I deal with this? I was going to surround all my queries with really brutal code like this: querySucceeded = False while not querySucceeded : try : result = db.GqlQuery( """xxx""" ).get() querySucceeded = True #only get here if above line doesn't raise exc except : querySucceeded = False Is this ok? Do you agree? What's a better way to deal with db.Timeouts? Edit: I now use this for any get queries """ Query gets single result """ def queryGet( gql ) : querySucceeded = False while not querySucceeded : try : result = db.GqlQuery( gql ).get() querySucceeded = True #only get here if above line doesn't raise except : querySucceeded = False return result I have similar functions for fetch and count.
[ "Here's a decorator to retry on db.Timeout, adapted from one from Kay framework:\nimport logging, time\nfrom google.appengine.ext import db\n\ndef retry_on_timeout(retries=3, interval=1.0, exponent=2.0):\n \"\"\"A decorator to retry a given function performing db operations.\"\"\"\n def _decorator(func):\n def _wrapper(*args, **kwargs):\n count = 0\n while True:\n try:\n return func(*args, **kwargs)\n except db.Timeout, e:\n logging.debug(e)\n if count >= retries:\n raise e\n else:\n sleep_time = (exponent ** count) * interval\n logging.warning(\"Retrying function %r in %d secs\" %\n (func, sleep_time))\n time.sleep(sleep_time)\n count += 1\n\n return _wrapper\n\n return _decorator\n\nTo use it, simply decorate any function that performs db operations and you'd like to do retries:\n@retry_on_timeout()\ndef do_the_stuff(models):\n return db.put(models)\n\n", "Queries will occasionally fail. You can either show an error message to the user, or retry, as you're doing above. If you retry, however, you should use thread.sleep to add increasing amounts of delay (starting at, say, 50ms) on each retry - retries are more likely to succeed if they're not retried as fast as possible.\n40 queries per request is a lot, though. You should consider refactoring your code - it must be possible to eliminate most of those!\n", "Take a look at this this python Autoretry Datastore Timeouts recipe. Similar to moraes answer, but you only need to call it once at initialization time, rather than decorate functions that perform datastore operations.\nhttp://appengine-cookbook.appspot.com/recipe/autoretry-datastore-timeouts\n", "See the new ROTModel in GAE-Utilities. The second discussion below shows how it does retries. \nIt's a subclass of db.Model, so your classes can inherit from ROTModel instead, and take advantage of it's retries. \nhttp://code.google.com/p/gaeutilities\nhttp://groups.google.com/group/google-appengine/browse_thread/thread/ac51cc32196d62f8/aa6ccd47f217cb9a?lnk=gst&q=timeout#aa6ccd47f217cb9a\n" ]
[ 7, 6, 2, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001456070_google_app_engine_python.txt
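As a middle ground between the question's bare retry loop and the decorator above, the helper below retries a bounded number of times and sleeps with an increasing delay between attempts, as the second answer suggests. It is just a sketch of the idea: gql is assumed to be a complete GQL string, and three retries with a 50 ms base delay are arbitrary choices.

import time
from google.appengine.ext import db

def query_get(gql, retries=3, base_delay=0.05):
    """Run a GQL query and return .get(), retrying on db.Timeout with growing delays."""
    for attempt in range(retries + 1):
        try:
            return db.GqlQuery(gql).get()
        except db.Timeout:
            if attempt == retries:
                raise                                   # give up; let the caller see the timeout
            time.sleep(base_delay * (2 ** attempt))     # 50 ms, 100 ms, 200 ms, ...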
Q: Which ldap object mapper for python can you recommend? I have to synchronize two different LDAP servers with different schemas. To make my life easier I'm searching for an object mapper for python like SQLobject/SQLAlchemy, but for LDAP. I found the following packages via pypi and google that might provide such functionality: pumpkin 0.1.0-beta1: Pumpkin is LDAP ORM (without R) for python. afpy.ldap 0.3: This module provide an easy way to deal with ldap stuff in python. bda.ldap 1.3.1: LDAP convenience library. Python LDAP Object Mapper: Provides an ORM-like (Django, Storm, SQLAlchemy, et al.) layer for LDAP in Python. ldapdict 1.4: Python package for connecting to LDAP, returning results as dictionary like classes. Results are cached. Which of these packages could you recommend? Or should I better use something different? A: If I were you I would either use python-ldap or ldaptor. Python-ldap is a wrapper for OpenLDAP so you may have problems with using it on Windows unless you are able to build from source. LDAPtor, is pure python so you avoid that problem. Also, there is a very well written, and graphical description of ldaptor on the website so you should be able to tell whether or not it will do the job you need, just by reading through this web page: http://eagain.net/talks/ldaptor/ A: little late maybe... bda.ldap (http://pypi.python.org/pypi/bda.ldap) wraps again python-ldap to a more simple API than python-ldap itself provides. Further it transparently handles query caching of results due to bda.cache (http://pypi.python.org/pypi/bda.cache). Additionally it provides a LDAPNode object for building end editing LDAP trees via a dict like API. It uses some ZTK stuff as well for integration purposes to the zope framework (primary due to zodict package in LDAPNode implementation). We recently released bda.ldap 1.4.0. If you take a look at README.txt#TODO, you see whats missing from our POV to declare the lib as final. Comments are always welcome, Cheers, Robert A: Giving links to the projects in question would help a lot. Being the developer of Python LDAP Object Mapper, I can tell that it is quite dead at the moment. If you (or anybody else) is up for taking it over, you're welcome :)
Which ldap object mapper for python can you recommend?
I have to synchronize two different LDAP servers with different schemas. To make my life easier I'm searching for an object mapper for python like SQLobject/SQLAlchemy, but for LDAP. I found the following packages via pypi and google that might provide such functionality: pumpkin 0.1.0-beta1: Pumpkin is LDAP ORM (without R) for python. afpy.ldap 0.3: This module provide an easy way to deal with ldap stuff in python. bda.ldap 1.3.1: LDAP convenience library. Python LDAP Object Mapper: Provides an ORM-like (Django, Storm, SQLAlchemy, et al.) layer for LDAP in Python. ldapdict 1.4: Python package for connecting to LDAP, returning results as dictionary like classes. Results are cached. Which of these packages could you recommend? Or should I better use something different?
[ "If I were you I would either use python-ldap or ldaptor. Python-ldap is a wrapper for OpenLDAP so you may have problems with using it on Windows unless you are able to build from source.\nLDAPtor, is pure python so you avoid that problem. Also, there is a very well written, and graphical description of ldaptor on the website so you should be able to tell whether or not it will do the job you need, just by reading through this web page:\nhttp://eagain.net/talks/ldaptor/\n", "little late maybe...\nbda.ldap (http://pypi.python.org/pypi/bda.ldap) wraps again python-ldap to a more simple API than python-ldap itself provides.\nFurther it transparently handles query caching of results due to bda.cache (http://pypi.python.org/pypi/bda.cache).\nAdditionally it provides a LDAPNode object for building end editing LDAP trees via a dict like API.\nIt uses some ZTK stuff as well for integration purposes to the zope framework (primary due to zodict package in LDAPNode implementation).\nWe recently released bda.ldap 1.4.0.\nIf you take a look at README.txt#TODO, you see whats missing from our POV to declare the lib as final.\nComments are always welcome,\nCheers,\nRobert\n", "Giving links to the projects in question would help a lot.\nBeing the developer of Python LDAP Object Mapper, I can tell that it is quite dead at the moment. If you (or anybody else) is up for taking it over, you're welcome :)\n" ]
[ 4, 3, 0 ]
[]
[]
[ "ldap", "orm", "python" ]
stackoverflow_0001544535_ldap_orm_python.txt
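For reference, the plain python-ldap calls that most of the wrappers discussed above build on look roughly like this. The server URI, bind DN, password and search base are made-up placeholders, so treat it as a sketch rather than a working configuration.

import ldap

conn = ldap.initialize("ldap://ldap.example.com")            # placeholder server URI
conn.simple_bind_s("cn=admin,dc=example,dc=com", "secret")    # placeholder credentials

# search_s returns a list of (dn, attrs) tuples; attrs maps attribute names to lists of values
results = conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                        "(objectClass=person)", ["cn", "mail"])
for dn, attrs in results:
    print dn, attrs.get("mail", [])

conn.unbind_s()

The ORM-style packages listed in the question essentially hide this plumbing behind mapped classes, so the choice is mostly about how much of it you want to write yourself when synchronizing the two servers.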
Q: Python if else micro-optimization In pondering optimization of code, I was wondering which was more expensive in python: if x: d = 1 else: d = 2 or d = 2 if x: d = 1 Any thoughts? I like the reduced line count in the second but wondered if reassignment was more costly than the condition switching. A: Don't ponder, don't wonder, measure -- with timeit at the shell command line (by far the best, simplest way to use it!). Python 2.5.4 on Mac OSX 10.5 on a laptop...: $ python -mtimeit -s'x=0' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0748 usec per loop $ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0685 usec per loop $ python -mtimeit -s'x=0' 'd=2' 'if x: d=1' 10000000 loops, best of 3: 0.0734 usec per loop $ python -mtimeit -s'x=1' 'd=2' 'if x: d=1' 10000000 loops, best of 3: 0.101 usec per loop so you see: the "just-if" form can save 1.4 nanoseconds when x is false, but costs 40.2 nanoseconds when x is true, compared with the "if/else" form; so, in a micro-optimization context, you should use the former only if x is 30 times more likely to be false than true, or thereabouts. Also: $ python -mtimeit -s'x=0' 'd=1 if x else 2' 10000000 loops, best of 3: 0.0736 usec per loop $ python -mtimeit -s'x=1' 'd=1 if x else 2' 10000000 loops, best of 3: 0.076 usec per loop ...the ternary operator of the if/else has its own miniscule pluses and minuses. When the differences are as tiny as this, you should measure repeatedly, establish what the noise level is, and ensure you're not taking differences "in the noise" as significant. For example, to compare statement vs expression if/else in the "x is true" case, repeat each a few times: $ python -mtimeit -s'x=1' 'd=1 if x else 2' 10000000 loops, best of 3: 0.076 usec per loop $ python -mtimeit -s'x=1' 'd=1 if x else 2' 10000000 loops, best of 3: 0.0749 usec per loop $ python -mtimeit -s'x=1' 'd=1 if x else 2' 10000000 loops, best of 3: 0.0742 usec per loop $ python -mtimeit -s'x=1' 'd=1 if x else 2' 10000000 loops, best of 3: 0.0749 usec per loop $ python -mtimeit -s'x=1' 'd=1 if x else 2' 10000000 loops, best of 3: 0.0745 usec per loop now you can state that the expression forms takes (on this machine and versions of key software) 74.2 to 76.0 nanoseconds -- the range is much more expressive than any single number would be. And similarly: $ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0688 usec per loop $ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0681 usec per loop $ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0687 usec per loop $ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0679 usec per loop $ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2' 10000000 loops, best of 3: 0.0692 usec per loop now you can state confidently that the statement form takes (under identical conditions) 67.9 to 69.2 nanoseconds; so its advantage, for x true, wrt the expression form, is 4.8 to 8.1 nanoseconds (it's quite fair to restrict this latter interval estimation to 6.3 to 6.8 nanoseconds, comparing min/min and max/max instead of min/max and max/min as the wider, more prudential estimate does). How much time and energy is worth devoting to such microscopic differences, once you've realized for any given care that they are microscopic, is, of course, a different issue. 
A: You should probably benchmark this, but there's also a third form that uses the ternary operator: d = 1 if x else 2 A: The second one should obviously be more expensive, it does the same operations if x is false and twice the assignments if x is true. Assumption: assignment is more expensive in python than a conditional jump, which makes sense since its interpreted and it has to read a run time hash to get the new value then upload it in the same hash. A: I would argue that the one that is most readable is the most optimized (for readability at least). The if...else construct makes it clear that you are dealing with an either/or case. The assignment construct might make more sense if (d==2) is the usual value and your if tests for an unusual case. This construct becomes less clear if your assignment gets moved away from the if. In this simple example it doesn't really matter. For more complex code, I would almost always optimize for readability, even at the expense of a few CPU cycles.
Python if else micro-optimization
In pondering optimization of code, I was wondering which was more expensive in python: if x: d = 1 else: d = 2 or d = 2 if x: d = 1 Any thoughts? I like the reduced line count in the second but wondered if reassignment was more costly than the condition switching.
[ "Don't ponder, don't wonder, measure -- with timeit at the shell command line (by far the best, simplest way to use it!). Python 2.5.4 on Mac OSX 10.5 on a laptop...:\n$ python -mtimeit -s'x=0' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0748 usec per loop\n$ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0685 usec per loop\n$ python -mtimeit -s'x=0' 'd=2' 'if x: d=1'\n10000000 loops, best of 3: 0.0734 usec per loop\n$ python -mtimeit -s'x=1' 'd=2' 'if x: d=1'\n10000000 loops, best of 3: 0.101 usec per loop\n\nso you see: the \"just-if\" form can save 1.4 nanoseconds when x is false, but costs 40.2 nanoseconds when x is true, compared with the \"if/else\" form; so, in a micro-optimization context, you should use the former only if x is 30 times more likely to be false than true, or thereabouts. Also:\n$ python -mtimeit -s'x=0' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.0736 usec per loop\n$ python -mtimeit -s'x=1' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.076 usec per loop\n\n...the ternary operator of the if/else has its own miniscule pluses and minuses.\nWhen the differences are as tiny as this, you should measure repeatedly, establish what the noise level is, and ensure you're not taking differences \"in the noise\" as significant. For example, to compare statement vs expression if/else in the \"x is true\" case, repeat each a few times:\n$ python -mtimeit -s'x=1' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.076 usec per loop\n$ python -mtimeit -s'x=1' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.0749 usec per loop\n$ python -mtimeit -s'x=1' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.0742 usec per loop\n$ python -mtimeit -s'x=1' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.0749 usec per loop\n$ python -mtimeit -s'x=1' 'd=1 if x else 2'\n10000000 loops, best of 3: 0.0745 usec per loop\n\nnow you can state that the expression forms takes (on this machine and versions of key software) 74.2 to 76.0 nanoseconds -- the range is much more expressive than any single number would be. 
And similarly:\n$ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0688 usec per loop\n$ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0681 usec per loop\n$ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0687 usec per loop\n$ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0679 usec per loop\n$ python -mtimeit -s'x=1' 'if x: d=1' 'else: d=2'\n10000000 loops, best of 3: 0.0692 usec per loop\n\nnow you can state confidently that the statement form takes (under identical conditions) 67.9 to 69.2 nanoseconds; so its advantage, for x true, wrt the expression form, is 4.8 to 8.1 nanoseconds (it's quite fair to restrict this latter interval estimation to 6.3 to 6.8 nanoseconds, comparing min/min and max/max instead of min/max and max/min as the wider, more prudential estimate does).\nHow much time and energy is worth devoting to such microscopic differences, once you've realized for any given care that they are microscopic, is, of course, a different issue.\n", "You should probably benchmark this, but there's also a third form that uses the ternary operator:\nd = 1 if x else 2\n\n", "The second one should obviously be more expensive, it does the same operations if x is false and twice the assignments if x is true.\nAssumption: assignment is more expensive in python than a conditional jump, which makes sense since its interpreted and it has to read a run time hash to get the new value then upload it in the same hash.\n", "I would argue that the one that is most readable is the most optimized (for readability at least).\nThe if...else construct makes it clear that you are dealing with an either/or case.\nThe assignment construct might make more sense if (d==2) is the usual value and your if tests for an unusual case. This construct becomes less clear if your assignment gets moved away from the if.\nIn this simple example it doesn't really matter. For more complex code, I would almost always optimize for readability, even at the expense of a few CPU cycles.\n" ]
[ 20, 5, 2, 1 ]
[]
[]
[ "micro_optimization", "python" ]
stackoverflow_0001959944_micro_optimization_python.txt
Q: GAE template code to check is item in the list How to use "in" statement to check is item in the list or not. If I use: {% for picture in pictures %} {% if picture in article.pictures %} <input type="checkbox" checked="true" name="picture" value="{{ picture.key }}" /> {% else %} <input type="checkbox" name="picture" value="{{ picture.key }}" /> {% endif %} <img src='/img?img_id={{ picture.key }}'></img> <br /> {% endfor %} this is failing with: TemplateSyntaxError: 'if' statement improperly formatted on line {% if picture in article.pictures %} help? A: By default, Django templates do not support full conditional expressions. You can check if one value is "true" with if, or you can check whether two values are equal with ifequal, etc. Perhaps you can decorate your pictures in the view before you render the template. for picture in pictures: picture.is_in_article = (picture in article.pictures) Then in the template you can act on the value of that new attribute. {% for picture in pictures %} {% if picture.is_in_article %} <input type="checkbox" checked="true" name="picture" value="{{ picture.key }}" /> {% else %} <input type="checkbox" name="picture" value="{{ picture.key }}" /> {% endif %} <img src='/img?img_id={{ picture.key }}'></img> <br /> {% endfor %} A: I've not worked with GAE, but the code for a custom "ifin" Django tag can be found in the patch here. As mentioned in my comment, it looks like that functionality may be implemented in Django 1.2
GAE template code to check if an item is in a list
How can I use the "in" operator in a template to check whether an item is in a list or not? If I use: {% for picture in pictures %} {% if picture in article.pictures %} <input type="checkbox" checked="true" name="picture" value="{{ picture.key }}" /> {% else %} <input type="checkbox" name="picture" value="{{ picture.key }}" /> {% endif %} <img src='/img?img_id={{ picture.key }}'></img> <br /> {% endfor %} it fails with: TemplateSyntaxError: 'if' statement improperly formatted on the line {% if picture in article.pictures %} Can anyone help?
[ "By default, Django templates do not support full conditional expressions. You can check if one value is \"true\" with if, or you can check whether two values are equal with ifequal, etc.\nPerhaps you can decorate your pictures in the view before you render the template.\nfor picture in pictures:\n picture.is_in_article = (picture in article.pictures)\n\nThen in the template you can act on the value of that new attribute.\n{% for picture in pictures %}\n {% if picture.is_in_article %}\n <input type=\"checkbox\" checked=\"true\" name=\"picture\" value=\"{{ picture.key }}\" />\n {% else %}\n <input type=\"checkbox\" name=\"picture\" value=\"{{ picture.key }}\" />\n {% endif %}\n <img src='/img?img_id={{ picture.key }}'></img> <br />\n{% endfor %}\n\n", "I've not worked with GAE, but the code for a custom \"ifin\" Django tag can be found in the patch here. As mentioned in my comment, it looks like that functionality may be implemented in Django 1.2\n" ]
[ 2, 1 ]
[]
[]
[ "google_app_engine", "python", "templates" ]
stackoverflow_0001960022_google_app_engine_python_templates.txt
Q: Start a "throwaway" MySQL session for testing code? If I want to be able to test my application against a empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to a empty (not saved anywhere, or in saved to /tmp) MySQL database? My application is in Python, and I'm using unittest on Ubuntu 9.10. A: --datadir for just the data or --basedir A: You can try the Blackhole and Memory table types in MySQL.
Start a "throwaway" MySQL session for testing code?
If I want to be able to test my application against an empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user that refers to an empty (not saved anywhere, or saved only to /tmp) MySQL database? My application is in Python, and I'm using unittest on Ubuntu 9.10.
[ "--datadir for just the data or --basedir\n", "You can try the Blackhole and Memory table types in MySQL.\n" ]
[ 1, 0 ]
[]
[]
[ "mysql", "python", "ubuntu", "unit_testing" ]
stackoverflow_0001960155_mysql_python_ubuntu_unit_testing.txt
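One practical way to get an empty database for every test run, without touching existing data, is to create and drop a scratch schema in setUp/tearDown. This is only a sketch of that approach: it assumes a MySQL server is already running and reachable, that the MySQLdb module is installed, and the credentials and database name are placeholders. It does not start a private mysqld the way the --datadir/--basedir answer would, but it does give each run a clean slate.

import unittest
import MySQLdb

SCRATCH_DB = "throwaway_test_db"   # placeholder name; dropped and recreated around every run

class MyAppTests(unittest.TestCase):
    def setUp(self):
        self.conn = MySQLdb.connect(user="testuser", passwd="testpass")   # placeholder credentials
        cur = self.conn.cursor()
        cur.execute("DROP DATABASE IF EXISTS %s" % SCRATCH_DB)
        cur.execute("CREATE DATABASE %s" % SCRATCH_DB)
        self.conn.select_db(SCRATCH_DB)
        # create whatever tables the application under test expects here

    def tearDown(self):
        cur = self.conn.cursor()
        cur.execute("DROP DATABASE %s" % SCRATCH_DB)
        self.conn.close()

if __name__ == "__main__":
    unittest.main()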
Q: How does this class implement the "__iter__" method without implementing "next"? I have the following code in django.template: class Template(object): def __init__(self, template_string, origin=None, name='<Unknown Template>'): try: template_string = smart_unicode(template_string) except UnicodeDecodeError: raise TemplateEncodingError("Templates can only be constructed from unicode or UTF-8 strings.") if settings.TEMPLATE_DEBUG and origin is None: origin = StringOrigin(template_string) self.nodelist = compile_string(template_string, origin) self.name = name def __iter__(self): for node in self.nodelist: for subnode in node: yield subnode def render(self, context): "Display stage -- can be called many times" return self.nodelist.render(context) The part I am confused about is below. How does this __iter__ method work? I can't find any corresponding next method. def __iter__(self): for node in self.nodelist: for subnode in node: yield subnode This is the only way that I know how to implement __iter__: class a(object): def __init__(self,x=10): self.x = x def __iter__(self): return self def next(self): if self.x > 0: self.x-=1 return self.x else: raise StopIteration ainst = a() for item in aisnt: print item In your answers, try to use code examples rather than text, because my English is not very good. A: From the docs: If a container object’s __iter__() method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the __iter__() and __next__() methods. Here is your provided example using a generator: class A(): def __init__(self, x=10): self.x = x def __iter__(self): for i in reversed(range(self.x)): yield i a = A() for item in a: print(item) A: That __iter__method returns a python generator (see the documentation), as it uses the yield keyword. The generator will provide the next() method automatically; quoting the documentation: What makes generators so compact is that the __iter__() and next() methods are created automatically. EDIT: Generators are really useful. If you are not familiar with them, I suggest you readup on them, and play around with some test code. Here is some more info on iterators and generators from StackOverflow.
How does this class implement the "__iter__" method without implementing "next"?
I have the following code in django.template: class Template(object): def __init__(self, template_string, origin=None, name='<Unknown Template>'): try: template_string = smart_unicode(template_string) except UnicodeDecodeError: raise TemplateEncodingError("Templates can only be constructed from unicode or UTF-8 strings.") if settings.TEMPLATE_DEBUG and origin is None: origin = StringOrigin(template_string) self.nodelist = compile_string(template_string, origin) self.name = name def __iter__(self): for node in self.nodelist: for subnode in node: yield subnode def render(self, context): "Display stage -- can be called many times" return self.nodelist.render(context) The part I am confused about is below. How does this __iter__ method work? I can't find any corresponding next method. def __iter__(self): for node in self.nodelist: for subnode in node: yield subnode This is the only way that I know how to implement __iter__: class a(object): def __init__(self,x=10): self.x = x def __iter__(self): return self def next(self): if self.x > 0: self.x-=1 return self.x else: raise StopIteration ainst = a() for item in aisnt: print item In your answers, try to use code examples rather than text, because my English is not very good.
[ "From the docs: \n\nIf a container object’s __iter__()\n method is implemented as a generator,\n it will automatically return an\n iterator object (technically, a\n generator object) supplying the\n __iter__() and __next__() methods.\n\nHere is your provided example using a generator:\nclass A():\n def __init__(self, x=10):\n self.x = x\n def __iter__(self):\n for i in reversed(range(self.x)):\n yield i\n\na = A()\nfor item in a:\n print(item)\n\n", "That __iter__method returns a python generator (see the documentation), as it uses the yield keyword.\nThe generator will provide the next() method automatically; quoting the documentation:\n\nWhat makes generators so compact is that the __iter__() and next() methods are created \n automatically.\n\nEDIT: \nGenerators are really useful. If you are not familiar with them, I suggest you readup on them, and play around with some test code.\nHere is some more info on iterators and generators from StackOverflow.\n" ]
[ 41, 16 ]
[]
[]
[ "iterator", "python", "yield" ]
stackoverflow_0001960309_iterator_python_yield.txt
Q: how can i use '@' by myself function like '@staticmethod' the next is my code,it can print 'xxx', but run wrong at last: def a(object): print 'xxx' @a def b(): return 'bbb' b() In your answers, please try to use code examples rather than text, because my English is not very good. Thank you. A: The decorator form @a means: @a def b... is exactly the same as: def b... b = a(b) So, write a as a higher order function, AKA HOF: specifically, a function that takes a function object as an argument, and returns a function object. As you give NO idea in your question about what a is supposed to DO, you're really making it impossible to give a code example that makes any sense whatsoever: good English or not, you're really polluting SO, not contributing to it, by your questions, since you never explain WHAT are you trying to accomplish in your code!!! A: def a(b): print 'xxx' return b @a def b(): return 'bbb' b() This is the same as: def a(b): print 'xxx' return b def b(): return 'bbb' b = a(b) b()
How can I use '@' with my own function, like '@staticmethod'?
Here is my code; it prints 'xxx', but then goes wrong at the end: def a(object): print 'xxx' @a def b(): return 'bbb' b() In your answers, please try to use code examples rather than text, because my English is not very good. Thank you.
[ "The decorator form @a means:\n@a\ndef b...\n\nis exactly the same as:\ndef b...\n\nb = a(b)\n\nSo, write a as a higher order function, AKA HOF: specifically, a function that takes a function object as an argument, and returns a function object.\nAs you give NO idea in your question about what a is supposed to DO, you're really making it impossible to give a code example that makes any sense whatsoever: good English or not, you're really polluting SO, not contributing to it, by your questions, since you never explain WHAT are you trying to accomplish in your code!!!\n", "def a(b):\n print 'xxx'\n return b\n\n@a\ndef b():\n return 'bbb'\nb()\n\nThis is the same as:\ndef a(b):\n print 'xxx'\n return b\n\n\ndef b():\n return 'bbb'\n\nb = a(b)\nb()\n\n" ]
[ 5, 1 ]
[]
[]
[ "decorator", "python" ]
stackoverflow_0001960659_decorator_python.txt
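Since the title mentions '@staticmethod'-style usage, here is one more made-up example of the pattern the first answer describes in prose: a decorator written as a higher-order function that takes a function and returns a replacement that wraps it, instead of returning the original unchanged.

def announce(func):
    """A decorator: takes a function and returns a wrapped replacement for it."""
    def wrapper(*args, **kwargs):
        print 'calling', func.__name__
        result = func(*args, **kwargs)
        print 'done with', func.__name__
        return result
    return wrapper

@announce
def b():
    return 'bbb'

print b()   # prints the two announcements, then 'bbb'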
Q: why does my 'join' function run wrong b=','.join([1,2,3,4,5]) print b I want it to print the string: '1,2,3,4,5' In your answers, please try to use code examples rather than text, because my English is not very good. Thank you. A: b = ','.join(map(str, [1,2,3,4,5])) # => '1,2,3,4,5' Python doesn't automatically turn the ints into strings--you have to convert them to strings first, then join them. A: anystring.join takes an iterable of STRINGS, not one of integers, which is what you're passing to it! So, use ','.join(str(x) for x in range(1, 6)) or the like. A: The join function expects strings not integers, if you did b=','.join(["1","2","3","4","5"]) instead it works. Here's the consoles output: >>> b=','.join(["1","2","3","4","5"]) >>> print b 1,2,3,4,5 >>>
Why does my 'join' call go wrong?
b=','.join([1,2,3,4,5]) print b I want it to print the string: '1,2,3,4,5' In your answers, please try to use code examples rather than text, because my English is not very good. Thank you.
[ "b = ','.join(map(str, [1,2,3,4,5]))\n# => '1,2,3,4,5'\n\nPython doesn't automatically turn the ints into strings--you have to convert them to strings first, then join them.\n", "anystring.join takes an iterable of STRINGS, not one of integers, which is what you're passing to it!\nSo, use ','.join(str(x) for x in range(1, 6)) or the like.\n", "The join function expects strings not integers, if you did b=','.join([\"1\",\"2\",\"3\",\"4\",\"5\"]) instead it works. \nHere's the consoles output:\n>>> b=','.join([\"1\",\"2\",\"3\",\"4\",\"5\"])\n>>> print b\n1,2,3,4,5\n>>>\n\n" ]
[ 7, 7, 4 ]
[]
[]
[ "python" ]
stackoverflow_0001960698_python.txt
Q: Write to file as Json format? I have A method for format the ouput as json. My keyword_filter will be pass in this this format: <QueryDict: {u'customer_type': [u'ABC'], u'tag': [u'2']}> <QueryDict: {u'customer_type': [u'TDO'], u'tag': [u'3']}> <QueryDict: {u'customer_type': [u'FRI'], u'tag': [u'2,3']}> In fact this I got from request.GET (keyword_filter=request.GET) This is my method:(I am trying) def save_fiter_to_JSON(self, dest, keyword_filter): fwrite = open(dest, 'a') #keyword_filter = <QueryDict: {u'customer_type': [u'FRI'], u'tag': [u'2,3']}> string_input1 =string.replace(str(keyword_filter), '<QueryDict:', '["name:"') string_input2 = string.replace(string_input1, '>', '') fwrite.write(string_input2+",\n") fwrite.close() Everybody here Could help me? The json format That I Want. [ {"name": filter_name, "customer_type": "ABC", "tag": [2,3]}, ] Or the other good one format from you . import simplejson as json >>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}]) '["foo", {"bar": ["baz", null, 1.0, 2]}]' **filter_name will be pass from the method save_fiter_to_JSON. Merry Christmas And a happy New Year. ... A: Some tips: you can convert django's QueryDict to to Python dictionary with dict(keyword_filter) expression, you can add additional record to the dictionary with dict(keyword_filter, name=filter_name) expression. Then use json module to dump JSON and write it to the file. A: Your question is difficult to understand. I am not sure what you need. Here is my best attempt to solve your problem. def save_fiter_to_JSON(self, dest, filter_name, keyword_filter): # start with an empty list lst = [] # I don't know where you will get your qd (QueryDict instance) # filter something using keyword_filter? Replace this with actual code for qd in ??FILTER_SOMETHING??(keyword_filter): # make a mutable copy of the QueryDict d = qd.copy() # update the copy by adding "name" d["name"] = filter_name # append dict instance to end of list lst.append(d) # get a string with JSON encoding the list s = json.dumps(lst) f = open(dest, 'a') f.write(s + "\n") f.close()
Write to a file in JSON format?
I have a method for formatting the output as JSON. My keyword_filter will be passed in this format: <QueryDict: {u'customer_type': [u'ABC'], u'tag': [u'2']}> <QueryDict: {u'customer_type': [u'TDO'], u'tag': [u'3']}> <QueryDict: {u'customer_type': [u'FRI'], u'tag': [u'2,3']}> In fact I got this from request.GET (keyword_filter=request.GET). This is the method I am trying: def save_fiter_to_JSON(self, dest, keyword_filter): fwrite = open(dest, 'a') #keyword_filter = <QueryDict: {u'customer_type': [u'FRI'], u'tag': [u'2,3']}> string_input1 = string.replace(str(keyword_filter), '<QueryDict:', '["name:"') string_input2 = string.replace(string_input1, '>', '') fwrite.write(string_input2+",\n") fwrite.close() Could anybody here help me? This is the JSON format that I want: [ {"name": filter_name, "customer_type": "ABC", "tag": [2,3]}, ] Or any other good format you suggest. import simplejson as json >>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}]) '["foo", {"bar": ["baz", null, 1.0, 2]}]' **filter_name will be passed from the method save_fiter_to_JSON. Merry Christmas and a happy New Year. ...
[ "Some tips:\n\nyou can convert django's QueryDict to to Python dictionary with dict(keyword_filter) expression,\nyou can add additional record to the dictionary with dict(keyword_filter, name=filter_name) expression.\n\nThen use json module to dump JSON and write it to the file.\n", "Your question is difficult to understand. I am not sure what you need. Here is my best attempt to solve your problem.\ndef save_fiter_to_JSON(self, dest, filter_name, keyword_filter):\n # start with an empty list\n lst = []\n\n # I don't know where you will get your qd (QueryDict instance)\n # filter something using keyword_filter? Replace this with actual code\n for qd in ??FILTER_SOMETHING??(keyword_filter):\n # make a mutable copy of the QueryDict\n d = qd.copy()\n # update the copy by adding \"name\"\n d[\"name\"] = filter_name\n # append dict instance to end of list\n lst.append(d)\n\n # get a string with JSON encoding the list\n s = json.dumps(lst)\n\n f = open(dest, 'a')\n f.write(s + \"\\n\")\n f.close()\n\n" ]
[ 3, 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001960873_django_python.txt
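Putting the first answer's tips together, a minimal version of the method might look like the sketch below (renamed save_filter_to_json here). It assumes keyword_filter behaves like a mapping, as Django's request.GET does; note that converting a QueryDict with dict() keeps only one value per key, so multi-valued tags may need extra handling. The file name and filter label in the usage line are made up.

import json   # on older Pythons: import simplejson as json

def save_filter_to_json(dest, filter_name, keyword_filter):
    # dict() copies the mapping, and the extra keyword argument adds the "name" entry
    record = dict(keyword_filter, name=filter_name)
    with open(dest, 'a') as f:
        f.write(json.dumps([record]) + "\n")

# hypothetical usage with a plain dict standing in for request.GET
save_filter_to_json("filters.json", "my filter",
                    {"customer_type": ["ABC"], "tag": ["2", "3"]})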
Q: Refreshing a window in Tkinter I am trying to make a GUI in Tkinter and am wondering how to refresh a window, namely if I fill in a rectangle, I want the GUI to delete it a specified time later. How would I go about doing this? Documentation on Tkinter seems to be thin... A: Each Tkinter widget has a after method, which you can use to call your rectangle delete function e.g. in the example below first I change a msg using after, and then destruct the window using after from Tkinter import * def changeMsg(): label.configure(text="I will self destruct in 2 secs") label.after(2000, root.destroy) root = Tk() mainContainer = Frame(root) label = Label(mainContainer, text="") label.configure(text="msg will change in 3 secs") label.pack(side=LEFT, ipadx=5, ipady=5) mainContainer.pack() label.after(3000, changeMsg) root.title("Timed event") root.mainloop()
Refreshing a window in Tkinter
I am trying to make a GUI in Tkinter and am wondering how to refresh a window, namely if I fill in a rectangle, I want the GUI to delete it a specified time later. How would I go about doing this? Documentation on Tkinter seems to be thin...
[ "Each Tkinter widget has a after method, which you can use to call your rectangle delete function e.g. in the example below first I change a msg using after, and then destruct the window using after\nfrom Tkinter import *\n\ndef changeMsg():\n label.configure(text=\"I will self destruct in 2 secs\")\n label.after(2000, root.destroy)\n\nroot = Tk()\nmainContainer = Frame(root)\nlabel = Label(mainContainer, text=\"\")\nlabel.configure(text=\"msg will change in 3 secs\")\nlabel.pack(side=LEFT, ipadx=5, ipady=5)\nmainContainer.pack()\nlabel.after(3000, changeMsg)\nroot.title(\"Timed event\")\nroot.mainloop()\n\n" ]
[ 5 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0001960725_python_tkinter.txt
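Applied to the rectangle case in the question above, the same after() idea works on a Canvas: draw the item, then schedule canvas.delete() for its id. A minimal sketch, with the canvas size, coordinates and 2-second delay chosen arbitrarily:

from Tkinter import Tk, Canvas

root = Tk()
canvas = Canvas(root, width=200, height=150)
canvas.pack()

rect = canvas.create_rectangle(30, 30, 170, 120, fill="blue")

# after() takes a delay in milliseconds; delete() removes the item with that id
canvas.after(2000, lambda: canvas.delete(rect))

root.mainloop()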
Q: Python: Regex needed This is probably simple, but I can't figure this out: I need regex expression which would extract following records (Each record may span multiple lines and delimited by one or more blank lines): TextTextTextTextTextTextText TextTextTextTextTextTextTextTextText (one or more blank lines) TextTextTextTextText TextTextText TextTextTextTextTextTextText (one or more blank lines) TextTextTextTextText TextTextTextTextTextTextTextTextTextText A: import re re.split('\n\n+', text)
Python: Regex needed
This is probably simple, but I can't figure this out: I need regex expression which would extract following records (Each record may span multiple lines and delimited by one or more blank lines): TextTextTextTextTextTextText TextTextTextTextTextTextTextTextText (one or more blank lines) TextTextTextTextText TextTextText TextTextTextTextTextTextText (one or more blank lines) TextTextTextTextText TextTextTextTextTextTextTextTextTextText
[ "import re\nre.split('\\n\\n+', text)\n\n" ]
[ 4 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001961298_python_regex.txt
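To make the one-liner above concrete, here is a small self-contained run on text shaped like the question's example; the record strings are obviously made up, and the list comprehension just drops any empty leftovers.

import re

text = ("RecordOneLineOne\nRecordOneLineTwo\n"
        "\n\n"
        "RecordTwoLineOne\nRecordTwoLineTwo\nRecordTwoLineThree\n"
        "\n"
        "RecordThreeLineOne\nRecordThreeLineTwo\n")

records = [r for r in re.split('\n\n+', text) if r.strip()]
for i, rec in enumerate(records):
    print i + 1, repr(rec)

If the blank lines in the real data can contain stray spaces or tabs, a pattern like '\n\s*\n' is slightly more forgiving.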
Q: Where does execution resume following an exception? In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages? A: The code inside the catch block is executed and the original execution continues right after the catch block. A: the execution resumes where the exception is caught, that is at the beginning of the catch block which specifically address the current exception type. the catch block is executed, the other catch blocks are ignored (think of multiple catch block as a switch statement). in some languages, a finally block may also be executed after the catch. then the program proceed with the next instruction following the whole try ... catch ... finally .... you should note that if an exception is not caught in a block, the exception is propagated to the caller of the current function, and up the call stack until a catch processes the exception. in this case, you can think of function calls like a macro: insert the code of each function where it is called, and you will clearly see the nesting of every try .. catch ... finally ... blocks. if there is no handler for an exception, the program generally crashes. (some languages may be different on this point). the behavior for the execution flow is consistent accross every languages i know. the only difference lies in the try ... catch ... finally ... construct: the finally does not exists in every language, some languages does not allow a finally and a catch in the same block (you have to nest two try to use the 2), some languages allows to catch everything (the catch (...) in C++) while some languages don't. A: Execution continues in the catch block (where the exception was caught). This is consistent across languages that uses exceptions. The important point to note (especially in C++) Between the throw and the catch point the stack is unwound in an orderly manor so that all objects created on the stack are correctly destroyed (in the expected order). This has resulted in the technique knows as RAII. A: I don't have my copy of Bjarne Stroustrup's "Design & Evolution" handy, but I believe he wrote in there about some experience with resumable exceptions. They found that they made things considerably harder to get correct. After all, if an unexpected error happens in some line, your exception handler then has to patch the problem up sufficiently to allow execution to resume without knowing the context. This may be possible for an out-of-memory error (although such errors are frequently a result of runaway memory allocation, and adding some more memory won't really fix anything), but not for exceptions in general. So, in C++ and all languages I'm familiar with, execution resumes with the catch, and doesn't automatically go back to the place that threw the exception. A: It resumes where the exception is caught. Otherwise, what would be the point of writing the exception clause?
Where does execution resume following an exception?
In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages?
[ "The code inside the catch block is executed and the original execution continues right after the catch block.\n", "the execution resumes where the exception is caught, that is at the beginning of the catch block which specifically address the current exception type. the catch block is executed, the other catch blocks are ignored (think of multiple catch block as a switch statement). in some languages, a finally block may also be executed after the catch. then the program proceed with the next instruction following the whole try ... catch ... finally ....\nyou should note that if an exception is not caught in a block, the exception is propagated to the caller of the current function, and up the call stack until a catch processes the exception. in this case, you can think of function calls like a macro: insert the code of each function where it is called, and you will clearly see the nesting of every try .. catch ... finally ... blocks. \nif there is no handler for an exception, the program generally crashes. (some languages may be different on this point).\nthe behavior for the execution flow is consistent accross every languages i know. the only difference lies in the try ... catch ... finally ... construct: the finally does not exists in every language, some languages does not allow a finally and a catch in the same block (you have to nest two try to use the 2), some languages allows to catch everything (the catch (...) in C++) while some languages don't.\n", "Execution continues in the catch block (where the exception was caught).\nThis is consistent across languages that uses exceptions.\nThe important point to note (especially in C++)\nBetween the throw and the catch point the stack is unwound in an orderly manor so that all objects created on the stack are correctly destroyed (in the expected order). This has resulted in the technique knows as RAII.\n", "I don't have my copy of Bjarne Stroustrup's \"Design & Evolution\" handy, but I believe he wrote in there about some experience with resumable exceptions. They found that they made things considerably harder to get correct. After all, if an unexpected error happens in some line, your exception handler then has to patch the problem up sufficiently to allow execution to resume without knowing the context. This may be possible for an out-of-memory error (although such errors are frequently a result of runaway memory allocation, and adding some more memory won't really fix anything), but not for exceptions in general.\nSo, in C++ and all languages I'm familiar with, execution resumes with the catch, and doesn't automatically go back to the place that threw the exception.\n", "It resumes where the exception is caught. Otherwise, what would be the point of writing the exception clause?\n" ]
[ 7, 4, 2, 2, 1 ]
[]
[]
[ "c++", "exception", "python" ]
stackoverflow_0001961158_c++_exception_python.txt
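A short Python illustration of the control flow described in the answers above: the statements after the raise are skipped, execution jumps to the matching except block, and it then continues after the whole try/except/finally rather than returning to the raise site. The function and messages are made up purely for the demonstration.

def risky():
    print "before the raise"
    raise ValueError("boom")
    print "this line never runs"        # execution does not resume here

try:
    risky()
    print "this is skipped too"         # the rest of the try block is abandoned
except ValueError as err:
    print "caught:", err                # execution resumes here, at the handler
finally:
    print "finally always runs"

print "and then the program continues after the try/except/finally block"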
Q: what exactly is random.random doing random.shuffle(lst_shuffle, random.random) I know the latter part is an optional argument. But what does it do exactly. I don't understand what this mean. This is from the docs. random.random()¶ Return the next random floating point number in the range [0.0, 1.0). I also see this, is this what this range 0,0, 1,0 means? Pseudorandom number generators Most, if not all programming languages have libraries that include a pseudo-random number generator. This generator usually returns a random number between 0 and 1 (not including 1). In a perfect generator all numbers have the same probability of being selected but in the pseudo generators some numbers have zero probability. A: Existing answers do a good job of addressing the question's specific, but I think it's worth mentioning a side issue: why you're particularly likely to want to pass an alternative "random generator" to shuffle as opposed to other functions in the random module. Quoting the docs: Note that for even rather small len(x), the total number of permutations of x is larger than the period of most random number generators; this implies that most permutations of a long sequence can never be generated. The phrase "random number generators" here refers to what may be more pedantically called pseudo-random number generators -- generators that give a good imitation of randomness, but are entirely algorithmic, and therefore are known not to be "really random". Any such algorithmic approach will have a "period" -- it will start repeating itself eventually. Python's random module uses a particularly good and well-studied pseudo-random generator, the Mersenne Twister, with a period of 2**19937-1 -- a number that has more than 6 thousand digits when written out in decimal digits, as len(str(2**19937-1)) will confirm;-). On my laptop I can generate about 5 million such numbers per second: $ python -mtimeit -s'import random' 'random.random()' 1000000 loops, best of 3: 0.214 usec per loop Assuming a much faster machine, able to generate a billion such numbers per second, the cycle would take about 105985 years to repeat -- and the best current estimate for the age of the Universe is a bit less than 1.5*1012 years. It would thus take an almost-unimaginable number of Universe-lifetimes to reach the point of repetition;-). Making the computation parallel wouldn't help much; there are estimated to be about 1080 atoms in the Universe, so even if you were able to run such a billion-per-second generator on each atom in the Universe, it would still take well over 105800 Universe-lifetimes to start repeating. So, you might be justified in suspecting that this worry about repetition is a tiny little bit of a theoretical, rather than practical, issue;-). Nevertheless, factorials (which count the permutations of a sequence of length N) also grow pretty fast. The Mersenne Twister, for example, might be able to produce all permutations of a sequence of length 2080, but definitely not of one of length 2081 or higher. 
Were it not for the "lifetime of the Universe" issue, the docs' worry about "even rather small len(x)" would be justified -- we know that many possible permutations can never be reached by shuffling with such a pseudo-RNG, as soon as we have a reasonably long sequence, so one might worry about what kind of bias we're actually introducing with even a few shuffles!: -) os.urandom mediates access to whatever sources of physical randomness the OS provides -- CryptGenRandom on Windows, /dev/urandom on Linux, etc. os.urandom gives sequences of bytes, but with the help of struct it's easy to make them into random numbers: >>> n = struct.calcsize('I') >>> def s2i(s): return struct.unpack('I', s)[0] ... >>> maxi = s2i(b'\xff'*n) + 1 >>> maxi = float(s2i(b'\xff'*n) + 1) >>> def rnd(): return s2i(os.urandom(n))/maxi Now we can call random.shuffle(somelist, rnd) and worry less about bias;-). Unfortunately, measurement shows that this approach to RNG is about 50 times slower than calls to random.random() -- this could be an important practical consideration if we're going to need many random numbers (and if we don't, the worry about possible bias may be misplaced;-). The os.urandom approach is also hard to use in predictable, repeatable ways (e.g., for testing purposes), while with random.random() you need only provide a fixed initial random.seed at the start of the test to guarantee reproducible behavior. In practice, therefore, os.urandom is only used when you need "cryptographic quality" random numbers - ones that a determined attacker can't predict - and are therefore willing to pay the practical price for using it instead of random.random. A: The second argument is used to specify which random number generator to use. This could be useful if you need/have something "better" than random.random. Security-sensitive applications might need to use a cryptographically secure random number generator. The difference between random.random and random.random() is that the first one is a reference to the function that produces simple random numbers, and the second one actually calls that function. If you had another random number generator, you wanted to use, you could say random.shuffle(x, my_random_number_function) As to what random.random (the default generator) is doing, it uses an algorithm called the Mersenne twister to create a seemingly random floating point number between 0 and 1 (not including 1), all the numbers in that interval being of equal likelihood. That the interval is from 0 to 1 is just a convention. A: The second argument is the function which is called to produce random numbers that are in turn used to shuffle the sequence (first argument). The default function used if you don't provide your own is random.random. You might want to provide this parameter if you want to customize how shuffle is performed. And your customized function will have to return numbers in range [0.0, 1.0) - 0.0 included, 1.0 excluded. A: The docs go on saying: The optional argument random is a 0-argument function returning a random float in [0.0, 1.0); by default, this is the function random(). It means that you can either specify your own random number generator function, or tell the module to use the default random function. The second option is almost always the best choice, because Python uses a pretty good PRNG. The function it expects is supposed to return a floating point pseudo-random number in the range [0.0, 1.0), which means 0.0 included and 1.0 isn't included (i.e. 
0.9999 is a valid number to be returned, but 1.0 is not). Each number in this range should be in theory returned with equal probability (i.e. this is a linear distribution). A: The shuffle function depends on an RNG (Random Number Generator), which defaults to random.random. The second argument is there so you can provide your own RNG instead of the default. UPDATE: The second argument is a random number generator that generates a new, random number in the range [0.0, 1.0) each time you call it. Here's an example for you: import random def a(): return 0.0 def b(): return 0.999999999999 arr = [1,2,3] random.shuffle(arr) print arr # prints [1, 3, 2] arr.sort() print arr # prints [1, 2, 3] random.shuffle(arr) print arr # prints [3, 2, 1] arr.sort() random.shuffle(arr, a) print arr # prints [2, 3, 1] arr.sort() random.shuffle(arr, a) print arr # prints [2, 3, 1] arr.sort() random.shuffle(arr, b) print arr # prints [1, 2, 3] arr.sort() random.shuffle(arr, b) print arr # prints [1, 2, 3] So if the function always returns the same value, you always get the same permutation. If the function returns random values each time it's called, you get a random permutation. A: From the example: >>> random.random() # Random float x, 0.0 <= x < 1.0 0.37444887175646646 It generates a random floating point number between 0 and 1.
what exactly is random.random doing
random.shuffle(lst_shuffle, random.random) I know the latter part is an optional argument. But what does it do exactly. I don't understand what this mean. This is from the docs. random.random()¶ Return the next random floating point number in the range [0.0, 1.0). I also see this, is this what this range 0,0, 1,0 means? Pseudorandom number generators Most, if not all programming languages have libraries that include a pseudo-random number generator. This generator usually returns a random number between 0 and 1 (not including 1). In a perfect generator all numbers have the same probability of being selected but in the pseudo generators some numbers have zero probability.
[ "Existing answers do a good job of addressing the question's specific, but I think it's worth mentioning a side issue: why you're particularly likely to want to pass an alternative \"random generator\" to shuffle as opposed to other functions in the random module. Quoting the docs:\n\nNote that for even rather small\n len(x), the total number of\n permutations of x is larger than the\n period of most random number\n generators; this implies that most\n permutations of a long sequence can\n never be generated.\n\nThe phrase \"random number generators\" here refers to what may be more pedantically called pseudo-random number generators -- generators that give a good imitation of randomness, but are entirely algorithmic, and therefore are known not to be \"really random\". Any such algorithmic approach will have a \"period\" -- it will start repeating itself eventually.\nPython's random module uses a particularly good and well-studied pseudo-random generator, the Mersenne Twister, with a period of 2**19937-1 -- a number that has more than 6 thousand digits when written out in decimal digits, as len(str(2**19937-1)) will confirm;-). On my laptop I can generate about 5 million such numbers per second:\n$ python -mtimeit -s'import random' 'random.random()'\n1000000 loops, best of 3: 0.214 usec per loop\n\nAssuming a much faster machine, able to generate a billion such numbers per second, the cycle would take about 105985 years to repeat -- and the best current estimate for the age of the Universe is a bit less than 1.5*1012 years. It would thus take an almost-unimaginable number of Universe-lifetimes to reach the point of repetition;-). Making the computation parallel wouldn't help much; there are estimated to be about 1080 atoms in the Universe, so even if you were able to run such a billion-per-second generator on each atom in the Universe, it would still take well over 105800 Universe-lifetimes to start repeating.\nSo, you might be justified in suspecting that this worry about repetition is a tiny little bit of a theoretical, rather than practical, issue;-).\nNevertheless, factorials (which count the permutations of a sequence of length N) also grow pretty fast. The Mersenne Twister, for example, might be able to produce all permutations of a sequence of length 2080, but definitely not of one of length 2081 or higher. Were it not for the \"lifetime of the Universe\" issue, the docs' worry about \"even rather small len(x)\" would be justified -- we know that many possible permutations can never be reached by shuffling with such a pseudo-RNG, as soon as we have a reasonably long sequence, so one might worry about what kind of bias we're actually introducing with even a few shuffles!: -)\nos.urandom mediates access to whatever sources of physical randomness the OS provides -- CryptGenRandom on Windows, /dev/urandom on Linux, etc. os.urandom gives sequences of bytes, but with the help of struct it's easy to make them into random numbers:\n>>> n = struct.calcsize('I')\n>>> def s2i(s): return struct.unpack('I', s)[0]\n... \n>>> maxi = s2i(b'\\xff'*n) + 1\n>>> maxi = float(s2i(b'\\xff'*n) + 1)\n>>> def rnd(): return s2i(os.urandom(n))/maxi\n\nNow we can call random.shuffle(somelist, rnd) and worry less about bias;-).\nUnfortunately, measurement shows that this approach to RNG is about 50 times slower than calls to random.random() -- this could be an important practical consideration if we're going to need many random numbers (and if we don't, the worry about possible bias may be misplaced;-). 
The os.urandom approach is also hard to use in predictable, repeatable ways (e.g., for testing purposes), while with random.random() you need only provide a fixed initial random.seed at the start of the test to guarantee reproducible behavior.\nIn practice, therefore, os.urandom is only used when you need \"cryptographic quality\" random numbers - ones that a determined attacker can't predict - and are therefore willing to pay the practical price for using it instead of random.random.\n", "The second argument is used to specify which random number generator to use. This could be useful if you need/have something \"better\" than random.random. Security-sensitive applications might need to use a cryptographically secure random number generator.\nThe difference between random.random and random.random() is that the first one is a reference to the function that produces simple random numbers, and the second one actually calls that function.\nIf you had another random number generator, you wanted to use, you could say\nrandom.shuffle(x, my_random_number_function)\n\nAs to what random.random (the default generator) is doing, it uses an algorithm called the Mersenne twister to create a seemingly random floating point number between 0 and 1 (not including 1), all the numbers in that interval being of equal likelihood.\nThat the interval is from 0 to 1 is just a convention.\n", "The second argument is the function which is called to produce random numbers that are in turn used to shuffle the sequence (first argument). The default function used if you don't provide your own is random.random.\nYou might want to provide this parameter if you want to customize how shuffle is performed.\nAnd your customized function will have to return numbers in range [0.0, 1.0) - 0.0 included, 1.0 excluded.\n", "The docs go on saying:\n\nThe optional argument random is a\n 0-argument function returning a random\n float in [0.0, 1.0); by default, this\n is the function random().\n\nIt means that you can either specify your own random number generator function, or tell the module to use the default random function. The second option is almost always the best choice, because Python uses a pretty good PRNG.\nThe function it expects is supposed to return a floating point pseudo-random number in the range [0.0, 1.0), which means 0.0 included and 1.0 isn't included (i.e. 0.9999 is a valid number to be returned, but 1.0 is not). Each number in this range should be in theory returned with equal probability (i.e. this is a linear distribution).\n", "The shuffle function depends on an RNG (Random Number Generator), which defaults to random.random. The second argument is there so you can provide your own RNG instead of the default.\nUPDATE:\nThe second argument is a random number generator that generates a new, random number in the range [0.0, 1.0) each time you call it.\nHere's an example for you:\nimport random\n\ndef a():\n return 0.0\n\ndef b():\n return 0.999999999999\n\narr = [1,2,3]\n\nrandom.shuffle(arr)\nprint arr # prints [1, 3, 2]\n\narr.sort()\nprint arr # prints [1, 2, 3]\n\nrandom.shuffle(arr)\nprint arr # prints [3, 2, 1]\n\narr.sort()\nrandom.shuffle(arr, a)\nprint arr # prints [2, 3, 1]\n\narr.sort()\nrandom.shuffle(arr, a)\nprint arr # prints [2, 3, 1]\n\narr.sort()\nrandom.shuffle(arr, b)\nprint arr # prints [1, 2, 3]\n\narr.sort()\nrandom.shuffle(arr, b)\nprint arr # prints [1, 2, 3]\n\nSo if the function always returns the same value, you always get the same permutation. 
If the function returns random values each time it's called, you get a random permutation.\n", "From the example:\n>>> random.random() # Random float x, 0.0 <= x < 1.0\n0.37444887175646646\n\nIt generates a random floating point number between 0 and 1.\n" ]
[ 6, 4, 1, 0, 0, 0 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001961340_python_random.txt
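The os.urandom-based generator from the first answer, assembled into a small self-contained Python 2 script (the helper name urandom_float and the sample list are not from the answers, they are only illustrative):

import os
import random
import struct

n = struct.calcsize('I')
maxi = float(struct.unpack('I', '\xff' * n)[0] + 1)

def urandom_float():
    # a 0-argument callable returning a float in [0.0, 1.0), as shuffle expects
    return struct.unpack('I', os.urandom(n))[0] / maxi

cards = range(10)
random.seed(42)                       # reproducible shuffle with the default PRNG
random.shuffle(cards)
print cards
random.shuffle(cards, urandom_float)  # same call, OS-backed randomness instead
print cards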
Q: efficient algorithm to perform spell check on HTML document I have an HTML document, a list of common spelling mistakes, and the correct spelling for each case. The HTML documents will be up to ~50 pages and there are ~30K spelling correction entries. What is an efficient way to correct all spelling mistakes in this HTML document? (Note: my implementation will be in Python, in case you know of any relevant libraries.) I have thought of 2 possible approaches: build hashtable of the spelling data parse text from HTML split text by whitespace into tokens if token in spelling hashtable replace with correction build new HTML document with updated text This approach will fail for multi-word spelling corrections, which will exist. The following is a simpler though seemingly less efficient approach that will work for multi-words: iterate spelling data search for word in HTML document if word exists replace with correction A: You are correct that the first approach will be MUCH faster than the second (additionally, I would recommend looking into Tries instead of a straight hash, the space savings will be quite dramatic for 30k words). To still be able to handle the multi-word cases, you could either keep track of the previous token and thereby check your hash for a combined string such as "prev cur". Or else you could leave the multi-word corrections out of the hash and combine your two approaches, first using the hash for single words and then doing a scan for the multi-word combos (or vice versa). This could still be relatively fast if the number of multi-word corrections is relatively small. Be careful tho, pulling out word tokens is trickier than just splitting on whitespace. You don't want to fail to correct an error simply because you didn't find 'instence,' with a comma in your hash. A: I agree with Rob's suggestion of using a trie, based on characters, because I programmed a spelling correction algorithm ages ago based on having a dictionary of valid words stored as a trie. By using branch-and-bound I was able to suggest possibly correct spellings of misspelled words (by Levenshtein distance). In addition, since a trie is just a big finite-state-machine, it is fairly easy to add common prefixes and suffixes, so it could handle "words" like "postnationalizationalism's".
efficient algorithm to perform spell check on HTML document
I have an HTML document, a list of common spelling mistakes, and the correct spelling for each case. The HTML documents will be up to ~50 pages and there are ~30K spelling correction entries. What is an efficient way to correct all spelling mistakes in this HTML document? (Note: my implementation will be in Python, in case you know of any relevant libraries.) I have thought of 2 possible approaches: build hashtable of the spelling data parse text from HTML split text by whitespace into tokens if token in spelling hashtable replace with correction build new HTML document with updated text This approach will fail for multi-word spelling corrections, which will exist. The following is a simpler though seemingly less efficient approach that will work for multi-words: iterate spelling data search for word in HTML document if word exists replace with correction
[ "You are correct that the first approach will be MUCH faster than the second (additionally, I would recommend looking into Tries instead of a straight hash, the space savings will be quite dramatic for 30k words).\nTo still be able to handle the multi-word cases, you could either keep track of the previous token and thereby check your hash for a combined string such as \"prev cur\".\nOr else you could leave the multi-word corrections out of the hash and combine your two approaches, first using the hash for single words and then doing a scan for the multi-word combos (or vice versa). This could still be relatively fast if the number of multi-word corrections is relatively small.\nBe careful tho, pulling out word tokens is trickier than just splitting on whitespace. You don't want to fail to correct an error simply because you didn't find 'instence,' with a comma in your hash.\n", "I agree with Rob's suggestion of using a trie, based on characters, because I programmed a spelling correction algorithm ages ago based on having a dictionary of valid words stored as a trie. By using branch-and-bound I was able to suggest possibly correct spellings of misspelled words (by Levenshtein distance). In addition, since a trie is just a big finite-state-machine, it is fairly easy to add common prefixes and suffixes, so it could handle \"words\" like \"postnationalizationalism's\".\n" ]
[ 3, 2 ]
[]
[]
[ "algorithm", "html", "performance", "python", "spell_checking" ]
stackoverflow_0001957131_algorithm_html_performance_python_spell_checking.txt
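A sketch of the hash-based approach from the question and first answer, assuming the corrections are already loaded into a dict and that the text has already been extracted from the HTML (the sample entries and the function name correct are made up):

import re

corrections = {
    'teh': 'the',
    'instence': 'instance',
    'alot of': 'a lot of',     # a multi-word correction
}
single = dict((k, v) for k, v in corrections.items() if ' ' not in k)
multi = dict((k, v) for k, v in corrections.items() if ' ' in k)

def correct(text):
    # \w+ keeps punctuation out of the token, so "instence," still matches
    fix = lambda m: single.get(m.group(0), m.group(0))
    text = re.sub(r'\w+', fix, text)
    # second, much smaller pass for the multi-word table
    for wrong, right in multi.items():
        text = text.replace(wrong, right)
    return text

print correct("teh parser hit an instence, and alot of noise followed.")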
Q: Floating Point Concepts in Python Why Does -22/10 return -3 in python. Any pointers regarding this will be helpful for me. A: Because it's integer division by default. And integer division is rounded towards minus infinity. Take a look: >>> -22/10 -3 >>> -22/10.0 -2.2000000000000002 Positive: >>> 22/10 2 >>> 22/10.0 2.2000000000000002 Regarding the seeming "inaccuracy" of floating point, this is a great article to read: Why are floating point calculations so inaccurate? A: By default, the current versions of Python 2.x (I'm not sure about 3.x) give an integer result for any arithmetic operator when both operands are integers. However, there is a way to change this behaviour. from __future__ import division print(22/10) Outputs 2.2000000000000002 Of course, a simpler way is to simply make one of the operands a float as described by the previous two answers. A: PEP 238, "Changing the Division Operator", explains the issues well, I think. In brief: when Python was designed it adopted the "truncating" meaning for / between integers, simply because most other programming languages did ever since the first FORTRAN compiler was launched in 1957 (all-uppercase language name and all;-). (One widespread language that didn't adopt this meaning, using / to produce a floating point result and div for truncation, was Pascal). In 2001 it was decided that this choice was not optimal (to quote the PEP, "This makes expressions expecting float or complex results error-prone when integers are not expected but possible as inputs"), and to switch to using a new operator // to request division with truncation, and change the meaning of / to produce a float result ("true division"). You can explicitly request this behavior by putting the statement from __future__ import division at the start of a module (the command-line switch -Q to the python interpreter can also control the behavior of division). Missing such an "import from the future" (and command line switch use), Python 2.x, for all values of x, always uses "classic division" (i.e., / is truncating between ints). Python 3, however, always uses "true division" (/ between ints produces a float). Note a curious corollary (in Python 3)...: >>> from fractions import Fraction >>> Fraction(1/2) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/fractions.py", line 100, in __new__ raise TypeError("argument should be a string " TypeError: argument should be a string or a Rational instance since / produces a float, it's not acceptable as the argument to Fraction (otherwise precision might be silently lost). You must use a string, or pass numerator and denominator as separate arguments: >>> Fraction(1, 2) Fraction(1, 2) >>> Fraction('1/2') Fraction(1, 2) gmpy uses a different, more tolerant approach to building mpqs, its equivalent of Python 3 Fractions...: >>> import gmpy >>> gmpy.mpq(1/2) mpq(1,2) Specifically (see lines 3168 and following in the source), gmpy uses a Stern-Brocot tree to get the "best practical approximation" of the floating point argument as a rational (of course, this can mask a loss of precision). A: Because you're doing an integer division. If you do -22.0/10 instead, you'll get the correct result. A: This happens because the operation of integer division returns the number, which when multiplied by the divisor gives the largest possible integer that is no larger than the number you divided. 
This is exactly why 22/10 gives 2: 10*2=20, which is the largest integer multiple of 10 not bigger than 22. When this goes to the negative, your operation becomes -22/10. Your result is -3. Applying the same logic as in the previous case, we see that 10*-3=-30, which is the largest integer multiple of 10 not bigger than -22. This is why you get a slightly unexpected answer when dealing with negative numbers. Hope that helps
Floating Point Concepts in Python
Why Does -22/10 return -3 in python. Any pointers regarding this will be helpful for me.
[ "Because it's integer division by default. And integer division is rounded towards minus infinity. Take a look:\n>>> -22/10\n-3\n>>> -22/10.0\n-2.2000000000000002\n\nPositive:\n>>> 22/10\n2\n>>> 22/10.0\n2.2000000000000002\n\nRegarding the seeming \"inaccuracy\" of floating point, this is a great article to read: Why are floating point calculations so inaccurate?\n", "By default, the current versions of Python 2.x (I'm not sure about 3.x) give an integer result for any arithmetic operator when both operands are integers. However, there is a way to change this behaviour.\nfrom __future__ import division\nprint(22/10)\n\nOutputs\n2.2000000000000002\n\nOf course, a simpler way is to simply make one of the operands a float as described by the previous two answers.\n", "PEP 238, \"Changing the Division Operator\", explains the issues well, I think. In brief: when Python was designed it adopted the \"truncating\" meaning for / between integers, simply because most other programming languages did ever since the first FORTRAN compiler was launched in 1957 (all-uppercase language name and all;-). (One widespread language that didn't adopt this meaning, using / to produce a floating point result and div for truncation, was Pascal).\nIn 2001 it was decided that this choice was not optimal (to quote the PEP, \"This makes expressions expecting float or complex results error-prone when integers are not expected but possible as inputs\"), and to switch to using a new operator // to request division with truncation, and change the meaning of / to produce a float result (\"true division\").\nYou can explicitly request this behavior by putting the statement\nfrom __future__ import division\n\nat the start of a module (the command-line switch -Q to the python interpreter can also control the behavior of division). Missing such an \"import from the future\" (and command line switch use), Python 2.x, for all values of x, always uses \"classic division\" (i.e., / is truncating between ints).\nPython 3, however, always uses \"true division\" (/ between ints produces a float).\nNote a curious corollary (in Python 3)...:\n>>> from fractions import Fraction\n>>> Fraction(1/2)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/fractions.py\", line 100, in __new__\n raise TypeError(\"argument should be a string \"\nTypeError: argument should be a string or a Rational instance\n\nsince / produces a float, it's not acceptable as the argument to Fraction (otherwise precision might be silently lost). You must use a string, or pass numerator and denominator as separate arguments:\n>>> Fraction(1, 2)\nFraction(1, 2)\n>>> Fraction('1/2')\nFraction(1, 2)\n\ngmpy uses a different, more tolerant approach to building mpqs, its equivalent of Python 3 Fractions...:\n>>> import gmpy\n>>> gmpy.mpq(1/2)\nmpq(1,2)\n\nSpecifically (see lines 3168 and following in the source), gmpy uses a Stern-Brocot tree to get the \"best practical approximation\" of the floating point argument as a rational (of course, this can mask a loss of precision).\n", "Because you're doing an integer division. 
If you do -22.0/10 instead, you'll get the correct result.\n", "This happens because the operation of integer division returns the number, which when multiplied by the divisor gives the largest possible integer that is no larger than the number you divided.\nThis is exactly why 22/10 gives 2: 10*2=20, which is the largest integer multiple of 10 not bigger than 20.\nWhen this goes to the negative, your operation becomes -22/10. Your result is -3. Applying the same logic as in the previous case, we see that 10*-3=-30, which is the largest integer multiple of 10 not bigger than -20. \nThis is why you get a slightly unexpected answer when dealing with negative numbers.\nHope that helps\n" ]
[ 10, 5, 5, 2, 2 ]
[]
[]
[ "floating_point", "python" ]
stackoverflow_0001961394_floating_point_python.txt
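The behaviour described in the answers, condensed into a few runnable lines (Python 2; under Python 3 the / results below are the default and print needs parentheses):

from __future__ import division   # opt in to "true division" for /

print -22 / 10          # -2.2  (true division)
print -22 // 10         # -3    (floor division rounds toward minus infinity)
print 22 // 10          # 2
print divmod(-22, 10)   # (-3, 8), since 10 * -3 + 8 == -22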
Q: tornado - transferring a file to cdn without blocking I have the nginx upload module handling site uploads, but still need to transfer files (let's say 3-20mb each) to our cdn, and would rather not delegate that to a background job. What is the best way to do this with tornado without blocking other requests? Can i do this in an async callback? A: You may find it useful in the overall architecture of your site to add a message queuing service such as RabbitMQ. This would let you complete the upload via the nginx module, then in the tornado handler, post a message containing the uploaded file path and exit. A separate process would be watching for these messages and handle the transfer to your CDN. This type of service would be useful for many other tasks that could be handled offline ( sending emails, etc.. ). As your system grows, this also provides you a mechanism to scale by moving queue processing to separate machines. I am using an architecture very similar to this. Just make sure to add your message consumer process to supervisord or whatever you are using to manage your processes. In terms of implementation, if you are on Ubuntu installing RabbitMQ is a simple: sudo apt-get install rabbitmq-server On CentOS w/EPEL repositories: yum install rabbit-server There are a number of Python bindings to RabbitMQ. Pika is one of them and it happens to be created by an employee of LShift, who is responsible for RabbitMQ. Below is a bit of sample code from the Pika repo. You can easily imagine how the handle_delivery method would accept a message containing a filepath and push it to your CDN. import sys import pika import asyncore conn = pika.AsyncoreConnection(pika.ConnectionParameters( sys.argv[1] if len(sys.argv) > 1 else '127.0.0.1', credentials = pika.PlainCredentials('guest', 'guest'))) print 'Connected to %r' % (conn.server_properties,) ch = conn.channel() ch.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False) should_quit = False def handle_delivery(ch, method, header, body): print "method=%r" % (method,) print "header=%r" % (header,) print " body=%r" % (body,) ch.basic_ack(delivery_tag = method.delivery_tag) global should_quit should_quit = True tag = ch.basic_consume(handle_delivery, queue = 'test') while conn.is_alive() and not should_quit: asyncore.loop(count = 1) if conn.is_alive(): ch.basic_cancel(tag) conn.close() print conn.connection_close A: advice on the tornado google group points to using an async callback (documented at http://www.tornadoweb.org/documentation#non-blocking-asynchronous-requests) to move the file to the cdn. the nginx upload module writes the file to disk and then passes parameters describing the upload(s) back to the view. therefore, the file isn't in memory, but the time it takes to read from disk–which would cause the request process to block itself, but not other tornado processes, afaik–is negligible. that said, anything that doesn't need to be processed online shouldn't be, and should be deferred to a task queue like celeryd or similar.
tornado - transferring a file to cdn without blocking
I have the nginx upload module handling site uploads, but still need to transfer files (let's say 3-20mb each) to our cdn, and would rather not delegate that to a background job. What is the best way to do this with tornado without blocking other requests? Can i do this in an async callback?
[ "You may find it useful in the overall architecture of your site to add a message queuing service such as RabbitMQ.\nThis would let you complete the upload via the nginx module, then in the tornado handler, post a message containing the uploaded file path and exit. A separate process would be watching for these messages and handle the transfer to your CDN. This type of service would be useful for many other tasks that could be handled offline ( sending emails, etc.. ). As your system grows, this also provides you a mechanism to scale by moving queue processing to separate machines. \nI am using an architecture very similar to this. Just make sure to add your message consumer process to supervisord or whatever you are using to manage your processes. \nIn terms of implementation, if you are on Ubuntu installing RabbitMQ is a simple:\nsudo apt-get install rabbitmq-server\n\nOn CentOS w/EPEL repositories:\nyum install rabbit-server\n\nThere are a number of Python bindings to RabbitMQ. Pika is one of them and it happens to be created by an employee of LShift, who is responsible for RabbitMQ.\nBelow is a bit of sample code from the Pika repo. You can easily imagine how the handle_delivery method would accept a message containing a filepath and push it to your CDN.\nimport sys\nimport pika\nimport asyncore\n\nconn = pika.AsyncoreConnection(pika.ConnectionParameters(\n sys.argv[1] if len(sys.argv) > 1 else '127.0.0.1',\n credentials = pika.PlainCredentials('guest', 'guest')))\n\nprint 'Connected to %r' % (conn.server_properties,)\n\nch = conn.channel()\nch.queue_declare(queue=\"test\", durable=True, exclusive=False, auto_delete=False)\n\nshould_quit = False\n\ndef handle_delivery(ch, method, header, body):\n print \"method=%r\" % (method,)\n print \"header=%r\" % (header,)\n print \" body=%r\" % (body,)\n ch.basic_ack(delivery_tag = method.delivery_tag)\n\n global should_quit\n should_quit = True\n\ntag = ch.basic_consume(handle_delivery, queue = 'test')\nwhile conn.is_alive() and not should_quit:\n asyncore.loop(count = 1)\nif conn.is_alive():\n ch.basic_cancel(tag)\n conn.close()\n\nprint conn.connection_close\n\n", "advice on the tornado google group points to using an async callback (documented at http://www.tornadoweb.org/documentation#non-blocking-asynchronous-requests) to move the file to the cdn. \nthe nginx upload module writes the file to disk and then passes parameters describing the upload(s) back to the view. therefore, the file isn't in memory, but the time it takes to read from disk–which would cause the request process to block itself, but not other tornado processes, afaik–is negligible.\nthat said, anything that doesn't need to be processed online shouldn't be, and should be deferred to a task queue like celeryd or similar.\n" ]
[ 5, 0 ]
[]
[]
[ "cdn", "python", "tornado" ]
stackoverflow_0001950055_cdn_python_tornado.txt
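A rough sketch of the non-blocking idea from the second answer, pushing the uploaded file to the CDN with Tornado's asynchronous HTTP client; the handler name, the file_path form field and the CDN URL are assumptions, and reading the whole file into memory is only reasonable for uploads of a few megabytes:

import tornado.web
import tornado.httpclient

class PushToCdnHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        path = self.get_argument("file_path")   # filled in by the nginx upload module
        body = open(path, "rb").read()          # fine for 3-20 MB, not for huge files
        client = tornado.httpclient.AsyncHTTPClient()
        request = tornado.httpclient.HTTPRequest(
            "http://cdn.example.com/upload", method="PUT", body=body)
        client.fetch(request, callback=self.on_cdn_response)   # does not block the IOLoop

    def on_cdn_response(self, response):
        if response.error:
            self.set_status(502)
        self.finish()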
Q: Polygon touches in more than one point with Shapely I have a list of Shapely polygons in Python. To find out which polygon touch is easy, using the .touches() method. However, I need something that returns True only when the polygons share more than one point (in other words shares a border). Let me illustrate: In [1]: from shapely.geometry import Polygon In [2]: polygons = [Polygon([(0,0),(0,1),(1,1),(1,0)]), Polygon([(1,0),(1,1),(2,1),(2,0)]), Polygon([(2,1),(2,2),(3,2),(3,1)])] In [3]: polygons[0].touches(polygons[1]) Out[3]: True In [4]: polygons[0].touches(polygons[2]) Out[4]: False In [5]: polygons[1].touches(polygons[2]) Out[5]: True In this case, polygon 0 and 1 shares two points (an entire border). Polygon 1 and 2 only shares one point. What I'm looking for is a function that would give me True, False, False in the above example or just something that returns the number of touching point, then I can do the rest of the logic myself. And of course, any solution that does not involve manually iterating through all points is optimal - if I need to do that, it kind of defeats the purpose of using Shapely :-) A: If you truly want to check if two polygons share more than x number of points you can simply do this: p0,p1,p2 = polygons x = 2 len(set(p1.boundary.coords).intersection(p2.boundary.coords))>=x But I think what you may want is to determine if two edges are colinear (and overlapping). This implementation of Andrew's suggestions is probably what you are looking for: >>> type(p0.intersection(p1)) is geometry.LineString True >>> type(p1.intersection(p2)) is geometry.LineString False A: i haven't used shapely, but have you tried seeing if the intersection of the two polygons is a line?
Polygon touches in more than one point with Shapely
I have a list of Shapely polygons in Python. To find out which polygon touch is easy, using the .touches() method. However, I need something that returns True only when the polygons share more than one point (in other words shares a border). Let me illustrate: In [1]: from shapely.geometry import Polygon In [2]: polygons = [Polygon([(0,0),(0,1),(1,1),(1,0)]), Polygon([(1,0),(1,1),(2,1),(2,0)]), Polygon([(2,1),(2,2),(3,2),(3,1)])] In [3]: polygons[0].touches(polygons[1]) Out[3]: True In [4]: polygons[0].touches(polygons[2]) Out[4]: False In [5]: polygons[1].touches(polygons[2]) Out[5]: True In this case, polygon 0 and 1 shares two points (an entire border). Polygon 1 and 2 only shares one point. What I'm looking for is a function that would give me True, False, False in the above example or just something that returns the number of touching point, then I can do the rest of the logic myself. And of course, any solution that does not involve manually iterating through all points is optimal - if I need to do that, it kind of defeats the purpose of using Shapely :-)
[ "If you truly want to check if two polygons share more than x number of points you can simply do this:\np0,p1,p2 = polygons\nx = 2\nlen(set(p1.boundary.coords).intersection(p2.boundary.coords))>=x\n\nBut I think what you may want is to determine if two edges are colinear (and overlapping).\nThis implementation of Andrew's suggestions is probably what you are looking for:\n>>> type(p0.intersection(p1)) is geometry.LineString\nTrue\n>>> type(p1.intersection(p2)) is geometry.LineString\nFalse\n\n", "i haven't used shapely, but have you tried seeing if the intersection of the two polygons is a line?\n" ]
[ 12, 7 ]
[]
[]
[ "polygon", "python", "shapely" ]
stackoverflow_0001960961_polygon_python_shapely.txt
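One way to wrap the intersection test from the accepted answer into a reusable helper, using the length of the intersection rather than its exact type so that borders made of several segments are also caught (the name shares_border and this generalisation are not from the answers):

from shapely.geometry import Polygon

def shares_border(a, b):
    # a point intersection has length 0.0; a shared edge has positive length
    return a.intersection(b).length > 0

polygons = [Polygon([(0, 0), (0, 1), (1, 1), (1, 0)]),
            Polygon([(1, 0), (1, 1), (2, 1), (2, 0)]),
            Polygon([(2, 1), (2, 2), (3, 2), (3, 1)])]

print shares_border(polygons[0], polygons[1])   # True  (whole edge in common)
print shares_border(polygons[1], polygons[2])   # False (only the point (2, 1))
print shares_border(polygons[0], polygons[2])   # False (no contact at all)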
Q: I've never seen 'class __proxy__' before, what does this mean(I only have seen that like def __str__) this code is in the django.utils.functional.py class __proxy__(Promise): thanks A: "Magic names", ones that start and end with double underscores, are reserved for the language in Python (but the compiler does not enforce that rule at present); Django is violating that rule, or setting itself up as "being the language" -- not a terrible sin, but an unpleasant practice. A: It's just a name. And because it starts with _ it's meant to be private to that module. Why they chose that name? You'll have to ask the developers.
I've never seen 'class __proxy__' before, what does this mean(I only have seen that like def __str__)
this code is in the django.utils.functional.py class __proxy__(Promise): thanks
[ "\"Magic names\", ones that start and end with double underscores, are reserved for the language in Python (but the compiler does not enforce that rule at present); Django is violating that rule, or setting itself up as \"being the language\" -- not a terrible sin, but an unpleasant practice.\n", "It's just a name. And because it starts with _ it's meant to be private to that module. \nWhy they chose that name? You'll have to ask the developers.\n" ]
[ 5, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001962368_django_python.txt
Q: why my code only print 'bbb' once,and it does not run wrong unexpectedly class a: class __b__(object): print 'bbb' b=a() b.__b__() b.__b__() b.__b__() a.__b__() a.__b__() a.__b__() it print 'bbb' only once, thanks A: When python creates a class, it does so by executing the code within the class definition exactly once, therefore creating the class namespace, etc... If you wanted it to run each time you called it, you need to put your code in the __init__ method (which is the constructor). class a: class b: def __init__(self): print 'bbb' a.b() a.b() That will print bbb 2x. Notice that you don't need an instance of a() to access a.b because class b is simply an attribute of class a. You really don't gain much by nesting classes in python. Notice I did not use __b__, because python reserves words that start and end with double underscores. A: You don't explain what you are trying to do, but I think what you mean is: class a: def __b__(object): print 'bbb' A: The class __b__ statement executes exactly once (when the class a statement executes) and that's the only case in which you're printing anything. The various instantiations are totally irrelevant (none of them has anything to do with the printing).
why my code only print 'bbb' once,and it does not run wrong unexpectedly
class a: class __b__(object): print 'bbb' b=a() b.__b__() b.__b__() b.__b__() a.__b__() a.__b__() a.__b__() it print 'bbb' only once, thanks
[ "When python creates a class, it does so by executing the code within the class definition exactly once, therefore creating the class namespace, etc...\nIf you wanted it to run each time you called it, you need to put your code in the __init__ method (which is the constructor).\nclass a:\n class b:\n def __init__(self):\n print 'bbb'\n\na.b()\na.b()\n\nThat will print bbb 2x. Notice that you don't need an instance of a() to access a.b because class b is simply an attribute of class a. Your really don't gain much by nesting classes in python.\nNotice I did not use __b__, because python reserves words that start and end with double underscores.\n", "You don't explain what you are trying to do, but I think what you mean is:\nclass a:\n def __b__(object):\n print 'bbb'\n\n", "The class __b__ statement executes exactly once (when the class a statement executes) and that's the only case in which you're printint anything. The various instantiations are totally irrelevant (none of them has anything to do with the printing).\n" ]
[ 4, 3, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001962410_python.txt
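The distinction the first answer draws -- a class body executes once, when the class statement runs, while __init__ runs for every instance -- in a form that can be pasted into a Python 2 interpreter (the class name Greeter is made up):

class Greeter(object):
    print 'class body: runs once, when the class statement is executed'

    def __init__(self):
        print '__init__: runs for every new instance'

g1 = Greeter()
g2 = Greeter()
# output: the class-body line appears once, the __init__ line appears twice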
Q: How can I get more intuitive feels about django relationships(like:Many-to-one,Many-to-many ) i use xampp(it has mysql) I was Confused on this django relationships, who can give me a code example(or text) to let me feel it intuitive .thanks (like:Einstein described the theory of relativity) A: I looked all over for a simple explanation of relationships, but couldn't find anything, so I'll try to summarize it here. Relationships aren't strictly a Django thing. If you really want to understand what Django is doing, learn about database concepts in general. When you have multiple tables of information, you need to link them somehow. If you operate a music site like last.fm, you're going to need to know about artists, genres, tags, songs, albums, etc. All of this data relates somehow. For example, One artist will have many albums (one to many), one genre will apply to many artists (One to many.) e.g. Metallica (one artist) will have several albums, Black Album, St. Anger, etc. but one album will probably not belong to two artists, e.g. Alicia Keys and Metallica both recording the same album. To achieve this relationship, each Album record must have an artist_id to indicate which artist it is related to. mysql> select * from albums where artist_id = 40; +-----+------------------------------+------+---------------------+-----------+----------+------------+ | id | name | year | created_at | artist_id | genre_id | updated_at | +-----+------------------------------+------+---------------------+-----------+----------+------------+ | 309 | Reise, Reise | 2004 | 2009-11-22 16:01:13 | 40 | 2 | NULL | | 310 | Mutter | 2001 | 2009-11-22 16:12:28 | 40 | 2 | NULL | | 311 | Sehnsucht | 1998 | 2009-11-22 16:20:22 | 40 | 2 | NULL | | 312 | Live aus Berlin | 1999 | 2009-11-22 16:29:11 | 40 | 2 | NULL | | 313 | Rosenrot | 2005 | 2009-11-22 16:40:43 | 40 | 4 | NULL | | 314 | The Very Best of Rammstein | 0 | 2009-11-22 16:51:38 | 40 | 2 | NULL | | 315 | Live aus Berlin (bonus disc) | 0 | 2009-11-22 17:05:24 | 40 | 2 | NULL | +-----+------------------------------+------+---------------------+-----------+----------+------------+ 7 rows in set (0.02 sec) A tag will describe several artists (e.g. Metal describes Metallica, Pantera, and Sepultura), and one artist will have several tags (e.g. people might tag Metallica as Metal, Rock, and 80s Metal.) This kind of relationship between data would probably produce three tables. An artists table, a tags table, and a join table. Your join records would look like this for example (purely imaginary and hypothetical situation) | id | artist_id | tag_id | | 1 | 34 | 357 | | 2 | 98 | 234 | the artist_id of 34 might be Metallica, and the tag_id of 357 might be Metal. The point is, there's a table that exists to link tags and artists. In this example. In general, relationships are a way to link records. There are three main relationships, One to One, Many to Many, and Many to One. The best way to fully understand this is to learn Database Design. 
A: It's hard to answer a confused feeling question, but if you want code perhaps try http://www.djangosnippets.org/ Also the tutorial gives great examples on how the models work in such cases as many-to-many, see http://www.djangobook.com/en/1.0/chapter05/ For example: from django.db import models class Publisher(models.Model): name = models.CharField(maxlength=30) address = models.CharField(maxlength=50) city = models.CharField(maxlength=60) state_province = models.CharField(maxlength=30) country = models.CharField(maxlength=50) website = models.URLField() class Author(models.Model): salutation = models.CharField(maxlength=10) first_name = models.CharField(maxlength=30) last_name = models.CharField(maxlength=40) email = models.EmailField() headshot = models.ImageField(upload_to='/tmp') class Book(models.Model): title = models.CharField(maxlength=100) authors = models.ManyToManyField(Author) publisher = models.ForeignKey(Publisher) publication_date = models.DateField()
How can I get more intuitive feels about django relationships(like:Many-to-one,Many-to-many )
i use xampp(it has mysql) I was Confused on this django relationships, who can give me a code example(or text) to let me feel it intuitive .thanks (like:Einstein described the theory of relativity)
[ "I looked all over for a simple explanation of relationships, but couldn't find anything, so I'll try to summarize it here.\nRelationships aren't strictly a Django thing. If you really want to understand what Django is doing, learn about database concepts in general.\n\nWhen you have multiple tables of information, you need to link them somehow. If you operate a music site like last.fm, you're going to need to know about artists, genres, tags, songs, albums, etc. All of this data relates somehow. \nFor example, One artist will have many albums (one to many), one genre will apply to many artists (One to many.) e.g. Metallica (one artist) will have several albums, Black Album, St. Anger, etc. but one album will probably not belong to two artists, e.g. Alicia Keys and Metallica both recording the same album. To achieve this relationship, each Album record must have an artist_id to indicate which artist it is related to.\nmysql> select * from albums where artist_id = 40;\n+-----+------------------------------+------+---------------------+-----------+----------+------------+\n| id | name | year | created_at | artist_id | genre_id | updated_at |\n+-----+------------------------------+------+---------------------+-----------+----------+------------+\n| 309 | Reise, Reise | 2004 | 2009-11-22 16:01:13 | 40 | 2 | NULL | \n| 310 | Mutter | 2001 | 2009-11-22 16:12:28 | 40 | 2 | NULL | \n| 311 | Sehnsucht | 1998 | 2009-11-22 16:20:22 | 40 | 2 | NULL | \n| 312 | Live aus Berlin | 1999 | 2009-11-22 16:29:11 | 40 | 2 | NULL | \n| 313 | Rosenrot | 2005 | 2009-11-22 16:40:43 | 40 | 4 | NULL | \n| 314 | The Very Best of Rammstein | 0 | 2009-11-22 16:51:38 | 40 | 2 | NULL | \n| 315 | Live aus Berlin (bonus disc) | 0 | 2009-11-22 17:05:24 | 40 | 2 | NULL | \n+-----+------------------------------+------+---------------------+-----------+----------+------------+\n7 rows in set (0.02 sec)\n\n\nA tag will describe several artists (e.g. Metal describes Metallica, Pantera, and Sepultura), and one artist will have several tags (e.g. people might tag Metallica as Metal, Rock, and 80s Metal.) This kind of relationship between data would probably produce three tables. An artists table, a tags table, and a join table. Your join records would look like this for example (purely imaginary and hypothetical situation)\n| id | artist_id | tag_id |\n| 1 | 34 | 357 |\n| 2 | 98 | 234 |\n\nthe artist_id of 34 might be Metallica, and the tag_id of 357 might be Metal. The point is, there's a table that exists to link tags and artists. In this example.\nIn general, relationships are a way to link records. 
There are three main relationships, One to One, Many to Many, and Many to One.\nThe best way to fully understand this is to learn Database Design.\n", "It's hard to answer a confused feeling question, but if you want code perhaps try http://www.djangosnippets.org/\nAlso the tutorial gives great examples on how the models work in such cases as many-to-many, see http://www.djangobook.com/en/1.0/chapter05/\nFor example:\nfrom django.db import models\n\nclass Publisher(models.Model):\n name = models.CharField(maxlength=30)\n address = models.CharField(maxlength=50)\n city = models.CharField(maxlength=60)\n state_province = models.CharField(maxlength=30)\n country = models.CharField(maxlength=50)\n website = models.URLField()\n\nclass Author(models.Model):\n salutation = models.CharField(maxlength=10)\n first_name = models.CharField(maxlength=30)\n last_name = models.CharField(maxlength=40)\n email = models.EmailField()\n headshot = models.ImageField(upload_to='/tmp')\n\nclass Book(models.Model):\n title = models.CharField(maxlength=100)\n authors = models.ManyToManyField(Author)\n publisher = models.ForeignKey(Publisher)\n publication_date = models.DateField()\n\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python", "sql" ]
stackoverflow_0001962323_django_python_sql.txt
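A few ORM calls against the Book/Author/Publisher models quoted in the second answer, showing how the relationships are traversed; the data values are hypothetical, and note that current Django spells the field argument max_length rather than maxlength:

p = Publisher.objects.create(name='Apress', address='2855 Telegraph Ave.',
                             city='Berkeley', state_province='CA',
                             country='USA', website='http://www.apress.com/')
a = Author.objects.create(salutation='Mr.', first_name='Wesley',
                          last_name='Chun', email='wesley@example.com')
b = Book.objects.create(title='Core Python Programming', publisher=p,
                        publication_date='2009-11-01')
b.authors.add(a)                                     # many-to-many: attach an author

print p.book_set.all()                               # one-to-many, from the Publisher side
print a.book_set.all()                               # many-to-many, from the Author side
print Book.objects.filter(publisher__name='Apress')  # follow the foreign key in a lookup
print b.publisher.name                               # and from a Book back to its publisher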
Q: How can i get a list of all special methods available? Special methods are for example (in Django): def __wrapper__ def __deepcopy__ def __mod__ def __cmp__ A: To print Python's reserved words just use >>> import keyword >>> print(keyword.kwlist) ['False', 'None', 'True', 'and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'nonlocal', 'not', 'or', 'pass', 'raise', 'return', 'try', 'while', 'with', 'yield'] To read an explanation of special object methods of Python, read the manual or Dive into Python. Another snippet you can use is >>> [method for method in dir(str) if method[:2]=='__'] ['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__'] to see all the built in special methods of the str class. A: A list of special method names is here, but it's not an exhaustive of magic names -- for example, methods __copy__ and __deepcopy__ are mentioned here instead, the __all__ variable is here, class attributes such as __name__, __bases__, etc are here, and so on. I don't know of any single authoritative list of all such names defined in any given release of the language. However, if you want to check on any single given special name, say __foo__, just search for it in the "Quick search" box of the Python docs (any of the above URLs will do!) -- this way you will find it if it's officially part of the language, and if you don't find it you'll know it is a mistaken usage on the part of some package or framework that's violating the language's conventions. A: You can find most of them here: http://docs.python.org/genindex-all.html#_ identifiers with 2 leading+2 trailing underscores are of course reserved for special method names. Although it is possible to define such identifiers now, may not be the case in future. __deepcopy__, __mod__, and __cmp__ are all special methods built into python that user classes can override to implement class-specific functionality.
How can i get a list of all special methods available?
Special methods are for example (in Django): def __wrapper__ def __deepcopy__ def __mod__ def __cmp__
[ "To print Python's reserved words just use\n>>> import keyword\n>>> print(keyword.kwlist)\n['False', 'None', 'True', 'and', 'as', 'assert', 'break', 'class', 'continue',\n'def', 'del', 'elif', 'else', 'except', 'finally', 'for', 'from', 'global',\n'if', 'import', 'in', 'is', 'lambda', 'nonlocal', 'not', 'or', 'pass',\n'raise', 'return', 'try', 'while', 'with', 'yield']\n\nTo read an explanation of special object methods of Python, read the manual or Dive into Python.\nAnother snippet you can use is\n>>> [method for method in dir(str) if method[:2]=='__']\n['__add__', '__class__', '__contains__', '__delattr__', '__doc__', \n'__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', \n'__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', \n'__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', \n'__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', \n'__sizeof__', '__str__', '__subclasshook__']\n\nto see all the built in special methods of the str class. \n", "A list of special method names is here, but it's not an exhaustive of magic names -- for example, methods __copy__ and __deepcopy__ are mentioned here instead, the __all__ variable is here, class attributes such as __name__, __bases__, etc are here, and so on. I don't know of any single authoritative list of all such names defined in any given release of the language.\nHowever, if you want to check on any single given special name, say __foo__, just search for it in the \"Quick search\" box of the Python docs (any of the above URLs will do!) -- this way you will find it if it's officially part of the language, and if you don't find it you'll know it is a mistaken usage on the part of some package or framework that's violating the language's conventions.\n", "You can find most of them here: http://docs.python.org/genindex-all.html#_\nidentifiers with 2 leading+2 trailing underscores are of course reserved for special method names. Although it is possible to define such identifiers now, may not be the case in future. __deepcopy__, __mod__, and __cmp__ are all special methods built into python that user classes can override to implement class-specific functionality.\n" ]
[ 9, 6, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001962559_django_python.txt
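Building on the dir()-based snippet in the first answer, several built-in types can be swept at once to collect the special names they define -- still not an exhaustive list of every magic name the language recognises, as the second answer points out:

names = set()
for obj in (object, int, float, str, list, dict, type):
    names.update(n for n in dir(obj)
                 if n.startswith('__') and n.endswith('__'))

for name in sorted(names):
    print name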
Q: GUI runner in Eclipse for Python/IronPython As much as the console runner is nice, I enjoy the instant red/green view of a graphical runner such as NUnit or MSTest for quickly glancing at broken tests. Does such a tool exist for Eclipse? I've tried Google but only found some awful standalone versions. A: PyDev has a feature to quickly execute the unit tests from within the IDE. It also allows selecting the unit test cases to run. But it displays the usual textual output, no graphical representation of the test results. The best solution I've ever seen (and actively used) is Wing IDE's Testing pane, which displays a tree of test results with output and traceback as needed. That's pretty usable, but not for Eclipse, unfortunately. I just mentioned this, because it might help you in some way or give you an idea on how to implement this for Eclipse in the future.
GUI runner in Eclipse for Python/IronPython
As much as the console runner is nice, I enjoy the instant red/green view of a graphical runner such as NUnit or MSTest for quickly glancing at broken tests. Does such a tool exist for Eclipse? I've tried Google but only found some awful standalone versions.
[ "PyDev has a feature to quickly execute the unit tests from withing the IDE. It also allows selecting the unit test cases to run. But it displays the usual textual output, no graphical representation of the test results.\nThe best solution I've ever seen (and actively used) is Wing IDE's Testing pane, which displays a tree of test results with output and traceback as needed. That's pretty usable, but not for Eclipse, unfortunately. I just mentioned this, because it might help you in some way or give you an idea on how to implement this for Eclipse in the future.\n" ]
[ 1 ]
[]
[]
[ "eclipse", "ironpython", "python", "unit_testing" ]
stackoverflow_0001957642_eclipse_ironpython_python_unit_testing.txt
Q: How do I make a simple file browser in wxPython? I'm starting to learn both Python and wxPython and as part of the app I'm doing, I need to have a simple browser on the left pane of my app. I'm wondering how do I do it? Or at least point me to the right direction that'll help me more on how to do one. Thanks in advance! EDIT: a sort of side question, how much of wxPython do I need to learn? Should I use tools like wxGlade? A: I think that the GenericDirCtrl widget could be of use for you. This tutorial has many examples, among them a simple usage of that widget in a complete script (screenshot pasted below). And I strongly recommend not to start with wxGlade, but manually layout your first few wx GUIs (with the appropriate sizers). You will learn a lot from this. A: You can take a look at the wxPython examples, they also include code samples for almost all of the widgets supported by wxPython. If you use Windows they can be found in the Start Menu folder of WxPython. A: Sample for wx.html.HtmlWindow import wx import wx.html class MyHtmlFrame(wx.Frame): def __init__(self, parent, title): wx.Frame.__init__(self, parent, -1, title, size=(600,400)) html = wx.html.HtmlWindow(self) wx.CallAfter(html.LoadPage, "http://www.google.com") app = wx.PySimpleApp() frm = MyHtmlFrame(None, "Simple HTML Browser") frm.Show() app.MainLoop() wx.HtmlWindow wx.HtmlWindow is capable of parsing and rendering most simple HTML tags. It is not intended to be a high-end HTML browser. If you're looking for something like that see the IEHtmlWin class, which wraps the core MSIE HTML viewer.
How do I make a simple file browser in wxPython?
I'm starting to learn both Python and wxPython and as part of the app I'm doing, I need to have a simple browser on the left pane of my app. I'm wondering how do I do it? Or at least point me to the right direction that'll help me more on how to do one. Thanks in advance! EDIT: a sort of side question, how much of wxPython do I need to learn? Should I use tools like wxGlade?
[ "I think that the GenericDirCtrl widget could be of use for you. This tutorial has many examples, among them a simple usage of that widget in a complete script (screenshot pasted below). And I strongly recommend not to start with wxGlade, but manually layout your first few wx GUIs (with the appropriate sizers). You will learn a lot from this.\n\n", "You can take a look at the wxPython examples, they also include code samples for almost all of the widgets supported by wxPython. If you use Windows they can be found in the Start Menu folder of WxPython.\n", "Sample for wx.html.HtmlWindow\nimport wx\nimport wx.html\n\nclass MyHtmlFrame(wx.Frame):\n def __init__(self, parent, title):\n wx.Frame.__init__(self, parent, -1, title, size=(600,400))\n html = wx.html.HtmlWindow(self)\n wx.CallAfter(html.LoadPage, \"http://www.google.com\")\n\napp = wx.PySimpleApp()\nfrm = MyHtmlFrame(None, \"Simple HTML Browser\")\nfrm.Show()\napp.MainLoop()\n\n\nwx.HtmlWindow\nwx.HtmlWindow is capable of parsing\n and rendering most simple HTML tags. \nIt is not intended to be a high-end\n HTML browser. If you're looking for\n something like that see the\n IEHtmlWin class, which wraps the core MSIE HTML viewer.\n\n" ]
[ 8, 1, 0 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0001962592_python_wxpython.txt
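A minimal sketch of the GenericDirCtrl suggestion from the first answer, using the wxPython 2.8-era API; sizers, panes and event handling are left out:

import wx

app = wx.App(False)          # wx.PySimpleApp() on older wxPython releases
frame = wx.Frame(None, -1, "File browser", size=(400, 500))
tree = wx.GenericDirCtrl(frame, -1, style=wx.DIRCTRL_3D_INTERNAL)
# tree.GetPath() returns the currently selected file or directory
frame.Show()
app.MainLoop()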
Q: Cannot use Python 2.6 C interface anymore, but 2.5 works I just noticed that I cannot use the Python 2.6 dll anymore. Python 2.5 works just fine. import ctypes py1 = ctypes.cdll.python25 py2 = ctypes.cdll.python26 # ctypes.cdll.LoadLibrary("libpython2.6.so") in linux py1.Py_Initialize() py2.Py_Initialize() # segmentation fault in Linux py1.PyRun_SimpleString("print 'hello world'") # this works because it is using python 2.5 py2.PyRun_SimpleString("print 'hello world2'") # WindowsError: exception: access violation reading 0x00000004 Am I doing anything wrong or is Python 2.6 broken? Update Tried this with the Python 2.7 alpha dll and it appears to work, so it may be a 2.6 problem. Tried this on Ubuntu x64 with Python 2.7 alpha and it worked without a segmentation fault. A: What you are doing is wrong. You are clearly running Python 2.6 and then trying to initialize the shared library in the same process (and thread), which is going to crash (if you're lucky...if you're not it's going to cause you very ugly trouble later). You should never, ever, try to load Python into itself and call Py_Initialize. A: Well, what I doubt you can do is load both 2.5 and 2.6 in the same process... Does ctypes.cdll.python26.Py_Initialize() alone work? EDIT: wait, are you trying to load Python DLL from inside Python itself? wth?
Cannot use Python 2.6 C interface anymore, but 2.5 works
I just noticed that I cannot use the Python 2.6 dll anymore. Python 2.5 works just fine. import ctypes py1 = ctypes.cdll.python25 py2 = ctypes.cdll.python26 # ctypes.cdll.LoadLibrary("libpython2.6.so") in linux py1.Py_Initialize() py2.Py_Initialize() # segmentation fault in Linux py1.PyRun_SimpleString("print 'hello world'") # this works because it is using python 2.5 py2.PyRun_SimpleString("print 'hello world2'") # WindowsError: exception: access violation reading 0x00000004 Am I doing anything wrong or is Python 2.6 broken? Update Tried this with the Python 2.7 alpha dll and it appears to work, so it may be a 2.6 problem. Tried this on Ubuntu x64 with Python 2.7 alpha and it worked without a segmentation fault.
[ "What you are doing is wrong. You are clearly running Python 2.6 and then trying to initialize the shared library in the same process (and thread), which is going to crash (if you're lucky...if you're not it's going to cause you very ugly trouble later). You should never, ever, try to load Python into itself and call Py_Initialize.\n", "Well, what I doubt you can do is load both 2.5 and 2.6 in the same process... Does ctypes.cdll.python26.Py_Initialize() alone work?\nEDIT: wait, are you trying to load Python DLL from inside Python itself? wth?\n" ]
[ 2, 1 ]
[]
[]
[ "ctypes", "python", "python_2.5", "python_2.6" ]
stackoverflow_0001962545_ctypes_python_python_2.5_python_2.6.txt
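A safer way to get the effect the snippet above was after (running a line of code under a different interpreter version) is to start that interpreter in its own process instead of loading its DLL into the interpreter that is already running. A rough sketch; the path to the 2.6 executable is an assumption and needs adjusting for your install:

import subprocess

# Let Python 2.6 run in its own process rather than loading
# python26.dll into the currently running interpreter.
ret = subprocess.call([r"C:\Python26\python.exe", "-c", "print 'hello world2'"])
print "exit status:", ret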
Q: How to use on_mouse_motion to move around a lable via pyglet? How can one move a label around in the hello world example using the on_mouse_motion function? The docs aren't clicking for me. on_mouse-motion hello_world_example.py A: Figured it out: Don't know if this is the most efficient solution though. EDIT -> fixed for just xy. #!/usr/bin/env python import pyglet window = pyglet.window.Window() fps_display = pyglet.clock.ClockDisplay() label = pyglet.text.Label('Hello World!',font_name='Arial',font_size=36, x=0, y=0) @window.event def on_mouse_motion(x, y, dx, dy): window.clear() label.x = x label.y = y fps_display = pyglet.clock.ClockDisplay() @window.event def on_draw(): fps_display.draw() label.draw() pyglet.app.run()
How to use on_mouse_motion to move around a label via pyglet?
How can one move a label around in the hello world example using the on_mouse_motion function? The docs aren't clicking for me. on_mouse-motion hello_world_example.py
[ "Figured it out: Don't know if this is the most efficient solution though.\nEDIT -> fixed for just xy.\n#!/usr/bin/env python\n\nimport pyglet\n\nwindow = pyglet.window.Window()\nfps_display = pyglet.clock.ClockDisplay()\nlabel = pyglet.text.Label('Hello World!',font_name='Arial',font_size=36, x=0, y=0)\n\n@window.event \ndef on_mouse_motion(x, y, dx, dy):\n window.clear()\n label.x = x\n label.y = y\n\nfps_display = pyglet.clock.ClockDisplay()\n\n@window.event\ndef on_draw():\n fps_display.draw()\n label.draw()\n\npyglet.app.run()\n\n" ]
[ 2 ]
[]
[]
[ "pyglet", "python" ]
stackoverflow_0001963003_pyglet_python.txt
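A slightly tidier variant of the answer above, keeping the same pyglet 1.1-era API: the event handler only records the new position, and all drawing (including window.clear()) stays in on_draw, so the ClockDisplay does not need to be recreated:

import pyglet

window = pyglet.window.Window()
fps_display = pyglet.clock.ClockDisplay()
label = pyglet.text.Label('Hello World!', font_name='Arial', font_size=36, x=0, y=0)

@window.event
def on_mouse_motion(x, y, dx, dy):
    # Just remember where the mouse is; drawing happens in on_draw.
    label.x = x
    label.y = y

@window.event
def on_draw():
    window.clear()
    fps_display.draw()
    label.draw()

pyglet.app.run()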
Q: What is the significance of a function without a 'self' argument insde a class? class a: def b(): ... what is the Significance of b thanks class a: @staticmethod def b(): return 1 def c(self): b() print a.b() print a().b() print a().c()#error and class a: @staticmethod def b(): return 1 def c(self): return self.b() print a.b() print a().b() print a().c() #1 #1 #1 A: Basically you should use b() as staticmethod so that you can call it either from Class or Object of class e.g: bash-3.2$ python Python 2.6 (trunk:66714:66715M, Oct 1 2008, 18:36:04) [GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> class a: ... @staticmethod ... def b(): ... return 1 ... >>> a_obj = a() >>> print a.b() 1 >>> print a_obj.b() 1 >>> A: Syntax error. Try calling it. >>> class a: ... def b(): ... return 1 ... >>> x=a() >>> x.b() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: b() takes no arguments (1 given) See also: >>> class a: ... def b(): ... return 1 ... def c(self): ... return b() ... >>> a().c() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 5, in c NameError: global name 'b' is not defined A: in a class method self is the instance of the class the method is called on. beware that self is not a keyword in python just a conventional name given to the first argument of a method. look at this example: class A: def foo(self): print "I'm a.foo" @staticmethod def bar(s): print s a = A() a.foo() A.foo(a) here a is the instance of the class A. calling a.foo() you are invoking the method foo of the instance a while A.foo(a) invoke the method foo in the class A but passing the instance a as first argument and they are exactly the same thing (but never use the second form). staticmethod is a decorator that let you define a class method as static. that function is no more a method and the first argument is not the instance of the class but is exactly the first argument you passed at that function: a.bar("i'm a static method") i'm a static method A.bar("i'm a static method too") i'm a static method too PS. i don't want to bothering you but these are the very basis of python, the python tutorial is a nice start for the beginners.
What is the significance of a function without a 'self' argument inside a class?
class a: def b(): ... What is the significance of b? Thanks. class a: @staticmethod def b(): return 1 def c(self): b() print a.b() print a().b() print a().c()#error and class a: @staticmethod def b(): return 1 def c(self): return self.b() print a.b() print a().b() print a().c() #1 #1 #1
[ "Basically you should use b() as staticmethod so that you can call it either from Class or Object of class e.g:\nbash-3.2$ python\nPython 2.6 (trunk:66714:66715M, Oct 1 2008, 18:36:04) \n[GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> class a:\n... @staticmethod\n... def b():\n... return 1\n... \n>>> a_obj = a()\n>>> print a.b()\n1\n>>> print a_obj.b()\n1\n>>> \n\n", "Syntax error. Try calling it.\n>>> class a:\n... def b():\n... return 1\n... \n>>> x=a()\n>>> x.b()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: b() takes no arguments (1 given)\n\nSee also:\n>>> class a:\n... def b():\n... return 1\n... def c(self):\n... return b()\n... \n>>> a().c()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<stdin>\", line 5, in c\nNameError: global name 'b' is not defined\n\n", "in a class method self is the instance of the class the method is called on. beware that self is not a keyword in python just a conventional name given to the first argument of a method.\nlook at this example:\nclass A:\n\n def foo(self):\n print \"I'm a.foo\"\n\n @staticmethod\n def bar(s):\n print s\n\na = A()\na.foo()\nA.foo(a)\n\nhere a is the instance of the class A. calling a.foo() you are invoking the method foo of the instance a while A.foo(a) invoke the method foo in the class A but passing the instance a as first argument and they are exactly the same thing (but never use the second form).\nstaticmethod is a decorator that let you define a class method as static. that function is no more a method and the first argument is not the instance of the class but is exactly the first argument you passed at that function:\na.bar(\"i'm a static method\")\ni'm a static method\nA.bar(\"i'm a static method too\")\ni'm a static method too\n\nPS. i don't want to bothering you but these are the very basis of python, the python tutorial is a nice start for the beginners.\n" ]
[ 7, 4, 1 ]
[]
[]
[ "class", "python" ]
stackoverflow_0001962983_class_python.txt
Q: I want to print 'eee fff {'e':'eee','f':'fff'}',how can i get it. (it is about __setattr__) the next is my code: class a: w={} def __setattr__(self,name,value): self.w[name]=value def __getattr__(self,name): return self.w[name] b=a() b.e='eee' b['f']='fff' print b.e,b['f'],b.w #error what is the difference between b.e and b['f']. thanks A: __ set/getitem__() are used for indexing. Define them as well. A: class MyClass(object): def __init__(self): self.w = {} def __setitem__(self, k, v): self.w[k] = v def __getitem__(self, k): return self.w[k] mc = MyClass() mc['aa'] = 12 print mc['aa'] setitem/getitem is for indexed access (with square brackets) like shown above. setattr/getattr is for attribute access (i.e. mc.aa) A: You have not defined any method/attribute called self.e If instead, you were to say self.w[e] = 'eee', then you're errors should disappear.
I want to print 'eee fff {'e':'eee','f':'fff'}', how can I get it? (it is about __setattr__)
The following is my code: class a: w={} def __setattr__(self,name,value): self.w[name]=value def __getattr__(self,name): return self.w[name] b=a() b.e='eee' b['f']='fff' print b.e,b['f'],b.w #error What is the difference between b.e and b['f']? Thanks.
[ "__ set/getitem__() are used for indexing. Define them as well.\n", "class MyClass(object):\n def __init__(self):\n self.w = {}\n\n def __setitem__(self, k, v):\n self.w[k] = v\n\n def __getitem__(self, k):\n return self.w[k]\n\n\nmc = MyClass()\nmc['aa'] = 12\nprint mc['aa']\n\nsetitem/getitem is for indexed access (with square brackets) like shown above. setattr/getattr is for attribute access (i.e. mc.aa)\n", "You have not defined any method/attribute called self.e\nIf instead, you were to say self.w[e] = 'eee', then you're errors should disappear.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001962919_python.txt
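Putting the answers together, a sketch that produces exactly the output asked for: keep the __setattr__/__getattr__ pair from the question and add __setitem__/__getitem__ so the square-bracket access works too. Note that the class-level dict w is shared between instances, which matches the original snippet but is rarely what you want:

class a:
    w = {}   # shared by every instance, as in the question
    def __setattr__(self, name, value):
        self.w[name] = value
    def __getattr__(self, name):
        return self.w[name]
    def __setitem__(self, name, value):
        self.w[name] = value
    def __getitem__(self, name):
        return self.w[name]

b = a()
b.e = 'eee'
b['f'] = 'fff'
print b.e, b['f'], b.w   # eee fff {'e': 'eee', 'f': 'fff'} (dict ordering may vary)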
Q: Formatted Input in Python I have a peculiar problem. I need to read (from a txt file) using python only those substrings that are present at predefined range of offsets. Let's say 5-8 and 12-16. For example, if a line in the file is something like: abcdefghi akdhflskdhfhglskdjfhghsldk then I would like to read the two words - "efgh" and "kdhfl". Because, in the word "efgh", the offset of character "e" is 5 and that of "h" is 8. Similarly, the other word "kdhfl". Please note that the whitespaces also add to the offset. Infact, the white spaces in my file are not "consistenty occurring" in every line and cannot be depended upon to extract the words of interest. Which is why, I have to bank on the offsets. I hope I've been able to make the question clear. Awaiting answers! Edit - yes, the whitespace amount in each line can change and accounts for the offsets also. For example, consider these two lines - abcz d a bc d In both cases, I view the offset of the final character "d" as the same. As I said, the white spaces in the file are not consistent and I cannot rely on them. I need to pick up the characters based on their offsets. Does your answer still hold? A: assuming its a file, for line in open("file"): print line[4:8] , line[11:16] A: To extract pieces from offsets simply read each line into a string and then access a substring with a slice ([from:to]). It's unclear what you're saying about the inconsistent whitespace. If whitespace adds to the offset, it must be consistent to be meaningful. If the whitespace amount can change but actually accounts for the offsets, you can't reliably extract your data. In your added example, as long as d's offset stays the same, you can extract it with slicing. >>> s = 'a bc d' >>> s[5:6] 'd' >>> s = 'abc d' >>> s[5:6] 'd'
Formatted Input in Python
I have a peculiar problem. I need to read (from a txt file) using python only those substrings that are present at predefined ranges of offsets. Let's say 5-8 and 12-16. For example, if a line in the file is something like: abcdefghi akdhflskdhfhglskdjfhghsldk then I would like to read the two words - "efgh" and "kdhfl". Because, in the word "efgh", the offset of character "e" is 5 and that of "h" is 8. Similarly, the other word "kdhfl". Please note that the whitespaces also add to the offset. In fact, the white spaces in my file are not "consistently occurring" in every line and cannot be depended upon to extract the words of interest. Which is why I have to bank on the offsets. I hope I've been able to make the question clear. Awaiting answers! Edit - yes, the whitespace amount in each line can change and accounts for the offsets also. For example, consider these two lines - abcz d a bc d In both cases, I view the offset of the final character "d" as the same. As I said, the white spaces in the file are not consistent and I cannot rely on them. I need to pick up the characters based on their offsets. Does your answer still hold?
[ "assuming its a file,\nfor line in open(\"file\"):\n print line[4:8] , line[11:16]\n\n", "To extract pieces from offsets simply read each line into a string and then access a substring with a slice ([from:to]). \nIt's unclear what you're saying about the inconsistent whitespace. If whitespace adds to the offset, it must be consistent to be meaningful. If the whitespace amount can change but actually accounts for the offsets, you can't reliably extract your data.\nIn your added example, as long as d's offset stays the same, you can extract it with slicing.\n>>> s = 'a bc d'\n>>> s[5:6]\n'd'\n>>> s = 'abc d'\n>>> s[5:6]\n'd'\n\n" ]
[ 5, 1 ]
[ "What's to stop you from using a regular expression? Besides the whitespace do the offsets vary?\n/.{4}(.{4}).{4}(.{4})/\n\n" ]
[ -1 ]
[ "file_io", "python", "textinput" ]
stackoverflow_0001963546_file_io_python_textinput.txt
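To generalise the slicing in the accepted answer to any set of offset ranges, a small helper; the ranges are taken as 1-based and inclusive (the 5-8 and 12-16 from the question) and converted to Python's 0-based, end-exclusive slices, and the file name is a placeholder:

def fields(line, ranges):
    # ranges like [(5, 8), (12, 16)], 1-based and inclusive
    return [line[start - 1:end] for start, end in ranges]

for line in open("file"):
    print fields(line, [(5, 8), (12, 16)])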
Q: Render and scroll through multiline paragraphs using pyglet and ScrollableTextLayout How can one display and scroll through a multi-line strings (contain "\n") via pyglet using the features of ScrollableTextLayout? STL crops what is display, and seems to be the most efficient way to implement scrolling. However I have no idea as to how to use it. The docs do not elucidate much to me. SomeText: string = "Some multiline \n text is contained within this string \n which must be rendered \n such that it is able to be scrolled through." Snippets/Links are appreciated. A: You create one like this: scroll_area = pyglet.text.layout.ScrollableTextLayout(my_text, width, height, multiline=True) And you choose your scroll position with the view_x and view_y values. scroll_area.view_y = 30 # start 30 pixels down Set different values of view_y to scroll vertically.
Render and scroll through multiline paragraphs using pyglet and ScrollableTextLayout
How can one display and scroll through multi-line strings (containing "\n") via pyglet using the features of ScrollableTextLayout? STL crops what is displayed, and seems to be the most efficient way to implement scrolling. However, I have no idea how to use it. The docs do not elucidate much for me. SomeText: string = "Some multiline \n text is contained within this string \n which must be rendered \n such that it is able to be scrolled through." Snippets/Links are appreciated.
[ "You create one like this:\nscroll_area = pyglet.text.layout.ScrollableTextLayout(my_text, width, height, multiline=True) \n\nAnd you choose your scroll position with the view_x and view_y values.\nscroll_area.view_y = 30 # start 30 pixels down\n\nSet different values of view_y to scroll vertically.\n" ]
[ 0 ]
[]
[]
[ "multiline", "pyglet", "python", "scroll" ]
stackoverflow_0001963171_multiline_pyglet_python_scroll.txt
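Filling the fragments in the answer above out into something runnable, as a sketch against the pyglet 1.1-era API: ScrollableTextLayout takes a document object rather than a plain string, and the view_y sign convention is worth double-checking against your pyglet version. The window size, layout size and scroll step are arbitrary:

import pyglet

text = ("Some multiline \n text is contained within this string \n"
        " which must be rendered \n such that it is able to be scrolled through.")

window = pyglet.window.Window(400, 200)
document = pyglet.text.document.UnformattedDocument(text)
document.set_style(0, len(text), dict(color=(255, 255, 255, 255)))
layout = pyglet.text.layout.ScrollableTextLayout(document, 380, 180, multiline=True)
layout.x, layout.y = 10, 10

@window.event
def on_mouse_scroll(x, y, scroll_x, scroll_y):
    # Scroll the view instead of moving the layout itself.
    layout.view_y += scroll_y * 16

@window.event
def on_draw():
    window.clear()
    layout.draw()

pyglet.app.run()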
Q: Regular Expression search/replace help needed, Python One rule that I need is that if the last vowel (aeiou) of a string is before a character from the set ('t','k','s','tk'), then a : needs to be added right after the vowel. So, in Python if I have the string "orchestras" I need a rule that will turn it into "orchestra:s" edit: The (t, k, s, tk) would be the final character(s) in the string A: re.sub(r"([aeiou])(t|k|s|tk)([^aeiou]*)$", r"\1:\2\3", "orchestras") re.sub(r"([aeiou])(t|k|s|tk)$", r"\1:\2", "orchestras") You don't say if there can be other consonants after the t/k/s/tk. The first regex allows for this as long as there aren't any more vowels, so it'll change "fist" to "fi:st" for instance. If the word must end with the t/k/s/tk then use the second regex, which will do nothing for "fist". A: If you have not figured it out yet, I recommend trying [python_root]/tools/scripts/redemo.py It is a nice testing area. A: Another take on the replacement regex: re.sub("(?<=[aeiou])(?=(?:t|k|s|tk)$)", ":", "orchestras") This one does not need to replace using remembered groups.
Regular Expression search/replace help needed, Python
One rule that I need is that if the last vowel (aeiou) of a string is before a character from the set ('t','k','s','tk'), then a : needs to be added right after the vowel. So, in Python if I have the string "orchestras" I need a rule that will turn it into "orchestra:s" edit: The (t, k, s, tk) would be the final character(s) in the string
[ "re.sub(r\"([aeiou])(t|k|s|tk)([^aeiou]*)$\", r\"\\1:\\2\\3\", \"orchestras\")\nre.sub(r\"([aeiou])(t|k|s|tk)$\", r\"\\1:\\2\", \"orchestras\")\n\nYou don't say if there can be other consonants after the t/k/s/tk. The first regex allows for this as long as there aren't any more vowels, so it'll change \"fist\" to \"fi:st\" for instance. If the word must end with the t/k/s/tk then use the second regex, which will do nothing for \"fist\".\n", "If you have not figured it out yet, I recommend trying [python_root]/tools/scripts/redemo.py It is a nice testing area.\n", "Another take on the replacement regex:\nre.sub(\"(?<=[aeiou])(?=(?:t|k|s|tk)$)\", \":\", \"orchestras\")\n\nThis one does not need to replace using remembered groups.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001862782_python_regex.txt
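A quick worked check of the second pattern from the accepted answer over a few sample words (the word list is made up for illustration):

import re

pattern = re.compile(r"([aeiou])(t|k|s|tk)$")
for word in ["orchestras", "orchestrat", "fist", "chalk", "piano"]:
    print word, "->", pattern.sub(r"\1:\2", word)

# "orchestras" becomes "orchestra:s" and "orchestrat" becomes "orchestra:t",
# while "fist", "chalk" and "piano" are left alone because the final
# t/k/s does not directly follow a vowel (or there is no final t/k/s at all).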
Q: What problems will one see in using Python multiprocessing naively? We're considering re-factoring a large application with a complex GUI which is isolated in a decoupled fashion from the back-end, to use the new (Python 2.6) multiprocessing module. The GUI/backend interface uses Queues with Message objects exchanged in both directions. One thing I've just concluded (tentatively, but feel free to confirm it) is that "object identity" would not be preserved across the multiprocessing interface. Currently when our GUI publishes a Message to the back-end, it expects to get the same Message back with a result attached as an attribute. It uses object identity (if received_msg is message_i_sent:) to identify returning messages in some cases... and that seems likely not to work with multiprocessing. This question is to ask what "gotchas" like this you have seen in actual use or can imagine one would encounter in naively using the multiprocessing module, especially in refactoring an existing single-process application. Please specify whether your answer is based on actual experience. Bonus points for providing a usable workaround for the problem. Edit: Although my intent with this question was to gather descriptions of problems in general, I think I made two mistakes: I made it community wiki from the start (which probably makes many people ignore it, as they won't get reputation points), and I included a too-specific example which -- while I appreciate the answers -- probably made many people miss the request for general responses. I'll probably re-word and re-ask this in a new question. For now I'm accepting one answer as best merely to close the question as far as it pertains to the specific example I included. Thanks to those who did answer! A: I have not used multiprocessing itself, but the problems presented are similar to experience I've had in two other domains: distributed systems, and object databases. Python object identity can be a blessing and a curse! As for general gotchas, it helps if the application you are refactoring can acknowledge that tasks are being handled asynchronously. If not, you will generally end up managing locks, and much of the performance you could have gained by using separate processes will be lost to waiting on those locks. I will also suggest that you spend the time to build some scaffolding for debugging across processes. Truly asynchronous processes tend to be doing much more than the mind can hold and verify -- or at least my mind! For the specific case outlined, I would manage object identity at the process border when items queued and returned. When sending a task to be processed, annotate the task with an id(), and stash the task instance in a dictionary using the id() as the key. When the task is updated/completed, retrieve the exact task back by id() from the dictionary, and apply the newly updated state to it. Now the exact task, and therefore its identity, will be maintained. A: Well, of course testing for identity on non-singleton object (es. "a is None" or "a is False") isn't usually a good practice - it might be quick, but a really-quick workaround would be to exchange the "is" for the "==" test and use an incremental counter to define identity: # this is not threadsafe. class Message(object): def _next_id(): i = 0 while True: i += 1 yield i _idgen = _next_id() del _next_id def __init__(self): self.id = self._idgen.next() def __eq__(self, other): return (self.__class__ == other.__class__) and (self.id == other.id) This might be an idea. 
Also, be aware that if you have tons of "worker processes", memory consumption might be far greater than with a thread-based approach. A: You can try the persistent package from my project GarlicSim. It's LGPL'ed. http://github.com/cool-RR/GarlicSim/tree/development/garlicsim/garlicsim/misc/persistent/ (The main module in it is persistent.py) I often use it like this: # ... self.identity = Persistent() Then I have an identity that is preserved across processes.
What problems will one see in using Python multiprocessing naively?
We're considering re-factoring a large application with a complex GUI which is isolated in a decoupled fashion from the back-end, to use the new (Python 2.6) multiprocessing module. The GUI/backend interface uses Queues with Message objects exchanged in both directions. One thing I've just concluded (tentatively, but feel free to confirm it) is that "object identity" would not be preserved across the multiprocessing interface. Currently when our GUI publishes a Message to the back-end, it expects to get the same Message back with a result attached as an attribute. It uses object identity (if received_msg is message_i_sent:) to identify returning messages in some cases... and that seems likely not to work with multiprocessing. This question is to ask what "gotchas" like this you have seen in actual use or can imagine one would encounter in naively using the multiprocessing module, especially in refactoring an existing single-process application. Please specify whether your answer is based on actual experience. Bonus points for providing a usable workaround for the problem. Edit: Although my intent with this question was to gather descriptions of problems in general, I think I made two mistakes: I made it community wiki from the start (which probably makes many people ignore it, as they won't get reputation points), and I included a too-specific example which -- while I appreciate the answers -- probably made many people miss the request for general responses. I'll probably re-word and re-ask this in a new question. For now I'm accepting one answer as best merely to close the question as far as it pertains to the specific example I included. Thanks to those who did answer!
[ "I have not used multiprocessing itself, but the problems presented are similar to experience I've had in two other domains: distributed systems, and object databases. Python object identity can be a blessing and a curse!\nAs for general gotchas, it helps if the application you are refactoring can acknowledge that tasks are being handled asynchronously. If not, you will generally end up managing locks, and much of the performance you could have gained by using separate processes will be lost to waiting on those locks. I will also suggest that you spend the time to build some scaffolding for debugging across processes. Truly asynchronous processes tend to be doing much more than the mind can hold and verify -- or at least my mind!\nFor the specific case outlined, I would manage object identity at the process border when items queued and returned. When sending a task to be processed, annotate the task with an id(), and stash the task instance in a dictionary using the id() as the key. When the task is updated/completed, retrieve the exact task back by id() from the dictionary, and apply the newly updated state to it. Now the exact task, and therefore its identity, will be maintained.\n", "Well, of course testing for identity on non-singleton object (es. \"a is None\" or \"a is False\") isn't usually a good practice - it might be quick, but a really-quick workaround would be to exchange the \"is\" for the \"==\" test and use an incremental counter to define identity:\n# this is not threadsafe.\nclass Message(object):\n def _next_id():\n i = 0\n while True:\n i += 1\n yield i\n _idgen = _next_id()\n del _next_id\n\n def __init__(self):\n self.id = self._idgen.next()\n\n def __eq__(self, other):\n return (self.__class__ == other.__class__) and (self.id == other.id)\n\nThis might be an idea.\nAlso, be aware that if you have tons of \"worker processes\", memory consumption might be far greater than with a thread-based approach.\n", "You can try the persistent package from my project GarlicSim. It's LGPL'ed.\nhttp://github.com/cool-RR/GarlicSim/tree/development/garlicsim/garlicsim/misc/persistent/\n(The main module in it is persistent.py)\nI often use it like this:\n# ...\nself.identity = Persistent()\n\nThen I have an identity that is preserved across processes.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0001925718_multiprocessing_python.txt
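A bare-bones sketch of the id()-keyed bookkeeping described in the first answer, as seen from the GUI side of a request/reply queue pair; the Message class and queue names here are placeholders rather than the poster's real code:

import multiprocessing

class Message(object):
    def __init__(self, payload):
        self.payload = payload
        self.result = None

pending = {}                        # id -> the exact Message instance we sent
to_backend = multiprocessing.Queue()
from_backend = multiprocessing.Queue()

def send(msg):
    msg.tag = id(msg)               # the tag travels with the pickled copy
    pending[msg.tag] = msg          # keep a reference so the id stays valid
    to_backend.put(msg)

def receive():
    reply = from_backend.get()      # this is a copy, not the object we sent
    original = pending.pop(reply.tag)
    original.result = reply.result
    return original                 # "received_msg is message_i_sent" holds again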
Q: What exception to raise if wrong number of arguments passed in to **kwargs? Suppose in python you have a routine that accepts three named parameters (as **kwargs), but any two out of these three must be filled in. If only one is filled in, it's an error. If all three are, it's an error. What kind of error would you raise? RuntimeError, a specifically created exception, or other? A: Remember that you can subclass Python's built-in exception classes (and TypeError would surely be the right built-in exception class to raise here -- that's what Python raises if the number of arguments does not match the signature, in normal cases without *a or **k forms in the signature). I like having every package define its own class Error(Exception), and then specific exceptions as needed can multiply inherit as appropriate, e.g.: class WrongNumberOfArguments(thispackage.Error, TypeError): Then, I'd raise WrongNumberOfArguments when I detect such a problem situation. This way, any caller who's aware of this package can catch thispackage.Error, if they need to deal with any error specific to the package, while other callers (presumably up higher in the call chain) call still catch the more generic TypeError to deal with any errors such as "wrong number of arguments used in a function call". A: Why not just do what python does? >>> abs(1, 2, 3) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: abs() takes exactly one argument (3 given) A: If (as you say in one of the comments) that this is a programmer error, then you can raise AssertionError: def two(**kwargs): assert len(kwargs) == 2, "Please only provide two args" BTW, if you only have three named arguments, **kwargs seems like an odd way to do it. More natural might be: def two(a=None, b=None, c=None): pass A: I would make a specific one. You can catch it and deal with that specific exception since it is a special circumstance that you created :) A: I would use a ValueError, or a subclass thereof: "Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError." Passing 3 or 1 values when exactly 2 are required would technically be an inappropriate value if you consider all of the arguments a single tuple... At least in my opinion! :) A: I recommend a custom exception. Like so: class NeedExactlyTwo(ValueError): pass Then you can raise NeedExactlyTwo in your code. Be sure to document this in the docstring for your function.
What exception to raise if wrong number of arguments passed in to **kwargs?
Suppose in python you have a routine that accepts three named parameters (as **kwargs), but any two out of these three must be filled in. If only one is filled in, it's an error. If all three are, it's an error. What kind of error would you raise? RuntimeError, a specifically created exception, or other?
[ "Remember that you can subclass Python's built-in exception classes (and TypeError would surely be the right built-in exception class to raise here -- that's what Python raises if the number of arguments does not match the signature, in normal cases without *a or **k forms in the signature). I like having every package define its own class Error(Exception), and then specific exceptions as needed can multiply inherit as appropriate, e.g.:\nclass WrongNumberOfArguments(thispackage.Error, TypeError):\n\nThen, I'd raise WrongNumberOfArguments when I detect such a problem situation.\nThis way, any caller who's aware of this package can catch thispackage.Error, if they need to deal with any error specific to the package, while other callers (presumably up higher in the call chain) call still catch the more generic TypeError to deal with any errors such as \"wrong number of arguments used in a function call\".\n", "Why not just do what python does?\n>>> abs(1, 2, 3)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: abs() takes exactly one argument (3 given)\n\n", "If (as you say in one of the comments) that this is a programmer error, then you can raise AssertionError:\ndef two(**kwargs):\n assert len(kwargs) == 2, \"Please only provide two args\"\n\nBTW, if you only have three named arguments, **kwargs seems like an odd way to do it. More natural might be:\ndef two(a=None, b=None, c=None):\n pass\n\n", "I would make a specific one. You can catch it and deal with that specific exception since it is a special circumstance that you created :)\n", "I would use a ValueError, or a subclass thereof: \"Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.\"\nPassing 3 or 1 values when exactly 2 are required would technically be an inappropriate value if you consider all of the arguments a single tuple... At least in my opinion! :)\n", "I recommend a custom exception. Like so:\nclass NeedExactlyTwo(ValueError):\n pass\n\nThen you can raise NeedExactlyTwo in your code.\nBe sure to document this in the docstring for your function.\n" ]
[ 17, 15, 4, 3, 0, 0 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0001964126_exception_python.txt
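A sketch of the "exactly two of these three" check itself, raising a TypeError subclass along the lines of the first answer; the parameter names and the exception name are illustrative:

class WrongNumberOfArguments(TypeError):
    pass

def combine(**kwargs):
    allowed = ('a', 'b', 'c')
    unknown = set(kwargs) - set(allowed)
    if unknown:
        raise WrongNumberOfArguments("unexpected arguments: %s" % ", ".join(unknown))
    given = [k for k in allowed if kwargs.get(k) is not None]
    if len(given) != 2:
        raise WrongNumberOfArguments("need exactly two of a, b, c (got %d)" % len(given))
    return given

print combine(a=1, b=2)    # ['a', 'b']
combine(a=1)               # raises WrongNumberOfArguments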
Q: short Unicode \N{} names for Latin-1 characters in Python? Are there short Unicode u"\N{...}" names for Latin1 characters in Python ? \N{A umlaut} etc. would be nice, \N{LATIN SMALL LETTER A WITH DIAERESIS} etc. is just too long to type every time. (Added:) I use an English keyboard, but occasionally need German letters, as in "Löwenbräu Weißbier". Yes one can cut-paste them singly, L cutpaste ö wenbr cutpaste ä ... but that breaks the flow; I was hoping for a keyboard-only way. A: Sorry, no, there's no such thing. In string literals, anyway... you could perhaps piggyback on another encoding scheme, such as HTML: >>> import HTMLParser >>> HTMLParser.HTMLParser().unescape(u'a &auml; b c') u'a \xe4 b' But I don't think this'd be worth it. Hardly anyone even uses the \N notation in any case... for the occasional character the \xnn notation is acceptable; for more involved usage you're better off just typing ä directly and making sure a # coding= is defined in the script as per PEP263. (If you don't have a keyboard layout that can type those diacriticals directly, get one. eg. eurokb on Windows, or using the Compose key on Linux.) A: If you want to do the right thing please use UTF-8 in your python source code. This will keep the code much more readable. Python is able to real UTF-8 source files, all you have to do is to add an additional line after the first one: #!/usr/bin/python # -*- coding: UTF-8 -*- By the way, starting with Python 3.0 UTF-8 is the default encoding so you will not need this line anymore. See PEP3120 A: You can put an actual "ä" character in your string. For this you have to declare the encoding of the source code at the top #!/usr/bin/env python # encoding: utf-8 x = u"ä" A: Have you thought about writing your own converter? It wouldn't be hard to write something that would go through a file and replace \N{A umlaut} with \N{LATIN SMALL LETTER A WITH DIAERESIS} and all the rest. A: You can use the Unicode notation \uXXXX do describe that character: u"\u00E4" A: On Windows, you can use the charmap.exe utility to look up the keyboard shortcut for common letters you're using such as: ALT-0223 = ß ALT-0228 = ä ALT-0246 = ö Then use Unicode and save in UTF-8: # -*- coding: UTF-8 -*- phrase = u'Löwenbräu Weißbier' or use a converter as someone else mentioned and make up your own shortcuts: # -*- coding: UTF-8 -*- def german(s): s = s.replace(u'SS',u'ß') s = s.replace(u'a:',u'ä') s = s.replace(u'o:',u'ö') return s phrase = german(u'Lo:wenbra:u WeiSSbier') print phrase
short Unicode \N{} names for Latin-1 characters in Python?
Are there short Unicode u"\N{...}" names for Latin1 characters in Python ? \N{A umlaut} etc. would be nice, \N{LATIN SMALL LETTER A WITH DIAERESIS} etc. is just too long to type every time. (Added:) I use an English keyboard, but occasionally need German letters, as in "Löwenbräu Weißbier". Yes one can cut-paste them singly, L cutpaste ö wenbr cutpaste ä ... but that breaks the flow; I was hoping for a keyboard-only way.
[ "Sorry, no, there's no such thing. In string literals, anyway... you could perhaps piggyback on another encoding scheme, such as HTML:\n>>> import HTMLParser\n>>> HTMLParser.HTMLParser().unescape(u'a &auml; b c')\nu'a \\xe4 b'\n\nBut I don't think this'd be worth it.\nHardly anyone even uses the \\N notation in any case... for the occasional character the \\xnn notation is acceptable; for more involved usage you're better off just typing ä directly and making sure a # coding= is defined in the script as per PEP263. (If you don't have a keyboard layout that can type those diacriticals directly, get one. eg. eurokb on Windows, or using the Compose key on Linux.)\n", "If you want to do the right thing please use UTF-8 in your python source code. This will keep the code much more readable.\nPython is able to real UTF-8 source files, all you have to do is to add an additional line after the first one:\n#!/usr/bin/python\n# -*- coding: UTF-8 -*-\n\nBy the way, starting with Python 3.0 UTF-8 is the default encoding so you will not need this line anymore. See PEP3120\n", "You can put an actual \"ä\" character in your string. For this you have to declare the encoding of the source code at the top\n#!/usr/bin/env python\n# encoding: utf-8\n\nx = u\"ä\" \n\n", "Have you thought about writing your own converter? It wouldn't be hard to write something that would go through a file and replace \\N{A umlaut} with \\N{LATIN SMALL LETTER A WITH DIAERESIS} and all the rest.\n", "You can use the Unicode notation \\uXXXX do describe that character:\nu\"\\u00E4\"\n\n", "On Windows, you can use the charmap.exe utility to look up the keyboard shortcut for common letters you're using such as:\nALT-0223 = ß\nALT-0228 = ä\nALT-0246 = ö\n\nThen use Unicode and save in UTF-8:\n# -*- coding: UTF-8 -*-\nphrase = u'Löwenbräu Weißbier'\n\nor use a converter as someone else mentioned and make up your own shortcuts:\n# -*- coding: UTF-8 -*-\n\ndef german(s):\n s = s.replace(u'SS',u'ß')\n s = s.replace(u'a:',u'ä')\n s = s.replace(u'o:',u'ö')\n return s\n\nphrase = german(u'Lo:wenbra:u WeiSSbier')\nprint phrase\n\n" ]
[ 3, 3, 1, 0, 0, 0 ]
[]
[]
[ "encoding", "python", "unicode", "utf_8" ]
stackoverflow_0001963353_encoding_python_unicode_utf_8.txt
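If short aliases really are wanted, one way to get them without new syntax is a small lookup table in front of unicodedata.lookup; the alias spellings on the left are invented, only the full names on the right are real Unicode character names:

# -*- coding: UTF-8 -*-
import unicodedata

ALIASES = {
    'a umlaut': 'LATIN SMALL LETTER A WITH DIAERESIS',
    'o umlaut': 'LATIN SMALL LETTER O WITH DIAERESIS',
    'sharp s':  'LATIN SMALL LETTER SHARP S',
}

def U(name):
    return unicodedata.lookup(ALIASES.get(name.lower(), name))

print u"L" + U('o umlaut') + u"wenbr" + U('a umlaut') + u"u Wei" + U('sharp s') + u"bier"
# Löwenbräu Weißbier, given a console encoding that can display it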
Q: Time complexity of accessing a Python dict I am writing a simple Python program. My program seems to suffer from linear access to dictionaries, its run-time grows exponentially even though the algorithm is quadratic. I use a dictionary to memoize values. That seems to be a bottleneck. The values I'm hashing are tuples of points. Each point is: (x,y), 0 <= x,y <= 50 Each key in the dictionary is: A tuple of 2-5 points: ((x1,y1),(x2,y2),(x3,y3),(x4,y4)) The keys are read many times more often than they are written. Am I correct that python dicts suffer from linear access times with such inputs? As far as I know, sets have guaranteed logarithmic access times. How can I simulate dicts using sets(or something similar) in Python? edit As per request, here's a (simplified) version of the memoization function: def memoize(fun): memoized = {} def memo(*args): key = args if not key in memoized: memoized[key] = fun(*args) return memoized[key] return memo A: See Time Complexity. The python dict is a hashmap, its worst case is therefore O(n) if the hash function is bad and results in a lot of collisions. However that is a very rare case where every item added has the same hash and so is added to the same chain which for a major Python implementation would be extremely unlikely. The average time complexity is of course O(1). The best method would be to check and take a look at the hashs of the objects you are using. The CPython Dict uses int PyObject_Hash (PyObject *o) which is the equivalent of hash(o). After a quick check, I have not yet managed to find two tuples that hash to the same value, which would indicate that the lookup is O(1) l = [] for x in range(0, 50): for y in range(0, 50): if hash((x,y)) in l: print "Fail: ", (x,y) l.append(hash((x,y))) print "Test Finished" CodePad (Available for 24 hours) A: You are not correct. dict access is unlikely to be your problem here. It is almost certainly O(1), unless you have some very weird inputs or a very bad hashing function. Paste some sample code from your application for a better diagnosis. A: It would be easier to make suggestions if you provided example code and data. Accessing the dictionary is unlikely to be a problem as that operation is O(1) on average, and O(N) amortized worst case. It's possible that the built-in hashing functions are experiencing collisions for your data. If you're having problems with has the built-in hashing function, you can provide your own. Python's dictionary implementation reduces the average complexity of dictionary lookups to O(1) by requiring that key objects provide a "hash" function. Such a hash function takes the information in a key object and uses it to produce an integer, called a hash value. This hash value is then used to determine which "bucket" this (key, value) pair should be placed into. You can overwrite the __hash__ method in your class to implement a custom hash function like this: def __hash__(self): return hash(str(self)) Depending on what your data actually looks like, you might be able to come up with a faster hash function that has fewer collisions than the standard function. However, this is unlikely. See the Python Wiki page on Dictionary Keys for more information. A: To answer your specific questions: Q1: "Am I correct that python dicts suffer from linear access times with such inputs?" A1: If you mean that average lookup time is O(N) where N is the number of entries in the dict, then it is highly likely that you are wrong. 
If you are correct, the Python community would very much like to know under what circumstances you are correct, so that the problem can be mitigated or at least warned about. Neither "sample" code nor "simplified" code are useful. Please show actual code and data that reproduce the problem. The code should be instrumented with things like number of dict items and number of dict accesses for each P where P is the number of points in the key (2 <= P <= 5) Q2: "As far as I know, sets have guaranteed logarithmic access times. How can I simulate dicts using sets(or something similar) in Python?" A2: Sets have guaranteed logarithmic access times in what context? There is no such guarantee for Python implementations. Recent CPython versions in fact use a cut-down dict implementation (keys only, no values), so the expectation is average O(1) behaviour. How can you simulate dicts with sets or something similar in any language? Short answer: with extreme difficulty, if you want any functionality beyond dict.has_key(key). A: As others have pointed out, accessing dicts in Python is fast. They are probably the best-oiled data structure in the language, given their central role. The problem lies elsewhere. How many tuples are you memoizing? Have you considered the memory footprint? Perhaps you are spending all your time in the memory allocator or paging memory. A: My program seems to suffer from linear access to dictionaries, its run-time grows exponentially even though the algorithm is quadratic. I use a dictionary to memoize values. That seems to be a bottleneck. This is evidence of a bug in your memoization method.
Time complexity of accessing a Python dict
I am writing a simple Python program. My program seems to suffer from linear access to dictionaries, its run-time grows exponentially even though the algorithm is quadratic. I use a dictionary to memoize values. That seems to be a bottleneck. The values I'm hashing are tuples of points. Each point is: (x,y), 0 <= x,y <= 50 Each key in the dictionary is: A tuple of 2-5 points: ((x1,y1),(x2,y2),(x3,y3),(x4,y4)) The keys are read many times more often than they are written. Am I correct that python dicts suffer from linear access times with such inputs? As far as I know, sets have guaranteed logarithmic access times. How can I simulate dicts using sets(or something similar) in Python? edit As per request, here's a (simplified) version of the memoization function: def memoize(fun): memoized = {} def memo(*args): key = args if not key in memoized: memoized[key] = fun(*args) return memoized[key] return memo
[ "See Time Complexity. The python dict is a hashmap, its worst case is therefore O(n) if the hash function is bad and results in a lot of collisions. However that is a very rare case where every item added has the same hash and so is added to the same chain which for a major Python implementation would be extremely unlikely. The average time complexity is of course O(1).\nThe best method would be to check and take a look at the hashs of the objects you are using. The CPython Dict uses int PyObject_Hash (PyObject *o) which is the equivalent of hash(o).\nAfter a quick check, I have not yet managed to find two tuples that hash to the same value, which would indicate that the lookup is O(1)\nl = []\nfor x in range(0, 50):\n for y in range(0, 50):\n if hash((x,y)) in l:\n print \"Fail: \", (x,y)\n l.append(hash((x,y)))\nprint \"Test Finished\"\n\nCodePad (Available for 24 hours)\n", "You are not correct. dict access is unlikely to be your problem here. It is almost certainly O(1), unless you have some very weird inputs or a very bad hashing function. Paste some sample code from your application for a better diagnosis.\n", "It would be easier to make suggestions if you provided example code and data. \nAccessing the dictionary is unlikely to be a problem as that operation is O(1) on average, and O(N) amortized worst case. It's possible that the built-in hashing functions are experiencing collisions for your data. If you're having problems with has the built-in hashing function, you can provide your own.\n\nPython's dictionary implementation\n reduces the average complexity of\n dictionary lookups to O(1) by\n requiring that key objects provide a\n \"hash\" function. Such a hash function\n takes the information in a key object\n and uses it to produce an integer,\n called a hash value. This hash value\n is then used to determine which\n \"bucket\" this (key, value) pair should\n be placed into. \n\nYou can overwrite the __hash__ method in your class to implement a custom hash function like this:\ndef __hash__(self): \n return hash(str(self))\n\nDepending on what your data actually looks like, you might be able to come up with a faster hash function that has fewer collisions than the standard function. However, this is unlikely. See the Python Wiki page on Dictionary Keys for more information. \n", "To answer your specific questions:\n\nQ1:\n\"Am I correct that python dicts suffer from linear access times with such inputs?\"\n\nA1: If you mean that average lookup time is O(N) where N is the number of entries in the dict, then it is highly likely that you are wrong. If you are correct, the Python community would very much like to know under what circumstances you are correct, so that the problem can be mitigated or at least warned about. Neither \"sample\" code nor \"simplified\" code are useful. Please show actual code and data that reproduce the problem. The code should be instrumented with things like number of dict items and number of dict accesses for each P where P is the number of points in the key (2 <= P <= 5)\n\nQ2:\n\"As far as I know, sets have guaranteed logarithmic access times.\nHow can I simulate dicts using sets(or something similar) in Python?\"\n\nA2: Sets have guaranteed logarithmic access times in what context? There is no such guarantee for Python implementations. Recent CPython versions in fact use a cut-down dict implementation (keys only, no values), so the expectation is average O(1) behaviour. How can you simulate dicts with sets or something similar in any language? 
Short answer: with extreme difficulty, if you want any functionality beyond dict.has_key(key).\n", "As others have pointed out, accessing dicts in Python is fast. They are probably the best-oiled data structure in the language, given their central role. The problem lies elsewhere.\nHow many tuples are you memoizing? Have you considered the memory footprint? Perhaps you are spending all your time in the memory allocator or paging memory.\n", "\nMy program seems to suffer from linear access to dictionaries, its run-time grows exponentially even though the algorithm is quadratic.\nI use a dictionary to memoize values. That seems to be a bottleneck.\n\nThis is evidence of a bug in your memoization method.\n" ]
[ 100, 10, 9, 6, 2, 2 ]
[]
[]
[ "complexity_theory", "dictionary", "hash", "python" ]
stackoverflow_0001963507_complexity_theory_dictionary_hash_python.txt
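A quick empirical check along the lines these answers suggest: build dictionaries of increasing size keyed by tuples of points and confirm that the per-lookup cost stays roughly flat; the sizes and the four-point key shape are arbitrary:

import random, timeit

def build(n):
    d = {}
    while len(d) < n:
        key = tuple((random.randint(0, 50), random.randint(0, 50)) for _ in range(4))
        d[key] = True
    return d

for n in (10**3, 10**4, 10**5):
    d = build(n)
    probe = random.sample(list(d), 1000)
    t = timeit.timeit(lambda: [k in d for k in probe], number=100)
    print n, t    # roughly constant times indicate O(1) average lookups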
Q: How to append '\\?\' to the front of a file path in Python I'm trying to work with some long file paths (Windows) in Python and have come across some problems. After reading the question here, it looks as though I need to append '\\?\' to the front of my long file paths in order to use them with os.stat(filepath). The problem I'm having is that I can't create a string in Python that ends in a backslash. The question here points out that you can't even end strings in Python with a single '\' character. Is there anything in any of the Python standard libraries or anywhere else that lets you simply append '\\?\' to the front of a file path you already have? Or is there any other work around for working with long file paths in Windows with Python? It seems like such a simple thing to do, but I can't figure it out for the life of me. A: "\\\\?\\" should give you exactly the string you want. Longer answer: of course you can end a string in Python with a backslash. You just can't do so when it's a "raw" string (one prefixed with an 'r'). Which you usually use for strings that contains (lots of) backslashes (to avoid the infamous "leaning toothpick" syndrome ;-)) A: Even with a raw string, you can end in a backslash with: >>> print r'\\?\D:\Blah' + '\\' \\?\D:\Blah\ or even: >>> print r'\\?\D:\Blah' '\\' \\?\D:\Blah\ since Python concatenates to literal strings into one.
How to append '\\?\' to the front of a file path in Python
I'm trying to work with some long file paths (Windows) in Python and have come across some problems. After reading the question here, it looks as though I need to append '\\?\' to the front of my long file paths in order to use them with os.stat(filepath). The problem I'm having is that I can't create a string in Python that ends in a backslash. The question here points out that you can't even end strings in Python with a single '\' character. Is there anything in any of the Python standard libraries or anywhere else that lets you simply append '\\?\' to the front of a file path you already have? Or is there any other work around for working with long file paths in Windows with Python? It seems like such a simple thing to do, but I can't figure it out for the life of me.
[ "\"\\\\\\\\?\\\\\" should give you exactly the string you want.\nLonger answer: of course you can end a string in Python with a backslash. You just can't do so when it's a \"raw\" string (one prefixed with an 'r'). Which you usually use for strings that contains (lots of) backslashes (to avoid the infamous \"leaning toothpick\" syndrome ;-))\n", "Even with a raw string, you can end in a backslash with:\n>>> print r'\\\\?\\D:\\Blah' + '\\\\'\n\\\\?\\D:\\Blah\\\n\nor even:\n>>> print r'\\\\?\\D:\\Blah' '\\\\'\n\\\\?\\D:\\Blah\\\n\nsince Python concatenates to literal strings into one.\n" ]
[ 3, 0 ]
[]
[]
[ "backslash", "filepath", "python" ]
stackoverflow_0001963302_backslash_filepath_python.txt
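A small helper for prepending the extended-length prefix, as a sketch: the \\?\ form requires an absolute path, and UNC paths use the \\?\UNC\ variant, so both cases are handled; the example path is made up, and the MSDN rules are worth double-checking for your situation:

import os

def extended_path(path):
    path = os.path.abspath(path)
    if path.startswith(u"\\\\"):              # UNC path such as \\server\share\...
        return u"\\\\?\\UNC\\" + path[2:]
    return u"\\\\?\\" + path

long_name = extended_path(u"D:\\some\\very\\deep\\folder\\file.txt")
print os.stat(long_name)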
Q: edit in place using xpath Is it possible to do in place edit of XML document using xpath ? I'd prefer any python solution but Java would be fine too. A: XPath is not intended to edit document in place, as far as I know. It is intended to only select nodes of the document. XSLT relies on XPath and can transform documents. Regarding Python, see answer to this question: how to use xpath in python. It mentions also libraries which can do XSLT transformations. A: Using XML to store data is probably not optimal, as you experience here. Editing XML is extremely costly. One way of doing the editing is parsing the xml into a tree, and then inserting stuff into that three, and then rebuilding the xml file. Editing an xml file in place is also possible, but then you need some kind of search mechanism that finds the location you need to edit or insert into, and then write to the file from that point. Remember to also read the remaining data, because it will be overwritten. This is fine for inserting new tags or data, but editing existing data makes it even more complicated. My own rule is to not use XML for storage, but to present data. So the storage facility, or some kind of middle man, needs to form xml files from the data it has.
edit in place using xpath
Is it possible to do an in-place edit of an XML document using XPath? I'd prefer any Python solution, but Java would be fine too.
[ "XPath is not intended to edit document in place, as far as I know. It is intended to only select nodes of the document. XSLT relies on XPath and can transform documents.\nRegarding Python, see answer to this question: how to use xpath in python. It mentions also libraries which can do XSLT transformations.\n", "Using XML to store data is probably not optimal, as you experience here. Editing XML is extremely costly.\nOne way of doing the editing is parsing the xml into a tree, and then inserting stuff into that three, and then rebuilding the xml file. \nEditing an xml file in place is also possible, but then you need some kind of search mechanism that finds the location you need to edit or insert into, and then write to the file from that point. Remember to also read the remaining data, because it will be overwritten. This is fine for inserting new tags or data, but editing existing data makes it even more complicated.\nMy own rule is to not use XML for storage, but to present data. So the storage facility, or some kind of middle man, needs to form xml files from the data it has.\n" ]
[ 3, 1 ]
[]
[]
[ "python", "xpath" ]
stackoverflow_0001964583_python_xpath.txt
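If pulling in a third-party library is acceptable after all, one common choice is lxml, and the usual pattern is: parse, locate nodes with an XPath expression, mutate them, and write the tree back over the original file. A sketch with made-up element names and file name:

from lxml import etree

tree = etree.parse("data.xml")
for node in tree.xpath("//item[@id='42']/price"):
    node.text = "9.99"
tree.write("data.xml", xml_declaration=True, encoding="utf-8")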
Q: what is the next code mean, the 'lambda request' and the '**kwargs: {}',i have never see this def validate(request, *args, **kwargs): form_class = kwargs.pop('form_class') extra_args_func = kwargs.pop('callback', lambda request, *args, **kwargs: {}) thanks a={'a':'aaa','b':'bbb'} b=a.pop('a',lambda x,y:x) print a i know dict.pop('a'),but i don't know dict.pop('a',func) what is the use of 'func‘ in here A: The expression: lambda request, *args, **kwargs: {} builds an anonymous function which must be called with at least one argument (which, if named, must be named request) and can be called with any number of positional and named arguments: when called, it ignores all the arguments and returns a new empty dictionary. The code snippet: a={'a':'aaa','b':'bbb'} b=a.pop('a',lambda x,y:x) print a prints {'b': 'bbb'} (which is also the value a stays bound this after the snippet executes) and bings the string 'aaa' to name b. The second argument to the .pop method plays no role in this case: it's only used when the first argument is not found as a key in the dictionary on which the method is called (in which case, .pop's second argument would be the "default value" returned by the call to .pop, without any alteration to the dictionary). In this case, 'a' is indeed found at that time as a key in dictionary a, and therefore it's removed from that dictionary, and the corresponding value, string 'aaa', is returned by the call to .pop (whence it gets then bound to name b). A: That is a "lambda function". It's a short way of expressing a function that's declared inline. It looks like this: lambda arg1,arg2,...: expression and is the equivalent of a nameless function that would look like this: def some_nameless_function(arg1,arg2,...): return expression So, the code you have there, def validate(request, *args, **kwargs): form_class = kwargs.pop('form_class') extra_args_func = kwargs.pop('callback', lambda request, *args, **kwargs: {}) is equivalent to a function that looks like this: def nameless_function(request, *args, **kwargs): return {} def validate(request, *args, **kwargs): form_class = kwargs.pop('form_class') extra_args_func = kwargs.pop('callback', nameless_function) A: What it's doing is pop off a callback function from the kwargs dict, or if that isn't set, it creates a lambda function that does nothing (returns an empty dict). The request, *args, **kwargs part is presumably the "signature" of the callback function.
What does the following code mean, the 'lambda request' and the '**kwargs: {}'? I have never seen this
def validate(request, *args, **kwargs): form_class = kwargs.pop('form_class') extra_args_func = kwargs.pop('callback', lambda request, *args, **kwargs: {}) Thanks. a={'a':'aaa','b':'bbb'} b=a.pop('a',lambda x,y:x) print a I know dict.pop('a'), but I don't know dict.pop('a',func). What is the use of 'func' here?
[ "The expression:\nlambda request, *args, **kwargs: {}\n\nbuilds an anonymous function which must be called with at least one argument (which, if named, must be named request) and can be called with any number of positional and named arguments: when called, it ignores all the arguments and returns a new empty dictionary.\nThe code snippet:\na={'a':'aaa','b':'bbb'}\nb=a.pop('a',lambda x,y:x)\nprint a\n\nprints {'b': 'bbb'} (which is also the value a stays bound this after the snippet executes) and bings the string 'aaa' to name b. The second argument to the .pop method plays no role in this case: it's only used when the first argument is not found as a key in the dictionary on which the method is called (in which case, .pop's second argument would be the \"default value\" returned by the call to .pop, without any alteration to the dictionary). In this case, 'a' is indeed found at that time as a key in dictionary a, and therefore it's removed from that dictionary, and the corresponding value, string 'aaa', is returned by the call to .pop (whence it gets then bound to name b).\n", "That is a \"lambda function\". It's a short way of expressing a function that's declared inline. It looks like this:\nlambda arg1,arg2,...: expression\n\nand is the equivalent of a nameless function that would look like this:\ndef some_nameless_function(arg1,arg2,...):\n return expression\n\nSo, the code you have there,\ndef validate(request, *args, **kwargs):\n form_class = kwargs.pop('form_class')\n extra_args_func = kwargs.pop('callback', lambda request, *args, **kwargs: {})\n\nis equivalent to a function that looks like this:\ndef nameless_function(request, *args, **kwargs):\n return {}\n\ndef validate(request, *args, **kwargs):\n form_class = kwargs.pop('form_class')\n extra_args_func = kwargs.pop('callback', nameless_function)\n\n", "What it's doing is pop off a callback function from the kwargs dict, or if that isn't set, it creates a lambda function that does nothing (returns an empty dict). The request, *args, **kwargs part is presumably the \"signature\" of the callback function.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001964750_python.txt
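A small demonstration of the two behaviours of dict.pop discussed above: when the key is present the default is ignored, and when the key is missing the default (here a function object) is returned untouched; the 'fake request' argument is only there to satisfy the lambda's signature:

default_cb = lambda request, *args, **kwargs: {}

kwargs = {'form_class': 'SomeForm'}
print kwargs.pop('form_class')            # 'SomeForm' - key present, default ignored
cb = kwargs.pop('callback', default_cb)   # key missing, so the default lambda comes back
print cb('fake request')                  # {} - it ignores its arguments and returns an empty dict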
Q: Creating a PyBuffer from a C struct EDIT: Upon re-reading my original question I realized very quickly that it was very poorly worded, ambiguous, and too confusing to ever get a decent answer. That's what I get for rushing out a question at the end of my lunch break. Hopefully this will be clearer: I am trying to expose a simple C structure to Python (3.x) as a PyBuffer so I can retrieve a MemoryView from it. The structure I want to expose is similar to this: struct ImageBuffer { void* bytes; int row_count; int bytes_per_row; }; and it is my desire to allow the script writer to access the data like so: img_buffer = img.get_buffer() img_buffer[1::4] = 255 # Set every Red component to full intensity Unfortunately the existing documentation about the C API for these structures is pretty sparse, self contradictory in places, and outright wrong in others (documented function signatures do not match those in the headers, etc.) As such I don't have a very good idea about how to best expose this. Also, I would like to avoid including third party libs to achieve functionality that should be part of the core libs, but it feels to me like the PyBuffer functionality is still fairly immature, and perhaps something like NumPy would be a better choice. Does anyone have any advice on this? A: The set of methods to implement so that your extension type supports the buffer protocol is described here: http://docs.python.org/3.1/c-api/typeobj.html#buffer-object-structures I recognize that the documentation is pretty rough, so the best advice I can give is to start from an existing implementation of the buffer API by a C type, for example bytesobject.c or bytearrayobject.c in the official Python source code. However, please note that the buffer protocol doesn't give access to high-level notations such as the one you quoted: img_buffer[1::4] = 255 won't work on a memoryview object. Edit: to be more precise, memoryviews support some kinds of slice assignment, but not all of them. Also, they are not "smart" enough to understand that assigning 255 to a slice actually means that you want the byte value to be repeated. Example: >>> b = bytearray(b"abcd") >>> m = memoryview(b) >>> m[0:2] = b"xy" >>> b bytearray(b'xycd') >>> m[0:2] = 255 Traceback (most recent call last): File "", line 1, in TypeError: 'int' does not support the buffer interface >>> m[0:2] = b"x" Traceback (most recent call last): File "", line 1, in ValueError: cannot modify size of memoryview object >>> m[0::2] = b"xy" Traceback (most recent call last): File "", line 1, in NotImplementedError
Creating a PyBuffer from a C struct
EDIT: Upon re-reading my original question I realized very quickly that it was very poorly worded, ambiguous, and too confusing to ever get a decent answer. That's what I get for rushing out a question at the end of my lunch break. Hopefully this will be clearer: I am trying to expose a simple C structure to Python (3.x) as a PyBuffer so I can retrieve a MemoryView from it. The structure I want to expose is similar to this: struct ImageBuffer { void* bytes; int row_count; int bytes_per_row; }; and it is my desire to allow the script writer to access the data like so: img_buffer = img.get_buffer() img_buffer[1::4] = 255 # Set every Red component to full intensity Unfortunately the existing documentation about the C API for these structures is pretty sparse, self contradictory in places, and outright wrong in others (documented function signatures do not match those in the headers, etc.) As such I don't have a very good idea about how to best expose this. Also, I would like to avoid including third party libs to achieve functionality that should be part of the core libs, but it feels to me like the PyBuffer functionality is still fairly immature, and perhaps something like NumPy would be a better choice. Does anyone have any advice on this?
[ "The set of methods to implement so that your extension type supports the buffer protocol is described here: http://docs.python.org/3.1/c-api/typeobj.html#buffer-object-structures\nI recognize that the documentation is pretty rough, so the best advice I can give is to start from an existing implementation of the buffer API by a C type, for example bytesobject.c or bytearrayobject.c in the official Python source code.\nHowever, please note that the buffer protocol doesn't give access to high-level notations such as the one you quoted: img_buffer[1::4] = 255 won't work on a memoryview object.\nEdit: to be more precise, memoryviews support some kinds of slice assignment, but not all of them. Also, they are not \"smart\" enough to understand that assigning 255 to a slice actually means that you want the byte value to be repeated. Example:\n\n>>> b = bytearray(b\"abcd\")\n>>> m = memoryview(b)\n>>> m[0:2] = b\"xy\"\n>>> b\nbytearray(b'xycd')\n>>> m[0:2] = 255\nTraceback (most recent call last):\n File \"\", line 1, in \nTypeError: 'int' does not support the buffer interface\n>>> m[0:2] = b\"x\"\nTraceback (most recent call last):\n File \"\", line 1, in \nValueError: cannot modify size of memoryview object\n>>> m[0::2] = b\"xy\"\nTraceback (most recent call last):\n File \"\", line 1, in \nNotImplementedError\n\n" ]
[ 1 ]
[]
[]
[ "pybuffer", "python", "python_3.x" ]
stackoverflow_0001710820_pybuffer_python_python_3.x.txt
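The entry above concerns the C side of the buffer protocol, but the Python-visible goal can be sketched without writing an extension by wrapping a block of ctypes memory in a memoryview on a reasonably recent Python 3. The sizes and the "offset 1 is the red component" convention are assumptions taken from the question, purely for illustration.

import ctypes

row_count, bytes_per_row = 2, 8                          # illustrative sizes
buf = (ctypes.c_ubyte * (row_count * bytes_per_row))()   # stand-in for the C-side pixel memory
view = memoryview(buf)                                   # what img.get_buffer() would hand out

# Set every 4th byte starting at offset 1 to 255, as in the question's example.
for i in range(1, len(view), 4):
    view[i] = 255

print(bytearray(buf))   # bytearray(b'\x00\xff\x00\x00\x00\xff\x00\x00...')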
Q: Python C-API Object Initialisation What is the correct way to initialise a python object into already existing memory (like the inplace new in c++) I tried this code however it causes an access violation with a debug build because the _ob_prev and _ob_next are not set.. //PyVarObject *mem; -previously allocated memory Py_INCREF(type); //couldnt get PyObject_HEAD_INIT or PyVarObject_HEAD_INIT to compile //however the macros resolve to this PyVarObject init = {{_PyObject_EXTRA_INIT 1, ((_typeobject*)type)}, 0}; *mem = init; //...other init code for type... The crash occures on line 1519 in object.c void _Py_ForgetReference(register PyObject *op) { #ifdef SLOW_UNREF_CHECK register PyObject *p; #endif if (op->ob_refcnt < 0) Py_FatalError("UNREF negative refcnt"); if (op == &refchain || op->_ob_prev->_ob_next != op || op->_ob_next->_ob_prev != op) { //----HERE----// fprintf(stderr, "* ob\n"); _PyObject_Dump(op); fprintf(stderr, "* op->_ob_prev->_ob_next\n"); _PyObject_Dump(op->_ob_prev->_ob_next); fprintf(stderr, "* op->_ob_next->_ob_prev\n"); _PyObject_Dump(op->_ob_next->_ob_prev); Py_FatalError("UNREF invalid object"); } #ifdef SLOW_UNREF_CHECK for (p = refchain._ob_next; p != &refchain; p = p->_ob_next) { if (p == op) break; } if (p == &refchain) /* Not found */ Py_FatalError("UNREF unknown object"); #endif op->_ob_next->_ob_prev = op->_ob_prev; op->_ob_prev->_ob_next = op->_ob_next; op->_ob_next = op->_ob_prev = NULL; _Py_INC_TPFREES(op); } A: What are doing is pretty horrible. Unless this code path is really performance critical I'd advise you to allocate your objects on the heap as is normally done. A: Taking your question at face value, you have a few options. The quick and dirty method is to put an extra Py_INCREF into your initialisation code. Assuming you have no refcount bugs, the refcount will never return to zero, deallocation code will never be called, and there should be no crash. (In fact this may be the way that the statically allocated builtin type objects are managed!) You could write a custom allocator and deallocator for your type that manages memory in your preferred manner. You could in fact write a custom allocator and deallocator for the whole python interpreter. You could manage your python objects in the normal way but store pointers in them to data in the memory you are managing yourself. Looking at the bigger picture... why are you trying to do this? Also, your comments that I tried this code however it causes an access violation with a debug build because the _ob_prev and _ob_next are not set.. and //couldnt get PyObject_HEAD_INIT or PyVarObject_HEAD_INIT to compile //however the macros resolve to this are worrying! Have you successfully defined a type which uses standard memory management, before moving on to more advanced stuff? A: You can look at Py_NoneStruct for how this is done. Your code looks basically right. It is a refcounting bug. Statically allocated objects can never be freed, so _Py_ForgetReference should never be called. If you want to be able to free them you must use use a custom allocator instead of static initialization.
Python C-API Object Initialisation
What is the correct way to initialise a python object into already existing memory (like the inplace new in c++) I tried this code however it causes an access violation with a debug build because the _ob_prev and _ob_next are not set.. //PyVarObject *mem; -previously allocated memory Py_INCREF(type); //couldnt get PyObject_HEAD_INIT or PyVarObject_HEAD_INIT to compile //however the macros resolve to this PyVarObject init = {{_PyObject_EXTRA_INIT 1, ((_typeobject*)type)}, 0}; *mem = init; //...other init code for type... The crash occures on line 1519 in object.c void _Py_ForgetReference(register PyObject *op) { #ifdef SLOW_UNREF_CHECK register PyObject *p; #endif if (op->ob_refcnt < 0) Py_FatalError("UNREF negative refcnt"); if (op == &refchain || op->_ob_prev->_ob_next != op || op->_ob_next->_ob_prev != op) { //----HERE----// fprintf(stderr, "* ob\n"); _PyObject_Dump(op); fprintf(stderr, "* op->_ob_prev->_ob_next\n"); _PyObject_Dump(op->_ob_prev->_ob_next); fprintf(stderr, "* op->_ob_next->_ob_prev\n"); _PyObject_Dump(op->_ob_next->_ob_prev); Py_FatalError("UNREF invalid object"); } #ifdef SLOW_UNREF_CHECK for (p = refchain._ob_next; p != &refchain; p = p->_ob_next) { if (p == op) break; } if (p == &refchain) /* Not found */ Py_FatalError("UNREF unknown object"); #endif op->_ob_next->_ob_prev = op->_ob_prev; op->_ob_prev->_ob_next = op->_ob_next; op->_ob_next = op->_ob_prev = NULL; _Py_INC_TPFREES(op); }
[ "What are doing is pretty horrible. Unless this code path is really performance critical I'd advise you to allocate your objects on the heap as is normally done.\n", "Taking your question at face value, you have a few options. The quick and dirty method is to put an extra Py_INCREF into your initialisation code. Assuming you have no refcount bugs, the refcount will never return to zero, deallocation code will never be called, and there should be no crash. (In fact this may be the way that the statically allocated builtin type objects are managed!)\nYou could write a custom allocator and deallocator for your type that manages memory in your preferred manner. You could in fact write a custom allocator and deallocator for the whole python interpreter.\nYou could manage your python objects in the normal way but store pointers in them to data in the memory you are managing yourself.\nLooking at the bigger picture... why are you trying to do this?\nAlso, your comments that\nI tried this code however it causes an access violation with a debug build because the _ob_prev and _ob_next are not set..\nand \n//couldnt get PyObject_HEAD_INIT or PyVarObject_HEAD_INIT to compile\n//however the macros resolve to this\nare worrying! Have you successfully defined a type which uses standard memory management, before moving on to more advanced stuff?\n", "You can look at Py_NoneStruct for how this is done. Your code looks basically right.\nIt is a refcounting bug. Statically allocated objects can never be freed, so _Py_ForgetReference should never be called.\nIf you want to be able to free them you must use use a custom allocator instead of static initialization.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "c", "python", "python_3.x", "python_c_api" ]
stackoverflow_0000581281_c_python_python_3.x_python_c_api.txt
Q: Unicode handling in ReportLab I am trying to use ReportLab with Unicode characters, but it is not working. I tried tracing through the code until I reached the following line: class TTFont: # ... def splitString(self, text, doc, encoding='utf-8'): # ... cur.append(n & 0xFF) # <-- here is the problem! # ... (This code can be found in ReportLab's repository, in the file pdfbase/ttfonts.py. The code in question is in line 1059.) Why is n's value being manipulated? In the line shown above, n contains the code point of the character being processed (e.g. 65 for 'A', 97 for 'a', or 1588 for Arabic sheen 'ش'). cur is a list that is being filled with the characters to be sent to the final output (AFAIU). Before that line, everything was (apparently) working fine, but in this line, the value of n was manipulated, apparently reducing it to the extended ASCII range! This causes non-ASCII, Unicode characters to lose their value. I cannot understand how this statement is useful, or why it is necessary! So my question is, why is n's value being manipulated here, and how should I proceed about fixing this issue? Edit: In response to the comment regarding my code snippet, here is an example that causes this error: my_doctemplate.build([Paragraph(bulletText = None, encoding = 'utf8', caseSensitive = 1, debug = 0, text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', frags = [ParaFrag(fontName = 'DejaVuSansMono-BoldOblique', text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', sub = 0, rise = 0, greek = 0, link = None, italic = 0, strike = 0, fontSize = 12.0, textColor = Color(0,0,0), super = 0, underline = 0, bold = 0)])]) In PDFTextObject._textOut, _formatText is called, which identifies the font as _dynamicFont, and accordingly calls font.splitString, which is causing the error described above. A: What do you mean, "not working"? You have misquoted the reportlab source code. What it is actually doing is that the lower and upper byte of each 16-bit unicode character are coded separately (the upper byte is only written out when it changes, which I assume is a PDF-specific optimization to make documents smaller). Please explain exactly what the problem is, not what you think what the underlying reason is. Chances are the characters you want to display simply don't exist in the selected font ('DejaVuSansMono-BoldOblique'). A: I'm pretty sure you'd need to change 0xFF to 0xFFFF to use 4-byte unicode characters, as ~unutbu suggested, hence using four bytes instead of two.
Unicode handling in ReportLab
I am trying to use ReportLab with Unicode characters, but it is not working. I tried tracing through the code until I reached the following line: class TTFont: # ... def splitString(self, text, doc, encoding='utf-8'): # ... cur.append(n & 0xFF) # <-- here is the problem! # ... (This code can be found in ReportLab's repository, in the file pdfbase/ttfonts.py. The code in question is in line 1059.) Why is n's value being manipulated? In the line shown above, n contains the code point of the character being processed (e.g. 65 for 'A', 97 for 'a', or 1588 for Arabic sheen 'ش'). cur is a list that is being filled with the characters to be sent to the final output (AFAIU). Before that line, everything was (apparently) working fine, but in this line, the value of n was manipulated, apparently reducing it to the extended ASCII range! This causes non-ASCII, Unicode characters to lose their value. I cannot understand how this statement is useful, or why it is necessary! So my question is, why is n's value being manipulated here, and how should I proceed about fixing this issue? Edit: In response to the comment regarding my code snippet, here is an example that causes this error: my_doctemplate.build([Paragraph(bulletText = None, encoding = 'utf8', caseSensitive = 1, debug = 0, text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', frags = [ParaFrag(fontName = 'DejaVuSansMono-BoldOblique', text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', sub = 0, rise = 0, greek = 0, link = None, italic = 0, strike = 0, fontSize = 12.0, textColor = Color(0,0,0), super = 0, underline = 0, bold = 0)])]) In PDFTextObject._textOut, _formatText is called, which identifies the font as _dynamicFont, and accordingly calls font.splitString, which is causing the error described above.
[ "What do you mean, \"not working\"? You have misquoted the reportlab source code. What it is actually doing is that the lower and upper byte of each 16-bit unicode character are coded separately (the upper byte is only written out when it changes, which I assume is a PDF-specific optimization to make documents smaller).\nPlease explain exactly what the problem is, not what you think what the underlying reason is. Chances are the characters you want to display simply don't exist in the selected font ('DejaVuSansMono-BoldOblique').\n", "I'm pretty sure you'd need to change 0xFF to 0xFFFF to use 4-byte unicode characters, as ~unutbu suggested, hence using four bytes instead of two.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "reportlab", "unicode" ]
stackoverflow_0001594470_python_reportlab_unicode.txt
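To make the byte-splitting described in the first answer above concrete, here is a tiny plain-Python sketch (not ReportLab code) showing how a 16-bit code point such as the Arabic sheen from the question separates into a high and a low byte; the n & 0xFF expression keeps only the low byte because, as the answer notes, the upper byte is coded separately.

for ch in u'Aa\u0634':            # 'A', 'a', Arabic sheen (U+0634)
    n = ord(ch)
    high, low = n >> 8, n & 0xFF
    print('U+%04X  high byte 0x%02X  low byte 0x%02X' % (n, high, low))

# U+0041  high byte 0x00  low byte 0x41
# U+0061  high byte 0x00  low byte 0x61
# U+0634  high byte 0x06  low byte 0x34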
Q: Random selection ideas I am thinking of giving one or more set of introductory lectures to introduce people in my department with Python and related scientific tools as I did once in the last summer py4science @ UND. To make the meetings more interesting and catch more attention I had given two Python learning materials to one of the lucky audience via the shown ways: 1-) Get names and assign with a number and pick the first one as the winner from the assigned dictionary. import random lucky = {1:'Lucky1',...} random.choice(lucky.keys()) 2-) Similar to the previous one but pop items from the dictionary, thus the last one becomes the luckiest. import random lucky = {1:'Lucky1',...} lucky.pop(random.choice(lucky.keys())) Right now, I am looking at least for one more idea that will have randomness inherently and demonstrate a useful language feature helping me to make a funnier lottery time at the end of one of the sessions. A: Cards are also a source of popular (and familiar!) games of chance. Perhaps you could show how easy it is to generate, shuffle and sample cards: #!/usr/bin/env python import random import itertools numname={1:'Ace',11:'Jack',12:'Queen',13:'King'} suits=['Clubs','Diamonds','Hearts','Spades'] numbers=range(1,14) cards=['%s-%s'%(numname.get(number,number),suit) for number,suit in itertools.product(numbers,suits)] print(cards) random.shuffle(cards) print(cards) hand=random.sample(cards,5) print(hand) A: One of the cutest uses of random numbers for mid-sized crowds is finding cycles. I will describe the physical method, and then some explorations. The Python code is fairly trivial. Start with your group of about 100 people with their names on pieces of paper in a bowl. Everyone descends on the bowl and takes a random piece of paper. Each person goes to the person with that name. This leads to groups clumping together in various sizes. Not always what people expect. For example, if Alice picks Bob, Bob picks Charlie, and Charlie picks Alice, then these three people will end up in their own clump. For some groups, have people join hands with their matches to see everyone being pulled this way and that. Also to see how the matches create chains or clumps. Now write software to watch the number of clumps. Do the match on clumps, asking, for example, "how often is the biggest clump less than half the people"? For example, for N students, an average of 1/N will draw their own names. Do you need code? A: Computing Pi is always fun ;-) import random def approx_pi( n ): # n random (x,y) pairs (as a generator) data = ( (random.random(),random.random()) for _ in range(n) ) return 4.0*sum( 1 for x,y in data if x**2 + y**2 < 1 )/n print approx_pi(100000)
Random selection ideas
I am thinking of giving one or more set of introductory lectures to introduce people in my department with Python and related scientific tools as I did once in the last summer py4science @ UND. To make the meetings more interesting and catch more attention I had given two Python learning materials to one of the lucky audience via the shown ways: 1-) Get names and assign with a number and pick the first one as the winner from the assigned dictionary. import random lucky = {1:'Lucky1',...} random.choice(lucky.keys()) 2-) Similar to the previous one but pop items from the dictionary, thus the last one becomes the luckiest. import random lucky = {1:'Lucky1',...} lucky.pop(random.choice(lucky.keys())) Right now, I am looking at least for one more idea that will have randomness inherently and demonstrate a useful language feature helping me to make a funnier lottery time at the end of one of the sessions.
[ "Cards are also a source of popular (and familiar!) games of chance.\nPerhaps you could show how easy it is to generate, shuffle and sample cards:\n#!/usr/bin/env python\nimport random\nimport itertools\n\nnumname={1:'Ace',11:'Jack',12:'Queen',13:'King'}\nsuits=['Clubs','Diamonds','Hearts','Spades']\nnumbers=range(1,14)\ncards=['%s-%s'%(numname.get(number,number),suit)\n for number,suit in itertools.product(numbers,suits)]\nprint(cards)\nrandom.shuffle(cards)\nprint(cards)\nhand=random.sample(cards,5)\nprint(hand)\n\n", "One of the cutest uses of random numbers for mid-sized crowds is finding cycles. I will describe the physical method, and then some explorations. The Python code is fairly trivial.\nStart with your group of about 100 people with their names on pieces of paper in a bowl. Everyone descends on the bowl and takes a random piece of paper. Each person goes to the person with that name. This leads to groups clumping together in various sizes. Not always what people expect.\nFor example, if Alice picks Bob, Bob picks Charlie, and Charlie picks Alice, then these three people will end up in their own clump. For some groups, have people join hands with their matches to see everyone being pulled this way and that. Also to see how the matches create chains or clumps.\nNow write software to watch the number of clumps. Do the match on clumps, asking, for example, \"how often is the biggest clump less than half the people\"? For example, for N students, an average of 1/N will draw their own names.\nDo you need code?\n", "Computing Pi is always fun ;-) \nimport random\n\ndef approx_pi( n ):\n # n random (x,y) pairs (as a generator)\n data = ( (random.random(),random.random()) for _ in range(n) )\n return 4.0*sum( 1 for x,y in data if x**2 + y**2 < 1 )/n\n\nprint approx_pi(100000)\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001964366_python_random.txt
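The second answer above stops at "Do you need code?"; here is one possible sketch of that cycles demonstration, drawing a random permutation of names and grouping the picks into clumps. The names are placeholders.

import random

def cycles(names):
    # Everyone draws one name from the bowl: a random permutation of the group.
    pick = dict(zip(names, random.sample(names, len(names))))
    seen, clumps = set(), []
    for name in names:
        if name in seen:
            continue
        clump = []
        while name not in seen:      # follow the picks until the cycle closes
            seen.add(name)
            clump.append(name)
            name = pick[name]
        clumps.append(clump)
    return clumps

names = ['Ann', 'Bob', 'Cid', 'Dee', 'Eve', 'Flo']
for clump in cycles(names):
    print(clump)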
Q: Error when compiling simple python program This script won't compile. I wanted to made a simple 21-style game for practice but I get an error: X@X:~/Desktop$ python 21.py File "21.py", line 18 int(ptotal) = ptotal + newcard SyntaxError: can't assign to function call Here's the code. Can anyone please help me? I'm obviously a beginner and the code is pretty sloppy. A: Not sure where you got this syntax: int(cone) == random.randrange(1, 11) I think you mean this: cone = random.randrange(1, 11) This is also an (interesting) invention: while hit is not "No" or "no" or "n": You'll need: while hit not in ["No", "no", "n"]:
Error when compiling simple python program
This script won't compile. I wanted to made a simple 21-style game for practice but I get an error: X@X:~/Desktop$ python 21.py File "21.py", line 18 int(ptotal) = ptotal + newcard SyntaxError: can't assign to function call Here's the code. Can anyone please help me? I'm obviously a beginner and the code is pretty sloppy.
[ "Not sure where you got this syntax:\nint(cone) == random.randrange(1, 11)\n\nI think you mean this:\ncone = random.randrange(1, 11)\n\nThis is also an (interesting) invention:\nwhile hit is not \"No\" or \"no\" or \"n\":\n\nYou'll need:\nwhile hit not in [\"No\", \"no\", \"n\"]:\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0001965031_python.txt
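The original 21.py is not included in the question, so this is only a hedged sketch of the two fixes the answer points out: plain assignment instead of == (and instead of assigning to int(...)), and a membership test instead of the chained or.

import random

ptotal = random.randrange(1, 11) + random.randrange(1, 11)   # two starting cards
hit = raw_input('Hit? ')               # use input() on Python 3

while hit.lower() not in ('no', 'n'):
    newcard = random.randrange(1, 11)
    ptotal = ptotal + newcard          # plain assignment, not int(ptotal) = ...
    print('Total: %d' % ptotal)
    if ptotal > 21:
        print('Bust!')
        break
    hit = raw_input('Hit? ')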
Q: Python epoll.register threadsafe? Does anyone know if I can call epoll.register from another thread safely? Here is what I am imagining: Thread 1: epoll.poll() Thread 2: adding some fd to the same epoll object with epoll.register http://docs.python.org/library/select.html A: I changed my answer after you changed the question. This will not be "thread safe" in that each thread will impact the same epoll object. Registering a new fd to the epoll object will still do it to that object. There's no reason for that particular object to have different states across separate threads, because in that scenario one should made one object for each thread. So, short answer: Your setup will work. In fact, the python stdlib http.server package uses that same method (just using poll instead of epoll). It creates a polling object in one thread, and uses a separate thread to poll it.
Python epoll.register threadsafe?
Does anyone know if I can call epoll.register from another thread safely? Here is what I am imagining: Thread 1: epoll.poll() Thread 2: adding some fd to the same epoll object with epoll.register http://docs.python.org/library/select.html
[ "I changed my answer after you changed the question.\nThis will not be \"thread safe\" in that each thread will impact the same epoll object. Registering a new fd to the epoll object will still do it to that object. \nThere's no reason for that particular object to have different states across separate threads, because in that scenario one should made one object for each thread.\nSo, short answer: Your setup will work. \nIn fact, the python stdlib http.server package uses that same method (just using poll instead of epoll). It creates a polling object in one thread, and uses a separate thread to poll it.\n" ]
[ 1 ]
[]
[]
[ "epoll", "multithreading", "python" ]
stackoverflow_0001965092_epoll_multithreading_python.txt
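A small sketch of the setup described above: one thread blocks in epoll.poll() while the main thread registers a new fd (one end of a pipe) on the same epoll object and then writes to it. Linux-only, assumes a Python with select.epoll (2.6 or later); the one-second timeout is just there so the example terminates cleanly.

import os, select, threading, time

ep = select.epoll()

def poller():
    while True:
        events = ep.poll(1)            # 1-second timeout so the loop can re-check
        for fd, mask in events:
            print('fd %d is readable' % fd)
            return

t = threading.Thread(target=poller)
t.start()

r, w = os.pipe()
ep.register(r, select.EPOLLIN)         # registered from the "other" thread
time.sleep(0.1)
os.write(w, b'x')                      # wakes the poller
t.join()
ep.close()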
Q: why my code run wrong ,it is about '@property' I used python 2.5, I want to know how can change the next code when the Platform is python2.5 or python2.6 class C(object): def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x a=C() print a.x#error thanks thanks ,alex,i think property must be 3 arguments in your example but ,i have seen a code which with 'property' only use 1 argumennt ,why,can it work class SortingMiddleware(object): def process_request(self, request): request.__class__.field = property(get_field) request.__class__.direction = property(get_direction) A: Python 2.5 does not support the .setter and .deleter sub-decorators of property; they were introduced in Python 2.6. To work on both releases, you can, instead, code something like: class C(object): def __init__(self): self._x = None def _get_x(self): """I'm the 'x' property.""" return self._x def _set_x(self, value): self._x = value def _del_x(self): del self._x x = property(_get_x, _set_x, _del_x)
why my code run wrong ,it is about '@property'
I used python 2.5, I want to know how can change the next code when the Platform is python2.5 or python2.6 class C(object): def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x a=C() print a.x#error thanks thanks ,alex,i think property must be 3 arguments in your example but ,i have seen a code which with 'property' only use 1 argumennt ,why,can it work class SortingMiddleware(object): def process_request(self, request): request.__class__.field = property(get_field) request.__class__.direction = property(get_direction)
[ "Python 2.5 does not support the .setter and .deleter sub-decorators of property; they were introduced in Python 2.6.\nTo work on both releases, you can, instead, code something like:\nclass C(object):\n def __init__(self):\n self._x = None\n\n def _get_x(self):\n \"\"\"I'm the 'x' property.\"\"\"\n return self._x\n def _set_x(self, value):\n self._x = value\n def _del_x(self):\n del self._x\n x = property(_get_x, _set_x, _del_x)\n\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0001965117_python.txt
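A quick usage check for the property(...) form shown in the answer, which behaves the same on 2.5 and 2.6. It also covers the asker's follow-up: property(fget) with a single argument simply builds a read-only property (no setter or deleter), which is what the SortingMiddleware snippet relies on.

class C(object):
    def __init__(self):
        self._x = None
    def _get_x(self):
        return self._x
    def _set_x(self, value):
        self._x = value
    def _del_x(self):
        del self._x
    x = property(_get_x, _set_x, _del_x)   # read/write/delete property

c = C()
c.x = 'hello'        # goes through _set_x
print(c.x)           # 'hello', via _get_x
del c.x              # via _del_x

class R(object):
    _y = 42
    def _get_y(self):
        return self._y
    y = property(_get_y)                   # one argument: read-only property

print(R().y)         # 42
# R().y = 0  would raise AttributeError: can't set attribute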
Q: Python TypeError when using json.dumps When I do json.dumps with a dictionary that maps strings to a list of unicodes, python raises a type error. Why does that not work? A: Works fine for me in Python 2.6 (and identically in 3.1, without the u prefix on the value): >>> import json >>> d={'a': u'fél'} >>> json.dumps(d) '{"a": "f\\u00e9l"}' Can you please reproduce and post (by editing your answer, so you can format it properly) the tiniest bit of code that gives you the problem?
Python TypeError when using json.dumps
When I do json.dumps with a dictionary that maps strings to a list of unicodes, python raises a type error. Why does that not work?
[ "Works fine for me in Python 2.6 (and identically in 3.1, without the u prefix on the value):\n>>> import json\n>>> d={'a': u'fél'}\n>>> json.dumps(d)\n'{\"a\": \"f\\\\u00e9l\"}'\n\nCan you please reproduce and post (by editing your answer, so you can format it properly) the tiniest bit of code that gives you the problem?\n" ]
[ 2 ]
[]
[]
[ "json", "python" ]
stackoverflow_0001965021_json_python.txt
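Following up on the answer's request for a reproduction, a sketch of the data shape the asker describes: a dict of strings to lists of unicode values serializes fine on 2.6, so the TypeError most likely comes from some non-serializable value hiding inside the structure; the set below is used purely as an illustration of that failure mode.

import json

ok = {'names': [u'f\xe9l', u'sp\xe4m']}
print(json.dumps(ok))             # {"names": ["f\u00e9l", "sp\u00e4m"]}

bad = {'names': set([u'f\xe9l'])}
try:
    json.dumps(bad)
except TypeError as e:
    print('TypeError: %s' % e)    # sets are not JSON serializable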
Q: encode Netbios name python I would like to encode "ITSATEST" to its NetBIOS name value in Python; the occurrence table and explanation are here: http://support.microsoft.com/kb/194203 I don't know how this could be done easily in Python, can someone give me a hand? Thanks! A: You can map each nibble of the original string, taking its numerical value and offsetting from 'A': encoded_name = ''.join([chr((ord(c)>>4) + ord('A')) + chr((ord(c)&0xF) + ord('A')) for c in original_name]) A: Take a look at RFC 1001, which defines the encoding. In section 14.1 "FIRST LEVEL ENCODING" is the algorithm for the encoding, which you could implement directly in Python.
encode Netbios name python
I would like to encode "ITSATEST" to its NetBIOS name value in Python; the occurrence table and explanation are here: http://support.microsoft.com/kb/194203 I don't know how this could be done easily in Python, can someone give me a hand? Thanks!
[ "You can map each nibble of the original string, taking its numerical value and offsetting from 'A':\nencoded_name = ''.join([chr((ord(c)>>4) + ord('A'))\n + chr((ord(c)&0xF) + ord('A')) for c in original_name])\n\n", "Take a look at RFC 1001, which defines the encoding. In section 14.1 \"FIRST LEVEL ENCODING\" is the algorithm for the encoding, which you could implement directly in Python.\n" ]
[ 2, 1 ]
[]
[]
[ "netbios", "python" ]
stackoverflow_0001965065_netbios_python.txt
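Expanding the first answer above into a runnable sketch, with the reverse mapping included. Note that the first-level encoding described in RFC 1001 is applied to the full 16-byte NetBIOS name, so the plain name is typically space-padded first; the padding bytes encode as 'CA' pairs.

def netbios_encode(name, pad_to=16):
    name = name.ljust(pad_to)            # NetBIOS names are space-padded to 16 bytes
    return ''.join(chr((ord(c) >> 4) + ord('A')) +
                   chr((ord(c) & 0xF) + ord('A')) for c in name)

def netbios_decode(encoded):
    chars = [chr(((ord(encoded[i]) - ord('A')) << 4) |
                 (ord(encoded[i + 1]) - ord('A')))
             for i in range(0, len(encoded), 2)]
    return ''.join(chars).rstrip()

enc = netbios_encode('ITSATEST')
print(enc)                   # EJFEFDEBFEEFFDFE followed by eight CA pairs for the padding
print(netbios_decode(enc))   # ITSATEST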
Q: Why is getattr() not working like I think it should? I think this code should print 'sss' the next is my code: class foo: def __init__(self): self.a = "a" def __getattr__(self,x,defalut): if x in self: return x else:return defalut a=foo() print getattr(a,'b','sss') i know the __getattr__ must be 2 argument,but i want to get a default attribute if the attribute is no being. how can i get it, thanks and i found if defined __setattr__,my next code is also can't run class foo: def __init__(self): self.a={} def __setattr__(self,name,value): self.a[name]=value a=foo()#error ,why hi alex, i changed your example: class foo(object): def __init__(self): self.a = {'a': 'boh'} def __getattr__(self, x): if x in self.a: return self.a[x] raise AttributeError a=foo() print getattr(a,'a','sss') it print {'a': 'boh'},not 'boh' i think it will print self.a not self.a['a'], This is obviously not want to see why ,and Is there any way to avoid it A: Your problem number one: you're defining an old-style class (we know you're on Python 2.something, even though you don't tell us, because you're using print as a keyword;-). In Python 2: class foo: means you're defining an old-style, aka legacy, class, whose behavior can be rather quirky at times. Never do that -- there's no good reason! The old-style classes exist only for compatibility with old legacy code that relies on their quirks (and were finally abolished in Python 3). Use new style classes instead: class foo(object): and then the check if x in self: will not cause a recursive __getattr__ call. It will however cause a failure anyway, because your class does not define a __contains__ method and therefore you cannot check if x is contained in an instance of that class. If what you're trying to do is whether x is defined in the instance dict of self, don't bother: __getattr__ doesn't even get called in that case -- it's only called when the attribute is not otherwise found in self. To support three-arguments calls to the getattr built-in, just raise AttributeError in your __getattr__ method if necessary (just as would happen if you had no __getattr__ method at all), and the built-in will do its job (it's the built-in's job to intercept such cases and return the default if provided). That's the reason one never ever calls special methods such as __getattr__ directly but rather uses built-ins and operators which internally call them -- the built-ins and operators provide substantial added value. So to give an example which makes somewhat sense: class foo(object): def __init__(self): self.blah = {'a': 'boh'} def __getattr__(self, x): if x in self.blah: return self.blah[x] raise AttributeError a=foo() print getattr(a,'b','sss') This prints sss, as desired. If you add a __setattr__ method, that one intercepts every attempt to set attributes on self -- including self.blah = whatever. So -- when you need to bypass the very __setattr__ you're defining -- you must use a different approach. For example: class foo(object): def __init__(self): self.__dict__['blah'] = {} def __setattr__(self, name, value): self.blah[name] = value def __getattr__(self, x): if x in self.blah: return self.blah[x] raise AttributeError a=foo() print getattr(a,'b','sss') This also prints sss. 
Instead of self.__dict__['blah'] = {} you could also use object.__setattr__(self, 'blah', {}) Such "upcalls to the superclass's implementation" (which you could also obtain via the super built-in) are one of the rare exceptions to the rules "don't call special methods directly, call the built-in or use the operator instead" -- here, you want to specifically bypass the normal behavior, so the explicit special-method call is a possibility. A: You are confusing the getattr built-in function, which retrieves some attribute binding of an object dynamically (by name), at runtime, and the __getattr__ method, which is invoked when you access some missing attribute of an object. You can't ask if x in self: from within __getattr__, because the in operator will cause __getattr__ to be invoked, leading to infinite recursion. If you simply want to have undefined attributes all be defined as some value, then def __getattr__(self, ignored): return "Bob Dobbs"
Why is getattr() not working like I think it should? I think this code should print 'sss'
the next is my code: class foo: def __init__(self): self.a = "a" def __getattr__(self,x,defalut): if x in self: return x else:return defalut a=foo() print getattr(a,'b','sss') i know the __getattr__ must be 2 argument,but i want to get a default attribute if the attribute is no being. how can i get it, thanks and i found if defined __setattr__,my next code is also can't run class foo: def __init__(self): self.a={} def __setattr__(self,name,value): self.a[name]=value a=foo()#error ,why hi alex, i changed your example: class foo(object): def __init__(self): self.a = {'a': 'boh'} def __getattr__(self, x): if x in self.a: return self.a[x] raise AttributeError a=foo() print getattr(a,'a','sss') it print {'a': 'boh'},not 'boh' i think it will print self.a not self.a['a'], This is obviously not want to see why ,and Is there any way to avoid it
[ "Your problem number one: you're defining an old-style class (we know you're on Python 2.something, even though you don't tell us, because you're using print as a keyword;-). In Python 2:\nclass foo:\n\nmeans you're defining an old-style, aka legacy, class, whose behavior can be rather quirky at times. Never do that -- there's no good reason! The old-style classes exist only for compatibility with old legacy code that relies on their quirks (and were finally abolished in Python 3). Use new style classes instead:\nclass foo(object):\n\nand then the check if x in self: will not cause a recursive __getattr__ call. It will however cause a failure anyway, because your class does not define a __contains__ method and therefore you cannot check if x is contained in an instance of that class.\nIf what you're trying to do is whether x is defined in the instance dict of self, don't bother: __getattr__ doesn't even get called in that case -- it's only called when the attribute is not otherwise found in self.\nTo support three-arguments calls to the getattr built-in, just raise AttributeError in your __getattr__ method if necessary (just as would happen if you had no __getattr__ method at all), and the built-in will do its job (it's the built-in's job to intercept such cases and return the default if provided). That's the reason one never ever calls special methods such as __getattr__ directly but rather uses built-ins and operators which internally call them -- the built-ins and operators provide substantial added value.\nSo to give an example which makes somewhat sense:\nclass foo(object):\n def __init__(self):\n self.blah = {'a': 'boh'}\n def __getattr__(self, x):\n if x in self.blah:\n return self.blah[x]\n raise AttributeError\n\na=foo()\nprint getattr(a,'b','sss')\n\nThis prints sss, as desired.\nIf you add a __setattr__ method, that one intercepts every attempt to set attributes on self -- including self.blah = whatever. So -- when you need to bypass the very __setattr__ you're defining -- you must use a different approach. For example:\nclass foo(object):\n def __init__(self):\n self.__dict__['blah'] = {}\n def __setattr__(self, name, value):\n self.blah[name] = value\n def __getattr__(self, x):\n if x in self.blah:\n return self.blah[x]\n raise AttributeError\n\na=foo()\nprint getattr(a,'b','sss')\n\nThis also prints sss. Instead of\n self.__dict__['blah'] = {}\n\nyou could also use\n object.__setattr__(self, 'blah', {})\n\nSuch \"upcalls to the superclass's implementation\" (which you could also obtain via the super built-in) are one of the rare exceptions to the rules \"don't call special methods directly, call the built-in or use the operator instead\" -- here, you want to specifically bypass the normal behavior, so the explicit special-method call is a possibility.\n", "You are confusing the getattr built-in function, which retrieves some attribute binding of an object dynamically (by name), at runtime, and the __getattr__ method, which is invoked when you access some missing attribute of an object.\nYou can't ask \nif x in self:\n\nfrom within __getattr__, because the in operator will cause __getattr__ to be invoked, leading to infinite recursion.\nIf you simply want to have undefined attributes all be defined as some value, then\ndef __getattr__(self, ignored):\n return \"Bob Dobbs\"\n\n" ]
[ 5, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001964980_python.txt
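One more tiny check for the entry above. The reason the asker's edited example printed the whole dict is that self.a is a real instance attribute there, so __getattr__ is never consulted for it; and raising AttributeError from __getattr__ is exactly what lets the three-argument getattr() fall back to its default.

class Foo(object):
    def __init__(self):
        self.blah = {'a': 'boh'}
    def __getattr__(self, name):          # only called when normal lookup fails
        try:
            return self.blah[name]
        except KeyError:
            raise AttributeError(name)

f = Foo()
print(f.blah)                  # {'a': 'boh'} -- real attribute, __getattr__ not used
print(getattr(f, 'a', 'sss'))  # 'boh' -- missing attribute, served by __getattr__
print(getattr(f, 'b', 'sss'))  # 'sss' -- __getattr__ raised AttributeError, default used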
Q: Calculate time between time-1 to time-2? enter time-1 // eg 01:12 enter time-2 // eg 18:59 calculate: time-1 to time-2 / 12 // i.e time between 01:12 to 18:59 divided by 12 How can it be done in Python. I'm a beginner so I really have no clue where to start. Edited to add: I don't want a timer. Both time-1 and time-2 are entered by the user manually. Thanks in advance for your help. A: The datetime and timedelta class from the built-in datetime module is what you need. from datetime import datetime # Parse the time strings t1 = datetime.strptime('01:12','%H:%M') t2 = datetime.strptime('18:59','%H:%M') # Do the math, the result is a timedelta object delta = (t2 - t1) / 12 print(delta.seconds) A: Simplest and most direct may be something like: def getime(prom): """Prompt for input, return minutes since midnight""" s = raw_input('Enter time-%s (hh:mm): ' % prom) sh, sm = s.split(':') return int(sm) + 60 * int(sh) time1 = getime('1') time2 = getime('2') diff = time2 - time1 print "Difference: %d hours and %d minutes" % (diff//60, diff%60) E.g., a typical run might be: $ python ti.py Enter time-1 (hh:mm): 01:12 Enter time-2 (hh:mm): 18:59 Difference: 17 hours and 47 minutes A: Here's a timer for timing code execution. Maybe you can use it for what you want. time() returns the current time in seconds and microseconds since 1970-01-01 00:00:00. from time import time t0 = time() # do stuff that takes time print time() - t0 A: Assuming that the user is entering strings like "01:12", you need to convert (as well as validate) those strings into the number of minutes since 00:00 (e.g., "01:12" is 1*60+12, or 72 minutes), then subtract one from the other. You can then convert the difference in minutes back into a string of the form hh:mm.
Calculate time between time-1 to time-2?
enter time-1 // eg 01:12 enter time-2 // eg 18:59 calculate: time-1 to time-2 / 12 // i.e time between 01:12 to 18:59 divided by 12 How can it be done in Python. I'm a beginner so I really have no clue where to start. Edited to add: I don't want a timer. Both time-1 and time-2 are entered by the user manually. Thanks in advance for your help.
[ "The datetime and timedelta class from the built-in datetime module is what you need.\nfrom datetime import datetime\n\n# Parse the time strings\nt1 = datetime.strptime('01:12','%H:%M')\nt2 = datetime.strptime('18:59','%H:%M')\n\n# Do the math, the result is a timedelta object\ndelta = (t2 - t1) / 12\nprint(delta.seconds)\n\n", "Simplest and most direct may be something like:\ndef getime(prom):\n \"\"\"Prompt for input, return minutes since midnight\"\"\"\n s = raw_input('Enter time-%s (hh:mm): ' % prom)\n sh, sm = s.split(':')\n return int(sm) + 60 * int(sh)\n\ntime1 = getime('1')\ntime2 = getime('2')\n\ndiff = time2 - time1\n\nprint \"Difference: %d hours and %d minutes\" % (diff//60, diff%60)\n\nE.g., a typical run might be:\n$ python ti.py \nEnter time-1 (hh:mm): 01:12\nEnter time-2 (hh:mm): 18:59\nDifference: 17 hours and 47 minutes\n\n", "Here's a timer for timing code execution. Maybe you can use it for what you want. time() returns the current time in seconds and microseconds since 1970-01-01 00:00:00.\nfrom time import time\nt0 = time()\n# do stuff that takes time\nprint time() - t0\n\n", "Assuming that the user is entering strings like \"01:12\", you need to convert (as well as validate) those strings into the number of minutes since 00:00 (e.g., \"01:12\" is 1*60+12, or 72 minutes), then subtract one from the other. You can then convert the difference in minutes back into a string of the form hh:mm.\n" ]
[ 17, 6, 4, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001965201_python.txt
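A compact variant for the entry above that glues together the parsing, the subtraction, the question's divide-by-12 step and the hh:mm output. It sticks to whole minutes so it behaves the same on Python 2 and 3 (swap raw_input for input on Python 3).

def to_minutes(s):
    h, m = s.split(':')
    return int(h) * 60 + int(m)

t1 = raw_input('Enter time-1 (hh:mm): ')        # e.g. 01:12
t2 = raw_input('Enter time-2 (hh:mm): ')        # e.g. 18:59

diff = to_minutes(t2) - to_minutes(t1)          # 1067 minutes for the example
part = diff // 12                               # the "divided by 12" step: 88 minutes
print('%02d:%02d' % (part // 60, part % 60))    # 01:28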
Q: File downloading using python with threads I'm creating a python script which accepts a path to a remote file and an n number of threads. The file's size will be divided by the number of threads, when each thread completes I want them to append the fetch data to a local file. How do I manage it so that the order in which the threads where generated will append to the local file in order so that the bytes don't get scrambled? Also, what if I'm to download several files simultaneously? A: You could coordinate the works with locks &c, but I recommend instead using Queue -- usually the best way to coordinate multi-threading (and multi-processing) in Python. I would have the main thread spawn as many worker threads as you think appropriate (you may want to calibrate between performance, and load on the remote server, by experimenting); every worker thread waits at the same global Queue.Queue instance, call it workQ for example, for "work requests" (wr = workQ.get() will do it properly -- each work request is obtained by a single worker thread, no fuss, no muss). A "work request" can in this case simply be a triple (tuple with three items): identification of the remote file (URL or whatever), offset from which it is requested to get data from it, number of bytes to get from it (note that this works just as well for one or multiple files ot fetch). The main thread pushes all work requests to the workQ (just workQ.put((url, from, numbytes)) for each request) and waits for results to come to another Queue instance, call it resultQ (each result will also be a triple: identifier of the file, starting offset, string of bytes that are the results from that file at that offset). As each working thread satisfies the request it's doing, it puts the results into resultQ and goes back to fetch another work request (or wait for one). Meanwhile the main thread (or a separate dedicated "writing thread" if needed -- i.e. if the main thread has other work to do, for example on the GUI) gets results from resultQ and performs the needed open, seek, and write operations to place the data at the right spot. There are several ways to terminate the operation: for example, a special work request may be asking the thread receiving it to terminate -- the main thread puts on workQ just as many of those as there are working threads, after all the actual work requests, then joins all the worker threads when all data have been received and written (many alternatives exist, such as joining the queue directly, having the worker threads daemonic so they just go away when the main thread terminates, and so forth). A: You need to fetch completely separate parts of the file on each thread. Calculate the chunk start and end positions based on the number of threads. Each chunk must have no overlap obviously. For example, if target file was 3000 bytes long and you want to fetch using three thread: Thread 1: fetches bytes 1 to 1000 Thread 2: fetches bytes 1001 to 2000 Thread 3: fetches bytes 2001 to 3000 You would pre-allocate an empty file of the original size, and write back to the respective positions within the file. A: You can use a thread safe "semaphore", like this: class Counter: counter = 0 @classmethod def inc(cls): n = cls.counter = cls.counter + 1 # atomic increment and assignment return n Using Counter.inc() returns an incremented number across threads, which you can use to keep track of the current block of bytes. 
That being said, there's no need to split up file downloads into several threads, because the downstream is far slower than the writing to disk, so one thread will always finish before the next one is done downloading.
The best and least resource-hungry way is simply to have a download file descriptor linked directly to a file object on disk. A: For "download several files simultaneously", I recommend this article: Practical threaded programming with Python. It provides an example of downloading several files simultaneously by combining threads with Queues; I thought it was worth reading.
File downloading using python with threads
I'm creating a python script which accepts a path to a remote file and an n number of threads. The file's size will be divided by the number of threads, when each thread completes I want them to append the fetch data to a local file. How do I manage it so that the order in which the threads where generated will append to the local file in order so that the bytes don't get scrambled? Also, what if I'm to download several files simultaneously?
[ "You could coordinate the works with locks &c, but I recommend instead using Queue -- usually the best way to coordinate multi-threading (and multi-processing) in Python.\nI would have the main thread spawn as many worker threads as you think appropriate (you may want to calibrate between performance, and load on the remote server, by experimenting); every worker thread waits at the same global Queue.Queue instance, call it workQ for example, for \"work requests\" (wr = workQ.get() will do it properly -- each work request is obtained by a single worker thread, no fuss, no muss).\nA \"work request\" can in this case simply be a triple (tuple with three items): identification of the remote file (URL or whatever), offset from which it is requested to get data from it, number of bytes to get from it (note that this works just as well for one or multiple files ot fetch).\nThe main thread pushes all work requests to the workQ (just workQ.put((url, from, numbytes)) for each request) and waits for results to come to another Queue instance, call it resultQ (each result will also be a triple: identifier of the file, starting offset, string of bytes that are the results from that file at that offset).\nAs each working thread satisfies the request it's doing, it puts the results into resultQ and goes back to fetch another work request (or wait for one). Meanwhile the main thread (or a separate dedicated \"writing thread\" if needed -- i.e. if the main thread has other work to do, for example on the GUI) gets results from resultQ and performs the needed open, seek, and write operations to place the data at the right spot.\nThere are several ways to terminate the operation: for example, a special work request may be asking the thread receiving it to terminate -- the main thread puts on workQ just as many of those as there are working threads, after all the actual work requests, then joins all the worker threads when all data have been received and written (many alternatives exist, such as joining the queue directly, having the worker threads daemonic so they just go away when the main thread terminates, and so forth).\n", "You need to fetch completely separate parts of the file on each thread. Calculate the chunk start and end positions based on the number of threads. Each chunk must have no overlap obviously.\nFor example, if target file was 3000 bytes long and you want to fetch using three thread:\n\nThread 1: fetches bytes 1 to 1000\nThread 2: fetches bytes 1001 to 2000\nThread 3: fetches bytes 2001 to 3000\n\nYou would pre-allocate an empty file of the original size, and write back to the respective positions within the file.\n", "You can use a thread safe \"semaphore\", like this:\nclass Counter:\n counter = 0\n @classmethod\n def inc(cls):\n n = cls.counter = cls.counter + 1 # atomic increment and assignment\n return n\n\nUsing Counter.inc() returns an incremented number across threads, which you can use to keep track of the current block of bytes.\nThat being said, there's no need to split up file downloads into several threads, because the downstream is way slower than the writing to disk, so one thread will always finish before the next one is downloading.\nThe best and least resource hungry way is simply to have a download file descriptor linked directly to a file object on disk.\n", "for \"download several files simultaneously\", I recommond this article: Practical threaded programming with Python . 
It provides an example of downloading several files simultaneously by combining threads with Queues; I thought it was worth reading.\n" ]
[ 9, 1, 0, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0001965213_multithreading_python.txt
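A condensed, self-contained sketch of the workQ/resultQ layout from the first answer above. The 'download' is faked with a slice of an in-memory byte string so the example runs as-is; for real fetches you would replace that slice with an HTTP request for the given byte range.

import Queue, threading        # the module is named 'queue' on Python 3

source = b'abcdefghijklmnopqrstuvwxyz' * 4     # stand-in for the remote file
chunk = 16
work_q, result_q = Queue.Queue(), Queue.Queue()

def worker():
    while True:
        offset, nbytes = work_q.get()
        if offset is None:                     # poison pill: this worker is done
            break
        result_q.put((offset, source[offset:offset + nbytes]))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

offsets = range(0, len(source), chunk)
for offset in offsets:
    work_q.put((offset, chunk))
for _ in threads:
    work_q.put((None, 0))

out = open('out.bin', 'wb')
out.truncate(len(source))                      # pre-allocate the target size
for _ in offsets:
    offset, data = result_q.get()              # results arrive in any order
    out.seek(offset)
    out.write(data)                            # ...but land at the right spot
out.close()

for t in threads:
    t.join()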
Q: Split a large string into multiple substrings containing 'n' number of words via python Source text: United States Declaration of Independence How can one split the above source text into a number of sub-strings, containing an 'n' number of words? I use split(' ') to extract each word, however I do not know how to do this with multiple words in one operation. I could run through the list of words that I have, and create another by gluing together words in the first list (whilst adding spaces). However my method isn't very pythonic. A: text = """ When in the course of human Events, it becomes necessary for one People to dissolve the Political Bands which have connected them with another, and to assume among the Powers of the Earth, the separate and equal Station to which the Laws of Nature and of Nature?s God entitle them, a decent Respect to the Opinions of Mankind requires that they should declare the causes which impel them to the Separation. We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness?-That to secure these Rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed, that whenever any Form of Government becomes destructive of these Ends, it is the Right of the People to alter or abolish it, and to institute a new Government, laying its Foundation on such Principles, and organizing its Powers in such Form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient Causes; and accordingly all Experience hath shewn, that Mankind are more disposed to suffer, while Evils are sufferable, than to right themselves by abolishing the Forms to which they are accustomed. But when a long Train of Abuses and Usurpations, pursuing invariably the same Object, evinces a Design to reduce them under absolute Despotism, it is their Right, it is their Duty, to throw off such Government, and to provide new Guards for their future Security. Such has been the patient Sufferance of these Colonies; and such is now the Necessity which constrains them to alter their former Systems of Government. The History of the Present King of Great-Britain is a History of repeated Injuries and Usurpations, all having in direct Object the Establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid World. """ words = text.split() subs = [] n = 4 for i in range(0, len(words), n): subs.append(" ".join(words[i:i+n])) print subs[:10] prints: ['When in the course', 'of human Events, it', 'becomes necessary for one', 'People to dissolve the', 'Political Bands which have', 'connected them with another,', 'and to assume among', 'the Powers of the', 'Earth, the separate and', 'equal Station to which'] or, as a list comprehension: subs = [" ".join(words[i:i+n]) for i in range(0, len(words), n)] A: You're trying to create n-grams? Here's how I do it, using the NLTK. 
punct = re.compile(r'^[^A-Za-z0-9]+|[^a-zA-Z0-9]+$') is_word=re.compile(r'[a-z]', re.IGNORECASE) sentence_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle') word_tokenizer=nltk.tokenize.punkt.PunktWordTokenizer() def get_words(sentence): return [punct.sub('',word) for word in word_tokenizer.tokenize(sentence) if is_word.search(word)] def ngrams(text, n): for sentence in sentence_tokenizer.tokenize(text.lower()): words = get_words(sentence) for i in range(len(words)-(n-1)): yield(' '.join(words[i:i+n])) Then for ngram in ngrams(sometext, 3): print ngram A: For large string, iterator is recommended for speed and low memory footprint. import re, itertools # Original text text = "When in the course of human Events, it becomes necessary for one People to dissolve the Political Bands which have connected them with another, and to assume among the Powers of the Earth, the separate and equal Station to which the Laws of Nature and of Nature?s God entitle them, a decent Respect to the Opinions of Mankind requires that they should declare the causes which impel them to the Separation." n = 10 # An iterator which will extract words one by one from text when needed words = itertools.imap(lambda m:m.group(), re.finditer(r'\w+', text)) # The final iterator that combines words into n-length groups word_groups = itertools.izip_longest(*(words,)*n) for g in word_groups: print g will get the following result: ('When', 'in', 'the', 'course', 'of', 'human', 'Events', 'it', 'becomes', 'necessary') ('for', 'one', 'People', 'to', 'dissolve', 'the', 'Political', 'Bands', 'which', 'have') ('connected', 'them', 'with', 'another', 'and', 'to', 'assume', 'among', 'the', 'Powers') ('of', 'the', 'Earth', 'the', 'separate', 'and', 'equal', 'Station', 'to', 'which') ('the', 'Laws', 'of', 'Nature', 'and', 'of', 'Nature', 's', 'God', 'entitle') ('them', 'a', 'decent', 'Respect', 'to', 'the', 'Opinions', 'of', 'Mankind', 'requires') ('that', 'they', 'should', 'declare', 'the', 'causes', 'which', 'impel', 'them', 'to') ('the', 'Separation', None, None, None, None, None, None, None, None)
Split a large string into multiple substrings containing 'n' number of words via python
Source text: United States Declaration of Independence How can one split the above source text into a number of sub-strings, containing an 'n' number of words? I use split(' ') to extract each word, however I do not know how to do this with multiple words in one operation. I could run through the list of words that I have, and create another by gluing together words in the first list (whilst adding spaces). However my method isn't very pythonic.
[ "text = \"\"\"\nWhen in the course of human Events, it becomes necessary for one People to dissolve the Political Bands which have connected them with another, and to assume among the Powers of the Earth, the separate and equal Station to which the Laws of Nature and of Nature?s God entitle them, a decent Respect to the Opinions of Mankind requires that they should declare the causes which impel them to the Separation.\n\nWe hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness?-That to secure these Rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed, that whenever any Form of Government becomes destructive of these Ends, it is the Right of the People to alter or abolish it, and to institute a new Government, laying its Foundation on such Principles, and organizing its Powers in such Form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient Causes; and accordingly all Experience hath shewn, that Mankind are more disposed to suffer, while Evils are sufferable, than to right themselves by abolishing the Forms to which they are accustomed. But when a long Train of Abuses and Usurpations, pursuing invariably the same Object, evinces a Design to reduce them under absolute Despotism, it is their Right, it is their Duty, to throw off such Government, and to provide new Guards for their future Security. Such has been the patient Sufferance of these Colonies; and such is now the Necessity which constrains them to alter their former Systems of Government. The History of the Present King of Great-Britain is a History of repeated Injuries and Usurpations, all having in direct Object the Establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid World.\n\"\"\"\n\nwords = text.split()\nsubs = []\nn = 4\nfor i in range(0, len(words), n):\n subs.append(\" \".join(words[i:i+n]))\nprint subs[:10]\n\nprints:\n['When in the course', 'of human Events, it', 'becomes necessary for one', 'People to dissolve the', 'Political Bands which have', 'connected them with another,', 'and to assume among', 'the Powers of the', 'Earth, the separate and', 'equal Station to which']\n\nor, as a list comprehension:\nsubs = [\" \".join(words[i:i+n]) for i in range(0, len(words), n)]\n\n", "You're trying to create n-grams? 
Here's how I do it, using the NLTK.\npunct = re.compile(r'^[^A-Za-z0-9]+|[^a-zA-Z0-9]+$')\nis_word=re.compile(r'[a-z]', re.IGNORECASE)\nsentence_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')\nword_tokenizer=nltk.tokenize.punkt.PunktWordTokenizer()\n\ndef get_words(sentence):\n return [punct.sub('',word) for word in word_tokenizer.tokenize(sentence) if is_word.search(word)]\n\ndef ngrams(text, n):\n for sentence in sentence_tokenizer.tokenize(text.lower()):\n words = get_words(sentence)\n for i in range(len(words)-(n-1)):\n yield(' '.join(words[i:i+n]))\n\nThen\nfor ngram in ngrams(sometext, 3):\n print ngram\n\n", "For large string, iterator is recommended for speed and low memory footprint.\nimport re, itertools\n\n# Original text\ntext = \"When in the course of human Events, it becomes necessary for one People to dissolve the Political Bands which have connected them with another, and to assume among the Powers of the Earth, the separate and equal Station to which the Laws of Nature and of Nature?s God entitle them, a decent Respect to the Opinions of Mankind requires that they should declare the causes which impel them to the Separation.\"\nn = 10\n\n# An iterator which will extract words one by one from text when needed\nwords = itertools.imap(lambda m:m.group(), re.finditer(r'\\w+', text))\n# The final iterator that combines words into n-length groups\nword_groups = itertools.izip_longest(*(words,)*n)\n\nfor g in word_groups: print g\n\nwill get the following result:\n('When', 'in', 'the', 'course', 'of', 'human', 'Events', 'it', 'becomes', 'necessary')\n('for', 'one', 'People', 'to', 'dissolve', 'the', 'Political', 'Bands', 'which', 'have')\n('connected', 'them', 'with', 'another', 'and', 'to', 'assume', 'among', 'the', 'Powers')\n('of', 'the', 'Earth', 'the', 'separate', 'and', 'equal', 'Station', 'to', 'which')\n('the', 'Laws', 'of', 'Nature', 'and', 'of', 'Nature', 's', 'God', 'entitle')\n('them', 'a', 'decent', 'Respect', 'to', 'the', 'Opinions', 'of', 'Mankind', 'requires')\n('that', 'they', 'should', 'declare', 'the', 'causes', 'which', 'impel', 'them', 'to')\n('the', 'Separation', None, None, None, None, None, None, None, None)\n\n" ]
[ 7, 3, 3 ]
[]
[]
[ "python", "split", "string", "substring", "words" ]
stackoverflow_0001964999_python_split_string_substring_words.txt
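Aside on the word-grouping answers above: the itertools-based version shown there is Python 2 (imap, izip_longest, print statements). Below is a minimal Python 3 sketch of the same single-pass idea; the sample text and chunk size are placeholders, not taken from the original post.

import re
from itertools import zip_longest

def word_chunks(text, n, fill=None):
    # Yield tuples of n consecutive words; the last tuple is padded with fill.
    words = (m.group() for m in re.finditer(r"\w+", text))
    # Passing the same iterator n times makes zip_longest pull n words per tuple
    return zip_longest(*(words,) * n, fillvalue=fill)

sample = "When in the course of human events it becomes necessary for one people"
for group in word_chunks(sample, 4):
    print(" ".join(w for w in group if w is not None))

This keeps the low-memory, iterator-based behaviour of the original answer while running on Python 3.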
Q: Pygtk graphics contexts and allocating colors I've searched on this, but nothing has what I'm looking for. http://www.mail-archive.com/pygtk@daa.com.au/msg10529.html -- Nobody answered him. This is exactly what I'm experiencing. When I set the foreground on a graphics context, it doesn't seem to actually change. I've been through the tutorial and FAQ, but neither say much. They either just use the black and white contexts or they give you broken links. I'm left thinking maybe it's a bug. But the kid in me says I'm just missing something and I keep ignoring the fact I have a working alternative. This would be better though. And the further I get into this, the more I'm going to need these contexts and colors. Here's my code snippet. def CreatePixmapFromLCDdata(lcdP, ch, widget): width = lcdP.get_char_width() height = lcdP.get_char_height() # Create pixmap pixmap = gtk.gdk.Pixmap(widget.window, width, height) # Working graphics contexts, wrong color black_gc = widget.get_style().black_gc white_gc = widget.get_style().white_gc char_gc = widget.window.new_gc() colormap = char_gc.get_colormap() bg_color = NewColor(text="#78a878", colormap=colormap) print "Before", char_gc.foreground.red, char_gc.foreground.green, char_gc.foreground.blue char_gc.set_foreground(bg_color) print "AFter", char_gc.foreground.red, char_gc.foreground.green, char_gc.foreground.blue fg_color = NewColor(text="#113311", colormap=colormap) pixmap.draw_rectangle(char_gc, True, 0, 0, width, height) char_gc.foreground = fg_color for j in range(lcdP.dots['y']): k = lcdP.pixels['y']*j for i in range(lcdP.dots['x']): if 1<<(lcdP.dots['x']-1-i) & ch[j] == 0: continue m = i*lcdP.pixels['y'] for jj in range(k, k+lcdP.pixels['y']-1): for ii in range(m+1, m+lcdP.pixels['x']): pixmap.draw_point(char_gc, ii, jj) return pixmap I thought maybe it was the way I was allocating the colors. As you see in the snippet, I've used the graphic context's own colormap. I've tried different colormaps, this being the latest. I've even tried an unallocated color. Notice the white_gc and black_gc graphics contexts. When I use those I'm able to draw black on a white background fine. Otherwise (with a created context) everything's black, fg and bg. When I change white's foreground color, it always comes out black. Here's the output. Notice the color doesn't change very much. I'd say it didn't change, or at least not enough to matter visually. Before 6 174 60340 After 5 174 60340 Here's how I allocate the colors. def NewColor(red=0, green=0, blue=0, text=None, colormap=None): if text == None: c = gtk.gdk.Color(red, green, blue) else: c = gtk.gdk.color_parse(text) if colormap == None: colormap = gtk.gdk.colormap_get_system() colormap.alloc_color(c) return c A: I've had some trouble with Drawable and GC in the past. This answer got me started on the way to a solution. 
Here's a quick example that uses a custom colour gc to draw some squares: import gtk square_sz = 20 pixmap = None colour = "#FF0000" gc = None def configure_event( widget, event): global pixmap x, y, width, height = widget.get_allocation() pixmap = gtk.gdk.Pixmap(widget.window, width, height) white_gc = widget.get_style().white_gc pixmap.draw_rectangle(white_gc, True, 0, 0, width, height) return True def expose_event(widget, event): global pixmap if pixmap: x , y, w, h = event.area drawable_gc = widget.get_style().fg_gc[gtk.STATE_NORMAL] widget.window.draw_drawable(drawable_gc, pixmap, x, y, x, y, w, h) return False def button_press_event(widget, event): global pixmap, square_sz, gc, colour if event.button == 1 and pixmap: x = int(event.x / square_sz) * square_sz y = int(event.y / square_sz) * square_sz if not gc: gc = widget.window.new_gc() gc.set_rgb_fg_color(gtk.gdk.color_parse(colour)) pixmap.draw_rectangle(gc, True, x, y, square_sz, square_sz) widget.queue_draw_area(x, y, square_sz, square_sz) return True if __name__ == "__main__": da = gtk.DrawingArea() da.set_size_request(square_sz*20, square_sz*20) da.connect("expose_event", expose_event) da.connect("configure_event", configure_event) da.connect("button_press_event", button_press_event) da.set_events(gtk.gdk.EXPOSURE_MASK | gtk.gdk.BUTTON_PRESS_MASK) w = gtk.Window() w.add(da) w.show_all() w.connect("destroy", lambda w: gtk.main_quit()) gtk.main() Hope it helps. A: The problem is that the NewColor() function is returning an unallocated color c. colormap.alloc_color() returns a gtk.gdk.Color which is the allocated color. To fix things the last line in NewColor() should be: return colormap.alloc_color(c)
Pygtk graphics contexts and allocating colors
I've searched on this, but nothing has what I'm looking for. http://www.mail-archive.com/pygtk@daa.com.au/msg10529.html -- Nobody answered him. This is exactly what I'm experiencing. When I set the foreground on a graphics context, it doesn't seem to actually change. I've been through the tutorial and FAQ, but neither say much. They either just use the black and white contexts or they give you broken links. I'm left thinking maybe it's a bug. But the kid in me says I'm just missing something and I keep ignoring the fact I have a working alternative. This would be better though. And the further I get into this, the more I'm going to need these contexts and colors. Here's my code snippet. def CreatePixmapFromLCDdata(lcdP, ch, widget): width = lcdP.get_char_width() height = lcdP.get_char_height() # Create pixmap pixmap = gtk.gdk.Pixmap(widget.window, width, height) # Working graphics contexts, wrong color black_gc = widget.get_style().black_gc white_gc = widget.get_style().white_gc char_gc = widget.window.new_gc() colormap = char_gc.get_colormap() bg_color = NewColor(text="#78a878", colormap=colormap) print "Before", char_gc.foreground.red, char_gc.foreground.green, char_gc.foreground.blue char_gc.set_foreground(bg_color) print "AFter", char_gc.foreground.red, char_gc.foreground.green, char_gc.foreground.blue fg_color = NewColor(text="#113311", colormap=colormap) pixmap.draw_rectangle(char_gc, True, 0, 0, width, height) char_gc.foreground = fg_color for j in range(lcdP.dots['y']): k = lcdP.pixels['y']*j for i in range(lcdP.dots['x']): if 1<<(lcdP.dots['x']-1-i) & ch[j] == 0: continue m = i*lcdP.pixels['y'] for jj in range(k, k+lcdP.pixels['y']-1): for ii in range(m+1, m+lcdP.pixels['x']): pixmap.draw_point(char_gc, ii, jj) return pixmap I thought maybe it was the way I was allocating the colors. As you see in the snippet, I've used the graphic context's own colormap. I've tried different colormaps, this being the latest. I've even tried an unallocated color. Notice the white_gc and black_gc graphics contexts. When I use those I'm able to draw black on a white background fine. Otherwise (with a created context) everything's black, fg and bg. When I change white's foreground color, it always comes out black. Here's the output. Notice the color doesn't change very much. I'd say it didn't change, or at least not enough to matter visually. Before 6 174 60340 After 5 174 60340 Here's how I allocate the colors. def NewColor(red=0, green=0, blue=0, text=None, colormap=None): if text == None: c = gtk.gdk.Color(red, green, blue) else: c = gtk.gdk.color_parse(text) if colormap == None: colormap = gtk.gdk.colormap_get_system() colormap.alloc_color(c) return c
[ "I've had some trouble with Drawable and GC in the past. This answer got me started on the way to a solution. Here's a quick example that uses a custom colour gc to draw some squares:\nimport gtk\n\nsquare_sz = 20\npixmap = None\ncolour = \"#FF0000\"\ngc = None\n\ndef configure_event( widget, event):\n global pixmap\n x, y, width, height = widget.get_allocation()\n pixmap = gtk.gdk.Pixmap(widget.window, width, height)\n white_gc = widget.get_style().white_gc\n pixmap.draw_rectangle(white_gc, True, 0, 0, width, height)\n return True\n\ndef expose_event(widget, event):\n global pixmap\n if pixmap:\n x , y, w, h = event.area\n drawable_gc = widget.get_style().fg_gc[gtk.STATE_NORMAL]\n widget.window.draw_drawable(drawable_gc, pixmap, x, y, x, y, w, h)\n return False\n\ndef button_press_event(widget, event):\n global pixmap, square_sz, gc, colour\n if event.button == 1 and pixmap:\n x = int(event.x / square_sz) * square_sz\n y = int(event.y / square_sz) * square_sz\n if not gc:\n gc = widget.window.new_gc()\n gc.set_rgb_fg_color(gtk.gdk.color_parse(colour))\n pixmap.draw_rectangle(gc, True, x, y, square_sz, square_sz)\n widget.queue_draw_area(x, y, square_sz, square_sz)\n\n return True\n\nif __name__ == \"__main__\":\n da = gtk.DrawingArea()\n da.set_size_request(square_sz*20, square_sz*20)\n\n da.connect(\"expose_event\", expose_event)\n da.connect(\"configure_event\", configure_event)\n da.connect(\"button_press_event\", button_press_event)\n\n da.set_events(gtk.gdk.EXPOSURE_MASK | gtk.gdk.BUTTON_PRESS_MASK)\n\n w = gtk.Window()\n w.add(da)\n w.show_all()\n w.connect(\"destroy\", lambda w: gtk.main_quit())\n\n gtk.main()\n\nHope it helps.\n", "The problem is that the NewColor() function is returning an unallocated color c. colormap.alloc_color() returns a gtk.gdk.Color which is the allocated color. To fix things the last line in NewColor() should be:\nreturn colormap.alloc_color(c)\n\n" ]
[ 2, 1 ]
[]
[]
[ "drawing2d", "pygtk", "python" ]
stackoverflow_0000938921_drawing2d_pygtk_python.txt
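Aside on the pygtk answers above: the second answer's diagnosis is that NewColor() returned the unallocated color. Below is a hedged sketch of the corrected helper, assuming the same pygtk setup as the question (untested outside the original application).

import gtk

def NewColor(red=0, green=0, blue=0, text=None, colormap=None):
    if text is None:
        c = gtk.gdk.Color(red, green, blue)
    else:
        c = gtk.gdk.color_parse(text)
    if colormap is None:
        colormap = gtk.gdk.colormap_get_system()
    # alloc_color() hands back the allocated gtk.gdk.Color; returning the
    # original c (as the question did) is why the gc kept drawing in black
    return colormap.alloc_color(c)

As the first answer shows, gc.set_rgb_fg_color(gtk.gdk.color_parse("#78a878")) sidesteps the colormap handling entirely and is usually the simpler route.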
Q: how to make a python array of particular objects in java, the following code defines an array of the predefined class (myCls): myCls arr[] = new myCls how can I do that in python? I want to have an array of type (myCls)? thanks in advance A: Python is dynamically typed. You do not need to (and in fact CAN'T) create a list that only contains a single type: arr = list() arr = [] If you require it to only contain a single type then you'll have to create your own list-alike, reimplementing the list methods and __setitem__() yourself. A: You can only create arrays of a single type using the array package, but it is not designed to store user-defined classes. The Python way would be to just create a list of objects: a = [myCls() for _ in xrange(10)] You might want to take a look at this stackoverflow question. NOTE: Be careful with this notation, IT PROBABLY DOES NOT DO WHAT YOU INTEND: a = [myCls()] * 10 This will also create a list with ten myCls entries, but they are ten references to THE SAME OBJECT, not ten independently created objects.
how to make a python array of particular objects
in java, the following code defines an array of the predefined class (myCls): myCls arr[] = new myCls how can I do that in python? I want to have an array of type (myCls)? thanks in advance
[ "Python is dynamically typed. You do not need to (and in fact CAN'T) create a list that only contains a single type:\narr = list()\narr = []\n\nIf you require it to only contain a single type then you'll have to create your own list-alike, reimplementing the list methods and __setitem__() yourself.\n", "You can only create arrays of only a single type using the array package, but it is not designed to store user-defined classes.\nThe python way would be to just create a list of objects:\na = [myCls() for _ in xrange(10)]\n\nYou might want to take a look at this stackoverflow question.\nNOTE:\nBe careful with this notation, IT PROBABLY DOES NOT WHAT YOU INTEND:\na = [myCls()] * 10\n\nThis will also create a list with ten times a myCls object, but it is ten times THE SAME OBJECT, not ten independently created objects.\n" ]
[ 7, 5 ]
[]
[]
[ "arrays", "python", "types" ]
stackoverflow_0001965725_arrays_python_types.txt
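Aside on the list-of-objects answers above: a small, self-contained illustration (the class name is made up) of why the list comprehension and the * 10 form behave differently.

class MyCls(object):
    def __init__(self):
        self.value = 0

independent = [MyCls() for _ in range(10)]   # ten separate instances
aliased = [MyCls()] * 10                     # one instance, ten references

independent[0].value = 99
aliased[0].value = 99

print(independent[1].value)   # 0  -- the other elements are untouched
print(aliased[1].value)       # 99 -- every slot is the same object

The comprehension calls MyCls() ten times; the multiplication copies a single reference ten times, so mutating one element is visible through all of them.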
Q: 2to3 not working I'm converting a single module using 2to3. test_lib2to3.py is in /Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/test/test_lib2to3.py File to be converted is in /Users/Nimbuz/Documents/python31/Excercise 1/time3.py Terminal Session: localhost:test Nimbuz$ 2to3 /Users/Nimbuz/Documents/python31/Excercise\ 1/time3.py RefactoringTool: No files need to be modified. Code: # handling date/time data # Python23 tested vegaseat 3/6/2005 import time print "List the functions within module time:" for funk in dir(time): print funk print time.time(), "seconds since 1/1/1970 00:00:00" print time.time()/(60*60*24), "days since 1/1/1970" # time.clock() gives wallclock seconds, accuracy better than 1 ms # time.clock() is for windows, time.time() is more portable print "Using time.clock() = ", time.clock(), "seconds since first call to clock()" print "\nTiming a 1 million loop 'for loop' ..." start = time.clock() for x in range(1000000): y = x # do something end = time.clock() print "Time elapsed = ", end - start, "seconds" # create a tuple of local time data timeHere = time.localtime() print "\nA tuple of local date/time data using time.localtime():" print "(year,month,day,hour,min,sec,weekday(Monday=0),yearday,dls-flag)" print timeHere # extract a more readable date/time from the tuple # eg. Sat Mar 05 22:51:55 2005 print "\nUsing time.asctime(time.localtime()):", time.asctime(time.localtime()) # the same results print "\nUsing time.ctime(time.time()):", time.ctime(time.time()) print "\nOr using time.ctime():", time.ctime() print "\nUsing strftime():" print "Day and Date:", time.strftime("%a %m/%d/%y", time.localtime()) print "Day, Date :", time.strftime("%A, %B %d, %Y", time.localtime()) print "Time (12hr) :", time.strftime("%I:%M:%S %p", time.localtime()) print "Time (24hr) :", time.strftime("%H:%M:%S", time.localtime()) print "DayMonthYear:",time.strftime("%d%b%Y", time.localtime()) print print "Start a line with this date-time stamp and it will sort:",\ time.strftime("%Y/%m/%d %H:%M:%S", time.localtime()) print def getDayOfWeek(dateString): # day of week (Monday = 0) of a given month/day/year t1 = time.strptime(dateString,"%m/%d/%Y") # year in time_struct t1 can not go below 1970 (start of epoch)! t2 = time.mktime(t1) return(time.localtime(t2)[6]) Weekday = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] # sorry about the limitations, stay above 01/01/1970 # more exactly 01/01/1970 at 0 UT (midnight Greenwich, England) print "11/12/1970 was a", Weekday[getDayOfWeek("11/12/1970")] print print "Calculate difference between two times (12 hour format) of a day:" time1 = raw_input("Enter first time (format 11:25:00AM or 03:15:30PM): ") # pick some plausible date timeString1 = "03/06/05 " + time1 # create a time tuple from this time string format eg. 03/06/05 11:22:00AM timeTuple1 = time.strptime(timeString1, "%m/%d/%y %I:%M:%S%p") #print timeTuple1 # test eg. 
(2005, 3, 6, 11, 22, 0, 5, 91, -1) time2 = raw_input("Enter second time (format 11:25:00AM or 03:15:30PM): ") # use same date to stay in same day timeString2 = "03/06/05 " + time2 timeTuple2 = time.strptime(timeString2, "%m/%d/%y %I:%M:%S%p") # mktime() gives seconds since epoch 1/1/1970 00:00:00 time_difference = time.mktime(timeTuple2) - time.mktime(timeTuple1) #print type(time_difference) # test <type 'float'> print "Time difference = %d seconds" % int(time_difference) print "Time difference = %0.1f minutes" % (time_difference/60.0) print "Time difference = %0.2f hours" % (time_difference/(60.0*60)) print print "Wait one and a half seconds!" time.sleep(1.5) print "The end!" A: Maybe there's something wrong with path. Try 2to3 "/Users/Nimbuz/Documents/python31/Excercise 1/time3.py" Instead of 2to3 /Users/Nimbuz/Documents/python31/Excercise\ 1/time3.py or just cd to that folder and 2to3 time3.py Ready diff: http://pastebay.com/78746
2to3 not working
I'm converting a single module using 2to3. test_lib2to3.py is in /Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/test/test_lib2to3.py File to be converted is in /Users/Nimbuz/Documents/python31/Excercise 1/time3.py Terminal Session: localhost:test Nimbuz$ 2to3 /Users/Nimbuz/Documents/python31/Excercise\ 1/time3.py RefactoringTool: No files need to be modified. Code: # handling date/time data # Python23 tested vegaseat 3/6/2005 import time print "List the functions within module time:" for funk in dir(time): print funk print time.time(), "seconds since 1/1/1970 00:00:00" print time.time()/(60*60*24), "days since 1/1/1970" # time.clock() gives wallclock seconds, accuracy better than 1 ms # time.clock() is for windows, time.time() is more portable print "Using time.clock() = ", time.clock(), "seconds since first call to clock()" print "\nTiming a 1 million loop 'for loop' ..." start = time.clock() for x in range(1000000): y = x # do something end = time.clock() print "Time elapsed = ", end - start, "seconds" # create a tuple of local time data timeHere = time.localtime() print "\nA tuple of local date/time data using time.localtime():" print "(year,month,day,hour,min,sec,weekday(Monday=0),yearday,dls-flag)" print timeHere # extract a more readable date/time from the tuple # eg. Sat Mar 05 22:51:55 2005 print "\nUsing time.asctime(time.localtime()):", time.asctime(time.localtime()) # the same results print "\nUsing time.ctime(time.time()):", time.ctime(time.time()) print "\nOr using time.ctime():", time.ctime() print "\nUsing strftime():" print "Day and Date:", time.strftime("%a %m/%d/%y", time.localtime()) print "Day, Date :", time.strftime("%A, %B %d, %Y", time.localtime()) print "Time (12hr) :", time.strftime("%I:%M:%S %p", time.localtime()) print "Time (24hr) :", time.strftime("%H:%M:%S", time.localtime()) print "DayMonthYear:",time.strftime("%d%b%Y", time.localtime()) print print "Start a line with this date-time stamp and it will sort:",\ time.strftime("%Y/%m/%d %H:%M:%S", time.localtime()) print def getDayOfWeek(dateString): # day of week (Monday = 0) of a given month/day/year t1 = time.strptime(dateString,"%m/%d/%Y") # year in time_struct t1 can not go below 1970 (start of epoch)! t2 = time.mktime(t1) return(time.localtime(t2)[6]) Weekday = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] # sorry about the limitations, stay above 01/01/1970 # more exactly 01/01/1970 at 0 UT (midnight Greenwich, England) print "11/12/1970 was a", Weekday[getDayOfWeek("11/12/1970")] print print "Calculate difference between two times (12 hour format) of a day:" time1 = raw_input("Enter first time (format 11:25:00AM or 03:15:30PM): ") # pick some plausible date timeString1 = "03/06/05 " + time1 # create a time tuple from this time string format eg. 03/06/05 11:22:00AM timeTuple1 = time.strptime(timeString1, "%m/%d/%y %I:%M:%S%p") #print timeTuple1 # test eg. 
(2005, 3, 6, 11, 22, 0, 5, 91, -1) time2 = raw_input("Enter second time (format 11:25:00AM or 03:15:30PM): ") # use same date to stay in same day timeString2 = "03/06/05 " + time2 timeTuple2 = time.strptime(timeString2, "%m/%d/%y %I:%M:%S%p") # mktime() gives seconds since epoch 1/1/1970 00:00:00 time_difference = time.mktime(timeTuple2) - time.mktime(timeTuple1) #print type(time_difference) # test <type 'float'> print "Time difference = %d seconds" % int(time_difference) print "Time difference = %0.1f minutes" % (time_difference/60.0) print "Time difference = %0.2f hours" % (time_difference/(60.0*60)) print print "Wait one and a half seconds!" time.sleep(1.5) print "The end!"
[ "Maybe there's something wrong with path.\nTry\n2to3 \"/Users/Nimbuz/Documents/python31/Excercise 1/time3.py\"\n\nInstead of\n2to3 /Users/Nimbuz/Documents/python31/Excercise\\ 1/time3.py\n\nor just cd to that folder and \n2to3 time3.py\n\nReady diff: http://pastebay.com/78746\n" ]
[ 0 ]
[]
[]
[ "path", "python", "python_2to3" ]
stackoverflow_0001965387_path_python_python_2to3.txt
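Aside on the 2to3 question above: the answer's suggestion is that the path may not be reaching 2to3 intact; quoting it (or cd-ing into the directory first) and adding the -w flag makes 2to3 write the changes back. For reference, here is a hand-converted Python 3 version of a few lines from time3.py, roughly what a successful run should produce (illustrative only, not the tool's exact output).

import time

print("List the functions within module time:")
for funk in dir(time):
    print(funk)

print(time.time(), "seconds since 1/1/1970 00:00:00")

# raw_input() is gone in Python 3; 2to3 rewrites it to input()
time1 = input("Enter first time (format 11:25:00AM or 03:15:30PM): ")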