content: string, length 85 to 101k
title: string, length 0 to 150
question: string, length 15 to 48k
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string, length 35 to 137
Q: String manipulation in Python I have a randomly generated string from 6 letters in this form, for example: A' B F2 E' B2 A2 C' D2 C D' E2 F Some letters have " ' " added to them, some have the number "2". What I want is to add letter "x" to every letter that is on its own. So it would look like this: A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx The trick is that it would add the "x" only to those letters that are on their own. That is, no B2 -> Bx2. Any ideas? A: Transform your string into list with split() s = """A' B F2 E' B2 A2 C' D2 C D' E2 F""" L = s.split(' ') for i in xrange(len(L)): if len(L[i]) == 1: L[i] += 'x' str_out = ' '.join(L) A: The split-comprehend-join version: ' '.join(n+'x' if len(n)==1 else n for n in inputstr.split(' ')) The regex version: >>> inputstr = "A' F B2 C" >>> re.sub(r'([A-Z])(?=\s|$)', r'\1x', inputstr) "A' Fx B2 Cx" In essence, find any uppercase letter followed by either a space or the end of the string, and replace it with that character followed by an x. I ran a few tests with timeit; the former (list comprehension) appears to run slightly faster than the latter (about 15-20% faster on average). This does not appear to change no matter the number of replacements that need to be done (a string 10 times as long still has about the same ratio of processing time as the original). A: Ugly or Pythonic? items = "A' B F2 E' B2 A2 C' D2 C D' E2 F".split() itemsx = ((a+'x' if len(a)==1 else a) for a in items) out = ' '.join(itemsx) A: With a regular expression, import re newstring = re.sub(r"\b(\w)(?![2'])", r'\1x', oldstring) should be fine. If you're allergic to res, news = ' '.join(x + 'x' if len(x)==1 else x for x in olds.split()) is a concise way of expressing a similar transformation (if length-one is really the only thing you need to check before appending 'x' to an item). A: ' '.join(n if len(n) == 2 else n + 'x' for n in s.split(' ')) A: >>> s="A' B F2 E' B2 A2 C' D2 C D' E2 F".split() >>> import string >>> letters=list(string.letters) >>> for n,i in enumerate(s): ... if i in letters: ... s[n]=i+"x" ... >>> ' '.join(s) "A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx" >>> A: >>> ' '.join((i+'x')[:2] for i in items.split()) "A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx"
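For reference, the split/join idiom the answers converge on can be packaged as a small self-contained function and checked against the expected output. This is only a consolidating sketch of the approaches above (it runs under both Python 2 and 3):

def mark_singles(s):
    # append 'x' to every token that is a bare letter
    return ' '.join(m + 'x' if len(m) == 1 else m for m in s.split())

s = "A' B F2 E' B2 A2 C' D2 C D' E2 F"
assert mark_singles(s) == "A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx"
print(mark_singles(s))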
String manipulation in Python
I have a randomly generated string from 6 letters in this form, for example: A' B F2 E' B2 A2 C' D2 C D' E2 F Some letters have " ' " added to them, some have the number "2". What I want is to add letter "x" to every letter that is on its own. So it would look like this: A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx The trick is that it would add the "x" only to those letters that are on their own. That is, no B2 -> Bx2. Any ideas?
[ "Transform your string into list with split()\ns = \"\"\"A' B F2 E' B2 A2 C' D2 C D' E2 F\"\"\"\n\nL = s.split(' ')\n\nfor i in xrange(len(L)):\n if len(L[i]) == 1:\n L[i] += 'x'\n\nstr_out = ' '.join(L)\n\n", "The split-comprehend-join version:\n' '.join(n+'x' if len(n)==1 else n for n in inputstr.split(' '))\n\nThe regex version:\n>>> inputstr = \"A' F B2 C\"\n>>> re.sub(r'([A-Z])(?=\\s|$)', r'\\1x', inputstr)\n\"A' Fx B2 Cx\"\n\nIn essence, find any uppercase letter not followed by either a space or the end of the string, and replace it with that character followed by an x.\nI ran a few tests with timeit; the former (list comprehension) appears to run slightly faster than the latter (about 15-20% faster on average). This does not appear to change no matter the number of replacements that need to be done (a string 10 times as long still has about the same ratio of processing time as the original).\n", "Ugly or Pythonic?\nitems = \"A' B F2 E' B2 A2 C' D2 C D' E2 F\".split()\n\nitemsx = ((a+'x' if len(a)==1 else a) for a in items)\nout = ' '.join(itemsx)\n\n", "With a regular expression,\nimport re\nnewstring = re.sub(r\"\\b(\\w)(?![2'])\", r'\\1x', oldstring)\n\nshould be fine. If you're allergic to res,\nnews = ' '.join(x + 'x' if len(x)==1 else x for x in olds.split())\n\nis a concise way of expressing a similar transformation (if length-one is really the only thing you need to check before appending 'x' to an item).\n", "' '.join(n if len(n) == 2 else n + 'x' for n in s.split(' '))\n\n", ">>> s=\"A' B F2 E' B2 A2 C' D2 C D' E2 F\".split()\n>>> import string\n>>> letters=list(string.letters)\n>>> for n,i in enumerate(s):\n... if i in letters:\n... s[n]=i+\"x\"\n...\n>>> ' '.join(s)\n\"A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx\"\n>>>\n\n", ">>> ' '.join((i+'x')[:2] for i in items.split())\n\"A' Bx F2 E' B2 A2 C' D2 Cx D' E2 Fx\"\n\n" ]
[ 5, 4, 3, 2, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002264202_python.txt
Q: Python: How to access parent class object through derived class instance? I'm sorry for my silly question, but... let's suppose I have these classes: class A(): msg = 'hehehe' class B(A): msg = 'hohoho' class C(B): pass and an instance of B or C. How do I get the variable 'msg' from the parent's class object through this instance? I've tried this: foo = B() print super(foo.__class__).msg but got the message: "TypeError: super() argument 1 must be type, not classobj". A: You actually want to use class A(object): ... ... b = B() bar = super(b.__class__, b) print bar.msg Base classes must be new-style classes (inherit from object) A: If the class is single-inherited: foo = B() print foo.__class__.__bases__[0].msg # 'hehehe' If the class is multiple-inherited, the question makes no sense because there may be multiple classes defining the 'msg', and they could all be meaningful. You'd better provide the actual parent (i.e. A.msg). Alternatively you could iterate through all direct bases as described in @Felix's answer. A: Not sure why you want to do this >>> class A(object): ... msg = 'hehehe' ... >>> class B(A): ... msg = 'hohoho' ... >>> foo=B() >>> foo.__class__.__mro__[1].msg 'hehehe' >>> A: Try with: class A(object): msg = 'hehehe' EDIT: For the 'msg' attribute you would need: foo = B() bar = super(foo.__class__, foo) print bar.msg A: As msg is a class variable, you can just do: print C.msg # prints hohoho If you overwrite the variable (as you do in class B), you have to find the right parent class. Remember that Python supports multiple inheritance. But as you define the classes and you know that B inherits from A, you can always do this: class B(A): msg = 'hohoho' def get_parent_message(self): return A.msg UPDATE: The most reliable thing would be: def get_parent_attribute(instance, attribute): for parent in instance.__class__.__bases__: if attribute in parent.__dict__: return parent.__dict__[attribute] and then: foo = B() print get_parent_attribute(foo, 'msg') A: #for B() you can use __bases__ print foo.__class__.__bases__[0].msg But this is not gonna be easy when there are multiple base classes and/or the depth of hierarchy is not one.
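To tie the answers together, here is a minimal runnable sketch showing both the super() route and the __mro__ route on new-style classes (the class bodies are the ones from the question, with A made a subclass of object):

class A(object):
    msg = 'hehehe'

class B(A):
    msg = 'hohoho'

class C(B):
    pass

foo = C()
print(super(C, foo).msg)             # 'hohoho': lookup starts at B, the next class in C's MRO
print(foo.__class__.__mro__[2].msg)  # 'hehehe': C's MRO is (C, B, A, object)
print(A.msg)                         # 'hehehe': naming the parent explicitly is unambiguous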
Python: How to access parent class object through derived class instance?
I'm sorry for my silly question, but... let's suppose I have these classes: class A(): msg = 'hehehe' class B(A): msg = 'hohoho' class C(B): pass and an instance of B or C. How do I get the variable 'msg' from the parent's class object through this instance? I've tried this: foo = B() print super(foo.__class__).msg but got the message: "TypeError: super() argument 1 must be type, not classobj".
[ "You actually want to use\nclass A(object):\n ...\n...\nb = B()\nbar = super(b.__class__, b)\nprint bar.msg\n\nBase classes must be new-style classes (inherit from object)\n", "If the class is single-inherited:\nfoo = B()\nprint foo.__class__.__bases__[0].msg\n# 'hehehe'\n\nIf the class is multiple-inherited, the question makes no sense because there may be multiple classes defining the 'msg', and they could all be meaningful. You'd better provide the actual parent (i.e. A.msg). Alternatively you could iterate through all direct bases as described in @Felix's answer.\n", "Not sure why you want to do this\n>>> class A(object):\n... msg = 'hehehe'\n... \n>>> class B(A):\n... msg = 'hohoho'\n... \n>>> foo=B()\n>>> foo.__class__.__mro__[1].msg\n'hehehe'\n>>> \n\n", "Try with:\nclass A(object):\n msg = 'hehehe'\n\nEDIT:\nFor the 'msg' attribute you would need:\nfoo = B()\nbar = super(foo.__class__, foo)\nprint bar.msg\n\n", "As msg is a class variable, you can just do:\nprint C.msg # prints hohoho\n\nIf you overwrite the variable (as you do in class B), you have to find the right parent class. Remember that Python supports multiple inheritance.\nBut as you define the classes and you now that B inherits from A you can always do this:\nclass B(A):\n msg = 'hohoho'\n\n def get_parent_message(self):\n return A.msg\n\nUPDATE:\nThe most reliable thing would be:\ndef get_parent_attribute(instance, attribute):\n for parent in instance.__class__.__bases__:\n if attribute in parent.__dict__:\n return parent.__dict__[attribute]\n\nand then:\nfoo = B()\nprint get_parent_attribute(foo, 'msg')\n\n", "#for B() you can use __bases__\nprint foo.__class__.__bases__[0].msg\n\nBut this is not gonna be easy when there are multiple base classes and/or the depth of hierarchy is not one.\n" ]
[ 15, 11, 2, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002265060_python.txt
Q: Indicating that GET response is complete w/ Python AppEngine When I get a GET request from a user, I send them the response and then spend maybe a second logging stuff about that request. Is there a way to close the connection when I have the response ready, but continue doing that logging part, so that the user wouldn't have to wait for it to complete? A: From the Google App Engine docs for the Response object: App Engine does not support sending data to the user's browser before exiting the handler. Some web servers use this technique to "stream" data to the user's browser over a period of time in response to a single request. App Engine does not support this streaming technique. So there's no easy way. If you have a bundle of data that you can pass to a longer-running "process and log" method, try using the deferred library. Note that this will require bundling your data up and sending it to the task queue to do your processing and logging, so you may not save much time, and the results may not look much like you'd want - for example, you'd be logging from a different request, so might need to radically alter the logging. Still, you could try. A: You have two options: Use the Task Queue API. Enqueueing a task should be fast, so long as you have less than 10k of data (which is the limit on a Task Queue payload). Use the 'sneaky' trick described by Rafe in this video to do processing after the response completes.
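A rough sketch of the deferred approach from the first answer, under the Python 2 App Engine SDK (do_logging and the payload shape are hypothetical; google.appengine.ext.deferred and webapp are the SDK modules the answer refers to):

import logging
from google.appengine.ext import deferred, webapp

def do_logging(path, data):
    # runs later, in a separate task-queue request of its own
    logging.info('handled %s: %r', path, data)

class MyHandler(webapp.RequestHandler):
    def get(self):
        self.response.out.write('response body')
        # enqueue the slow logging work instead of doing it inline;
        # the arguments must be pickleable and within the task size limit
        deferred.defer(do_logging, self.request.path, {'some': 'data'})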
Indicating that GET response is complete w/ Python AppEngine
When I get a GET request from a user, I send them the response and then spend maybe a second logging stuff about that request. Is there a way to close the connection when I have the response ready, but continue doing that logging part, so that the user wouldn't have to wait for it to complete?
[ "From the Google App Engine docs for the Response object:\n\nApp Engine does not support sending\n data to the user's browser before\n exiting the handler. Some web servers\n use this technique to \"stream\" data to\n the user's browser over a period of\n time in response to a single request.\n App Engine does not support this\n streaming technique.\n\nSo there's no easy way. If you have a bundle of data that you can pass to a longer-running \"process and log\" method, try using the deferred library. Note that this will requiring bundling your data up and sending it to the task queue to do your processing and logging, so\n\nyou may not save much time, and\nthe results may not look much like you'd want - for example, you'd be logging from a different request, so might need to radically alter the logging\n\nStill, you could try.\n", "You have two options:\n\nUse the Task Queue API. Enqueueing a task should be fast, so long as you have less than 10k of data (which is the limit on a Task Queue payload).\nUse the 'sneaky' trick described by Rafe in this video to do processing after the response completes.\n\n" ]
[ 3, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0002261479_google_app_engine_python.txt
Q: Why I can't alert this string which is returned from django render_to_response django code: return render_to_response(template_name, { "form": form, }, context_instance=RequestContext(request)) and html: <script type="text/javascript"> var a='{{form}}' alert(a) </script> Its error is 'unterminated string literal', and I see this in firebug: <script type="text/javascript"> var a='"<tr><th><label for="id_username">Username:</label></th><td><input id="id_username" type="text" class="textinput" name="username" maxlength="30" /></td></tr><tr><th><label for="id_email">Email (optional):</label></th><td><input id="id_email" type="text" class="textinput" name="email" /></td></tr>"'; alert(a) </script> How do I alert the 'form' string? Thanks A: Maybe try putting in semi-colons at the ends of the lines in your Django template file? <script type="text/javascript"> var a='{{form}}'; alert(a); </script> Odd though, I’m pretty sure semi-colons are optional there. Could you do a View Source in Firefox (instead of looking via Firebug), and see what HTML is actually being output by Django? A: Check the HTML source of the page using "View source" rather than Firebug. I predict your {{form}} value has a line break in it, which will cause the error you're seeing. A: I am unable to reproduce this error, pasting the above into a javascript console works without problems. Is it possible that there is something else done with a? Is above a simplification or exact javascript? Maybe you can show the entire HTML?
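The second answer has the likely diagnosis: the rendered form spans several lines, and a raw newline inside a single-quoted JavaScript literal is exactly what produces 'unterminated string literal'. A quick way to see and sidestep this from Python — json.dumps of a string is also a valid, fully escaped JavaScript string literal (on Python 2.5 substitute simplejson):

import json

rendered = unicode(form)      # the same HTML that {{form}} emits, newlines included
print(repr(rendered))         # shows the embedded '\n' characters
safe = json.dumps(rendered)   # quoted and escaped, safe to drop into a <script> block

In the template itself, piping through Django's escapejs filter (var a = '{{ form|escapejs }}';) achieves the same thing, assuming your Django version ships that filter.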
Why I can't alert this string which is returned from django render_to_response
django code: return render_to_response(template_name, { "form": form, }, context_instance=RequestContext(request)) and html: <script type="text/javascript"> var a='{{form}}' alert(a) </script> Its error is 'unterminated string literal', and I see this in firebug: <script type="text/javascript"> var a='"<tr><th><label for="id_username">Username:</label></th><td><input id="id_username" type="text" class="textinput" name="username" maxlength="30" /></td></tr><tr><th><label for="id_email">Email (optional):</label></th><td><input id="id_email" type="text" class="textinput" name="email" /></td></tr>"'; alert(a) </script> How do I alert the 'form' string? Thanks
[ "Maybe try putting in semi-colons at the ends of the lines in your Django template file?\n<script type=\"text/javascript\">\n var a='{{form}}';\n\n alert(a);\n</script>\n\nOdd though, I’m pretty sure semi-colons are optional there. Could you do a View Source in Firefox (instead of looking via Firebug), and see what HTML is actually being output by Django?\n", "Check the HTML source of the page using \"View source\" rather than Firebug. I predict your {{form}} value has a line break in it, which will cause the error you're seeing.\n", "I am unable to reproduce this error, pasting the above into a javascript console works without problems. Is it possible that there is something else done with a? Is above a simplification or exact javascript? Maybe you can show the entire HTML?\n" ]
[ 1, 1, 0 ]
[]
[]
[ "django", "javascript", "python" ]
stackoverflow_0002264503_django_javascript_python.txt
Q: How would you solve this GPS/location problem and scale it? Would you use a Database? R-tree? Suppose I have people and their GPS coordinates: User1, 52.99, -41.0 User2, 91.44, -21.4 User3, 5.12, 24.5 ... My objective is: Given a set of coordinates, Out of all those users, find the ones within 20 meters. (how to do a SELECT statement like this?) For each of those users, get the distance. As you probably guessed, these coordinates will be retrieved from a mobile phone. The phones will update their longitude/latitude every 10 seconds, as well as get that list of users <20 meters. It's dynamic. I would like the best way to do this so that it can scale. Would you store the coordinates in a database, and update it every 10 seconds? (If you store it in a database...how would you calculate it...) How would you do this so it can scale? By the way, there is already a formula that can calculate the distance between 2 coordinates http://www.johndcook.com/python_longitude_latitude.html. I just need to know what's the best way to do this technically (Trees, Database? What architecture? More specifically...how would you tie in the long/lat distance formula into the "SELECT" statement?) A: Create a MyISAM table with a column of datatype Point Create a SPATIAL index on this column Convert the GPS coords into UTM (grid) coords and store them in your table Issue this query: SELECT user_id, GLength(LineString(user_point, @mypoint)) FROM users WHERE MBRWithin(user_point, LineString(Point(X(@mypoint) - 20, Y(@mypoint - 20)), Point(X(@mypoint) + 20, Y(@mypoint + 20)) AND GLength(LineString(user_point, @mypoint)) <= 20 Note that this query will most probably be run on very volatile data and you will need to do the additional checks on time. Since MySQL cannot combine SPATIAL indexes, it will be better to use some kind of surface tiling technology: Split the Earth surface into a number of tiles, say, 1 x 1 " (it's about 30 meters of the meridian and 30 * COS(lon) of the parallel. Store the data in the CHAR(14) column: 7 digits of the lat + 7 digits on the lon (14 digits at all). Disable key compression on this column. Create a composite index on (time, tile) On the client, calculate all possible tiles your mates may be in. For 20 meters distance, this will be at most 9 tiles, unless you are deep at North or South. However, you may change the tiling algorithm to handle these cases. Issue this query: SELECT * FROM ( SELECT tile1 UNION ALL SELECT tile2 UNION ALL … ) tiles JOIN users u ON u.tile = tiles.tile AND u.time >= NOW() AND GLength(LineString(user_point, @mypoint)) <= 20 , where tile1 etc are precalculated tiles. SQL Server implements this algorithm for its spatial indexes (rather than R-Tree that MySQL uses). A: Well, the naive approach would be to do an O(n) pass over all points, get their distance from the current point, and find the top 20. This is perfectly Ok for small datasets (say <= 500 points), but on larger sets it's going to be quite slow. In SQL, this would be along the lines of: SELECT point_id, DIST_FORMULA(x, y) as distance FROM points WHERE distance < 20 To address the inefficiency of the above method, you would have to use some sort of preprocessing step, most likely space partitioning. That can often dramatically improve performance in nearest neighbour type of searches like this. However, in your case, if all the points are updated every 10 seconds, you would have to do an Ω(n) pass to update the position of each point in the space partitioning tree. If you have more than a few queries between each update, it will be useful, otherwise it'll simply be an overhead. A: Chapter 11 "Database Design Know it All" has some thoughts on how to design such a database.
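For the distance check itself, here is a small self-contained sketch of the naive O(n) scan discussed in the second answer, using the haversine formula the question links to (users is an assumed in-memory list of (id, lat, lon) tuples; a real deployment would push the bounding-box prefilter into the database as the first answer shows):

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in meters between two lat/lon points
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def neighbours(users, lat, lon, limit_m=20.0):
    # naive O(n) pass: fine for small n, needs a spatial index to scale
    for user_id, ulat, ulon in users:
        d = haversine_m(lat, lon, ulat, ulon)
        if d <= limit_m:
            yield user_id, d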
How would you solve this GPS/location problem and scale it? Would you use a Database? R-tree?
Suppose I have people and their GPS coordinates: User1, 52.99, -41.0 User2, 91.44, -21.4 User3, 5.12, 24.5 ... My objective is: Given a set of coordinates, Out of all those users, find the ones within 20 meters. (how to do a SELECT statement like this?) For each of those users, get the distance. As you probably guessed, these coordinates will be retrieved from a mobile phone. The phones will update their longitude/latitude every 10 seconds, as well as get that list of users <20 meters. It's dynamic. I would like the best way to do this so that it can scale. Would you store the coordinates in a database, and update it every 10 seconds? (If you store it in a database...how would you calculate it...) How would you do this so it can scale? By the way, there is already a formula that can calculate the distance between 2 coordinates http://www.johndcook.com/python_longitude_latitude.html. I just need to know what's the best way to do this technically (Trees, Database? What architecture? More specifically...how would you tie in the long/lat distance formula into the "SELECT" statement?)
[ "\nCreate a MyISAM table with a column of datatype Point\nCreate a SPATIAL index on this column\nConvert the GPS coords into UTM (grid) coords and store them in your table\nIssue this query:\nSELECT user_id, GLength(LineString(user_point, @mypoint))\nFROM users\nWHERE MBRWithin(user_point, LineString(Point(X(@mypoint) - 20, Y(@mypoint - 20)), Point(X(@mypoint) + 20, Y(@mypoint + 20))\n AND GLength(LineString(user_point, @mypoint)) <= 20\n\n\nNote that this query will most probably be run on very volatile data and you will need to do the additional checks on time.\nSince MySQL cannot combine SPATIAL indexes, it will be better to use some kind of surface tiling technology:\n\nSplit the Earth surface into a number of tiles, say, 1 x 1 \" (it's about 30 meters of the meridian and 30 * COS(lon) of the parallel.\nStore the data in the CHAR(14) column: 7 digits of the lat + 7 digits on the lon (14 digits at all). Disable key compression on this column.\nCreate a composite index on (time, tile)\nOn the client, calculate all possible tiles your mates may be in. For 20 meters distance, this will be at most 9 tiles, unless you are deep at North or South. However, you may change the tiling algorithm to handle these cases.\nIssue this query:\nSELECT *\nFROM (\n SELECT tile1\n UNION ALL\n SELECT tile2\n UNION ALL\n …\n ) tiles\nJOIN users u\nON u.tile = tiles.tile\n AND u.time >= NOW() \n AND GLength(LineString(user_point, @mypoint)) <= 20\n\n\n, where tile1 etc are precalculated tiles.\nSQL Server implements this algorithm for its spatial indexes (rather than R-Tree that MySQL uses).\n", "Well, the naive approach would be to do an O(n) pass over all points, get their distance from the current point, and find the top 20. This is perfectly Ok for small datasets (say <= 500 points), but on larger sets it's going to be quite slow. In SQL, this would be along the lines of:\nSELECT point_id, DIST_FORMULA(x, y) as distance\nFROM points\nWHERE distance < 20\n\nTo address the inefficiency of the above method, you would have to use some sort of preprocessing step, most likely space partitioning. That can often dramatically improve performance in nearest neighbour type of searches like this. However, in your case, if all the points are updated every 10 seconds, you would have to do an Ω(n) pass to update the position of each point in the space partitioning tree. If you have more than a few queries between each update, it will be useful, otherwise it'll simply be an overhead.\n", "Chapter 11 \"Database Design Know it All\" has some thoughts on how to design such a database.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "computer_science", "database", "mysql", "python" ]
stackoverflow_0002265775_computer_science_database_mysql_python.txt
Q: How to notify user when django's custom action doesn't behave as expected? I am writing a custom action for django admin. This action should only work for records having a particular state. For example, an "Approve Blog" custom action should approve a user blog only when the blog is not approved. And it must not approve rejected blogs. One option is to filter non approved blogs and then approve them. But there are still chances that rejected blogs can be approved. If a user tries to approve a rejected blog, the custom action should notify the user about the invalid operation in the django admin. Any solution? A: The documentation on admin actions is quite helpful, so go take a look! I think just writing an action that only updates non-rejected blogs ought to do. The following code assumes you've got variables rejected and approved that map to the integral values representing Blogs that have been rejected, and blogs that have been approved respectively: class BlogAdmin(admin.ModelAdmin): ... actions = ['approve'] ... def approve(self, request, queryset): rejects = queryset.filter(state = rejected) if len(rejects) != 0: # You might want to raise an exception here, or notify yourself somehow self.message_user(request, "%s of the blogs you selected were already rejected." % len(rejects)) return rows_updated = queryset.update(state = approved) self.message_user(request, "%s blogs approved." % rows_updated) approve.short_description = "Mark selected blogs as approved"
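A variant of the same action that approves the valid subset instead of refusing the whole batch, and reports both counts (rejected and approved are the same assumed state constants as in the answer above):

def approve(self, request, queryset):
    skipped = queryset.filter(state=rejected).count()   # count() avoids fetching the rows
    updated = queryset.exclude(state=rejected).update(state=approved)
    self.message_user(request,
        "%s blogs approved, %s skipped because they were already rejected." % (updated, skipped))
approve.short_description = "Mark selected blogs as approved"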
How to notify user when django's custom action doesn't behave as expected?
I am writing a custom action for django admin. This action should only work for records having a particular state. For example, an "Approve Blog" custom action should approve a user blog only when the blog is not approved. And it must not approve rejected blogs. One option is to filter non approved blogs and then approve them. But there are still chances that rejected blogs can be approved. If a user tries to approve a rejected blog, the custom action should notify the user about the invalid operation in the django admin. Any solution?
[ "The documentation on admin actions is quite helpful, so go take a look!\nI think just writing an action that only updates non-rejected blogs ought to do.\nThe following code assumes you've got variables rejected and approved that map to the integral values representing Blogs that have been rejected, and blogs that have been approved respectively:\nclass BlogAdmin(admin.ModelAdmin):\n\n ...\n actions = ['approve']\n ...\n\n def approve(self, request, queryset):\n rejects = queryset.filter(state = rejected)\n if len(rejects) != 0:\n # You might want to raise an exception here, or notify yourself somehow\n self.message_user(request,\n \"%s of the blogs you selected were already rejected.\" % len(rejects))\n return\n\n rows_updated = queryset.update(state = approved)\n self.message_user(request, \"%s blogs approved.\" % rows_updated)\n approve.short_description = \"Mark selected blogs as approved\"\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_admin", "python" ]
stackoverflow_0002265101_django_django_admin_python.txt
Q: Permission problem of .egg of easy_install under windows7/vista I use easy_install to install python packages in a virtualenv under windows7. Due to UAC, I have to run the CMD as administrator for installing packages. Here comes the problem: I notice that I can't import the package from a normal user account. >>> import tempita Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named tempita But tempita-0.4-py2.6 is right there in site-packages. Also, when I run python as administrator, the import works correctly. So it's a permission problem. It's strange, I don't know why, but only .egg files are installed with restricted permission settings. I found an article about this problem: easy_install no longer easy on Vista It doesn't work to change the owner or permissions of the parent folder; the only solution I know is to modify the permissions of those egg files one by one. This is really annoying, so why does easy_install set such restricted permissions only on .egg files rather than .py files? And how can I solve this problem without shutting UAC down or running as a superuser? A: I've started using distribute in lieu of setuptools, because the distribute team has been much more proactive in tracking down problems. Curiously, it appears as if distribute no longer creates zip eggs on my Windows 7 system, perhaps for the permissions issues you've encountered. Switching to distribute might be a solution for you, although I would understand if that seems like more of a hack than a fix. A: You might be able to use ICACLS to reset the file permissions. ICACLS c:\Python26\lib\site-packages\*.egg /reset I suggest trying it with one file first before doing *.egg. Note that *.egg will likely match egg folders as well.
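If many eggs need fixing, the icacls reset from the second answer can be scripted rather than run one file at a time. A hedged sketch (run it from an elevated prompt; the site-packages path is an assumption and should point at your virtualenv):

import glob
import subprocess

SITE_PACKAGES = r'c:\Python26\Lib\site-packages'  # adjust to your environment

for egg in glob.glob(SITE_PACKAGES + r'\*.egg'):
    # /reset replaces the ACL with inherited permissions; /t recurses,
    # which also covers eggs installed as directories
    subprocess.check_call(['icacls', egg, '/reset', '/t'])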
Permission problem of .egg of easy_install under windows7/vista
I use easy_install to install python packages in a virtualenv under windows7. Due to UAC, I have to run the CMD as administrator for installing packages. Here comes the problem: I notice that I can't import the package from a normal user account. >>> import tempita Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named tempita But tempita-0.4-py2.6 is right there in site-packages. Also, when I run python as administrator, the import works correctly. So it's a permission problem. It's strange, I don't know why, but only .egg files are installed with restricted permission settings. I found an article about this problem: easy_install no longer easy on Vista It doesn't work to change the owner or permissions of the parent folder; the only solution I know is to modify the permissions of those egg files one by one. This is really annoying, so why does easy_install set such restricted permissions only on .egg files rather than .py files? And how can I solve this problem without shutting UAC down or running as a superuser?
[ "I've started using distribute in lieu of setuptools, because the distribute team has been much more proactive in tracking down problems. Curiously, it appears as if distribute no longer creates zip eggs on my Windows 7 system, perhaps for the permissions issues you've encountered. Switching to distribute might be a solution for you, although I would understand if that seems like more of a hack than a fix.\n", "You might be able to use ICACLS to reset the file permissions.\nICACLS c:\\Python26\\lib\\site-packages\\*.egg /reset\n\nI suggest trying it with one file first before doing *.egg. Note that *.egg will likely match egg folders as well.\n" ]
[ 0, 0 ]
[]
[]
[ "easy_install", "python", "virtualenv", "windows" ]
stackoverflow_0002264488_easy_install_python_virtualenv_windows.txt
Q: How do I prevent Qt buttons from appearing in a separate frame? I'm working on a PyQt application. Currently, there's a status panel (defined as a QWidget) which contains a QHBoxLayout. This layout is frequently updated with QPushButtons created by another portion of the application. Whenever the buttons which appear need to change (which is rather frequently) an update effect gets called. The existing buttons are deleted from the layout (by calling layout.removeWidget(button) and then button.setParent(None)) and the new buttons are added to the layout. Generally, this works. But occasionally, when I call button.setParent(None) on the button to delete, it causes it to pop out of the application and start floating in its own stand-alone frame. How can I remove a button from the layout and ensure it doesn't start floating? A: You should call the button's close() method. If you want it to be deleted when you close it, you can set the Qt.WA_DeleteOnClose attribute: button.setAttribute(Qt.WA_DeleteOnClose) A: Try calling QWidget::hide() on the button before removing from the layout if you don't want to delete your button.
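Putting the two answers together, a sketch of a removal routine that never lets a button become a top-level window (deleteLater() asks Qt to destroy the widget once control returns to the event loop):

def clear_buttons(layout, buttons):
    for button in buttons:
        layout.removeWidget(button)
        button.hide()          # keeps it from popping up as a floating frame
        button.deleteLater()   # safe deferred deletion instead of setParent(None)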
How do I prevent Qt buttons from appearing in a separate frame?
I'm working on a PyQt application. Currently, there's a status panel (defined as a QWidget) which contains a QHBoxLayout. This layout is frequently updated with QPushButtons created by another portion of the application. Whenever the buttons which appear need to change (which is rather frequently) an update effect gets called. The existing buttons are deleted from the layout (by calling layout.removeWidget(button) and then button.setParent(None)) and the new buttons are added to the layout. Generally, this works. But occasionally, when I call button.setParent(None) on the button to delete, it causes it to pop out of the application and start floating in its own stand-alone frame. How can I remove a button from the layout and ensure it doesn't start floating?
[ "You should call the button's close() method. If you want it to be deleted when you close it, you can set the Qt.WA_DeleteOnClose attribute:\nbutton.setAttribute(Qt.WA_DeleteOnClose)\n\n", "Try calling QWidget::hide() on the button before removing from the layout if you don't want to delete your button.\n" ]
[ 2, 2 ]
[]
[]
[ "pyqt", "pyqt4", "python", "qt", "qt4" ]
stackoverflow_0002264482_pyqt_pyqt4_python_qt_qt4.txt
Q: Updating profile with python-twitter I am trying to update my Profile info via python-twitter module. >>> api = twitter.Api(username="username", password="password") >>> user = api.GetUser(user="username") >>> user.SetLocation('New Location') The problem is that it is not getting updated and the documentation is unclear if there's another step I need to do - is there a "save" that I need to call or something like that? A: I don't believe that the python-twitter module currently supports updating a profile. SetLocation will only update your local user object that GetUser has returned. It would be relatively trivial to add support for this to the module though. Have a look at this method: account/update_profile and then add a new method to the Api class that calls account/update_profile with the updated user data.
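For the basic-auth era this question dates from, the suggested call can also be made without touching python-twitter's internals. A standalone sketch of POSTing to the account/update_profile endpoint named in the answer (the .json suffix and the location parameter name are assumptions to verify against the API docs):

import urllib
import urllib2

def update_location(username, password, location):
    mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, 'http://twitter.com/', username, password)
    opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr))
    body = urllib.urlencode({'location': location})
    return opener.open('http://twitter.com/account/update_profile.json', body).read()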
Updating profile with python-twitter
I am trying to update my Profile info via python-twitter module. >>> api = twitter.Api(username="username", password="password") >>> user = api.GetUser(user="username") >>> user.SetLocation('New Location') The problem is that it is not getting updated and the documentation is unclear if there's another step I need to do - is there a "save" that I need to call or something like that?
[ "I don't believe that the python-twitter module currently supports updating a profile. SetLocation will only update your local user object that GetUser has returned.\nIt would be relatively trivial to add support for this to the module though. Have a look at this method:\naccount/update_profile \nand then add a new method to the Api class that calls account/update_profile with the updated user data.\n" ]
[ 1 ]
[ "This are the setprofile methods from User:\nSetProfileBackgroundColor(self, profile_background_color)\n\nSetProfileBackgroundImageUrl(self, profile_background_image_url)\n\nSetProfileBackgroundTile(self, profile_background_tile)\n Set the boolean flag for whether to tile the profile background image.\n\n Args:\n profile_background_tile: Boolean flag for whether to tile or not.\n\nSetProfileImageUrl(self, profile_image_url)\n Set the url of the thumbnail of this user.\n\n Args:\n profile_image_url: The url of the thumbnail of this user\n\nSetProfileLinkColor(self, profile_link_color)\n\nSetProfileSidebarFillColor(self, profile_sidebar_fill_color)\n\nSetProfileTextColor(self, profile_text_color)\n\nYou can see a list of available methods at http://static.unto.net/python-twitter/0.6/doc/twitter.html\n" ]
[ -1 ]
[ "api", "python", "twitter" ]
stackoverflow_0001278192_api_python_twitter.txt
Q: processing text from a non-flat file (to extract information as if it *were* a flat file) I have a longitudinal data set generated by a computer simulation that can be represented by the following tables ('var' are variables): time subject var1 var2 var3 t1 subjectA ... t2 subjectB ... and subject name subjectA nameA subjectB nameB However, the file generated writes a data file in a format similar to the following: time t1 description subjectA nameA var1 var2 var3 subjectB nameB var1 var2 var3 time t2 description subjectA nameA var1 var2 var3 subjectB nameB var1 var2 var3 ...(and so on) I have been using a (python) script to process this output data into a flat text file so that I can import it into R, python, SQL, or awk/grep it to extract information - an example of the type of information desired from a single query (in SQL notation, after the data is converted to a table) is shown below: SELECT var1, var2, var3 FROM datatable WHERE subject='subjectB' I wonder if there is a more efficient solution as each of these data files can be ~100MB each (and I have hundreds of them) and creating the flat text file is time-consuming and takes up additional hard drive space with redundant information. Ideally, I would interact with the original data set directly to extract the information that I desire, without creating the extra flat text file... Is there an awk/perl solution for such tasks that is simpler? I'm quite proficient at text-processing in python but my skills in awk are rudimentary and I have no working knowledge of perl; I wonder if these or other domain-specific tools can provide a better solution. Thanks! Postscript: Wow, thanks to all! I am sorry that I cannot choose everyone's answers @FM: thanks. My Python script resembles your code without the filtering step. But your organization is clean. @PP: I thought I was already proficient in grep but apparently not! This is very helpful... but I think grepping becomes difficult when mixing the 'time' into the output (which I failed to include as a possible extraction scenario in my example! That's my bad). @ghostdog74: This is just fantastic... but modifying the line to get 'subjectA' was not straightforward... (though I'll be reading up more on awk in the meantime and hopefully I'll grok later). @weismat: Well stated. @S.Lott: This is extremely elegant and flexible - I was not asking for a python(ic) solution but this fits in cleanly with the parse, filter, and output framework suggested by PP, and is flexible enough to accommodate a number of different queries to extract different types of information from this hierarchical file. Again, I am grateful to everyone - thanks so much. A: This is what Python generators are all about. def read_as_flat( someFile ): line_iter= iter(someFile) time_header= None for line in line_iter: words = line.split() if words[0] == 'time': time_header = [ words[1:] ] # the "time" line description= line_iter.next() time_header.append( description ) elif words[0] in subjectNameSet: data = line_iter.next() yield time_header + data You can use this like a standard Python iterator for time, description, var1, var2, var3 in read_as_flat( someFile ): etc. A: If all you want is var1, var2, var3 upon matching a particular subject then you could try the following command: grep -A 1 'subjectB' The -A 1 command line argument instructs grep to print out the matched line and one line after the matched line (and in this case the variables come on a line after the subject). You might want to use the -E option to make grep search for a regular expression and anchor the subject search to the beginning-of-line (e.g. grep -A 1 -E '^subjectB'). Finally the output will now consist of the subject line and variable line you want. You may want to hide the subject line: grep -A 1 'subjectB' |grep -v 'subjectB' And you may wish to process the variable line: grep -A 1 'subjectB' |grep -v 'subjectB' |perl -pe 's/ /,/g' A: The best option would be to modify the computer simulation to produce rectangular output. Assuming you can't do that, here's one approach: In order to be able to use the data in R, SQL, etc. you need to convert it from hierarchical to rectangular one way or another. If you already have a parser that can convert the entire file into a rectangular data set, you are most of the way there. The next step is to add additional flexibility to your parser, so that it can filter out unwanted data records. Instead of having a file converter, you'll have a data extraction utility. The example below is in Perl, but you can do the same thing in Python. The general idea is to maintain a clean separation between (a) parsing, (b) filtering, and (c) output. That way, you have a flexible environment, making it easy to add different filtering or output methods, depending on your immediate data-crunching needs. You can also set up the filtering methods to accept parameters (either from command line or a config file) for greater flexibility. use strict; use warnings; read_file($ARGV[0], \&check_record); sub read_file { my ($file_name, $check_record) = @_; open(my $file_handle, '<', $file_name) or die $!; # A data structure to hold an entire record. my $rec = { time => '', desc => '', subj => '', name => '', vars => [], }; # A code reference to get the next line and do some cleanup. my $get_line = sub { my $line = <$file_handle>; return unless defined $line; chomp $line; $line =~ s/^\s+//; return $line; }; # Start parsing the data file. while ( my $line = $get_line->() ){ if ($line =~ /^time (\w+)/){ $rec->{time} = $1; $rec->{desc} = $get_line->(); } else { ($rec->{subj}, $rec->{name}) = $line =~ /(\w+) +(\w+)/; $rec->{vars} = [ split / +/, $get_line->() ]; # OK, we have a complete record. Now invoke our filtering # code to decide whether to export record to rectangular format. $check_record->($rec); } } } sub check_record { my $rec = shift; # Just an illustration. You'll want to parameterize this, most likely. write_output($rec) if $rec->{subj} eq 'subjectB' and $rec->{time} eq 't1' ; } sub write_output { my $rec = shift; print join("\t", $rec->{time}, $rec->{subj}, $rec->{name}, @{$rec->{vars}}, ), "\n"; } A: If you are lazy and have enough RAM, then I would work on a RAM disk instead of the file system as long as you need them immediately. I do not think that Perl or awk will be faster than Python if you are just recoding your current algorithm into a different language. A: awk '/time/{f=0}/subjectB/{f=1;next}f' file
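The generator in the accepted answer sketches the idea but yields a list concatenated with a raw line, so it will not run exactly as written. A corrected, self-contained variant (the file name and the set of subject names are assumptions for illustration):

def read_as_flat(lines, subject_names):
    # yield one (time, description, subject, name, variables) tuple per record
    line_iter = iter(lines)
    time_val = desc = None
    for line in line_iter:
        words = line.split()
        if not words:
            continue
        if words[0] == 'time':
            time_val = words[1]
            desc = next(line_iter).strip()
        elif words[0] in subject_names:
            variables = next(line_iter).split()
            yield time_val, desc, words[0], words[1], variables

for t, desc, subject, name, variables in read_as_flat(open('sim.out'), {'subjectA', 'subjectB'}):
    if subject == 'subjectB':   # the WHERE clause
        print(variables)        # the selected var1, var2, var3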
processing text from a non-flat file (to extract information as if it *were* a flat file)
I have a longitudinal data set generated by a computer simulation that can be represented by the following tables ('var' are variables): time subject var1 var2 var3 t1 subjectA ... t2 subjectB ... and subject name subjectA nameA subjectB nameB However, the file generated writes a data file in a format similar to the following: time t1 description subjectA nameA var1 var2 var3 subjectB nameB var1 var2 var3 time t2 description subjectA nameA var1 var2 var3 subjectB nameB var1 var2 var3 ...(and so on) I have been using a (python) script to process this output data into a flat text file so that I can import it into R, python, SQL, or awk/grep it to extract information - an example of the type of information desired from a single query (in SQL notation, after the data is converted to a table) is shown below: SELECT var1, var2, var3 FROM datatable WHERE subject='subjectB' I wonder if there is a more efficient solution as each of these data files can be ~100MB each (and I have hundreds of them) and creating the flat text file is time-consuming and takes up additional hard drive space with redundant information. Ideally, I would interact with the original data set directly to extract the information that I desire, without creating the extra flat text file... Is there an awk/perl solution for such tasks that is simpler? I'm quite proficient at text-processing in python but my skills in awk are rudimentary and I have no working knowledge of perl; I wonder if these or other domain-specific tools can provide a better solution. Thanks! Postscript: Wow, thanks to all! I am sorry that I cannot choose everyone's answers @FM: thanks. My Python script resembles your code without the filtering step. But your organization is clean. @PP: I thought I was already proficient in grep but apparently not! This is very helpful... but I think grepping becomes difficult when mixing the 'time' into the output (which I failed to include as a possible extraction scenario in my example! That's my bad). @ghostdog74: This is just fantastic... but modifying the line to get 'subjectA' was not straightforward... (though I'll be reading up more on awk in the meantime and hopefully I'll grok later). @weismat: Well stated. @S.Lott: This is extremely elegant and flexible - I was not asking for a python(ic) solution but this fits in cleanly with the parse, filter, and output framework suggested by PP, and is flexible enough to accommodate a number of different queries to extract different types of information from this hierarchical file. Again, I am grateful to everyone - thanks so much.
[ "This is what Python generators are all about.\ndef read_as_flat( someFile ):\n line_iter= iter(someFile)\n time_header= None\n for line in line_iter:\n words = line.split()\n if words[0] == 'time':\n time_header = [ words[1:] ] # the \"time\" line\n description= line_iter.next()\n time_header.append( description )\n elif words[0] in subjectNameSet:\n data = line_iter.next()\n yield time_header + data\n\nYou can use this like a standard Python iterator\nfor time, description, var1, var2, var3 in read_as_flat( someFile ):\n etc.\n\n", "If all you want is var1, var2, var3 upon matching a particular subject then you could try the following command:\n grep -A 1 'subjectB'\n\nThe -A 1 command line argument instructs grep to print out the matched line and one line after the matched line (and in this case the variables come on a line after the subject).\nYou might want to use the -E option to make grep search for a regular expression and anchor the subject search to the beginning-of-line (e.g. grep -A 1 -E '^subjectB').\nFinally the output will now consist of the subject line and variable line you want. You may want to hide the subject line:\n grep -A 1 'subjectB' |grep -v 'subjectB'\n\nAnd you may wish to process the variable line:\n grep -A 1 'subjectB' |grep -v 'subjectB' |perl -pe 's/ /,/g'\n\n", "The best option would be to modify the computer simulation to produce rectangular output. Assuming you can't do that, here's one approach:\nIn order to be able to use the data in R, SQL, etc. you need to convert it from hierarchical to rectangular one way or another. If you already have a parser that can convert the entire file into a rectangular data set, you are most of the way there. The next step is to add additional flexibility to your parser, so that it can filter out unwanted data records. Instead of having a file converter, you'll have a data extraction utility.\nThe example below is in Perl, but you can do the same thing in Python. The general idea is to maintain a clean separation between (a) parsing, (b) filtering, and (c) output. That way, you have a flexible environment, making it easy to add different filtering or output methods, depending on your immediate data-crunching needs. You can also set up the filtering methods to accept parameters (either from command line or a config file) for greater flexibility.\nuse strict;\nuse warnings;\n\nread_file($ARGV[0], \\&check_record);\n\nsub read_file {\n my ($file_name, $check_record) = @_;\n open(my $file_handle, '<', $file_name) or die $!;\n # A data structure to hold an entire record.\n my $rec = {\n time => '',\n desc => '',\n subj => '',\n name => '',\n vars => [],\n };\n # A code reference to get the next line and do some cleanup.\n my $get_line = sub {\n my $line = <$file_handle>;\n return unless defined $line;\n chomp $line;\n $line =~ s/^\\s+//;\n return $line;\n };\n # Start parsing the data file.\n while ( my $line = $get_line->() ){\n if ($line =~ /^time (\\w+)/){\n $rec->{time} = $1;\n $rec->{desc} = $get_line->();\n }\n else {\n ($rec->{subj}, $rec->{name}) = $line =~ /(\\w+) +(\\w+)/;\n $rec->{vars} = [ split / +/, $get_line->() ];\n\n # OK, we have a complete record. Now invoke our filtering\n # code to decide whether to export record to rectangular format.\n $check_record->($rec);\n }\n }\n}\n\nsub check_record {\n my $rec = shift;\n # Just an illustration. 
You'll want to parameterize this, most likely.\n write_output($rec)\n if $rec->{subj} eq 'subjectB'\n and $rec->{time} eq 't1'\n ;\n}\n\nsub write_output {\n my $rec = shift;\n print join(\"\\t\", \n $rec->{time}, $rec->{subj}, $rec->{name},\n @{$rec->{vars}},\n ), \"\\n\";\n}\n\n", "If you are lazy and have enough RAM, then I would work on a RAM disk instead of the file system as long as you need them immediately.\nI do not think that Perl or awk will be faster than Python if you are just recoding your current algorithm into a different language.\n", "awk '/time/{f=0}/subjectB/{f=1;next}f' file\n\n" ]
[ 4, 2, 2, 1, 1 ]
[]
[]
[ "awk", "flat_file", "perl", "python", "text_processing" ]
stackoverflow_0002264504_awk_flat_file_perl_python_text_processing.txt
Q: Encountering a problem while moving to Django 1.1 I'm trying to move from django 1.0.2 to 1.1 and I am getting the following error in one of my templates: Request Method: GET Request URL: http://localhost:8000/conserv/media_assets/vod/ Exception Type: TemplateSyntaxError Exception Value: Caught an exception while rendering: 'NoneType' object has no attribute 'label' Exception Location: /opt/local/Library/Frameworks/Python.framework/ Versions/2.6/lib/python2.6/site-packages/django/template/debug.py in render_node, line 81 Python Executable: /opt/local/Library/Frameworks/Python.framework/ Versions/2.6/Resources/Python.app/Contents/MacOS/Python Python Version: 2.6.2 The error is on the line with the "for" tag. My template: {% for field in upload_image_form %} <tr> <td class="label"> {{field.name}} </td> <td> {{field}} </td> </tr> {% endfor %} My form: class UploadImageForm(ModelForm): class Meta: model = ImageUpload fields = ('thumb') My model: class ImageUpload(models.Model): thumb = models.FileField(upload_to='thumbs', blank=True, null=True) Does anyone know how I can solve it? A: there's an error in your form class. The fields should be an iterable, but a tuple with one element should be written ('thumb',) instead of ('thumb'). Change your form class to : class UploadImageForm(ModelForm): class Meta: model = ImageUpload fields = ('thumb',) It should do the trick.
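The gotcha is easy to demonstrate in an interpreter: parentheses alone do not make a tuple, the trailing comma does, so ('thumb') is just the string, and iterating it yields single characters — which is presumably how the form machinery ended up looking for fields that do not exist:

>>> type(('thumb'))
<type 'str'>
>>> type(('thumb',))
<type 'tuple'>
>>> list(('thumb'))
['t', 'h', 'u', 'm', 'b']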
Encountering a problem while moving to Django 1.1
I'm trying to move from django 1.0.2 to 1.1 and I am getting the following error in one of my templates: Request Method: GET Request URL: http://localhost:8000/conserv/media_assets/vod/ Exception Type: TemplateSyntaxError Exception Value: Caught an exception while rendering: 'NoneType' object has no attribute 'label' Exception Location: /opt/local/Library/Frameworks/Python.framework/ Versions/2.6/lib/python2.6/site-packages/django/template/debug.py in render_node, line 81 Python Executable: /opt/local/Library/Frameworks/Python.framework/ Versions/2.6/Resources/Python.app/Contents/MacOS/Python Python Version: 2.6.2 The error is on the line with the "for" tag. My template: {% for field in upload_image_form %} <tr> <td class="label"> {{field.name}} </td> <td> {{field}} </td> </tr> {% endfor %} My form: class UploadImageForm(ModelForm): class Meta: model = ImageUpload fields = ('thumb') My model: class ImageUpload(models.Model): thumb = models.FileField(upload_to='thumbs', blank=True, null=True) Does anyone know how I can solve it?
[ "there's an error in your form class. The fields should be an iterable, but a tuple with one element should be written ('thumb',) instead of ('thumb'). Change your form class to :\nclass UploadImageForm(ModelForm):\n class Meta: \n model = ImageUpload \n fields = ('thumb',)\n\nIt should do the trick.\n" ]
[ 0 ]
[]
[]
[ "django_templates", "python" ]
stackoverflow_0002265914_django_templates_python.txt
Q: showing list item in python I want to manipulate feed which contains frequently updated (with time) contents using feed parser. Goal is to show all the contents of the updated feed. import feedparser d = feedparser.parse("some URL") print "Information of user" i = range(10) for i in d: print d.entries[i].summary print " " As parsing data is list, and list don't accept string as indices, it shows error like: File "F:\JavaWorkspace\Test\src\rss_parse.py", line 18, in <module> print d.entries[i].summary TypeError: list indices must be integers Then how can I get all contents? can anyone please show me some light on this issue? Thanks in advance! A: i is not an integer. I guess i is already an entry of the feed but better rename it: Try: for entry in d.entries: print entry.summary If you want the first 10 entries you have to do: try: for i in range(10): print d.entries[i].summary except IndexError: pass A: for i in range(10): print d.entries[i].summary A: You first assign a list of integers to i (i = range(10)) and then just lose the reference to this list. Are you sure you didn't mean: r = range(10) for i in r: or simply: for i in range(10): A: for all entries make: import feedparser d = feedparser.parse("some URL") print "Information of user" for i in range(len(d['entries'])): print d.entries[i].summary print " " A: import feedparser from StringIO import StringIO d = feedparser.parse("some URL") buff = StringIO() print >>buff, "Information of user" for i,e in enumerate(d.entries): print >>buff, i, e.summary print >>buff," " print buff If you need the index, I suggest also to use a String Buffer to do I/O operations on big string. A: Say you want to print the 10 first elements of the list if there is 10 or more, or what it contains otherwise. Felix allready proposed a working solution with exception management. You could also use itertools like below. import feedparser d = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml') from itertools import islice for elt in islice(d.entries, 1, 10): print elt.summary What is nice with islice is that if you want to access to elements from say 3 to 10 (a slice) it also works as easily. Just have to replace 1 with 3. It also works with step if you want say only even elements, etc.
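One more option worth noting: d.entries is a plain list, and slicing a list never raises IndexError, so the first-ten case needs no exception handling (sketch in the same Python 2 style as the answers):

import feedparser

d = feedparser.parse("some URL")
for entry in d.entries[:10]:   # works even when there are fewer than 10 entries
    print entry.summary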
showing list item in python
I want to manipulate feed which contains frequently updated (with time) contents using feed parser. Goal is to show all the contents of the updated feed. import feedparser d = feedparser.parse("some URL") print "Information of user" i = range(10) for i in d: print d.entries[i].summary print " " As parsing data is list, and list don't accept string as indices, it shows error like: File "F:\JavaWorkspace\Test\src\rss_parse.py", line 18, in <module> print d.entries[i].summary TypeError: list indices must be integers Then how can I get all contents? can anyone please show me some light on this issue? Thanks in advance!
[ "i is not an integer. I guess i is already an entry of the feed but better rename it:\nTry: \nfor entry in d.entries:\n print entry.summary\n\nIf you want the first 10 entries you have to do:\ntry:\n for i in range(10):\n print d.entries[i].summary\nexcept IndexError:\n pass\n\n", "for i in range(10):\n print d.entries[i].summary\n\n", "You first assign a list of integers to i (i = range(10)) and then just lose the reference to this list. Are you sure you didn't mean:\nr = range(10)\n\nfor i in r:\n\nor simply:\nfor i in range(10):\n\n", "for all entries make:\nimport feedparser\nd = feedparser.parse(\"some URL\")\n\nprint \"Information of user\" \n\nfor i in range(len(d['entries'])):\n print d.entries[i].summary \n\nprint \" \"\n\n", "import feedparser\nfrom StringIO import StringIO\nd = feedparser.parse(\"some URL\")\nbuff = StringIO()\nprint >>buff, \"Information of user\" \n\nfor i,e in enumerate(d.entries):\n print >>buff, i, e.summary \n\nprint >>buff,\" \"\nprint buff\n\nIf you need the index, I suggest also to use a String Buffer to do I/O operations on big string.\n", "Say you want to print the 10 first elements of the list if there is 10 or more, or what it contains otherwise. Felix allready proposed a working solution with exception management. You could also use itertools like below. \nimport feedparser\nd = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')\n\nfrom itertools import islice\n\nfor elt in islice(d.entries, 1, 10):\n print elt.summary\n\nWhat is nice with islice is that if you want to access to elements from say 3 to 10 (a slice) it also works as easily. Just have to replace 1 with 3. It also works with step if you want say only even elements, etc.\n" ]
[ 4, 4, 1, 1, 1, 0 ]
[]
[]
[ "feedparser", "python" ]
stackoverflow_0002265871_feedparser_python.txt
Q: Python, generating PDF using ReportLab.Platypus SimpleDocTemplate, date/time in header I'm working on a project in Python/Django which uses ReportLab's SimpleDocTemplate to generate PDF documents. All the documents generated have the current date/time printed in the top right corner. I can't see that it's being done anywhere in my code, is this a default behaviour in the SimpleDocTemplate object? How do I get rid of this? Regards, Haukur A: I've just tried to reproduce the behavior you described, but unfortunately I cant. So I don't think it's a default behavior. Maybe it would be a good idea if you post a small example where the production date/time in the header is visible. But if it's any help to you, here is what I've done: I used the following example from the user guide, which looks like this. But even when I call doc.build() without the additional arguments, I get no header at all.
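One thing worth checking: SimpleDocTemplate itself draws no header, but anything passed to build() via the onFirstPage/onLaterPages callbacks is stamped on every page, so a stray date/time in a corner usually comes from such a callback somewhere in the code. A minimal sketch for comparison (the coordinates are arbitrary):

from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import Paragraph, SimpleDocTemplate

def header(canvas, doc):
    # page decoration only happens in callbacks like this one
    canvas.drawRightString(540, 770, "only drawn because we asked for it")

doc = SimpleDocTemplate("test.pdf")
story = [Paragraph("Hello", getSampleStyleSheet()["Normal"])]
doc.build(story, onFirstPage=header, onLaterPages=header)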
Python, generating PDF using ReportLab.Platypus SimpleDocTemplate, date/time in header
I'm working on a project in Python/Django which uses ReportLab's SimpleDocTemplate to generate PDF documents. All the documents generated have the current date/time printed in the top right corner. I can't see that it's being done anywhere in my code, is this a default behaviour in the SimpleDocTemplate object? How do I get rid of this? Regards, Haukur
[ "I've just tried to reproduce the behavior you described, but unfortunately I cant. So I don't think it's a default behavior. Maybe it would be a good idea if you post a small example where the production date/time in the header is visible.\nBut if it's any help to you, here is what I've done: I used the following example from the user guide, which looks like this. But even when I call doc.build() without the additional arguments, I get no header at all.\n" ]
[ 2 ]
[]
[]
[ "django", "pdf_generation", "platypus", "python", "reportlab" ]
stackoverflow_0002265976_django_pdf_generation_platypus_python_reportlab.txt
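A note on the ReportLab thread above: a minimal sketch reproducing the answerer's test (the output path and text are placeholders). SimpleDocTemplate draws nothing in the page margins unless onFirstPage/onLaterPages callbacks are passed to build(), so any date/time header most likely comes from elsewhere in the project:

from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph

styles = getSampleStyleSheet()
doc = SimpleDocTemplate("test.pdf")  # hypothetical output path
# No onFirstPage/onLaterPages callbacks are passed, so no header is drawn.
doc.build([Paragraph("Hello, world.", styles["Normal"])])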
Q: How to break the following line of python I have come upon a couple of lines of code similar to this one, but I'm unsure how I should break it: blueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex()) Thanks in advance A: blueprint = Blueprint( self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex(), ) A: How about this blueprint_item = self.blueprint_map[str(self.ui.blueprint_combo.currentText())] blueprint = Blueprint(blueprint_item, runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex()) A: I'd do it this way: blueprint = Blueprint( self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex()) A: Anywhere within the brackets should work, such as: blueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex()) A: blueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex())
How to break the following line of python
I have come upon a couple of lines of code similar to this one, but I'm unsure how I should break it: blueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex()) Thanks in advance
[ "blueprint = Blueprint(\n self.blueprint_map[str(self.ui.blueprint_combo.currentText())],\n runs=self.ui.runs_spin.text(), \n me=self.ui.me_spin.text(),\n pe=self.ui.pe_skill_combo.currentIndex(),\n)\n\n", "How about this\nblueprint_item = self.blueprint_map[str(self.ui.blueprint_combo.currentText())]\nblueprint = Blueprint(blueprint_item,\n runs=self.ui.runs_spin.text(),\n me=self.ui.me_spin.text(),\n pe=self.ui.pe_skill_combo.currentIndex())\n\n", "I'd do it this way:\nblueprint = Blueprint(\n self.blueprint_map[str(self.ui.blueprint_combo.currentText())],\n runs=self.ui.runs_spin.text(),\n me=self.ui.me_spin.text(),\n pe=self.ui.pe_skill_combo.currentIndex())\n\n", "Anywhere within the brackets should work, such as:\nblueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())],\n runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(),\n pe=self.ui.pe_skill_combo.currentIndex())\n\n", "blueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())], \n runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(),\n pe=self.ui.pe_skill_combo.currentIndex())\n\n" ]
[ 14, 5, 4, 0, 0 ]
[]
[]
[ "pep8", "python" ]
stackoverflow_0002266659_pep8_python.txt
Q: Python: strange numbers being pulled from binary file /confusion with hex and decimals This might be extremely trivial, and if so I apologise, but I'm getting really confused with the outputs I'm getting: hex? decimal? what? Here's an example, and what it returns: >>> print 'Rx State: ADC Clk=', ADC_Clock_MHz,'MHz DDC Clk=', DDC_Clock_kHz,'kHz Temperature=', Temperature,'C' Rx State: ADC Clk= [1079246848L, 0L] MHz DDC Clk= [1078525952L, 0L] kHz Temperature= [1078140928L, 0L] C Now I admit this is slight guesswork because I don't know exactly what the data is - I have a specification of how to parse it out of the file, but it's giving me very strange answers. As you can see - the values are very similar, all around the 1078000000 mark, which leads me to believe I might be extracting something strange (like hex, but I don't think it is...) The structure is read as follows (apologies for length): #Read block more = 1 while(more == 1): a = array.array("L") a.fromfile(wholeFile,2) if len(a) == 2: structure_id = a[0] print 'structure_id: ', hex(structure_id) structure_length = a[1] print 'structure_length: ', structure_length else: print 'cannot read structure start' numDwords = (structure_length/4) - 2 - 1; print 'numDwords: ', numDwords content = array.array("L") content.fromfile(wholeFile,numDwords) if len(content) != numDwords: print 'cannot read structure' more = 0 ok = 0 and then the above example was retrieved from this by: pos = 2 v1 = [content[pos+1], content[pos]] pos = pos+2 v2 = [content[pos+1], content[pos]] pos = pos+2 v3 = [content[pos+1], content[pos]] pos = pos+2 ADC_Clock_MHz = v1 DDC_Clock_kHz = v2 Temperature = v3 Right sorry again for how verbose that was, but it's not just those values, it seems some values are ok and some aren't, which leads me to believe that the larger numbers are encoded differently... Also I have no idea why all the values are in pairs either! Pants question, but if anyone has any insight it'd be much appreciated. A: The contents are in pairs because you assign a pair to the variables (e.g. ADC_Clock_MHz = v1 and v1 = [content[pos+1], content[pos]]). You are basically assigning a list of two elements to v1 where the first element is the element in the index pos+1 in the array content and the second element is the element in the index pos in the array content. I'm a bit confused why you are constructing this list of those two elements. Are you trying to combine the two elements into a single number? I think you need to tell us a bit more about the file format if you can. Is this homework? And no, the output is not hex, it's decimal. I think you're reading the data incorrectly from the binary file. A: You might be happier with something like the following to read this block. If your data isn't unpacking properly, the most likely cause is that the header is not simply 2 Long integers. Or, it's possible that the data doesn't reflect the "endian-ness" of your platform. Using struct allows you to add a "<" or ">" to the format string to try different endian-ness. Also, you can easily change the format of the message to use ordinary ints or signed ints or floats without too much real work. import struct def read( someFile ): header= someFile.read( 8 ) id, length = struct.unpack( "LL", header ) print id, length body = someFile.read( length-8 ) # Common for length to include the header words = (length-8)//4 content= struct.unpack( "L"*words, body ) You can also print repr(header) and repr(body) to try and get a better sense of what your data actually looks like.
A: Found out it's incredibly easy to do this with numpy.fromfile Specify what you want to extract (e.g. uint32, int16 etc.) and it extracts it as an array. You can even specify your own types as a collection of existing types, meaning you can extract known structures in one go (e.g. 2 uint32s then 1 string then 5 int16s as an array of 8 values)
Python: strange numbers being pulled from binary file /confusion with hex and decimals
This might be extremely trivial, and if so I apologise, but I'm getting really confused with the outputs I'm getting: hex? decimal? what? Here's an example, and what it returns: >>> print 'Rx State: ADC Clk=', ADC_Clock_MHz,'MHz DDC Clk=', DDC_Clock_kHz,'kHz Temperature=', Temperature,'C' Rx State: ADC Clk= [1079246848L, 0L] MHz DDC Clk= [1078525952L, 0L] kHz Temperature= [1078140928L, 0L] C Now I admit this is slight guesswork because I don't know exactly what the data is - I have a specification of how to parse it out of the file, but it's giving me very strange answers. As you can see - the values are very similar, all around the 1078000000 mark, which leads me to believe I might be extracting something strange (like hex, but I don't think it is...) The structure is read as follows (apologies for length): #Read block more = 1 while(more == 1): a = array.array("L") a.fromfile(wholeFile,2) if len(a) == 2: structure_id = a[0] print 'structure_id: ', hex(structure_id) structure_length = a[1] print 'structure_length: ', structure_length else: print 'cannot read structure start' numDwords = (structure_length/4) - 2 - 1; print 'numDwords: ', numDwords content = array.array("L") content.fromfile(wholeFile,numDwords) if len(content) != numDwords: print 'cannot read structure' more = 0 ok = 0 and then the above example was retrieved from this by: pos = 2 v1 = [content[pos+1], content[pos]] pos = pos+2 v2 = [content[pos+1], content[pos]] pos = pos+2 v3 = [content[pos+1], content[pos]] pos = pos+2 ADC_Clock_MHz = v1 DDC_Clock_kHz = v2 Temperature = v3 Right sorry again for how verbose that was, but it's not just those values, it seems some values are ok and some aren't, which leads me to believe that the larger numbers are encoded differently... Also I have no idea why all the values are in pairs either! Pants question, but if anyone has any insight it'd be much appreciated.
[ "The contents are in pairs because you assign a pair to the variables (e.g. ADC_Clock_MHz = v1 and v1 = [content[pos+1], content[pos]]).\nYou are basically assigning a list of two elements to v1 where the first element is the element in the index pos+1 in the array content and the second element is the element in the index pos in the array content.\nI'm a bit confused why you are constructing this list of those two elements. Are you trying to combine the two elements into a single number? I think you need to tell us a bit more about the file format if you can. Is this homework?\nAnd no, the output is not hex, it's decimal. I think you're reading the data incorrectly from the binary file.\n", "You might be happier with something like the following to read this block.\nIf your data isn't unpacking properly, the most likely cause is that the header is not simply 2 Long integers. Or, it's possible that the doesn't reflect the \"endian-ness\" of your platform.\nUsing struct allows you to add a \"<\" or \">\" to the format string to try different endian-ness. Also, you can easily change the format of the message to use ordinary ints or signed ints or floats without too much real work.\nimport struct\ndef read( someFile ):\n header= someFile.read( 8 )\n id, length = struct.unpack( \"LL\", header )\n print id, length\n body = someFile.read( length-8 ) # Common for length to include the header\n words = (length-8)//4\n content= struct.unpack( \"L\"*words, body )\n\nYou can also print repr(header) and repr(body) to try and get a better sense of what your data actually looks like.\n", "Found out it's incredibly easy to do this with numpy.fromfile\nSpecify what you want to extract (e.g. uint32, int16 etc.) and it extracts it as an array.\nYou can even specify your own types as a collection of existing types, meaning you can extract known structures in one go (e.g. 2 uint32s then 1 string then 5 int16s as an array of 8 values)\n" ]
[ 0, 0, 0 ]
[]
[]
[ "binary", "hex", "numpy", "python" ]
stackoverflow_0002148538_binary_hex_numpy_python.txt
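An aside on the thread above, offered as a hunch rather than anything stated in it: the pairs the asker prints ([1078140928L, 0L] and friends) decode cleanly if each pair is taken as the high and low 32-bit words of an IEEE-754 double. A sketch to test that:

import struct

high, low = 1078140928, 0            # the Temperature pair from the question
raw = struct.pack('<LL', low, high)  # low word first, little-endian
print struct.unpack('<d', raw)[0]    # -> 38.25, a plausible temperature in C

Decoded the same way, the other two pairs come out as 80.0 (an ADC clock in MHz?) and 50.0 (a DDC clock in kHz?), which is what makes the hunch worth testing.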
Q: pygst - glimagesink callback I'm trying to use the 'glimagesink' element with python. The element (which is GObject inside) has a client-draw-callback property which should (in C++ at least) contain a function (bool func(uint t, uint w, uint h)) pointer. I've tried element.set_property('client-draw-callback', myfunc), and creating a function pointer with ctypes, but every time it says: TypeError: could not convert argument to correct param type I couldn't find any docs on using glimagesink or glfilterapp in python ): The working c++ code: gboolean drawCallback (GLuint texture, GLuint width, GLuint height) { ... } GstElement* glimagesink = gst_element_factory_make ("glimagesink", "glimagesink0"); g_object_set(G_OBJECT(glimagesink), "client-draw-callback", drawCallback, NULL) A: This isn't the problem you're having (as far as I can tell) but it's important to note that this API has changed recently; now it expects a void pointer of data which allows you to pass in a handle to user_data (or NULL) when you connect your callback. gboolean drawCallback (GLuint texture, GLuint width, GLuint height, gpointer data)
pygst - glimagesink callback
I'm trying to use the 'glimagesink' element with python. The element (which is GObject inside) has a client-draw-callback property which should (in C++ at least) contain a function (bool func(uint t, uint w, uint h)) pointer. I've tried element.set_property('client-draw-callback', myfunc), and creating a function pointer with ctypes, but every time it says: TypeError: could not convert argument to correct param type I couldn't find any docs on using glimagesink or glfilterapp in python ): The working c++ code: gboolean drawCallback (GLuint texture, GLuint width, GLuint height) { ... } GstElement* glimagesink = gst_element_factory_make ("glimagesink", "glimagesink0"); g_object_set(G_OBJECT(glimagesink), "client-draw-callback", drawCallback, NULL)
[ "This isn't the problem you're having (as far as I can tell) but it's important to note that\nthis API has changed recently, now it expects a void pointer of data which allows you to pass in a handle to user_data (or NULL) when you connect your callback.\ngboolean drawCallback (GLuint texture, GLuint width, GLuint height, gpointer data)\n\n" ]
[ 0 ]
[]
[]
[ "ctypes", "gstreamer", "opengl", "python" ]
stackoverflow_0001834990_ctypes_gstreamer_opengl_python.txt
Q: How do I get the content-type from the return of urlopen(url) in python2.x? Are there any functions in 3.x using the http.client.HTTPMessage().get_content_type()? A: urllib2.urlopen() returns an addinfourl with headers: >>> import urllib2 >>> f = urllib2.urlopen('http://www.python.org/') >>> f.headers['content-type'] 'text/html' >>>
How do I get the content-type from the return of urlopen(url) in python2.x?
Are there any functions in 3.x using the http.client.HTTPMessage().get_content_type()?
[ "urllib2.urlopen() returns an addinfourl with headers:\n>>> import urllib2\n>>> f = urllib2.urlopen('http://www.python.org/')\n>>> f.headers['content-type']\n'text/html'\n>>> \n\n" ]
[ 3 ]
[]
[]
[ "python" ]
stackoverflow_0002267568_python.txt
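For the 3.x half of the question above, a sketch of the equivalent call: urllib.request.urlopen() returns a response whose info() is the http.client.HTTPMessage the asker mentions, so get_content_type() is available on it:

import urllib.request

f = urllib.request.urlopen('http://www.python.org/')
print(f.info().get_content_type())  # e.g. 'text/html'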
Q: overloading augmented arithmetic assignments in python I'm new to Python so apologies in advance if this is a stupid question. For an assignment I need to overload augmented arithmetic assignments(+=, -=, /=, *=, **=, %=) for a class myInt. I checked the Python documentation and this is what I came up with: def __iadd__(self, other): if isinstance(other, myInt): self.a += other.a elif type(other) == int: self.a += other else: raise Exception("invalid argument") self.a and other.a refer to the int stored in each class instance. I tried testing this out as follows, but each time I get 'None' instead of the expected value 5: c = myInt(2) b = myInt(3) c += b print c Can anyone tell me why this is happening? Thanks in advance. A: You need to add return self to your method. Explanation: The semantics of a += b, when type(a) has a special method __iadd__, are defined to be: a = a.__iadd__(b) so if __iadd__ returns something different than self, that's what will be bound to name a after the operation. By missing a return statement, the method you posted is equivalent to one with return None. A: Augmented operators in Python have to return the final value to be assigned to the name they are called on, usually (and in your case) self. Like all Python methods, missing a return statement implies returning None. Also, Never ever ever raise Exception, which is impossible to catch sanely. The code to do so would have to say except Exception, which will catch all exceptions. In this case you want ValueError or TypeError. Don't typecheck with type(foo) == SomeType. In this (and virtually all) cases, isinstance works better or at least the same. Whenever you make your own type, like myInt, you should name it with capital letters so people can recognize it as a class name. A: Yes, you need "return self", it will look like this: def __iadd__(self, other): if isinstance(other, myInt): self.a += other.a return self elif type(other) == int: self.a += other return self else: raise Exception("invalid argument")
overloading augmented arithmetic assignments in python
I'm new to Python so apologies in advance if this is a stupid question. For an assignment I need to overload augmented arithmetic assignments(+=, -=, /=, *=, **=, %=) for a class myInt. I checked the Python documentation and this is what I came up with: def __iadd__(self, other): if isinstance(other, myInt): self.a += other.a elif type(other) == int: self.a += other else: raise Exception("invalid argument") self.a and other.a refer to the int stored in each class instance. I tried testing this out as follows, but each time I get 'None' instead of the expected value 5: c = myInt(2) b = myInt(3) c += b print c Can anyone tell me why this is happening? Thanks in advance.
[ "You need to add return self to your method. Explanation:\nThe semantics of a += b, when type(a) has a special method __iadd__, are defined to be:\n a = a.__iadd__(b)\n\nso if __iadd__ returns something different than self, that's what will be bound to name a after the operation. By missing a return statement, the method you posted is equivalent to one with return None.\n", "Augmented operators in Python have to return the final value to be assigned to the name they are called on, usually (and in your case) self. Like all Python methods, missing a return statement implies returning None.\nAlso,\n\nNever ever ever raise Exception, which is impossible to catch sanely. The code to do so would have to say except Exception, which will catch all exceptions. In this case you want ValueError or TypeError.\nDon't typecheck with type(foo) == SomeType. In this (and virtually all) cases, isinstance works better or at least the same.\nWhenever you make your own type, like myInt, you should name it with capital letters so people can recognize it as a class name.\n\n", "Yes, you need \"return self\", it will look like this:\ndef __iadd__(self, other):\n if isinstance(other, myInt):\n self.a += other.a\n return self\n elif type(other) == int:\n self.a += other\n return self\n else:\n raise Exception(\"invalid argument\")\n\n" ]
[ 14, 7, 1 ]
[]
[]
[ "operator_overloading", "operators", "python" ]
stackoverflow_0002267466_operator_overloading_operators_python.txt
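A quick demonstration of the point made in the answers above, using a hypothetical MyInt; removing the return statement makes c += b rebind c to None:

class MyInt(object):
    def __init__(self, a):
        self.a = a
    def __iadd__(self, other):
        self.a += other.a if isinstance(other, MyInt) else other
        return self  # the return value is what gets bound back to the name

c = MyInt(2)
c += MyInt(3)
print c.a  # 5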
Q: Reading a binary file in Python: takes a very long time to read certain bytes This is very odd. I'm reading some (admittedly very large: ~2GB each) binary files using numpy libraries in Python. I'm using the: thingy = np.fromfile(fileObject, np.int16, 1) method. This is right in the middle of a nested loop - I'm doing this loop 4096 times per 'channel', and this 'channel' loop 9 times for every 'receiver', and this 'receiver' loop 4 times (there's 9 channels per receiver, of which there are 4!). This is for every 'block', of which there are ~3600 per file. So you can see, very iterative and I know it will take a long time, but it was taking a LOT longer than I expected - on average 8.5 seconds per 'block'. I ran some benchmarks using time.clock() etc. and found everything going as fast as it should be, except for approximately 1 or 2 samples per 'block' (so 1 or 2 in 4096*9*4) where it would seem to get 'stuck' for a few seconds. Now this should be a case of returning a simple int16 from binary, not exactly something that should be taking seconds... why is it sticking? From the benchmarking I found it was sticking in the SAME place every time (block 2, receiver 8, channel 3, sample 1085 was one of them, for the record!), and it would get stuck there for approximately the same amount of time each run. Any ideas?! Thanks, Duncan A: Although it's hard to say without some kind of reproducible sample, this sounds like a buffering problem. The first part is buffered and until you reach the end of the buffer, it is fast; then it slows down until the next buffer is filled, and so on. A: Where are you storing the results? When lists/dicts/whatever get very large there can be a noticeable delay when they need to be reallocated and resized. A: Could it be that garbage collection is kicking in for the lists? Added: is it funny data, or blockno? What happens if you read the blocks in random order, along the lines r = range(4096) random.shuffle(r) # inplace for blockno in r: file.seek( blockno * ... ) ...
Reading a binary file in Python: takes a very long time to read certain bytes
This is very odd. I'm reading some (admittedly very large: ~2GB each) binary files using numpy libraries in Python. I'm using the: thingy = np.fromfile(fileObject, np.int16, 1) method. This is right in the middle of a nested loop - I'm doing this loop 4096 times per 'channel', and this 'channel' loop 9 times for every 'receiver', and this 'receiver' loop 4 times (there's 9 channels per receiver, of which there are 4!). This is for every 'block', of which there are ~3600 per file. So you can see, very iterative and I know it will take a long time, but it was taking a LOT longer than I expected - on average 8.5 seconds per 'block'. I ran some benchmarks using time.clock() etc. and found everything going as fast as it should be, except for approximately 1 or 2 samples per 'block' (so 1 or 2 in 4096*9*4) where it would seem to get 'stuck' for a few seconds. Now this should be a case of returning a simple int16 from binary, not exactly something that should be taking seconds... why is it sticking? From the benchmarking I found it was sticking in the SAME place every time (block 2, receiver 8, channel 3, sample 1085 was one of them, for the record!), and it would get stuck there for approximately the same amount of time each run. Any ideas?! Thanks, Duncan
[ "Although it's hard to say without some kind of reproducible sample, this sounds like a buffering problem. The First part is buffered and until you reach the end of the buffer, it is fast; then it slows down until the next buffer is filled, and so on.\n", "Where are you storing the results? When lists/dicts/whatever get very large there can be a noticeable delay when they need to be reallocated and resized.\n", "Could it be that garbage collection is kicking in for the lists ?\nAdded: is it funny data, or blockno ? What happens if you read the blocks in random order, along the lines\nr = range(4096)\nrandom.shuffle(r) # inplace\nfor blockno in r:\n file.seek( blockno * ... )\n ...\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "binary", "binaryfiles", "numpy", "python" ]
stackoverflow_0002265930_binary_binaryfiles_numpy_python.txt
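Not raised in the answers above, but worth noting for this access pattern: reading each channel's 4096 samples in one call instead of one sample at a time removes most of the per-call Python overhead. A sketch, assuming a channel's samples are contiguous on disk (the file name is a placeholder):

import numpy as np

with open('data.bin', 'rb') as f:  # hypothetical file
    for receiver in range(4):
        for channel in range(9):
            # one read per channel instead of 4096 one-sample reads
            samples = np.fromfile(f, np.int16, 4096)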
Q: Django Loading Templates with Inheritance from Specific Directory In our project, we have a bunch of different templates that clients can choose from (for their webstore). The file layout is something like this: templates cart.html closed.html head.html standard bishop default indiana marley mocca nihilists raconteurs tripwire Every subfolder of standard contains a few template files like base.html, browse.html and item.html. Browse and Item inherit from base. What I want to do is render the browse template in a specific template folder (let's say templates/standard/bishop) isolated from any other global template path settings in my app. Is there a way to do that? UPDATE: I'll try to be more clear. If I just render browse.html from the bishop subfolder it tries to extend base.html and it can't find it. I could alter the settings template path to include the bishop folder, but I'm looking for a way to make it work leaving that alone. A: In your templates/standard/bishop/browse.html template you're doing the following: {% extends "base.html" %} This refers to templates/base.html and not templates/standard/bishop/base.html. By default Django will check your installed applications as well as the template directories that you specified under TEMPLATE_DIRS in settings.py. This behavior is specified by TEMPLATE_LOADERS in settings.py: http://docs.djangoproject.com/en/dev/ref/settings/#template-loaders http://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types You might be able to get away with what you're trying to do by creating your own template loader, otherwise simply specify the actual path to base.html: {% extends "standard/bishop/base.html" %}
Django Loading Templates with Inheritance from Specific Directory
In our project, we have a bunch of different templates that clients can choose from (for their webstore). The file layout is something like this: templates cart.html closed.html head.html standard bishop default indiana marley mocca nihilists raconteurs tripwire Every subfolder of standard contains a few template files like base.html, browse.html and item.html. Browse and Item inherit from base. What I want to do is render the browse template in a specific template folder (let's say templates/standard/bishop) isolated from any other global template path settings in my app. Is there a way to do that? UPDATE: I'll try to be more clear. If I just render browse.html from the bishop subfolder it tries to extend base.html and it can't find it. I could alter the settings template path to include the bishop folder, but I'm looking for a way to make it work leaving that alone.
[ "In your templates/standard/bishop/browse.html template you're doing the following:\n{% extends \"base.html\" %}\n\nThis refers to templates/base.html and not templates/standard/bishop/base.html. By default Django will check your installed applications as well as the template directories that you specified under TEMPLATE_DIRS in settings.py.\nThis behavior is specified by TEMPLATE_LOADERS in settings.py:\n\nhttp://docs.djangoproject.com/en/dev/ref/settings/#template-loaders\nhttp://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types\n\nYou might be able to get away with what you're trying to do by creating your own template loader, otherwise simply specify the actual path to base.html:\n{% extends \"standard/bishop/base.html\" %}\n\n" ]
[ 5 ]
[]
[]
[ "django", "python", "templates" ]
stackoverflow_0002266530_django_python_templates.txt
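If the goal in the thread above is to pick a client's theme without touching TEMPLATE_DIRS, one hedged approach (the view name and theme lookup are made up for illustration) is to build the template path in the view and keep the extends tags fully qualified, as the answer suggests:

from django.shortcuts import render_to_response

def browse(request):
    theme = 'bishop'  # hypothetical: however the client's theme is determined
    return render_to_response('standard/%s/browse.html' % theme, {})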
Q: Authenticated commenting in Django 1.1? (Now that Django 1.1 is in release candidate status, it could be a good time to ask this.) I've been searching everywhere for ways to extend Django's comments app to support authenticated comments. After reading through the comments model a few times, I found that a ForeignKey to User already exists. From django.contrib.comments.models: class Comment(BaseCommentAbstractModel): """ A user comment about some object. """ # Who posted this comment? If ``user`` is set then it was an authenticated # user; otherwise at least user_name should have been set and the comment # was posted by a non-authenticated user. user = models.ForeignKey(User, verbose_name=_('user'), blank=True, null=True, related_name="%(class)s_comments") user_name = models.CharField(_("user's name"), max_length=50, blank=True) user_email = models.EmailField(_("user's email address"), blank=True) user_url = models.URLField(_("user's URL"), blank=True) I can't seem to get my head around setting user. If I use comments as is, even if I'm authenticated, it still seems to require the other fields. I'm guessing I should override the form and do it there? On top of that, if I use user, I should ignore the fact that user_name, user_email and user_url will be empty and just pull that information from a related profile model, correct? While the answers could be quite trivial in the end, I'm just surprised that it hasn't been written or even talked about. A: WordPress and other systems make this a no-brainer. If you're logged in, the comment form should just "do the right thing" and remove the name/email/url fields. Isn't this exactly the kind of heavy lifting a framework is supposed to do for you? Rather than dancing around with subclassing models for something that should be trivially easy, I find it simpler to build the form manually in the template and provide the hidden field values it needs. This works perfectly for sites that only accept comments from authenticated users: {% if user.is_authenticated %} {% get_comment_form for [object] as form %} <form action="{% comment_form_target %}" method="POST"> {% csrf_token %} {{ form.comment }} {{ form.honeypot }} {{ form.content_type }} {{ form.object_pk }} {{ form.timestamp }} {{ form.security_hash }} <input type="hidden" name="next" value="{% url [the_view] [object].id %}" /> <input type="submit" value="Add comment" id="id_submit" /> </form> {% else %} <p>Please <a href="{% url auth_login %}">log in</a> to leave a comment.</p> {% endif %} Note that this will leave the honeypot field visible; you'll want to hide it in your CSS: #id_honeypot { visibility:hidden; } If you want to enable comments either for anonymous or authenticated users, replace the auth_login line above with a standard call to a comment form. A: I recommend that when you come up with a question about Django internals, you take a look at the source. If we look at the start of post_comment view we see that the POST querydict is copied and the user's email and name are inserted. They are still required (as seen in the form's source), so these details must either be entered in the form or the user must provide them. To answer your question to Superjoe, the view attaches the user to the comment before it is saved (as seen near the end of the post_comment view). A: Use a Profile model for extra account data besides user name and password.
You can call user.get_profile() if you include this line in Profile: user = models.ForeignKey(User, unique=True) and this line in settings.py: AUTH_PROFILE_MODULE = 'yourapp.Profile' A: First off, the comments app already supports both authenticated and anonymous users, so I assume you want to accept comments from authenticated users only? Thejaswi Puthraya had a series of articles on his blog addressing this. Basically, he pre-populates the name and email fields in the comment form and replaces them with hidden fields, then defines a wrapper view around post_comment to ensure the user posting the comment is the same as the logged-in user, among other things. Seemed pretty straightforward, though maybe a tad tedious. His blog seems to be down presently...hopefully it's only temporary. A: Theju wrote an authenticated comments app — http://thejaswi.info/tech/blog/2009/08/04/reusable-app-authenticated-comments/ A: According to the comment, it's either-or: the other fields are meant to be used when user isn't set. Have you checked that the relevant columns are definitely NOT NULL? They're marked as blank=True which normally means required=False at the field level. If you have actually tried it, what errors are you getting?
Authenticated commenting in Django 1.1?
(Now that Django 1.1 is in release candidate status, it could be a good time to ask this.) I've been searching everywhere for ways to extend Django's comments app to support authenticated comments. After reading through the comments model a few times, I found that a ForeignKey to User already exists. From django.contrib.comments.models: class Comment(BaseCommentAbstractModel): """ A user comment about some object. """ # Who posted this comment? If ``user`` is set then it was an authenticated # user; otherwise at least user_name should have been set and the comment # was posted by a non-authenticated user. user = models.ForeignKey(User, verbose_name=_('user'), blank=True, null=True, related_name="%(class)s_comments") user_name = models.CharField(_("user's name"), max_length=50, blank=True) user_email = models.EmailField(_("user's email address"), blank=True) user_url = models.URLField(_("user's URL"), blank=True) I can't seem to get my head around setting user. If I use comments as is, even if I'm authenticated, it still seems to require the other fields. I'm guessing I should override the form and do it there? On top of that, if I use user, I should ignore the fact that user_name, user_email and user_url will be empty and just pull that information from a related profile model, correct? While the answers could be quite trivial in the end, I'm just surprised that it hasn't been written or even talked about.
[ "WordPress and other systems make this a no-brainer. If you're logged in, the comment form should just \"do the right thing\" and remove the name/email/url fields. Isn't this exactly the kind of heavy lifting a framework is supposed to do for you? \nRather than dancing around with subclassing models for something that should be trivially easy, I find it simpler to build the form manually in the template and provide the hidden field values it needs. This works perfectly for sites that only accept comments from authenticated users:\n{% if user.is_authenticated %}\n{% get_comment_form for [object] as form %} \n<form action=\"{% comment_form_target %}\" method=\"POST\"> \n {% csrf_token %}\n {{ form.comment }} \n {{ form.honeypot }} \n {{ form.content_type }} \n {{ form.object_pk }} \n {{ form.timestamp }} \n {{ form.security_hash }} \n <input type=\"hidden\" name=\"next\" value=\"{% url [the_view] [object].id %}\" />\n <input type=\"submit\" value=\"Add comment\" id=\"id_submit\" /> \n</form> \n{% else %}\n <p>Please <a href=\"{% url auth_login %}\">log in</a> to leave a comment.</p>\n{% endif %} \n\nNote that this will leave the honeypot field visible; you'll want to hide it in your CSS:\n#id_honeypot {\n visibility:hidden;\n}\n\nIf you want to enable comments either for anonymous or authenticated users, replace the auth_login line above with a standard call to a comment form.\n", "I recommend that when you come up with a question about Django internals, you take a look at the source.\nIf we look at the start of post_comment view we see that the POST querydict is copied and the user's email and name are inserted. They are still required (as seen in the form's source), so these details must either entered in the form or the user must provide them.\nTo answer your question to Superjoe, the view attaches the user to the comment before it is saved (as seen near the end of the post_comment view).\n", "Use a Profile model for extra account data besides user name and password. You can call user.get_profile() if you include this line in Profile:\nuser = models.ForeignKey(User, unique=True)\n\nand this line in settings.py:\nAUTH_PROFILE_MODULE = 'yourapp.Profile'\n\n", "First off, the comments app already supports both authenticated and anonymous users, so I assume you want to accept comments from authenticated users only?\nThejaswi Puthraya had a series of articles on his blog addressing this. Basically, he pre-populates the name and email fields in the comment form and replaces them with hidden fields, then defines a wrapper view around post_comment to ensure the user posting the comment is the same as the logged-in user, among other things. Seemed pretty straightforward, though maybe a tad tedious.\nHis blog seems to be down presently...hopefully it's only temporary.\n", "Theju wrote an authenticated comments app — http://thejaswi.info/tech/blog/2009/08/04/reusable-app-authenticated-comments/\n", "According to the comment, it's either-or: the other fields are meant to be used when user isn't set. Have you checked that the relevant columns are definitely NOT NULL? They're marked as blank=True which normally means required=False at the field level. If you have actually tried it, what errors are you getting?\n" ]
[ 4, 3, 1, 1, 1, 0 ]
[]
[]
[ "comments", "django", "python" ]
stackoverflow_0001163113_comments_django_python.txt
Q: buildbot: run SVNPoller with --trust-server-cert I asked this similar question and got a satisfactory answer. However, doing the same with SVNPoller doesn't work. So how can I pass --trust-server-cert as an extra param to SVNPoller in buildbot? A: class MyPoller(SVNPoller): def __init__(...): SVNPoller.__init__(self, ...) def getProcessOutput(self, args): args += ["--trust-server-cert"] return SVNPoller.getProcessOutput(self, args) A: Use extra_args: if specified, an array of strings that will be passed as extra arguments to the svn binary.
buildbot: run SVNPoller with --trust-server-cert
I asked this similar question and got a satisfactory answer. However, doing the same with SVNPoller doesn't work. So how can I pass --trust-server-cert as an extra param to SVNPoller in buildbot?
[ "class MyPoller(SVNPoller):\n def __init__(...):\n SVNPoller.__init__(self, ...)\n\n def getProcessOutput(self, args):\n args += [\"--trust-server-cert\"]\n return SVNPoller.getProcessOutput(self, args)\n\n", "Use extra_args\nif specified, an array of strings that will be passed as extra arguments to the svn binary.\n" ]
[ 0, 0 ]
[]
[]
[ "build_process", "buildbot", "project_management", "python", "svn" ]
stackoverflow_0001947508_build_process_buildbot_project_management_python_svn.txt
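A sketch of the second answer's extra_args suggestion (the repository URL is a placeholder); Subversion's --trust-server-cert only takes effect together with --non-interactive, so both flags are passed:

from buildbot.changes.svnpoller import SVNPoller

poller = SVNPoller(
    svnurl="https://svn.example.com/repo/trunk",  # placeholder URL
    extra_args=["--non-interactive", "--trust-server-cert"])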
Q: Design pattern to organize non-trivial ORM queries? I am developing a web API with 10 tables or so in the backend, with several one-to-many and many-to-many associations. The API essentially is a database wrapper that performs validated updates and conditional queries. It's written in Python, and I use SQLAlchemy for ORM and CherryPy for HTTP handling. So far I have separated the 30-some queries the API performs into functions of their own, which look like this: # in module "services.inventory" def find_inventories(session, user_id, *inventory_ids, **kwargs): query = session.query(Inventory, Product) query = query.filter_by(user_id=user_id, deleted=False) ... return query.all() def find_inventories_by(session, app_id, user_id, by_app_id, by_type, limit, page): .... # in another service module def remove_old_goodie(session, app_id, user_id): try: old = _current_goodie(session, app_id, user_id) services.inventory._remove(session, app_id, user_id, [old.id]) except ServiceException, e: # log it and do stuff .... The CherryPy request handler calls the query methods, which are scattered across several service modules, as needed. The rationale behind this solution is, since they need to access multiple model classes, they don't belong to individual models, and also these database queries should be separated out from direct handling of API accesses. I realize that the above code might be called Foreign Methods in the realm of refactoring. I could well live with this way of organizing for a while, but as things are starting to look a little messy, I'm looking for a way to refactor this code. Since the queries are tied directly to the API and its business logic, they are hard to generalize like getters and setters. It smells to repeat the session argument like that, but as the current implementation of the API creates a new CherryPy handler instance for each API call and therefore the session object, there is no global way of getting at the current session. Is there a well-established pattern to organize such queries? Should I stick with the Foreign Methods and just try to unify the function signature (argument ordering, naming conventions etc.)? What would you suggest? A: SQLAlchemy strongly suggests that the session maker be part of some global configuration. It is intended that the sessionmaker() function be called within the global scope of an application, and the returned class be made available to the rest of the application as the single class used to instantiate sessions. Queries which are in separate modules isn't an interesting problem. The Django ORM works this way. A web site usually consists of multiple Django "applications", which sounds like your site that has many "service modules". Knitting together multiple services is the point of an application. There aren't a lot of alternatives that are better. A: The standard way to have global access to the current session in a threaded environment is ScopedSession. There are some important aspects to get right when integrating with your framework, mainly transaction control and clearing out sessions between requests. A common pattern is to have an autocommit=False (the default) ScopedSession in a module and wrap any business logic execution in a try-catch clause that rolls back in case of exception and commits if the method succeeded, then finally calls Session.remove(). The business logic would then import the Session object into global scope and use it like a regular session. 
There seems to be an existing CherryPy-SQLAlchemy integration module, but as I'm not too familiar with CherryPy, I can't comment on its quality. Having queries encapsulated as functions is just fine. Not everything needs to be in a class. If they get too numerous just split into separate modules by topic. What I have found useful is too factor out common criteria fragments. They usually fit rather well as classmethods on model classes. Aside from increasing readability and reducing duplication, they work as implementation hiding abstractions up to some extent, making refactoring the database less painful. (Example: instead of (Foo.valid_from <= func.current_timestamp()) & (Foo.valid_until > func.current_timestamp()) you'd have Foo.is_valid())
Design pattern to organize non-trivial ORM queries?
I am developing a web API with 10 tables or so in the backend, with several one-to-many and many-to-many associations. The API essentially is a database wrapper that performs validated updates and conditional queries. It's written in Python, and I use SQLAlchemy for ORM and CherryPy for HTTP handling. So far I have separated the 30-some queries the API performs into functions of their own, which look like this: # in module "services.inventory" def find_inventories(session, user_id, *inventory_ids, **kwargs): query = session.query(Inventory, Product) query = query.filter_by(user_id=user_id, deleted=False) ... return query.all() def find_inventories_by(session, app_id, user_id, by_app_id, by_type, limit, page): .... # in another service module def remove_old_goodie(session, app_id, user_id): try: old = _current_goodie(session, app_id, user_id) services.inventory._remove(session, app_id, user_id, [old.id]) except ServiceException, e: # log it and do stuff .... The CherryPy request handler calls the query methods, which are scattered across several service modules, as needed. The rationale behind this solution is, since they need to access multiple model classes, they don't belong to individual models, and also these database queries should be separated out from direct handling of API accesses. I realize that the above code might be called Foreign Methods in the realm of refactoring. I could well live with this way of organizing for a while, but as things are starting to look a little messy, I'm looking for a way to refactor this code. Since the queries are tied directly to the API and its business logic, they are hard to generalize like getters and setters. It smells to repeat the session argument like that, but as the current implementation of the API creates a new CherryPy handler instance for each API call and therefore the session object, there is no global way of getting at the current session. Is there a well-established pattern to organize such queries? Should I stick with the Foreign Methods and just try to unify the function signature (argument ordering, naming conventions etc.)? What would you suggest?
[ "SQLAlchemy strongly suggests that the session maker be part of some global configuration.\n\nIt is intended that the sessionmaker()\n function be called within the global\n scope of an application, and the\n returned class be made available to\n the rest of the application as the\n single class used to instantiate\n sessions.\n\nQueries which are in separate modules isn't an interesting problem. The Django ORM works this way. A web site usually consists of multiple Django \"applications\", which sounds like your site that has many \"service modules\".\nKnitting together multiple services is the point of an application. There aren't a lot of alternatives that are better.\n", "The standard way to have global access to the current session in a threaded environment is ScopedSession. There are some important aspects to get right when integrating with your framework, mainly transaction control and clearing out sessions between requests. A common pattern is to have an autocommit=False (the default) ScopedSession in a module and wrap any business logic execution in a try-catch clause that rolls back in case of exception and commits if the method succeeded, then finally calls Session.remove(). The business logic would then import the Session object into global scope and use it like a regular session.\nThere seems to be an existing CherryPy-SQLAlchemy integration module, but as I'm not too familiar with CherryPy, I can't comment on its quality.\nHaving queries encapsulated as functions is just fine. Not everything needs to be in a class. If they get too numerous just split into separate modules by topic.\nWhat I have found useful is too factor out common criteria fragments. They usually fit rather well as classmethods on model classes. Aside from increasing readability and reducing duplication, they work as implementation hiding abstractions up to some extent, making refactoring the database less painful. (Example: instead of (Foo.valid_from <= func.current_timestamp()) & (Foo.valid_until > func.current_timestamp()) you'd have Foo.is_valid())\n" ]
[ 1, 1 ]
[]
[]
[ "design_patterns", "orm", "python", "refactoring", "sqlalchemy" ]
stackoverflow_0002265234_design_patterns_orm_python_refactoring_sqlalchemy.txt
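A condensed sketch of the scoped-session pattern described in the second answer above (the engine URL and function names are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite:///app.db')           # placeholder URL
Session = scoped_session(sessionmaker(bind=engine))  # module-level, thread-local

def run_service_call(func, *args, **kwargs):
    try:
        result = func(Session(), *args, **kwargs)
        Session.commit()
        return result
    except:
        Session.rollback()
        raise
    finally:
        Session.remove()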
Q: Grab a line's whitespace/indention with Python Basically, if I have a line of text which starts with indention, what's the best way to grab that indention and put it into a variable in Python? For example, if the line is: \t\tthis line has two tabs of indention Then it would return '\t\t'. Or, if the line was: this line has four spaces of indention Then it would return four spaces. So I guess you could say that I just need to strip everything from a string from first non-whitespace character to the end. Thoughts? A: import re s = "\t\tthis line has two tabs of indention" re.match(r"\s*", s).group() // "\t\t" s = " this line has four spaces of indention" re.match(r"\s*", s).group() // " " And to strip leading spaces, use lstrip. As there are down votes probably questioning the efficiency of regex, I've done some profiling to check the efficiency of each cases. Very long string, very short leading space RegEx > Itertools >> lstrip >>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r"\s*")s=" hello world!"*10000', number=100000) 0.10037684440612793 >>> timeit.timeit('"".join(itertools.takewhile(lambda x:x.isspace(),s))', 'import itertools;s=" hello world!"*10000', number=100000) 0.7092740535736084 >>> timeit.timeit('"".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=" hello world!"*10000', number=100000) 0.51730513572692871 >>> timeit.timeit('s[:-len(s.lstrip())]', 's=" hello world!"*10000', number=100000) 2.6478431224822998 Very short string, very short leading space lstrip > RegEx > Itertools If you can limit the string's length to thousounds of chars or less, the lstrip trick maybe better. >>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r"\s*");s=" hello world!"*100', number=100000) 0.099548101425170898 >>> timeit.timeit('"".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=" hello world!"*100', number=100000) 0.53602385520935059 >>> timeit.timeit('s[:-len(s.lstrip())]', 's=" hello world!"*100', number=100000) 0.064291000366210938 This shows the lstrip trick scales roughly as O(√n) and the RegEx and itertool methods are O(1) if the number of leading spaces is not a lot. Very short string, very long leading space lstrip >> RegEx >>> Itertools If there are a lot of leading spaces, don't use RegEx. >>> timeit.timeit('s[:-len(s.lstrip())]', 's=" "*2000', number=10000) 0.047424077987670898 >>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r"\s*");s=" "*2000', number=10000) 0.2433168888092041 >>> timeit.timeit('"".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=" "*2000', number=10000) 3.9949162006378174 Very long string, very long leading space lstrip >>> RegEx >>>>>>>> Itertools >>> timeit.timeit('s[:-len(s.lstrip())]', 's=" "*200000', number=10000) 4.2374031543731689 >>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r"\s*");s=" "*200000', number=10000) 23.877214908599854 >>> timeit.timeit('"".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=" "*200000', number=100)*100 415.72158336639404 This shows all methods scales roughly as O(m) if the non-space part is not a lot. A: A sneaky way: abuse lstrip! fullstr = "\t\tthis line has two tabs of indentation" startwhites = fullstr[:len(fullstr)-len(fullstr.lstrip())] This way you don't have to work through all the details of whitespace! (Thanks Adam for the correction) A: This can also be done with str.isspace and itertools.takewhile instead of regex. 
import itertools tests=['\t\tthis line has two tabs of indention', ' this line has four spaces of indention'] def indention(astr): # Using itertools.takewhile is efficient -- the looping stops immediately after the first # non-space character. return ''.join(itertools.takewhile(str.isspace,astr)) for test_string in tests: print(indention(test_string)) A: def whites(a): return a[0:a.find(a.strip())] Basically, my idea is: Find a strip of starting line Find a difference between starting line and stripped one
Grab a line's whitespace/indention with Python
Basically, if I have a line of text which starts with indention, what's the best way to grab that indention and put it into a variable in Python? For example, if the line is: \t\tthis line has two tabs of indention Then it would return '\t\t'. Or, if the line was: this line has four spaces of indention Then it would return four spaces. So I guess you could say that I just need to strip everything from a string from first non-whitespace character to the end. Thoughts?
[ "import re\ns = \"\\t\\tthis line has two tabs of indention\"\nre.match(r\"\\s*\", s).group()\n// \"\\t\\t\"\ns = \" this line has four spaces of indention\"\nre.match(r\"\\s*\", s).group()\n// \" \"\n\nAnd to strip leading spaces, use lstrip.\n\nAs there are down votes probably questioning the efficiency of regex, I've done some profiling to check the efficiency of each cases.\nVery long string, very short leading space\nRegEx > Itertools >> lstrip\n>>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r\"\\s*\")s=\" hello world!\"*10000', number=100000)\n0.10037684440612793\n>>> timeit.timeit('\"\".join(itertools.takewhile(lambda x:x.isspace(),s))', 'import itertools;s=\" hello world!\"*10000', number=100000)\n0.7092740535736084\n>>> timeit.timeit('\"\".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=\" hello world!\"*10000', number=100000)\n0.51730513572692871\n>>> timeit.timeit('s[:-len(s.lstrip())]', 's=\" hello world!\"*10000', number=100000)\n2.6478431224822998\n\nVery short string, very short leading space\nlstrip > RegEx > Itertools\nIf you can limit the string's length to thousounds of chars or less, the lstrip trick maybe better.\n>>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r\"\\s*\");s=\" hello world!\"*100', number=100000)\n0.099548101425170898\n>>> timeit.timeit('\"\".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=\" hello world!\"*100', number=100000)\n0.53602385520935059\n>>> timeit.timeit('s[:-len(s.lstrip())]', 's=\" hello world!\"*100', number=100000)\n0.064291000366210938\n\nThis shows the lstrip trick scales roughly as O(√n) and the RegEx and itertool methods are O(1) if the number of leading spaces is not a lot.\nVery short string, very long leading space\nlstrip >> RegEx >>> Itertools\nIf there are a lot of leading spaces, don't use RegEx.\n>>> timeit.timeit('s[:-len(s.lstrip())]', 's=\" \"*2000', number=10000)\n0.047424077987670898\n>>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r\"\\s*\");s=\" \"*2000', number=10000)\n0.2433168888092041\n>>> timeit.timeit('\"\".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=\" \"*2000', number=10000)\n3.9949162006378174\n\nVery long string, very long leading space\nlstrip >>> RegEx >>>>>>>> Itertools\n>>> timeit.timeit('s[:-len(s.lstrip())]', 's=\" \"*200000', number=10000)\n4.2374031543731689\n>>> timeit.timeit('r.match(s).group()', 'import re;r=re.compile(r\"\\s*\");s=\" \"*200000', number=10000)\n23.877214908599854\n>>> timeit.timeit('\"\".join(itertools.takewhile(str.isspace,s))', 'import itertools;s=\" \"*200000', number=100)*100\n415.72158336639404\n\nThis shows all methods scales roughly as O(m) if the non-space part is not a lot.\n", "A sneaky way: abuse lstrip!\nfullstr = \"\\t\\tthis line has two tabs of indentation\"\nstartwhites = fullstr[:len(fullstr)-len(fullstr.lstrip())]\n\nThis way you don't have to work through all the details of whitespace!\n(Thanks Adam for the correction)\n", "This can also be done with str.isspace and itertools.takewhile instead of regex. 
\nimport itertools\n\ntests=['\\t\\tthis line has two tabs of indention',\n ' this line has four spaces of indention']\n\ndef indention(astr):\n # Using itertools.takewhile is efficient -- the looping stops immediately after the first\n # non-space character.\n return ''.join(itertools.takewhile(str.isspace,astr))\n\nfor test_string in tests:\n print(indention(test_string))\n\n", "def whites(a):\nreturn a[0:a.find(a.strip())]\n\nBasically, my idea is:\n\nFind a strip of starting line\nFind a difference between starting line and stripped one\n\n" ]
[ 26, 12, 4, 1 ]
[ "How about using the regex \\s* which matches any whitespace characters. You only want the whitespace at the beginning of the line so either search with the regex ^\\s* or simply match with \\s*.\n", "If you're interested in using regular expressions you can use that. /\\s/ usually matches one whitespace character, so /^\\s+/ would match the whitespace starting a line.\n" ]
[ -2, -2 ]
[ "indentation", "python", "whitespace" ]
stackoverflow_0002268532_indentation_python_whitespace.txt
Q: Teleporting Traveler, Optimal Profit over time Problem I'm new to the whole traveling-salesman problem as well as stackoverflow, so let me know if I say something that isn't quite right. Intro: I'm trying to code a profit/time-optimized multiple-trade algorithm for a game which involves multiple cities (nodes) within multiple countries (areas), where: The physical time it takes to travel between two connected cities is always the same; Cities aren't linearly connected (you can teleport between some cities in the same time); Some countries (areas) have teleport routes which make the shortest path available through other countries. The traveler (or trader) has a limit on his coin-purse, the weight of his goods, and the quantity tradeable in a certain trade-route. The trade route can span multiple cities. Question Parameters: There already exists a database in memory (python:sqlite) which holds trades based on their source city and their destination city, the shortest-path cities in between as an array and amount, and the limiting factor with its % return on total capital (or in the case that none of the factors are limiting, then just the method that gives the highest return on total capital). I'm trying to find the optimal profit for a certain preset chunk of time (i.e. 30 minutes) The act of crossing into a new city is actually simultaneous It usually takes the same defined amount of time to travel across the city map (i.e. 2 minutes) The act of initiating the first or any new trade takes the same time as crossing one city map (i.e. 2 minutes) My starting point might not actually have a valid trade (I would have to travel to the first/nearest/best one) Pseudo-Solution So Far Optimization First, I realize that because I have a limit on the time it takes, and I know how long each hop takes (including -1 for initiating the trade), I can limit the graph to all trades whose hops are under or equal to max_hops=int(max_time/route_time) -1. I cut elements of the trade database that don't fall within this time limit, pruning cities that have shortest-path lengths greater than max_hops. I make another entry into the trades database that includes the shortest-paths between my current city and the starting cities of all the existing trades that aren't my current city, and give them a return of 0%. I would limit these to where the number of city hops is less than max_hops, and I would also calculate whether the current city to the starting city plus that trade's shortest-path hops would exceed max_hops, and remove those that exceeded this limit. Then I take the remaining trades that aren't (current_city->starting_city) and add trade routes with a return of 0% between all destination and starting cities where the hops don't exceed max_hops. Then I make one last prune for all cities that aren't in the trades database as either a starting city, destination city, or part of the shortest path city arrays. Graph Search I am left with a (much) smaller graph of trades feasible within the time limit (i.e. 30 mins). Because all the nodes that are connected are adjacent, the connections are by default all weighted 1. I divide the %return over the number of hops in the trade, then take the inverse and add 1 (this would mean a weight of 1.01 for a 100% return route). In the case where the return is 0%, I add ... 2? It should then return the most profitable route...
The Question: Mostly, How do I add the ability to take multiple routes when I have left over money or space and keep route finding through path discrete to single trade routes? Due to the nature of the goods being sold at multiple prices and quantities within the city, there would be a lot of overlapping routes. How do I penalize initiating a new trade route? Is graph search even useful in this situation? On A Side Note, What kinds of prunes/optimizations to the graph should I (or should I not) make? Is my weighting method correct? I have a feeling it will give me disproportional weights. Should I use the actual return instead of percentage return? If I am coding in Python, are libraries such as python-graph suitable for my needs? Or would it save me a lot of overhead (as I understand, graph search algorithms can be computationally intensive) to write a specialized function? Am I best off using A* search? Should I be precalculating shortest-path points in the trade database and maxing optimizations or should I leave it all to the graph-search? Can you notice anything that I could improve? A: If this is a game where you are playing against humans I would assume the total size of the data space is actually quite limited. If so I would be inclined to throw out all the fancy pruning as I doubt it's worth it. Instead, how about a simple breadth-first search? Build a list of all cities, mark them unvisited Take your starting city, mark the travel time as zero for each city: if not finished and travel time <> infinity then attempt to visit all neighbors, only record the time if city is unvisited mark the city finished repeat until all cities have been visited O(): the outer loop executes cities * maximum hops times. The inner loop executes once per city. No memory allocations are needed. Now, for each city look at what you can buy here and sell there. When figuring the rate of return on a trade remember that growth is exponential, not linear. Twice the profit for a trade that takes twice as long is NOT a good deal! Look up how to calculate the internal rate of return. If the current city has no trade don't bother with the full analysis, simply look over the neighbors and run the analysis on them instead, adding one to the time for each move. If you have CPU cycles to spare (and you very well might, anything meant for a human to play will have a pretty small data space) you can run the analysis on every city adding in the time it takes to get to the city. Edit: Based on your comment you have tons of CPU power available as the game isn't running on your CPU. I stand by my solution: Check everything. I strongly suspect it will take longer to obtain the route and trade info than it will to calculate the optimal solution. A: I think you've defined something that fits into a class of problems called inventory - routing problems. I assume since you have both goods and coin, the traveller is both buying and selling along the chosen route. Let's first assume that EVERYTHING is deterministic - all quantities of goods in demand, supply available, buying and selling prices, etc are known in advance. The stochastic version gets more difficult (obviously). One objective would be to maximize profits with a constraint on the purse and the goods. If the traveller has to return home it looks like a tour, if not, it looks like a path. Since you haven't required the traveller to visit EVERY node, it is NOT a TSP. That's good - shortest path problems are generally much easier than TSPs to solve. 
Because of the side constraints and the limited choice of next steps at each node - I'd consider using dynamic programming as a first attempt at a solution technique. It will help you enumerate what you buy and sell at each stage and there's a limited number of stages. Also, because you put a time constraint on the decision, that limits the state space of choices. To those who suggested Dijkstra's algorithm - you may be right - the labelling conventions would need to include the time, coin, and goods and corresponding profits. It may be that the assumptions of Dijkstra's may not work for this with the added complexity of profit. Haven't thought through that yet. Here's a link to a similar problem in capital budgeting. Good luck!
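A minimal sketch of the breadth-first hop count suggested in the first answer, assuming the pruned graph is available as a plain adjacency dict; all names here are illustrative, not from the question:
from collections import deque

def hop_counts(adjacency, start, max_hops):
    # adjacency: dict mapping each city to a list of neighbouring cities (hypothetical)
    hops = {start: 0}
    queue = deque([start])
    while queue:
        city = queue.popleft()
        if hops[city] >= max_hops:
            continue                        # don't expand past the time budget
        for neighbour in adjacency.get(city, []):
            if neighbour not in hops:       # first visit is the shortest, since every edge costs 1
                hops[neighbour] = hops[city] + 1
                queue.append(neighbour)
    return hops                             # cities absent from the result are unreachable in time
Trade candidates can then be filtered by looking up their start and end cities in the returned dict.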
Teleporting Traveler, Optimal Profit over time Problem
I'm new to the whole traveling-salesman problem as well as stackoverflow so let me know if I say something that isn't quite right. Intro: I'm trying to code a profit/time-optimized multiple-trade algorithm for a game which involves multiple cities (nodes) within multiple countries (areas), where: The physical time it takes to travel between two connected cities is always the same ; Cities aren't linearly connected (you can teleport between some cities in the same time); Some countries (areas) have teleport routes which make the shortest path available through other countries. The traveler (or trader) has a limit on his coin-purse, the weight of his goods, and the quantity tradeable in a certain trade-route. The trade route can span multiple cities. Question Parameters: There already exists a database in memory (python:sqlite) which holds trades based on their source city and their destination city, the shortest-path cities in between as an array and amount, and the limiting factor with its % return on total capital (or in the case that none of the factors are limiting, then just the method that gives the highest return on total capital). I'm trying to find the optimal profit for a certain preset chunk of time (i.e. 30 minutes) The act of crossing into a new city is actually simultaneous It usually takes the same defined amount of time to travel across the city map (i.e. 2 minutes) The act of initiating the first or any new trade takes the same time as crossing one city map (i.e. 2 minutes) My starting point might not actually have a valid trade ( I would have to travel to the first/nearest/best one ) Pseudo-Solution So Far Optimization First, I realize that because I have a limit on the time it takes, and I know how long each hop takes (including -1 for initiating the trade), I can limit the graph to all trades whose hops are under or equal to max_hops=int(max_time/route_time) -1. I cut elements of the trade database that don't fall within this time limit, pruning cities that have shortest-path lengths greater than max_hops. I make another entry into the trades database that includes the shortest-paths between my current city and the starting cities of all the existing trades that aren't my current city, and give them a return of 0%. I would limit these to where the number of city hops is less than max_hops, and I would also calculate whether the current city to the starting city plus that trade's shortest-path hops would exceed max_hops and remove those that exceeded this limit. Then I take the remaining trades that aren't (current_city->starting_city) and add trade routes with return of 0% between all destination and starting cities where the hops don't exceed max_hops. Then I make one last prune for all cities that aren't in the trades database as either a starting city, destination city, or part of the shortest path city arrays. Graph Search I am left with a (much) smaller graph of trades feasible within the time limit (i.e. 30 mins). Because all the nodes that are connected are adjacent, the connections are by default all weighted 1. I divide the %return over the number of hops in the trade then take the inverse and add + 1 (this would mean a weight of 1.01 for a 100% return route). In the case where the return is 0%, I add ... 2? It should then return the most profitable route... The Question: Mostly, How do I add the ability to take multiple routes when I have left over money or space and keep route finding through path discrete to single trade routes? 
Due to the nature of the goods being sold at multiple prices and quantities within the city, there would be a lot of overlapping routes. How do I penalize initiating a new trade route? Is graph search even useful in this situation? On A Side Note, What kinds of prunes/optimizations to the graph should I (or should I not) make? Is my weighting method correct? I have a feeling it will give me disproportional weights. Should I use the actual return instead of percentage return? If I am coding in Python, are libraries such as python-graph suitable for my needs? Or would it save me a lot of overhead (as I understand, graph search algorithms can be computationally intensive) to write a specialized function? Am I best off using A* search? Should I be precalculating shortest-path points in the trade database and maxing optimizations or should I leave it all to the graph-search? Can you notice anything that I could improve?
[ "If this is a game where you are playing against humans I would assume the total size of the data space is actually quite limited. If so I would be inclined to throw out all the fancy pruning as I doubt it's worth it.\nInstead, how about a simple breadth-first search?\nBuild a list of all cities, mark them unvisited\nTake your starting city, mark the travel time as zero\nfor each city: \n if not finished and travel time <> infinity then \n attempt to visit all neighbors, only record the time if city is unvisited\n mark the city finished\nrepeat until all cities have been visited\n\nO(): the outer loop executes cities * maximum hops times. The inner loop executes once per city. No memory allocations are needed.\nNow, for each city look at what you can buy here and sell there. When figuring the rate of return on a trade remember that growth is exponential, not linear. Twice the profit for a trade that takes twice as long is NOT a good deal! Look up how to calculate the internal rate of return.\nIf the current city has no trade don't bother with the full analysis, simply look over the neighbors and run the analysis on them instead, adding one to the time for each move.\nIf you have CPU cycles to spare (and you very well might, anything meant for a human to play will have a pretty small data space) you can run the analysis on every city adding in the time it takes to get to the city.\nEdit: Based on your comment you have tons of CPU power available as the game isn't running on your CPU. I stand by my solution: Check everything. I strongly suspect it will take longer to obtain the route and trade info than it will be to calculate the optimal solution.\n", "I think you've defined something that fits into a class of problems called inventory - routing problems. I assume since you have both goods and coin, the traveller is both buying and selling along the chosen route. Let's first assume that EVERYTHING is deterministic - all quantities of goods in demand, supply available, buying and selling prices, etc are known in advance. The stochastic version gets more difficult (obviously). \nOne objective would be to maximize profits with a constraint on the purse and the goods. If the traveller has to return home its looks like a tour, if not, it looks like a path. Since you haven't required the traveller to visit EVERY node, it is NOT a TSP. That's good - shortest path problems are generally much easier than TSPs to solve. \nBecause of the side constraints and the limited choice of next steps at each node - I'd consider using dynamic programming first attempt at a solution technique. It will help you enumerate what you buy and sell at each stage and there's a limited number of stages. Also, because you put a time constraint on the decision, that limits the state space of choices. \nTo those who suggested Djikstra's algorithm - you may be right - the labelling conventions would need to include the time, coin, and goods and corresponding profits. It may be that the assumptions of Djikstra's may not work for this with the added complexity of profit. Haven't thought through that yet.\nHere's a link to a similar problem in capital budgeting.\nGood luck !\n" ]
[ 2, 1 ]
[]
[]
[ "algorithm", "heuristics", "python", "routing", "traveling_salesman" ]
stackoverflow_0002256589_algorithm_heuristics_python_routing_traveling_salesman.txt
Q: Making super() work in Python's urllib2.Request This afternoon I spent several hours trying to find a bug in my custom extension to urllib2.Request. The problem was, as I found out, the usage of super(ExtendedRequest, self), since urllib2.Request is (I'm on Python 2.5) still an old style class, where the use of super() is not possible. The most obvious way to create a new class with both features, class ExtendedRequest(object, urllib2.Request): def __init__(): super(ExtendedRequest, self).__init__(...) doesn't work. Calling it, I'm left with AttributeError: type raised by urllib2.Request.__getattr__(). Now, before I start and copy'n paste the whole urllib2.Request class from /usr/lib/python just to rewrite it as class Request(object): has anyone an idea, how I could achieve this in a more elegant way? (With this being to have a new-style class based on urllib2.Request with working support for super().) Edit: By the way: the AttributeError mentioned: >>> class ExtendedRequest(object, urllib2.Request): ... def __init__(self): ... super(ExtendedRequest, self).__init__('http://stackoverflow.com') ... >>> ABC = ExtendedRequest () >>> d = urllib2.urlopen(ABC) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/urllib2.py", line 124, in urlopen return _opener.open(url, data) File "/usr/lib/python2.5/urllib2.py", line 373, in open protocol = req.get_type() File "/usr/lib/python2.5/urllib2.py", line 241, in get_type if self.type is None: File "/usr/lib/python2.5/urllib2.py", line 218, in __getattr__ raise AttributeError, attr AttributeError: type A: This should work fine since the hierarchy is simple class ExtendedRequest(urllib2.Request): def __init__(self,...): urllib2.Request.__init__(self,...) A: Using super may not always be the best practice. There are many difficulties with using super. Read James Knight's http://fuhm.org/super-harmful/ for examples. That link shows (among other issues) that Superclasses must use super if their subclasses do The __init__ signatures of all subclasses that use super should match. You must pass all arguments you receive on to the super function. Your __init__ must be prepared to call any other class's __init__ method in the hierarchy. Never use positional arguments in __init__ In your situation, each of the above criteria is violated. James Knight also says, The only situation in which super() can actually be helpful is when you have diamond inheritance. And even then, it is often not as helpful as you might have thought. The conditions under which super can be used correctly are sufficiently onerous that I think super's usefulness is rather limited. Prefer the Composition design pattern over subclassing. Avoid diamond inheritance if you can. If you control the object hierarchy from top (object) to bottom, and use super consistently, then you are okay. But since you don't control the entire class hierarchy in this case, I'd suggest you abandon using super. A: I think you forgot to pass the self parameter in the definition of __init__ in your sample. Try this one: class ExtendedRequest(object, urllib2.Request): def __init__(self): super(ExtendedRequest, self).__init__(self) I tested it and it seems to work okay: >>> x = ExtendedRequest() >>> super(ExtendedRequest, x) <super: <class 'ExtendedRequest'>, <ExtendedRequest object>>
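A sketch of the first answer's approach in full, calling the old-style parent class directly instead of super(); the subclass body is illustrative:
import urllib2

class ExtendedRequest(urllib2.Request):
    def __init__(self, url, *args, **kwargs):
        # an explicit parent call works for old-style classes where super() does not
        urllib2.Request.__init__(self, url, *args, **kwargs)

req = ExtendedRequest('http://stackoverflow.com')
print req.get_type()   # -> 'http'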
Making super() work in Python's urllib2.Request
This afternoon I spent several hours trying to find a bug in my custom extension to urllib2.Request. The problem was, as I found out, the usage of super(ExtendedRequest, self), since urllib2.Request is (I'm on Python 2.5) still an old style class, where the use of super() is not possible. The most obvious way to create a new class with both features, class ExtendedRequest(object, urllib2.Request): def __init__(): super(ExtendedRequest, self).__init__(...) doesn't work. Calling it, I'm left with AttributeError: type raised by urllib2.Request.__getattr__(). Now, before I start and copy'n paste the whole urllib2.Request class from /usr/lib/python just to rewrite it as class Request(object): has anyone an idea, how I could achieve this in a more elegant way? (With this being to have a new-style class based on urllib2.Request with working support for super().) Edit: By the way: the AttributeError mentioned: >>> class ExtendedRequest(object, urllib2.Request): ... def __init__(self): ... super(ExtendedRequest, self).__init__('http://stackoverflow.com') ... >>> ABC = ExtendedRequest () >>> d = urllib2.urlopen(ABC) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/urllib2.py", line 124, in urlopen return _opener.open(url, data) File "/usr/lib/python2.5/urllib2.py", line 373, in open protocol = req.get_type() File "/usr/lib/python2.5/urllib2.py", line 241, in get_type if self.type is None: File "/usr/lib/python2.5/urllib2.py", line 218, in __getattr__ raise AttributeError, attr AttributeError: type
[ "This should work fine since the hierarchy is simple\nclass ExtendedRequest(urllib2.Request):\n def __init__(self,...):\n urllib2.Request.__init__(self,...)\n\n", "Using super may not always be the best-practice. There are many difficulties with using super. Read James Knight's http://fuhm.org/super-harmful/ for examples.\nThat link shows (among other issues) that \n\n Superclasses must use super if their subclasses do\n The __init__ signatures of all subclasses that use super should match. You must pass all arguments you receive on to the super function. Your __init__ must be prepared to call any other class's __init__ method in the hierarchy.\n Never use positional arguments in __init__\n\nIn your situation, each of the above critera is violated. \nJames Knight also says,\n\nThe only situation in which super()\n can actually be helpful is when you\n have diamond inheritance. And even\n then, it is often not as helpful as\n you might have thought.\n\nThe conditions under which super can be used correctly are sufficiently onerous, that I think super's usefulness is rather limited. Prefer the Composition design pattern over subclassing. Avoid diamond inheritance if you can. If you control the object hierarchy from top (object) to bottom, and use super consistently, then you are okay. But since you don't control the entire class hierarchy in this case, I'd suggest you abandon using super.\n", "I think you missed to pass the self parameter to definition of init in your sample.\nTry this one:\nclass ExtendedRequest(object, urllib2.Request):\n def __init__(self):\n super(ExtendedRequest, self).__init__(self)\n\nI tested it and it seems to work okey:\n>>> x = ExtendedRequest()\n>>> super(ExtendedRequest, x)\n<super: <class 'ExtendedRequest'>, <ExtendedRequest object>>\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "new_style_class", "python", "request", "super", "urllib2" ]
stackoverflow_0002267016_new_style_class_python_request_super_urllib2.txt
Q: python glib main loop: delaying until loop is entered Is there a way to schedule the execution of a callable until the glib main loop is entered? Alternatively, is there a signal I can subscribe to that will indicate that the main loop is entered? A: You can use gobject.idle_add which will schedule a callable to be executed when the main loop is idle. gobject.timeout_add is an alternative which uses a timer. Mind that the callable will be called again and again, unless it returns False (or anything that resolves to False, like None).
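A minimal sketch of the answer, assuming PyGTK's gobject bindings; the callback name is illustrative:
import gobject

def on_loop_entered():
    print 'main loop is now running'
    return False               # returning False removes the callback after one invocation

gobject.idle_add(on_loop_entered)
loop = gobject.MainLoop()
loop.run()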
python glib main loop: delaying until loop is entered
Is there a way to schedule the execution of a callable until the glib main loop is entered? Alternatively, is there a signal I can subscribe to that will indicate that the main loop is entered?
[ "You can use gobject.idle_add which will schedule a callable to be executed when the main loop is idle. gobject.timeout_add is an alternative which uses a timer.\nMind that the callable will be called again and again, unless is returns False (or anything that resolves to False, like None).\n" ]
[ 2 ]
[]
[]
[ "glib", "python" ]
stackoverflow_0002268946_glib_python.txt
Q: How to init twisted reactor in the right way? I have a class MyJabber which initializes a basic Jabber account that prints the incoming messages to stdout and puts them into a queue. The code that adds the client to the reactor is this: def addReactor(self): print 'inside AddReactor' factory = client.basicClientFactory(self.jid, self.option['jabber']['password']) print "factory initialized" factory.addBootstrap(xmlstream.STREAM_AUTHD_EVENT, self.authd) print 'factory bootsraped' reactor.connectTCP(self.option['jabber']['server'], 5222, factory) It's called in this way: jabber = MyJabber(options, to_irc) jabber.addReactor() reactor.run() When I launch the app I see the 'print' of addReactor but after that nothing more. I see via 'tcpdump' that something is trying to connect to 'jabber.org' but nothing related to the authd def: def authd(self, xmlstream): global thexmlstream thexmlstream = xmlstream # need to send presence so clients know we're # actually online. print 'Initializing...' presence = domish.Element(('jabber:client', 'presence')) presence.addElement('status').addContent('Online') xmlstream.send(presence) # add a callback for the messages print 'Add gotMessaged callback' xmlstream.addObserver('/message', gotMessage) print 'Add * callback' xmlstream.addObserver('/*', gotSomething) A: This doesn't seem to really be a question about how to "init twisted reactor". Rather, it seems to be more about how to use Twisted Words' XMPP support to send and respond to XMPP messages. You can find a couple examples which do this in the Twisted Words examples directory: http://twistedmatrix.com/documents/current/words/examples/ Try xmpp_client.py and jabber_client.py. A: Fixed, there were 2 errors. 1) I accidentally forgot that a JID is name@domain.tld/extra 2) Forgot to add self. to gotMessage/gotSomething I've also made addReactor return the factory and in the main() wrote: jabber = MyJabber(options, to_irc) factory = jabber.addReactor() reactor.connectTCP(options['jabber']['server'], 5222, factory) reactor.run()
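Putting the two fixes from the self-answer together, a sketch of the corrected wiring (the JID value is illustrative; a full JID needs the /resource suffix):
# inside MyJabber.authd: the callbacks are methods, so they need self.
xmlstream.addObserver('/message', self.gotMessage)
xmlstream.addObserver('/*', self.gotSomething)

# in main(): addReactor now returns the factory, with the JID built from
# something like 'myname@jabber.org/bot' rather than a bare 'myname@jabber.org'
jabber = MyJabber(options, to_irc)
factory = jabber.addReactor()
reactor.connectTCP(options['jabber']['server'], 5222, factory)
reactor.run()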
How to init twisted reactor in the right way?
I have a class MyJabber which initializes a basic Jabber account that prints the incoming messages to stdout and puts them into a queue. The code that adds the client to the reactor is this: def addReactor(self): print 'inside AddReactor' factory = client.basicClientFactory(self.jid, self.option['jabber']['password']) print "factory initialized" factory.addBootstrap(xmlstream.STREAM_AUTHD_EVENT, self.authd) print 'factory bootsraped' reactor.connectTCP(self.option['jabber']['server'], 5222, factory) It's called in this way: jabber = MyJabber(options, to_irc) jabber.addReactor() reactor.run() When I launch the app I see the 'print' of addReactor but after that nothing more. I see via 'tcpdump' that something is trying to connect to 'jabber.org' but nothing related to the authd def: def authd(self, xmlstream): global thexmlstream thexmlstream = xmlstream # need to send presence so clients know we're # actually online. print 'Initializing...' presence = domish.Element(('jabber:client', 'presence')) presence.addElement('status').addContent('Online') xmlstream.send(presence) # add a callback for the messages print 'Add gotMessaged callback' xmlstream.addObserver('/message', gotMessage) print 'Add * callback' xmlstream.addObserver('/*', gotSomething)
[ "This doesn't seem to really be a question about how to \"init twisted reactor\". Rather, it seems to be more about how to use Twisted Words' XMPP support to send and respond to XMPP messages.\nYou can find a couple examples which do this in the Twisted Words examples directory:\nhttp://twistedmatrix.com/documents/current/words/examples/\nTry xmpp_client.py and jabber_client.py.\n", "Fixed, there were 2 errors.\n1) I accidentally forgot that a JID is name@domain.tld/extra\n2) Forgot to add self. to gotMessage/gotSomething\nI've also made addReactor return the factory and in the main() wrote:\njabber = MyJabber(options, to_irc)\nfactory = jabber.addReactor()\nreactor.connectTCP(options['jabber']['server'], 5222, factory)\nreactor.run()\n\n" ]
[ 4, 0 ]
[]
[]
[ "python", "twisted", "twisted.words", "xmpp" ]
stackoverflow_0002265555_python_twisted_twisted.words_xmpp.txt
Q: Package module not found in Python 2.5, but found in 2.6 I have a package structure that looks like this: ae util util contains a method mkdir(dir) that, given a path, creates a directory. If the directory exists, no error is thrown; the method fails silently. The directory ae and its parent directory are both on my PYTHONPATH. When I try to use this method in Python 2.6, everything is fine. However, Python 2.5 gives the following error: util.mkdir(SOURCES) AttributeError: 'module' object has no attribute 'mkdir' Why is Python 2.6 able to find this module and its method with no problems, but Python 2.5 cannot? A: Maybe Python 2.5 is accessing a different version of util that does not have the mkdir method. A: Do you import ae.util or import util? Either ae or its parent dir should be in PYTHONPATH, but not both. Verify you have the right util module by running print util (will print the module's source file) A: It depends on where you call this method, and what your import is. If you write: from ae import util util.mkdir(SOURCES) everything should be ok. The error probably occurs because of the difference in the import policy between Python 2.5 and 2.6.
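A quick sketch of the check suggested in the second answer, to see which util is actually being imported under each interpreter:
from ae import util

print util                       # the repr includes the source file the module was loaded from
print hasattr(util, 'mkdir')     # False here means a different util is shadowing yours
Running this under both python2.5 and python2.6 should show whether the two interpreters resolve util to the same file.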
Package module not found in Python 2.5, but found in 2.6
I have a package structure that looks like this: ae util util contains a method mkdir(dir) that, given a path, creates a directory. If the directory exists, no error is thrown; the method fails silently. The directory ae and its parent directory are both on my PYTHONPATH. When I try to use this method in Python 2.6, everything is fine. However, Python 2.5 gives the following error: util.mkdir(SOURCES) AttributeError: 'module' object has no attribute 'mkdir' Why is Python 2.6 able to find this module and its method with no problems, but Python 2.5 cannot?
[ "Maybe Python 2.5 is accessing a different version of util that does not have the mkdir method.\n", "\ndo you import ae.util or import util? Either ae or its parent dir should be in PYTHONPATH, but not both\nverify you have the right util module by running print util (will print the module's source file)\n\n", "It depends where You call this method, and what Your import is. If You write:\nfrom ae import util\nutil.mkdir(SOURCES)\n\neverything should be ok. \nThe error occurs probably because of the difference in the import policy between Python 2.5 and 2.6.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "module", "package", "python" ]
stackoverflow_0002269697_module_package_python.txt
Q: Why can't I do this INSERT in MYSQL? (Python MySQLdb) This is a follow-up to this question I asked earlier: Why can't I insert into MySQL? That question solved it partly. Now I'm doing it in Python and it's not working :( cursor.execute("INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))",the_user_id, utm_easting, utm_northing) I even did float(utm_easting) and float(utm_northing) Edit: this is the error: execute() takes at most 3 arguments (5 given) A: From here (pdf): Following the statement string argument to execute(), provide a tuple containing the values to be bound to the placeholders, in the order they should appear within the string. If you have only a single value x, specify it as (x,) to indicate a single-element tuple. tl;dr: cursor.execute("""INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))""", (the_user_id, utm_easting, utm_northing)) Edit: you can alternatively pass a list as execute()'s second argument. cursor.execute("""INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))""", [the_user_id, utm_easting, utm_northing]) A: This could depend on whatever API you use for SQL calls, but it could be that either: a) values are in fact not strings and you need to replace %s with appropriate types (%d for integers, for example?), or b) string values need to be quoted like this: values('%s',PointFromWKB(point('%s','%s'))) A: Solved. Put parentheses around my variables. cursor.execute("INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))",(the_user_id, utm_easting, utm_northing))
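For completeness, a sketch of the whole round trip with the parameters passed as one tuple; the connection credentials are placeholders, and the variable names come from the question:
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')  # hypothetical credentials
cursor = conn.cursor()
cursor.execute(
    "INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))",
    (the_user_id, utm_easting, utm_northing))   # one tuple = the single second argument
conn.commit()   # MySQLdb disables autocommit per the DB-API, so commit explicitly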
Why can't I do this INSERT in MYSQL? (Python MySQLdb)
This is a follow up to this question I asked earlier: Why can't I insert into MySQL? That question solved it partly. Now I'm doing it in Python and it's not working :( cursor.execute("INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))",the_user_id, utm_easting, utm_northing) I even did float(utm_easting) and float(utm_northing) Edit: this is the error: execute() takes at most 3 arguments (5 given)
[ "From here (pdf):\n\nFollowing the statement string\n argument to execute(), provide a tuple\n containing the values to be bound to\n the placeholders, in the order they\n should appear within the string. If\n you have only a single value x,\n specify it as (x,) to indicate a\n single-element tuple.\n\ntl;dr:\ncursor.execute(\"\"\"INSERT INTO life(user_id, utm) \n values(%s,PointFromWKB(point(%s,%s)))\"\"\", \n (the_user_id, utm_easting, utm_northing))\n\nEdit: you can alternatively pass a list as execute()'s second argument.\ncursor.execute(\"\"\"INSERT INTO life(user_id, utm) \n values(%s,PointFromWKB(point(%s,%s)))\"\"\", \n [the_user_id, utm_easting, utm_northing])\n\n", "This could depend on whatever API you use for SQL calls, but it could be that either:\na) values are in fact not strings and you need to replace %s with appropriate types (%d for integers, for example?), or \nb) string values need to be quoted like this: values('%s',PointFromWKB(point('%s','%s')))\n", "Solved. Put parantheses around my variables.\ncursor.execute(\"INSERT INTO life(user_id, utm) values(%s,PointFromWKB(point(%s,%s)))\",(the_user_id, utm_easting, utm_northing))\n\n" ]
[ 4, 1, 1 ]
[]
[]
[ "database", "insert", "mysql", "python" ]
stackoverflow_0002269776_database_insert_mysql_python.txt
Q: Why does Sphinx generate json? I notice that Sphinx has the ability to generate documentation in JSON. What are these files used for? A: As the docs say, it's for use of a web application (or custom postprocessing tool) that doesn’t use the standard HTML templates. json's a good simple way for language-agnostic data interchange, so, why not?-) A: I assume you're talking about the SerializingHTMLBuilder, in which case I think the answer might be that there isn't necessarily a specific purpose in mind. Rather, many things provide conversion routines of various kinds with a "loads/dumps" API convention, and the json module (known as simplejson before it was brought into the standard library in 2.6) is but one of many such packages. Presumably some people would prefer to work with data in JSON format for their own purposes. If I were trying to build some sort of dynamic Javascripty documentation system, I could well imagine choosing to use JSON as the way to get documentation from the backend out to the client in a manageable format, if for some reason HTML or XML didn't seem like the better option.
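As a small illustration of consuming that output, here's a sketch of loading one of the serialized pages; the path, the .fjson extension, and the 'title'/'body' keys are assumptions based on a default json build, so check your own build output first:
import json    # simplejson on Python < 2.6

f = open('_build/json/index.fjson')   # hypothetical output path from a json build
page = json.load(f)
f.close()

print page['title']
print page['body'][:200]    # rendered HTML fragment for the page (assumed key)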
Why does Sphinx generate json?
I notice that Sphinx has the ability to generate documentation in JSON. What are these files used for?
[ "As the docs say, it's\n\nfor use of a web application (or\n custom postprocessing tool) that\n doesn’t use the standard HTML\n templates.\n\njson's a good simple way for language-agnostic data interchange, so, why not?-)\n", "I assume you're talking about the SerializingHTMLBuilder, in which case I think the answer might be that there isn't necessarily a specific purpose in mind. Rather many things provide conversion routines of various kinds with a \"loads/dumps\" API convention, and the json module (known as simplejson before it was brought standard library in 2.6) is but one of many such packages.\nPresumably some people would prefer to work with data in JSON format for their own purposes. If I were trying to build some sort of dynamic Javascripty documentation system, I could well imagine choosing to use JSON as the way to get documentation from the backend out to the client in a manageable format, if for some reason HTML or XML didn't seem like the better option.\n" ]
[ 6, 0 ]
[]
[]
[ "json", "python", "python_sphinx" ]
stackoverflow_0002269895_json_python_python_sphinx.txt
Q: Python .flv media file conversion I'm looking for a library similar to FFDshow to help me convert .flv to .avi format and possibly do more. I understand that I can do this via VLC player, but I'd rather do it manually with Python (and in bulk). Similar to: media conversion library/plugin preferably php python automate ffmpeg conversion from upload directory A: pygst with the right plugins can read .flv files (and write other formats). A: Use ffmpeg. You can invoke it from Python, if you want to. ffmpeg -i in.flv -f avi -vcodec mpeg4 -acodec libmp3lame out.avi Full documentation for converting files with ffmpeg can be found here.
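A sketch of driving the ffmpeg command line from the second answer out of Python, in bulk; the codec flags are taken verbatim from that answer:
import glob
import os
import subprocess

for src in glob.glob('*.flv'):
    dst = os.path.splitext(src)[0] + '.avi'
    returncode = subprocess.call(['ffmpeg', '-i', src, '-f', 'avi',
                                  '-vcodec', 'mpeg4', '-acodec', 'libmp3lame', dst])
    if returncode != 0:
        print 'conversion failed for', src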
Python .flv media file conversion
I'm looking for a library similar to FFDshow to help me convert .flv to .avi format and possibly do more. I understand that I can do this via VLC player, but I'd rather do it manually with Python (and in bulk). Similar to: media conversion library/plugin preferably php python automate ffmpeg conversion from upload directory
[ "pygst with the right plugins can read .flv files (and write other formats).\n", "Use ffmpeg. You can invoke it from python, if you want to.\nffmpeg -i in.flv -f avi -vcodec mpeg4 -acodec libmp3lame out.avi\n\nFull ducumentation for converting files with ffmpeg can be found here.\n" ]
[ 2, 2 ]
[]
[]
[ "bulk", "flv", "multimedia", "python" ]
stackoverflow_0002267952_bulk_flv_multimedia_python.txt
Q: Python - codec encoding ascii to unicode: error :) I am trying to go about the process of reversing transliteration of an input file(currently in english) back to its original form(in hindi) A sample or a part of the input file looks like this: E-k- b-u-d-z*dhi-m-aan- p-ksii# E-k- ghn-e- j-ngg-l- m-e-ng E-k- b-h-u-t- UUNNc-aa p-e-dr thaa# U-s- k-ii p-t-z*t-o-ng s-e- l-d-ii shaakhaay-e-ng m-j-*zb-uut- b-aaj-u-O-ng k-ii t-r-h- pheil-ii h-u-II thiing# w-n- h-NNs-o-ng k-aa E-k- jhu-nhz*D- I-s- p-e-dr p-r- n-i-w-aas- k-r-t-aa thaa# w-e- s-b- y-h-aaNN s-u-r-ksi-t- the- AUr- b-dre- AAr-aam- s-e- r-h-t-e- the-# U-n- m-e-ng s-e- E-k- p-ksii b-h-u-t- b-u-d-z*dhi-m-aan- thaa# I-s- b-u-d-z*dhi-m-aan- p-ksii n-e- E-k- d-i-n- p-e-dr k-ii j-dr m-e-ng s-e- E-k- l-t-aa k-o- U-g-t-e- d-e-khaa# I-s- k-e- b-aar-e- m-e-ng U-s-n-e- d-uus-r-e- p-ksi-y-o-ng s-e- b-aat- k-ii# "k-z*y-aa t-u-m-z*h-e-ng w-h- l-t-aa d-i-khaaII d-e-t-ii h-ei", U-s- n-e- U-n- s-e- p-uuchaa "t-u-m-z*h-e-ng I-s-e- n-Shz*T- k-r- d-e-n-aa c-aah-i-E-"# "I-s-e- k-z*y-o-ng n-Shz*T- k-r- d-e-n-aa c-aah-i-E-?" h-NNs-o-ng n-e- AAshz*c-*ry- s-e- p-uuchaa "y-h- t-o- I-t-n-ii cho-T-ii s-e- h-ei# h-m-e-ng y-h- k-z*y-aa h-aan-i- p-h-u-NNc-aa s-k-t-ii h-ei"# "m-e-r-e- m-i-tro-ng," b-u-d-z*dhi-m-aan- p-ksii n-e- U-t-z*t-r- d-i-y-aa "w-h- cho-T-ii s-ii l-t-aa j-l-z*d-ii h-ii b-drii h-o- j-aay-e-g-ii# y-h- h-m-aar-e- p-e-dr p-r- c-Dh*z k-r- U-s- s-e- l-i-p-T-t-ii j-aay-e-g-ii AUr- phi-r- m-o-T-ii AUr- m-j-*zb-uut- h-o- j-aay-e-g-ii"# "t-o- k-z*y-aa h-u-AA"# Its equivalent meaning in english is: A WISE OLD BIRD. Deep in the forest stood a very tall tree. Its leafy branches spread out like long arms. This was the home of a flock of wild geese. They were safe there. One of the geese was a wild old bird. One day this wise old bird noticed a small creeper growing at the foot of the tree. He spoke to the other birds about it. "Do you see that creeper ?" he said to them. "You must destroy it." "Why must we destroy it ?" asked the geese in surprise. "It is so small. What harm can it do?" "My friends," replied the wise old bird, " that little creeper will soon grow. My script looks like this: #!/usr/bin/python # -*- coding: UTF-8 -*- import sys CODEC = 'utf-8' input_file=sys.argv[1] output_file=sys.argv[2] list1=[] f=open(input_file,'r') f1 = open(output_file,'w') english_hindi_dict={'A' : u'अ' , 'AA' : u'आ ' , 'I' : u'इ' , 'II' : u'ई ' , 'U' : u'उ ' ,\ 'UU' : u'ऊ' , 'r' : u'ऋ' , 'E' : u'ए' , 'ai' : u'ऐ' , 'O' : u'ओ' , 'AU' : u'औ' ,\ 'k' : u'क' , 'kh' : u'ख' , 'g' : u'ग' , 'gh' : u'घ' , 'c' : u'च' , 'ch' : u'छ',\ 'j': u'ज' , 'jh' : u'झ' , 'tr' : u'त्र' , 'T' : u'ट' , 'Th' : u'ठ' , 'D' : u'ड',\ 'dr' : u'ड' , 'Dh' : u'ढ' , 'Na' : u'ण' , 'th' : u'त' , 'tha' : u'थ',\ 'd' : u'द' , 'dh': u'ध' , 'n' : u'न' , 'p' : u'प' , 'ph' : u'फ' ,\ 'b' : u'ब' , 'bh' : u'भ' , 'm' : u'म' , 'y' : u'य' , 'r' : u'र' , 'l' : u'ल' ,\ 'w' : u'व' , 'sh' : u'श' , 'sha' : u'ष', 's' : u'स' , 'h' : u'ह' , 'ks' : u'क्ष' ,\ 'i' : u'ि' , 'ii' : u'ी' , 'u' : u'ु' , 'uu' : u'ू' , 'e' : u'े' ,\ 'aa' : u'ै' , 'o' : u'ो' , 'AU' : u'ौ' ,'H' : u'्' ,'mn' : u'ं' ,\ 'NN' : u'ँ' , 'AW' : u'ॅ' , 'rr' : u'ृ' , '4' : u'४' , '6': u'६' , '8' : u'८',\ '2' : u'२' , '5' : u'५' , '3' : u'३' , '7' : u'७' , '9' : u'९' , '1' : u'१'} for line in f: #line=line.strip() to remove a line from its newline character.... 
#line=line.rstrip('.') line=line.replace('-','') line=line.replace('#','|') # i am using the or symbol for poornviram #line=line.replace('।','') #line = line.lower() for word in line: for ch in word: if (ch in english_hindi_dict) : translatedToken = english_hindi_dict[ch] else : translatedToken = ch #{ translatedToken = english_hindi_dict[ch] } #for ch in line: f1.write(translatedToken) #print translatedToken #line = line.replace( char,english_hindi_dict[char] ) #list1.append(line) f.close() f1.write(' '.join(list1)) f1.close() the error that I am getting is: python transliterate_eh_nw.py Hstory.txt op1.txt Traceback (most recent call last): File "transliterate_eh_nw.py", line 43, in <module> f1.write(translatedToken) UnicodeEncodeError: 'ascii' codec can't encode character u'\u092f' in position 0: ordinal not in range(128) Could you please tell me how do I deal with this error. Thank you..:) A: You have a few problems other than the one which you asked about. (1) A conceptual problem: "E-k- b-u-d-z*dhi-m-aan- p-ksii#" is not "english". It is Hindi language written in ASCII using some romanization scheme. It looks like ITRAN but ITRAN doesn't have AA and A, it has only aa and a. Does the scheme have a name? Can you supply a URL? Your object is better described as "transliterate some Hindi text from the unnamed romanization to Devanagari script". (2) Showing the result of translating your text from Hindi to English ("A WISE OLD BIRD" etc) is only moderately useful. The expected Devanagari output would be a better idea. (3) As remarked by @kaiser.se, the transliteration dictionary has multi-byte (up to 3 bytes!) keys, some of which are prefixes of others. Presumably AA must be recognised in priority to A, gh must be recognised before g, etc. Iterating over the items of a dictionary happens in an order that is predictable but for your purposes should be regarded as random. In the code that follows, I've given priority to longer "keys". (4) Either the dictionary is missing some letter keys (a S t z) or the transliteration rules are more complicated than any of us has guessed so far (5) The meaning of the characters # * and - is not 100% obvious. It appears from your input text that z and * appear only in combination as z* (6) It would be a good idea if you explained the interpretation of e.g. shaakhaay-e-ng ... does it start with sh then aa or does it start with sha then a? What are the rules? The answer to the problem that you asked about is of course as several others have pointed out that you need to encode your unicode output using an encoding that is supported by your display device e.g. UTF-8. 
Here's some code: #!/usr/bin/python # -*- coding: UTF-8 -*- input_data = """ E-k- b-u-d-z*dhi-m-aan- p-ksii# E-k- ghn-e- j-ngg-l- m-e-ng E-k- b-h-u-t- UUNNc-aa p-e-dr thaa# [snip] "t-o- k-z*y-aa h-u-AA"# """ roman_devanagari_dict={'A' : u'अ' , 'AA' : u'आ ' , 'I' : u'इ' , 'II' : u'ई ' , 'U' : u'उ ' ,\ [snip] '2' : u'२' , '5' : u'५' , '3' : u'३' , '7' : u'७' , '9' : u'९' , '1' : u'१'} #Presuming we need to do the 3-letter cases then the 2-letter then the 1-letter replacements = [(-len(k), unicode(k), v) for k, v in roman_devanagari_dict.items()] replacements.sort() data = input_data.decode('ascii') for _junk, from_text, to_text in replacements: data = data.replace(from_text, to_text) # Presuming the '-' are inter-character markers, delete them last, not first data = data.replace(u'-', '') data = data.replace(u'#', '') print "untransliterated:", set(c for c in data if 0x20 < ord(c) < 0x7f) BOM = u'\ufeff' outf = open('devanagari.txt', 'w') outf.write(BOM.encode('utf8')) # for the benefit of clueless Windows s/w outf.write(data.encode('utf8')) outf.close() Output: एक बुदz*धिमैन पक्षी एक घने जनगगल मेनग एक बहुt ऊँचै पेड थa उ स की पtztोनग से लदी षaखैयेनग मजzबूt बैजुओनग की tरह फेिली हुई तीनग वन हँसोनग कै एक झुनहzड इस पेड पर निवैस करtै थa वे सब यहैँ सुरक्षिt ते ौर बडे आ रैम से रहtे ते उ न मेनग से एक पक्षी बहुt बुदzधिमैन थa इस बुदzधिमैन पक्षी ने एक दिन पेड की जड मेनग से एक लtै को उ गtे देखै इस के बैरे मेनग उ सने दूसरे पक्षियोनग से बैt की "कzयै tुमzहेनग वह लtै दिखैई देtी हेि", उ स ने उ न से पूछै "tुमzहेनग इसे नSहzट कर देनै चैहिए" "इसे कzयोनग नSहzट कर देनै चैहिए?" हँसोनग ने आ शzचरय से पूछै "यह tो इtनी छोटी से हेि हमेनग यह कzयै हैनि पहुँचै सकtी हेि" "मेरे मित्रोनग," बुदzधिमैन पक्षी ने उ tztर दियै "वह छोटी सी लtै जलzदी ही बडी हो जैयेगी यह हमैरे पेड पर चढz कर उ स से लिपटtी जैयेगी ौर फिर मोटी ौर मजzबूt हो जैयेगी" "tो कzयै हुआ " which has only a few recognisable words when shoved through Google Translate. Update after examining the transliteration table more closely: Three of the entries (AA, II, and U) have a space after the Devanagari equivalent. Perhaps the spaces should be removed. The general pattern for consonants appears to be: DEVANAGARI LETTER XA is represented by x DEVANAGARI LETTER XXA is represented by X DEVANAGARI LETTER XHA is represented by xh DEVANAGARI LETTER XXHA is represented by Xh However 3 entries break the pattern: SSA -> sha but pattern says S TA -> th but pattern says t THA -> tha but pattern says th Note: changing the above 3 entries stopped my code from complaining that S and t were left unchanged when transliterating your sample text, and removed the seemingly-anomalous sha and tha entries. Entries (D and dr) are mapped to the same character, DEVANAGARI LETTER DDA. D is the expected entry for that character; perhaps dr should be mapped elsewhere. There is no entry for DEVANAGARI LETTER NGA (U+0919); perhaps it should be encoded as ng -- there are a few words ending in ng in the sample text. Are the uncatered-for "z*" occurrences in the sample text anything to do with DEVANAGARI LETTER ZA (U+095B)? A: f1.write(' '.join(list1)) list1, at this point, contains Unicode strings. You can't write Unicode directly to a file, it's a byte interface. You should either encode it explicitly (' '.join(list1).encode('utf-8')), or, as Ignacio suggests, use a codecs wrapper to implicitly encode Unicode strings you send to it. At the moment you are defining a variable CODEC, but not doing anything with it. A: Are you sure you want to remove all the hyphens(-)? 
Looking at your input file, it looks like all replacements are two- or three-character codes, such as u'I-':u'इ'. If this is so, you could do something like below, but make sure you're using Unicode strings for all your keys and values in the dictionary: import codecs # read the whole file at once f = codecs.open(input_file,'r','ascii') data = f.read() f.close() # perform all the replacements for k,v in english_hindi_dict.items(): data = data.replace(k,v) # write the whole file result f = codecs.open(output_file,'w',CODEC) f.write(data) f.close() Following that theory, I got the following result, which looks like translations such as 'z*', 't-', 'ng', and 'ei' are missing from the dictionary. I don't read Hindi, but Google Translate came up with some of the English words in your translation, so I think I'm on the right track. -z*धिमैन पक्षी एक घने जngगल मेng एक बहुt- ऊँचै पेड तै उस की पt-z*t-ोng से लदी शैखैयेng मज*zबूt- बैजुओng की t-रह फeiली हुई तीng वन हँसोng कै एक झुnhz*ड इस पेड पर निवैस करt-ै तै वे सब यहैँ सुरक्षिt- ते ौर बडे आरैम से रहt-े ते उन मेng से एक पक्षी बहुt- बुदz*धिमैन तै इस बुदz*धिमैन पक्षी ने एक दिन पेड की जड मेng से एक लt-ै को उगt-े देखै इस के बैरे मेng उसने दूसरे पक्षियोng से बैt- की "कz*यै t-ुमz*हेng वह लt-ै दिखैई देt-ी हei", उस ने उन से पूछै "t-ुमz*हेng इसे नShz*ट कर देनै चैहिए" "इसे कz*योng नShz*ट कर देनै चैहिए?" हँसोng ने आशz*च*rय से पूछै "यह t-ो इt-नी छोटी से हei हमेng यह कz*यै हैनि पहुँचै सकt-ी हei" "मेरे मित्रोng," बुदz*धिमैन पक्षी ने उt-z*t-र दियै "वह छोटी सी लt-ै जलz*दी ही बडी हो जैयेगी यह हमैरे पेड पर चढ*z कर उस से लिपटt-ी जैयेगी ौर फिर मोटी ौर मज*zबूt- हो जैयेगी" "t-ो कz*यै हुआ"
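Tying the answers back to the exact error in the question: the write fails because translatedToken is a unicode string and the plain file object expects bytes, so either encode explicitly or let codecs do it (both snippets reuse the question's own names):
# explicit: encode each unicode string before writing
f1.write(translatedToken.encode(CODEC))

# implicit: open the output through codecs, then unicode can be written directly
import codecs
f1 = codecs.open(output_file, 'w', CODEC)
f1.write(translatedToken)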
Python - codec encoding ascii to unicode: error
:) I am trying to go about the process of reversing transliteration of an input file(currently in english) back to its original form(in hindi) A sample or a part of the input file looks like this: E-k- b-u-d-z*dhi-m-aan- p-ksii# E-k- ghn-e- j-ngg-l- m-e-ng E-k- b-h-u-t- UUNNc-aa p-e-dr thaa# U-s- k-ii p-t-z*t-o-ng s-e- l-d-ii shaakhaay-e-ng m-j-*zb-uut- b-aaj-u-O-ng k-ii t-r-h- pheil-ii h-u-II thiing# w-n- h-NNs-o-ng k-aa E-k- jhu-nhz*D- I-s- p-e-dr p-r- n-i-w-aas- k-r-t-aa thaa# w-e- s-b- y-h-aaNN s-u-r-ksi-t- the- AUr- b-dre- AAr-aam- s-e- r-h-t-e- the-# U-n- m-e-ng s-e- E-k- p-ksii b-h-u-t- b-u-d-z*dhi-m-aan- thaa# I-s- b-u-d-z*dhi-m-aan- p-ksii n-e- E-k- d-i-n- p-e-dr k-ii j-dr m-e-ng s-e- E-k- l-t-aa k-o- U-g-t-e- d-e-khaa# I-s- k-e- b-aar-e- m-e-ng U-s-n-e- d-uus-r-e- p-ksi-y-o-ng s-e- b-aat- k-ii# "k-z*y-aa t-u-m-z*h-e-ng w-h- l-t-aa d-i-khaaII d-e-t-ii h-ei", U-s- n-e- U-n- s-e- p-uuchaa "t-u-m-z*h-e-ng I-s-e- n-Shz*T- k-r- d-e-n-aa c-aah-i-E-"# "I-s-e- k-z*y-o-ng n-Shz*T- k-r- d-e-n-aa c-aah-i-E-?" h-NNs-o-ng n-e- AAshz*c-*ry- s-e- p-uuchaa "y-h- t-o- I-t-n-ii cho-T-ii s-e- h-ei# h-m-e-ng y-h- k-z*y-aa h-aan-i- p-h-u-NNc-aa s-k-t-ii h-ei"# "m-e-r-e- m-i-tro-ng," b-u-d-z*dhi-m-aan- p-ksii n-e- U-t-z*t-r- d-i-y-aa "w-h- cho-T-ii s-ii l-t-aa j-l-z*d-ii h-ii b-drii h-o- j-aay-e-g-ii# y-h- h-m-aar-e- p-e-dr p-r- c-Dh*z k-r- U-s- s-e- l-i-p-T-t-ii j-aay-e-g-ii AUr- phi-r- m-o-T-ii AUr- m-j-*zb-uut- h-o- j-aay-e-g-ii"# "t-o- k-z*y-aa h-u-AA"# Its equivalent meaning in english is: A WISE OLD BIRD. Deep in the forest stood a very tall tree. Its leafy branches spread out like long arms. This was the home of a flock of wild geese. They were safe there. One of the geese was a wild old bird. One day this wise old bird noticed a small creeper growing at the foot of the tree. He spoke to the other birds about it. "Do you see that creeper ?" he said to them. "You must destroy it." "Why must we destroy it ?" asked the geese in surprise. "It is so small. What harm can it do?" "My friends," replied the wise old bird, " that little creeper will soon grow. My script looks like this: #!/usr/bin/python # -*- coding: UTF-8 -*- import sys CODEC = 'utf-8' input_file=sys.argv[1] output_file=sys.argv[2] list1=[] f=open(input_file,'r') f1 = open(output_file,'w') english_hindi_dict={'A' : u'अ' , 'AA' : u'आ ' , 'I' : u'इ' , 'II' : u'ई ' , 'U' : u'उ ' ,\ 'UU' : u'ऊ' , 'r' : u'ऋ' , 'E' : u'ए' , 'ai' : u'ऐ' , 'O' : u'ओ' , 'AU' : u'औ' ,\ 'k' : u'क' , 'kh' : u'ख' , 'g' : u'ग' , 'gh' : u'घ' , 'c' : u'च' , 'ch' : u'छ',\ 'j': u'ज' , 'jh' : u'झ' , 'tr' : u'त्र' , 'T' : u'ट' , 'Th' : u'ठ' , 'D' : u'ड',\ 'dr' : u'ड' , 'Dh' : u'ढ' , 'Na' : u'ण' , 'th' : u'त' , 'tha' : u'थ',\ 'd' : u'द' , 'dh': u'ध' , 'n' : u'न' , 'p' : u'प' , 'ph' : u'फ' ,\ 'b' : u'ब' , 'bh' : u'भ' , 'm' : u'म' , 'y' : u'य' , 'r' : u'र' , 'l' : u'ल' ,\ 'w' : u'व' , 'sh' : u'श' , 'sha' : u'ष', 's' : u'स' , 'h' : u'ह' , 'ks' : u'क्ष' ,\ 'i' : u'ि' , 'ii' : u'ी' , 'u' : u'ु' , 'uu' : u'ू' , 'e' : u'े' ,\ 'aa' : u'ै' , 'o' : u'ो' , 'AU' : u'ौ' ,'H' : u'्' ,'mn' : u'ं' ,\ 'NN' : u'ँ' , 'AW' : u'ॅ' , 'rr' : u'ृ' , '4' : u'४' , '6': u'६' , '8' : u'८',\ '2' : u'२' , '5' : u'५' , '3' : u'३' , '7' : u'७' , '9' : u'९' , '1' : u'१'} for line in f: #line=line.strip() to remove a line from its newline character.... 
#line=line.rstrip('.') line=line.replace('-','') line=line.replace('#','|') # i am using the or symbol for poornviram #line=line.replace('।','') #line = line.lower() for word in line: for ch in word: if (ch in english_hindi_dict) : translatedToken = english_hindi_dict[ch] else : translatedToken = ch #{ translatedToken = english_hindi_dict[ch] } #for ch in line: f1.write(translatedToken) #print translatedToken #line = line.replace( char,english_hindi_dict[char] ) #list1.append(line) f.close() f1.write(' '.join(list1)) f1.close() the error that I am getting is: python transliterate_eh_nw.py Hstory.txt op1.txt Traceback (most recent call last): File "transliterate_eh_nw.py", line 43, in <module> f1.write(translatedToken) UnicodeEncodeError: 'ascii' codec can't encode character u'\u092f' in position 0: ordinal not in range(128) Could you please tell me how do I deal with this error. Thank you..:)
[ "You have a few problems other than the one which you asked about.\n(1) A conceptual problem: \"E-k- b-u-d-z*dhi-m-aan- p-ksii#\" is not \"english\". It is Hindi language written in ASCII using some romanization scheme. It looks like ITRAN but ITRAN doesn't have AA and A, it has only aa and a. Does the scheme have a name? Can you supply a URL? Your object is better described as \"transliterate some Hindi text from the unnamed romanization to Devanagari script\".\n(2) Showing the result of translating your text from Hindi to English (\"A WISE OLD BIRD\" etc) is only moderately useful. The expected Devanagari output would be a better idea.\n(3) As remarked by @kaiser.se, the transliteration dictionary has multi-byte (up to 3 bytes!) keys, some of which are prefixes of others. Presumably AA must be recognised in priority to A, gh must be recognised before g, etc. Iterating over the items of a dictionary happens in an order that is predictable but for your purposes should be regarded as random. In the code that follows, I've given priority to longer \"keys\".\n(4) Either the dictionary is missing some letter keys (a S t z) or the transliteration rules are more complicated than any of us has guessed so far\n(5) The meaning of the characters # * and - is not 100% obvious. It appears from your input text that z and * appear only in combination as z*\n(6) It would be a good idea if you explained the interpretation of e.g. shaakhaay-e-ng ... does it start with sh then aa or does it start with sha then a? What are the rules? \nThe answer to the problem that you asked about is of course as several others have pointed out that you need to encode your unicode output using an encoding that is supported by your display device e.g. UTF-8.\nHere's some code:\n#!/usr/bin/python\n# -*- coding: UTF-8 -*-\n\ninput_data = \"\"\"\nE-k- b-u-d-z*dhi-m-aan- p-ksii#\n\nE-k- ghn-e- j-ngg-l- m-e-ng E-k- b-h-u-t- UUNNc-aa p-e-dr thaa#\n[snip]\n\"t-o- k-z*y-aa h-u-AA\"#\n\"\"\"\n\nroman_devanagari_dict={'A' : u'अ' , 'AA' : u'आ ' , 'I' : u'इ' , 'II' : u'ई ' , 'U' : u'उ ' ,\\\n[snip]\n '2' : u'२' , '5' : u'५' , '3' : u'३' , '7' : u'७' , '9' : u'९' , '1' : u'१'}\n\n#Presuming we need to do the 3-letter cases then the 2-letter then the 1-letter\nreplacements = [(-len(k), unicode(k), v) for k, v in roman_devanagari_dict.items()]\nreplacements.sort()\n\ndata = input_data.decode('ascii')\n\nfor _junk, from_text, to_text in replacements:\n data = data.replace(from_text, to_text)\n\n# Presuming the '-' are inter-character markers, delete them last, not first\ndata = data.replace(u'-', '')\ndata = data.replace(u'#', '')\nprint \"untransliterated:\", set(c for c in data if 0x20 < ord(c) < 0x7f)\n\nBOM = u'\\ufeff'\noutf = open('devanagari.txt', 'w')\noutf.write(BOM.encode('utf8')) # for the benefit of clueless Windows s/w\noutf.write(data.encode('utf8'))\noutf.close()\n\nOutput:\nएक बुदz*धिमैन पक्षी\nएक घने जनगगल मेनग एक बहुt ऊँचै पेड थa\nउ स की पtztोनग से लदी षaखैयेनग मजzबूt बैजुओनग की tरह फेिली हुई तीनग\nवन हँसोनग कै एक झुनहzड इस पेड पर निवैस करtै थa\nवे सब यहैँ सुरक्षिt ते ौर बडे आ रैम से रहtे ते\nउ न मेनग से एक पक्षी बहुt बुदzधिमैन थa\nइस बुदzधिमैन पक्षी ने एक दिन पेड की जड मेनग से एक लtै को उ गtे देखै \nइस के बैरे मेनग उ सने दूसरे पक्षियोनग से बैt की\n\"कzयै tुमzहेनग वह लtै दिखैई देtी हेि\", उ स ने उ न से पूछै \"tुमzहेनग इसे नSहzट कर देनै चैहिए\"\n\"इसे कzयोनग नSहzट कर देनै चैहिए?\" हँसोनग ने आ शzचरय से पूछै \"यह tो इtनी छोटी से हेि\nहमेनग यह कzयै हैनि पहुँचै सकtी हेि\"\n\"मेरे मित्रोनग,\" बुदzधिमैन पक्षी ने उ tztर दियै 
\"वह छोटी सी लtै जलzदी ही बडी हो जैयेगी\nयह हमैरे पेड पर चढz कर उ स से लिपटtी जैयेगी ौर फिर मोटी ौर मजzबूt हो जैयेगी\"\n\"tो कzयै हुआ \"\nwhich has only a few recognisable words when shoved through Google Translate.\nUpdate after examining the transliteration table more closely:\n\nThree of the entries (AA, II, and U) have a space after the Devanagari equivalent. Perhaps the spaces should be removed.\nThe general pattern for consonants appears to be:\n\nDEVANAGARI LETTER XA is represented by x\nDEVANAGARI LETTER XXA is represented by X\nDEVANAGARI LETTER XHA is represented by xh\nDEVANAGARI LETTER XXHA is represented by Xh \nHowever 3 entries break the pattern:\nSSA -> sha but pattern says S\nTA -> th but pattern says t\nTHA -> tha but pattern says th \nNote: changing the above 3 entries stopped my code from complaining that S and t were left unchanged when transliterating your sample text, and removed the seemingly-anomalous sha and tha entries.\n\nEntries (D and dr) are mapped to the same character, DEVANAGARI LETTER DDA. D is the expected entry for that character; perhaps dr should be mapped elsewhere.\nThere is no entry for DEVANAGARI LETTER NGA (U+0919); perhaps it should be encoded as ng -- there are a few words ending in ng in the sample text.\nAre the uncatered-for \"z*\" occurrences in the sample text anything to do with DEVANAGARI LETTER ZA (U+095B)?\n\n", "\nf1.write(' '.join(list1))\n\nlist1, at this point, contains Unicode strings. You can't write Unicode directly to a file, it's a byte interface. You should either encode it explicitly (' '.join(list1).encode('utf-8')), or, as Ignacio suggests, use a codecs wrapper to implicitly encode Unicode strings you send to it. At the moment you are defining a variable CODEC, but not doing anything with it.\n", "Are you sure you want to remove all the hyphens(-)? Looking at your input file, it looks like all replacements are two- or three-character codes, such as u'I-':u'इ'. If this is so, you could do something like below, but make sure you're using Unicode strings for all your keys and values in the dictionary:\nimport codecs\n\n# read the whole file at once\nf = codecs.open(input_file,'r','ascii')\ndata = f.read()\nf.close()\n\n# perform all the replacements\nfor k,v in english_hindi_dict.items():\n data = data.replace(k,v)\n\n# write the whole file result\nf = codecs.open(output_file,'w',CODEC)\nf.write(data)\nf.close()\n\nFollowing that theory, I got the following result, which looks like translations such as 'z*', 't-', 'ng', and 'ei' are missing from the dictionary. I don't read Hindi, but Google Translate came up with some of the English words in your translation, so I think I'm on the right track.\n-z*धिमैन पक्षी\n\nएक घने जngगल मेng एक बहुt- ऊँचै पेड तै\nउस की पt-z*t-ोng से लदी शैखैयेng मज*zबूt- बैजुओng की t-रह फeiली हुई तीng\nवन हँसोng कै एक झुnhz*ड इस पेड पर निवैस करt-ै तै\nवे सब यहैँ सुरक्षिt- ते ौर बडे आरैम से रहt-े ते\nउन मेng से एक पक्षी बहुt- बुदz*धिमैन तै\nइस बुदz*धिमैन पक्षी ने एक दिन पेड की जड मेng से एक लt-ै को उगt-े देखै \nइस के बैरे मेng उसने दूसरे पक्षियोng से बैt- की\n\"कz*यै t-ुमz*हेng वह लt-ै दिखैई देt-ी हei\", उस ने उन से पूछै \"t-ुमz*हेng इसे नShz*ट कर देनै चैहिए\"\n\"इसे कz*योng नShz*ट कर देनै चैहिए?\" हँसोng ने आशz*च*rय से पूछै \"यह t-ो इt-नी छोटी से हei\nहमेng यह कz*यै हैनि पहुँचै सकt-ी हei\"\n\"मेरे मित्रोng,\" बुदz*धिमैन पक्षी ने उt-z*t-र दियै \"वह छोटी सी लt-ै जलz*दी ही बडी हो जैयेगी\nयह हमैरे पेड पर चढ*z कर उस से लिपटt-ी जैयेगी ौर फिर मोटी ौर मज*zबूt- हो जैयेगी\"\n\"t-ो कz*यै हुआ\"\n\n" ]
[ 4, 1, 1 ]
[]
[]
[ "python", "transliteration" ]
stackoverflow_0002265270_python_transliteration.txt
Q: Non-Blocking method for parsing (streaming) XML in python I have an XML document coming in over a socket that I need to parse and react to on the fly (ie parsing a partial tree). What I'd like is a non blocking method of doing so, so that I can do other things while waiting for more data to come in (without threading). Something like iterparse would be ideal if it finished iterating when the read buffer was empty, eg: context = iterparse(imaginary_socket_file_wrapper) while 1: for event, elem in context: process_elem(elem) # iteration of context finishes when socket has no more data do_other_stuff() time.sleep(0.1) I guess SAX would also be an option, but iterparse just seems simpler for my needs. Any ideas? Update: Using threads is fine, but introduces a level of complexity that I was hoping to sidestep. I thought that non-blocking calls would be a good way to do so, but I'm finding that it increases the complexity of parsing the XML. A: Diving into the iterparse source provided the solution for me. Here's a simple example of building an XML tree on the fly and processing elements after their close tags: import xml.etree.ElementTree as etree parser = etree.XMLTreeBuilder() def end_tag_event(tag): node = parser._end(tag) print node parser._parser.EndElementHandler = end_tag_event def data_received(data): parser.feed(data) In my case I ended up feeding it data from twisted, but it should work with a non-blocking socket also. A: I think there are two components to this, the non-blocking network I/O, and a stream-oriented XML parser. For the former, you'd have to pick a non-blocking network framework, or roll your own solution for this. Twisted certainly would work, but I personally find inversion of control frameworks difficult to wrap my brain around. You would likely have to keep track of a lot of state in your callbacks to feed the parser. For this reason I tend to find Eventlet a bit easier to program to, and I think it would fit well in this situation. Essentially it allows you to write your code as if you were using a blocking socket call (using an ordinary loop or a generator or whatever you like), except that you can spawn it into a separate coroutine (a "greenlet") that will automatically perform a cooperative yield when I/O operations would block, thus allowing other coroutines to run. This makes using any stream-oriented parser trivial again, because the code is structured like an ordinary blocking call. It also means that many libraries that don't directly deal with sockets or other I/O (like the parser for instance) don't have to be specially modified to be non-blocking: if they block, Eventlet yields the coroutine. Admittedly Eventlet is slightly magic, but I find it has a much easier learning curve than Twisted, and results in more straightforward code because you don't have to turn your logic "inside out" to fit the framework. A: If you won't use threads, you can use an event loop and poll non-blocking sockets. asyncore is the standard library module for such stuff. Twisted is the async library for Python, but complex and probably a bit heavyweight for your needs. Alternatively, multiprocessing is the non-thread alternative, but I assume you aren't running 2.6. One way or the other, I think you're going to have to use threads, extra processes or weave some equally complex async magic.
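A sketch of feeding that same incremental parser from a non-blocking socket with select, so the loop from the question can keep doing other work; the endpoint address and do_other_stuff are placeholders:
import select
import socket
import time
import xml.etree.ElementTree as etree

parser = etree.XMLTreeBuilder()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('example.com', 5222))     # hypothetical XML stream endpoint
sock.setblocking(0)                     # switch to non-blocking after connecting

while 1:
    readable, _, _ = select.select([sock], [], [], 0)
    if readable:
        data = sock.recv(4096)
        if not data:
            break                       # peer closed the connection
        parser.feed(data)               # hand the partial XML to the incremental parser
    do_other_stuff()
    time.sleep(0.1)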
Non-Blocking method for parsing (streaming) XML in python
I have an XML document coming in over a socket that I need to parse and react to on the fly (ie parsing a partial tree). What I'd like is a non blocking method of doing so, so that I can do other things while waiting for more data to come in (without threading). Something like iterparse would be ideal if it finished iterating when the read buffer was empty, eg: context = iterparse(imaginary_socket_file_wrapper) while 1: for event, elem in context: process_elem(elem) # iteration of context finishes when socket has no more data do_other_stuff() time.sleep(0.1) I guess SAX would also be an option, but iterparse just seems simpler for my needs. Any ideas? Update: Using threads is fine, but introduces a level of complexity that I was hoping to sidestep. I thought that non-blocking calls would be a good way to do so, but I'm finding that it increases the complexity of parsing the XML.
[ "Diving into the iterparse source provided the solution for me. Here's a simple example of building an XML tree on the fly and processing elements after their close tags:\nimport xml.etree.ElementTree as etree\n\nparser = etree.XMLTreeBuilder()\n\ndef end_tag_event(tag):\n node = self.parser._end(tag)\n print node\n\nparser._parser.EndElementHandler = end_tag_event\n\ndef data_received(data):\n parser.feed(data)\n\nIn my case I ended up feeding it data from twisted, but it should work with a non-blocking socket also.\n", "I think there are two components to this, the non-blocking network I/O, and a stream-oriented XML parser.\nFor the former, you'd have to pick a non-blocking network framework, or roll your own solution for this. Twisted certainly would work, but I personally find inversion of control frameworks difficult to wrap my brain around. You would likely have to keep track of a lot of state in your callbacks to feed the parser. For this reason I tend to find Eventlet a bit easier to program to, and I think it would fit well in this situation.\nEssentially it allows you to write your code as if you were using a blocking socket call (using an ordinary loop or a generator or whatever you like), except that you can spawn it into a separate coroutine (a \"greenlet\") that will automatically perform a cooperative yield when I/O operations would block, thus allowing other coroutines to run.\nThis makes using any stream-oriented parser trivial again, because the code is structured like an ordinary blocking call. It also means that many libraries that don't directly deal with sockets or other I/O (like the parser for instance) don't have to be specially modified to be non-blocking: if they block, Eventlet yields the coroutine.\nAdmittedly Eventlet is slightly magic, but I find it has a much easier learning curve than Twisted, and results in more straightforward code because you don't have to turn your logic \"inside out\" to fit the framework.\n", "If you won't use threads, you can use an event loop and poll non-blocking sockets.\nasyncore is the standard library module for such stuff. Twisted is the async library for Python, but complex and probably a bit heavyweight for your needs.\nAlternatively, multiprocessing is the non-thread thread alternative, but I assume you aren't running 2.6.\nOne way or the other, I think you're going to have to use threads, extra processes or weave some equally complex async magic.\n" ]
[ 8, 4, 1 ]
[]
[]
[ "nonblocking", "parsing", "python", "xml" ]
stackoverflow_0001459648_nonblocking_parsing_python_xml.txt
Q: which is a minimalistic python wsgi development server with support for code reload? From what I can tell wsgiref - no code reload CherryPy - more than just the server mod_wsgi - all the apache overhead paste.httpserver - paste is a huge package with other stuff in it flup - same as paste, too much stuff. Spawning - never used it but seems lightweight enough. Tornado - not really wsgi + full "framework" Werkzeug - runcommand any others out there? which one you prefer? A: One you might want to look at is Werkzeug - it is a WSGI utility toolkit. It includes a runserver function that takes the wsgiref server and adds automatic code reloading (you can also configure it to reload when configuration files change) and an awesome debugger. On a side note, your disdain for frameworks makes it sound like you're planning to handle all the WSGI stuff from scratch, in which case I would recommend you use Werkzeug's utility functions to handle parsing requests and generating responses. It's a lot more fun than doing it yourself. (And for the love of Guido, PLEASE don't use cgi.FieldStorage!) A: Check out run_simple from werkzeug: http://werkzeug.pocoo.org/documentation/0.5.1/serving.html In addition to giving you automatic code reloading, you can use use_debugger=True to include their pretty spiffy debugger on top of your app (which includes console in each line of the traceback). A: One really easy way is CGI (together with a regular web server, and using wsgiref.handlers.CGIHandler). Terrible for performance on a production server, but great for development. You can write a single script that works as both a mod_wsgi WSGIScriptAlias (exposing an application object), and as a mod_cgi ScriptAlias (calling wsgiref when __name__=='__main__'). Many WSGI environments have a way to reload the basic script, for example mod_wsgi's WSGIScriptReloading, which is on by default. Unfortunately, you're likely to be putting much of your code in modules, which isn't so easy to reload. In mod_wsgi you can also do it by sending a SIGINT to perform a reload when in daemon mode. Unfortunately you still have to sniff every module you're using for mtime updates in order to know whether you have to reload. And it doesn't work in embedded mode. A messy but feasible approach is to sniff all modules that are part of your application, and if any have been updated since the last check, reload them all. You have to reload them at once, by removing them all from the sys.modules lookup (remove None-valued entries too whilst you're there, to avoid relative import lookup problems), in order to ensure they don't keep cross-references to the old versions of themselves. And of course they must not leave other references to themselves outside of your application. You can see an example of this in action in the ModuleUpdater class here. (This software isn't ready for release, but has been providing module reloading for my WSGI apps for a few years and seems to be stable. The idea is to put all your WSGI app in an application class in a package, which you can import from a single WSGI/CGI/command-line entry point script; you include the deployment config in that script.) A: So far I've been using CherryPy, and compared to Django (which, while not in your list, is the only other dev server I used) I like it heaps more. It does what it says: it is only there when you need it, and gets out of the way for the rest of the time. Using Django seemed like I needed to subscribe to the Django way of doing things. Although Django provides heaps more functionality out of the box (default admin interface, widgets on your webpages), using CherryPy seems like just another import that has very good (often surprising you with extra) functionality. A: I'd recommend paste or CherryPy. They're the easiest to get up and running with. A: Also, you missed web.py, which is both small and supports code reload. A: You can use paste.reloader with any wsgi-server, aside of other paste modules. # run paste reloader import paste.reloader as reloader reloader.install() # run wsgiref server from wsgiref import simple_server simple_server.make_server('', 8080, main_wsgi_app).serve_forever() Is that minimalistic enough?
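A minimal sketch of the Werkzeug route mentioned above, assuming werkzeug is installed; the handler below is only a placeholder for your real WSGI callable:

from werkzeug.serving import run_simple

def app(environ, start_response):
    # placeholder WSGI app, just enough to see the server run
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, WSGI!']

# use_reloader restarts the process when source files change;
# use_debugger serves Werkzeug's interactive traceback on errors
run_simple('localhost', 5000, app, use_reloader=True, use_debugger=True)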
which is a minimalistic python wsgi development server with support for code reload?
From what I can tell wsgiref - no code reload CherryPy - more than just the server mod_wsgi - all the apache overhead paste.httpserver - paste is a huge package with other stuff in it flup - same as paste, too much stuff. Spawning - never used it but seems lightweight enough. Tornado - not really wsgi + full "framework" Werkzeug - runcommand any others out there? which one you prefer?
[ "One you might want to look at is Werkzeug - it is a WSGI utility toolkit. It includes a runserver function that takes the wsgiref server and adds automatic code reloading (you can also configure it to reload when configuration files change) and an awesome debugger.\nOn a side note, your disdain for frameworks makes it sound like you're planning to handle all the WSGI stuff from scratch, in which case I would recommend you use Werkzeug's utility functions to handle parsing requests and generating responses. It's a lot more fun than doing it yourself. (And for the love of Guido, PLEASE don't use cgi.FieldStorage!)\n", "Check out run_simple from werkzeug:\nhttp://werkzeug.pocoo.org/documentation/0.5.1/serving.html\nIn addition to giving you automatic code reloading, you can use use_debugger=True to include their pretty spiffy debugger on top of your app (which includes console in each line of the traceback).\n", "One really easy way is CGI (together with a regular web server, and using wsgiref.handlers.CGIHandler). Terrible for performance on a production server, but great for development. You can write a single script that works as both a mod_wsgi WSGIScriptAlias (exposing an application object), and as a mod_cgi ScriptAlias (calling wsgiref when __name__=='__main__').\nMany WSGI environments have a way to reload the basic script, for example mod_wsgi's WSGIScriptReloading, which is on by default. Unfortunately, you're likely to be putting much of your code in modules, which isn't so easy to reload. In mod_wsgi you can also do it by sending a SIGINT to perform a reload when in daemon mode. Unfortunately you still have to sniff every module you're using for mtime updates in order to know whether you have to reload. And it doesn't work in embedded mode.\nA messy but feasible approach is to sniff all modules that are part of your application, and if any have been updated since the last check, reload them all. You have to reload them at once, by removing them all from the sys.modules lookup (remove None-valued entries too whilst you're there, to avoid relative import lookup problems), in order to ensure they don't keep cross-references to the old versions of themselves. And of course they must not leave other references to themselves outside of your application. You can see an example of this in action in the ModuleUpdater class here.\n(This software isn't ready for release, but has been providing module reloading for my WSGI apps for a few years and seems to be stable. The idea is to put all your WSGI app in an application class in a package, which you can import from a single WSGI/CGI/command-line entry point script; you include the deployment config in that script.)\n", "So far I've been using CherryPy, and compared to Django (which, while not in your list, is the only other dev server I used) I like it heaps more. It does what is says: it is only there when you need it, and gets out of the way for the rest of the time.\nUsing Django seemed like I needed to subscribe to the Django way of doing things. Although Django provides heaps more functionality out of the box (default admin interface, widgets on your webpages) , using CherryPy seems like just another import that has very good (often surprising you with extra) functionality.\n", "I'd recommend paste or CherryPy. 
They're the easiest to get up and running with.\n", "Also, you missed web.py, which is both small and supports code reload.\n", "You can use paste.reloader with any wsgi-server, aside of other paste modules.\n\n# run paste reloader\nimport paste.reloader as reloader\nreloader.install()\n\n# run wsgiref server\nfrom wsgiref import simple_server\nsimple_server.make_server('', 8080, main_wsgi_app).serve_forever()\n\nIs that minimalistic enough?\n" ]
[ 5, 4, 2, 1, 1, 1, 0 ]
[]
[]
[ "python", "wsgi" ]
stackoverflow_0002161778_python_wsgi.txt
Q: Django: Template context processor request variable I am trying to implement django-facebookconnect, for I need to check if a user logged in via Facebook or a regular user. At the template, I can check if user logged in via facebook by checking request.facebook.uid such as: {% if is_facebook %} {% show_facebook_photo user %} {% endif %} For this, I need to pass 'is_facebook': request.facebook.uid to the template and I will be using this everywhere, thus I tried to apply it to an existing template context processor and call the snippet above at the base.html, and it works fine for Foo objects: def global_variables(request): from django.conf import settings from myproject.myapp.models import Foo return {'is_facebook': request.facebook.uid,'foo_list': Foo.objects.all()} I can list Foo objects at any view without any issue, however it fails for this new is_facebook, it simply returns nothing. If I pass 'is_facebook': request.facebook.uid in every single view, it works but I need this globally for any view rendering. A: If you have access via the request object, why do you need to add a special is_facebook boolean at all? Just enable the built-in django.core.context_processors.request and this will ensure that request is present in all templates, then you can do this: {% if request.facebook.uid %} A: It could be a timing issue. Make sure that the Common middleware comes before the facebook middleware in your settings file. You can probably debug and see when the facebook middleware is modifying the request and when your context processor is invoked. That may give you some clue as to why this is happening. But, as Daniel said, you can always just use the request object in your templates.
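A short sketch of the first answer's suggestion for a Django 1.x settings module (the processor moved to django.template.context_processors.request in later versions, and the auth entry shown is only a placeholder for whatever processors you already use):

# settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    'django.core.context_processors.auth',
    'django.core.context_processors.request',  # exposes `request` to every template
)

With that in place, {% if request.facebook.uid %} works in any template with no per-view changes.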
Django: Template context processor request variable
I am trying to implement django-facebookconnect, for I need to check if a user logged in via Facebook or a regular user. At the template, I can check if user logged in via facebook by checking request.facebook.uid such as: {% if is_facebook %} {% show_facebook_photo user %} {% endif %} For this, I need to pass 'is_facebook': request.facebook.uid to the template and I will be using this everywhere, thus I tried to apply it to an existing template context processor and call the snippet above at the base.html, and it works fine for Foo objects: def global_variables(request): from django.conf import settings from myproject.myapp.models import Foo return {'is_facebook': request.facebook.uid,'foo_list': Foo.objects.all()} I can list Foo objects at any view without any issue, however it fails for this new is_facebook, it simply returns nothing. If I pass 'is_facebook': request.facebook.uid in every single view, it works but I need this globally for any view rendering.
[ "If you have access via the request object, why do you need to add a special is_facebook boolean at all? Just enable the built-in django.core.context_processors.request and this will ensure that request is present in all templates, then you can do this:\n{% if request.facebook.uid %}\n\n", "It could be a timing issue. Make sure that the Common middleware comes before the facebook middleware in your settings file. You can probably debug and see when the facebook middleware is modifying the request and when your context processor is invoked. That may give you some clue as to why this is happening. But, as Daniel said, you can always just use the request object in your templates.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002269508_django_python.txt
Q: Actionscript flex sockets and telnet I am trying to make a flex application where it gets data from a telnet connection and I am running into a weird problem. To give a brief introduction, I want to read data from a process that exposes it through a socket. So if in the shell I type telnet localhost 8651 I receive the xml and then the connection is closed (I get the following Connection closed by foreign host.) Anyway I found a simple tutorial online for flex that essentially is a telnet client and one would expect it to work but everything follows Murphy's laws and nothing ever works! Now I have messages being printed in every event handler and all places that I can think of. When I connect to the socket nothing happens, no event handler is triggered, not even the connect or close handler, and if I do the following, socket.connected returns false! I get no errors, try catch raises no exception. I am at a loss as to what's going wrong. socket.connect(serverURL, portNumber); msg(socket.connected.toString()); Is there something about telnet that I do not know that's causing this to not work? What's more interesting is why none of the events get fired. Another interesting thing is that I have some python code that does the same thing and it's able to get the xml back! The following is the python code that works! def getStats(host, port): sock = socket.socket() sock.connect((host, port)) res = sock.recv(1024*1024*1024, socket.MSG_WAITALL) sock.close() return statFunc(res) So I ask you: what's going wrong?! Is there some inherent problem with how flex handles sockets? A: What security sandbox are you running this in? If you are running this as a flash application embedded in a web page then this is most likely a security violation. The XMLSocket.connect() method can connect only to computers in the same domain where the SWF file resides. This restriction does not apply to SWF files running off a local disk. (This restriction is identical to the security rules for URLLoader.load().) To connect to a server daemon running in a domain other than the one where the SWF resides, you can create a security policy file on the server that allows access from specific domains. A: For security reasons, the host you're connecting to must serve Flash socket policy requests on port 843 (or on the same port on which you're attempting your connection). This page shows you how to set this up on the server that you're attempting to connect to: http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html During development it is often convenient to add your SWF to the list of files that run in the secure sandbox to alleviate the need to have a socket policy file served. http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager04.html
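Since the second answer above only links to the docs, here is a minimal sketch of a socket policy server in Python, assuming the standard Flash policy port 843 and a deliberately permissive policy (restrict domain and to-ports for anything real; 8651 is just the port from the question):

import socket

POLICY = (b'<?xml version="1.0"?>'
          b'<cross-domain-policy>'
          b'<allow-access-from domain="*" to-ports="8651"/>'
          b'</cross-domain-policy>\x00')

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('', 843))  # privileged port, so this usually needs root/admin
srv.listen(5)
while True:
    conn, addr = srv.accept()
    conn.recv(1024)       # Flash sends '<policy-file-request/>' plus a null byte
    conn.sendall(POLICY)  # the reply must also be null-terminated
    conn.close()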
Actionscript flex sockets and telnet
I am trying to make a flex application where it gets data from a telnet connection and I am running into a weird problem. To give a brief introduction, I want to read data from a process that exposes it through a socket. So if in the shell I type telnet localhost 8651 I receive the xml and then the connection is closed (I get the following Connection closed by foreign host.) Anyway I found a simple tutorial online for flex that essentially is a telnet client and one would expect it to work but everything follows Murphy's laws and nothing ever works! Now I have messages being printed in every event handler and all places that I can think of. When I connect to the socket nothing happens, no event handler is triggered, not even the connect or close handler, and if I do the following, socket.connected returns false! I get no errors, try catch raises no exception. I am at a loss as to what's going wrong. socket.connect(serverURL, portNumber); msg(socket.connected.toString()); Is there something about telnet that I do not know that's causing this to not work? What's more interesting is why none of the events get fired. Another interesting thing is that I have some python code that does the same thing and it's able to get the xml back! The following is the python code that works! def getStats(host, port): sock = socket.socket() sock.connect((host, port)) res = sock.recv(1024*1024*1024, socket.MSG_WAITALL) sock.close() return statFunc(res) So I ask you: what's going wrong?! Is there some inherent problem with how flex handles sockets?
[ "What security sandbox are you running this in? if you are running this as a flash application embedded in a web page then this is most likely a security violation.\n\nThe XMLSocket.connect() method can\n connect only to computers in the same\n domain where the SWF file resides.\n This restriction does not apply to SWF\n files running off a local disk. (This\n restriction is identical to the\n security rules for URLLoader.load().)\n To connect to a server daemon running\n in a domain other than the one where\n the SWF resides, you can create a\n security policy file on the server\n that allows access from specific\n domains.\n\n", "For security reasons, the host you're connecting to must serve Flash socket policy requests on port 943 (or on the same port on which you're attempting your connection). This page shows you how to set this up on the server that you're attempting to connect to:\nhttp://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html\nDuring development it is often convenient to add your SWF to the list of files that run in the secure sandbox to alleviate the need to have a socket policy file served.\nhttp://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager04.html\n" ]
[ 0, 0 ]
[]
[]
[ "actionscript_3", "apache_flex", "python", "sockets", "telnet" ]
stackoverflow_0002215308_actionscript_3_apache_flex_python_sockets_telnet.txt
Q: Create a Reg Exp to search for __word__? In a program I'm making in python and I want all words formatted like __word__ to stand out. How could I search for words like these using a regex? A: Perhaps something like \b__(\S+)__\b >>> import re >>> re.findall(r"\b__(\S+)__\b","Here __is__ a __test__ sentence") ['is', 'test'] >>> re.findall(r"\b__(\S+)__\b","__Here__ is a test __sentence__") ['Here', 'sentence'] >>> re.findall(r"\b__(\S+)__\b","__Here's__ a test __sentence__") ["Here's", 'sentence'] or you can put tags around the word like this >>> print re.sub(r"\b(__)(\S+)(__)\b",r"<b>\2<\\b>","__Here__ is a test __sentence__") <b>Here<\b> is a test <b>sentence<\b> If you need more fine grained control over the legal word characters it's best to be explicit \b__([a-zA-Z0-9_':])__\b ### count "'" and ":" as part of words >>> re.findall(r"\b__([a-zA-Z0-9_']+)__\b","__Here's__ a test __sentence:__") ["Here's"] >>> re.findall(r"\b__([a-zA-Z0-9_':]+)__\b","__Here's__ a test __sentence:__") ["Here's", 'sentence:'] A: Take a squizz here: http://docs.python.org/library/re.html That should show you syntax and examples from which you can build a check for word(s) pre- and post-pended with 2 underscores. A: The simplest regex for this would be __.+__ If you want access to the word itself from your code, you should use __(.+)__ A: This will give you a list with all such words >>> import re >>> m = re.findall("(__\w+__)", "What __word__ you search __for__") >>> print m ['__word__', '__for__'] A: \b(__\w+__)\b \b word boundary \w+ one or more word characters - [a-zA-Z0-9_] A: simple string functions. no regex >>> mystring="blah __word__ blah __word2__" >>> for item in mystring.split(): ... if item.startswith("__") and item.endswith("__"): ... print item ... __word__ __word2__
Create a Reg Exp to search for __word__?
In a program I'm making in python and I want all words formatted like __word__ to stand out. How could I search for words like these using a regex?
[ "Perhaps something like\n\\b__(\\S+)__\\b\n\n>>> import re\n>>> re.findall(r\"\\b__(\\S+)__\\b\",\"Here __is__ a __test__ sentence\")\n['is', 'test'] \n>>> re.findall(r\"\\b__(\\S+)__\\b\",\"__Here__ is a test __sentence__\")\n['Here', 'sentence']\n>>> re.findall(r\"\\b__(\\S+)__\\b\",\"__Here's__ a test __sentence__\")\n[\"Here's\", 'sentence']\n\nor you can put tags around the word like this\n>>> print re.sub(r\"\\b(__)(\\S+)(__)\\b\",r\"<b>\\2<\\\\b>\",\"__Here__ is a test __sentence__\")\n<b>Here<\\b> is a test <b>sentence<\\b>\n\nIf you need more fine grained control over the legal word characters it's best to be explicit\n\\b__([a-zA-Z0-9_':])__\\b ### count \"'\" and \":\" as part of words\n\n>>> re.findall(r\"\\b__([a-zA-Z0-9_']+)__\\b\",\"__Here's__ a test __sentence:__\")\n[\"Here's\"]\n>>> re.findall(r\"\\b__([a-zA-Z0-9_':]+)__\\b\",\"__Here's__ a test __sentence:__\")\n[\"Here's\", 'sentence:']\n\n", "Take a squizz here: http://docs.python.org/library/re.html\nThat should show you syntax and examples from which you can build a check for word(s) pre- and post-pended with 2 underscores.\n", "The simplest regex for this would be\n__.+__\n\n\nIf you want access to the word itself from your code, you should use\n__(.+)__\n\n", "This will give you a list with all such words\n>>> import re\n>>> m = re.findall(\"(__\\w+__)\", \"What __word__ you search __for__\")\n>>> print m\n['__word__', '__for__']\n\n", "\\b(__\\w+__)\\b\n\n\\b word boundary\n\\w+ one or more word characters - [a-zA-Z0-9_]\n", "simple string functions. no regex\n>>> mystring=\"blah __word__ blah __word2__\"\n>>> for item in mystring.split():\n... if item.startswith(\"__\") and item.endswith(\"__\"):\n... print item\n...\n__word__\n__word2__\n\n" ]
[ 4, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002270634_python_regex.txt
Q: How to access string value from new class that inherits string type I want to define a new class that inherits the built-in str type, and create a method that duplicates the string contents. How do I get access to the string value assigned to the object of my new class? class str_usr(str): def __new__(cls, arg): return str.__new__(cls, arg) def dub(self): # How to modify the string value in self ? self.<attr> = self.<attr> + self.<attr> Thanks for any help :-) A: Strings in Python are immutable, so once you have one string, you can't change its value. It's almost the same as if you had a class derived from int, and then you added a method to change the value of the int. You can of course return a new value: class str_usr(str): def dup(self): return self + self # or 2 * self s = str_usr("hi") print s # prints hi print s.dup() # print hihi A: It doesn't look like you want to inherit from str at all (which is pretty much never useful anyhow). Make a new class and have one of its attributes be a string you access, that is class MyString(object): def __init__(self, string): self.string = string def dup(self): self.string *= 2 Also, note about this: Name your classes with CamelCaseCapitalization so people can recognize they are classes. str and some other builtins don't follow this, but everyone's user-defined classes do. You don't usually need to define __new__. Defining __init__ will probably work. The way you've defined __new__ isn't especially helpful. To add new functionality to strings, you probably don't want a class; you probably just want to define functions that take strings. A: Note that in Python you could just use the multiplication operator on string: >>> s = "foo" >>> s*2 'foofoo'
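If a string-like object whose value can change in place is the real goal, the standard library's UserString wrapper is a hedged alternative to subclassing str (this sketch assumes Python 3, where it lives in collections; Python 2 has a separate UserString module):

from collections import UserString

class DupString(UserString):
    def dup(self):
        # UserString keeps the real str in self.data, which can be rebound
        self.data = self.data * 2

s = DupString('hi')
s.dup()
print(s)  # hihi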
How to access string value from new class that inherits string type
I want to define a new class that inherits the built-in str type, and create a method that duplicates the string contents. How do I get access to the string value assigned to the object of my new class? class str_usr(str): def __new__(cls, arg): return str.__new__(cls, arg) def dub(self): # How to modify the string value in self ? self.<attr> = self.<attr> + self.<attr> Thanks for any help :-)
[ "Strings in Python are immutable, so once you have one string, you can't change its value. It's almost the same as if you had a class derived from int, and then you added a method to change the value of the int.\nYou can of course return a new value:\nclass str_usr(str):\n def dup(self):\n return self + self # or 2 * self\n\ns = str_usr(\"hi\")\nprint s # prints hi\nprint s.dup() # print hihi\n\n", "It doesn't look like you want to inherit from str at all (which is prettymuch never useful anyhow). Make a new class and have one of its attributes be a string you access, that is\nclass MyString(object):\n def __init__(self, string):\n self.string = string\n\n def dup(self):\n self.string *= 2\n\nAlso, note about this:\n\nName your classes with CamelCaseCapitalization so people can recognize they are classes. str and some other builtins don't follow this, but everyone's user-defined classes do.\nYou don't usually need to define __new__. Defining __init__ will probably work. They way you've defined __new__ isn't especially helpful.\nTo add new functionality to strings, you probably don't want a class; you probably just want to define functions that take strings.\n\n", "Note that in Python you could just use the multiplication operator on string:\n>>> s = \"foo\"\n>>> s*2\n'foofoo'\n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "class", "oop", "python" ]
stackoverflow_0002271216_class_oop_python.txt
Q: Is there any limitation in python when handling long file paths? I am writing a file copying utility in Python. But I am getting some error messages when processing files with very long file paths. I suspect Python has some limitations when handling very long file paths. A: Many file systems don't support long filenames, so it's probably a limitation of the OS or your file system. There are also OS-specific issues like API limitations (e.g. in the Windows API).
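If the utility runs on Windows, the limit is usually the Win32 MAX_PATH of roughly 260 characters rather than Python itself. A hedged sketch of the common workaround, the \\?\ extended-length prefix, which only works with absolute backslash paths:

import os

def extended_path(path):
    # the \\?\ prefix tells the Windows API to skip MAX_PATH parsing
    abspath = os.path.abspath(path)
    if not abspath.startswith('\\\\?\\'):
        abspath = '\\\\?\\' + abspath
    return abspath

Copy operations can then use shutil.copy(extended_path(src), extended_path(dst)).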
Is there any limitation in python when handling long file paths?
I am writing a file copying utility in Python. But I am getting some error messages when processing files with very long file paths. I suspect Python has some limitations when handling very long file paths.
[ "Many file systems don't support long filenames, so it's probably a limitation of the OS or your file system.\nThere are also OS-specific issues like API limitations (e.g. in the Windows API).\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0002271437_python.txt
Q: Ready implementation of multivariate Spearman rank correlation I'm looking for a way to calculate a multivariate version of Spearman rank correlation $\rho$. Are there any ready-to-use Python implementations? A: There is one in scipy. A: If now or in the future you will want access to some advanced statistical packages, also consider calling R libraries from Python when needed via the RPy2. And then you can compute spearman using a package such as this.
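A short sketch of the scipy route, assuming observations in rows and variables in columns; with a 2-D input, spearmanr returns the full pairwise matrix rather than a single coefficient:

import numpy as np
from scipy.stats import spearmanr

data = np.random.rand(100, 4)   # 100 observations of 4 variables
rho, pval = spearmanr(data)     # rho and pval are 4x4 matrices
print(rho)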
Ready implementation of multivariate Spearman rank correlation
I'm looking for a way to calculate a multivariate version of Spearman rank correlation $\rho$. Are there any ready-to-use Python implementations?
[ "There is one in scipy.\n", "If now or in the future you will want access to some advanced statistical packages, also consider calling R libraries from Python when needed via the RPy2. \nAnd then you can compute spearman using a package such as this.\n" ]
[ 2, 1 ]
[]
[]
[ "correlation", "python", "statistics" ]
stackoverflow_0002264609_correlation_python_statistics.txt
Q: django calendar free/busy/availability I am trying to implement a calendar system with the ability to schedule other people for appointments. The system has to be able to prevent scheduling a person during another appointment or during their unavailable time. I have looked at all the existing django calendar projects I have found on the internet and none of them seem to have this built-into them (if I missed it somehow, please let me know). Perhaps I am just getting too tired, but the only way I can think of doing this seems a little messy. Here goes in pseudo code: when a user tries to create a new appointment, grab the new appointment's start_time and end_time for each appointment on that same day, check if existing_start_time < new_start_time AND existing_end_time > new_start_time (is the new appointments start time in between any existing appointment's start and end times) existing_start_time < new_end_time AND existing_end_time > new_end_time (is the new appointments end time in between any existing appointment's start and end times) if no objects were found, then go ahead and add the new appointment Considering Django has no filtering based on time, this must all be done using .extra() on the queryset. So, I am asking if there is a better way. A pythonic trick or module or anything that might simplify this. Or an existing project that has what I need or can lead me in the right direction. Thanks. A: What about using Django's range test. For example: appointment = Appointment() appointment.start_time = datetime.datetime.now() # 1 hour appointment appointment.end_time = appointment.start_time + datetime.timedelta(hours=1) # more stuff here appointment.save() # Checking for collision # where the start time for an appointment is between the start and end times # You would want to filter this on user, etc # There is also a problem if you book an appointment within another appointment start_conflict = Appointment.objects.filter( start_time__range=(appointment.start_time, appointment.end_time)) end_conflict = Appointment.objects.filter( end_time__range=(appointment.start_time, appointment.end_time)) during_conflict = Appointment.objects.filter( start_time__lte=appointment.start_time, end_time__gte=appointment.end_time) if (start_conflict or end_conflict or during_conflict): # reject, for there is a conflict Something like that? I haven't tried this myself so you may have to tweak it a bit. EDIT: Added the during_conflict bit. A: One caveat here is the different timezones of different users, and if you bring Daylight saving time into the mix, things become very complicated. You might want to take a look at the pytz module for taking care of the timezone issue.
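The four-way comparison can be collapsed: two intervals overlap exactly when each one starts before the other ends. A sketch of a single queryset built on that rule (field names as in the answer above; it also catches appointments that sit fully inside an existing one):

conflicts = Appointment.objects.filter(
    start_time__lt=new_end_time,
    end_time__gt=new_start_time)
if conflicts:
    # reject the booking, the person is already busy
    raise ValueError('appointment overlaps an existing one')

Note that Django's __lt/__gt lookups work fine on DateTimeField, so no .extra() is needed for this.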
django calendar free/busy/availability
I am trying to implement a calendar system with the ability to schedule other people for appointments. The system has to be able to prevent scheduling a person during another appointment or during their unavailable time. I have looked at all the existing django calendar projects I have found on the internet and none of them seem to have this built-into them (if I missed it somehow, please let me know). Perhaps I am just getting too tired, but the only way I can think of doing this seems a little messy. Here goes in pseudo code: when a user tries to create a new appointment, grab the new appointment's start_time and end_time for each appointment on that same day, check if existing_start_time < new_start_time AND existing_end_time > new_start_time (is the new appointments start time in between any existing appointment's start and end times) existing_start_time < new_end_time AND existing_end_time > new_end_time (is the new appointments end time in between any existing appointment's start and end times) if no objects were found, then go ahead and add the new appointment Considering Django has no filtering based on time, this must all be done using .extra() on the queryset. So, I am asking if there is a better way. A pythonic trick or module or anything that might simplify this. Or an existing project that has what I need or can lead me in the right direction. Thanks.
[ "What about using Django's range test.\nFor example:\nappoinment = Appointment()\nappointment.start_time = datetime.datetime.now()\n# 1 hour appointment\nappointment.end_time = appointment.start_time + datetime.timedelta(hours=1)\n# more stuff here\nappointment.save()\n\n# Checking for collision\n# where the start time for an appointment is between the the start and end times\n# You would want to filter this on user, etc \n# There is also a problem if you book an appointment within another appointment\nstart_conflict = Appointment.objects.filter(\n start_time__range=(appointment.start_time,\n appointment.end_time))\nend_conflict = Appointment.objects.filter(\n end_time__range=(appointment.start_time,\n appointment.end_time))\n\nduring_conflict = Appointment.objects.filter(\n start_date__lte=appointment.start_time, \n end_date__gte=appointment.end_time)\n\nif (start_conflict or end_conflict or during_conflict):\n # reject, for there is a conflict\n\nSomething like that? I haven't tried this myself so you may have to tweak it a bit.\nEDIT: Added the during_conflict bit.\n", "One caveat here is the different timezones of different users, and bring Daylight saving time into the mix things become very complicated.\nYou might want to take a look at pytz module for taking care of the timezone issue.\n" ]
[ 15, 0 ]
[]
[]
[ "calendar", "django", "python" ]
stackoverflow_0002271190_calendar_django_python.txt
Q: Hudson unable to navigate relative directories I have a Python project building with Hudson. Most unit tests work correctly, but any tests that require writing to the file system (I have a class that uses tarfiles, for example) can't find the tmp directory I have set up for intermediate processing (my tearDown methods remove any files under the relative tmp directory). Here is my project structure: src tests fixtures (static files here) unit (unit tests here) tmp Here is an example error: OSError: [Errno 2] No such file or directory: '../../tmp' I assume this is happening because Hudson is not processing the files while in the directory unit, but rather some other working directory. What is Hudson's working directory? Can it be configured? Can relative paths work at all? A: Each job in Hudson has its own working directory, at /path/to/hudson/jobs/[job name]/workspace/ For individual jobs, you can set the "Use custom workspace" option (under "Advanced Project Options") to define where the workspace will be. I guess it would depend on how your tests are being run, but if you inspect the job's workspace you should be able to find where Hudson is writing the files to. A: I don't know how you're initializing your workspace, but typically it's done by checking your project out of version control into the workspace. If this is true in your case, the easiest thing to do is to add your tmp directory to version control (say, with a README file in it, if your version control system doesn't support directories). Then, the tmp directory will get checked out into your workspace and things should work again. A: I don't know anything about Hudson, but this is what I do to ensure that relative paths are working right: os.chdir(os.path.dirname(sys.argv[0]))
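A hedged sketch of sidestepping the working-directory question entirely: anchor the tmp path to the test module itself rather than to whatever directory Hudson happens to use (layout assumed to match the project structure above, with this file living in tests/unit/):

import os

# the project-level tmp/ directory is two levels up from tests/unit/
HERE = os.path.dirname(os.path.abspath(__file__))
TMP = os.path.normpath(os.path.join(HERE, '..', '..', 'tmp'))

if not os.path.isdir(TMP):
    os.makedirs(TMP)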
Hudson unable to navigate relative directories
I have a Python project building with Hudson. Most unit tests work correctly, but any tests that require writing to the file system (I have a class that uses tarfiles, for example) can't find the tmp directory I have set up for intermediate processing (my tearDown methods remove any files under the relative tmp directory). Here is my project structure: src tests fixtures (static files here) unit (unit tests here) tmp Here is an example error: OSError: [Errno 2] No such file or directory: '../../tmp' I assume this is happening because Hudson is not processing the files while in the directory unit, but rather some other working directory. What is Hudson's working directory? Can it be configured? Can relative paths work at all?
[ "Each job in Hudson has it's own working directory, at /path/to/hudson/jobs/[job name]/workspace/\nFor individual jobs, you can set the \"Use custom workspace\" option (under \"Advanced Project Options\") to define where the workspace will be.\nI guess it would depend on how your tests are being run, but if you inspect the job's workspace you should be able to find where Hudson is writing the files to.\n", "I don't know how you're initializing your workspace, but typically it's done by checking your project out of version control into the workspace. If this is true in your case, the easiest thing to do is to add your tmp directory to version control (say, with a README file in it, if your version control system doesn't support directories). Then, the tmp directory will get checked out into your workspace and things should work again.\n", "I don't know anyhting about Hudson, but this is what I do to ensure, that relative path are working right:\nos.chdir(os.path.dirname(sys.argv[0]))\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "continuous_integration", "hudson", "python", "unit_testing" ]
stackoverflow_0002270696_continuous_integration_hudson_python_unit_testing.txt
Q: How to use numpy with cygwin I have a bash shell script which calls some python scripts. I am running windows with cygwin which has python in /usr/bin/python. I also have python and numpy installed as a windows package. When I execute the script from cygwin , I get an ImportError - no module named numpy. I have tried running from windows shell but the bash script does not run. Any ideas? My script is below for target in $(ls large_t) ; do ./emulate.py $target ; # done | sort | gawk '{print $2,$3,$4,$5,$6 > $1}{print $1}' | sort | uniq > frames #frames contains a list of filenames, each files name is the timestamp rm -f video touch video # for each frame for f in $(cat frames) do ./make_target_ant.py $f cat $f.bscan >> video done Thanks A: Windows python and Cygwin Python are independent; if you're using Cygwin's Python, you need to have numpy installed in cygwin. If you'd prefer to use the Windows python, you should be able to call it from a bash script by either: Calling the windows executable directly: c:/Python/python.exe ./emulate.py Changing the hash-bang to point at the Windows install: #!c:/Python/python.exe in the script, rather than #!/usr/bin/env python or #!/usr/bin/python. Putting Windows' python in your path before Cygwin python, for the duration of the script: PATH=c:/Python/:$PATH ./emulate.py where emulate.py uses the /bin/env method of running python. A: The NumPy installed is for the Windows Python, not the cygwin Python. Install NumPy from source built against the cygwin Python, or install it from the cygwin setup if it exists there.
How to use numpy with cygwin
I have a bash shell script which calls some python scripts. I am running windows with cygwin which has python in /usr/bin/python. I also have python and numpy installed as a windows package. When I execute the script from cygwin , I get an ImportError - no module named numpy. I have tried running from windows shell but the bash script does not run. Any ideas? My script is below for target in $(ls large_t) ; do ./emulate.py $target ; # done | sort | gawk '{print $2,$3,$4,$5,$6 > $1}{print $1}' | sort | uniq > frames #frames contains a list of filenames, each files name is the timestamp rm -f video touch video # for each frame for f in $(cat frames) do ./make_target_ant.py $f cat $f.bscan >> video done Thanks
[ "Windows python and Cygwin Python are independent; if you're using Cygwin's Python, you need to have numpy installed in cygwin.\nIf you'd prefer to use the Windows python, you should be able to call it from a bash script by either:\n\nCalling the windows executable directly: c:/Python/python.exe ./emulate.py\nChanging the hash-bang to point at the Windows install: #!c:/Python/python.exe in the script, rather than #!/usr/bin/env python or #!/usr/bin/python.\nPutting Windows' python in your path before Cygwin python, for the duration of the script:\nPATH=c:/Python/:$PATH ./emulate.py \nwhere emulate.py uses the /bin/env method of running python.\n\n", "The NumPy installed is for the Windows Python, not the cygwin Python. Install NumPy from source built against the cygwin Python, or install it from the cygwin setup if it exists there.\n" ]
[ 4, 0 ]
[]
[]
[ "cygwin", "numpy", "python" ]
stackoverflow_0002271565_cygwin_numpy_python.txt
Q: Help generate Facebook API "Sig" in Python I have been struggling with this for over two days and I could use your help. Here's the problem: Whenever a request is made to the Facebook REST server, we have to send an additional parameter called "sig". This sig is generated using the following algorithm: <?php $secret = 'Secret Key'; // where 'Secret Key' is your application secret key $args = array( 'argument1' => $argument1, 'argument2' => $argument2); // insert the actual arguments for your request in place of these example args $request_str = ''; foreach ($args as $key => $value) { $request_str .= $key . '=' . $value; // Note that there is no separator. } $sig = $request_str . $secret; $sig = md5($sig); ?> More information about this: http://wiki.developers.facebook.com/index.php/How_Facebook_Authenticates_Your_Application I have been trying to reproduce this piece of code in Python, here is my attempt: def get_signature(facebook_parameter): sig = "" for key, value in facebook_parameter.parameters: sig += key + "=" + value sig += facebook_parameter.application_secret return hashlib.md5(sig).hexdigest() facebook_parameter.parameters is a list that looks like this: [('api_key', '...'), ('v', '1.0'), ('format', 'JSON'), ('method', '...')] and facebook_parameter.application_secret is a valid app secret. This code is running on the Google App Engine development platform (if that makes any difference). Python 2.6.4. Can somebody help me find out where my code is going wrong? Thanks, Sri A: List had to be sorted.
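Given the accepted fix, a sketch of the corrected function (same names as the question; sorted() orders the (key, value) tuples by key, which is the ordering Facebook's signature scheme expects):

import hashlib

def get_signature(facebook_parameter):
    sig = ''
    for key, value in sorted(facebook_parameter.parameters):
        sig += key + '=' + value
    sig += facebook_parameter.application_secret
    return hashlib.md5(sig).hexdigest()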
Help generate Facebook API "Sig" in Python
I have been struggling with this for over two days and I could use your help. Here's the problem: Whenever a request is made to the Facebook REST server, we have to send an additional parameter called "sig". This sig is generated using the following algorithm: <?php $secret = 'Secret Key'; // where 'Secret Key' is your application secret key $args = array( 'argument1' => $argument1, 'argument2' => $argument2); // insert the actual arguments for your request in place of these example args $request_str = ''; foreach ($args as $key => $value) { $request_str .= $key . '=' . $value; // Note that there is no separator. } $sig = $request_str . $secret; $sig = md5($sig); ?> More information about this: http://wiki.developers.facebook.com/index.php/How_Facebook_Authenticates_Your_Application I have been trying to reproduce this piece of code in Python, here is my attempt: def get_signature(facebook_parameter): sig = "" for key, value in facebook_parameter.parameters: sig += key + "=" + value sig += facebook_parameter.application_secret return hashlib.md5(sig).hexdigest() facebook_parameter.parameters is a list that looks like this: [('api_key', '...'), ('v', '1.0'), ('format', 'JSON'), ('method', '...')] and facebook_parameter.application_secret is a valid app secret. This code is running on the Google App Engine development platform (if that makes any difference). Python 2.6.4. Can somebody help me find out where my code is going wrong? Thanks, Sri
[ "List had to be sorted.\n" ]
[ 1 ]
[]
[]
[ "facebook", "google_app_engine", "python" ]
stackoverflow_0002264333_facebook_google_app_engine_python.txt
Q: QtDesigner or doing all of the Qt boilerplate by hand? When starting up a new project, as a beginner, which would you use? For example, in my situation. I'm going to have a program running on an infinite loop, constantly updating values. I need these values to be represented as a bar graph as they're updating. At the same time, the GUI has to be responsive to user feedback as there will be some QObjects that will be used to update parameters within that infinite loop. So these need to be on separate threads, if I'm not mistaken. Which choice would give the most/least hassle? A: If I understood your question correctly, updating the GUI has a little to do with the way you programmed it. From my experience, it's easier to design a main window (or whatever your top level object is) in Designer, and add some dynamically updated content in a widget(s) created in your code. In most cases, it saves your time spent on digging through QT documentation, and additionally, you are able to visually inspect positioning, aligning etc. You don't lose anything by using a Designer, every part of the GUI can be modified in your code afterwards, if it needs some custom behavior. Having said that, without knowing all the details of your project it's hard to tell which option (QT or in-code) is faster. A: You're right, threading is your answer. Use the QT threads; they work very well. Where I work, when people start out using QT, a lot of them start with designer but eventually end up hand coding it. I think you will end up hand coding it but if you are someone who really likes GUIs you may want to start with Designer. I know that isn't a definitive answer but it really depends. A: First of all, the requirements that you've mentioned don't (or shouldn't) have much effect on this decision. Either way, you're going to have to learn something. You might as well investigate both options, and make the decision yourself. Write a couple of "Hello, World!" apps, then start adding some extra widgets/behavior to see how each approach scales. Since you asked, I would probably use Qt Designer. But I'm not you, and I'm not working on (nor do I know much of anything about) your project.
QtDesigner or doing all of the Qt boilerplate by hand?
When starting up a new project, as a beginner, which would you use? For example, in my situation. I'm going to have a program running on an infinite loop, constantly updating values. I need these values to be represented as a bar graph as they're updating. At the same time, the GUI has to be responsive to user feedback as there will be some QObjects that will be used to update parameters within that infinite loop. So these need to be on separate threads, if I'm not mistaken. Which choice would give the most/least hassle?
[ "If I understood your question correctly, updating the GUI has a little to do with the way you programmed it.\nFrom my experience, it's easier to design a main window (or whatever your top level object is) in Designer, and add some dynamically updated content in a widget(s) created in your code. In most cases, it saves your time spent on digging through QT documentation, and additionally, you are able to visually inspect positioning, aligning etc.\nYou don't lose anything by using a Designer, every part of the GUI can be modified in your code afterwards, if it needs some custom behavior.\nHaving said that, without knowing all the details of your project is hard to tell which option (QT or in-code) is faster.\n", "Your right threading is your answer. Use the QT threads they work very well. \nWhere I work when people start out using QT a lot of them start with designer but eventually end up hand coding it. I think you will end up hand coding it but if you are someone who really likes GUIs you may want to start with Designer. I know that isn't a definitive answer but it really depends.\n", "First of all, the requirements that you've mentioned don't (or shouldn't) have much affect on this decision.\nEither way, you're going to have to learn something. You might as well investigate both options, and make the decision yourself. Write a couple of \"Hello, World!\" apps, then start adding some extra widgets/behavior to see how each approach scales.\nSince you asked, I would probably use Qt Designer. But I'm not you, and I'm not working on (nor do I know much of anything about) your project.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "pyqt", "python", "user_interface" ]
stackoverflow_0002268853_pyqt_python_user_interface.txt
Q: Python: automated change in variable contents I have a Python function which receives numerous variables, and builds an SQL query out of them: def myfunc(name=None, abbr=None, grade=None, ...) These values should build an SQL query. For that purpose, those who equal None should be changed to NULL, and those who store useful values should be embraced with 's: name="'"+name+"\'" if name else 'NULL' abbr="'"+abbr+"\'" if abbr else 'NULL' ... Lots of lines here - that's my problem! ... And then, query="""INSERT INTO table(name, abbr, ...) VALUES (%(name)s, %(abbr)s, ...) """ locals() cur.execute(query) Is there a nicer, more Pythonic way to change the variable contents according to this rule? Adam A: The best way to form a SQL query is not by string-formatting -- the execute method of a cursor object takes a query string with placeholders and a sequence (or dict, depending on the exact implementation you have of the DB API) with the values to substitute there; it will then perform the None-to-Null and string-quoting that you require. I strongly recommend you look into that possibility. If you need string processing for some other purpose, however, you could do something like: processed = dict((n, "'%s'" % v if v is not None else 'NULL') for n, v in locals().iteritems()) and then use dictionary processed instead of locals() for further string-formatting. A: You could define myfunc as follows: def myfunc(*args, **kwargs) Where kwargs is a dictionary holding all named parameters passed to the function. To get the value of a query parameter, you would use kwargs.get(name_of_parameter, 'NULL'). To build the query, you would just iterate over all dictionary items. Note however, that any parameter passed as a named parameter to the function will end up in the query if you do it this way. A: The correct way to pass arguments to psycopg2 is to use placeholders and let the driver handle the values. None are converted to NULL automatically and the correct string escaping is performed. Concatenating strings is a bad idea.
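A sketch of the placeholder approach from the first answer, psycopg2-style, with con assumed to be an open connection and table_name standing in for the real table ("table" itself is a reserved word):

def myfunc(name=None, abbr=None, grade=None):
    cur = con.cursor()
    cur.execute(
        "INSERT INTO table_name (name, abbr, grade) "
        "VALUES (%(name)s, %(abbr)s, %(grade)s)",
        {'name': name, 'abbr': abbr, 'grade': grade})
    # the driver quotes the strings and turns each None into NULL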
Python: automated change in variable contents
I have a Python function which receives numerous variables, and builds an SQL query out of them: def myfunc(name=None, abbr=None, grade=None, ...) These values should build an SQL query. For that purpose, those who equal None should be changed to NULL, and those who store useful values should be embraced with 's: name="'"+name+"\'" if name else 'NULL' abbr="'"+abbr+"\'" if abbr else 'NULL' ... Lots of lines here - that's my problem! ... And then, query="""INSERT INTO table(name, abbr, ...) VALUES (%(name)s, %(abbr)s, ...) """ locals() cur.execute(query) Is there a nicer, more Pythonic way to change the variable contents according to this rule? Adam
[ "The best way to form a SQL query is not by string-formatting -- the execute method of a cursor object takes a query string with placeholders and a sequence (or dict, depending on the exact implementation you have of the DB API) with the values to substitute there; it will then perform the None-to-Null and string-quoting that you require.\nI strongly recommend you look into that possibility. If you need string processing for some other purpose, however, you could do something like:\nprocessed = dict((n, \"'%s'\" % v if v is not None else 'NULL')\n for n, v in locals().iteritems())\n\nand then use dictionary processed instead of locals() for further string-formatting.\n", "You could define myfunc as follows:\ndef myfunc(*args, **kwargs)\n\nWhere kwargs is a dictionary holding all named parameters passed to the function.\nTo get the value of a query parameter, you would use kwargs.get(name_of_parameter, 'NULL'). To build the query, you would just iterate over all dictionary items. Note however, that any parameter passed as a named parameter to the function will end up in the query if you do it this way.\n", "The correct way to pass arguments to psycopg2 is to use placeholders and let the driver handle the values. None are converted to NULL automatically and the correct string escaping is performed. \nConcatenating string is a bad idea.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "psycopg2", "python", "variables" ]
stackoverflow_0002172654_psycopg2_python_variables.txt
Q: How to fix "can't adapt error" when saving binary data using python psycopg2 I ran across this bug three times today in one of our projects. Putting the problem and solution online for future reference. impost psycopg2 con = connect(...) def save(long_blob): cur = con.cursor() long_data = struct.unpack('<L', long_blob) cur.execute('insert into blob_records( blob_data ) values (%s)', [long_data]) This will fail with the error "can't adapt" from psycopg2. A: The problem is struct.unpack returns a tuple result, even if there is only one value to unpack. You need to make sure you grab the first item from the tuple, even if there is only one item. Otherwise psycopg2 sql argument parsing will fail trying to convert the tuple to a string giving the "can't adapt" error message. impost psycopg2 con = connect(...) def save(long_blob): cur = con.cursor() long_data = struct.unpack('<L', long_blob) # grab the first result of the tuple long_data = long_data[0] cur.execute('insert into blob_records( blob_data ) values (%s)', [long_data]) A: "Can't adapt" is raised when psycopg doesn't know the type of your long_blob variable. What type is it? You can easily register an adapter to tell psycopg how to convert the value for the database. Because it is a numerical value, chances are that the AsIs adapter would already work for you.
How to fix "can't adapt error" when saving binary data using python psycopg2
I ran across this bug three times today in one of our projects. Putting the problem and solution online for future reference. import psycopg2 con = connect(...) def save(long_blob): cur = con.cursor() long_data = struct.unpack('<L', long_blob) cur.execute('insert into blob_records( blob_data ) values (%s)', [long_data]) This will fail with the error "can't adapt" from psycopg2.
[ "The problem is struct.unpack returns a tuple result, even if there is only one value to unpack. You need to make sure you grab the first item from the tuple, even if there is only one item. Otherwise psycopg2 sql argument parsing will fail trying to convert the tuple to a string giving the \"can't adapt\" error message.\nimpost psycopg2\n\ncon = connect(...)\n\ndef save(long_blob):\n cur = con.cursor() \n long_data = struct.unpack('<L', long_blob)\n\n # grab the first result of the tuple\n long_data = long_data[0]\n\n cur.execute('insert into blob_records( blob_data ) values (%s)', [long_data])\n\n", "\"Can't adapt\" is raised when psycopg doesn't know the type of your long_blob variable. What type is it?\nYou can easily register an adapter to tell psycopg how to convert the value for the database.\nBecause it is a numerical value, chances are that the AsIs adapter would already work for you.\n" ]
[ 4, 1 ]
[]
[]
[ "iterable_unpacking", "postgresql", "psycopg2", "python", "unpack" ]
stackoverflow_0002149515_iterable_unpacking_postgresql_psycopg2_python_unpack.txt
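To make the tuple behaviour behind the "can't adapt" error concrete, here is a standalone sketch (no database required) showing that struct.unpack always returns a tuple, even for a single format code:

import struct

packed = struct.pack('<L', 42)
result = struct.unpack('<L', packed)
print(result)     # (42,) - a one-element tuple, which psycopg2 cannot adapt
print(result[0])  # 42    - the plain integer that should be passed as the parameter

# Unpacking in a single step also works:
(value,) = struct.unpack('<L', packed)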
Q: Why are these lists the same? I can't understand how x and y are the same list. I've been trying to debug it using print statements and import code; code.interact(local=locals()) to drop into various points, but I can't figure out what on earth is going on :-( from collections import namedtuple, OrderedDict coordinates_2d=["x","y"] def virtual_container(virtual_container, objects_type): """Used to create a virtual object given a the type of container and what it holds. The object_type needs to only have normal values.""" if issubclass(virtual_container, list): class my_virtual_container_class: """This singleton class represents the container""" def __init__(self): #Define the default values __vals__=OrderedDict([(key,list()) for key in objects_type]) print(id(__vals__["x"]), id(__vals__["y"]))#ids are different: 12911896 12911968 #Then functions to access them d={key: lambda self: self.__vals__[key] for key in objects_type} d["__vals__"]=__vals__ #Construct a named tuple from this self.attr=type('attr_cl',(), d)() print(id(self.attr.x()), id(self.attr.y()))#ids are same: 32904544 32904544 #TODO: Define the operators __del__, setitem, getitem. Also append return my_virtual_container_class() #Nice method of handling coordinates coordinates=virtual_container(list, coordinates_2d) x=coordinates.attr.x() y=coordinates.attr.y() x.append(1) y.append(2) print(x, y)#Prints [1, 2] [1, 2] A: The problem is with this line: d={key: lambda self: self.__vals__[key] for key in objects_type} The lambda uses the value of the variable key, but that value has changed by the time the lambda is called - so all lambdas will actually use the same value for the key. This can be fixed with a little trick: Pass the key as a default parameter value to the lambda: ... lambda self, key=key: self.__vals__[key] ... This makes sure that the value of key is bound to the one it had at the time the lambda was created. A: I think the following line should look like this (but unfortunately I can't test because I don't have Python 3 available): # Then functions to access them d = dict((key, lambda self: self.__vals__[key]) for key in objects_type)
Why are these lists the same?
I can't understand how x and y are the same list. I've been trying to debug it using print statements and import code; code.interact(local=locals()) to drop into various points, but I can't figure out what on earth is going on :-( from collections import namedtuple, OrderedDict coordinates_2d=["x","y"] def virtual_container(virtual_container, objects_type): """Used to create a virtual object given a the type of container and what it holds. The object_type needs to only have normal values.""" if issubclass(virtual_container, list): class my_virtual_container_class: """This singleton class represents the container""" def __init__(self): #Define the default values __vals__=OrderedDict([(key,list()) for key in objects_type]) print(id(__vals__["x"]), id(__vals__["y"]))#ids are different: 12911896 12911968 #Then functions to access them d={key: lambda self: self.__vals__[key] for key in objects_type} d["__vals__"]=__vals__ #Construct a named tuple from this self.attr=type('attr_cl',(), d)() print(id(self.attr.x()), id(self.attr.y()))#ids are same: 32904544 32904544 #TODO: Define the operators __del__, setitem, getitem. Also append return my_virtual_container_class() #Nice method of handling coordinates coordinates=virtual_container(list, coordinates_2d) x=coordinates.attr.x() y=coordinates.attr.y() x.append(1) y.append(2) print(x, y)#Prints [1, 2] [1, 2]
[ "The problem is with this line:\nd={key: lambda self: self.__vals__[key] for key in objects_type}\n\nThe lambda uses the value of the variable key, but that value has changed by the time the lambda is called - so all lambdas will actually use the same value for the key.\nThis can be fixed with a little trick: Pass the key as a default parameter value to the lambda:\n... lambda self, key=key: self.__vals__[key] ...\n\nThis makes sure that the value of key is bound to the one it had at the time the lambda was created.\n", "I think the following line should look like this (but unfortunately I can't test because I don't have Python 3 available):\n# Then functions to access them\nd = dict((key, lambda self: self.__vals__[key]) for key in objects_type)\n\n" ]
[ 7, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002272119_python.txt
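The late-binding behaviour identified in the accepted answer can be reproduced in isolation; this sketch is independent of the question's container classes:

funcs = [lambda: key for key in ['x', 'y']]
print([f() for f in funcs])   # ['y', 'y'] - both closures read the final value of key

fixed = [lambda key=key: key for key in ['x', 'y']]
print([f() for f in fixed])   # ['x', 'y'] - the default argument freezes each value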
Q: How can I use a large wxCursor? The wx.Cursor class automatically scales the image I give it to 32x32 and I need to use a cursor that is larger than that. On http://support.microsoft.com/kb/307213 I saw what might be the reason for this behavior Although cursors can, in theory, be any size, the system imposes a standard size that is exposed by means of the SM_CXCURSOR and SM_CYCURSOR values. These metrics are read-only. On standard, low-DPI systems, these metrics are set to 32x32 pixels (32 bytes/row). When the system loads cursors by means of the standard LoadCursor function, the cursor is stretched to this dimension. but I also saw that it can be done The system also provides the SetSystemCursor API function that you can use to change the system cursor for specific categories. You can use this function to set a cursor of any size. However, you must call the function programmatically, and you can only use it to set a cursor for a specific category. You cannot use it to make all cursors on the system the same size. Is there something I am missing in the wx docs or must I directly call the windows api? A: It turns out wx doesn't do anything to support non standard sized cursors. http://groups.google.com/group/wxpython-users/browse_thread/thread/326aea0d740b85dd/277483ad5df77539
How can I use a large wxCursor?
The wx.Cursor class automatically scales the image I give it to 32x32 and I need to use a cursor that is larger than that. On http://support.microsoft.com/kb/307213 I saw what might be the reason for this behavior Although cursors can, in theory, be any size, the system imposes a standard size that is exposed by means of the SM_CXCURSOR and SM_CYCURSOR values. These metrics are read-only. On standard, low-DPI systems, these metrics are set to 32x32 pixels (32 bytes/row). When the system loads cursors by means of the standard LoadCursor function, the cursor is stretched to this dimension. but I also saw that it can be done The system also provides the SetSystemCursor API function that you can use to change the system cursor for specific categories. You can use this function to set a cursor of any size. However, you must call the function programmatically, and you can only use it to set a cursor for a specific category. You cannot use it to make all cursors on the system the same size. Is there something I am missing in the wx docs or must I directly call the windows api?
[ "It turns out wx doesn't do anything to support non standard sized cursors.\nhttp://groups.google.com/group/wxpython-users/browse_thread/thread/326aea0d740b85dd/277483ad5df77539\n" ]
[ 0 ]
[]
[]
[ "python", "windows", "wxpython" ]
stackoverflow_0002267986_python_windows_wxpython.txt
Q: Class Inheritance through Multiple Classes I have 3 classes and I run the first class and declare a variable in the second class and want the 3rd class to be able to print out this variable. I have code below to explain this more clearly. from class2 import Class2 class Class1(Class2): def __init__(self): self.value1 = 10 self.value2 = 20 def add(self): self.value3 = self.value1 + self.value2 def multiply(self): Test.subtract() if __name__ == '__main__': Class1 = Class1() Class1.add() Class1.Multiply() The above class calls functions in the second class and the values are used by the functions called. from class3 import Class3 class Class2(Class3): def e(self): self.value4 = self.value3 - self.value2 print self.value4 self.string1 = 'Hello' Class2.printValue() Class2 = Class2() The functions of the 3rd class are called by the 2nd class but the values from the 2nd class are not passed into the 3rd class. class Class3(): def printValue(self): print self.string1 if __name__ == '__main__': Class3 = Class3() So my question is how do I get the variables in the second class to be passed into the 3rd class when I run the the first class as the main script? Thanks for any help. This script is for example purposes only, in my script I have 3 files. I need to start with the 1st class in a file and then use the functions of the 2nd class which in turn create a variable which I then use when I am running a function in the 3rd class. But all this need's to run by executing the first class. All classes have to be in separate files. Sorry for the confusion, Thanks I can do this when I just use functions by passing the value through a parameter like: string1 = 'Hello' printValue(string1) Then this value can be used by the printValue function when it is running. I just can't get it working with Classes as passing parameters seems to be a problem because of self. A: It is kind of hard to understand what you are trying to do as your code does not even run. I think something like this is what you are trying to do: class Class3(): def printValue(self): print self.string1 class Class2(Class3): def e(self): self.value4 = self.value3 - self.value2 print self.value4 self.string1 = 'Hello' self.printValue() class Class1(Class2): def __init__(self): self.value1 = 10 self.value2 = 20 def add(self): self.value3 = self.value1 + self.value2 if __name__ == '__main__': instance1 = Class1() instance1.add() instance1.e() # will print "10" and "Hello" print instance1.value3 # will print "30" A: I'd suggest: File1 class Class3: def __init__(self): #I'd really like to do self.string1 = "" or something concrete here. pass def printValue(self): #Add exception handling #self may not have a `string1` print self.string1 File2 from File1 import Class3 class Class2(Class3): def __init__(self): Class3.__init__(self) self.string1 = 'Hello' def e(self): self.value4 = self.value3 - self.value2 print self.value4 self.printValue() File3 from File2 import Class2 class Class1(Class2): def __init__(self): Class2.__init__(self) self.value1 = 10 self.value2 = 20 def add(self): self.value3 = self.value1 + self.value2 if __name__ == '__main__': obj1 = Class1() #The order of calling is important! value3 isn't defined until add() is called obj1.add() obj1.e() print obj1.value3 I have assumed a total absence of multiple inheritance [Use super() if you need cooperative MI + google: Why python's super is considered harmful!]. All in all, truppo's answer might be exactly what you need. My answer just points out, what in my very subjective opinion is a better way to achieve what you need. I'd also suggest using new style classes.
Class Inheritance through Multiple Classes
I have 3 classes and I run the first class and declare a variable in the second class and want the 3rd class to be able to print out this variable. I have code below to explain this more clearly. from class2 import Class2 class Class1(Class2): def __init__(self): self.value1 = 10 self.value2 = 20 def add(self): self.value3 = self.value1 + self.value2 def multiply(self): Test.subtract() if __name__ == '__main__': Class1 = Class1() Class1.add() Class1.Multiply() The above class calls functions in the second class and the values are used by the functions called. from class3 import Class3 class Class2(Class3): def e(self): self.value4 = self.value3 - self.value2 print self.value4 self.string1 = 'Hello' Class2.printValue() Class2 = Class2() The functions of the 3rd class are called by the 2nd class but the values from the 2nd class are not passed into the 3rd class. class Class3(): def printValue(self): print self.string1 if __name__ == '__main__': Class3 = Class3() So my question is how do I get the variables in the second class to be passed into the 3rd class when I run the the first class as the main script? Thanks for any help. This script is for example purposes only, in my script I have 3 files. I need to start with the 1st class in a file and then use the functions of the 2nd class which in turn create a variable which I then use when I am running a function in the 3rd class. But all this need's to run by executing the first class. All classes have to be in separate files. Sorry for the confusion, Thanks I can do this when I just use functions by passing the value through a parameter like: string1 = 'Hello' printValue(string1) Then this value can be used by the printValue function when it is running. I just can't get it working with Classes as passing parameters seems to be a problem because of self.
[ "It is kind of hard to understand what you are trying to do as your code does not even run.\nI think something like this is what you are trying to do:\nclass Class3():\n\n def printValue(self):\n print self.string1\n\nclass Class2(Class3):\n\n def e(self):\n self.value4 = self.value3 - self.value2\n print self.value4\n self.string1 = 'Hello'\n self.printValue()\n\nclass Class1(Class2):\n\n def __init__(self):\n self.value1 = 10\n self.value2 = 20\n\n def add(self):\n self.value3 = self.value1 + self.value2\n\n\nif __name__ == '__main__':\n instance1 = Class1()\n instance1.add()\n instance1.e() # will print \"10\" and \"Hello\"\n print instance1.value3 # will print \"30\"\n\n", "I'd suggest:\nFile1\nclass Class3:\n def __init__(self):\n #I'd really like to do self.string1 = \"\" or something concrete here.\n pass\n\n def printValue(self):\n #Add exception handling\n #self may not have a `string1`\n print self.string1\n\nFile2\nfrom File1 import Class3\n\nclass Class2(Class3):\n def __init__(self):\n Class3.__init__(self)\n self.string1 = 'Hello'\n\n def e(self):\n self.value4 = self.value3 - self.value2\n print self.value4\n self.printValue()\n\nFile3\nfrom File2 import Class2\n\nclass Class1(Class2):\n def __init__(self):\n Class2.__init__(self)\n self.value1 = 10\n self.value2 = 20\n\n def add(self):\n self.value3 = self.value1 + self.value2\n\n\nif __name__ == '__main__':\n obj1 = Class1()\n #The order of calling is important! value3 isn't defined until add() is called\n obj1.add()\n obj1.e()\n print obj1.value3\n\nI have assumed a total absence of multiple inheritance [Use super() if you need cooperative MI + google: Why python's super is considered harmful!]. All in all, truppo's answer might be exactly what you need. My answer just points out, what in my very subjective opinion is a better way to achieve what you need. I'd also suggest using new style classes.\n" ]
[ 1, 0 ]
[]
[]
[ "class", "inheritance", "parameters", "python" ]
stackoverflow_0002272728_class_inheritance_parameters_python.txt
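For completeness, a sketch of the same three-class chain using new-style classes and cooperative super() calls, which the second answer alludes to; class and attribute names follow the question:

class Class3(object):
    def printValue(self):
        print(self.string1)

class Class2(Class3):
    def __init__(self):
        super(Class2, self).__init__()
        self.string1 = 'Hello'

class Class1(Class2):
    def __init__(self):
        super(Class1, self).__init__()
        self.value1 = 10
        self.value2 = 20

obj = Class1()
obj.printValue()  # prints 'Hello' - string1 was set by Class2.__init__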
Q: Face-tracking libraries for Java or Python I'm looking for a way to identify faces (not specific people, just where the faces are) and track them as they move across a room. We're trying to measure walking speed for people, and I assumed this would be the easiest way of identifying a person as a person. We'll have a reasonably fast camera for the project, so I can probably use some logic for seeing if "face1 in frame00 == face1 in frame01". Ideally such a software would return a list of faces (as in x,y locations) and their sizes. A: Checkout OpenCV Python Interface A: "faint" (The Face Annotation Interface) might be what you're looking for. http://faint.sourceforge.net/ http://technoroy.blogspot.com/2008/06/faint-search-for-faces.html I never used it myself. However, I played with the application which bundles with faint. A: There was an article about this in the German "Linux Magazin". They used the Open Computer Vision Library which offers a whole bunch of algorithms to process images in various ways. A: CodeProject has a whole raft of articles in various languages.
Face-tracking libraries for Java or Python
I'm looking for a way to identify faces (not specific people, just where the faces are) and track them as they move across a room. We're trying to measure walking speed for people, and I assumed this would be the easiest way of identifying a person as a person. We'll have a reasonably fast camera for the project, so I can probably use some logic for seeing if "face1 in frame00 == face1 in frame01". Ideally such a software would return a list of faces (as in x,y locations) and their sizes.
[ "Checkout OpenCV Python Interface\n", "\"faint\" (The Face Annotation Interface) might be what you're looking for.\nhttp://faint.sourceforge.net/\nhttp://technoroy.blogspot.com/2008/06/faint-search-for-faces.html\nI never used it myself. However, I played with the application which bundles with faint.\n", "There was an article about this in the German \"Linux Magazin\".\nThey used the Open Computer Vision Library which offers a whole bunch of algorithms to process images in various ways.\n", "CodeProject has a whole raft of articles in various languages.\n" ]
[ 7, 2, 2, 0 ]
[]
[]
[ "face_detection", "java", "python" ]
stackoverflow_0000802243_face_detection_java_python.txt
Q: Python: select function With this code: import scipy from scipy import * x = r_[1:15] print x a = select([x > 7, x >= 4],[x,x+10]) print a I get this answer: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14] [ 0 0 0 14 15 16 17 8 9 10 11 12 13 14] But why do I have zeros in the beginning and not in the end? Thanks in advance. A: You seem to be using numpy. From the documentation for numpy.select(): numpy.select(condlist, choicelist, default=0) ... default: The element inserted in output when all conditions evaluate to False. Since your conditions are x > 7 and x >=4, the output array will have elements from x+10 when x >= 4 and from x when x > 7. When both the conditions are false, i.e., when x < 4, you will get default, which is 0. So you get 3 zeros in the beginning. You don't get any zeros in the end because at least one of the conditions is true (both are true, in fact).
Python: select function
With this code: import scipy from scipy import * x = r_[1:15] print x a = select([x > 7, x >= 4],[x,x+10]) print a I get this answer: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14] [ 0 0 0 14 15 16 17 8 9 10 11 12 13 14] But why do I have zeros in the beginning and not in the end? Thanks in advance.
[ "You seem to be using numpy.\nFrom the documentation for numpy.select():\n\nnumpy.select(condlist, choicelist, default=0)\n...\ndefault: The element inserted in output when all conditions evaluate to False.\n\nSince your conditions are x > 7 and x >=4, the output array will have elements from x+10 when x >= 4 and from x when x > 7. When both the conditions are false, i.e., when x < 4, you will get default, which is 0. So you get 3 zeros in the beginning.\nYou don't get any zeros in the end because at least one of the conditions is true (both are true, in fact).\n" ]
[ 5 ]
[]
[]
[ "numpy", "python", "select" ]
stackoverflow_0002272854_numpy_python_select.txt
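A short runnable sketch of the behaviour described in the answer, including how the default argument controls the leading zeros:

import numpy as np

x = np.arange(1, 15)
a = np.select([x > 7, x >= 4], [x, x + 10])              # default=0 fills x < 4
b = np.select([x > 7, x >= 4], [x, x + 10], default=-1)  # explicit default
print(a)  # [ 0  0  0 14 15 16 17  8  9 10 11 12 13 14]
print(b)  # [-1 -1 -1 14 15 16 17  8  9 10 11 12 13 14]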
Q: how to define generic variables in Python (syntax question) With Python it is easy to declare something like self.x = "something" print self.x #outputs "something" I want to have something like this: param["key"] = "x" self.param["key"] = "something" #here I actually want to access this "self" parameter as below with its value defined above print self.x #supposed to output "something" as well. Note that "x" refers to value defined in the first line Is there any such thing? Is there any similar alternatives? Thanks in advance. A: Use setattr -- setattr(self, param['key'], 'something').
how to define generic variables in Python (syntax question)
With Python it is easy to declare something like self.x = "something" print self.x #outputs "something" I want to have something like this: param["key"] = "x" self.param["key"] = "something" #here I actually want to access this "self" parameter as below with its value defined above print self.x #supposed to output "something" as well. Note that "x" refers to value defined in the first line Is there any such thing? Is there any similar alternatives? Thanks in advance.
[ "Use setattr -- setattr(self, param['key'], 'something').\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0002273211_python.txt
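A self-contained sketch of the setattr answer, with getattr as the read-side counterpart (the Container class is illustrative):

class Container(object):
    pass

obj = Container()
param = {'key': 'x'}
setattr(obj, param['key'], 'something')  # equivalent to obj.x = 'something'
print(obj.x)                             # something
print(getattr(obj, param['key']))        # something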
Q: Avoid race condition when asserting file permissions in Python An application wants to parse and "execute" a file, and wants to assert the file is executable for security reasons. A moments thought and you realize this initial code has a race condition that makes the security scheme ineffective: import os class ExecutionError (Exception): pass def execute_file(filepath): """Execute serialized command inside @filepath The file must be executable (comparable to a shell script) >>> execute_file(__file__) # doctest: +ELLIPSIS Traceback (most recent call last): ... ExecutionError: ... (not executable) """ if not os.path.exists(filepath): raise IOError('"%s" does not exist' % (filepath, )) if not os.access(filepath, os.X_OK): raise ExecutionError('No permission to run "%s" (not executable)' % filepath) data = open(filepath).read() print '"Dummy execute"' print data The race condition exists between os.access(filepath, os.X_OK) and data = open(filepath).read() Since there is a possibility of the file being overwritten with a non-executable file of different content between these two system calls. The first solution I have is to change the order of the critical calls (and skip the now-redundant existance check): fobj = open(filepath, "rb") if not os.access(filepath, os.X_OK): raise ExecutionError('No permission to run "%s" (not executable)' % filepath) data = fobj.read() Does this solve the race condition? How can I solve it properly? Security scheme rationale, briefly (I thought) The file will be able to carry out arbitrary commands inside its environment, so it is comparable to a shell script. There was a security hole on free desktops with .desktop files that define applications: The file may specify any executable with arguments, and it may choose its own icon and name. So a randomly downloaded file could hide behind any name or icon and do anything. That was bad. This was solved by requiring that .desktop files have the executable bit set, otherwise they will not be rendered with name/icon, and the free desktop will ask the user if it wants to start the program before commencing. Compare this to Mac OS X's very good design: "This program has been downloaded from the web, are you sure you want to open it?". So in allegory with this, and the fact that you have to chmod +x shell scripts that you download, I thought about the design in the question above. Closing words Maybe in conclusion, maybe we should keep it simple: If the file must be executable, make it executable and let the kernel execute it when invoked by the user. Delegation of the task to where it belongs. A: The executability is attached to the file you open, there is nothing stopping several files from pointing to the inode containing the data you wish to read. In other words, the same data may be readable from a non-executable file elsewhere in the same filesystem. Furthermore, even after opening the file, you can't prevent the executability of that same file from changing, it could even be unlinked. The "best effort" available to you as I see it would be do checks using os.fstat on the opened file, and check protection mode and modification time before and after, but at best this will only reduce the possibility that changes go undetected while you read the file. On second thoughts, if you're the original creator of the data in this file, you could consider writing an inode that's never linked to the filesystem in the first place, this a common technique in memory sharing via files. Alternatively if the data contained must eventually made public to other users, you could use file locking, and then progressively extend the protection bits to those users that require it. Ultimately you must ensure malicious users simply don't have write access to the file. A: You cannot entirely solve this race condition -- e.g., in the version where you first open, then check permissions, it's possible that the permissions get changed just after you've opened the file and just before you've changed the permissions. If you can atomically move the file to a directory where the potential bad guys can't reach, then you can rest assured that nothing about the file will be changed from under your nose while you're dealing with it. If the potential bad guys can reach everywhere, or you can't move the file to where they can't reach, there's no defense. BTW, it's not clear to me how this scheme, even if it could be made to work, would actually add any security -- surely if the bad guys can put poisoned content in the file it's not beyond them to chmod +x it as well? A: The best you can do is : save the permission. change it to your own unique user (something with the program name) and forbid others to run it. make you checks (on the saved permission if needed). run your process. set back the permission to the saved ones. Of course, there are drawbacks, but if your use case is as simple as you seem to say, it could do the trick. A: You should change the files ownership such that an attacker cannot access it "chown root:root file_name". Do a "chmod 700 file_name" so that no other accounts can read/write/execute the file. This avoids the problem of a TOCTOU all together and this is how people prevent files from being modified by an attacker who has a user account on your system. A: Another way to do it is to change the file name to something unexpected, or even copy the entire file - if it's not too big - to a temp dir (encrypted if necessary), make you checks, then rename / copy the file back. Of course it's a very heavy process. But you end up with that because the system has not been set for safety from the beginning. A safe program would sign or encrypt data he wants to keep safe. In your case, it's not possible. Unless you realy on heavy encrypting, there is not way to ensure 100 safety on a machine you don't control.
Avoid race condition when asserting file permissions in Python
An application wants to parse and "execute" a file, and wants to assert the file is executable for security reasons. A moments thought and you realize this initial code has a race condition that makes the security scheme ineffective: import os class ExecutionError (Exception): pass def execute_file(filepath): """Execute serialized command inside @filepath The file must be executable (comparable to a shell script) >>> execute_file(__file__) # doctest: +ELLIPSIS Traceback (most recent call last): ... ExecutionError: ... (not executable) """ if not os.path.exists(filepath): raise IOError('"%s" does not exist' % (filepath, )) if not os.access(filepath, os.X_OK): raise ExecutionError('No permission to run "%s" (not executable)' % filepath) data = open(filepath).read() print '"Dummy execute"' print data The race condition exists between os.access(filepath, os.X_OK) and data = open(filepath).read() Since there is a possibility of the file being overwritten with a non-executable file of different content between these two system calls. The first solution I have is to change the order of the critical calls (and skip the now-redundant existance check): fobj = open(filepath, "rb") if not os.access(filepath, os.X_OK): raise ExecutionError('No permission to run "%s" (not executable)' % filepath) data = fobj.read() Does this solve the race condition? How can I solve it properly? Security scheme rationale, briefly (I thought) The file will be able to carry out arbitrary commands inside its environment, so it is comparable to a shell script. There was a security hole on free desktops with .desktop files that define applications: The file may specify any executable with arguments, and it may choose its own icon and name. So a randomly downloaded file could hide behind any name or icon and do anything. That was bad. This was solved by requiring that .desktop files have the executable bit set, otherwise they will not be rendered with name/icon, and the free desktop will ask the user if it wants to start the program before commencing. Compare this to Mac OS X's very good design: "This program has been downloaded from the web, are you sure you want to open it?". So in allegory with this, and the fact that you have to chmod +x shell scripts that you download, I thought about the design in the question above. Closing words Maybe in conclusion, maybe we should keep it simple: If the file must be executable, make it executable and let the kernel execute it when invoked by the user. Delegation of the task to where it belongs.
[ "The executability is attached to the file you open, there is nothing stopping several files from pointing to the inode containing the data you wish to read. In other words, the same data may be readable from a non-executable file elsewhere in the same filesystem. Furthermore, even after opening the file, you can't prevent the executability of that same file from changing, it could even be unlinked.\nThe \"best effort\" available to you as I see it would be do checks using os.fstat on the opened file, and check protection mode and modification time before and after, but at best this will only reduce the possibility that changes go undetected while you read the file.\nOn second thoughts, if you're the original creator of the data in this file, you could consider writing an inode that's never linked to the filesystem in the first place, this a common technique in memory sharing via files. Alternatively if the data contained must eventually made public to other users, you could use file locking, and then progressively extend the protection bits to those users that require it.\nUltimately you must ensure malicious users simply don't have write access to the file.\n", "You cannot entirely solve this race condition -- e.g., in the version where you first open, then check permissions, it's possible that the permissions get changed just after you've opened the file and just before you've changed the permissions.\nIf you can atomically move the file to a directory where the potential bad guys can't reach, then you can rest assured that nothing about the file will be changed from under your nose while you're dealing with it. If the potential bad guys can reach everywhere, or you can't move the file to where they can't reach, there's no defense.\nBTW, it's not clear to me how this scheme, even if it could be made to work, would actually add any security -- surely if the bad guys can put poisoned content in the file it's not beyond them to chmod +x it as well?\n", "The best you can do is :\n\nsave the permission.\nchange it to your own unique user (something with the program name) and forbid others to run it.\nmake you checks (on the saved permission if needed).\nrun your process.\nset back the permission to the saved ones.\n\nOf course, there are drawbacks, but if your use case is as simple as you seem to say, it could do the trick.\n", "You should change the files ownership such that an attacker cannot access it \"chown root:root file_name\". Do a \"chmod 700 file_name\" so that no other accounts can read/write/execute the file. This avoids the problem of a TOCTOU all together and this is how people prevent files from being modified by an attacker who has a user account on your system. \n", "Another way to do it is to change the file name to something unexpected, or even copy the entire file - if it's not too big - to a temp dir (encrypted if necessary), make you checks, then rename / copy the file back.\nOf course it's a very heavy process.\nBut you end up with that because the system has not been set for safety from the beginning. A safe program would sign or encrypt data he wants to keep safe. In your case, it's not possible.\nUnless you realy on heavy encrypting, there is not way to ensure 100 safety on a machine you don't control.\n" ]
[ 3, 2, 0, 0, 0 ]
[]
[]
[ "posix", "python", "race_condition", "security" ]
stackoverflow_0002258257_posix_python_race_condition_security.txt
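A best-effort sketch of the os.fstat check suggested in the first answer; it narrows, but cannot fully close, the race window:

import os
import stat

class ExecutionError(Exception):
    pass

def read_if_executable(filepath):
    fobj = open(filepath, 'rb')
    before = os.fstat(fobj.fileno())  # stats the opened inode, not the path
    if not before.st_mode & stat.S_IXUSR:
        raise ExecutionError('No permission to run "%s" (not executable)' % filepath)
    data = fobj.read()
    after = os.fstat(fobj.fileno())
    if (after.st_mode, after.st_mtime) != (before.st_mode, before.st_mtime):
        raise ExecutionError('"%s" changed while being read' % filepath)
    return data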
Q: Problem with replacing a word in a file, using Python I have a .txt file containing data like this: 1,Rent1,Expense,16/02/2010,1,4000,4000 1,Car Loan1,Expense,16/02/2010,2,4500,9000 1,Flat Loan1,Expense,16/02/2010,2,4000,8000 0,Rent2,Expense,16/02/2010,1,4000,4000 0,Car Loan2,Expense,16/02/2010,2,4500,9000 0,Flat Loan2,Expense,16/02/2010,2,4000,8000 I want to replace the first item. If it is 1, means it should remain the same but if it is 0 means I want to change it to 1. So I have tried using the following code: import fileinput for line in fileinput.FileInput("sample.txt",inplace=1): s=line.split(",") print a print ','.join(s) But after successfully executed the program my .txt file looks like: 1,Rent1,Expense,16/02/2010,1,4000,4000 1,Car Loan1,Expense,16/02/2010,2,4500,9000 1,Flat Loan1,Expense,16/02/2010,2,4000,8000 0,Rent2,Expense,16/02/2010,1,4000,4000 0,Car Loan2,Expense,16/02/2010,2,4500,9000 0,Flat Loan2,Expense,16/02/2010,2,4000,8000 Now I want to remove the empty line. Is it possible, or is there any other way to replace the 0's? A: print adds an extra newline after the input and you already have one newline there. You should either strip the existing newline (line.rstrip("\n")) or use sys.stdout.write() instead. A: import fileinput import re p = re.compile(r'^0,') for line in fileinput.FileInput("sample.txt",inplace=1): print p.sub('1,', line.strip()) The existing code you have doesn't actually change the lines like you want; print a doesn't do anything if a isn't actually defined! So you end up just printing a blank line (the print a bit) and then printing the existing line, hence why you get a file that's unaltered except for the addition of some blank lines. A: Either use rstrip to remove the trailing new lines before printing or use sys.stdout.write instead of print. Also, if you only need to modify the first element, there is no need to split the entire line and join it again. You only need to split on the first comma: line.split(',', 1) If you want even better performance you could also just test the value of line[0] directly. A: fixed = [] for l in file('sample.txt'): parts = l.split(',',1) if(parts[0] == '0'): # not sure what you want to do here, but you want to "change this" number to 1? parts[0] = 1 fixed.append(parts.join(',')) outp = file('sample.txt','w') for f in fixed: outp.write(f) outp.close() This is untested, but it should get you most of the way there. Good luck A: import fileinput for line in fileinput.FileInput("sample.txt",inplace=1): s=line.rstrip().split(",") print a print ','.join(s) A: You have to use a comma at the end of your print so that it doesn't add a newline. Like so: print "Hello", This is what I came up with: input = open('file.txt', 'r') output = open('output.txt', 'w') for line in input: values = line.split(',') if (values[0] == '0'): values[0] = '1' output.write(','.join(values)) If you want a better csv handling library you might want to use this instead of split. A: The cleanest way to do it is to use the CSV parser : import fileinput import csv f = fileinput.FileInput("test.txt",inplace=1) fichiercsv = csv.reader(f, delimiter=',') for line in fichiercsv: line[0] = "1" print ",".join(line)
Problem with replacing a word in a file, using Python
I have a .txt file containing data like this: 1,Rent1,Expense,16/02/2010,1,4000,4000 1,Car Loan1,Expense,16/02/2010,2,4500,9000 1,Flat Loan1,Expense,16/02/2010,2,4000,8000 0,Rent2,Expense,16/02/2010,1,4000,4000 0,Car Loan2,Expense,16/02/2010,2,4500,9000 0,Flat Loan2,Expense,16/02/2010,2,4000,8000 I want to replace the first item. If it is 1, means it should remain the same but if it is 0 means I want to change it to 1. So I have tried using the following code: import fileinput for line in fileinput.FileInput("sample.txt",inplace=1): s=line.split(",") print a print ','.join(s) But after successfully executed the program my .txt file looks like: 1,Rent1,Expense,16/02/2010,1,4000,4000 1,Car Loan1,Expense,16/02/2010,2,4500,9000 1,Flat Loan1,Expense,16/02/2010,2,4000,8000 0,Rent2,Expense,16/02/2010,1,4000,4000 0,Car Loan2,Expense,16/02/2010,2,4500,9000 0,Flat Loan2,Expense,16/02/2010,2,4000,8000 Now I want to remove the empty line. Is it possible, or is there any other way to replace the 0's?
[ "print adds an extra newline after the input and you already have one newline there. You should either strip the existing newline (line.rstrip(\"\\n\")) or use sys.stdout.write() instead.\n", "import fileinput\nimport re\np = re.compile(r'^0,')\nfor line in fileinput.FileInput(\"sample.txt\",inplace=1):\n print p.sub('1,', line.strip())\n\nThe existing code you have doesn't actually change the lines like you want; print a doesn't do anything if a isn't actually defined! So you end up just printing a blank line (the print a bit) and then printing the existing line, hence why you get a file that's unaltered except for the addition of some blank lines.\n", "Either use rstrip to remove the trailing new lines before printing or use sys.stdout.write instead of print.\nAlso, if you only need to modify the first element, there is no need to split the entire line and join it again. You only need to split on the first comma:\nline.split(',', 1)\n\nIf you want even better performance you could also just test the value of line[0] directly.\n", "fixed = []\nfor l in file('sample.txt'):\n parts = l.split(',',1)\n if(parts[0] == '0'):\n # not sure what you want to do here, but you want to \"change this\" number to 1?\n parts[0] = 1\n fixed.append(parts.join(','))\noutp = file('sample.txt','w')\nfor f in fixed:\n outp.write(f)\noutp.close()\n\nThis is untested, but it should get you most of the way there.\nGood luck\n", "import fileinput\nfor line in fileinput.FileInput(\"sample.txt\",inplace=1):\n s=line.rstrip().split(\",\")\n print a\n print ','.join(s)\n\n", "You have to use a comma at the end of your print so that it doesn't add a newline. Like so:\nprint \"Hello\",\n\nThis is what I came up with:\ninput = open('file.txt', 'r')\noutput = open('output.txt', 'w')\nfor line in input:\n values = line.split(',')\n if (values[0] == '0'):\n values[0] = '1'\n output.write(','.join(values))\n\nIf you want a better csv handling library you might want to use this instead of split.\n", "The cleanest way to do it is to use the CSV parser :\nimport fileinput\nimport csv \n\nf = fileinput.FileInput(\"test.txt\",inplace=1)\nfichiercsv = csv.reader(f, delimiter=',')\n\nfor line in fichiercsv:\n line[0] = \"1\"\n print \",\".join(line)\n\n" ]
[ 4, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "file", "python", "replace" ]
stackoverflow_0002271199_file_python_replace.txt
Q: Should pre-commit tests use a big data set and fail if queries take too long, or use a small test database? I am developing some Python modules that use a mysql database to insert some data and produce various types of report. I'm doing test driven development and so far I run: some CREATE / UPDATE / DELETE tests against a temporary database that is thrown away at the end of each test case, and some report generation tests doing exclusively read only operations, mainly SELECT, against a copy of the production database, written on the (valid, in this case) assumption that some things in my database aren't going to change. Some of the SELECT operations are running slow, so that my tests are taking more than 30 seconds, which spoils the flow of test driven development. I can see two choices: only put a small fraction of my data into the copy of the production database that I use for testing the report generation so that the tests go fast enough for test driven development (less than about 3 seconds suits me best), or I can regard the tests as failures. I'd then need to do separate performance testing. fill the production database copy with as much data as the main test database, and add timing code that fails a test if it is taking too long. I'm not sure which approach to take. Any advice? A: I'd do both. Run against the small set first to make sure all the code works, then run against the large dataset for things which need to be tested for time, this would be selects, searches and reports especially. If you are doing inserts or deletes or updates on multiple row sets, I'd test those as well against the large set. It is unlikely that simple single row action queries will take too long, but if they involve a lot of joins, I'd test them as well. If the queries won't run on prod within the timeout limits, that's a fail and far, far better to know as soon as possible so you can fix before you bring prod to its knees. A: The problem with testing against real data is that it contains lots of duplicate values, and not enough edge cases. It is also difficult to know what the expected values ought to be (especially if your live database is very big). Oh, and depending on what the live application does, it can be illegal to use the data for the purposes of testing or development. Generally the best thing is to write the test data to go with the tests. This is labourious and boring, which is why so many TDD practitioners abhor databases. But if you have a live data set (which you can use for testing) then take a very cut-down sub-set of data for your tests. If you can write valid assertions against a dataset of thirty records, running your tests against a data set of thirty thousand is just a waste of time. But definitely, once you have got the queries returning the correct results put the queries through some performance tests.
Should pre-commit tests use a big data set and fail if queries take too long, or use a small test database?
I am developing some Python modules that use a mysql database to insert some data and produce various types of report. I'm doing test driven development and so far I run: some CREATE / UPDATE / DELETE tests against a temporary database that is thrown away at the end of each test case, and some report generation tests doing exclusively read only operations, mainly SELECT, against a copy of the production database, written on the (valid, in this case) assumption that some things in my database aren't going to change. Some of the SELECT operations are running slow, so that my tests are taking more than 30 seconds, which spoils the flow of test driven development. I can see two choices: only put a small fraction of my data into the copy of the production database that I use for testing the report generation so that the tests go fast enough for test driven development (less than about 3 seconds suits me best), or I can regard the tests as failures. I'd then need to do separate performance testing. fill the production database copy with as much data as the main test database, and add timing code that fails a test if it is taking too long. I'm not sure which approach to take. Any advice?
[ "I'd do both. Run against the small set first to make sure all the code works, then run against the large dataset for things which need to be tested for time, this would be selects, searches and reports especially. If you are doing inserts or deletes or updates on multiple row sets, I'd test those as well against the large set. It is unlikely that simple single row action queries will take too long, but if they involve a lot alot of joins, I'd test them as well. If the queries won't run on prod within the timeout limits, that's a fail and far, far better to know as soon as possible so you can fix before you bring prod to it's knees. \n", "The problem with testing against real data is that it contains lots of duplicate values, and not enough edge cases. It is also difficult to know what the expected values ought to be (especially if your live database is very big). Oh, and depending on what the live application does, it can be illegal to use the data for the purposes of testing or development. \nGenerally the best thing is to write the test data to go with the tests. This is labourious and boring, which is why so many TDD practitioners abhor databases. But if you have a live data set (which you can use for testing) then take a very cut-down sub-set of data for your tests. If you can write valid assertions against a dataset of thirty records, running your tests against a data set of thirty thousand is just a waste of time.\nBut definitely, once you have got the queries returning the correct results put the queries through some performance tests. \n" ]
[ 1, 1 ]
[]
[]
[ "automated_tests", "mysql", "python", "sql", "tdd" ]
stackoverflow_0002273414_automated_tests_mysql_python_sql_tdd.txt
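If the second option is chosen, the timing code can live in an ordinary unittest; in this sketch the 3-second budget comes from the question and run_report is a placeholder for the actual report-generation call:

import time
import unittest

class ReportPerformanceTest(unittest.TestCase):
    TIME_BUDGET = 3.0  # seconds, per the question's target

    def test_report_is_fast_enough(self):
        start = time.time()
        run_report()  # placeholder: the report query under test
        elapsed = time.time() - start
        self.assertTrue(elapsed < self.TIME_BUDGET,
                        'report took %.1fs (budget %.1fs)' % (elapsed, self.TIME_BUDGET))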
Q: Is it possible to define a wx.Panel as a class in Python? I want to define several plugins. They all inherit from the superclass Plugin. Each plugin consists on a wx.Panel that have a more specific method called "draw". How can I define a class as a Panel and afterwards call that class in my frame? I've tried like this: class Panel(wx.Panel): def __init__(self, parent): wx.Panel(self, parent) but it gives me this error: in __init__ _windows_.Panel_swiginit(self,_windows_.new_Panel(*args, **kwargs)) TypeError: in method 'new_Panel', expected argument 1 of type 'wxWindow *' Thanks in advance! A: class MyPanel(wx.Panel): def __init__(self, *args): wx.Panel.__init__(self, *args) def draw(self): # Your code here A: There is a class wx.PyPanel that is a version of Panel intended to be subclassed from Python and allows you to override C++ virtual methods. There are PyXxxx versions of a number of other wx classes as well. A: How can I define a class as a Panel and afterwards call that class in my frame? What you tried is close, but you're not properly calling the super class __init__. When subclassing wxPython classes, however, it's generally best to use the following pattern so that you don't have to worry about which specific arguments you are passing to it. (This wouldn't have solved your problem, which was outside of the code in question, but it maybe makes it clearer what's happening.) class Panel(wx.Panel): def __init__(self, *args, **kwargs): wx.Panel.__init__(self, *args, **kwargs) # ... code specific to your subclass goes here This ensures that anything passed in is handed on to the super class method with no additions or removals. That means the signature for your subclass exactly matches the super class signature, which is also what someone else using your subclass would probably expect. If, however, you are not actually doing anything in your own __init__() method other than calling the super class __init__(), you don't need to provide the method at all! As for your original issue: but it gives me this error: in __init__ windows.Panel_swiginit(self,windows.new_Panel(*args, **kwargs)) TypeError: in method 'new_Panel', expected argument 1 of type 'wxWindow *' (Edited) You were actually instantiating a wx.Panel() inside the __init__ rather than calling the super class __init__, as Javier (and Bryan Oakley, correcting me) pointed out. (Javier's change of the "parent" arg to "*args" confused me... sorry to confuse you.)
Is it possible to define a wx.Panel as a class in Python?
I want to define several plugins. They all inherit from the superclass Plugin. Each plugin consists on a wx.Panel that have a more specific method called "draw". How can I define a class as a Panel and afterwards call that class in my frame? I've tried like this: class Panel(wx.Panel): def __init__(self, parent): wx.Panel(self, parent) but it gives me this error: in __init__ _windows_.Panel_swiginit(self,_windows_.new_Panel(*args, **kwargs)) TypeError: in method 'new_Panel', expected argument 1 of type 'wxWindow *' Thanks in advance!
[ "class MyPanel(wx.Panel):\n def __init__(self, *args):\n wx.Panel.__init__(self, *args)\n\n def draw(self):\n # Your code here\n\n", "There is a class wx.PyPanel that is a version of Panel intended to be subclassed from Python and allows you to override C++ virtual methods.\nThere are PyXxxx versions of a number of other wx classes as well.\n", "\nHow can I define a class as a Panel and afterwards call that class in my frame?\n\nWhat you tried is close, but you're not properly calling the super class __init__. When subclassing wxPython classes, however, it's generally best to use the following pattern so that you don't have to worry about which specific arguments you are passing to it. (This wouldn't have solved your problem, which was outside of the code in question, but it maybe makes it clearer what's happening.)\nclass Panel(wx.Panel):\n def __init__(self, *args, **kwargs):\n wx.Panel.__init__(self, *args, **kwargs)\n # ... code specific to your subclass goes here\n\nThis ensures that anything passed in is handed on to the super class method with no additions or removals. That means the signature for your subclass exactly matches the super class signature, which is also what someone else using your subclass would probably expect.\nIf, however, you are not actually doing anything in your own __init__() method other than calling the super class __init__(), you don't need to provide the method at all!\nAs for your original issue:\n\nbut it gives me this error: in __init__ windows.Panel_swiginit(self,windows.new_Panel(*args, **kwargs)) TypeError: in method 'new_Panel', expected argument 1 of type 'wxWindow *'\n\n(Edited) You were actually instantiating a wx.Panel() inside the __init__ rather than calling the super class __init__, as Javier (and Bryan Oakley, correcting me) pointed out. (Javier's change of the \"parent\" arg to \"*args\" confused me... sorry to confuse you.)\n" ]
[ 5, 2, 0 ]
[]
[]
[ "class", "frame", "panel", "python", "wxpython" ]
stackoverflow_0002272889_class_frame_panel_python_wxpython.txt
Q: Python: Visualisation of waves I want to program an easy visualisation of wave propagation. I tried this with visual python (VPython) but the program is very slow. I want to use a 2-D visualisation now. Which module could you recommend? Tkinter? Matplotlib? For the computation I use numpy/scipy because it is fast. Thanks in advance. EDIT: Do you think matplotlib is a good choice? It looks very strong. EDIT: I really get stuck. Please help me! A: Try this library: http://linux.wareseeker.com/Programming/summon-1.8.8.zip/2911b4d847 Python Imaging Library is supposed to be good for 2D graphics: http://www.pythonware.com/products/pil/ Other Useful Links: Boost.Python http://www.boost.org/libs/python/doc/ PyOpenGL http://pyopengl.sourceforge.net/ These links have some good information on them. I'm not familiar with matplotlib but it's got some good reviews: http://sourceforge.net/projects/matplotlib/reviews/
Python: Visualisation of waves
I want to program an easy visualisation of wave propagation. I tried this with visual python (VPython) but the program is very slow. I want to use a 2-D visualisation now. Which module could you recommend? Tkinter? Matplotlib? For the computation I use numpy/scipy because it is fast. Thanks in advance. EDIT: Do you think matplotlib is a good choice? It looks very strong. EDIT: I really get stuck. Please help me!
[ "Try this library:\nhttp://linux.wareseeker.com/Programming/summon-1.8.8.zip/2911b4d847\nPython Imaging Library is supposed to be good for 2D graphics:\nhttp://www.pythonware.com/products/pil/ \nOther Useful Links:\nBoost.Python http://www.boost.org/libs/python/doc/\nPyOpenGL http://pyopengl.sourceforge.net/ \nThese link's have some good information on them.\nI'm not familar with matplotlib but it's got some good review's:\nhttp://sourceforge.net/projects/matplotlib/reviews/\n" ]
[ 1 ]
[]
[]
[ "physics", "python", "visualization", "wave" ]
stackoverflow_0002273699_physics_python_visualization_wave.txt
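Since the question leans toward matplotlib, here is a small 1-D wave-propagation sketch using numpy and matplotlib's interactive mode; all parameters (grid size, speed, frame count) are arbitrary:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4 * np.pi, 400)
plt.ion()
line, = plt.plot(x, np.sin(x))
for t in np.linspace(0, 2 * np.pi, 60):
    line.set_ydata(np.sin(x - t))  # shift the wave rightwards over time
    plt.pause(0.05)
plt.ioff()
plt.show()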
Q: Regex divide with upper-case I would like to replace strings like 'HDMWhoSomeThing' to 'HDM Who Some Thing' with regex. So I would like to extract words which starts with an upper-case letter or consist of upper-case letters only. Notice that in the string 'HDMWho' the last upper-case letter is in the fact the first letter of the word Who - and should not be included in the word HDM. What is the correct regex to achieve this goal? I have tried many regex' similar to [A-Z][a-z]+ but without success. The [A-Z][a-z]+ gives me 'Who Some Thing' - without 'HDM' of course. Any ideas? Thanks, Rukki A: Try to split with this regular expression: /(?=[A-Z][a-z])/ And if your regular expression engine does not support splitting empty matches, try this regular expression to put spaces between the words: /([A-Z])(?![A-Z])/ Replace it with " $1" (space plus match of the first group). Then you can split at the space. A: one liner : ' '.join(a or b for a,b in re.findall('([A-Z][a-z]+)|(?:([A-Z]*)(?=[A-Z]))',s)) using regexp ([A-Z][a-z]+)|(?:([A-Z]*)(?=[A-Z])) A: #! /usr/bin/env python import re from collections import deque pattern = r'([A-Z]{2,}(?=[A-Z]|$)|[A-Z](?=[a-z]|$))' chunks = deque(re.split(pattern, 'HDMWhoSomeMONKEYThingXYZ')) result = [] while len(chunks): buf = chunks.popleft() if len(buf) == 0: continue if re.match(r'^[A-Z]$', buf) and len(chunks): buf += chunks.popleft() result.append(buf) print ' '.join(result) Output: HDM Who Some MONKEY Thing XYZ Judging by lines of code, this task is a much more natural fit with re.findall: pattern = r'([A-Z]{2,}(?=[A-Z]|$)|[A-Z][a-z]*)' print ' '.join(re.findall(pattern, 'HDMWhoSomeMONKEYThingX')) Output: HDM Who Some MONKEY Thing X A: May be '[A-Z]*?[A-Z][a-z]+'? Edit: This seems to work: [A-Z]{2,}(?![a-z])|[A-Z][a-z]+ import re def find_stuff(str): p = re.compile(r'[A-Z]{2,}(?![a-z])|[A-Z][a-z]+') m = p.findall(str) result = '' for x in m: result += x + ' ' print result find_stuff('HDMWhoSomeThing') find_stuff('SomeHDMWhoThing') Prints out: HDM Who Some Thing Some HDM Who Thing A: So 'words' in this case are: Any number of uppercase letters - unless the last uppercase letter is followed by a lowercase letter. One uppercase letter followed by any number of lowercase letters. so try: ([A-Z]+(?![a-z])|[A-Z][a-z]*) The first alternation includes a negative lookahead (?![a-z]), which handles the boundary between an all-caps word and an initial caps word.
Regex divide with upper-case
I would like to replace strings like 'HDMWhoSomeThing' to 'HDM Who Some Thing' with regex. So I would like to extract words which starts with an upper-case letter or consist of upper-case letters only. Notice that in the string 'HDMWho' the last upper-case letter is in the fact the first letter of the word Who - and should not be included in the word HDM. What is the correct regex to achieve this goal? I have tried many regex' similar to [A-Z][a-z]+ but without success. The [A-Z][a-z]+ gives me 'Who Some Thing' - without 'HDM' of course. Any ideas? Thanks, Rukki
[ "Try to split with this regular expression:\n/(?=[A-Z][a-z])/\n\nAnd if your regular expression engine does not support splitting empty matches, try this regular expression to put spaces between the words:\n/([A-Z])(?![A-Z])/\n\nReplace it with \" $1\" (space plus match of the first group). Then you can split at the space.\n", "one liner : \n' '.join(a or b for a,b in re.findall('([A-Z][a-z]+)|(?:([A-Z]*)(?=[A-Z]))',s))\nusing regexp \n([A-Z][a-z]+)|(?:([A-Z]*)(?=[A-Z]))\n", "#! /usr/bin/env python\n\nimport re\nfrom collections import deque\n\npattern = r'([A-Z]{2,}(?=[A-Z]|$)|[A-Z](?=[a-z]|$))'\nchunks = deque(re.split(pattern, 'HDMWhoSomeMONKEYThingXYZ'))\n\nresult = []\nwhile len(chunks):\n buf = chunks.popleft()\n if len(buf) == 0:\n continue\n if re.match(r'^[A-Z]$', buf) and len(chunks):\n buf += chunks.popleft()\n result.append(buf)\n\nprint ' '.join(result)\n\nOutput:\nHDM Who Some MONKEY Thing XYZ\nJudging by lines of code, this task is a much more natural fit with re.findall:\npattern = r'([A-Z]{2,}(?=[A-Z]|$)|[A-Z][a-z]*)'\nprint ' '.join(re.findall(pattern, 'HDMWhoSomeMONKEYThingX'))\n\nOutput:\nHDM Who Some MONKEY Thing X\n", "May be '[A-Z]*?[A-Z][a-z]+'?\nEdit: This seems to work: [A-Z]{2,}(?![a-z])|[A-Z][a-z]+\nimport re\n\ndef find_stuff(str):\n p = re.compile(r'[A-Z]{2,}(?![a-z])|[A-Z][a-z]+')\n m = p.findall(str)\n result = ''\n for x in m:\n result += x + ' '\n print result\n\nfind_stuff('HDMWhoSomeThing')\nfind_stuff('SomeHDMWhoThing')\n\nPrints out:\n\nHDM Who Some Thing\nSome HDM Who Thing\n\n", "So 'words' in this case are:\n\nAny number of uppercase letters - unless the last uppercase letter is followed by a lowercase letter.\nOne uppercase letter followed by any number of lowercase letters.\n\nso try:\n([A-Z]+(?![a-z])|[A-Z][a-z]*)\nThe first alternation includes a negative lookahead (?![a-z]), which handles the boundary between an all-caps word and an initial caps word.\n" ]
[ 2, 2, 2, 1, 1 ]
[]
[]
[ "python", "regex", "split", "string", "uppercase" ]
stackoverflow_0002273462_python_regex_split_string_uppercase.txt
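A runnable sketch of the last answer's pattern, in Python 2 to match the rest of this thread:

import re

pattern = r'([A-Z]+(?![a-z])|[A-Z][a-z]*)'
print ' '.join(re.findall(pattern, 'HDMWhoSomeThing'))   # HDM Who Some Thing
print ' '.join(re.findall(pattern, 'HDMWho'))            # HDM Who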
Q: SQLAlchemy subquery - average of sums Is there any way to write the following SQL statement in SQLAlchemy ORM: SELECT AVG(a1) FROM (SELECT sum(irterm.n) AS a1 FROM irterm GROUP BY irterm.item_id); Thank you A: sums = session.query(func.sum(Irterm.n).label('a1')).group_by(Irterm.item_id).subquery() average = session.query(func.avg(sums.c.a1)).scalar()
SQLAlchemy subquery - average of sums
Is there any way to write the following SQL statement in SQLAlchemy ORM: SELECT AVG(a1) FROM (SELECT sum(irterm.n) AS a1 FROM irterm GROUP BY irterm.item_id); Thank you
[ "sums = session.query(func.sum(Irterm.n).label('a1')).group_by(Irterm.item_id).subquery()\naverage = session.query(func.avg(sums.c.a1)).scalar()\n\n" ]
[ 31 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0002273127_python_sqlalchemy.txt
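A self-contained sketch of the accepted answer; the Irterm model below is an assumption reconstructed from the SQL in the question, with an in-memory SQLite database just to make it runnable:

from sqlalchemy import create_engine, Column, Integer, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Irterm(Base):
    __tablename__ = 'irterm'
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer)
    n = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

sums = session.query(func.sum(Irterm.n).label('a1')).group_by(Irterm.item_id).subquery()
average = session.query(func.avg(sums.c.a1)).scalar()
print average   # None on an empty table; the average of per-item sums otherwise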
Q: Encode East Asian languages using Python This may not really be a Python-related question, but pertains to language encoding in general. I'm mining tweets from Twitter, and it appears that there is a large Japanese user community (with messages in Japanese). When I tried encoding the tweets for an XML file I used utf-8, e.g. tweet = tweet.encode('utf-8'), and none of the Japanese tweets appeared as they should have. My question is: how should I have encoded them? What was my mistake? If I were to store the data in a CSV, what encoding scheme would I use in that case? A: Normally you would query the format for what encoding the data is in. Having said that, Shift-JIS is quite a popular encoding for Japanese text. >>> u'あいうえお'.encode('shift-jis') '\x82\xa0\x82\xa2\x82\xa4\x82\xa6\x82\xa8' A: There should be a way to query the encoding of the tweets when read from Twitter. You then decode them to Unicode as you read them into your program, then encode them when you write them back out to an XML file. Chinese, for example, might be using gbk encoding: import codecs unicode_data = data.decode('gbk') f = codecs.open('out.xml','w','utf-8') f.write(unicode_data) f.close()
Encode East Asian languages using Python
This may not really be a Python-related question, but pertains to language encoding in general. I'm mining tweets from Twitter, and it appears that there is a large Japanese user community (with messages in Japanese). When I tried encoding the tweets for an XML file I used utf-8, e.g. tweet = tweet.encode('utf-8'), and none of the Japanese tweets appeared as they should have. My question is: how should I have encoded them? What was my mistake? If I were to store the data in a CSV, what encoding scheme would I use in that case?
[ "Normally you would query the format for what encoding the data is in. Having said that, Shift-JIS is quite a popular encoding for Japanese text.\n>>> u'あいうえお'.encode('shift-jis')\n'\\x82\\xa0\\x82\\xa2\\x82\\xa4\\x82\\xa6\\x82\\xa8'\n\n", "There should be a way to query the encoding of the tweets when read from Twitter. You then decode them to Unicode as you read them into your program, then encode them when you write them back out to an XML file. Chinese, for example, might be using gbk encoding:\nimport codecs\nunicode_data = data.decode('gbk')\nf = codecs.open('out.xml','w','utf-8')\nf.write(unicode_data)\nf.close()\n\n" ]
[ 3, 2 ]
[]
[]
[ "csv", "encoding", "python", "xml" ]
stackoverflow_0002270928_csv_encoding_python_xml.txt
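A minimal sketch of the decode-once/encode-once discipline both answers point at; the byte string below is a hypothetical UTF-8 tweet:

import codecs

raw = '\xe3\x81\x82\xe3\x81\x84'        # hypothetical UTF-8 bytes ('あい') from the feed
tweet = raw.decode('utf-8')             # keep unicode objects inside the program
out = codecs.open('tweets.csv', 'w', encoding='utf-8')
out.write(tweet + u'\n')                # encoded back to UTF-8 only on the way out
out.close()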
Q: ldap raises an UNWILLING TO PERFORM error My Django application is using the python-ldap library (the ldap_groups django application) and must add users against an Active Directory on a Windows 2003 Virtual Machine domain. My application, running on an Ubuntu virtual machine, is not a member of the Windows domain. Here is the code: settings.py DNS_NAME='IP_ADRESS' LDAP_PORT=389 LDAP_URL='ldap://%s:%s' % (DNS_NAME,LDAP_PORT) BIND_USER='cn=administrateur,cn=users,dc=my,dc=domain,dc=fr' BIND_PASSWORD="AdminPassword" SEARCH_DN='cn=users,dc=my,dc=domain,dc=fr' NT4_DOMAIN='E2C' SEARCH_FIELDS= ['mail','givenName','sn','sAMAccountName','memberOf'] MEMBERSHIP_REQ=['Group_Required','Alternative_Group'] AUTHENTICATION_BACKENDS = ( 'ldap_groups.accounts.backends.ActiveDirectoryGroupMembershipSSLBackend', 'django.contrib.auth.backends.ModelBackend', ) DEBUG=True DEBUG_FILE='/$HOME/ldap.debug' backends.py import ldap import ldap.modlist as modlist username, email, password = kwargs['username'], kwargs['email'], kwargs['password1'] ldap.set_option(ldap.OPT_REFERRALS, 0) # Open a connection l = ldap.initialize(settings.LDAP_URL) # Bind/authenticate with a user with appropriate rights to add objects l.simple_bind_s(settings.BIND_USER,settings.BIND_PASSWORD) # The dn of our new entry/object dn="cn=%s,%s" % (username,settings.SEARCH_DN) # A dict to help build the "body" of the object attrs = {} attrs['objectclass'] = ['top','organizationalRole','simpleSecurityObject'] attrs['cn'] = username.encode('utf-16') attrs['userPassword'] = password.encode('utf-16') attrs['description'] = 'User object for replication using slurpd' # Convert our dict to nice syntax for the add-function using modlist-module ldif = modlist.addModlist(attrs) # Do the actual synchronous add-operation to the ldapserver l.add_s(dn,ldif) # Its nice to the server to disconnect and free resources when done l.unbind_s() When I trace my code, it seems there is a problem while adding the user in the "l.add_s" call. It returns the following error: UNWILLING_TO_PERFORM at /accounts/register/ {'info': '00002077: SvcErr: DSID-031907B4, problem 5003 (WILL_NOT_PERFORM), data 0\n', 'desc': 'Server is unwilling to perform'} If I use wrong credentials the server returns INVALID CREDENTIAL, so I think the credentials I'm using above are correct to bind on the ldap directory. Perhaps my Ubuntu machine should be a member of the domain, or there is something wrong in my code? A: I found the problem. In fact my objectclass was not compliant with Active Directory. Furthermore, encode the attribute values as plain Python strings. Here is the code to use: attrs = {} attrs['objectclass'] = ['top','person','organizationalPerson','user'] attrs['cn'] = str(username) attrs['userPassword'] = str(password) attrs['mail']=str(email) attrs['givenName']=str(firstname) attrs['sn']=str(surname) attrs['description'] = 'User object for replication using slurpd' I can add an account in my Active Directory successfully. Hope it will help you.
ldap raises an UNWILLING TO PERFORM error
My Django application is using the python-ldap library (the ldap_groups django application) and must add users against an Active Directory on a Windows 2003 Virtual Machine domain. My application, running on an Ubuntu virtual machine, is not a member of the Windows domain. Here is the code: settings.py DNS_NAME='IP_ADRESS' LDAP_PORT=389 LDAP_URL='ldap://%s:%s' % (DNS_NAME,LDAP_PORT) BIND_USER='cn=administrateur,cn=users,dc=my,dc=domain,dc=fr' BIND_PASSWORD="AdminPassword" SEARCH_DN='cn=users,dc=my,dc=domain,dc=fr' NT4_DOMAIN='E2C' SEARCH_FIELDS= ['mail','givenName','sn','sAMAccountName','memberOf'] MEMBERSHIP_REQ=['Group_Required','Alternative_Group'] AUTHENTICATION_BACKENDS = ( 'ldap_groups.accounts.backends.ActiveDirectoryGroupMembershipSSLBackend', 'django.contrib.auth.backends.ModelBackend', ) DEBUG=True DEBUG_FILE='/$HOME/ldap.debug' backends.py import ldap import ldap.modlist as modlist username, email, password = kwargs['username'], kwargs['email'], kwargs['password1'] ldap.set_option(ldap.OPT_REFERRALS, 0) # Open a connection l = ldap.initialize(settings.LDAP_URL) # Bind/authenticate with a user with appropriate rights to add objects l.simple_bind_s(settings.BIND_USER,settings.BIND_PASSWORD) # The dn of our new entry/object dn="cn=%s,%s" % (username,settings.SEARCH_DN) # A dict to help build the "body" of the object attrs = {} attrs['objectclass'] = ['top','organizationalRole','simpleSecurityObject'] attrs['cn'] = username.encode('utf-16') attrs['userPassword'] = password.encode('utf-16') attrs['description'] = 'User object for replication using slurpd' # Convert our dict to nice syntax for the add-function using modlist-module ldif = modlist.addModlist(attrs) # Do the actual synchronous add-operation to the ldapserver l.add_s(dn,ldif) # Its nice to the server to disconnect and free resources when done l.unbind_s() When I trace my code, it seems there is a problem while adding the user in the "l.add_s" call. It returns the following error: UNWILLING_TO_PERFORM at /accounts/register/ {'info': '00002077: SvcErr: DSID-031907B4, problem 5003 (WILL_NOT_PERFORM), data 0\n', 'desc': 'Server is unwilling to perform'} If I use wrong credentials the server returns INVALID CREDENTIAL, so I think the credentials I'm using above are correct to bind on the ldap directory. Perhaps my Ubuntu machine should be a member of the domain, or there is something wrong in my code?
[ "I found the problem. In fact my objectclass was not compliant with Active Directory.\nFurthermore change information encoding by a python string.\nHere is the code to use:\n attrs = {}\n attrs['objectclass'] = ['top','person','organizationalPerson','user']\n attrs['cn'] = str(username)\n attrs['userPassword'] = str(password)\n attrs['mail']=str(email)\n attrs['givenName']=str(firstname)\n attrs['sn']=str(surname)\n attrs['description'] = 'User object for replication using slurpd'\n\nI can add an account in my Active Directory successfully.\nHope it will help u.\n" ]
[ 2 ]
[]
[]
[ "django", "ldap", "python" ]
stackoverflow_0002273117_django_ldap_python.txt
Q: Python: Can't use the command python I want to install summon-module on Windows 7. I tried python setup.py install but cmd doesn't know the command "python". I also set the path correctly. What is the problem? Thanks in advance. A: PATH needs to point to the directory your python.exe is in, or it needs to be in the current directory, or you need to specify the full path. PYTHONPATH needs to point to the directory your setup.py is in, or it needs to be in the current directory, or you need to specify the full path. A: Add the directory with the python.exe to your path via Control Panel -> System -> Advanced -> Environment Variables. Then scroll through the System variables listed in the lower part of the screen, highlight "Path", and click edit. Add (don't replace!) the directory to that variable, probably at the end. Make sure there's a semi-colon (';') between it and the entries in front of and (if appropriate) behind it; I recommend putting a semi-colon at the end even if it's the last value. Once you've done this, click the Ok button on the environment variables dialog box and start a new command shell. You can type path at the prompt to get the path displayed so you can confirm that the Python directory has been added. A: On Windows it is python.exe, and you need the path to the executable added to your environment or to use the fully qualified path.
Python: Can't use the command python
I want to install summon-module on Windows 7. I tried python setup.py install but cmd doesn't know the command "python". I also set the path correctly. What is the problem? Thanks in advance.
[ "PATH needs to point to the directory your python.exe is in, or it needs to be in the current directory, or you need to specify the full path.\nPYTHONPATH needs to point to the directory your setup.py is in, or it needs to be in the current directory, or you need to specify the full path.\n", "Add the directory with the python.exe to your path via Control Panel -> System -> Advanced -> Environment Variables. Then scroll through the System variables listed in the lower part of the screen, highlight \"Path\", and click edit. Add (don't replace!) the directory to that variable, probably at the end. Make sure there's a semi-colon (';')between it and the entries in front of and (if appropriate) behind it; I recommend putting a semi-colon at the end even if it's the last value. Once you've done this, click the Ok button on the environment variables dialog box and start a new commend shell. You can type path at the prompt to get the path displayed so you can confirm that the python directory has been added.\n", "on windows it is python.exe and you need the path to the executable added to your environment or to use the fully qualified path\n" ]
[ 4, 3, 0 ]
[]
[]
[ "installation", "python", "windows" ]
stackoverflow_0002274319_installation_python_windows.txt
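Once the interpreter is reachable at all (for example by its full path, such as C:\Python26\python.exe; the install directory is an assumption), this two-liner prints the directory that belongs in PATH:

import sys, os
print os.path.dirname(sys.executable)   # e.g. C:\Python26 -- add this directory to PATH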
Q: Why User model inheritance doesn't work properly? I'm trying to use User model inheritance in my django application. The model looks like this: from django.contrib.auth.models import User, UserManager class MyUser(User): ICQ = models.CharField(max_length=9) objects = UserManager() and the authentication backend looks like this: import sys from django.db import models from django.db.models import get_model from django.conf import settings from django.contrib.auth.models import User, UserManager from django.contrib.auth.backends import ModelBackend from django.core.exceptions import ImproperlyConfigured class AuthBackend(ModelBackend): def authenticate(self, email=None, username=None, password=None): try: if email: user = self.user_class.objects.get(email = email) else: user = self.user_class.objects.get(username = username) if user.check_password(password): return user except self.user_class.DoesNotExist: return None def get_user(self, user_id): try: return self.user_class.objects.get(pk=user_id) except self.user_class.DoesNotExist: return None @property def user_class(self): if not hasattr(self, '_user_class'): self._user_class = get_model(*settings.CUSTOM_USER_MODEL.split('.', 2)) if not self._user_class: raise ImproperlyConfigured('Could not get custom user model') return self._user_class But if I try to authenticate, there is a "MyUser matching query does not exist" error on the self.user_class.objects.get(username = username) call. It looks like the admin user created during database syncing (I'm using sqlite3) is stored in the User model instead of MyUser (the username and password are right). Or is it something different? What am I doing wrong? This is an example from http://scottbarnham.com/blog/2008/08/21/extending-the-django-user-model-with-inheritance/ A: Contrary to what the blog post you linked to says, storing this kind of data in a profile model is still the recommended way in Django. Subclassing User has all kinds of problems, one of which is the one you are hitting: Django has no idea you have subclassed User and happily creates and reads User models within the Django code base. The same is true for any other 3rd party app you might like to use. Have a look at this ticket on Django's issue tracker to get some understanding of the underlying problems of subclassing User
Why User model inheritance doesn't work properly?
I'm trying to use User model inheritance in my django application. The model looks like this: from django.contrib.auth.models import User, UserManager class MyUser(User): ICQ = models.CharField(max_length=9) objects = UserManager() and the authentication backend looks like this: import sys from django.db import models from django.db.models import get_model from django.conf import settings from django.contrib.auth.models import User, UserManager from django.contrib.auth.backends import ModelBackend from django.core.exceptions import ImproperlyConfigured class AuthBackend(ModelBackend): def authenticate(self, email=None, username=None, password=None): try: if email: user = self.user_class.objects.get(email = email) else: user = self.user_class.objects.get(username = username) if user.check_password(password): return user except self.user_class.DoesNotExist: return None def get_user(self, user_id): try: return self.user_class.objects.get(pk=user_id) except self.user_class.DoesNotExist: return None @property def user_class(self): if not hasattr(self, '_user_class'): self._user_class = get_model(*settings.CUSTOM_USER_MODEL.split('.', 2)) if not self._user_class: raise ImproperlyConfigured('Could not get custom user model') return self._user_class But if I try to authenticate, there is a "MyUser matching query does not exist" error on the self.user_class.objects.get(username = username) call. It looks like the admin user created during database syncing (I'm using sqlite3) is stored in the User model instead of MyUser (the username and password are right). Or is it something different? What am I doing wrong? This is an example from http://scottbarnham.com/blog/2008/08/21/extending-the-django-user-model-with-inheritance/
[ "Contrary to what the blog post you linked to says, storing this kind of data in a profile model is still the recommended way in Django. Subclassing User has all kinds of problems, one of which is the one you are hitting: Django has no idea you have subclassed User and happily creates and reads User models within the Django code base. The same is true for any other 3rd party app you might like to use.\nHave a look at this ticket on Django's issue tracker to get some understanding of the underlying problems of subclassing User\n" ]
[ 4 ]
[]
[]
[ "django", "django_models", "inheritance", "python" ]
stackoverflow_0002274442_django_django_models_inheritance_python.txt
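A minimal sketch of the profile approach the answer recommends; the field and app names are carried over from the question and otherwise assumed (this uses the profile hook available in Django at the time):

# models.py
from django.db import models
from django.contrib.auth.models import User

class UserProfile(models.Model):
    user = models.OneToOneField(User)
    icq = models.CharField(max_length=9)

# settings.py
AUTH_PROFILE_MODULE = 'myapp.UserProfile'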
Q: eagerly evaluating boolean expressions in Python Is there a way (using eval or whatever) to eagerly evaluate boolean expressions in Python? Let's see this: >>> x = 3 >>> 5 < x < y False Yikes! That's very nice, because this will be false regardless of y's value. The thing is, y can even be undefined, and I'd like to get that exception. How can I get Python to evaluate all expressions even if it knows the result beforehand? Hope I made myself clear! Thanks, Manuel Edit: Please bear in mind that the expression must not be modified, just the evaluation technique. A: (5 < x) & (x < y) By using the bit-and operator, &, you get no short-circuiting behavior (as you get with and, or, chaining, all/any). Short-circuiting is normally deemed desirable (fast &c) but it's not hard to do without it if you really want;-). A: all([5 < x, x < y]) A: The most natural way would probably be to evaluate the expressions on prior lines. a = foo() b = bar() if a and b: ... as solutions like all([5 < x, x < y]) hide that the side effects are important and solutions using bitwise and (&) seem subtle and misusing—both of these would require a comment in your code to make it obvious you are forcing evaluation and will cause people reading your code to think What was he thinking???. Putting important calculations on their own lines makes more sense than hiding them within subtle, at-first-glance ugly code. Though my solution doesn't prevent a NameError if b does not exist (i.e., you have a typo) and a is false, this is something you should be able to figure out by reading your code and using a bugfinder if you choose. A: >>> x = 3 >>> y > x > 5 Traceback (most recent call last): File "", line 1, in NameError: name 'y' is not defined A: If it's just the possibility of programmer-error you want to preclude, eagerly evaluating expressions won't do much. For instance, mistakenly doing x or y() instead of x() or y() won't be detected. Perhaps you're actually looking for tools like pylint, pyflakes or pychecker. A: If you are receiving the statement from the user and want to execute it with your own semantics, you should parse it yourself with a tool such as pyparsing. It is messy and insecure to evaluate someone else's code in the middle of yours, mixing their results with yours, and it is confusing to evaluate what looks to be Python code but with different semantics.
eagerly evaluating boolean expressions in Python
Is there a way (using eval or whatever) to eagerly evaluate boolean expressions in Python? Let's see this: >>> x = 3 >>> 5 < x < y False Yikes! That's very nice, because this will be false regardless of y's value. The thing is, y can even be undefined, and I'd like to get that exception. How can I get Python to evaluate all expressions even if it knows the result beforehand? Hope I made myself clear! Thanks, Manuel Edit: Please bear in mind that the expression must not be modified, just the evaluation technique.
[ "(5 < x) & (x < y)\n\nBy using the bit-and operator, &, you get no short-circuiting behavior (as you get with and, or, chaining, all/any). Short-circuiting is normally deemed desirable (fast &c) but it's not hard to do without it if you really want;-).\n", "all([5 < x, x < y])\n\n", "The most natural way would probably be to evaluate the expressions on prior lines. \na = foo()\nb = bar()\nif a and b:\n ...\n\nas solutions like all([5 < x, x < y]) hide that the side effects are important and solutions using bitwise and (&) seem subtle and misusing—both of these would require a comment in your code to make it obvious you are forcing evaluation and will cause people reading your code to think What was he thinking???. Putting important calculations on their own lines makes more sense than hiding them within subtle, at-first-glance ugly code.\nThough my solution doesn't prevent a NameError if b does not exist (i.e., you have a typo) and a is false, this is something you should be able to figure out by reading your code and using a bugfinder if you choose.\n", "\n>>> x = 3\n>>> y > x > 5\nTraceback (most recent call last):\n File \"\", line 1, in \nNameError: name 'y' is not defined\n\n", "If it's just the possibility of programmer-error you want to preclude, eagerly evaluating expressions won't do much. For instance, mistakenly doing x or y() instead of x() or y() won't be detected. Perhaps you're actually looking for tools like pylint, pyflakes or pychecker.\n", "If you are receiving the statement from the user and want to execute it with your own semantics, you should parse it yourself with a tool such as pyparsing. It is messy and insecure to evaluate someone else's code in the middle of yours, mixing their results with yours and it is confusing to evaluate what looks to be Python code but with different semantics.\n" ]
[ 6, 5, 5, 3, 2, 1 ]
[]
[]
[ "eager", "exception_handling", "lazy_evaluation", "python" ]
stackoverflow_0002271017_eager_exception_handling_lazy_evaluation_python.txt
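A short demonstration of the difference, following the accepted answer: the chained comparison short-circuits past the undefined name, while & evaluates both sides:

x = 3
print 5 < x < undefined_name          # prints False; undefined_name is never evaluated
try:
    print (5 < x) & (x < undefined_name)
except NameError:
    print 'NameError raised, as the question wanted'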
Q: Where to put Python files to be redirected to by urls.py in Django? Where do I put python files to be redirected to by urls.py in Django? The tutorial showed something like this: urlpatterns = patterns('', (r'^polls/$', 'mysite.polls.views.index'), Where do I set up pages to be easily linked as something.something.page like this? I am currently just trying to drop straight .py files in random directories and typing the name of the file in the urls.py file like so: urlpatterns = patterns('', (r'file', 'file.py'), Which is obviously not the correct way to do it. How do I create pages to be linked to in urls.py? Thanks. A: You need to use views. You can create views (keep reading the official django documentation), then import them into your urls.py file and use them. Here's an example: views.py from django.shortcuts import render_to_response def index(request): """ Main page. """ return render_to_response('index.html') # display index.html urls.py from myproject.views import index urlpatterns = patterns('', (r'^$', index), ) This example will display your index.html page whenever you visit the root of your website (eg: /).
Where to put Python files to be redirected to by urls.py in Django?
Where do I put python files to be redirected to by urls.py in Django? The tutorial showed something like this: urlpatterns = patterns('', (r'^polls/$', 'mysite.polls.views.index'), Where do I set up pages to be easily linked as something.something.page like this? I am currently just trying to drop straight .py files in random directories and typing the name of the file in the urls.py file like so: urlpatterns = patterns('', (r'file', 'file.py'), Which is obviously not the correct way to do it. How do I create pages to be linked to in urls.py? Thanks.
[ "You need to use views. You can create views (keep reading the official django documentation), then import them into your urls.py file and use them. Here's an example:\nviews.py\nfrom django.shortcuts import render_to_response\n\ndef index(request):\n \"\"\"\n Main page.\n \"\"\"\n return render_to_response('index.html') # display index.html\n\nurls.py\nfrom myproject.views import index\nurlpatterns = patterns('',\n (r'^$', index),\n)\n\nThis example will display your index.html page whenever you visit the root of your website (eg: /).\n" ]
[ 6 ]
[]
[]
[ "django", "django_urls", "frameworks", "python" ]
stackoverflow_0002275016_django_django_urls_frameworks_python.txt
Q: python multi-processing queue: is putting independent from getting? Is putting an object in a multi-processing queue independent from getting an object from it? In other words, will putting an object block the process P1 if another process P2 is getting from it? Update: I am assuming an infinite queue. A: My reading of the source code is that get obtains a read lock, which is independent of the lock (called _notempty) acquired by put. If I understand correctly, concurrent gets can block each other, and concurrent puts can block each other (modulo your use of the block parameter), but gets and puts do not mutually block.
python multi-processing queue: is putting independent from getting?
Is putting an object in a multi-processing queue independent from getting an object from it? In other words, will putting an object block the process P1 if another process P2 is getting from it? Update: I am assuming an infinite queue.
[ "My reading of the source code is that get obtains a read lock, which is independent of of the lock (called _notempty) acquired by put. If I understand correctly, concurrent gets can block each other, and concurrent puts can block each other (modulo your use of the block parameter), but that gets and puts do not mutually block.\n" ]
[ 2 ]
[]
[]
[ "multiprocessing", "python", "queue" ]
stackoverflow_0002275108_multiprocessing_python_queue.txt
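A small runnable sketch of independent putting and getting, with one producer process and one consumer process sharing a multiprocessing.Queue:

from multiprocessing import Process, Queue

def producer(q):
    for i in range(5):
        q.put(i)
    q.put(None)                 # sentinel: tell the consumer to stop

def consumer(q):
    while True:
        item = q.get()          # blocks until the producer has put something
        if item is None:
            break
        print item

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()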
Q: Why does django not do for the User model the same as it does for the userprofile model? Why doesn't django just have the model to use for User configured in the settings file? The requirements on the model specified would be that it contain a certain set of fields. Is there a reason why it couldn't be done this way? A: The User model has a lot of dependencies and must conform to a diverse set of API requirements in order to interoperate with the rest of the django framework. This is because of its relationship with authentication and authorization. Changing User means changing the expected behavior of contrib.auth. If you want to do that, you can, and that is configurable in settings.py. More likely, what you want to configure is the extra metadata that relates with users. This extra info isn't in any way involved with authentication, and so it can be configured separately without affecting contrib.auth. In order to make the dependencies easy to manage, this is handled in a separate model. This has the added benefit of making the distinction between authorization dependent data and site specific user metadata much clearer. A: "Why doesn't django just have the model to use for User configured in the settings file?" I have a site that doesn't need users or a login or authentication. I don't want the model for User. In order to support everyone with applications like mine, User is optional.
Why does django not do for the User model the same as it does for the userprofile model?
Why doesn't django just have the model to use for User configured in the settings file? The requirements on the model specified would be that it contain a certain set of fields. Is there a reason why it couldn't be done this way?
[ "The User model has a lot of dependencies and must conform to a diverse set of API requirements in order to interoperate with the rest of the django framework. This is because of its relationship with authentication and authorization. Changing User means changing the expected behavior of contrib.auth. If you want to do that, you can, and that is configurable in settings.py. \nMore likely, what you want to configure is the extra metadata that relates with users. This extra info isn't in any way involved with authentication, and so it can be configured separately without affecting contrib.auth. In order to make the dependencies easy to manage, this is handled in a separate model. This has the added benefit of making the distinction between authorization dependent data and site specific user metadata much clearer.\n", "\"Why doesn't django just have the model to use for User configured in the settings file?\"\nI have a site that doesn't need users or a login or authentication. \nI don't want the model for User.\nIn order to support everyone with applications like mine, User is optional.\n" ]
[ 2, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002275043_django_python.txt
Q: POP3_SSL Not Found in poplib module What would cause this strange error when trying to use the poplib.POP3_SSL class? Traceback (most recent call last): File "test.py", line 131, in <module> M = poplib.POP3_SSL('XXXXXXXX', 995) AttributeError: 'module' object has no attribute 'POP3_SSL' My environment is Python 2.6, RHEL 5. I've never run into this problem before, and it just so happens to be a problem with only one of my servers in rotation. A: Your Python might be compiled without SSL support.
POP3_SSL Not Found in poplib module
What would cause this strange error when trying to use the poplib.POP3_SSL class? Traceback (most recent call last): File "test.py", line 131, in <module> M = poplib.POP3_SSL('XXXXXXXX', 995) AttributeError: 'module' object has no attribute 'POP3_SSL' My environment is Python 2.6, RHEL 5. I've never run into this problem before, and it just so happens to be a problem with only one of my servers in rotation.
[ "Your python might be compiled without ssl support.\n" ]
[ 1 ]
[]
[]
[ "pop3", "python", "ssl" ]
stackoverflow_0002275913_pop3_python_ssl.txt
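A quick way to test the answer's hypothesis on the affected server; on Python 2.6, poplib only defines POP3_SSL when the interpreter was built with SSL support, so the two checks below should agree:

try:
    import ssl
    print 'this interpreter was built with SSL support'
except ImportError:
    print 'no ssl module; poplib.POP3_SSL will be missing'

import poplib
print hasattr(poplib, 'POP3_SSL')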
Q: How to change wx.Panel background color on MouseOver? this code: import wx app = None class Plugin(wx.Panel): def __init__(self, parent, *args, **kwargs): wx.Panel.__init__(self, parent, *args, **kwargs) self.SetBackgroundColour((11, 11, 11)) self.name = "plugin" self.Bind(wx.EVT_ENTER_WINDOW, self.onMouseOver) self.Bind(wx.EVT_LEAVE_WINDOW, self.onMouseLeave) wx.EVT_ENTER_WINDOW(self, self.onMouseOver) wx.EVT_LEAVE_WINDOW(self, self.onMouseLeave) def onMouseOver(self, event): self.SetBackgroundColor((179, 179, 179)) self.Refresh() def onMouseLeave(self, event): self.SetBackgroundColor((11, 11, 11)) self.Refresh() def OnClose(self, event): self.Close() app.Destroy() def name(): print self.name app = wx.App() frame = wx.Frame(None, -1, size=(480, 380)) Plugin(frame) frame.Show(True) app.MainLoop() gives me the error: Traceback (most recent call last): File "C:\.... ... ....\plugin.py", line 18, in onMouseOver self.SetBackgroundColor((179, 179, 179)) AttributeError: 'Plugin' object has no attribute 'SetBackgroundColor' What am I doing wrong? P.S.: I need to have this class as a wx.Panel! Thanks in advance A: The method is named SetBackgroundColour, with a u. Also, you're binding events twice with two different methods. Just use the self.Bind style, and remove the other two lines.
How to change wx.Panel background color on MouseOver?
this code: import wx app = None class Plugin(wx.Panel): def __init__(self, parent, *args, **kwargs): wx.Panel.__init__(self, parent, *args, **kwargs) self.SetBackgroundColour((11, 11, 11)) self.name = "plugin" self.Bind(wx.EVT_ENTER_WINDOW, self.onMouseOver) self.Bind(wx.EVT_LEAVE_WINDOW, self.onMouseLeave) wx.EVT_ENTER_WINDOW(self, self.onMouseOver) wx.EVT_LEAVE_WINDOW(self, self.onMouseLeave) def onMouseOver(self, event): self.SetBackgroundColor((179, 179, 179)) self.Refresh() def onMouseLeave(self, event): self.SetBackgroundColor((11, 11, 11)) self.Refresh() def OnClose(self, event): self.Close() app.Destroy() def name(): print self.name app = wx.App() frame = wx.Frame(None, -1, size=(480, 380)) Plugin(frame) frame.Show(True) app.MainLoop() gives me the error: Traceback (most recent call last): File "C:\.... ... ....\plugin.py", line 18, in onMouseOver self.SetBackgroundColor((179, 179, 179)) AttributeError: 'Plugin' object has no attribute 'SetBackgroundColor' What am I doing wrong? P.S.: I need to have this class as a wx.Panel! Thanks in advance
[ "The method is named SetBackgroundColour, with a u.\nAlso, you're binding events twice with two different methods. Just use the self.Bind style, and remove the other two lines.\n" ]
[ 13 ]
[]
[]
[ "panel", "python", "wxpython", "wxwidgets" ]
stackoverflow_0002275917_panel_python_wxpython_wxwidgets.txt
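For completeness, the two handlers from the question with the spelling fixed as the answer describes:

def onMouseOver(self, event):
    self.SetBackgroundColour((179, 179, 179))   # note the 'u' in Colour
    self.Refresh()

def onMouseLeave(self, event):
    self.SetBackgroundColour((11, 11, 11))
    self.Refresh()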
Q: Django generic relations practice I'm developing an authentication backend with object-based permissions for my Django app. I use generic relations between an object and a permission: class GroupPermission(models.Model): content_t= models.ForeignKey(ContentType,related_name='g_content_t') object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_t', 'object_id') permission=models.ForeignKey(Permission) group = models.ForeignKey(Group) And now I want to get all objects of a specified content type for which a certain group or user has a certain permission. What's the best way to do that? Shall I define the second end of the relation in the app's models, or better write custom SQL? I'm trying to build a generic backend, so I don't want it to depend on the app that is using it. Thanks! A: I'm guessing you're looking for something like: perm = Permission.objects.get(pk=1) # pk #1 for brevity. group = Group.objects.get(pk=1) # Again, for brevity. group_perms = GroupPermission.objects.filter(permission=perm, group=group) objects = [x.content_object for x in group_perms] This should get all of the objects which have the Permission of perm, and the Group of group into the variable objects. You could implement this with a custom Manager class on GroupPermission, as well ('for' is a reserved word, so the method needs another name): class GroupPermissionManager(models.Manager): def with_permission(self, group, perm): group_perms = self.filter(permission=perm, group=group) return [x.content_object for x in group_perms] class GroupPermission(models.Model): # fields as above... objects = GroupPermissionManager() Which would make your view code simpler: perm = Permission.objects.get(pk=1) # pk #1 for brevity. group = Group.objects.get(pk=1) # Again, for brevity. objects = GroupPermission.objects.with_permission(group, perm)
Django generic relations practice
I'm developing an authentication backend with object-based permissions for my Django app. I use generic relations between an object and a permission: class GroupPermission(models.Model): content_t= models.ForeignKey(ContentType,related_name='g_content_t') object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_t', 'object_id') permission=models.ForeignKey(Permission) group = models.ForeignKey(Group) And now I want to get all objects of a specified content type for which a certain group or user has a certain permission. What's the best way to do that? Shall I define the second end of the relation in the app's models, or better write custom SQL? I'm trying to build a generic backend, so I don't want it to depend on the app that is using it. Thanks!
[ "I'm guessing you're looking for something like:\nperm = Permission.objects.get(pk=1) # pk #1 for brevity.\ngroup = Group.objects.get(pk=1) # Again, for brevity.\ngroup_perms = GroupPermission.objects.filter(permission=perm, group=group)\nobjects = [x.content_object for x in group_perms]\n\nThis should get all of the objects which have the Permission of perm, and the Group of group into the variable objects.\nYou could implement this into a Custom Manager class, as well:\nclass GroupPermissionManager(models.Manager):\n def for(self, perm):\n group_perms = GroupPermission.objects.filter(permission=perm, group=self)\n objects = [x.content_object for x in group_perms]\n\nclass Group(models.Model):\n name = models.CharField(max_length=30)\n permissions = GroupPermissionManager()\n\nWhich would make your view code simpler:\nperm = Permission.objects.get(pk=1) # pk #1 for brevity.\ngroup = Group.objects.get(pk=1) # Again, for brevity.\nobjects = group.permissions.for(perm) \n\n" ]
[ 3 ]
[]
[]
[ "django", "django_orm", "generic_relationship", "python" ]
stackoverflow_0002275602_django_django_orm_generic_relationship_python.txt
Q: Django legacy database encoding I'm sure this question is not specific to django, but since I couldn't find any solution for my problem in other questions about python and encodings, I'm going to ask this. I need to add new features to an existing website which is written in PHP using MySQL as the backend. I inspected the database and created models for the tables I am going to use. However, there is a problem with the existing data: half of it is in Russian, and (at least it seems to me) it's in utf-8 encoding. When I show that data in django's admin, it doesn't appear right. In [52]: p.name Out[52]: u'\xd0\u02dc\xd0\xb3\xd0\xbe\xd1\u20ac\xd1\u0152 ' In [53]: repr(p.name) Out[53]: "u'\\xd0\\u02dc\\xd0\\xb3\\xd0\\xbe\\xd1\\u20ac\\xd1\\u0152 '" In django admin it displays like this: Игорь Encodings are still a little bit mythical to me, but if I understand this output correctly, basically those are utf-8 bytes in a unicode object. The question: is it possible to fix this in django's database layer? I'm going to update existing content in these tables, and I need the existing PHP front-end to be compatible with both the new data and the old. When I add these database options the data is displayed in admin correctly; however, I get a UnicodeEncodeError when saving something. DATABASE_OPTIONS = { 'charset': 'latin1', 'use_unicode': False, } The name returned in this case is: In [2]: p2.name Out[2]: '\xd0\x9b\xd0\xae\xd0\xa1\xd0\xaf' I checked with a utf-8 character table, and those are the correct characters for the data stored in that row. A: Check your mysql connection parameters. Also, you can specify DATABASE_OPTIONS: DATABASE_OPTIONS = { "charset": "utf8", "init_command": "SET storage_engine=InnoDB", } But check whether it's really utf-8. Also note that connection and server encoding must be in sync. A: Actually this problem was the database's previous character set and collation: it was latin1, but data was inserted using the utf-8 charset. It was solved by exporting the data using the latin1 charset, replacing all occurrences of latin1 with utf8, and importing the data again. This answer shows how to do this: MySQL Convert latin1 data to UTF8
Django legacy database encoding
I'm sure this question is not specific to django, but since I couldn't find any solution for my problem in other questions about python and encodings, I'm going to ask this. I need to add new features to an existing website which is written in PHP using MySQL as the backend. I inspected the database and created models for the tables I am going to use. However, there is a problem with the existing data: half of it is in Russian, and (at least it seems to me) it's in utf-8 encoding. When I show that data in django's admin, it doesn't appear right. In [52]: p.name Out[52]: u'\xd0\u02dc\xd0\xb3\xd0\xbe\xd1\u20ac\xd1\u0152 ' In [53]: repr(p.name) Out[53]: "u'\\xd0\\u02dc\\xd0\\xb3\\xd0\\xbe\\xd1\\u20ac\\xd1\\u0152 '" In django admin it displays like this: Игорь Encodings are still a little bit mythical to me, but if I understand this output correctly, basically those are utf-8 bytes in a unicode object. The question: is it possible to fix this in django's database layer? I'm going to update existing content in these tables, and I need the existing PHP front-end to be compatible with both the new data and the old. When I add these database options the data is displayed in admin correctly; however, I get a UnicodeEncodeError when saving something. DATABASE_OPTIONS = { 'charset': 'latin1', 'use_unicode': False, } The name returned in this case is: In [2]: p2.name Out[2]: '\xd0\x9b\xd0\xae\xd0\xa1\xd0\xaf' I checked with a utf-8 character table, and those are the correct characters for the data stored in that row.
[ "Check your mysql connection parameters. Also, You can specify DATABASE_OPTIONS:\nDATABASE_OPTIONS = {\n \"charset\": \"utf8\",\n \"init_command\": \"SET storage_engine=InnoDB\",\n}\n\nBut check out if it's really utf-8. Also note that connection and server encoding must be in sync. \n", "Actually this problem was the database's previous character set and collation- it was latin1, but data was inserted using utf-8 charset. It was solved by exporting data using latin1 charset, replacing all occurences of latin1 with utf8 and importing data again. This answer shows how to do this: MySQL Convert latin1 data to UTF8\n" ]
[ 1, 1 ]
[]
[]
[ "django", "encoding", "python" ]
stackoverflow_0002267242_django_encoding_python.txt
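A deliberately crude sketch of the dump-rewrite step the second answer describes; it assumes the literal string 'latin1' never occurs inside the data itself, so treat it as illustration only:

data = open('dump_latin1.sql').read()      # dump taken with the latin1 charset
data = data.replace('latin1', 'utf8')      # rewrite the charset declarations
open('dump_utf8.sql', 'w').write(data)     # re-import this file afterwards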
Q: What's the advantage of queues over pipes when communicating between processes? What would be the advantage(s) (if any) of using 2 Queues over a Pipe to communicate between processes? I am planning on using the multiprocessing python module. A: The big win is that queues are process- and thread-safe. Pipes are not: if two different processes try to read from or write to the same end of a pipe, bad things happen. Queues are also at a somewhat higher level of abstraction than pipes, which may or may not be an advantage in your specific case. A: Queues hold the messages and retain them until the next time the queue is active and pushes them through, regardless of whether the pipe or connection is broken; with a pipe/connection, it's goodbye to the message with an error. Hope this helps, Best regards, Tom.
What's the advantage of queues over pipes when communicating between processes?
What would be the advantage(s) (if any) of using 2 Queues over a Pipe to communicate between processes? I am planning on using the multiprocessing python module.
[ "The big win is that queues are process- and thread- safe. Pipes are not: if two different processes try to read from or write to the same end of a pipe, bad things happen. Queues are also at a somewhat higher level of abstraction than pipes, which may or may not be an advantage in your specific case.\n", "Queues hold the messages and retains them until the next time the queue is active and pushes it through...regardless if the pipe or connection is broken...with a pipe/connection, its goodbye to the message with an error...\nHope this helps,\nBest regards,\nTom.\n" ]
[ 12, 4 ]
[]
[]
[ "linux", "multiprocessing", "pipe", "python", "queue" ]
stackoverflow_0002275909_linux_multiprocessing_pipe_python_queue.txt
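A minimal side-by-side of the two APIs, in a single process just to show the calls:

from multiprocessing import Pipe, Queue

parent_end, child_end = Pipe()
parent_end.send('hello')
print child_end.recv()      # 'hello' -- but each end should be used by one process only

q = Queue()
q.put('hello')
print q.get()               # 'hello' -- safe with many producers and consumers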
Q: How do I get my simple twisted proxy to work? I am attempting to make use of the Twisted.Web framework. Notice the three line comments (#line1, #line2, #line3). I want to create a proxy (gateway?) that will forward a request to one of two servers depending on the url. If I uncomment either comment 1 or 2 (and comment the rest), the request is proxied to the correct server. However, of course, it does not pick the server based on the URL. from twisted.internet import reactor from twisted.web import proxy, server from twisted.web.resource import Resource class Simple(Resource): isLeaf = True allowedMethods = ("GET","POST") def getChild(self, name, request): if name == "/" or name == "": return proxy.ReverseProxyResource('localhost', 8086, '') else: return proxy.ReverseProxyResource('localhost', 8085, '') simple = Simple() # site = server.Site(proxy.ReverseProxyResource('localhost', 8085, '')) #line1 # site = server.Site(proxy.ReverseProxyResource('localhost', 8085, '')) #line2 site = server.Site(simple) #line3 reactor.listenTCP(8080, site) reactor.run() As the code above currently stands, when I run this script and navigate to server "localhost:8080/ANYTHING_AT_ALL" I get the following response. Method Not Allowed Your browser approached me (at /ANYTHING_AT_ALL) with the method "GET". I only allow the methods GET, POST here. I don't know what I am doing wrong? Any help would be appreciated. A: Since your Simple class implements the getChild() method, it is implied that this is not a leaf node, however, you are stating that it is a leaf node by setting isLeaf = True. (How can a leaf node have a child?). Try changing isLeaf = True to isLeaf = False and you'll find that it redirects to the proxy as you'd expect. From the Resource.getChild docstring: ... This will not be called if the class-level variable 'isLeaf' is set in your subclass; instead, the 'postpath' attribute of the request will be left as a list of the remaining path elements.... A: Here is the final working solution. Basically two resource request go to the GAE server, and all remaining request go to the GWT server. Other than implementing mhawke's change, there is only one other change, and that was adding '"/" + name' to the proxy servers path. I assume this had to be done because that portion of the path was consumed and placed in the 'name' variable. from twisted.internet import reactor from twisted.web import proxy, server from twisted.web.resource import Resource class Simple(Resource): isLeaf = False allowedMethods = ("GET","POST") def getChild(self, name, request): print "getChild called with name:'%s'" % name if name == "get.json" or name == "post.json": print "proxy on GAE" return proxy.ReverseProxyResource('localhost', 8085, "/"+name) else: print "proxy on GWT" return proxy.ReverseProxyResource('localhost', 8086, "/"+name) simple = Simple() site = server.Site(simple) reactor.listenTCP(8080, site) reactor.run() Thank you.
How do I get my simple twisted proxy to work?
I am attempting to make use of the Twisted.Web framework. Notice the three line comments (#line1, #line2, #line3). I want to create a proxy (gateway?) that will forward a request to one of two servers depending on the url. If I uncomment either comment 1 or 2 (and comment the rest), the request is proxied to the correct server. However, of course, it does not pick the server based on the URL. from twisted.internet import reactor from twisted.web import proxy, server from twisted.web.resource import Resource class Simple(Resource): isLeaf = True allowedMethods = ("GET","POST") def getChild(self, name, request): if name == "/" or name == "": return proxy.ReverseProxyResource('localhost', 8086, '') else: return proxy.ReverseProxyResource('localhost', 8085, '') simple = Simple() # site = server.Site(proxy.ReverseProxyResource('localhost', 8085, '')) #line1 # site = server.Site(proxy.ReverseProxyResource('localhost', 8085, '')) #line2 site = server.Site(simple) #line3 reactor.listenTCP(8080, site) reactor.run() As the code above currently stands, when I run this script and navigate to server "localhost:8080/ANYTHING_AT_ALL" I get the following response. Method Not Allowed Your browser approached me (at /ANYTHING_AT_ALL) with the method "GET". I only allow the methods GET, POST here. I don't know what I am doing wrong? Any help would be appreciated.
[ "Since your Simple class implements the getChild() method, it is implied that this is not a leaf node, however, you are stating that it is a leaf node by setting isLeaf = True. (How can a leaf node have a child?).\nTry changing isLeaf = True to isLeaf = False and you'll find that it redirects to the proxy as you'd expect.\nFrom the Resource.getChild docstring:\n... This will not be called if the class-level variable 'isLeaf' is set in\n your subclass; instead, the 'postpath' attribute of the request will be\n left as a list of the remaining path elements....\n\n", "Here is the final working solution. Basically two resource request go to the GAE server, and all remaining request go to the GWT server.\nOther than implementing mhawke's change, there is only one other change, and that was adding '\"/\" + name' to the proxy servers path. I assume this had to be done because that portion of the path was consumed and placed in the 'name' variable.\nfrom twisted.internet import reactor\nfrom twisted.web import proxy, server\nfrom twisted.web.resource import Resource\n\nclass Simple(Resource):\n isLeaf = False\n allowedMethods = (\"GET\",\"POST\")\n def getChild(self, name, request):\n print \"getChild called with name:'%s'\" % name\n if name == \"get.json\" or name == \"post.json\":\n print \"proxy on GAE\"\n return proxy.ReverseProxyResource('localhost', 8085, \"/\"+name)\n else:\n print \"proxy on GWT\"\n return proxy.ReverseProxyResource('localhost', 8086, \"/\"+name)\n\nsimple = Simple()\nsite = server.Site(simple)\nreactor.listenTCP(8080, site)\nreactor.run()\n\nThank you.\n" ]
[ 4, 2 ]
[]
[]
[ "proxy", "python", "twisted" ]
stackoverflow_0002269380_proxy_python_twisted.txt
Q: IF in the Django template system How do I do this: {% if thestring %} {% if thestring.find("1") >= 0 %} {% endif %} {% endif %} I am assuming I need to build a template filter? Will that work? A: It would. But use the in operator instead of the find() method. Example: {% if thestring|contains:"1" %} A: You don't need to build a custom filter, though one would work -- the alternative of coding {% if thestring %} {% if "1" in thestring %} {% endif %} {% endif %} would also go just fine. A: I believe you'll find that the Django template system isn't designed to have complex logic in it. This type of processing should happen in your view, then be passed to the template.
IF in the Django template system
How do I do this: {% if thestring %} {% if thestring.find("1") >= 0 %} {% endif %} {% endif %} I am assuming I need to build a template filter? Will that work?
[ "It would. But use the in operator instead of the find() method.\nExample:\n{% if thestring|contains:\"1\" %}\n\n", "You don't need to build a custom filter, though one would work -- the alternative of coding\n{% if thestring %}\n\n {% if \"1\" in thestring %}\n\n {% endif %}\n\n{% endif %}\n\nwould also go just fine.\n", "I believe you'll find that the Django template system isn't designed to have complex logic in it. This type of processing should happen in your view, then be passed to the template.\n" ]
[ 3, 3, 1 ]
[]
[]
[ "django", "python", "templates" ]
stackoverflow_0002276319_django_python_templates.txt
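The contains filter used in the first answer is not built into Django; here is one hypothetical way to define it (the file layout and filter name are assumptions):

# myapp/templatetags/extra_filters.py
from django import template

register = template.Library()

@register.filter
def contains(value, arg):
    return arg in value

# in the template: {% load extra_filters %} and then {% if thestring|contains:"1" %}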
Q: Python multiprocessing process vs. standalone Python VM Aside from the ease of use of the multiprocessing module when it comes to hooking up processes with communication resources, are there any other differences between spawning multiple processes using multiprocessing compared to using subprocess to launch separate Python VMs? A: On Posix platforms, multiprocessing primitives essentially wrap an os.fork(). What this means is that at the point you spawn a process in multiprocessing, the code already imported/initialized remains so in the child process. This can be a boon if you have a lot of things to initialize and then each subprocess essentially performs operations on (copies of) those initialized objects, but not all that helpful if the thing you run in the subprocess is completely unrelated. There are also implications for resources such as file-handles, sockets, etc with multiprocessing on a unix-like platform. Meanwhile, when using subprocess, you are creating an entirely new program/interpreter each time you Popen a new process. This means there can be less shared memory between them, but it also means you can Popen into a completely separate program, or a new entry-point into the same program. On Windows, the differences are less between multiprocessing and subprocess, because Windows does not provide fork(). A: If you ignore any communication issues (i.e., if the separate Python VMs do not communicate among themselves, or communicate only through other mechanisms that are explicitly established), there are no other substantial differences. (I believe multiprocessing, under certain conditions -- Unix-like platforms, in particular -- can use the more efficient fork rather than the fork-exec pair always implied by subprocess -- but that's not "substantial" when just a few processes are involved [[IOW, the performance difference on startup will not be material to the performance of the whole system]]).
Python multiprocessing process vs. standalone Python VM
Aside from the ease of use of the multiprocessing module when it comes to hooking up processes with communication resources, are there any other differences between spawning multiple processes using multiprocessing compared to using subprocess to launch separate Python VMs?
[ "On Posix platforms, multiprocessing primitives essentially wrap an os.fork(). What this means is that at point you spawn a process in multiprocessing, the code already imported/initialized remains so in the child process.\nThis can be a boon if you have a lot of things to initialize and then each subprocess essentially performs operations on (copies of) those initialized objects, but not all that helpful if the thing you run in the subprocess is completely unrelated.\nThere are also implications for resources such as file-handles, sockets, etc with multiprocessing on a unix-like platform.\nMeanwhile, when using subprocess, you are creating an entirely new program/interpreter each time you Popen a new process. This means there can be less shared memory between them, but it also means you can Popen into a completely separate program, or a new entry-point into the same program.\nOn Windows, the differences are less between multiprocessing and subprocess, because windows does not provide fork().\n", "If you ignore any communication issues (i.e., if the separate Python VMs do not communicate among themselves, or communicate only through other mechanisms that are explicitly established), there are no other substantial differences. (I believe multiprocessing, under certain conditions -- Unix-like platforms, in particular -- can use the more efficient fork rather than the fork-exec pair always implied by multiprocessing -- but that's not \"substantial\" when just a few processes are involved [[IOW, the performance difference on startup will not be material to the performance of the whole system]]).\n" ]
[ 22, 5 ]
[]
[]
[ "multiprocessing", "python", "virtual_machine" ]
stackoverflow_0002276117_multiprocessing_python_virtual_machine.txt
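A minimal contrast of the two spawning styles the answers describe; on POSIX the multiprocessing child inherits the already-initialized module state, while the subprocess child is a fresh interpreter:

import sys
import subprocess
from multiprocessing import Process

GREETING = 'initialized before spawning'

def work():
    print GREETING              # the multiprocessing child sees this module state

if __name__ == '__main__':
    p = Process(target=work)
    p.start()
    p.join()
    # a brand-new interpreter: it knows nothing unless it imports/initializes it itself
    subprocess.call([sys.executable, '-c', "print 'fresh interpreter'"])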
Q: Where is the phpMailer php class equivalent for Python? I'm new to Python. Actually, I'm trying to send a full-featured email with Python: an HTML body, a text alternative body, and an attachment. So I found this tutorial and adapted it for Gmail authentication (tutorial found here). The code I have at the moment is this: def createhtmlmail (html, text, subject): """Create a mime-message that will render HTML in popular MUAs, text in better ones""" import MimeWriter import mimetools import cStringIO from email.MIMEMultipart import MIMEMultipart from email.MIMEBase import MIMEBase from email.MIMEText import MIMEText from email.Utils import COMMASPACE, formatdate from email import Encoders import os out = cStringIO.StringIO() # output buffer for our message htmlin = cStringIO.StringIO(html) txtin = cStringIO.StringIO(text) writer = MimeWriter.MimeWriter(out) # # set up some basic headers... we put subject here # because smtplib.sendmail expects it to be in the # message body # writer.addheader("Subject", subject) writer.addheader("MIME-Version", "1.0") # # start the multipart section of the message # multipart/alternative seems to work better # on some MUAs than multipart/mixed # writer.startmultipartbody("alternative") writer.flushheaders() # # the plain text section # subpart = writer.nextpart() subpart.addheader("Content-Transfer-Encoding", "quoted-printable") pout = subpart.startbody("text/plain", [("charset", 'us-ascii')]) mimetools.encode(txtin, pout, 'quoted-printable') txtin.close() # # start the html subpart of the message # subpart = writer.nextpart() subpart.addheader("Content-Transfer-Encoding", "quoted-printable") # # returns us a file-ish object we can write to # pout = subpart.startbody("text/html", [("charset", 'us-ascii')]) mimetools.encode(htmlin, pout, 'quoted-printable') htmlin.close() # # Now that we're done, close our writer and # return the message body # writer.lastpart() msg = out.getvalue() out.close() return msg import smtplib f = open("/path/to/html/version.html", 'r') html = f.read() f.close() f = open("/path/to/txt/version.txt", 'r') text = f.read() subject = "Prova email html da python, con allegato!" message = createhtmlmail(html, text, subject) gmail_user = "thegmailaccount@gmail.com" gmail_pwd = "thegmailpassword" server = smtplib.SMTP("smtp.gmail.com", 587) server.ehlo() server.starttls() server.ehlo() server.login(gmail_user, gmail_pwd) server.sendmail(gmail_user, "example@example.com", message) server.close() and that works; now only the attachment is missing, and I am not able to add it (from this post). So, why is there no Python class like phpMailer for PHP? Is it because, for a moderately able Python programmer, sending an HTML email with an attachment and an alternative text body is so easy that a class is not needed? Or is it because I just didn't find it? If I manage to write a class like that once I'm good enough with Python, would it be useful for someone? A: If you can excuse some blatant self promotion, I wrote a mailer module that makes sending email with Python fairly simple. No dependencies other than the Python smtplib and email libraries.
Here's a simple example for sending an email with an attachment: from mailer import Mailer from mailer import Message message = Message(From="me@example.com", To=["you@example.com", "him@example.com"]) message.Subject = "Kitty with dynamite" message.Body = """Kitty go boom!""" message.attach("kitty.jpg") sender = Mailer('smtp.example.com') sender.login("username", "password") sender.send(message) Edit: Here's an example of sending an HTML email with alternate text. :) from mailer import Mailer from mailer import Message message = Message(From="me@example.com", To="you@example.com", charset="utf-8") message.Subject = "An HTML Email" message.Html = """This email uses <strong>HTML</strong>!""" message.Body = """This is alternate text.""" sender = Mailer('smtp.example.com') sender.send(message) Edit 2: Thanks to one of the comments, I've added a new version of mailer to pypi that lets you specify the port in the Mailer class. A: Django includes the class you need in core, docs here from django.core.mail import EmailMultiAlternatives subject, from_email, to = 'hello', 'from@example.com', 'to@example.com' text_content = 'This is an important message.' html_content = '<p>This is an <strong>important</strong> message.</p>' msg = EmailMultiAlternatives(subject, text_content, from_email, [to]) msg.attach_alternative(html_content, "text/html") msg.attach_file('/path/to/file.jpg') msg.send() In my settings I have: #GMAIL STUFF EMAIL_USE_TLS = True EMAIL_HOST = 'smtp.gmail.com' EMAIL_HOST_USER = 'name@gmail.com' EMAIL_HOST_PASSWORD = 'password' EMAIL_PORT = 587 A: Just want to point to Lamson Project which was what I was looking for when I found this thread. I did some more searching and found it. It's: Lamson's goal is to put an end to the hell that is "e-mail application development". Rather than stay stuck in the 1970s, Lamson adopts modern web application framework design and uses a proven scripting language (Python). It integrates nicely with Django. But it's made more for email-based applications. It looks like pure love though. A: Maybe you can try with turbomail python-turbomail.org It's easier and more useful :) import turbomail # ... message = turbomail.Message("from@example.com", "to@example.com", subject) message.plain = "Hello world!" turbomail.enqueue(message) A: I recommend reading the SMTP RFC. A Google search shows that this can easily be done by using the MIMEMultipart class which you are importing but never using. Here are some examples on Python's documentation site.
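For completeness, the attachment part can also be done with nothing but the standard-library modules the question already imports; a rough, untested sketch that reuses the question's subject, text, html, server and gmail_user variables (the attachment path is a placeholder):

from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
from email.MIMEBase import MIMEBase
from email import Encoders

msg = MIMEMultipart('mixed')
msg['Subject'] = subject
alt = MIMEMultipart('alternative')  # plain-text and HTML versions of the body
alt.attach(MIMEText(text, 'plain'))
alt.attach(MIMEText(html, 'html'))
msg.attach(alt)
part = MIMEBase('application', 'octet-stream')
part.set_payload(open('/path/to/attachment.pdf', 'rb').read())
Encoders.encode_base64(part)
part.add_header('Content-Disposition', 'attachment; filename="attachment.pdf"')
msg.attach(part)
server.sendmail(gmail_user, 'example@example.com', msg.as_string())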
Where is the phpMailer php class equivalent for Python?
I'm new to Python. Actually, I'm trying to send a featured email with Python: HTML body, text alternative body, and an attachment. So, I've found this tutorial and adapted it with the Gmail authentication (tutorial found here) The code I have at the moment is this: def createhtmlmail (html, text, subject): """Create a mime-message that will render HTML in popular MUAs, text in better ones""" import MimeWriter import mimetools import cStringIO from email.MIMEMultipart import MIMEMultipart from email.MIMEBase import MIMEBase from email.MIMEText import MIMEText from email.Utils import COMMASPACE, formatdate from email import Encoders import os out = cStringIO.StringIO() # output buffer for our message htmlin = cStringIO.StringIO(html) txtin = cStringIO.StringIO(text) writer = MimeWriter.MimeWriter(out) # # set up some basic headers... we put subject here # because smtplib.sendmail expects it to be in the # message body # writer.addheader("Subject", subject) writer.addheader("MIME-Version", "1.0") # # start the multipart section of the message # multipart/alternative seems to work better # on some MUAs than multipart/mixed # writer.startmultipartbody("alternative") writer.flushheaders() # # the plain text section # subpart = writer.nextpart() subpart.addheader("Content-Transfer-Encoding", "quoted-printable") pout = subpart.startbody("text/plain", [("charset", 'us-ascii')]) mimetools.encode(txtin, pout, 'quoted-printable') txtin.close() # # start the html subpart of the message # subpart = writer.nextpart() subpart.addheader("Content-Transfer-Encoding", "quoted-printable") # # returns us a file-ish object we can write to # pout = subpart.startbody("text/html", [("charset", 'us-ascii')]) mimetools.encode(htmlin, pout, 'quoted-printable') htmlin.close() # # Now that we're done, close our writer and # return the message body # writer.lastpart() msg = out.getvalue() out.close() return msg import smtplib f = open("/path/to/html/version.html", 'r') html = f.read() f.close() f = open("/path/to/txt/version.txt", 'r') text = f.read() subject = "Prova email html da python, con allegato!" message = createhtmlmail(html, text, subject) gmail_user = "thegmailaccount@gmail.com" gmail_pwd = "thegmailpassword" server = smtplib.SMTP("smtp.gmail.com", 587) server.ehlo() server.starttls() server.ehlo() server.login(gmail_user, gmail_pwd) server.sendmail(gmail_user, "example@example.com", message) server.close() and that works; now only the attachment is missing, and I am not able to add the attachment (from this post). So, why is there not a Python class like phpMailer for PHP? Is it because, for a moderately able Python programmer, sending an HTML email with an attachment and an alternative text body is so easy that a class is not needed? Or is it because I just didn't find it? If I manage to write a class like that, once I'm good enough with Python, would that be useful for someone?
[ "If you can excuse some blatant self promotion, I wrote a mailer module that makes sending email with Python fairly simple. No dependencies other than the Python smtplib and email libraries.\nHere's a simple example for sending an email with an attachment:\nfrom mailer import Mailer\nfrom mailer import Message\n\nmessage = Message(From=\"me@example.com\",\n To=[\"you@example.com\", \"him@example.com\"])\nmessage.Subject = \"Kitty with dynamite\"\nmessage.Body = \"\"\"Kitty go boom!\"\"\"\nmessage.attach(\"kitty.jpg\")\n\nsender = Mailer('smtp.example.com')\nsender.login(\"username\", \"password\")\nsender.send(message)\n\nEdit: Here's an example of sending an HTML email with alternate text. :)\nfrom mailer import Mailer\nfrom mailer import Message\n\nmessage = Message(From=\"me@example.com\",\n To=\"you@example.com\",\n charset=\"utf-8\")\nmessage.Subject = \"An HTML Email\"\nmessage.Html = \"\"\"This email uses <strong>HTML</strong>!\"\"\"\nmessage.Body = \"\"\"This is alternate text.\"\"\"\n\nsender = Mailer('smtp.example.com')\nsender.send(message)\n\nEdit 2: Thanks to one of the comments, I've added a new version of mailer to pypi that lets you specify the port in the Mailer class.\n", "Django includes the class you need in core, docs here\nfrom django.core.mail import EmailMultiAlternatives\n\nsubject, from_email, to = 'hello', 'from@example.com', 'to@example.com'\ntext_content = 'This is an important message.'\nhtml_content = '<p>This is an <strong>important</strong> message.</p>'\nmsg = EmailMultiAlternatives(subject, text_content, from_email, [to])\nmsg.attach_alternative(html_content, \"text/html\")\nmsg.attach_file('/path/to/file.jpg')\nmsg.send()\n\nIn my settings I have:\n#GMAIL STUFF\nEMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.gmail.com'\nEMAIL_HOST_USER = 'name@gmail.com'\nEMAIL_HOST_PASSWORD = 'password'\nEMAIL_PORT = 587\n\n", "Just want to point to Lamson Project which was what I was looking for when I found this thread. I did some more searching and found it. It's:\n\nLamson's goal is to put an end to the hell that is \"e-mail application development\". Rather than stay stuck in the 1970s, Lamson adopts modern web application framework design and uses a proven scripting language (Python).\n\nIt integrates nicely with Django. But it's more made for email based applications. It looks like pure love though.\n", "Maybe you can try with turbomail python-turbomail.org\nIt's more easy and useful :) \nimport turbomail\n\n# ...\n\nmessage = turbomail.Message(\"from@example.com\", \"to@example.com\", subject)\nmessage.plain = \"Hello world!\"\n\nturbomail.enqueue(message)\n\n", "I recommend reading the SMTP rfc. A google search shows that this can easily be done by using the MimeMultipart class which you are importing but never using. Here are some examples on Python's documentation site.\n" ]
[ 6, 3, 2, 1, 0 ]
[]
[]
[ "email", "phpmailer", "python" ]
stackoverflow_0000807302_email_phpmailer_python.txt
Q: Extending a list of lists in Python? I might be missing something about the intended behavior of list extend, but why does the following happen? x = [[],[]] y = [[]] * 2 print x # [[],[]] print y # [[],[]] print x == y # True x[0].extend([1]) y[0].extend([1]) print x # [[1],[]], which is what I'd expect print y # [[1],[1]], wtf? I would guess that the * operator is doing something unexpected here, though I'm not exactly sure what. It seems like something is going on under the hood that's making the original x and y (prior to calling extend) not actually be equal even though the == operator and repr both would make it seem as though they were identical. I only came across this because I wanted to pre-populate a list of empty lists of a size determined at runtime, and then realized that it wasn't working the way I imagined. I can find a better way to do the same thing, but now I'm curious as to why this didn't work. This is Python 2.5.2 BTW - I don't have a newer version installed so if this is a bug I'm not sure if it's already fixed. A: In the case of [something] * 2, python is simply making a reference-copy. Therefore, if the enclosed type(s) are mutable, changing them will be reflected anywhere the item is referenced. In your example, y[0] and y[1] point to the same enclosed list object. You can verify this by doing y[0] is y[1] or alternately id(y[0]) == id(y[1]). You can however re-assign list elements, so if you had done: y[0] = [1] You would've re-bound the first element to a new list containing the element "1", and you would've got your expected result. Containers in python store references, and it's possible in most sequence containers to reference the same item multiple times. A list can actually reference itself as an element, though the usefulness of this is limited. This issue wouldn't have come up if you had multiplied a list containing immutable types: a = [0, 1] * 2 The above would give you the list [0, 1, 0, 1] and indeed both instances of 1 point to the same object, but since they are immutable, you cannot change the value of the int object containing "1", only reassign elements. So doing: a[1] = 5 would result in a showing as [0, 5, 0, 1]. A: The statement y = [[]] * 2 binds y to a list containing 2 copies of the same list. Use: y = [[], []] or y = [[] for n in range(2)] A: y contains two references to a single, mutable, list.
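A quick way to see the aliasing described in the answers above is to compare identities; a per-iteration list comprehension avoids it:

>>> y = [[]] * 2
>>> y[0] is y[1]
True
>>> z = [[] for _ in range(2)]
>>> z[0] is z[1]
False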
Extending a list of lists in Python?
I might be missing something about the intended behavior of list extend, but why does the following happen? x = [[],[]] y = [[]] * 2 print x # [[],[]] print y # [[],[]] print x == y # True x[0].extend([1]) y[0].extend([1]) print x # [[1],[]], which is what I'd expect print y # [[1],[1]], wtf? I would guess that the * operator is doing something unexpected here, though I'm not exactly sure what. It seems like something is going on under the hood that's making the original x and y (prior to calling extend) not actually be equal even though the == operator and repr both would make it seem as though they were identical. I only came across this because I wanted to pre-populate a list of empty lists of a size determined at runtime, and then realized that it wasn't working the way I imagined. I can find a better way to do the same thing, but now I'm curious as to why this didn't work. This is Python 2.5.2 BTW - I don't have a newer version installed so if this is a bug I'm not sure if it's already fixed.
[ "In the case of [something] * 2, python is simply making a reference-copy. Therefore, if the enclosed type(s) are mutable, changing them will be reflected anywhere the item is referenced.\nIn your example, y[0] and y[1] point to the same enclosed list object. You can verify this by doing y[0] is y[1] or alternately id(y[0]) == id(y[1]).\nYou can however re-assign list elements, so if you had done:\ny[0] = [1]\n\nYou would've re-bound the first element to a new list containing the element \"1\", and you would've got your expected result.\nContainers in python store references, and it's possible in most sequence containers to reference the same item multiple times. A list can actually reference itself as an element, though the usefulness of this is limited.\nThis issue wouldn't have come up if you had multiplied a list containing immutable types:\na = [0, 1] * 2\n\nThe above would give you the list [0, 1, 0, 1] and indeed both instances of 1 point to the same object, but since they are immutable, you cannot change the value of the int object containing \"1\", only reassign elements. \nSo doing: a[1] = 5 would result in a showing as [0, 5, 0, 1].\n", "The statement y = [[]] * 2 binds y to a list containing 2 copies of the same list. Use:\ny = [[], []]\n\nor\ny = [[] for n in range(2)]\n\n", "y contains two references to a single, mutable, list.\n" ]
[ 17, 4, 1 ]
[]
[]
[ "extend", "list", "python" ]
stackoverflow_0002276416_extend_list_python.txt
Q: Path of current Python instance? I need to access the Scripts and tcl sub-directories of the currently executing Python instance's installation directory on Windows. What is the best way to locate these directories? A: Have a look at sys.prefix and sys.exec_prefix >>> import sys >>> sys.prefix '/System/Library/Frameworks/Python.framework/Versions/2.6' >>> sys.exec_prefix '/System/Library/Frameworks/Python.framework/Versions/2.6' A: Hmm, find the Lib dir from sys.path and extrapolate from there?
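Building on the sys.prefix answer above, the two directories from the question can be joined like this (the Scripts and tcl sub-directory names match a typical Windows CPython layout, but treat that as an assumption since layouts vary between distributions):

import sys, os.path

scripts_dir = os.path.join(sys.prefix, 'Scripts')
tcl_dir = os.path.join(sys.prefix, 'tcl')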
Path of current Python instance?
I need to access the Scripts and tcl sub-directories of the currently executing Python instance's installation directory on Windows. What is the best way to locate these directories?
[ "Have a look at sys.prefix and sys.exec_prefix\n>>> import sys\n>>> sys.prefix\n'/System/Library/Frameworks/Python.framework/Versions/2.6'\n>>> sys.exec_prefix\n'/System/Library/Frameworks/Python.framework/Versions/2.6'\n\n", "Hmm, find the Lib dir from sys.path and extrapolate from there?\n" ]
[ 3, 0 ]
[]
[]
[ "installation", "path", "python", "python_3.x", "windows" ]
stackoverflow_0002276512_installation_path_python_python_3.x_windows.txt
Q: Import OPML subscriptions (file) to Google Reader manually I have a huge (5,000+ feeds) OPML file which freezes and crashes my browser when I try uploading it to my Google Reader account using the following instructions: Login to Google Reader Click Your Subscription Click the More Actions dropdown Select Import Browse for your OPML file Click Open Click Upload You will see the following displayed until it is done: Your subscriptions are being imported... I've looked into using: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI But, it seems more like an ATOM/RSS reader and framework as it states, instead of a library to do what I need. I am open to suggestions and methods to accomplish this via HTTP. A: Found an answer on Superuser on How to import an OPML file with 1500 feeds into Google Reader
Import OPML subscriptions (file) to Google Reader manually
I have a huge (5,000+ feeds) OPML file which freezes and crashes my browser when I try uploading it to my Google Reader account using the following instructions: Login to Google Reader Click Your Subscription Click the More Actions dropdown Select Import Browse for your OPML file Click Open Click Upload You will see the following displayed until it is done: Your subscriptions are being imported... I've looked into using: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI But, it seems more like an ATOM/RSS reader and framework as it states, instead of a library to do what I need. I am open to suggestions and methods to accomplish this via HTTP.
[ "Found an answer on Superuser on How to import an OPML file with 1500 feeds into Google Reader\n" ]
[ 1 ]
[]
[]
[ "api", "file_upload", "google_reader", "opml", "python" ]
stackoverflow_0002076488_api_file_upload_google_reader_opml_python.txt
Q: Python: Separating an HTML snippet into paragraphs I have a snippet of HTML that contains paragraphs. (I mean p tags.) I want to split the string into the different paragraphs. For instance: ''' <p class="my_class">Hello!</p> <p>What's up?</p> <p style="whatever: whatever;">Goodbye!</p> ''' Should become: ['<p class="my_class">Hello!</p>', "<p>What's up?</p>", '<p style="whatever: whatever;">Goodbye!</p>'] What would be a good way to approach this? A: If your string only contains paragraphs, you may be able to get away with a nicely crafted regex and re.split(). However, if your string is more complex HTML, or not always valid HTML, you might want to look at the BeautifulSoup package. Usage goes like: from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(some_html) paragraphs = list(unicode(x) for x in soup.findAll('p')) A: Use lxml.html to parse the HTML into the form you want. This is essentially the same advice as the people who are recommending BeautifulSoup, except lxml is still being actively developed and BeautifulSoup development has slowed. A: Use BeautifulSoup to parse the HTML and iterate over the paragraphs. A: The xml.etree (std lib) or lxml.etree (enhanced) make this easy to do, but I'm not going to get the answer cred for this because I don't remember the exact syntax. I keep mixing it up with similar packages and have to look it up afresh every time.
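Since one answer above recommends lxml.html without showing code, here is a short, untested sketch of what that could look like, where snippet is the HTML string from the question:

import lxml.html

doc = lxml.html.document_fromstring(snippet)
paragraphs = [lxml.html.tostring(p) for p in doc.findall('.//p')]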
Python: Separating an HTML snippet into paragraphs
I have a snippet of HTML that contains paragraphs. (I mean p tags.) I want to split the string into the different paragraphs. For instance: ''' <p class="my_class">Hello!</p> <p>What's up?</p> <p style="whatever: whatever;">Goodbye!</p> ''' Should become: ['<p class="my_class">Hello!</p>', "<p>What's up?</p>", '<p style="whatever: whatever;">Goodbye!</p>'] What would be a good way to approach this?
[ "If your string only contains paragraphs, you may be able to get away with a nicely crafted regex and re.split(). However, if your string is more complex HTML, or not always valid HTML, you might want to look at the BeautifulSoup package.\nUsage goes like:\nfrom BeautifulSoup import BeautifulSoup \n\nsoup = BeautifulSoup(some_html)\n\nparagraphs = list(unicode(x) for x in soup.findAll('p'))\n\n", "Use lxml.html to parse the HTML into the form you want. This is essentially the same advice as the people who are recommending BeautifulSoup, except lxml is still being actively developed and BeatifulSoup development has slowed. \n", "Use BeautifulSoup to parse the HTML and iterate over the paragraphs.\n", "The xml.etree (std lib) or lxml.etree (enhanced) make this easy to do, but I'm not going to get the answer cred for this because I don't remember the exact syntax. I keep mixing it up with similar packages and have to look it up afresh every time.\n" ]
[ 5, 2, 0, 0 ]
[]
[]
[ "beautifulsoup", "html", "lxml", "python" ]
stackoverflow_0002276824_beautifulsoup_html_lxml_python.txt
Q: python: Regex matching file extension Hi, I am trying to get the extension of the file called in a URL (e.g. /wp-includes/js/jquery/jquery.js?ver=1.3.2 HTTP/1.1) and get the query parameters passed to the file too. What would be the best way to get the extension? A: urlparse.urlparse() and os.path.splitext().
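Putting the accepted suggestion together, a worked example (urlparse.parse_qs exists from Python 2.6; on older versions use cgi.parse_qs):

import urlparse, os.path

url = '/wp-includes/js/jquery/jquery.js?ver=1.3.2'
parts = urlparse.urlparse(url)
ext = os.path.splitext(parts.path)[1]   # '.js'
query = urlparse.parse_qs(parts.query)  # {'ver': ['1.3.2']}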
python: Regex matching file extension
Hi, I am trying to get the extension of the file called in a URL (e.g. /wp-includes/js/jquery/jquery.js?ver=1.3.2 HTTP/1.1) and get the query parameters passed to the file too. What would be the best way to get the extension?
[ "urlparse.urlparse() and os.path.splitext().\n" ]
[ 7 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002277030_python_regex.txt
Q: django-registration, fix for a glitch I am using django-registration version 0.8 with the default django-registration and Django auth system, without any tweaks. I did notice a small glitch: once I log in as a user, if I go to /accounts/login/, I still get the login entry form. How can I change that so it redirects a logged-in user to the main root URL / instead of bringing up this form once again? Thanks A: You can wrap Django's login view and do the check for already authenticated users there: from django.contrib.auth.views import login from django.http import HttpResponseRedirect def mylogin(request, **kwargs): if request.user.is_authenticated(): return HttpResponseRedirect('/') else: return login(request, **kwargs) Then simply use this view instead of django.contrib.auth.views.login in your urls.py
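Hooking the wrapped view up would look roughly like this in a Django 1.x urls.py (the myproject.views module path is a placeholder for wherever mylogin lives):

from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^accounts/login/$', 'myproject.views.mylogin'),
)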
django-registration, fix for a glitch
I am using django-registration version 0.8 with the default django-registration and Django auth system, without any tweaks. I did notice a small glitch: once I log in as a user, if I go to /accounts/login/, I still get the login entry form. How can I change that so it redirects a logged-in user to the main root URL / instead of bringing up this form once again? Thanks
[ "You can wrap Django's login view and do the check for already authenticated users there:\nfrom django.contrib.auth.views import login\nfrom django.http import HttpResponseRedirect\n\ndef mylogin(request, **kwargs):\n if request.user.is_authenticated():\n return HttpResponseRedirect('/')\n else:\n return login(request, **kwargs)\n\nThen simply use this view instead of django.contrib.auth.views.login in your urls.py\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002275155_django_python.txt
Q: How do I url unencode in Python? Given this: It%27s%20me%21 Unencode it and turn it into regular text? A: in python2 >>> import urlparse >>> urlparse.unquote('It%27s%20me%21') "It's me!" In python3 >>> import urllib.parse >>> urllib.parse.unquote('It%27s%20me%21') "It's me!" A: Take a look at urllib.unquote and urllib.unquote_plus. That will address your problem. Technically though url "encoding" is the process of passing arguments into a url with the & and ? characters (e.g. www.foo.com?x=11&y=12). A: Use the unquote method from urllib. >>> from urllib import unquote >>> unquote('It%27s%20me%21') "It's me!"
How do I url unencode in Python?
Given this: It%27s%20me%21 Unencode it and turn it into regular text?
[ "in python2\n>>> import urlparse\n>>> urlparse.unquote('It%27s%20me%21')\n\"It's me!\"\n\nIn python3\n>>> import urllib.parse\n>>> urllib.parse.unquote('It%27s%20me%21')\n\"It's me!\"\n\n", "Take a look at urllib.unquote and urllib.unquote_plus. That will address your problem. Technically though url \"encoding\" is the process of passing arguments into a url with the & and ? characters (e.g. www.foo.com?x=11&y=12).\n", "Use the unquote method from urllib.\n>>> from urllib import unquote\n>>> unquote('It%27s%20me%21')\n\"It's me!\"\n\n" ]
[ 21, 11, 4 ]
[]
[]
[ "encoding", "python", "url" ]
stackoverflow_0002277302_encoding_python_url.txt
Q: Deleting a file from Tkinter import * import socket, sys, os import tkMessageBox root = Tk() root.title("File Deleter v1.0") root.config(bg='black') root.resizable(0, 0) text = Text() text3 = Text() frame = Frame(root) frame.config(bg="black") frame.pack(pady=10, padx=5) frame1 = Frame(root) frame1.config(bg="black") frame1.pack(pady=10, padx=5) text.config(width=35, height=1, bg="black", fg="white") text.pack(padx=5) def button1(): try: x = text.get("1.0", END) os.remove(x) except WindowsError: text3.insert(END, "File Not Found... Try Again\n") def clear(): text.delete("1.0", END) c = Button(frame1, text="Clear", width=10, height=2, command=clear) c.config(fg="white", bg="black") c.pack(side=LEFT, padx=5) scrollbar = Scrollbar(root) scrollbar.pack(side=RIGHT, fill=Y) text3.config(width=35, height=15, bg="black", fg="white") text3.pack(side=LEFT, fill=Y) scrollbar.config(command=text3.yview) text3.config(yscrollcommand=scrollbar.set) w = Label(frame, text="Delete A File") w.config(bg='black', fg='white') w.pack(side=TOP, padx=5) b = Button(frame1, text="Enter", width=10, height=2, command=button1) b.config(fg="white", bg="black") b.pack(side=LEFT, padx=5) root.mainloop() I don't get why the delete code is not working; I get a "File not Found" even if the file exists. A: When I run this code on Linux and place a breakpoint in button1(), I see that the value of x includes a trailing newline character. That means the os.remove() call won't work, because the filename I typed in didn't actually contain a newline. If I remove the trailing newline, the code works. A: Perhaps x is not what you think it is; just a guess, but maybe there is some whitespace there or something. Try this to check: def button1(): try: x = text.get("1.0", END) print repr(x) os.remove(x) except WindowsError, e: print e text3.insert(END, "File Not Found... Try Again\n") A: I believe gnibbler is on track with whitespace being the problem. The Text Widget gives you endline characters \n. Try adding a .strip() to the end of your text.get or you can use an Entry widget as opposed to a Text Widget since your Text widget is only one line anyway. x = text.get('1.0', END).strip()
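Combining the answers above, a fixed button1 might look like this; catching OSError (the base class of WindowsError) keeps the handler portable:

def button1():
    filename = text.get('1.0', END).strip()  # drop the trailing newline the Text widget appends
    try:
        os.remove(filename)
    except OSError:
        text3.insert(END, 'File Not Found... Try Again\n')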
Deleting a file
from Tkinter import * import socket, sys, os import tkMessageBox root = Tk() root.title("File Deleter v1.0") root.config(bg='black') root.resizable(0, 0) text = Text() text3 = Text() frame = Frame(root) frame.config(bg="black") frame.pack(pady=10, padx=5) frame1 = Frame(root) frame1.config(bg="black") frame1.pack(pady=10, padx=5) text.config(width=35, height=1, bg="black", fg="white") text.pack(padx=5) def button1(): try: x = text.get("1.0", END) os.remove(x) except WindowsError: text3.insert(END, "File Not Found... Try Again\n") def clear(): text.delete("1.0", END) c = Button(frame1, text="Clear", width=10, height=2, command=clear) c.config(fg="white", bg="black") c.pack(side=LEFT, padx=5) scrollbar = Scrollbar(root) scrollbar.pack(side=RIGHT, fill=Y) text3.config(width=35, height=15, bg="black", fg="white") text3.pack(side=LEFT, fill=Y) scrollbar.config(command=text3.yview) text3.config(yscrollcommand=scrollbar.set) w = Label(frame, text="Delete A File") w.config(bg='black', fg='white') w.pack(side=TOP, padx=5) b = Button(frame1, text="Enter", width=10, height=2, command=button1) b.config(fg="white", bg="black") b.pack(side=LEFT, padx=5) root.mainloop() I don't get why the delete code is not working; I get a "File not Found" even if the file exists.
[ "When I run this code on Linux and place a breakpoint in button1(), I see that the value of x includes a trailing newline character. That means the os.remove() call won't work, because the filename I typed in didn't actually contain a newline. If I remove the trailing newline, the code works.\n", "Perhaps x is not what you think it is, just a guess but maybe there a some whitespace there or somthing, try this to check\ndef button1():\n try:\n x = text.get(\"1.0\", END)\n print repr(x)\n os.remove(x)\n except WindowsError, e:\n print e\n text3.insert(END, \"File Not Found... Try Again\\n\")\n\n", "I believe gnibbler is on track with whitespace being the problem. The Text Widget gives you endline characters \\n. Try adding a .strip() to the end of your text.get or you can use an Entry widget as opposed to a Text Widget since your Text widget is only one line anways.\nx = text.get('1.0', END).strip()\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0002277236_python_tkinter.txt
Q: how to login to multiple website accounts concurrently with Python I am using urllib2 and HTTPCookieProcessor to login to a website. I want to login to multiple accounts concurrently and store the cookies to be reused later. Can you recommend an approach or library to achieve this? A: How to achieve this really depends on your needs: what kind of login is it? Digest authentication? Is it a web form? Is JavaScript involved (you're pretty much screwed if this is the case)? A library like mechanize can help you a lot with such stuff: handling of forms, redirection, authentication, cookies... However, you'd have to take care of concurrency yourself by spawning threads/processes. Another approach that works beautifully for concurrency is using Twisted. With that solution however you'd have to handle redirection and cookies etc. yourself -- although you might be able to reuse parts of e.g. mechanize. A: The OP clarified that this is not a concurrency issue. With sequential processing in mind, this is much simpler. I once used something like the following to update a bunch of SIP phone base stations (they had a web front-end which you could use to upload VCard files for the phone book). Note that I just cut away some crap and renamed this and that in this hacky script, I did not test it at all. Its sole purpose is to give the OP an idea on how he could deal with this. #!/usr/bin/python # -*- coding:utf-8 -*- from optparse import OptionParser import sys from mechanize import Browser, CookieJar, Request, urlopen accounts = [ {'ipaddr': '127.0.0.1', 'user': 'joe', 'pass': 'foobar'}, ] class WebsiteAccount(object): def __init__(self, ipaddr, username, password, browser): self.ipaddr = ipaddr self.username = username self.password = password self.browser = browser self.cookiejar = CookieJar() self.browser.set_cookiejar(self.cookiejar) def login(self): self.browser.open('http://'+self.ipaddr+'/login.html') self.browser.select_form(name='loginform') self.browser.form.set_value(self.username, name='username') self.browser.form.set_value(self.password, name='password') resp = self.browser.submit() print 'Logging into account %s@%s ...' % (self.username, self.ipaddr), if resp.geturl().endswith('/login.html'): print 'FAILED!' sys.exit(1) print ' OK' def logout(self): print 'Logging out from account %s@%s ...' % (self.username, self.ipaddr), self.browser.open('http://'+self.ipaddr+'/logout.html') self.browser.close() print 'OK' def main(): parser = OptionParser() parser.add_option('-d', '--debug', action='store_true', dest='debug', default=False) parser.add_option('-v', '--verbose', action='store_true', dest='verbose', default=False) (opts, args) = parser.parse_args() for account in accounts: browser = Browser() browser.set_handle_referer(True) browser.set_handle_redirect(True) browser.set_handle_robots(False) bs = WebsiteAccount(account['ipaddr'], account['user'], account['pass'], browser) # DEBUG if opts.debug == True: browser.set_debug_redirects(True) browser.set_debug_responses(True) browser.set_debug_http(True) bs.login() try: # ... do some stuff # save cookies here? pass finally: # you shouldn't use this if you are interested in the login cookies bs.logout() if __name__=='__main__': main()
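For the store-the-cookies-for-later part of the question, the standard library alone can persist one jar per account; an untested sketch (the URL and form field names are placeholders):

import cookielib, urllib, urllib2

jar = cookielib.LWPCookieJar('joe.cookies')
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
data = urllib.urlencode({'username': 'joe', 'password': 'foobar'})
opener.open('http://example.com/login.html', data)
jar.save(ignore_discard=True)  # session cookies are skipped without this flag
# later: jar.load(ignore_discard=True) before building a new opener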
how to login to multiple website accounts concurrently with Python
I am using urllib2 and HTTPCookieProcessor to login to a website. I want to login to multiple accounts concurrently and store the cookies to be reused later. Can you recommend an approach or library to achieve this?
[ "How to achieve this really depends on you needs: what kind of login is it? Digest authentication? Is it a web form? Is JavaScript involved (you're pretty much screwed if this is the case)? A library like mechanize can help you a lot with such stuff: handling of forms, redirection, authentication, cookies... However, you'd have to take care of concurrency yourself by spawning threads/processes.\nAnother approach that works beautifully for concurrency is using Twisted. With that solution however you'd have to handle redirection and cookies etc. yourself -- although you might be able to reuse parts of e.g. mechanize.\n", "The OP clarified that this is not a concurrency issue. With sequential processing in mind, this is much simpler. I once used something like the following to update a bunch of SIP phone base stations (they had a web front-end which you could use to upload VCard files for the phone book). Note that I just cut away some crap and renamed this and that in this hacky script, I did not test it at all. Its sole purpose is to give the OP an idea on how he could deal with this.\n#!/usr/bin/python\n# -*- coding:utf-8 -*-\n\nfrom optparse import OptionParser\nimport sys\nfrom mechanize import Browser, CookieJar, Request, urlopen\n\n\naccounts = [\n {'ipaddr': '127.0.0.1', 'user': 'joe', 'pass': 'foobar'},\n ]\n\n\nclass WebsiteAccount(object):\n\n def __init__(self, ipaddr, username, password, browser):\n self.ipaddr = ipaddr\n self.username = username\n self.password = password\n self.browser = browser\n self.cookiejar = CookieJar()\n self.browser.set_cookiejar(self.cookiejar)\n\n def login(self):\n self.browser.open('http://'+self.ipaddr+'/login.html')\n self.browser.select_form(name='loginform')\n self.browser.form.set_value(self.username, name='username')\n self.browser.form.set_value(self.password, name='password')\n resp = self.browser.submit()\n print 'Logging into account %s@%s ...' % (self.username, self.ipaddr),\n if resp.geturl().endswith('/login.html'):\n print 'FAILED!'\n sys.exit(1)\n print ' OK'\n\n def logout(self):\n print ('Logging out from account %s@%s...' % (self.username, self.ipaddr),\n self.browser.open('http://'+self.ipaddr+'/logout.html')\n self.browser.close()\n print 'OK'\n\n\ndef main():\n parser = OptionParser()\n parser.add_option('-d', '--debug', action='store_true', dest='debug', default=False)\n parser.add_option('-v', '--verbose', action='store_true', dest='verbose', default=False)\n (opts, args) = parser.parse_args()\n for account in accounts:\n browser = Browser()\n browser.set_handle_referer(True)\n browser.set_handle_redirect(True)\n browser.set_handle_robots(False)\n bs = WebsiteAccount(account['ipaddr'],\n account['user'],\n account['pass'],\n browser)\n # DEBUG\n if opts.debug == True:\n browser.set_debug_redirects(True)\n browser.set_debug_responses(True)\n browser.set_debug_http(True)\n bs.login()\n try:\n # ... do some stuff\n # save cookies here? \n pass\n finally:\n # you shouldn't use this if you are interested in the login cookies\n bs.logout()\n\n\nif __name__=='__main__':\n main()\n\n" ]
[ 1, 1 ]
[]
[]
[ "authentication", "concurrency", "cookies", "python", "urllib2" ]
stackoverflow_0002270881_authentication_concurrency_cookies_python_urllib2.txt
Q: Inter-database communications in PostgreSQL I am using PostgreSQL 8.4. I really like the new unnest() and array_agg() features; it is about time they realize the dynamic processing potential of their Arrays! Anyway, I am working on web server back ends that use long Arrays a lot. There will be two successive processes which will each occur on a different physical machine. Each such process is a light Python application which ''manages'' SQL queries to the database on each of their machines as well as requests from the front ends. The first process will generate an Array which will be buffered into an SQL Table. Each such generated Array is accessible via a Primary Key. When it's done, the first Python app sends the key to the second Python app. Then the second Python app, which is running on a different machine, uses it to go get the referenced Array found in the first machine. It then sends it to its own db for generating a final result. The reason why I send a key is because I am hoping that this will make the two processes go faster. But really what I would like is for a way to have the second database send a query to the first database in the hope of minimizing serialization delay and such. Any help/advice would be appreciated. Thanks A: Not sure I totally understand, but have you looked at notify/listen? http://www.postgresql.org/docs/8.1/static/sql-listen.html A: Sounds like you want dblink from contrib. This allows some inter-db postgres communication. The pg docs are great and should provide the needed examples. A: I am thinking either listen/notify or something with a cache such as memcache. You would send the key to memcache and have the second Python app retrieve it from there. You could even do it with listen/notify... e.g. send the key and notify your second app that the key is in memcache waiting to be retrieved.
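A sketch of the dblink suggestion driven from Python, assuming contrib/dblink is installed on the second machine's database and with made-up table and column names:

import psycopg2

conn = psycopg2.connect('dbname=second_db')
cur = conn.cursor()
cur.execute("""
    SELECT payload
    FROM dblink('host=first-machine dbname=first_db',
                'SELECT payload FROM buffered_arrays WHERE id = 42')
         AS t(payload int[])
""")
array_value = cur.fetchone()[0]  # psycopg2 maps int[] to a Python list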
Inter-database communications in PostgreSQL
I am using PostgreSQL 8.4. I really like the new unnest() and array_agg() features; it is about time they realize the dynamic processing potential of their Arrays! Anyway, I am working on web server back ends that use long Arrays a lot. There will be two successive processes which will each occur on a different physical machine. Each such process is a light Python application which ''manages'' SQL queries to the database on each of their machines as well as requests from the front ends. The first process will generate an Array which will be buffered into an SQL Table. Each such generated Array is accessible via a Primary Key. When it's done, the first Python app sends the key to the second Python app. Then the second Python app, which is running on a different machine, uses it to go get the referenced Array found in the first machine. It then sends it to its own db for generating a final result. The reason why I send a key is because I am hoping that this will make the two processes go faster. But really what I would like is for a way to have the second database send a query to the first database in the hope of minimizing serialization delay and such. Any help/advice would be appreciated. Thanks
[ "not sure I totally understand, but you've looked at notify/listen? http://www.postgresql.org/docs/8.1/static/sql-listen.html\n", "Sounds like you want dblink from contrib. This allows some inter-db postgres communication. The pg docs are great and should provide the needed examples.\n", "I am thinking either listen/notify or something with a cache such as memcache. You would send the key to memcache and have the second python app retrieve it from there. You could even do it with listen/notify... e.g; send the key and notify your second app that the key is in memcache waiting to be retrieved.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "arrays", "database_connection", "postgresql", "python" ]
stackoverflow_0002263132_arrays_database_connection_postgresql_python.txt
Q: Subscription web/desktop app [PYTHON] Firstly pardon me if I've yet again failed to title my question correctly. I am required to build an app to manage magazine subscriptions. The client wants to enter subscriber data and then receive alerts at pre-set intervals such as when the subscription of a subscriber is about to expire and also the option to view all subscriber records at any time. Also needed is the facility to send an SMS/e-mail to particular subscribers reminding them about subscription renewal. I am very familiar with Python but this will be my first real project. I have decided to build it as a web app using Django, allowing the admin user the ability to view/add/modify all records and others to subscribe. What options do I have for integrating an online payment service? Also how do I manage the SMS alert functionality? Any other pointers/suggestions would be welcome. Thank You A: Payment gateway integration: Here is a detailed article about how to integrate the Authorize.net payment system into a Django project. Authorize.net is used by a few popular Django projects, including the Satchmo e-commerce store project. django-paypal is a pluggable Django app which lets you connect to PayPal merchant services. SMS alerts: django-sms is a Django app which is "...designed to make sending SMS text messages as simple as sending an email." so might be a good start. General Django You didn't mention your knowledge level of Django itself; if you need to brush up on your Django skills I would highly recommend the book Django 1.0 Website Development. I think it's also worth pointing out that the resources I've mentioned here were all found in the first few results of a Google search for each topic. These are the search terms I used: django payment gateway integration django paypal integration (because I knew of PayPal beforehand) django sms alerts A: I'd like to comment on the SMS alert part. First, I have to admit that I'm not familiar with Django, but I assume it to be just like most other web frameworks: request based. This might be your first problem, as the alert service needs to run independently of requests. You could of course hack together something to externally trigger a request once a day... :-) Now for the SMS part: much depends on how you plan to implement this. If you are going with an SMS provider, there are many to choose from that let you send SMS with a simple HTTP request. I wouldn't recommend the other approach, namely using a real cellphone or SMS modem and take care of the delivery yourself: it is way too cumbersome and you have to take into account a lot more issues: e.g. retry message transmission for handsets that are turned off or aren't able to receive SMS because their memory is full. Your friendly SMS provider will probably take care of this.
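To illustrate the simple-HTTP-request route for SMS mentioned in the second answer, a sketch against a purely hypothetical gateway (the URL and every parameter name are invented; check your provider's API):

import urllib, urllib2

params = urllib.urlencode({'user': 'account', 'password': 'secret',
                           'to': '+15551234567',
                           'text': 'Your subscription expires soon'})
urllib2.urlopen('https://sms-gateway.example.com/send', params)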
Subscription web/desktop app [PYTHON]
Firstly pardon me if I've yet again failed to title my question correctly. I am required to build an app to manage magazine subscriptions. The client wants to enter subscriber data and then receive alerts at pre-set intervals such as when the subscription of a subscriber is about to expire and also the option to view all subscriber records at any time. Also needed is the facility to send an SMS/e-mail to particular subscribers reminding them about subscription renewal. I am very familiar with Python but this will be my first real project. I have decided to build it as a web app using Django, allowing the admin user the ability to view/add/modify all records and others to subscribe. What options do I have for integrating an online payment service? Also how do I manage the SMS alert functionality? Any other pointers/suggestions would be welcome. Thank You
[ "Payment gateway integration:\n\nHere is a detailed article about how to integrate the Authorize.net payment system into a Django project. Authorize.net is used by a few popular Django projects, including the Satchmo e-commerce store project.\ndjango-paypal is a pluggable Django app which lets you connect to PayPal merchant services.\n\nSMS alerts:\n\ndjango-sms is a Django app which is \"...designed to make sending SMS text messages as simple as sending an email.\" so might be a good start.\n\nGeneral Django\n\nYou didn't mention your knowledge level of Django itself; if you need to brush up on your Django skills I would highly recommend the book Django 1.0 Website Development.\n\nI think it's also worth pointing out that the resources I've mentioned here were all found in the first few results of a Google search for each topic. These are the search terms I used:\n\ndjango payment gateway integration\ndjango paypal integration (because I knew of PayPal beforehand)\ndjango sms alerts\n\n", "I'd like to comment on the SMS alert part.\nFirst, I have to admit that I'm not familiar with Django, but I assume it to be just like most other web frameworks: request based. This might be your first problem, as the alert service needs to run independently of requests. You could of course hack together something to externally trigger a request once a day... :-)\nNow for the SMS part: much depends on how you plan to implement this. If you are going with an SMS provider, there are many to choose from that let you send SMS with a simple HTTP request. I wouldn't recommend the other approach, namely using a real cellphone or SMS modem and take care of the delivery yourself: it is way too cumbersome and you have to take into account a lot more issues: e.g. retry message transmission for handsets that are turned off or aren't able to receive SMS because their memory is full. Your friendly SMS provider will probably take care of this.\n" ]
[ 2, 0 ]
[]
[]
[ "django", "payment_gateway", "python", "sms" ]
stackoverflow_0002270556_django_payment_gateway_python_sms.txt
Q: How to get data in a histogram bin I want to get a list of the data contained in a histogram bin. I am using numpy, and Matplotlib. I know how to traverse the data and check the bin edges. However, I want to do this for a 2D histogram and the code to do this is rather ugly. Does numpy have any constructs to make this easier? For the 1D case, I can use searchsorted(). But the logic is not that much better, and I don’t really want to do a binary search on each data point when I don’t have to. Most of the nasty logic is due to the bin boundary regions. All regions have boundaries like this: [left edge, right edge). Except the last bin, which has a region like this: [left edge, right edge]. Here is some sample code for the 1D case: import numpy as np data = [0, 0.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 3] hist, edges = np.histogram(data, bins=3) print 'data =', data print 'histogram =', hist print 'edges =', edges getbin = 2 #0, 1, or 2 print '---' print 'alg 1:' #for i in range(len(data)): for d in data: if d >= edges[getbin]: if (getbin == len(edges)-2) or d < edges[getbin+1]: print 'found:', d #end if #end if #end for print '---' print 'alg 2:' for d in data: val = np.searchsorted(edges, d, side='right')-1 if val == getbin or val == len(edges)-1: print 'found:', d #end if #end for Here is some sample code for the 2D case: import numpy as np xdata = [0, 1.5, 1.5, 2.5, 2.5, 2.5, \ 0.5, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, \ 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 3] ydata = [0, 5,5, 5, 5, 5, \ 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, \ 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 30] xbins = 3 ybins = 3 hist2d, xedges, yedges = np.histogram2d(xdata, ydata, bins=(xbins, ybins)) print 'data2d =', zip(xdata, ydata) print 'hist2d =' print hist2d print 'xedges =', xedges print 'yedges =', yedges getbin2d = 5 #0 through 8 print 'find data in bin #', getbin2d xedge_i = getbin2d % xbins yedge_i = int(getbin2d / xbins) #IMPORTANT: this is xbins for x, y in zip(xdata, ydata): # x and y left edges if x >= xedges[xedge_i] and y >= yedges[yedge_i]: #x right edge if xedge_i == xbins-1 or x < xedges[xedge_i + 1]: #y right edge if yedge_i == ybins-1 or y < yedges[yedge_i + 1]: print 'found:', x, y #end if #end if #end if #end for Is there a cleaner / more efficient way to do this? It seems like numpy would have something for this. A: digitize, from core NumPy, will give you the index of the bin to which each value in your histogram belongs: import numpy as NP A = NP.random.randint(0, 10, 100) bins = NP.array([0., 20., 40., 60., 80., 100.]) # d is an index array holding the bin id for each point in A d = NP.digitize(A, bins) A: how about something like: data = numpy.array([0, 0.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 3]) hist, edges = numpy.histogram(data, bins=3) for l, r in zip(edges[:-1], edges[1:]): print(data[(data > l) & (data < r)]) Out: [ 0.5] [ 1.5 1.5 1.5] [ 2.5 2.5 2.5] with a bit of code to handle the edge cases.
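np.digitize from the first answer extends naturally to the 2D case by binning each axis separately; a sketch reusing the edges and variables from the question (the clip mirrors histogram2d's inclusive right-most edge):

import numpy as np

xi = np.clip(np.digitize(xdata, xedges) - 1, 0, xbins - 1)
yi = np.clip(np.digitize(ydata, yedges) - 1, 0, ybins - 1)
in_bin = [(x, y) for x, y, a, b in zip(xdata, ydata, xi, yi)
          if a == xedge_i and b == yedge_i]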
How to get data in a histogram bin
I want to get a list of the data contained in a histogram bin. I am using numpy, and Matplotlib. I know how to traverse the data and check the bin edges. However, I want to do this for a 2D histogram and the code to do this is rather ugly. Does numpy have any constructs to make this easier? For the 1D case, I can use searchsorted(). But the logic is not that much better, and I don’t really want to do a binary search on each data point when I don’t have to. Most of the nasty logic is due to the bin boundary regions. All regions have boundaries like this: [left edge, right edge). Except the last bin, which has a region like this: [left edge, right edge]. Here is some sample code for the 1D case: import numpy as np data = [0, 0.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 3] hist, edges = np.histogram(data, bins=3) print 'data =', data print 'histogram =', hist print 'edges =', edges getbin = 2 #0, 1, or 2 print '---' print 'alg 1:' #for i in range(len(data)): for d in data: if d >= edges[getbin]: if (getbin == len(edges)-2) or d < edges[getbin+1]: print 'found:', d #end if #end if #end for print '---' print 'alg 2:' for d in data: val = np.searchsorted(edges, d, side='right')-1 if val == getbin or val == len(edges)-1: print 'found:', d #end if #end for Here is some sample code for the 2D case: import numpy as np xdata = [0, 1.5, 1.5, 2.5, 2.5, 2.5, \ 0.5, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, \ 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 3] ydata = [0, 5,5, 5, 5, 5, \ 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, \ 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 30] xbins = 3 ybins = 3 hist2d, xedges, yedges = np.histogram2d(xdata, ydata, bins=(xbins, ybins)) print 'data2d =', zip(xdata, ydata) print 'hist2d =' print hist2d print 'xedges =', xedges print 'yedges =', yedges getbin2d = 5 #0 through 8 print 'find data in bin #', getbin2d xedge_i = getbin2d % xbins yedge_i = int(getbin2d / xbins) #IMPORTANT: this is xbins for x, y in zip(xdata, ydata): # x and y left edges if x >= xedges[xedge_i] and y >= yedges[yedge_i]: #x right edge if xedge_i == xbins-1 or x < xedges[xedge_i + 1]: #y right edge if yedge_i == ybins-1 or y < yedges[yedge_i + 1]: print 'found:', x, y #end if #end if #end if #end for Is there a cleaner / more efficient way to do this? It seems like numpy would have something for this.
[ "digitize, from core NumPy, will give you the index of the bin to which each value in your histogram belongs:\nimport numpy as NP\nA = NP.random.randint(0, 10, 100)\n\nbins = NP.array([0., 20., 40., 60., 80., 100.])\n\n# d is an index array holding the bin id for each point in A\nd = NP.digitize(A, bins) \n\n", "how about something like:\ndata = numpy.array([0, 0.5, 1.5, 1.5, 1.5, 2.5, 2.5, 2.5, 3])\n\nhist, edges = numpy.histogram(data, bins=3)\n\nfor l, r in zip(edges[:-1], edges[1:]):\n print(data[(data > l) & (data < r)]) \n\nOut:\n[ 0.5]\n[ 1.5 1.5 1.5]\n[ 2.5 2.5 2.5]\n\nwith a bit of code to handle the edge cases.\n" ]
[ 27, 6 ]
[]
[]
[ "histogram", "matplotlib", "numpy", "python" ]
stackoverflow_0002275924_histogram_matplotlib_numpy_python.txt
Q: Grouping related search keywords I have a log file containing search queries entered into my site's search engine. I'd like to "group" related search queries together for a report. I'm using Python for most of my webapp - so the solution can either be Python based or I can load the strings into Postgres if it is easier to do this with SQL. Example data: dog food good dog trainer cat food veterinarian Groups should include: cat: cat food dog: dog food good dog trainer food: dog food cat food etc... Ideas? Some sort of "indexing algorithm" perhaps? A: f = open('data.txt', 'r') raw = f.readlines() #generate set of all possible groupings groups = set() for lines in raw: data = lines.strip().split() for items in data: groups.add(items) #parse input into groups for group in groups: print "Group \'%s\':" % group for line in raw: if line.find(group) != -1: print line.strip() print #consider storing into a dictionary instead of just printing This could be heavily optimized, but this will print the following result, assuming you place the raw data in an external text file: Group 'trainer': good dog trainer Group 'good': good dog trainer Group 'food': dog food cat food Group 'dog': dog food good dog trainer Group 'cat': cat food Group 'veterinarian': veterinarian A: Well it seems that you just want to report every query that contains a given word. You can do this easily in plain SQL by using the wildcard matching feature, i.e. SELECT * FROM QUERIES WHERE `querystring` LIKE '%dog%'. The only problem with the query above is that it also finds queries with query strings like "dogbah", you need to write a couple of alternatives using OR to cater for the different cases assuming your words are separated by whitespace. A: Not a concrete algorithm, but what you're looking for is basically an index created from words found in your text lines. So you'll need some sort of parser to recognize words, then you put them in an index structure and link each index entry to the line(s) where it is found. Then, by going over the index entries, you have your "groups". A: Your algorithm needs the following parts (if done by yourself) a parser for the data, breaking up in lines, breaking up the lines in words. A datastructure to hold key value pairs (like a hashtable). The key is a word, the value is a dynamic array of lines (if you keep the lines you parsed in memory pointers or line numbers suffice) in pseudocode (generation): create empty set S for name value pairs. for each line L parsed for each word W in line L seek W in set S -> Item if not found -> add word W -> (empty array) to set S add line L reference to array in Item endfor endfor (lookup (word: W)) seek W in set S into Item if found return array from Item else return empty array. A: Modified version of @swanson's answer (not tested): from collections import defaultdict from itertools import chain # generate set of all possible words lines = open('data.txt').readlines() words = set(chain.from_iterable(line.split() for line in lines)) # parse input into groups groups = defaultdict(list) for line in lines: for word in words: if word in line: groups[word].append(line)
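The substring pitfall noted in the second answer ("dogbah" matching "dog") goes away if the grouping is done on tokenized words; a variant of the dictionary approach:

import re
from collections import defaultdict

groups = defaultdict(list)
for line in open('data.txt'):
    for word in set(re.findall(r'\w+', line.lower())):
        groups[word].append(line.strip())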
Grouping related search keywords
I have a log file containing search queries entered into my site's search engine. I'd like to "group" related search queries together for a report. I'm using Python for most of my webapp - so the solution can either be Python based or I can load the strings into Postgres if it is easier to do this with SQL. Example data: dog food good dog trainer cat food veterinarian Groups should include: cat: cat food dog: dog food good dog trainer food: dog food cat food etc... Ideas? Some sort of "indexing algorithm" perhaps?
[ "f = open('data.txt', 'r')\nraw = f.readlines()\n\n#generate set of all possible groupings\ngroups = set()\nfor lines in raw:\n data = lines.strip().split()\n for items in data:\n groups.add(items)\n\n#parse input into groups\nfor group in groups:\n print \"Group \\'%s\\':\" % group\n for line in raw:\n if line.find(group) is not -1:\n print line.strip()\n print\n\n#consider storing into a dictionary instead of just printing\n\nThis could be heavily optimized, but this will print the following result, assuming you place the raw data in an external text file:\nGroup 'trainer':\ngood dog trainer\n\nGroup 'good':\ngood dog trainer\n\nGroup 'food':\ndog food\ncat food\n\nGroup 'dog':\ndog food\ngood dog trainer\n\nGroup 'cat':\ncat food\n\nGroup 'veterinarian':\nveterinarian\n\n", "Well it seems that you just want to report every query that contains a given word. You can do this easily in plain SQL by using the wildcard matching feature, i.e.\nSELECT * FROM QUERIES WHERE `querystring` LIKE '%dog%'.\n\nThe only problem with the query above is that it also finds queries with query strings like \"dogbah\", you need to write a couple of alternatives using OR to cater for the different cases assuming your words are separated by whitespace.\n", "Not a concrete algorithm, but what you're looking for is basically an index created from words found in your text lines.\nSo you'll need some sort of parser to recognize words, then you put them in an index structure and link each index entry to the line(s) where it is found. Then, by going over the index entries, you have your \"groups\".\n", "Your algorithm needs the following parts (if done by yourself)\n\na parser for the data, breaking up in lines, breaking up the lines in words.\nA datastructure to hold key value pairs (like a hashtable). The key is a word, the value is a dynamic array of lines (if you keep the lines you parsed in memory pointers or line numbers suffice)\n\nin pseudocode (generation):\ncreate empty set S for name value pairs.\nfor each line L parsed\n for each word W in line L\n seek W in set S -> Item\n if not found -> add word W -> (empty array) to set S\n add line L reference to array in Ietm\n endfor\nendfor\n\n(lookup (word: W))\nseek W in set S into Item\nif found return array from Item\nelse return empty array.\n\n", "Modified version of @swanson's answer (not tested):\nfrom collections import defaultdict\nfrom itertools import chain\n\n# generate set of all possible words\nlines = open('data.txt').readlines()\nwords = set(chain.from_iterable(line.split() for line in lines))\n\n# parse input into groups\ngroups = defaultdict(list)\nfor line in lines: \n for word in words:\n if word in line:\n groups[word].append(line)\n\n" ]
[ 4, 1, 0, 0, 0 ]
[]
[]
[ "algorithm", "data_structures", "postgresql", "python" ]
stackoverflow_0002275901_algorithm_data_structures_postgresql_python.txt
Q: Wrapping a pure virtual method with arguments using Boost::Python I'm currently trying to expose a c++ Interface (pure virtual class) to Python using Boost::Python. The c++ interface is: Agent.hpp #include "Tab.hpp" class Agent { virtual void start(const Tab& t) = 0; virtual void stop() = 0; }; And, by reading the "official" tutorial, I managed to write and build the following Python wrapper: Agent.cpp #include <boost/python.hpp> #include <Tab.hpp> #include <Agent.hpp> using namespace boost::python; struct AgentWrapper: Agent, wrapper<Agent> { public: void start(const Tab& t) { this->get_override("start")(); } void stop() { this->get_override("stop")(); } }; BOOST_PYTHON_MODULE(PythonWrapper) { class_<AgentWrapper, boost::noncopyable>("Agent") .def("start", pure_virtual(&Agent::start) ) .def("stop", pure_virtual(&Agent::stop) ) ; } Note that I have no problems while building it. What concerns me, though, is that as you can see AgentWrapper::start doesn't seem to pass any argument to Agent::start in: void start(const Tab& t) { this->get_override("start")(); } How will the python wrapper know "start" receives one argument? How can I do so? A: The get_override function returns an object of type override which has a number of overloads for differing number of arguments. So you should be able to just do this: void start(const Tab& t) { this->get_override("start")(t); } Did you try this?
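For completeness, a hypothetical Python-side sketch of how the override receives the argument once get_override("start")(t) forwards it (the module and class names follow the BOOST_PYTHON_MODULE block above; Tab would also need to be exposed to Python before a real call could be made):

import PythonWrapper

class PyAgent(PythonWrapper.Agent):
    def start(self, tab):   # 'tab' is the Tab& forwarded by get_override("start")(t)
        print "start called with", tab
    def stop(self):
        print "stop called"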
Wrapping a pure virtual method with arguments using Boost::Python
I'm currently trying to expose a c++ Interface (pure virtual class) to Python using Boost::Python. The c++ interface is: Agent.hpp #include "Tab.hpp" class Agent { virtual void start(const Tab& t) = 0; virtual void stop() = 0; }; And, by reading the "official" tutorial, I managed to write and build the following Python wrapper: Agent.cpp #include <boost/python.hpp> #include <Tab.hpp> #include <Agent.hpp> using namespace boost::python; struct AgentWrapper: Agent, wrapper<Agent> { public: void start(const Tab& t) { this->get_override("start")(); } void stop() { this->get_override("stop")(); } }; BOOST_PYTHON_MODULE(PythonWrapper) { class_<AgentWrapper, boost::noncopyable>("Agent") .def("start", pure_virtual(&Agent::start) ) .def("stop", pure_virtual(&Agent::stop) ) ; } Note that I have no problems while building it. What concerns me, though, is that as you can see AgentWrapper::start doesn't seem to pass any argument to Agent::start in: void start(const Tab& t) { this->get_override("start")(); } How will the python wrapper know "start" receives one argument? How can I do so?
[ "The get_override functions returns an an object of type override which has a number of overloads for differing number of arguments. So you should be able to just do this:\nvoid start(const Tab& t)\n{\n this->get_override(\"start\")(t);\n}\n\nDid you try this?\n" ]
[ 4 ]
[]
[]
[ "boost_python", "c++", "interface", "python", "word_wrap" ]
stackoverflow_0002277018_boost_python_c++_interface_python_word_wrap.txt
Q: What exactly is meant when mr.developer says "The package 'django-quoteme' is dirty." I'm using mr.developer to track some packages on github. When I rerun my buildout, I get: The package 'django-quoteme' is dirty. Do you want to update it anyway? [yes/No/all] y What is meant by "dirty" exactly? A: From http://github.com/fschulze/mr.developer: Dirty SVN You get an error like:: ERROR: Can't switch package 'foo' from 'https://example.com/svn/foo/trunk/', because it's dirty. If you have not modified the package files under src/foo, then you can check what's going on with status -v. One common cause is a *.egg-info folder which gets generated every time you run buildout and this shows up as an untracked item in svn status. You should add .egg-info to your global Subversion ignores in ~/.subversion/config, like this:: global-ignores = *.o *.lo *.la *.al .libs *.so .so.[0-9] *.a *.pyc *.pyo *.rej ~ ## .#* .*.swp .DS_Store *.egg-info So it looks like you should use status -v to see what they mean by "dirty" in your case. A: I don't know what it means specifically in this context, but in the computing science world, "dirty" usually means it's been modified. Maybe one of the files in the package has been edited, and by updating it, you'll lose those changes, hence the warning. http://en.wikipedia.org/wiki/Dirty_%28computer_science%29
What exactly is meant when mr.developer says "The package 'django-quoteme' is dirty."
I'm using mr.developer to track some packages on github. When I rerun my buildout, I get: The package 'django-quoteme' is dirty. Do you want to update it anyway? [yes/No/all] y What is meant by "dirty" exactly?
[ "From http://github.com/fschulze/mr.developer:\n\nDirty SVN\nYou get an error like::\nERROR: Can't switch package 'foo'\n from\n 'https://example.com/svn/foo/trunk/',\n because it's dirty.\nIf you have not modified the package\n files under src/foo, then you can\n check what's going on with status\n -v. One common cause is a *.egg-info folder which gets\n generated every time you run buildout\n and this shows up as an untracked item\n in svn status.\nYou should add .egg-info to your\n global Subversion ignores in\n ~/.subversion/config, like this::\n global-ignores = *.o *.lo *.la *.al .libs *.so .so.[0-9] *.a *.pyc *.pyo *.rej ~ ## .#* .*.swp .DS_Store *.egg-info\n\nSo it looks like you should use status -v to see what they mean by \"dirty\" in your case.\n", "I don't know what it means specifically in this context, but in the computing science world, \"dirty\" usually means its been modified. Maybe one the files in the package has been edited, and by updating it, you'll lose those changes, hence the warning.\nhttp://en.wikipedia.org/wiki/Dirty_%28computer_science%29\n" ]
[ 5, 4 ]
[]
[]
[ "buildout", "django", "python" ]
stackoverflow_0002277926_buildout_django_python.txt
Q: detecting end of tty output Hi I'm writing a pseudo-terminal that can live in a tty and spawn a second tty which it filters input and output from. I'm writing it in python for now; spawning the second tty and reading and writing is easy, but when I read, the read does not end, it waits for more input. import subprocess pfd = subprocess.Popen(['/bin/sh'], shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE) cmd = "ls" pfd.stdin.write(cmd + '\n') out = '' while 1: c = pfd.stdout.read(1) if not c: # if end of output (this never happens) break if c == '\n': # print line when found print repr(out) out = '' else: out += c ----------------------------- outputs ------------------------ intty $ python intty.py 'intty.py' 'testA_blank' 'testB_blank' (hangs here does not return) it looks like it's reaching the end of the buffer and instead of returning None or '' it hangs waiting for more input. What should I be looking for to see if the output has completed? the end of the buffer? a non-printable character? ---------------- edit ------------- this also happens when I run xpcshell instead of ls, I'm assuming these interactive programs have some way of knowing to display the prompt again, strangely the prompt, in this case "js>" never appears A: Well, your output actually hasn't completed. Because you spawned /bin/sh, the shell is still running after "ls" completes. There is no EOF indicator, because it's still running. Why not simply run /bin/ls? You could do something like pfd = subprocess.Popen(['ls'], stdout=subprocess.PIPE, stdin=subprocess.PIPE) out, err_output = pfd.communicate() This also highlights subprocess.communicate, which is a safer way to get output (For outputs which fit in memory, anyway) from a single program run. This will return only when the program has finished running. Alternately, you -could- read linewise from the shell, but you'd be looking for a special shell sequence like the sh~# line which could easily show up in program output. Thus, running a shell is probably a bad idea all around. Edit Here is what I was referring to, but it's still not really the best solution, as it has a LOT of caveats: while 1: c = pfd.stdout.read(1) if not c: break elif c == '\n': # print line when found print repr(out) out = '' else: out += c if out.strip() == 'sh#': break Note that this will break out if any other command outputs 'sh#' at the beginning of the line, and also if for some reason the output is different from expected, you will enter the same blocking situation as before. This is why it's a very sub-optimal situation for a shell. A: For applications like a shell, the output will not end until the shell ends. Either use select.select() to check if it has more output waiting for you, or end the process.
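A rough sketch of the select.select() suggestion from the second answer (Unix only; it assumes the same pfd as in the question, and treats a short timeout with no readable data as "no more output for now" rather than a true EOF):

import os, select

out = ''
while True:
    ready, _, _ = select.select([pfd.stdout], [], [], 0.5)  # wait up to 0.5s
    if not ready:                        # nothing arrived within the timeout
        break
    c = os.read(pfd.stdout.fileno(), 1)  # unbuffered read, safe after select
    if not c:                            # real EOF: the child closed its stdout
        break
    out += c
print repr(out)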
detecting end of tty output
Hi I'm writing a pseudo-terminal that can live in a tty and spawn a second tty which it filters input and output from. I'm writing it in python for now; spawning the second tty and reading and writing is easy, but when I read, the read does not end, it waits for more input. import subprocess pfd = subprocess.Popen(['/bin/sh'], shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE) cmd = "ls" pfd.stdin.write(cmd + '\n') out = '' while 1: c = pfd.stdout.read(1) if not c: # if end of output (this never happens) break if c == '\n': # print line when found print repr(out) out = '' else: out += c ----------------------------- outputs ------------------------ intty $ python intty.py 'intty.py' 'testA_blank' 'testB_blank' (hangs here does not return) it looks like it's reaching the end of the buffer and instead of returning None or '' it hangs waiting for more input. What should I be looking for to see if the output has completed? the end of the buffer? a non-printable character? ---------------- edit ------------- this also happens when I run xpcshell instead of ls, I'm assuming these interactive programs have some way of knowing to display the prompt again, strangely the prompt, in this case "js>" never appears
[ "Well, your output actually hasn't completed. Because you spawned /bin/sh, the shell is still running after \"ls\" completes. There is no EOF indicator, because it's still running.\nWhy not simply run /bin/ls?\nYou could do something like\npfd = subprocess.Popen(['ls'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n\nout, err_output = pfd.communicate()\n\nThis also highlights subprocess.communicate, which is a safer way to get output (For outputs which fit in memory, anyway) from a single program run. This will return only when the program has finished running.\nAlternately, you -could- read linewise from the shell, but you'd be looking for a special shell sequence like the sh~# line which could easily show up in program output. Thus, running a shell is probably a bad idea all around.\n\nEdit Here is what I was referring to, but it's still not really the best solution, as it has a LOT of caveats:\nwhile 1: \n c = pfd.stdout.read(1)\n if not c:\n break\n elif c == '\\n': # print line when found\n print repr(out)\n out = ''\n else:\n out += c\n if out.strip() == 'sh#':\n break\n\nNote that this will break out if any other command outputs 'sh#' at the beginning of the line, and also if for some reason the output is different from expected, you will enter the same blocking situation as before. This is why it's a very sub-optimal situation for a shell.\n", "For applications like a shell, the output will not end until the shell ends. Either use select.select() to check if it has more output waiting for you, or end the process.\n" ]
[ 1, 0 ]
[]
[]
[ "control_characters", "python", "tty" ]
stackoverflow_0002278150_control_characters_python_tty.txt
Q: merging dictionaries in python Sorry for the very general title but I'll try to be as specific as possible. I am working on a text mining application. I have a large number of key value pairs of the form ((word, corpus) -> occurence_count) (everything is an integer) which I am storing in multiple python dictionaries (tuple->int). These values are spread across multiple files on the disk (I pickled them). To make any sense of the data, I need to aggregate these dictionaries. Basically, I need to figure out a way to find all the occurrences of a particular key in all the dictionaries, and add them up to get a total count. If I load more than one dictionary at a time, I run out of memory, which is the reason I had to split them in the first place. When I tried, I ran into performance issues. I am currently trying to store the values in a DB (mysql), processing multiple dictionaries at a time, since mysql provides row level locking, which is both good (since it means I can parallelize this operation) and bad (since it slows down the insert queries). What are my options here? Is it a good idea to write a partially disk based dictionary so I can process the dicts one at a time? With an LRU replacement strategy? Is there something that I am completely oblivious to? Thanks! A: A disk-based dictionary-like exists -- see the shelve module. Keys into a shelf must be strings, but you could simply use str on your tuples to obtain equivalent string keys; plus, I read your Q as meaning that you want only word as the key, so that's even easier (either str -- or, for vocabularies < 4GB, a struct.pack -- will be fine). A good relational engine (especially PostgreSQL) would serve you well, but processing one dictionary at a time to aggregate each word occurrences over all corpora into a shelf object should also be OK (not quite as fast, but simpler to code, since a shelf is so similar to a dict except for the type constraint on keys [[and a caveat for mutable values, but as your values are ints that need not concern you). A: Something like this, if I understand your question correctly from collections import defaultdict import pickle result = defaultdict(int) for fn in filenames: data_dict = pickle.load(open(fn)) for k,count in data_dict.items(): word,corpus = k result[k]+=count A: If I understood your question correctly and you have integer ids for the words and corpora, then you can gain some performance by switching from a dict to a list, or even better, a numpy array. This may be annoying! Basically, you need to replace the tuple with a single integer, which we can call the newid. You want all the newids to correspond to a word,corpus pair, so I would count the words in each corpus, and then have, for each corpus, a starting newid. The newid of (word,corpus) will then be word + start_newid[corpus]. If I misunderstood you and you don't have such ids, then I think this advice might still be useful, but you will have to manipulate your data to get it into the tuple of ints format. Another thing you could try is rechunking the data. Let's say that you can only hold 1.1 of these monsters in memory. Then, you can load one, and create a smaller dict or array that only corresponds to the first 10% of (word,corpus) pairs. You can scan through the loaded dict, and deal with any of the ones that are in the first 10%. When you are done, you can write the result back to disk, and do another pass for the second 10%. This will require 10 passes, but that might be OK for you. If you chose your previous chunking based on what would fit in memory, then you will have to arbitrarily break your old dicts in half so that you can hold one in memory while also holding the result dict/array.
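A minimal sketch of the shelve approach from the first answer (filenames is assumed to be the list of pickle files; keys are stringified as suggested there):

import shelve, pickle

totals = shelve.open('totals.db')   # disk-backed dict; keys must be strings
for fn in filenames:
    counts = pickle.load(open(fn, 'rb'))
    for (word, corpus), n in counts.items():
        key = str((word, corpus))
        totals[key] = totals.get(key, 0) + n   # aggregate one dict at a time
totals.close()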
merging dictionaries in python
Sorry for the very general title but I'll try to be as specific as possible. I am working on a text mining application. I have a large number of key value pairs of the form ((word, corpus) -> occurence_count) (everything is an integer) which I am storing in multiple python dictionaries (tuple->int). These values are spread across multiple files on the disk (I pickled them). To make any sense of the data, I need to aggregate these dictionaries. Basically, I need to figure out a way to find all the occurrences of a particular key in all the dictionaries, and add them up to get a total count. If I load more than one dictionary at a time, I run out of memory, which is the reason I had to split them in the first place. When I tried, I ran into performance issues. I am currently trying to store the values in a DB (mysql), processing multiple dictionaries at a time, since mysql provides row level locking, which is both good (since it means I can parallelize this operation) and bad (since it slows down the insert queries). What are my options here? Is it a good idea to write a partially disk based dictionary so I can process the dicts one at a time? With an LRU replacement strategy? Is there something that I am completely oblivious to? Thanks!
[ "A disk-based dictionary-like exists -- see the shelve module. Keys into a shelf must be strings, but you could simply use str on your tuples to obtain equivalent string keys; plus, I read your Q as meaning that you want only word as the key, so that's even easier (either str -- or, for vocabularies < 4GB, a struct.pack -- will be fine).\nA good relational engine (especially PostgreSQL) would serve you well, but processing one dictionary at a time to aggregate each word occurrences over all corpora into a shelf object should also be OK (not quite as fast, but simpler to code, since a shelf is so similar to a dict except for the type constraint on keys [[and a caveat for mutable values, but as your values are ints that need not concern you).\n", "Something like this, if I understand your question correctly\nfrom collections import defaultdict\nimport pickle\n\nresult = defaultdict(int)\nfor fn in filenames:\n data_dict = pickle.load(open(fn))\n for k,count in data_dict.items():\n word,corpus = k\n result[k]+=count\n\n", "\nIf I understood your question correctly and you have integer ids for the words and corpora, then you can gain some performance by switching from a dict to a list, or even better, a numpy array. This may be annoying! \nBasically, you need to replace the tuple with a single integer, which we can call the newid. You want all the newids to correspond to a word,corpus pair, so I would count the words in each corpus, and then have, for each corpus, a starting newid. The newid of (word,corpus) will then be word + start_newid[corpus].\nIf I misunderstood you and you don't have such ids, then I think this advice might still be useful, but you will have to manipulate your data to get it into the tuple of ints format.\nAnother thing you could try is rechunking the data. \nLet's say that you can only hold 1.1 of these monsters in memory. Then, you can load one, and create a smaller dict or array that only corresponds to the first 10% of (word,corpus) pairs. You can scan through the loaded dict, and deal with any of the ones that are in the first 10%. When you are done, you can write the result back to disk, and do another pass for the second 10%. This will require 10 passes, but that might be OK for you. \nIf you chose your previous chunking based on what would fit in memory, then you will have to arbitrarily break your old dicts in half so that you can hold one in memory while also holding the result dict/array.\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "dictionary", "merge", "python" ]
stackoverflow_0002277895_dictionary_merge_python.txt
Q: Getting UTC offset for a datetime I have tried this but it's not correct: In [34]: e_now Out[34]: datetime.datetime(2010, 2, 17, 0, 2, 40, 506444, tzinfo=<DstTzInfo 'US/Eastern' EST-1 day, 19:00:00 STD>) In [35]: e_now.utcoffset() Out[35]: datetime.timedelta(-1, 68400) A: The tzinfo is identified as EST-1 day, 19:00:00 -- and the timedelta is given as -1 day, 68400 seconds (i.e., 19 hours, just as in the tzinfo identification). All timezones west of the London-Paris meridian will have -1 day and a positive number of seconds: for example, when it's a second past midnight in London (UTC), it's 1 second past 7pm (that is, 19:00) of the previous calendar day in New York. Why do you think that's a problem?
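To see why that timedelta equals the expected EST offset, a quick sketch of the arithmetic (just the standard datetime module, nothing beyond what the question already uses):

import datetime
offset = datetime.timedelta(-1, 68400)          # what utcoffset() returned
seconds = offset.days * 86400 + offset.seconds  # -86400 + 68400 = -18000
print seconds / 3600.0                          # -5.0 hours, i.e. EST (UTC-5)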
Getting UTC offset for a datetime
I have tried this but it's not correct: In [34]: e_now Out[34]: datetime.datetime(2010, 2, 17, 0, 2, 40, 506444, tzinfo=<DstTzInfo 'US/Eastern' EST-1 day, 19:00:00 STD>) In [35]: e_now.utcoffset() Out[35]: datetime.timedelta(-1, 68400)
[ "The tzinfo is identified as EST-1 day, 19:00:00 -- and the timedelta is given as -1 day, 68400 seconds (i.e., 19 hours, just as in the tzinfo identification). All timezones east of the London-Paris meridian will have -1 day and a positive number of seconds: for example, when it's a second past midnight in London (UTC), it's 1 second past 7pm (that is, 19:00) of the previous calendar day in New York. Why do you think that's a problem?\n" ]
[ 1 ]
[]
[]
[ "datetime", "python" ]
stackoverflow_0002278477_datetime_python.txt
Q: How can I end a string randomly and concatenate another string at the end in Python? Basically I have a user inputted string like: "hi my name is bob". What I would like to do is have my program randomly pick a new ending of the string and end it with my specified ending. For example: "hi my name DUR." "hi mDUR." etc etc I'm kinda new to python so hopefully there's an easy solution to this hehe A: Something like this: import random s = "hi my name is bob" r = random.randint(0, len(s)) print s[:r] + "DUR" String concatenation is accomplished with +. The [a:b] notation is called a slice. s[:r] returns the first r characters of s. A: s[:random.randrange(len(s))] + "DUR" A: Not sure why you would want this, but you can do something like the following import random user_string = 'hi my name is bob' my_custom_string = 'DUR' print ''.join([user_string[:random.randint(0, len(user_string))], my_custom_string]) You should read the docs for the random module to find out which method you should be using. A: just one of the many ways >>> import random >>> specified="DUR" >>> s="hi my name is bob" >>> s[:s.index(random.choice(s))]+specified 'hi mDUR' A: You can use the random module. See an example below: import random s = "hi my name is bob" pos = random.randint(0, len(s)) s = s[:pos] + "DUR" print s
How can I end a string randomly and concatenate another string at the end in Python?
Basically I have a user inputted string like: "hi my name is bob". What I would like to do is have my program randomly pick a new ending of the string and end it with my specified ending. For example: "hi my name DUR." "hi mDUR." etc etc I'm kinda new to python so hopefully there's an easy solution to this hehe
[ "Something like this: \nimport random\n\ns = \"hi my name is bob\"\nr = random.randint(0, len(s))\nprint s[:r] + \"DUR\"\n\nString concatentation is accomplished with +. The [a:b] notation is called a slice. s[:r] returns the first r characters of s. \n", "s[:random.randrange(len(s))] + \"DUR\"\n\n", "Not sure why you would want this, but you can do something like the following\nimport random\nuser_string = 'hi my name is bob'\nmy_custom_string = 'DUR'\nprint ''.join([x[:random.randint(0, len(user_string))], my_custom_string])\n\nYou should read the docs for the random module to find out which method you should be using.\n", "just one of the many ways\n>>> import random\n>>> specified=\"DUR\"\n>>> s=\"hi my name is bob\"\n>>> s[:s.index(random.choice(s))]+specified\n'hi mDUR'\n\n", "You can use the random module. See an example below:\nimport random\ns = \"hi my name is bob\"\npos = random.randint(0, len(s))\ns = s[:pos] + \"DUR\"\nprint s\n\n" ]
[ 4, 1, 0, 0, 0 ]
[]
[]
[ "concatenation", "python", "string" ]
stackoverflow_0002278585_concatenation_python_string.txt
Q: How to graphically edit the graph of a mathematical function (with python)? Is there already a python package allowing to graphically edit the graph of a function? A: Chaco is designed to be very interactive, and is significantly more so than matplotlib. For example, the user can use the mouse to drag the legend to different places on a plot, or lasso data, or move a point around on one plot and change the results in another, or change the color of a plot by clicking on a swatch, etc.
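As a rough illustration of the interactivity the answer describes, here is a hedged matplotlib sketch of drag-to-edit (the point count and pick radius are arbitrary choices of mine; this is matplotlib's event API, not Chaco's):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot(x, y, 'o-', picker=5)   # 5-point pick radius

dragged = [None]   # index of the point currently being dragged

def on_pick(event):
    if event.artist is line:
        dragged[0] = event.ind[0]

def on_motion(event):
    if dragged[0] is not None and event.ydata is not None:
        y[dragged[0]] = event.ydata     # edit the curve in place
        line.set_ydata(y)
        fig.canvas.draw_idle()

def on_release(event):
    dragged[0] = None

fig.canvas.mpl_connect('pick_event', on_pick)
fig.canvas.mpl_connect('motion_notify_event', on_motion)
fig.canvas.mpl_connect('button_release_event', on_release)
plt.show()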
How to graphically edit the graph of a mathematical function (with python)?
Is there already a python package allowing to graphically edit the graph of a function?
[ "Chaco is designed to be very interactive, and is significantly more so than matplotlib. For example, the user can use the mouse to drag the legend to different places on a plot, or lasso data, or move a point around on one plot and change the results in another, or change the color of a plot by clicking on a swatch, etc.\n" ]
[ 2 ]
[]
[]
[ "math", "numpy", "python", "user_interface" ]
stackoverflow_0002275845_math_numpy_python_user_interface.txt
Q: what is the usefulness of '>' in python print 'xxx' > 'ssaww' it prints True. Who can give me a clear example? Thanks. A: Just like in math, > compares two operands and returns True if the left operand is greater than the right, otherwise False. A: In python strings are ordered lexicographically. A: you can test it out on the interpreter >>> 'xxx'>'yyy' #first character 'x' is less than first character 'y', so false False >>> 'xxx'>'xyy' False >>> 'xyy'>'xyx' #3rd character 'y' is greater than 3rd character 'x', so true True A: to order strings syntactically (e.g. alphabetically). A: It can do y>x>z as a nicer way to say y>x and x>z. As others have mentioned it's a simple greater-than comparison (in your case for strings).
what is the usefulness of '>' in python
print 'xxx' > 'ssaww' it prints True. Who can give me a clear example? Thanks.
[ "Just like in math, > compares two operands and returns True if the left operand is greater than the right, otherwise False.\n", "In python strings are ordered lexicographically.\n", "you can test it out on the interpreter\n>>> 'xxx'>'yyy' #first character 'x' is less than first character 'y', so false\nFalse\n>>> 'xxx'>'xyy' \nFalse\n>>> 'xyy'>'xyx' #3rd character 'y' is greater than 3rd character 'x', so true\nTrue\n\n", "to order strings syntactically (e.g. alphabetically). \n", "It can do \ny>x>z \nas a nicer way to say y>x and x>z.\nAs others have mentioned its a simple greater-than comparison (in your case for strings).\n" ]
[ 5, 4, 3, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002278901_python.txt
Q: What would I use Stackless Python for? There are many questions related to Stackless Python. But none of them answer my question, I think (correct me if I'm wrong - please!). There's some buzz about it all the time so I'm curious to know. What would I use Stackless for? How is it better than CPython? Yes it has green threads (stackless) that allow you to quickly create many lightweight threads as long as no operations are blocking (something like Ruby's threads?). What is this great for? What other features does it have that I'd want to use over CPython? A: It allows you to work with massive amounts of concurrency. Nobody sane would create one hundred thousand system threads, but you can do this using stackless. This article tests doing just that, creating one hundred thousand tasklets in both Python and Google Go (a new programming language): http://dalkescientific.com/writings/diary/archive/2009/11/15/100000_tasklets.html Surprisingly, even if Google Go is compiled to native code, and they tout their co-routines implementation, Python still wins. Stackless would be good for implementing a map/reduce algorithm, where you can have a very large number of reducers depending on your input data. A: Stackless Python's main benefit is the support for very lightweight coroutines. CPython doesn't support coroutines natively (although I expect someone to post a generator-based hack in the comments) so Stackless is a clear improvement on CPython when you have a problem that benefits from coroutines. I think the main area where they excel is when you have many concurrent tasks running within your program. Examples might be game entities that run a looping script for their AI, or a web server that is servicing many clients with pages that are slow to create. You still have many of the typical problems with concurrency correctness however regarding shared data, but the deterministic task switching makes it easier to write safe code since you know exactly where control will be transferred and therefore know the exact points at which the shared state must be up to date. A: Thirler already mentioned that stackless was used in Eve Online. Keep in mind that: (..) stackless adds a further twist to this by allowing tasks to be separated into smaller tasks, Tasklets, which can then be split off the main program to execute on their own. This can be used for fire-and-forget tasks, like sending off an email, or dispatching an event, or for IO operations, e.g. sending and receiving network packets. One tasklet waits for a packet from the network while others continue running the game loop. It is in some ways like threads, but is non-preemptive and explicitly scheduled, so there are fewer issues with synchronization. Also, switching between tasklets is much faster than thread switching, and you can have a huge number of active tasklets whereas the number of threads is severely limited by the computer hardware. (got this citation from here) At PyCon 2009 a very interesting talk was given, describing why and how Stackless is used at CCP Games. Also, there is some very good introductory material, which describes why stackless is a good solution for your applications. (it may be somewhat old, but I think that it is worth reading). A: EVEOnline is largely programmed in Stackless Python. They have several dev blogs on the use of it. It seems it is very useful for high performance computing. A: While I've not used Stackless itself, I have used Greenlet for implementing highly-concurrent network applications. Some of the use cases Linden Lab has put it towards are: high-performance smart proxies, a fast system for distributing commands over huge numbers of machines, and an application that does a ton of database writes and reads (at a ratio of about 1:2, which is very write-heavy, so it's spending most of its time waiting for the database to return), and a web-crawler-type-thing for internal web data. Basically any app that's expecting to have to do a lot of network I/O will benefit from being able to create a bajillion lightweight threads. 10,000 connected clients doesn't seem like a huge deal to me. Stackless or Greenlet aren't really a complete solution, though. They are very low-level and you're going to have to do a lot of monkeywork to build an application with them that uses them to their fullest. I know this because I maintain a library that provides a networking and scheduling layer on top of Greenlet, specifically because writing apps is so much easier with it. There are a bunch of these now; I maintain Eventlet, but also there is Concurrence, Chiral, and probably a few more that I don't know about. If the sort of app you want to write sounds like what I wrote about, consider one of these libraries. The choice of Stackless vs Greenlet is somewhat less important than deciding what library best suits the needs of what you want to do. A: The basic usefulness for green threads, the way I see it, is to implement a system in which you have a large amount of objects that do high latency operations. A concrete example would be communicating with other machines: def Run(): # Do stuff request_information() # This call might block # Proceed doing more stuff Threads let you write the above code naturally, but if the number of objects is large enough, threads just cannot perform adequately. But you can use green threads even in really large amounts. The request_information() above could switch out to some scheduler where other work is waiting and return later. You get all the benefits of being able to call "blocking" functions as if they return immediately without using threads. This is obviously very useful for any kind of distributed computing if you want to write code in a straightforward way. It is also interesting for multiple cores to mitigate waiting for locks: def Run(): # Do some calculations green_lock(the_foo) # Do some more calculations The green_lock function would basically attempt to acquire the lock and just switch out to a main scheduler if it fails due to other cores using the object. Again, green threads are being used to mitigate blocking, allowing code to be written naturally and still perform well.
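To make the tasklet model concrete, a minimal tasklet/channel sketch (this only runs under the Stackless interpreter, and the squaring workload is just a placeholder):

import stackless

def worker(ch, n):
    ch.send(n * n)   # blocks this tasklet until a receiver is ready

def main():
    ch = stackless.channel()
    for i in range(5):
        stackless.tasklet(worker)(ch, i)   # create and schedule a tasklet
    for _ in range(5):
        print ch.receive()   # rendezvous: resumes whichever worker sends next

stackless.tasklet(main)()
stackless.run()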
What would I use Stackless Python for?
There are many questions related to Stackless Python. But none of them answer my question, I think (correct me if I'm wrong - please!). There's some buzz about it all the time so I'm curious to know. What would I use Stackless for? How is it better than CPython? Yes it has green threads (stackless) that allow you to quickly create many lightweight threads as long as no operations are blocking (something like Ruby's threads?). What is this great for? What other features does it have that I'd want to use over CPython?
[ "It allows you to work with massive amounts of concurrency. Nobody sane would create one hundred thousand system threads, but you can do this using stackless.\nThis article tests doing just that, creating one hundred thousand tasklets in both Python and Google Go (a new programming language): http://dalkescientific.com/writings/diary/archive/2009/11/15/100000_tasklets.html\nSurprisingly, even if Google Go is compiled to native code, and they tout their co-routines implementation, Python still wins.\nStackless would be good for implementing a map/reduce algorithm, where you can have a very large number of reducers depending on your input data.\n", "Stackless Python's main benefit is the support for very lightweight coroutines. CPython doesn't support coroutines natively (although I expect someone to post a generator-based hack in the comments) so Stackless is a clear improvement on CPython when you have a problem that benefits from coroutines. \nI think the main area where they excel are when you have many concurrent tasks running within your program. Examples might be game entities that run a looping script for their AI, or a web server that is servicing many clients with pages that are slow to create.\nYou still have many of the typical problems with concurrency correctness however regarding shared data, but the deterministic task switching makes it easier to write safe code since you know exactly where control will be transferred and therefore know the exact points at which the shared state must be up to date.\n", "Thirler already mentioned that stackless was used in Eve Online. Keep in mind, that:\n\n(..) stackless adds a further twist to this by allowing tasks to be separated into smaller tasks, Tasklets, which can then be split off the main program to execute on their own. This can be used for fire-and-forget tasks, like sending off an email, or dispatching an event, or for IO operations, e.g. sending and receiving network packets. One tasklet waits for a packet from the network while others continue running the game loop.\nIt is in some ways like threads, but is non-preemptive and explicitly scheduled, so there are fewer issues with synchronization. Also, switching between tasklets is much faster than thread switching, and you can have a huge number of active tasklets whereas the number of threads is severely limited by the computer hardware.\n\n(got this citation from here)\nAt PyCon 2009 there was given a very interesting talk, describing why and how Stackless is used at CCP Games.\nAlso, there is a very good introductory material, which describes why stackless is a good solution for Your applications. (it may be somewhat old, but I think that it is worth reading).\n", "EVEOnline is largely programmed in Stackless Python. They have several dev blogs on the use of it. It seems it is very useful for high performance computing.\n", "While I've not used Stackless itself, I have used Greenlet for implementing highly-concurrent network applications. Some of the use cases Linden Lab has put it towards are: high-performance smart proxies, a fast system for distributing commands over huge numbers of machines, and an application that does a ton of database writes and reads (at a ratio of about 1:2, which is very write-heavy, so it's spending most of its time waiting for the database to return), and a web-crawler-type-thing for internal web data. Basically any app that's expecting to have to do a lot of network I/O will benefit from being able to create a bajillion lightweight threads. 
10,000 connected clients doesn't seem like a huge deal to me.\nStackless or Greenlet aren't really a complete solution, though. They are very low-level and you're going to have to do a lot of monkeywork to build an application with them that uses them to their fullest. I know this because I maintain a library that provides a networking and scheduling layer on top of Greenlet, specifically because writing apps is so much easier with it. There are a bunch of these now; I maintain Eventlet, but also there is Concurrence, Chiral, and probably a few more that I don't know about. \nIf the sort of app you want to write sounds like what I wrote about, consider one of these libraries. The choice of Stackless vs Greenlet is somewhat less important than deciding what library best suits the needs of what you want to do.\n", "The basic usefulness for green threads, the way I see it, is to implement a system in which you have a large amount of objects that do high latency operations. A concrete example would be communicating with other machines:\ndef Run():\n # Do stuff\n request_information() # This call might block\n # Proceed doing more stuff\n\nThreads let you write the above code naturally, but if the number of objects is large enough, threads just cannot perform adequately. But you can use green threads even for in really large amounts. The request_information() above could switch out to some scheduler where other work is waiting and return later. You get all the benefits of being able to call \"blocking\" functions as if they return immediately without using threads.\nThis is obviously very useful for any kind of distributed computing if you want to write code in a straightforward way.\nIt is also interesting for multiple cores to mitigate waiting for locks:\ndef Run():\n # Do some calculations\n green_lock(the_foo)\n # Do some more calculations\n\nThe green_lock function would basically attempt to acquire the lock and just switch out to a main scheduler if it fails due to other cores using the object. \nAgain, green threads are being used to mitigate blocking, allowing code to be written naturally and still perform well.\n" ]
[ 32, 12, 9, 6, 6, 5 ]
[]
[]
[ "python", "python_stackless" ]
stackoverflow_0002220645_python_python_stackless.txt