content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Q: ForeignKey problem Imagine you have this model: class Category(models.Model): node_id = models.IntegerField(primary_key = True) type_id = models.IntegerField(max_length = 20) parent_id = models.IntegerField(max_length = 20) sort_order = models.IntegerField(max_length = 20) name = models.CharField(max_length = 45) lft = models.IntegerField(max_length = 20) rgt = models.IntegerField(max_length = 20) depth = models.IntegerField(max_length = 20) added_on = models.DateTimeField(auto_now = True) updated_on = models.DateTimeField(auto_now = True) status = models.IntegerField(max_length = 20) node = models.ForeignKey(Category_info, verbose_name = 'Category_info', to_field = 'node_id') The important part is the ForeignKey. When I try: Category.objects.filter(type_id = 15, parent_id = offset, status = 1) I get an error that get() returned more than one Category, which is fine, because it is supposed to return more than one. But I want to filter the results through another field, which would be type_id (from the second model). Here it is: class Category_info(models.Model): objtree_label_id = models.AutoField(primary_key = True) node_id = models.IntegerField(unique = True) language_id = models.IntegerField() label = models.CharField(max_length = 255) type_id = models.IntegerField() The type_id can be any number from 1 - 5. I am desperately trying to get only one result, where the type_id would be number 1. Here is what I want in SQL: SELECT c.*, ci.* FROM category c JOIN category_info ci ON (c.node_id = ci.node_id) WHERE c.type_id = 15 AND c.parent_id = 50 AND ci.type_id = 1 Any help is GREATLY appreciated. Regards A: To filter on fields in a related table, use the double-underscore notation. To get all Category objects where the type_id of the related Category_info object is 15, use: Category.objects.filter(node__type_id=15) Django will then automagically understand that you're referring to the type_id field on whatever table node is related to. A: So it still didn't solve my problem. Let me try to explain a bit more. Let's say that the SQL: SELECT c.*, ci.* FROM category c JOIN category_info ci ON (c.node_id = ci.node_id) WHERE c.type_id = 15 AND c.parent_id = 50 will return two rows. Both are identical, except for the type_id field from the category_info table, where there are two types - 1 and 2. If I add ci.type_id = 1 to the SQL, I get the right result. But from what I have tried, even with the double-underscore notation, it still returns 2 rows in Django. Now I have got: Category.objects.filter(type_id = 15, parent_id = offset, status = 1, node__type_id = 1) where node__type_id = 1 represents the "ci.type_id = 1". But it still returns two rows. When I delete the "to_field" from my model definition it passes, but returns wrong data, because it binds to the primary key by default. I tried to filter the data afterwards by chaining another filter, but still won't get past it. Here is a bit from the debug, maybe it helps: Caught an exception while rendering: get() returned more than one Category_info -- it returned 2! Lookup parameters were {'node_id__exact': 5379L} It still looks like it's trying to look only for the node_id and not for the type_id. Sigh, I could cry... A: Ok so... select_related() together with the double-underscore notation worked well...
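A minimal sketch of where the thread ends up, assuming the two models defined above (offset comes from the asker's surrounding code): combine select_related() with double-underscore filtering so the join and the ci.type_id = 1 condition happen in a single query.

categories = (Category.objects
              .select_related('node')
              .filter(type_id=15, parent_id=offset, status=1, node__type_id=1))
for category in categories:
    print category.node.label  # related Category_info row, fetched by the same query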
ForeignKey problem
Imagine you have this model: class Category(models.Model): node_id = models.IntegerField(primary_key = True) type_id = models.IntegerField(max_length = 20) parent_id = models.IntegerField(max_length = 20) sort_order = models.IntegerField(max_length = 20) name = models.CharField(max_length = 45) lft = models.IntegerField(max_length = 20) rgt = models.IntegerField(max_length = 20) depth = models.IntegerField(max_length = 20) added_on = models.DateTimeField(auto_now = True) updated_on = models.DateTimeField(auto_now = True) status = models.IntegerField(max_length = 20) node = models.ForeignKey(Category_info, verbose_name = 'Category_info', to_field = 'node_id') The important part is the ForeignKey. When I try: Category.objects.filter(type_id = 15, parent_id = offset, status = 1) I get an error that get() returned more than one Category, which is fine, because it is supposed to return more than one. But I want to filter the results through another field, which would be type_id (from the second model). Here it is: class Category_info(models.Model): objtree_label_id = models.AutoField(primary_key = True) node_id = models.IntegerField(unique = True) language_id = models.IntegerField() label = models.CharField(max_length = 255) type_id = models.IntegerField() The type_id can be any number from 1 - 5. I am desperately trying to get only one result, where the type_id would be number 1. Here is what I want in SQL: SELECT c.*, ci.* FROM category c JOIN category_info ci ON (c.node_id = ci.node_id) WHERE c.type_id = 15 AND c.parent_id = 50 AND ci.type_id = 1 Any help is GREATLY appreciated. Regards
[ "To filter on fields in a related table, use the double-underscore notation. To get all Category objects where type_id of the related Category_info object is 15, use:\nCategory.objects.filter(node__type_id=15)\n\nDjango will then automagically understand that you're referring to the type_id field on whatever table node is related to.\n", "So it still didnt solve my problem.. Let me try to explain in a bit more.\nLets say that the sql:\nSELECT c.*, ci.*\nFROM category c\nJOIN category_info ci ON (c.node_id = ci.node_id)\nWHERE c.type_id = 15 AND c.parent_id = 50 \n\nWill return two rows. Both are identical, except type_id field from category_info table where there are two types - 1 and 2. If I will add to the sql - ci.type_id = 1, I will get the right result. But from what I have tried, even with the doubleunderscore notation, it still returns 2 rows in Django.\nNow I have got:\nCategory.objects.filter(type_id = 15, parent_id = offset, status = 1, node__type_id = 1)\n\nWhere the node__type_id = 1 represents the \"ci.type_id = 1\". But it still does return two rows. When I will delete the \"to_field\" from my model definition it will pass, but return wrong data, because it binds it to the primary key by default. I tried to filter the data afterwards by chaining another filter, but still wont get past it.\nHere is a bit form the debug, maybe it helps:\nCaught an exception while rendering: get() returned more than one Category_info -- it returned 2! Lookup parameters were {'node_id__exact': 5379L}\n\nIt still looks like its trying to look only for the node_id and not for the type_id. \nSigh, I could cry...\n", "Ok so...\nselect_related() together with the underscore thingy worked well...\n" ]
[ 4, 0, 0 ]
[]
[]
[ "django", "foreign_keys", "python" ]
stackoverflow_0002766637_django_foreign_keys_python.txt
Q: Windows equivalent to this Makefile The advantage of writing a Makefile is that "make" is generally assumed to be present on the various Unices (Linux and Mac primarily). Now I have the following Makefile: PYTHON := python all: e installdeps e: virtualenv --distribute --python=${PYTHON} e installdeps: e/bin/python setup.py develop e/bin/pip install unittest2 test: e/bin/unit2 discover clean: rm -rf e As you can see, this Makefile uses simple targets and variable substitution. Can this be achieved on Windows? By that I mean - without having to install external tools (like Cygwin make); perhaps a make.cmd? Typing "make installdeps", for instance, should work both on Unix and Windows. A: Something simple like that, yes. However, if you'd like to continue to improve that makefile, you might consider just writing the "makefile" (rather, an installation script) in a more portable language. You have to have some assumptions. If it's a Python project, I'm sure you assume Python is installed. So write the equivalent of your makefile in Python.
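Following the answer's suggestion, here is one rough sketch of the same targets as a portable make.py. The Scripts-versus-bin switch reflects how virtualenv lays out directories on Windows; the whole thing is an illustration under those assumptions, not a drop-in replacement.

import os
import shutil
import subprocess
import sys

BIN = 'Scripts' if os.name == 'nt' else 'bin'  # virtualenv layout differs on Windows

def e():
    subprocess.check_call(['virtualenv', '--distribute', 'e'])

def installdeps():
    e()
    subprocess.check_call([os.path.join('e', BIN, 'python'), 'setup.py', 'develop'])
    subprocess.check_call([os.path.join('e', BIN, 'pip'), 'install', 'unittest2'])

def all():
    installdeps()

def test():
    subprocess.check_call([os.path.join('e', BIN, 'unit2'), 'discover'])

def clean():
    shutil.rmtree('e', ignore_errors=True)

if __name__ == '__main__':
    for target in sys.argv[1:] or ['all']:
        globals()[target]()  # run each named target, like "make <target>"

Typing python make.py installdeps then works the same on Unix and Windows.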
Windows equivalent to this Makefile
The advantage of writing a Makefile is that "make" is generally assumed to be present on the various Unices (Linux and Mac primarily). Now I have the following Makefile: PYTHON := python all: e installdeps e: virtualenv --distribute --python=${PYTHON} e installdeps: e/bin/python setup.py develop e/bin/pip install unittest2 test: e/bin/unit2 discover clean: rm -rf e As you can see, this Makefile uses simple targets and variable substitution. Can this be achieved on Windows? By that I mean - without having to install external tools (like Cygwin make); perhaps a make.cmd? Typing "make installdeps", for instance, should work both on Unix and Windows.
[ "Something simple like that, yes. However, if you'd like to continue to improve that makefile, you might consider just writing the \"makefile\" (rather installation script) in a more portable language. You have to have some assumptions. If its a python project, I'm sure you assume python is installed. So write the equivalent of your makefile in python.\n" ]
[ 4 ]
[]
[]
[ "cross_platform", "makefile", "python", "windows" ]
stackoverflow_0002768481_cross_platform_makefile_python_windows.txt
Q: Getting Started with Python: Attribute Error I am new to python and just downloaded it today. I am using it to work on a web spider, so to test it out and make sure everything was working, I downloaded some sample code. Unfortunately, it does not work and gives me the error: "AttributeError: 'MyShell' object has no attribute 'loaded' " I am not sure if the code itself has an error or I failed to do something correctly when installing python. Is there anything you have to do when installing python, like adding environment variables, etc.? And what does that error generally mean? Here is the sample code I used, with the imported spider class: import chilkat spider = chilkat.CkSpider() spider.Initialize("www.chilkatsoft.com") spider.AddUnspidered("http://www.chilkatsoft.com/") for i in range(0,10): success = spider.CrawlNext() if (success == True): print spider.lastUrl() else: if (spider.get_NumUnspidered() == 0): print "No more URLs to spider" else: print spider.lastErrorText() # Sleep 1 second before spidering the next URL. spider.SleepMs(1000) A: And what does that error generally mean? An Attribute in Python is a name belonging to an object - a method or a variable. An AttributeError means that the program tried to use an attribute of an object, but the object did not have the requested attribute. For instance, string objects have the 'upper' attribute, which is a method that returns the uppercase version of the string. You could write a method that uses it like this: def get_upper(my_string): return my_string.upper() However, note that there's nothing in that method that ensures you actually give it a string. You could pass in a file object, or a number. Neither of those have the 'upper' attribute, and Python will raise an AttributeError. As for why you're seeing it in this instance, you haven't provided enough detail for us to work it out. Add the full error message to your question.
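To see the error class in isolation, here is a toy reproduction, unrelated to the chilkat library (the class name just mirrors the one in the asker's error message):

class MyShell(object):
    pass

shell = MyShell()
print shell.loaded  # AttributeError: 'MyShell' object has no attribute 'loaded'

The usual fixes are to set the attribute before using it, or to guard the access with hasattr()/getattr().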
Getting Started with Python: Attribute Error
I am new to python and just downloaded it today. I am using it to work on a web spider, so to test it out and make sure everything was working, I downloaded some sample code. Unfortunately, it does not work and gives me the error: "AttributeError: 'MyShell' object has no attribute 'loaded' " I am not sure if the code itself has an error or I failed to do something correctly when installing python. Is there anything you have to do when installing python, like adding environment variables, etc.? And what does that error generally mean? Here is the sample code I used, with the imported spider class: import chilkat spider = chilkat.CkSpider() spider.Initialize("www.chilkatsoft.com") spider.AddUnspidered("http://www.chilkatsoft.com/") for i in range(0,10): success = spider.CrawlNext() if (success == True): print spider.lastUrl() else: if (spider.get_NumUnspidered() == 0): print "No more URLs to spider" else: print spider.lastErrorText() # Sleep 1 second before spidering the next URL. spider.SleepMs(1000)
[ "\nAnd what does that error generally\n mean?\n\nAn Attribute in Python is a name belonging to an object - a method or a variable. An AttributeError means that the program tried to use an attribute of an object, but the object did not have the requested attribute.\nFor instance, string objects have the 'upper' attribute, which is a method that returns the uppercase version of the string. You could write a method that uses it like this:\ndef get_upper(my_string):\n return my_string.upper()\n\nHowever, note that there's nothing in that method to ensure that you have to give it a string. You could pass in a file object, or a number. Neither of those have the 'upper' attribute, and Python will raise an Attribute Error.\nAs for why you're seeing it in this instance, you haven't provided enough detail for us to work it out. Add the full error message to your question.\n" ]
[ 6 ]
[ "1) Put code in Try ... Except block. get exception details.\n2) Could you tell StackTrace details means which line # and method thrown error\nAnd also are you able to run other simple python scripts without any error. Means just try to run some sample script etc.\n" ]
[ -1 ]
[ "attributeerror", "chilkat", "python", "web_crawler" ]
stackoverflow_0002767607_attributeerror_chilkat_python_web_crawler.txt
Q: Convert array to CSV/TSV-formatted string in Python Python provides csv.DictWriter for outputting CSV to a file. What is the simplest way to output CSV to a string or to stdout? For example, given a 2D array like this: [["a b c", "1,2,3"], ["i \"comma-heart\" you", "i \",heart\" u, too"]] return the following string: "a b c, \"1,2,3\"\n\"i \"\"comma-heart\"\" you\", \"i \"\",heart\"\" u, too\"" which when printed would look like this: a b c, "1,2,3" "i ""comma-heart"" you", "i "",heart"" u, too" (I'm taking csv.DictWriter's word for it that that is in fact the canonical way to output that array as CSV. Excel does parse it correctly that way, though Mathematica does not. From a quick look at the wikipedia page on CSV it seems Mathematica is wrong.) One way would be to write to a temp file with csv.DictWriter and read it back with csv.DictReader. What's a better way? TSV instead of CSV It also occurs to me that I'm not wedded to CSV. TSV would make a lot of the headaches with delimiters and quotes go away: just replace tabs with spaces in the entries of the 2D array and then just intersperse tabs and newlines and you're done. Let's include solutions for both TSV and CSV in the answers to make this as useful as possible for future searchers. A: We use StringIO for this import csv import StringIO myFakeFile = StringIO.StringIO() wtr = csv.DictWriter( myFakeFile, headings ) ... myFakeFile.getvalue() Usually works.
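For future searchers, a compact sketch covering both formats, assuming the 2D array from the question (cStringIO is the Python 2 spelling; on Python 3 you would use io.StringIO):

import csv
from cStringIO import StringIO

data = [["a b c", "1,2,3"], ['i "comma-heart" you', 'i ",heart" u, too']]

# CSV: let the csv module handle quoting and doubled quotes.
buf = StringIO()
csv.writer(buf).writerows(data)
csv_string = buf.getvalue()

# TSV: flatten tabs inside cells, then join with tabs and newlines.
tsv_string = '\n'.join(
    '\t'.join(cell.replace('\t', ' ') for cell in row) for row in data
)

print csv_string
print tsv_string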
Convert array to CSV/TSV-formatted string in Python
Python provides csv.DictWriter for outputting CSV to a file. What is the simplest way to output CSV to a string or to stdout? For example, given a 2D array like this: [["a b c", "1,2,3"], ["i \"comma-heart\" you", "i \",heart\" u, too"]] return the following string: "a b c, \"1,2,3\"\n\"i \"\"comma-heart\"\" you\", \"i \"\",heart\"\" u, too\"" which when printed would look like this: a b c, "1,2,3" "i ""comma-heart"" you", "i "",heart"" u, too" (I'm taking csv.DictWriter's word for it that that is in fact the canonical way to output that array as CSV. Excel does parse it correctly that way, though Mathematica does not. From a quick look at the wikipedia page on CSV it seems Mathematica is wrong.) One way would be to write to a temp file with csv.DictWriter and read it back with csv.DictReader. What's a better way? TSV instead of CSV It also occurs to me that I'm not wedded to CSV. TSV would make a lot of the headaches with delimiters and quotes go away: just replace tabs with spaces in the entries of the 2D array and then just intersperse tabs and newlines and you're done. Let's include solutions for both TSV and CSV in the answers to make this as useful as possible for future searchers.
[ "We use StringIO for this\nmyFakeFile = StringIO.StringIO()\nwtr = csv.DictWriter( myFakeFile, headings )\n...\nmyFakeFile.getvalue()\n\nUsually works.\n" ]
[ 5 ]
[]
[]
[ "csv", "python", "string", "text" ]
stackoverflow_0002768810_csv_python_string_text.txt
Q: Dynamic Class Creation in SQLAlchemy We have a need to create SQLAlchemy classes to access multiple external data sources that will increase in number over time. We use the declarative base for our core ORM models and I know we can manually specify new ORM classes using autoload=True to auto-generate the mapping. The problem is that we need to be able to generate them dynamically, taking something like this: from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() stored={} stored['tablename']='my_internal_table_name' stored['objectname']='MyObject' and turning it into something like this dynamically: class MyObject(Base): __tablename__ = 'my_internal_table_name' __table_args__ = {'autoload':True} We don't want the classes to persist longer than necessary to open a connection, perform the queries, and then close the connection. Therefore, ideally, we can put the items in the "stored" variable above into a database and pull them as needed. The other challenge is that the object name (e.g. "MyObject") may be used on different connections so we cannot define it once and keep it around. Any suggestions on how this might be accomplished would be greatly appreciated. Thanks...
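One way to package the answer below for the "pull the spec from a database, build the class, throw it away" flow the question describes. A sketch with an invented helper name (build_mapped_class); the str() call matters on Python 2, where type() rejects unicode class names:

def build_mapped_class(stored, base):
    attrs = {'__tablename__': stored['tablename'],
             '__table_args__': {'autoload': True}}
    return type(str(stored['objectname']), (base,), attrs)

MyObj = build_mapped_class(stored, Base)  # use it, then let it go out of scope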
Dynamic Class Creation in SQLAlchemy
We have a need to create SQLAlchemy classes to access multiple external data sources that will increase in number over time. We use the declarative base for our core ORM models and I know we can manually specify new ORM classes using autoload=True to auto-generate the mapping. The problem is that we need to be able to generate them dynamically, taking something like this: from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() stored={} stored['tablename']='my_internal_table_name' stored['objectname']='MyObject' and turning it into something like this dynamically: class MyObject(Base): __tablename__ = 'my_internal_table_name' __table_args__ = {'autoload':True} We don't want the classes to persist longer than necessary to open a connection, perform the queries, and then close the connection. Therefore, ideally, we can put the items in the "stored" variable above into a database and pull them as needed. The other challenge is that the object name (e.g. "MyObject") may be used on different connections so we cannot define it once and keep it around. Any suggestions on how this might be accomplished would be greatly appreciated. Thanks...
[ "You can dynamically create MyObject using the 3-argument call to type:\ntype(name, bases, dict)\n\n Return a new type object. This is essentially a dynamic form of the \n class statement... \n\nFor example:\nmydict={'__tablename__':stored['tablename'],\n '__table_args__':{'autoload':True},}\n\nMyObj=type(stored['objectname'],(Base,),mydict)\nprint(MyObj)\n# <class '__main__.MyObject'>\nprint(MyObj.__base__)\n# <class '__main__.Base'>\nprint(MyObj.__tablename__)\n# my_internal_table_name\nprint(MyObj.__table_args__)\n# {'autoload': True}\n\n" ]
[ 33 ]
[]
[]
[ "declarative", "metaclass", "metaprogramming", "python", "sqlalchemy" ]
stackoverflow_0002768607_declarative_metaclass_metaprogramming_python_sqlalchemy.txt
Q: Generating content diffs using SequenceMatcher (Python) I want to generate a diff between two revisions of text (more specifically, Markdown-formatted articles) in Python. I want to format this diff in a manner similar to what Github does. I've looked at difflib and have found that it does what I want. However, the Differ class is too high-level; I would have to parse the diff lines to generate HTML with inline diffs. The Differ class uses the SequenceMatcher class to generate its diffs. But looking at the SequenceMatcher it's very low-level in comparison. I haven't even figured out how to do a line-by-line diff (I'll admit I haven't spent a lot of time experimenting). Does anyone know of any resources for using the SequenceMatcher class (besides the difflib documentation)? A: SequenceMatcher is actually not that low-level. The most interesting method for you is get_grouped_opcodes. It will return a generator, which generates lists with change descriptions. I'll explain it on an example from a random commit on GitHub. Let's say you run SequenceMatcher(None, a, b).get_grouped_opcodes() on the old and new file "tabs_events.js". The generator will generate two groups, which are represented by those "..." lines in GitHub. It's basically a group of changes. In each of the groups, you have a list of detailed changes stored as tuples. For the first group, it returns two changes that look like this (the first item is a change type, the next two numbers represent a line range to be removed, followed by a line range to be added): ('replace', 24, 29, 24, 29) ('insert', 33, 33, 33, 35) The first one tells you to replace lines 24-28 (starting with 0) from the old file with lines 24-28 from the new file. The second one tells you to insert lines 33-34 from the new file on line 33 in the old file. I think it's clear what 'delete' would do, and 'equal' are those lines that are not highlighted in GitHub. If you don't mind reading source code, take a look at the implementation of difflib.unified_diff(). It's quite simple and it generates a plain-text equivalent of what you want.
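A short sketch of the approach the answer describes, assuming old_text and new_text hold the two Markdown revisions (splitting on lines gives the line-by-line diff the asker was after):

import difflib

a = old_text.splitlines()
b = new_text.splitlines()

matcher = difflib.SequenceMatcher(None, a, b)
for group in matcher.get_grouped_opcodes(3):  # 3 lines of context per group
    for tag, i1, i2, j1, j2 in group:
        print tag, a[i1:i2], b[j1:j2]

From those (tag, i1, i2, j1, j2) tuples it is straightforward to emit HTML: wrap b[j1:j2] for 'insert', a[i1:i2] for 'delete', and both for 'replace'.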
Generating content diffs using SequenceMatcher (Python)
I want to generate a diff between two revisions of text (more specifically, Markdown-formatted articles) in Python. I want to format this diff in a manner similar to what Github does. I've looked at difflib and have found that it does what I want. However, the Differ class is too high-level; I would have to parse the diff lines to generate HTML with inline diffs. The Differ class uses the SequenceMatcher class to generate its diffs. But looking at the SequenceMatcher it's very low-level in comparison. I haven't even figured out how to do a line-by-line diff (I'll admit I haven't spent a lot of time experimenting). Does anyone know of any resources for using the SequenceMatcher class (besides the difflib documentation)?
[ "SequenceMatcher is actually not that low-level. The most interesting method for you is get_grouped_opcodes. It will return a generator, which generates lists with change descriptions.\nI'll explain it on an example from a random commit on GitHub. Let's say you run SequenceMatcher(None, a, b).get_grouped_opcodes() on the old and new file \"tabs_events.js\". The generator will generate two groups, which are represent by those \"...\" lines in GitHub. It's basically a group of changes. In each of the groups, you have a list of detailed changes stored as tuples. For the first group, it returns two changes that look like this (the first item is a change type, the next two numbers represent a line range to be removed, followed a line range to be added):\n('replace', 24, 29, 24, 29)\n('insert', 33, 33, 33, 35)\n\nThe first one tell you to replace lines 24-28 (starting with 0) from the old file with lines 24-28 from the new file. The second one tells you to insert lines 33-34 from the new file on line 33 in the old file. I think it's clear what would 'delete' do and 'equal' are those lines that are not highlighted in GitHub.\nIf you don't mind reading source code, take a look at the implementation of difflib.unified_diff(). It's quite simple and it generates a plain-text equivalent of what you want.\n" ]
[ 8 ]
[]
[]
[ "diff", "python" ]
stackoverflow_0002767822_diff_python.txt
Q: Is there something similar to node-git for python? http://github.com/creationix/node-git Reading blog posts from a repository. If not, how would you do it (caching, ...)? A: Well, there's Dulwich, "a pure-Python implementation of the Git file formats and protocols," and GitPython, which is apparently a wrapper around the git command line tools.
Is there something similar to node-git for python?
http://github.com/creationix/node-git Reading blog posts from a repository. If not, how would you do it (caching, ...)?
[ "Well, there's Dulwich, \"a pure-Python implementation of the Git file formats and protocols,\" and GitPython, which is apparently a wrapper around the git command line tools.\n" ]
[ 1 ]
[]
[]
[ "git", "python" ]
stackoverflow_0002769233_git_python.txt
Q: Creating a unique key based on file content in python I've got many, many files to be uploaded to the server, and I just want a way to avoid duplicates. Thus, generating a unique and small key value from a big string seemed something that a checksum was intended to do, and hashing seemed like the evolution of that. So I was going to use an MD5 hash to do this. But then I read somewhere that "MD5s are not meant to be unique keys" and I thought that's really weird. What's the right way of doing this? edit: by the way, I took two sources to get to the following, which is how I'm currently doing it and it's working just fine, with Python 2.5: import hashlib def md5_from_file(fileName, block_size=2**14): md5 = hashlib.md5() f = open(fileName, 'rb') while True: data = f.read(block_size) if not data: break md5.update(data) f.close() return md5.hexdigest() A: Sticking with MD5 is a good idea. Just to make sure, I'd append the file length or number of chunks to your file-hash table. Yes, there is the possibility that you run into two files that have the same MD5 hash, but that's quite unlikely (if your files are decent sized). Thus adding the number of chunks to your hash may help you reduce that, since now you have to find two files of the same size with the same MD5. # This is the algorithm you described, but also returns the number of chunks. new_file_hash, nchunks = hash_for_file(new_file) store_file(new_file, nchunks, new_file_hash) def store_file(file, nchunks, hash): """ Tells you whether there is another file with the same contents already, by making a table lookup """ # This can be a DB lookup or some way to obtain your hash map big_table = ObtainTable() # Two level lookup table might help performance # Will vary on the number of entries and nature of big_table if nchunks in big_table: if hash in big_table[nchunks]: raise DuplicateFileException, 'File is dup with %s' % big_table[nchunks][hash] else: big_table[nchunks] = {} big_table[nchunks].update({ hash: file.filename }) file.save() # or something To reduce that possibility switch to SHA1 and use the same method. Or even use both (concatenating) if performance is not an issue. Of course, keep in mind that this will only work with duplicate files at the binary level, not images, sounds, or video that are "the same" but have different signatures. A: The issue with hashing is that it's generating a "small" identifier from a "large" dataset. It's like a lossy compression. While you can't guarantee uniqueness, you can use it to substantially limit the number of other items you need to compare against. Consider that MD5 yields a 128 bit value (I think that's what it is, although the exact number of bits is irrelevant). If your input data set has 129 bits and you actually use them all, each MD5 value will appear on average twice. For longer datasets (e.g. "all text files of exactly 1024 printable characters") you're still going to run into collisions once you get enough inputs. Contrary to what another answer said, it is a mathematical certainty that you will get collisions. See http://en.wikipedia.org/wiki/Birthday_Paradox Granted, you have around a 1% chance of collisions with a 128 bit hash at 2.6*10^18 entries, but it's better to handle the case that you do get collisions than to hope that you never will. A: The issue with MD5 is that it's broken. For most common uses there's little problem and people still use both MD5 and SHA1, but I think that if you need a hashing function then you need a strong hashing function. To the best of my knowledge there is still no standard substitute for either of these. There are a number of algorithms that are "supposed" to be strong, but we have the most experience with SHA1 and MD5. That is, we (think we) know when these two break, whereas we don't really know as much about when the newer algorithms break. Bottom line: think about the risks. If you wish to walk the extra mile then you might add extra checks when you find a hash duplicate, for the price of the performance penalty.
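Putting the size-plus-digest idea from the first answer together with the asker's read loop gives a small standard-library-only sketch; the in-memory dict is a stand-in for whatever database table you actually use:

import os
import hashlib

seen = {}  # (size_in_bytes, md5_hexdigest) -> first fileName seen

def is_duplicate(fileName, block_size=2**14):
    md5 = hashlib.md5()
    f = open(fileName, 'rb')
    while True:
        data = f.read(block_size)
        if not data:
            break
        md5.update(data)
    f.close()
    key = (os.path.getsize(fileName), md5.hexdigest())
    if key in seen:
        return True  # same size and same digest: treat as a duplicate
    seen[key] = fileName
    return False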
Creating a unique key based on file content in python
I've got many, many files to be uploaded to the server, and I just want a way to avoid duplicates. Thus, generating a unique and small key value from a big string seemed something that a checksum was intended to do, and hashing seemed like the evolution of that. So I was going to use an MD5 hash to do this. But then I read somewhere that "MD5s are not meant to be unique keys" and I thought that's really weird. What's the right way of doing this? edit: by the way, I took two sources to get to the following, which is how I'm currently doing it and it's working just fine, with Python 2.5: import hashlib def md5_from_file(fileName, block_size=2**14): md5 = hashlib.md5() f = open(fileName, 'rb') while True: data = f.read(block_size) if not data: break md5.update(data) f.close() return md5.hexdigest()
[ "Sticking with MD5 is a good idea. Just to make sure I'd append the file length or number of chunks to your file-hash table.\nYes, there is the possibility that you run into two files that have the same MD5 hash, but that's quite unlikely (if your files are decent sized). Thus adding the number of chunks to your hash may help you reduce that since now you have to find two files the same size with the same MD5.\n# This is the algorithm you described, but also returns the number of chunks.\nnew_file_hash, nchunks = hash_for_tile(new_file)\nstore_file(new_file, nchunks, hash)\n\ndef store_file(file, nchunks, hash):\n \"\" Tells you whether there is another file with the same contents already, by \n making a table lookup \"\"\n # This can be a DB lookup or some way to obtain your hash map\n big_table = ObtainTable()\n\n # Two level lookup table might help performance\n # Will vary on the number of entries and nature of big_table\n if nchunks in big_table:\n if hash in big_table[hash]:\n raise DuplicateFileException,\\\n 'File is dup with %s' big_table[nchunks][lookup_hash]\n else:\n big_table[nchunks] = {}\n\n big_table[nchunks].update({\n hash: file.filename\n })\n\n file.save() # or something\n\nTo reduce that possibility switch to SHA1 and use the same method. Or even use both(concatenating) if performance is not an issue.\nOf course, keep in mind that this will only work with duplicate files at binary level, not images, sounds, video that are \"the same\" but have different signatures.\n", "The issue with hashing is that it's generating a \"small\" identifier from a \"large\" dataset. It's like a lossy compression. While you can't guarantee uniqueness, you can use it to substantially limit the number of other items you need to compare against.\nConsider that MD5 yields a 128 bit value (I think that's what it is, although the exact number of bits is irrelevant). If your input data set has 129 bits and you actually use them all, each MD5 value will appear on average twice. For longer datasets (e.g. \"all text files of exactly 1024 printable characters\") you're still going to run into collisions once you get enough inputs. Contrary to what another answer said, it is a mathematical certainty that you will get collisions.\nSee http://en.wikipedia.org/wiki/Birthday_Paradox\nGranted, you have around a 1% chance of collisions with a 128 bit hash at 2.6*10^18 entries, but it's better to handle the case that you do get collisions than to hope that you never will.\n", "The issue with MD5 is that it's broken. For most common uses there's little problem and people still use both MD5 and SHA1, but I think that if you need a hashing function then you need a strong hashing function. To the best of my knowledge there is still no standard substitute for either of these. There are a number of algorithms that are \"supposed\" to be strong, but we have most experience with SHA1 and MD5. That is, we (think) we know when these two break, whereas we don't really know as much when the newer algorithms break.\nBottom line: think about the risks. If you wish to walk the extra mile then you might add extra checks when you find a hash duplicate, for the price of the performance penalty.\n" ]
[ 7, 3, 2 ]
[]
[]
[ "checksum", "cryptography", "hash", "python", "unique_key" ]
stackoverflow_0002769461_checksum_cryptography_hash_python_unique_key.txt
Q: Most elegant way to break CSV columns into separate data structures using Python? I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form: headers = [a, b, c, d, e, .....] and separate lists of groups that these headers should be broken up into, e.g.: headers_for_list_a = [b, c, e, ...] headers_for_list_b = [a, d, k, ...] . . . I want to take the CSV data and turn it into dicts based on these groups, e.g.: list_a = [ {b:val_1b, c:val_1c, e:val_1e, ... }, {b:val_2b, c:val_2c, e:val_2e, ... }, {b:val_3b, c:val_3c, e:val_3e, ... }, . . . ] where for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like: for row in data: for col_num, val in enumerate(row): col_name = headers[col_num] if col_name in group_a: dict_a[col_name] = val elif col_name in group_b: dict_b[col_name] = val ... list_a.append(dict_a) list_b.append(dict_b) ... However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python? A: Try the CSV module of Python, in particular the DictReader class. A: Not necessarily the most pythonic way to achieve the same thing as your code, but this version of your code is somewhat more concise due to the use of generator expressions: from itertools import izip for row in data: dict_a = dict((col_name, val) for col_name, val in izip(headers, row) \ if col_name in group_a) dict_b = dict((col_name, val) for col_name, val in izip(headers, row) \ if col_name in group_b) list_a.append(dict_a) list_b.append(dict_b) Also, use sets for group_a and group_b instead of lists - the in operator works faster on sets. But Jason Humber is right, DictReader is way more elegant, see the following version: from csv import DictReader for row in DictReader(your_file, headers): dict_a = dict((k, row[k]) for k in group_a) dict_b = dict((k, row[k]) for k in group_b) list_a.append(dict_a) list_b.append(dict_b) A: csv.DictReader import csv groups = dict(a=headers_for_list_a, b=headers_for_list_b) lists = dict((name, []) for name in groups) for row in csv.DictReader(csvfile, fieldnames=headers): for name, grp_headers in groups.items(): lists[name].append(dict((header, row[header]) for header in grp_headers))
Most elegant way to break CSV columns into separate data structures using Python?
I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form: headers = [a, b, c, d, e, .....] and separate lists of groups that these headers should be broken up into, e.g.: headers_for_list_a = [b, c, e, ...] headers_for_list_b = [a, d, k, ...] . . . I want to take the CSV data and turn it into dicts based on these groups, e.g.: list_a = [ {b:val_1b, c:val_1c, e:val_1e, ... }, {b:val_2b, c:val_2c, e:val_2e, ... }, {b:val_3b, c:val_3c, e:val_3e, ... }, . . . ] where for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like: for row in data: for col_num, val in enumerate(row): col_name = headers[col_num] if col_name in group_a: dict_a[col_name] = val elif col_name in group_b: dict_b[col_name] = val ... list_a.append(dict_a) list_b.append(dict_b) ... However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python?
[ "Try the CSV module of Python, in particular the DictReader class.\n", "Not necessary the most pythonic way to achieve the same thing as your code, but this version of your code is somewhat more concise due to the use of generator expressions:\nfrom itertools import izip\n\nfor row in data:\n dict_a = dict((col_name, val) for col_name, val in izip(headers, row) \\\n if col_name in group_a)\n dict_b = dict((col_name, val) for col_name, val in izip(headers, row) \\\n if col_name in group_b)\n list_a.append(dict_a)\n list_b.append(dict_b)\n\nAlso, use sets for group_a and group_b instead of lists - the in operator works faster on sets. But Jason Humber is right, DictReader is way more elegant, see the following version:\nfrom csv import DictReader\n\nfor row in DictReader(your_file, headers):\n dict_a = dict((k, row[k]) for k in group_a)\n dict_b = dict((k, row[k]) for k in group_b)\n list_a.append(dict_a)\n list_b.append(dict_b)\n\n", "csv.DictReader\nimport csv\n\ngroups = dict(a=headers_for_list_a, b=headers_for_list_b)\nlists = dict((name, []) for name in groups)\n\nfor row in csv.DictReader(csvfile, fieldnames=headers):\n for name, grp_headers in groups.items():\n lists[name].append(dict((header, row[header]) for header in grp_headers))\n\n" ]
[ 5, 2, 2 ]
[]
[]
[ "csv", "data_structures", "python" ]
stackoverflow_0002768912_csv_data_structures_python.txt
Q: Iterating through String word at a time in Python I have a string buffer of a huge text file. I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using the re module's matches, but as I have a huge text corpus to search through, this is taking a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases in the dictionary, and increment the count in the dictionary if the keys are found. One small optimization we thought of was to sort the dictionary of phrases/words from the most words to the fewest, and then, starting at each word position in the string buffer, compare against the list of words. If one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want). Can someone suggest how to go word by word through the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this? data = str(file_content) for j in dictionary_entity.keys(): cnt = data.count(j+" ") if cnt != -1: dictionary_entity[j] = dictionary_entity[j] + cnt f.close() A: Iterating word-by-word through the contents of a file (the Wizard of Oz from Project Gutenberg, in my case), three different ways: from __future__ import with_statement import time import re from cStringIO import StringIO def word_iter_std(filename): start = time.time() with open(filename) as f: for line in f: for word in line.split(): yield word print 'iter_std took %0.6f seconds' % (time.time() - start) def word_iter_re(filename): start = time.time() with open(filename) as f: txt = f.read() for word in re.finditer('\w+', txt): yield word print 'iter_re took %0.6f seconds' % (time.time() - start) def word_iter_stringio(filename): start = time.time() with open(filename) as f: io = StringIO(f.read()) for line in io: for word in line.split(): yield word print 'iter_io took %0.6f seconds' % (time.time() - start) woo = '/tmp/woo.txt' for word in word_iter_std(woo): pass for word in word_iter_re(woo): pass for word in word_iter_stringio(woo): pass Resulting in: % python /tmp/junk.py iter_std took 0.016321 seconds iter_re took 0.028345 seconds iter_io took 0.016230 seconds A: This sounds like the sort of problem where a trie would really help. You should probably use some sort of compressed trie like a Patricia/radix trie. As long as you can fit the whole dictionary of words/phrases that you are looking for in the trie, this will greatly reduce the time complexity. How it will work is you take the beginning of a word and descend the trie until you find the longest match and increment the counter in that node. This might mean that you have to ascend the trie if a partial match doesn't pan out. Then you would proceed to the beginning of the next word and do it again. The advantage of the trie is that you are searching through the whole dictionary with each search through the trie (each look-up should take about O(m) where m is the average length of a word/phrase in your dictionary). If you can't fit the whole dictionary into one trie, then you could split the dictionary into a few tries (one for all words/phrases starting with a-l, one for m-z for instance) and do a sweep through the whole corpus for each trie. A: If the re module can't do it fast, you're going to be hard pressed doing it any faster. Either way you need to read the entire file. You might consider fixing your regular expression (can you provide one?). Maybe some background on what you are trying to accomplish too. A: You could try doing it the other way around...instead of processing the text corpus 2,000,000 times (once for each word), process it only once. For every single word in the corpus, increment a hash table or similar to store the count of that word. A simple example in pseudocode: word_counts = new hash<string,int> for each word in corpus: if exists(word_counts[word]): word_counts[word]++ else: word_counts[word] = 1 You might be able to speed it up by initializing the word_counts ahead of time with the full list of words, this not needing that if statement...not sure. A: As xyld said, I do not think that you can beat the speed of the re module, although it would help if you posted your regexes and possibly the code as well. All I can add is try profiling before optimizing. You may be quite surprised when you see where most of the processing goes. I use hotshot to profile my code and am quite happy with it. You can find a good introduction to python profiling here http://onlamp.com/pub/a/python/2005/12/15/profiling.html. A: If using re is not performant enough, you're probably using findall(), or finding the matches one by one manually. Using an iterator might make it faster: >>> for i in re.finditer(r'\w+', 'Hello, this is a sentence.'): ... print i.group(0) ... Hello this is a sentence A: #!/usr/bin/env python import re s = '' for i in xrange(0, 100000): s = s + 'Hello, this is a sentence. ' if i == 50000: s = s + " my phrase " s = s + 'AARRGH' print len(s) itr = re.compile(r'(my phrase)|(\w+)').finditer(s) for w in itr: if w.group(0) == 'AARRGH': print 'Found AARRGH' elif w.group(0) == "my phrase": print 'Found "my phrase"' Running this, we get $ time python itrword.py 2700017 Found "my phrase" Found AARRGH real 0m0.616s user 0m0.573s sys 0m0.033s But, each "phrase" explicitly added to the regex will take its toll on performance -- the above is 50% slower than just using "\w+", by my rough measurement. A: Have you considered looking at the Natural Language Toolkit? It includes many nice functions for working with a text corpus, and also has a cool FreqDist class that behaves dict-like (has keys) and list-like (slice).
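The trie answer above stops at prose; here is a rough sketch of the counting pass it describes, assuming phrases are whitespace-separated and exact token matches are enough (build_trie and count_phrases are names invented here, and a Patricia trie would compress this further):

def build_trie(phrases):
    root = {}
    for phrase in phrases:
        node = root
        for word in phrase.split():
            node = node.setdefault(word, {})
        node[None] = phrase  # the None key marks the end of a stored phrase
    return root

def count_phrases(words, trie, counts):
    for start in xrange(len(words)):
        node, last = trie, None
        for word in words[start:]:
            if word not in node:
                break
            node = node[word]
            if None in node:
                last = node[None]  # remember the longest match so far
        if last is not None:
            counts[last] = counts.get(last, 0) + 1

counts = {}
count_phrases(data.split(), build_trie(dictionary_entity.keys()), counts)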
Iterating through String word at a time in Python
I have a string buffer of a huge text file. I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using the re module's matches, but as I have a huge text corpus to search through, this is taking a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases in the dictionary, and increment the count in the dictionary if the keys are found. One small optimization we thought of was to sort the dictionary of phrases/words from the most words to the fewest, and then, starting at each word position in the string buffer, compare against the list of words. If one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want). Can someone suggest how to go word by word through the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this? data = str(file_content) for j in dictionary_entity.keys(): cnt = data.count(j+" ") if cnt != -1: dictionary_entity[j] = dictionary_entity[j] + cnt f.close()
[ "Iterating word-by-word through the contents of a file (the Wizard of Oz from Project Gutenberg, in my case), three different ways:\nfrom __future__ import with_statement\nimport time\nimport re\nfrom cStringIO import StringIO\n\ndef word_iter_std(filename):\n start = time.time()\n with open(filename) as f:\n for line in f:\n for word in line.split():\n yield word\n print 'iter_std took %0.6f seconds' % (time.time() - start)\n\ndef word_iter_re(filename):\n start = time.time()\n with open(filename) as f:\n txt = f.read()\n for word in re.finditer('\\w+', txt):\n yield word\n print 'iter_re took %0.6f seconds' % (time.time() - start)\n\ndef word_iter_stringio(filename):\n start = time.time()\n with open(filename) as f:\n io = StringIO(f.read())\n for line in io:\n for word in line.split():\n yield word\n print 'iter_io took %0.6f seconds' % (time.time() - start)\n\nwoo = '/tmp/woo.txt'\n\nfor word in word_iter_std(woo): pass\nfor word in word_iter_re(woo): pass\nfor word in word_iter_stringio(woo): pass\n\nResulting in:\n% python /tmp/junk.py\niter_std took 0.016321 seconds\niter_re took 0.028345 seconds\niter_io took 0.016230 seconds\n\n", "This sounds like the sort of problem where a trie would really help. You should probably use some sort of compressed trie like a Patricia/radix trie. As long as you can fit the whole dictionary of words/phrases that you are looking for in the trie, this will greatly reduce the time complexity. How it will work is you take the beginning of a word and descend the trie until you find the longest match and increment the counter in that node. This might mean that you have to ascend the trie if a partial match doesn't pan out. Then you would proceed to the beginning of the next word and do it again. The advantage of the trie is that you are searching through the whole dictionary with each search through the trie (each look-up should take about O(m) where m is the average length of a word/phrase in your dictionary).\nIf you can't fit the whole dictionary into one trie, then you could split the dictionary into a few tries (one for all words/phrases starting with a-l, one for m-z for instance) and do a sweep through the whole corpus for each trie. \n", "If the re module can't do it fast, you're going to be hard pressed doing it any faster. Either way you need to read the entire file. You might consider fixing your regular expression (can you provide one?). Maybe some background on what you are trying to accomplish too.\n", "You could try doing it the other way around...instead of processing the text corpus 2,000,000 times (once for each word), process it only once. For every single word in the corpus, increment a hash table or similar to store the count of that word. A simple example in pseudocode:\nword_counts = new hash<string,int>\nfor each word in corpus:\n if exists(word_counts[word]):\n word_counts[word]++\n else:\n word_counts[word] = 1\n\nYou might be able to speed it up by initializing the word_counts ahead of time with the full list of words, this not needing that if statement...not sure.\n", "As xyld said, I do not think that you can beat the speed of the re module, although it would help if you posted your regexes and possibly the code as well. All I can add is try profiling before optimizing. You may be quite surprised when you see where most of the processing goes. I use hotshot to profile my code and am quite happy with it. 
You can find a good introduction to python profiling here http://onlamp.com/pub/a/python/2005/12/15/profiling.html.\n", "If using re is not performant enough, you're probably using findall(), or finding the matches one by one manually. Using an iterator might make it faster:\n>>> for i in re.finditer(r'\\w+', 'Hello, this is a sentence.'):\n... print i.group(0)\n... \nHello\nthis\nis\na\nsentence\n\n", "#!/usr/bin/env python\nimport re\n\ns = ''\nfor i in xrange(0, 100000):\n    s = s + 'Hello, this is a sentence. '\n    if i == 50000:\n        s = s + \" my phrase \"\n\ns = s + 'AARRGH'\n\nprint len(s)\n\nitr = re.compile(r'(my phrase)|(\\w+)').finditer(s)\nfor w in itr:\n    if w.group(0) == 'AARRGH':\n        print 'Found AARRGH'\n    elif w.group(0) == \"my phrase\":\n        print 'Found \"my phrase\"'\n\nRunning this, we get\n$ time python itrword.py\n2700017\nFound \"my phrase\"\nFound AARRGH\n\nreal 0m0.616s\nuser 0m0.573s\nsys 0m0.033s\n\nBut, each \"phrase\" explicitly added to the regex will take its toll on performance -- the above is 50% slower than just using \"\\w+\", by my rough measurement.\n", "Have you considered looking at the Natural Language Toolkit? It includes many nice functions for working with a text corpus, and also has a cool FreqDist class that behaves dict-like (has keys) and list-like (slice). \n" ]
[ 7, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "python", "string", "string_matching" ]
stackoverflow_0002768628_python_string_string_matching.txt
Q: Paginating requests to an API I'm consuming (via urllib/urllib2) an API that returns XML results. The API always returns the total_hit_count for my query, but only allows me to retrieve results in batches of, say, 100 or 1000. The API stipulates I need to specify a start_pos and end_pos for offsetting this, in order to walk through the results. Say the urllib request looks like http://someservice?query='test'&start_pos=X&end_pos=Y. If I send an initial 'taster' query with the lowest data transfer such as http://someservice?query='test'&start_pos=1&end_pos=1 in order to get back a result of, for conjecture, total_hits = 1234, I'd like to work out an approach to most cleanly request those 1234 results in batches of, again say, 100 or 1000 or... This is what I came up with so far, and it seems to work, but I'd like to know if you would have done things differently or if I could improve upon this: hits_per_page=100 # or 1000 or 200 or whatever, adjustable total_hits = 1234 # retrieved with BSoup from 'taster query' base_url = "http://someservice?query='test'" startdoc_positions = [n for n in range(1, total_hits, hits_per_page)] enddoc_positions = [startdoc_position + hits_per_page - 1 for startdoc_position in startdoc_positions] for start, end in zip(startdoc_positions, enddoc_positions): if end > total_hits: end = total_hits print "url to request is:\n ", print "%s&start_pos=%s&end_pos=%s" % (base_url, start, end) p.s. I'm a long time consumer of StackOverflow, especially the Python questions, but this is my first question posted. You guys are just brilliant. A: I'd suggest using positions = ((n, n + hits_per_page - 1) for n in xrange(1, total_hits, hits_per_page)) for start, end in positions: and then not worry about whether end exceeds total_hits unless the API you're using really cares whether you request something out of range; most will handle this case gracefully. P.S. Check out httplib2 as a replacement for the urllib/urllib2 combo. A: It might be interesting to use some kind of generator for this scenario to iterate over the list. def getitems(base_url, per_page=100): content = ...urllib... total_hits = get_total_hits(content) sofar = 0 while sofar < total_hits: items_from_next_query = ...urllib... for item in items_from_next_query: sofar += 1 yield item Mostly just pseudo code, but it could prove quite useful if you need to do this many times by simplifying the logic it takes to get the items, as it simply yields each item, which is quite natural in Python. It saves you quite a bit of duplicate code also.
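Fleshing out the generator idea from the second answer into something runnable, as a sketch only (the URL scheme is the one from the question; parsing of each XML chunk is left out, and fetch_batches is a name invented here):

import urllib2

def fetch_batches(base_url, total_hits, per_page=100):
    """Yield the raw response body for each page of results."""
    for start in xrange(1, total_hits + 1, per_page):
        end = min(start + per_page - 1, total_hits)
        url = '%s&start_pos=%d&end_pos=%d' % (base_url, start, end)
        yield urllib2.urlopen(url).read()

for xml_chunk in fetch_batches(base_url, total_hits):
    pass  # parse each chunk with BeautifulSoup/ElementTree here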
Paginating requests to an API
I'm consuming (via urllib/urllib2) an API that returns XML results. The API always returns the total_hit_count for my query, but only allows me to retrieve results in batches of, say, 100 or 1000. The API stipulates I need to specify a start_pos and end_pos for offsetting this, in order to walk through the results. Say the urllib request looks like http://someservice?query='test'&start_pos=X&end_pos=Y. If I send an initial 'taster' query with the lowest data transfer such as http://someservice?query='test'&start_pos=1&end_pos=1 in order to get back a result of, for conjecture, total_hits = 1234, I'd like to work out an approach to most cleanly request those 1234 results in batches of, again say, 100 or 1000 or... This is what I came up with so far, and it seems to work, but I'd like to know if you would have done things differently or if I could improve upon this: hits_per_page=100 # or 1000 or 200 or whatever, adjustable total_hits = 1234 # retrieved with BSoup from 'taster query' base_url = "http://someservice?query='test'" startdoc_positions = [n for n in range(1, total_hits, hits_per_page)] enddoc_positions = [startdoc_position + hits_per_page - 1 for startdoc_position in startdoc_positions] for start, end in zip(startdoc_positions, enddoc_positions): if end > total_hits: end = total_hits print "url to request is:\n ", print "%s&start_pos=%s&end_pos=%s" % (base_url, start, end) p.s. I'm a long time consumer of StackOverflow, especially the Python questions, but this is my first question posted. You guys are just brilliant.
[ "I'd suggest using\npositions = ((n, n + hits_per_page - 1) for n in xrange(1, total_hits, hits_per_page))\nfor start, end in positions:\n\nand then not worry about whether end exceeds hits_per_page unless the API you're using really cares whether you request something out of range; most will handle this case gracefully.\nP.S. Check out httplib2 as a replacement for the urllib/urllib2 combo.\n", "It might be interesting to use some kind of generator for this scenario to iterate over the list.\ndef getitems(base_url, per_page=100):\n content = ...urllib...\n total_hits = get_total_hits(content)\n sofar = 0\n while sofar < total_hits:\n items_from_next_query = ...urllib...\n for item in items_from_next_query:\n sofar += 1\n yield item\n\nMostly just pseudo code, but it could prove quite useful if you need to do this many times by simplifying the logic it takes to get the items as it only returns a list which is quite natural in python.\nSave you quite a bit of duplicate code also.\n" ]
[ 1, 1 ]
[]
[]
[ "api", "list_comprehension", "python" ]
stackoverflow_0002769857_api_list_comprehension_python.txt
Q: heterogeneous comparisons in python3 I'm 99+% still using python 2.x, but I'm trying to think ahead to the day when I switch. So, I know that using comparison operators (less/greater than, or equal to) on heterogeneous types that don't have a natural ordering is no longer supported in python3.x -- instead of some consistent (but arbitrary) result, we raise TypeError. I see the logic in that, and even mostly think it's a good thing. Consistency and refusing to guess is a virtue. But what if you essentially want the python2.x behavior? What's the best way to go about getting it? For fun (more or less) I was recently implementing a Skip List, a data structure that keeps its elements sorted. I wanted to use heterogeneous types as keys in the data structure, and I've got to compare keys to one another as I walk the data structure. The python2.x way of comparing makes this really convenient -- you get an understandable ordering amongst elements that have a natural ordering, and some ordering amongst those that don't. Consistently using a sort/comparison key like (type(obj).__name__, obj) has the disadvantage of not interleaving the objects that do have a natural ordering; you get all your floats clustered together before your ints, and your str-derived class separates from your strs. I came up with the following: import operator def hetero_sort_key(obj): cls = type(obj) return (cls.__name__+'_'+cls.__module__, obj) def make_hetero_comparitor(fn): def comparator(a, b): try: return fn(a, b) except TypeError: return fn(hetero_sort_key(a), hetero_sort_key(b)) return comparator hetero_lt = make_hetero_comparitor(operator.lt) hetero_gt = make_hetero_comparitor(operator.gt) hetero_le = make_hetero_comparitor(operator.le) hetero_ge = make_hetero_comparitor(operator.ge) Is there a better way? I suspect one could construct a corner case that this would screw up -- a situation where you can compare type A to B and type A to C, but where B and C raise TypeError when compared, and you can end up with something illogical like a > b, a < c, and yet b > c (because of how their class names sorted). I don't know how likely it is that you'd run into this in practice. A: Rather than "fixing" something the python 3.x community "fixed" in the global scope, you may try the approach of enabling your objects/types to sort properly. I'm not as familiar with python 3.x, but I'm sure there still is a __cmp__ method that you could override in a sub-class and fix so that comparisons would work. You could use that in combination with id() to restore the old broken behavior (which just sorted based on position in memory, id(), if my memory serves me correctly).
heterogeneous comparisons in python3
I'm 99+% still using python 2.x, but I'm trying to think ahead to the day when I switch. So, I know that using comparison operators (less/greater than, or equal to) on heterogeneous types that don't have a natural ordering is no longer supported in python3.x -- instead of some consistent (but arbitrary) result we raise TypeError instead. I see the logic in that, and even mostly think it's a good thing. Consistency and refusing to guess is a virtue. But what if you essentially want the python2.x behavior? What's the best way to go about getting it? For fun (more or less) I was recently implementing a Skip List, a data structure that keeps its elements sorted. I wanted to use heterogeneous types as keys in the data structure, and I've got to compare keys to one another as I walk the data structure. The python2.x way of comparing makes this really convenient -- you get an understandable ordering amongst elements that have a natural ordering, and some ordering amongst those that don't. Consistently using a sort/comparison key like (type(obj).__name__, obj) has the disadvantage of not interleaving the objects that do have a natural ordering; you get all your floats clustered together before your ints, and your str-derived class separates from your strs. I came up with the following: import operator def hetero_sort_key(obj): cls = type(obj) return (cls.__name__+'_'+cls.__module__, obj) def make_hetero_comparitor(fn): def comparator(a, b): try: return fn(a, b) except TypeError: return fn(hetero_sort_key(a), hetero_sort_key(b)) return comparator hetero_lt = make_hetero_comparitor(operator.lt) hetero_gt = make_hetero_comparitor(operator.gt) hetero_le = make_hetero_comparitor(operator.le) hetero_ge = make_hetero_comparitor(operator.ge) Is there a better way? I suspect one could construct a corner case that this would screw up -- a situation where you can compare type A to B and type A to C, but where B and C raise TypeError when compared, and you can end up with something illogical like a > b, a < c, and yet b > c (because of how their class names sorted). I don't know how likely it is that you'd run into this in practice.
[ "Rather than \"fixing\" something the python 3.x community \"fixed\" in the global scope, you may try the approach of enabling your objects/types to sort properly. I'm not as familiar with python 3.x, but while the __cmp__ method is gone there, the rich comparison methods (__lt__, __eq__, and friends) remain, and you could override those in a sub-class so that comparisons would work. You could use that in combination with id() to restore the old broken behavior (which just sorted based on position in memory, id(), if my memory serves me correctly).\n" ]
[ 0 ]
[]
[]
[ "comparison", "python" ]
stackoverflow_0002769996_comparison_python.txt
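For completeness, a rough sketch of wiring the question's pieces into Python 3 sorting; hetero_cmp is a new helper written just for this example, it assumes hetero_sort_key from the question is in scope, and it inherits the same corner cases the question worries about:

import functools

def hetero_cmp(a, b):
    try:
        if a < b:
            return -1
        if a > b:
            return 1
        return 0
    except TypeError:
        ka, kb = hetero_sort_key(a), hetero_sort_key(b)
        return -1 if ka < kb else (1 if ka > kb else 0)

mixed = [3, 'apple', 1.5, 'banana', 2]
print(sorted(mixed, key=functools.cmp_to_key(hetero_cmp)))
# [1.5, 2, 3, 'apple', 'banana'] -- numbers interleave, strings stay grouped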
Q: Setting timeouts to parse webpages using python lxml I am using the Python lxml library to parse html pages: import lxml.html # this might run indefinitely page = lxml.html.parse('http://stackoverflow.com/') Is there any way to set a timeout for parsing? A: It looks to be using urllib.urlopen as the opener, but the easiest way to do this would be just to modify the default timeout for the socket handler. import socket timeout = 10 socket.setdefaulttimeout(timeout) Of course this is a quick-and-dirty solution.
Setting timeouts to parse webpages using python lxml
I am using the Python lxml library to parse html pages: import lxml.html # this might run indefinitely page = lxml.html.parse('http://stackoverflow.com/') Is there any way to set a timeout for parsing?
[ "It looks to be using urllib.urlopen as the opener, but the easiest way to do this would be just to modify the default timeout for the socket handler.\nimport socket\ntimeout = 10\nsocket.setdefaulttimeout(timeout)\n\nOf course this is a quick-and-dirty solution.\n" ]
[ 1 ]
[]
[]
[ "lxml", "python" ]
stackoverflow_0002770320_lxml_python.txt
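An alternative sketch that scopes the timeout to a single request instead of changing the process-wide socket default (urllib2.urlopen grew a timeout parameter in Python 2.6, and lxml's parse() accepts file-like objects):

import urllib2
import lxml.html

handle = urllib2.urlopen('http://stackoverflow.com/', timeout=10)
page = lxml.html.parse(handle)

Note that the timeout applies to individual socket operations, not to the total time spent fetching and parsing.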
Q: Correctly parsing an ATOM feed I currently have set up a Python script that uses feedparser to read a feed and parse it. However, I have recently come across a problem with the date parsing. The feed I am reading contains <modified>2010-05-05T24:17:54Z</modified> - which comes up in Python as a datetime object - 2010-05-06 00:17:54. Notice the discrepancy: the feed entry was modified on the 5th of May, while Python reads it as the 6th. So the question is why this is happening. Is the ATOM feed (that is, the one who created the feed) wrong by putting the time as 24:17:54, or is my Python script wrong in the way it treats it? And can I solve this? A: There are some interesting special cases in the rfc here (https://www.rfc-editor.org/rfc/rfc3339), however, typically it's for the 00:00:60 vs 00:00:59 to allow for leap seconds. It may be, though, that that is legal. My guess is that it's doing the "right thing". In all honesty, date/time things get really messy due to things like DST and local timezones. If it's 24:17:54, that might be the right thing after all. A: I think today at 24:17 is intelligently parsed as tomorrow at 00:17.... I'm thinking you are well handling the producer's bug.
Correctly parsing an ATOM feed
I currently have set up a Python script that uses feedparser to read a feed and parse it. However, I have recently come across a problem with the date parsing. The feed I am reading contains <modified>2010-05-05T24:17:54Z</modified> - which comes up in Python as a datetime object - 2010-05-06 00:17:54. Notice the discrepancy: the feed entry was modified on the 5th of May, while Python reads it as the 6th. So the question is why this is happening. Is the ATOM feed (that is, the one who created the feed) wrong by putting the time as 24:17:54, or is my Python script wrong in the way it treats it? And can I solve this?
[ "There are some interesting special cases in the rfc here (https://www.rfc-editor.org/rfc/rfc3339), however, typically it's for the 00:00:60 vs 00:00:59 to allow for leap seconds. It may be, though, that that is legal. My guess is that it's doing the \"right thing\". In all honesty, date/time things get really messy due to things like DST and local timezones. If it's 24:17:54, that might be the right thing after all.\n", "I think today at 24:17 is intelligently parsed as tomorrow at 00:17.... I'm thinking you are well handling the producer's bug.\n" ]
[ 1, 0 ]
[]
[]
[ "atom_feed", "feedparser", "python" ]
stackoverflow_0002769955_atom_feed_feedparser_python.txt
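If the parser in use chokes on the producer's timestamp, a small normalization step before parsing is one defensive option. This sketch assumes the exact 'YYYY-MM-DDTHH:MM:SSZ' shape from the question (strictly speaking, RFC 3339 only allows hours 00-23, and ISO 8601 only allows hour 24 in 24:00:00 for end of day):

from datetime import datetime, timedelta

def parse_modified(stamp):
    # stamp like '2010-05-05T24:17:54Z'
    date_part, time_part = stamp.rstrip('Z').split('T')
    rolled = timedelta(0)
    if time_part.startswith('24'):
        rolled = timedelta(days=1)
        time_part = '00' + time_part[2:]
    parsed = datetime.strptime(date_part + 'T' + time_part, '%Y-%m-%dT%H:%M:%S')
    return parsed + rolled

print parse_modified('2010-05-05T24:17:54Z')   # 2010-05-06 00:17:54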
Q: Adding variably named fields to Python classes I have a python class, and I need to add an arbitrary number of arbitrarily long lists to it. The names of the lists I need to add are also arbitrary. For example, in PHP, I would do this: class MyClass { } $c = new MyClass(); $n = "hello"; $c->$n = array(1, 2, 3); How do I do this in Python? I'm also wondering if this is a reasonable thing to do. The alternative would be to create a dict of lists in the class, but since the number and size of the lists is arbitrary, I was worried there might be a performance hit from this. If you are wondering what I'm trying to accomplish, I'm writing a super-lightweight script interpreter. The interpreter walks through a human-written list and creates some kind of byte-code. The byte-code of each function will be stored as a list named after the function in an "app" class. I'm curious to hear any other suggestions on how to do this as well. A: Use setattr. >>> class A(object): ... pass ... >>> a = A() >>> f = 'field' >>> setattr(a, f, 42) >>> a.field 42 A: I would write it the simplest way for now, and profile next, and then look to optimize specific elements. Let's remember the KISS principle.
Adding variably named fields to Python classes
I have a python class, and I need to add an arbitrary number of arbitrarily long lists to it. The names of the lists I need to add are also arbitrary. For example, in PHP, I would do this: class MyClass { } $c = new MyClass(); $n = "hello"; $c->$n = array(1, 2, 3); How do I do this in Python? I'm also wondering if this is a reasonable thing to do. The alternative would be to create a dict of lists in the class, but since the number and size of the lists is arbitrary, I was worried there might be a performance hit from this. If you are wondering what I'm trying to accomplish, I'm writing a super-lightweight script interpreter. The interpreter walks through a human-written list and creates some kind of byte-code. The byte-code of each function will be stored as a list named after the function in an "app" class. I'm curious to hear any other suggestions on how to do this as well.
[ "Use setattr.\n>>> class A(object):\n... pass\n... \n>>> a = A()\n>>> f = 'field'\n>>> setattr(a, f, 42)\n>>> a.field\n42\n\n", "I would write it the simplest way for now, and profile next, and then look to optimize specific elements. \nLet's remember the KISS principle.\n" ]
[ 5, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002770629_python.txt
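A small sketch of the byte-code-table idea from the question, using setattr/getattr (the function names and byte-code lists here are made up). Note that plain attributes land in the instance's __dict__ anyway, so the dict-of-lists alternative has essentially the same cost:

class App(object):
    pass

app = App()
for func_name, bytecode in [('main', [1, 2, 3]), ('helper', [4, 5])]:
    setattr(app, func_name, bytecode)

print getattr(app, 'main')   # [1, 2, 3]
print app.helper             # [4, 5]
print app.__dict__           # {'main': [1, 2, 3], 'helper': [4, 5]} (order may vary)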
Q: python regex of a date in some text, enclosed by two keywords This is Part 2 of this question and thanks very much for David's answer. What if I need to extract dates which are bounded by two keywords? Example: text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End" Case 1 bounding keywords: "One" and "Three" Result expected: ['09 Jun 2011', '10 Dec 2012'] Case 2 bounding keywords: "Two" and "End" Result expected: ['10 Dec 2012', '15 Jan 2015'] Thanks! A: You can do this with two regular expressions. One regex gets the text between the two keywords. The other regex extracts the dates. match = re.search(r"\bOne\b(.*?)\bThree\b", text, re.DOTALL) if match: betweenwords = match.group(1) dates = re.findall(r'\d\d (?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \d{4}', betweenwords) A: Do you really need to worry about the keywords? Can you ensure that the keywords will not change? If not, the exact same solution from the previous question can solve this: >>> import re >>> text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End" >>> match = re.findall(r'\d\d\s(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s\d{4}', text) >>> match ['09 Jun 2011', '10 Dec 2012', '15 Jan 2015'] If you really only need two of the dates, you could just use list slicing: >>> match[:2] ['09 Jun 2011', '10 Dec 2012'] >>> match[1:] ['10 Dec 2012', '15 Jan 2015']
python regex of a date in some text, enclosed by two keywords
This is Part 2 of this question and thanks very much for David's answer. What if I need to extract dates which are bounded by two keywords? Example: text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End" Case 1 bounding keywords: "One" and "Three" Result expected: ['09 Jun 2011', '10 Dec 2012'] Case 2 bounding keywords: "Two" and "End" Result expected: ['10 Dec 2012', '15 Jan 2015'] Thanks!
[ "You can do this with two regular expressions. One regex gets the text between the two keywords. The other regex extracts the dates.\nmatch = re.search(r\"\\bOne\\b(.*?)\\bThree\\b\", text, re.DOTALL)\nif match:\n betweenwords = match.group(1)\n dates = re.findall(r'\\d\\d (?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \\d{4}', betweenwords) \n\n", "Do you really need to worry about the keywords? Can you ensure that the keywords will not change? \nIf not, the exact same solution from the previous question can solve this:\n>>> import re\n>>> text = \"One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End\"\n>>> match = re.findall(r'\\d\\d\\s(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\s\\d{4}', text)\n>>> match\n['09 Jun 2011', '10 Dec 2012', '15 Jan 2015']\n\nIf you really only need two of the dates, you could just use list slicing:\n>>> match[:2]\n['09 Jun 2011', '10 Dec 2012']\n>>> match[1:]\n['10 Dec 2012', '15 Jan 2015']\n\n" ]
[ 3, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0002770260_python_regex.txt
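A parameterized sketch combining the two answers, so the bounding keywords are not hard-coded (re.escape guards against keywords that contain regex metacharacters):

import re

DATE_RE = r'\d\d (?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \d{4}'

def dates_between(text, start_kw, end_kw):
    pattern = r'\b%s\b(.*?)\b%s\b' % (re.escape(start_kw), re.escape(end_kw))
    m = re.search(pattern, text, re.DOTALL)
    return re.findall(DATE_RE, m.group(1)) if m else []

text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End"
print dates_between(text, 'One', 'Three')   # ['09 Jun 2011', '10 Dec 2012']
print dates_between(text, 'Two', 'End')     # ['10 Dec 2012', '15 Jan 2015']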
Q: entity set expansion python Do you know of any existing implementation in any language (preferably python) of any entity set expansion algorithms, such as the one from Google Sets? (http://labs.google.com/sets) I couldn't find any library implementing such algorithms and I'd like to play with some of those to see how they would perform on some specific task I would like to implement. Any help is welcome! Thanks a lot for your help, Regards, Nicolas. A: I'm not aware of any ready-to-use open source libraries that implement the sort of clustering on demand of named entities provided by Google Sets. However, there are a few academic papers that describe in detail how to build similar systems, e.g.: Language-Independent Set Expansion of Named Entities using the Web Wang and Cohen, in EMNLP 2009 Online Demo Bayesian Sets Ghahramani and Heller, in NIPS, 2005 Below is a brief summary of Wang and Cohen's method. If you do end up implementing something like this yourself, it might be good to start with their method. I suspect most people will find it more intuitive than Ghahramani and Heller's formulation. Wang and Cohen 2009 Wang and Cohen start by describing a method for automatically constructing extraction patterns that allow them to find lists of named entities in any sort of structured document. The method looks at the prefixes and suffixes bracketing known occurrences of named entities. These prefixes and suffixes are then used to identify other named entities within the same document. To complete a cluster of entities, they build a graph consisting of the interconnections between named entities, the extraction patterns associated with them, and the documents. Using this graph and starting at the nodes for the cluster's seed entities (i.e., the initial set of entities in the set to be completed), they perform numerous random walks on the graph up to 10 steps in length. They count how many times they reach the nodes corresponding to non-seed entities. Non-seed entities with high counts can then be used to complete the cluster.
entity set expansion python
Do you know of any existing implementation in any language (preferably python) of any entity set expansion algorithms, such as the one from Google Sets? (http://labs.google.com/sets) I couldn't find any library implementing such algorithms and I'd like to play with some of those to see how they would perform on some specific task I would like to implement. Any help is welcome! Thanks a lot for your help, Regards, Nicolas.
[ "I'm not aware of any ready-to-use open source libraries that implement the sort of clustering on demand of named entities provided by Google Sets. However, there are a few academic papers that describe in detail how to build similar systems, e.g.:\n\nLanguage-Independent Set Expansion of Named Entities using the Web Wang and Cohen, in EMNLP 2009 \nOnline Demo \nBayesian Sets Ghahramani and Heller, in NIPS, 2005\n\nBelow is a brief summary of Wang and Cohen's method. If you do end up implementing something like this yourself, it might be good to start with their method. I suspect most people will find it more intuitive than Ghahramani and Heller's formulation.\nWang and Cohen 2009\nWang and Cohen start by describing a method for automatically constructing extraction patterns that allow them to find lists of named entities in any sort of structured document. The method looks at the prefixes and suffixes bracketing known occurrences of named entities. These prefixes and suffixes are then used to identify other named entities within the same document. \nTo complete a cluster of entities, they build a graph consisting of the interconnections between named entities, the extraction patterns associated with them, and the documents. Using this graph and starting at the nodes for the cluster's seed entities (i.e., the initial set of entities in the set to be completed), they perform numerous random walks on the graph up to 10 steps in length. They count how many times they reach the nodes corresponding to non-seed entities. Non-seed entities with high counts can then be used to complete the cluster.\n" ]
[ 2 ]
[]
[]
[ "information_retrieval", "java", "nlp", "python" ]
stackoverflow_0002761678_information_retrieval_java_nlp_python.txt
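To make the random-walk step less abstract, here is a toy sketch; the graph, entities, and patterns are all invented for illustration, and a real system would weight edges and work with much larger graphs:

import random
from collections import defaultdict

graph = {
    'cat': ['pattern1'], 'dog': ['pattern1', 'pattern2'],
    'fish': ['pattern2'], 'car': ['pattern3'],
    'pattern1': ['cat', 'dog'], 'pattern2': ['dog', 'fish'],
    'pattern3': ['car'],
}
seeds = ['cat']
counts = defaultdict(int)
for _ in xrange(1000):
    node = random.choice(seeds)
    for _ in xrange(10):              # walks of up to 10 steps
        node = random.choice(graph[node])
        if not node.startswith('pattern') and node not in seeds:
            counts[node] += 1

print sorted(counts.items(), key=lambda kv: -kv[1])
# 'dog' scores highest; 'car' is unreachable from the seed and never appears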
Q: Django urls on json request When making a django request through json as, var info=id + "##" +name+"##" $.post("/supervise/activity/" + info ,[] , function Handler(data,arr) { } In urls.py (r'^activity/(?P<info>\d+)/$', 'activity'), In views, def activity(request,info): print info The request does not go through. info is a string. How can this be resolved? Thanks.. A: ^activity/(?P<info>\d+)/$ will only match something like 'activity/42/' and the number (in this case 42) will be info. If you appended '##name##' to the url, it will not be recognized.
Django urls on json request
When making a django request through json as, var info=id + "##" +name+"##" $.post("/supervise/activity/" + info ,[] , function Handler(data,arr) { } In urls.py (r'^activity/(?P<info>\d+)/$', 'activity'), In views, def activity(request,info): print info The request does not go through. info is a string. How can this be resolved? Thanks..
[ "^activity/(?P<info>\\d+)/$ will only match something like 'activity/42/' and the number (in this case 42) will be info.\nIf you appended '##name##' to the url, it will not be recognized.\n" ]
[ 4 ]
[]
[]
[ "django", "django_urls", "django_views", "python" ]
stackoverflow_0002771267_django_django_urls_django_views_python.txt
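As a sketch of one way around this (keeping the names from the question; the character class for name is a guess and should be tightened to whatever names may contain), the route can capture the two pieces separately:

# urls.py
(r'^activity/(?P<info>\d+)/(?P<name>[\w-]+)/$', 'activity'),

# views.py
def activity(request, info, name):
    print info, name

The JavaScript side would then build the URL as id + "/" + name instead of joining with "##". Also note that '#' cannot appear in the path at all: the browser treats everything after it as a URL fragment and never sends it to the server, so a name containing '#' is better sent as POST data.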
Q: boost::python string-convertible properties I have a C++ class, which has the following methods: class Bar { ... const Foo& getFoo() const; void setFoo(const Foo&); }; where class Foo is convertible to std::string (it has an implicit constructor from std::string and an std::string cast operator). I define a Boost.Python wrapper class, which, among other things, defines a property based on the previous two functions: class_<Bar>("Bar") ... .add_property( "foo", make_function( &Bar::getFoo, return_value_policy<return_by_value>()), &Bar::setFoo) ... I also mark the class as convertible to/from std::string. implicitly_convertible<std::string, Foo>(); implicitly_convertible<Foo, std::string>(); But at runtime I still get a conversion error trying to access this property: TypeError: No to_python (by-value) converter found for C++ type: Foo How to achieve the conversion without too much boilerplate of wrapper functions? (I already have all the conversion functions in class Foo, so duplication is undesirable.) A: I ended up giving up and implementing something similar to custom string class conversion example in Boost.Python FAQ, which is a bit verbose, but works as advertised.
boost::python string-convertible properties
I have a C++ class, which has the following methods: class Bar { ... const Foo& getFoo() const; void setFoo(const Foo&); }; where class Foo is convertible to std::string (it has an implicit constructor from std::string and an std::string cast operator). I define a Boost.Python wrapper class, which, among other things, defines a property based on the previous two functions: class_<Bar>("Bar") ... .add_property( "foo", make_function( &Bar::getFoo, return_value_policy<return_by_value>()), &Bar::setFoo) ... I also mark the class as convertible to/from std::string. implicitly_convertible<std::string, Foo>(); implicitly_convertible<Foo, std::string>(); But at runtime I still get a conversion error trying to access this property: TypeError: No to_python (by-value) converter found for C++ type: Foo How to achieve the conversion without too much boilerplate of wrapper functions? (I already have all the conversion functions in class Foo, so duplication is undesirable.)
[ "I ended up giving up and implementing something similar to custom string class conversion example in Boost.Python FAQ, which is a bit verbose, but works as advertised.\n" ]
[ 2 ]
[]
[]
[ "boost", "boost_python", "c++", "python" ]
stackoverflow_0002711432_boost_boost_python_c++_python.txt
Q: What happens when I instantiate class in Python? Could you clarify some ideas behind Python classes and class instances? Consider this: class A(): name = 'A' a = A() a.name = 'B' # point 1 (instance of class A is used here) print a.name print A.name prints: B A if instead in point 1 I use class name, output is different: A.name = 'B' # point 1 (updated, class A itself is used here) prints: B B Even if classes in Python were some kind of prototype for class instances, I'd expect already created instances to remain intact, i.e. output like this: A B Can you explain what is actually going on? A: First of all, the right way in Python to create fields of an instance (rather than class fields) is using the __init__ method. I trust that you know that already. Python does not limit you in assigning values to non-declared fields of an object. For example, consider the following code: class Empty: pass e = Empty() e.f = 5 print e.f # shows 5 So what's going on in your code is: You create the class A with a static field name assigned with A. You create an instance of A, a. You create a new field for the object a (but not for other instances of A) and assign B to it You print the value of a.name, which is unique to the object a. You print the value of the static field A.name, which belongs to the class A: You also should look at these SO threads for further explanations: Static class variables in Python In Python how can I access "static" class variables within class methods And an official tutorial: http://docs.python.org/tutorial/classes.html#SECTION0011320000000000000000 Keep in mind that the assignment "=" operator in python behaves differently than C++ or Java: http://docs.python.org/reference/simple_stmts.html#assignment-statements A: Perhaps this example may help make things more clear. Recall that Python names are not storage (as variables are in other languages) but references to storage. You can find what a name refers to with id(name). The identity operator x is y tells whether two names point at the same object. >>> class A(object): ... name = 'A' ... >>> x = A() >>> A.name is x.name True >>> x.name = 'fred' # x.name was bound to a new object (A.name wasn't) >>> A.name is x.name False >>> x = A() # start over >>> A.name is x.name True # so far so good >>> A.name = 'fred' >>> A.name is x.name True # this is somewhat counter-intuitive
What happens when I instantiate class in Python?
Could you clarify some ideas behind Python classes and class instances? Consider this: class A(): name = 'A' a = A() a.name = 'B' # point 1 (instance of class A is used here) print a.name print A.name prints: B A if instead in point 1 I use class name, output is different: A.name = 'B' # point 1 (updated, class A itself is used here) prints: B B Even if classes in Python were some kind of prototype for class instances, I'd expect already created instances to remain intact, i.e. output like this: A B Can you explain what is actually going on?
[ "First of all, the right way in Python to create fields of an instance (rather than class fields) is using the __init__ method. I trust that you know that already.\nPython does not limit you in assigning values to non-declared fields of an object. For example, consider the following code:\nclass Empty: pass\ne = Empty()\ne.f = 5\nprint e.f # shows 5\n\nSo what's going on in your code is:\n\nYou create the class A with a static field name assigned with A.\nYou create an instance of A, a.\nYou create a new field for the object a (but not for other instances of A) and assign B to it\nYou print the value of a.name, which is unique to the object a.\nYou print the value of the static field A.name, which belongs to the class\n\n", "You also should look at these SO threads for further explanations:\nStatic class variables in Python\nIn Python how can I access \"static\" class variables within class methods\nAnd an official tutorial:\nhttp://docs.python.org/tutorial/classes.html#SECTION0011320000000000000000\nKeep in mind that the assignment \"=\" operator in python behaves differently than C++ or Java:\nhttp://docs.python.org/reference/simple_stmts.html#assignment-statements\n", "Perhaps this example may help make things more clear. Recall that Python names are not storage (as variables are in other languages) but references to storage. You can find what a name refers to with id(name). The identity operator x is y tells whether two names point at the same object.\n>>> class A(object):\n... name = 'A'\n... \n>>> x = A()\n>>> A.name is x.name\nTrue\n>>> x.name = 'fred' # x.name was bound to a new object (A.name wasn't)\n>>> A.name is x.name\nFalse\n>>> x = A() # start over\n>>> A.name is x.name\nTrue # so far so good\n>>> A.name = 'fred'\n>>> A.name is x.name \nTrue # this is somewhat counter-intuitive\n\n" ]
[ 4, 1, 1 ]
[]
[]
[ "class", "python", "variables" ]
stackoverflow_0002771078_class_python_variables.txt
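A short sketch of the usual pattern when each instance should own its data, which also shows the shadowing behavior discussed above:

class A(object):
    name = 'A'               # class-level, shared by all instances
    def __init__(self):
        self.items = []      # instance-level; a list on the class would be shared

a, b = A(), A()
a.items.append(1)
print a.items, b.items       # [1] [] -- instances do not share 'items'
a.name = 'B'                 # creates an instance attribute shadowing A.name
print a.name, b.name, A.name # B A A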
Q: Python profiler and CPU seconds Hey, I'm totally behind on this topic. Yesterday I was doing profiling using the Python profiler module for some script I'm working on, and the unit for time spent was a 'CPU second'. Can anyone remind me of its definition? For example, for some profiling I got: 200.750 CPU seconds. What is that supposed to mean? In another case, for a time-consuming process, I got: -347.977 CPU seconds, a negative number! Is there any way I can convert that time to calendar time? Cheers, A: Roughly speaking, a CPU time of, say, 200.75 seconds means that if only one processor worked on the task and that processor were working on it all the time, it would have taken 200.75 seconds. CPU time can be contrasted with wall clock time, which means the actual time elapsed from the start of the task to the end of the task on a clock hanging on the wall of your room. The two are not interchangeable and there is no way to convert one to the other unless you know exactly how your task was scheduled and distributed among the CPU cores of your system. The CPU time can be less than the wall clock time if the task was distributed among multiple CPU cores, and it can be more if the system was under a heavy load and your task was interrupted temporarily by other tasks. A: A CPU second is one second that your process is actually scheduled on the CPU. It can be significantly smaller than the elapsed real time in case of a busy system, and it can be higher in case of your process running on multiple cores (if the count is per-process, not per-thread). It should never be negative, though...
Python profiler and CPU seconds
Hey, I'm totally behind on this topic. Yesterday I was doing profiling using the Python profiler module for some script I'm working on, and the unit for time spent was a 'CPU second'. Can anyone remind me of its definition? For example, for some profiling I got: 200.750 CPU seconds. What is that supposed to mean? In another case, for a time-consuming process, I got: -347.977 CPU seconds, a negative number! Is there any way I can convert that time to calendar time? Cheers,
[ "Roughly speaking, a CPU time of, say, 200.75 seconds means that if only one processor worked on the task and that processor were working on it all the time, it would have taken 200.75 seconds. CPU time can be contrasted with wall clock time, which means the actual time elapsed from the start of the task to the end of the task on a clock hanging on the wall of your room.\nThe two are not interchangeable and there is no way to convert one to the other unless you know exactly how your task was scheduled and distributed among the CPU cores of your system. The CPU time can be less than the wall clock time if the task was distributed among multiple CPU cores, and it can be more if the system was under a heavy load and your task was interrupted temporarily by other tasks.\n", "A CPU second is one second that your process is actually scheduled on the CPU. It can be significantly smaller than the elapsed real time in case of a busy system, and it can be higher in case of your process running on multiple cores (if the count is per-process, not per-thread).\nIt should never be negative, though...\n" ]
[ 8, 1 ]
[]
[]
[ "profiling", "python" ]
stackoverflow_0002771561_profiling_python.txt
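A quick way to see the two clocks side by side: os.times() reports CPU seconds for the current process, while time.time() is the wall clock. (The negative figure from the profiler is almost certainly a timer artifact, not something convertible to calendar time.)

import os
import time

wall0 = time.time()
user0, sys0 = os.times()[:2]
total = sum(i * i for i in xrange(10 ** 6))   # some CPU-bound work
user1, sys1 = os.times()[:2]
print 'wall: %.3f s' % (time.time() - wall0)
print 'cpu : %.3f s' % ((user1 - user0) + (sys1 - sys0))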
Q: querying for timestamp field in django In my views I have the date in the following format s_date=20090106 and e_date=20100106 The model is defined as class Activity(models.Model): timestamp = models.DateTimeField(auto_now_add=True) How can I query for the timestamp field with the above info? Activity.objects.filter(timestamp>=s_date and timestamp<=e_date) Thanks..... A: You have to convert your date to an instance of datetime.datetime class. Easiest way to do it for your case is: import datetime # # This creates a new instance of `datetime.datetime` from a string according to # the pattern given as the second argument. # start = datetime.datetime.strptime(s_date, '%Y%m%d') end = datetime.datetime.strptime(e_date, '%Y%m%d') # And now the query you want. Mind that you cannot use 'and' keyword # inside .filter() function. Fortunately .filter() automatically ANDs # all criteria you provide. Activity.objects.filter(timestamp__gte=start, timestamp__lte=end) Enjoy! A: Here's one way: s_date = datetime.strptime('20090106', '%Y%m%d') e_date = datetime.strptime('20100106', '%Y%m%d') Activity.objects.filter(timestamp__gte=s_date, timestamp__lte=e_date) Note that first you need to use strptime to convert your string date to a python datetime object. Second, you need to use the gte and lte lookups to form a django query.
querying for timestamp field in django
In my views I have the date in the following format s_date=20090106 and e_date=20100106 The model is defined as class Activity(models.Model): timestamp = models.DateTimeField(auto_now_add=True) How can I query for the timestamp field with the above info? Activity.objects.filter(timestamp>=s_date and timestamp<=e_date) Thanks.....
[ "You have to convert your date to an instance of datetime.datetime class. Easiest way to do it for your case is:\nimport datetime\n\n#\n# This creates a new instance of `datetime.datetime` from a string according to\n# the pattern given as the second argument.\n#\nstart = datetime.datetime.strptime(s_date, '%Y%m%d')\nend = datetime.datetime.strptime(e_date, '%Y%m%d')\n\n# And now the query you want. Mind that you cannot use 'and' keyword\n# inside .filter() function. Fortunately .filter() automatically ANDs\n# all criteria you provide.\nActivity.objects.filter(timestamp__gte=start, timestamp__lte=end)\n\nEnjoy!\n", "Here's one way:\ns_date = datetime.strptime('20090106', '%Y%m%d')\ne_date = datetime.strptime('20100106', '%Y%m%d')\nActivity.objects.filter(timestamp__gte=s_date, timestamp__lte=e_date)\n\nNote that first you need to use strptime to convert your string date to a python datetime object. Second, you need to use the gte and lte lookups to form a django query.\n" ]
[ 6, 2 ]
[]
[]
[ "django", "django_models", "django_views", "python" ]
stackoverflow_0002771739_django_django_models_django_views_python.txt
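The same filter can also be written with Django's __range lookup, which is inclusive on both ends (reusing s_date and e_date from the question):

import datetime

start = datetime.datetime.strptime(s_date, '%Y%m%d')
end = datetime.datetime.strptime(e_date, '%Y%m%d')
Activity.objects.filter(timestamp__range=(start, end))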
Q: Import Error when using templatetags in Django Well, when I'm trying to use 'inclusion' in Django, I ran into some confusing problems that I can't solve by myself. Here is the structure of my project. MyProject--- App1--- __init__.py models.py test.py urls.py views.py App2--- ... template--- App1--- some htmls App2--- ... templatetags--- __init__.py inclusion_cld_tags.py manage.py urls.py __init__.py settings.py I have registered the templatetags folder in settings.py (both in INSTALLED_APPS & TEMPLATE_DIRS). But when I want to use {% load inclusion_test %} in my html, it raises an exception like this: 'inclusion_cld_tags' is not a valid tag library: Could not load template library from django.templatetags.inclusion_cld_tags, No module named inclusion_cld_tags I think there is nothing wrong with my import work, so what can I do about this? Thanks for the help! My django version: 1.0+ My Python version: 2.6.4 A: The templatetags folder should live in the app folder: App1--- __init__.py models.py test.py urls.py views.py templatetags--- __init__.py inclusion_test.py ... Did you register the tag? Example: register = template.Library() @register.inclusion_tag('platform/templatetags/pagination_links.html') def pagination_links(page, per_page, link):
Import Error when using templatetags in Django
Well, when I'm trying to use 'inclusion' in Django, I ran into some confusing problems that I can't solve by myself. Here is the structure of my project. MyProject--- App1--- __init__.py models.py test.py urls.py views.py App2--- ... template--- App1--- some htmls App2--- ... templatetags--- __init__.py inclusion_cld_tags.py manage.py urls.py __init__.py settings.py I have registered the templatetags folder in settings.py (both in INSTALLED_APPS & TEMPLATE_DIRS). But when I want to use {% load inclusion_test %} in my html, it raises an exception like this: 'inclusion_cld_tags' is not a valid tag library: Could not load template library from django.templatetags.inclusion_cld_tags, No module named inclusion_cld_tags I think there is nothing wrong with my import work, so what can I do about this? Thanks for the help! My django version: 1.0+ My Python version: 2.6.4
[ "\nThe templatetags folder should live in the app folder:\n App1---\n __init__.py\n models.py\n test.py\n urls.py\n views.py\n templatetags---\n __init__.py\n inclusion_test.py\n ...\n\nDid you register the tag?\n\nExample:\nregister = template.Library() \n@register.inclusion_tag('platform/templatetags/pagination_links.html')\ndef pagination_links(page, per_page, link):\n\n" ]
[ 2 ]
[]
[]
[ "django", "python", "templatetags" ]
stackoverflow_0002771850_django_python_templatetags.txt
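For reference, a minimal hypothetical tag module matching the layout in the answer; the file would be App1/templatetags/inclusion_cld_tags.py, and the template path and tag name here are made up:

from django import template

register = template.Library()

@register.inclusion_tag('App1/latest_items.html')
def show_latest(items):
    return {'items': items}

A template would then use {% load inclusion_cld_tags %} followed by {% show_latest items %}. Remember to restart the development server after adding a new templatetags module, since tag libraries are only discovered at startup.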
Q: Python subprocess block I'm having a problem with the module subprocess; I'm running a script from Python: subprocess.Popen('./run_pythia.sh', shell=True).communicate() and sometimes it just blocks and it doesn't finish executing the script. Before I was using .wait(), but I switched to .communicate(). Nevertheless the problem continues. First the script compiles a few files, then it runs the resulting executable, redirecting its output to a file: run_pythia.sh: #!/bin/bash #PBS -l walltime=1:00:00 ./compile.sh ./exec > resultado.txt compile.sh: O=`find ./ -name "*.o" | xargs` # LOAD cernlib2005 module load libs/cernlib/2005 # Compile and Link FC=g77 CERNLIBPATH="-L/software/local/cernlib/2005/lib -lpacklib" $FC call_pyth_mix.f analise_tt.f $O $CERNLIBPATH -o exec A: Is the script you execute, run_pythia.sh, guaranteed to finish executing? If not, you might not want to use blocking methods like communicate(). You might want to look into interacting with the .stdout, .stderr, and .stdin file handles of the returned process handle yourself (in a non-blocking manner). Also, if you still want to use communicate(), you need to have passed subprocess.PIPE objects in Popen's constructor arguments. Read the documentation on the module for more details. A: Maybe you can try to do a trace on it: import pdb; pdb.set_trace()
Python subprocess block
I'm having a problem with the module subprocess; I'm running a script from Python: subprocess.Popen('./run_pythia.sh', shell=True).communicate() and sometimes it just blocks and it doesn't finish executing the script. Before I was using .wait(), but I switched to .communicate(). Nevertheless the problem continues. First the script compiles a few files, then it runs the resulting executable, redirecting its output to a file: run_pythia.sh: #!/bin/bash #PBS -l walltime=1:00:00 ./compile.sh ./exec > resultado.txt compile.sh: O=`find ./ -name "*.o" | xargs` # LOAD cernlib2005 module load libs/cernlib/2005 # Compile and Link FC=g77 CERNLIBPATH="-L/software/local/cernlib/2005/lib -lpacklib" $FC call_pyth_mix.f analise_tt.f $O $CERNLIBPATH -o exec
[ "Is the script you execute, run_pythia.sh, guaranteed to finish executing? If not, you might not want to use blocking methods like communicate(). You might want to look into interacting with the .stdout, .stderr, and .stdin file handles of the returned process handle yourself (in a non-blocking manner). \nAlso, if you still want to use communicate(), you need to have passed subprocess.PIPE objects in Popen's constructor arguments.\nRead the documentation on the module for more details.\n", "Maybe you can try to do a trace on it:\nimport pdb; pdb.set_trace()\n\n" ]
[ 3, 0 ]
[]
[]
[ "blocking", "communicate", "python", "subprocess", "wait" ]
stackoverflow_0002769694_blocking_communicate_python_subprocess_wait.txt
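If the hang persists, a sketch that captures the child's output explicitly can help show where it stalls; communicate() only returns data for the streams that were requested as pipes:

import subprocess

proc = subprocess.Popen(['./run_pythia.sh'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print 'exit code:', proc.returncode
print err   # compiler/linker noise from compile.sh tends to end up here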
Q: Modify headers in Pylons using Middleware I'm trying to modify a header using Middleware in Pylons to make my application RESTful; basically, if the user requests "application/json" via GET, that is what he gets back. The question I have is, the variable headers is basically a long list. Looking something like this: [('Content-Type', 'text/html; charset=utf-8'), ('Pragma', 'no-cache'), ('Cache-Control', 'no-cache'), ('Content-Length','20'), ('Content-Encoding', 'gzip')] Now, I'm looking to just modify the value based on the request - but are these positions fixed? Will 'Content-Type' always be position headers[0][0]? Best Regards, Anders A: Try this from webob import Request, Response from my_wsgi_application import App class MyMiddleware(object): def __init__(self, app): self.app = app def __call__(self, environ, start_response): req = Request(environ) ... rsp = req.get_response(self.app) rsp.headers['Content-type'] = 'application/json' return rsp(environ, start_response) Or simply set request or response .headers['Content-type'] = 'application/json' in your controller See http://pythonpaste.org/webob/reference.html#headers
Modify headers in Pylons using Middleware
I'm trying to modify a header using Middleware in Pylons to make my application RESTful; basically, if the user requests "application/json" via GET, that is what he gets back. The question I have is, the variable headers is basically a long list. Looking something like this: [('Content-Type', 'text/html; charset=utf-8'), ('Pragma', 'no-cache'), ('Cache-Control', 'no-cache'), ('Content-Length','20'), ('Content-Encoding', 'gzip')] Now, I'm looking to just modify the value based on the request - but are these positions fixed? Will 'Content-Type' always be position headers[0][0]? Best Regards, Anders
[ "Try this\n\nfrom webob import Request, Response\nfrom my_wsgi_application import App\nclass MyMiddleware(object):\n def __init__(self, app):\n self.app = app\n def __call__(self, environ, start_response):\n req = Request(environ)\n ...\n rsp = req.get_response(self.app)\n rsp.headers['Content-type'] = 'application/json'\n return rsp(environ, start_response)\n\nOr simply set request or response .headers['Content-type'] = 'application/json' in your controller\nSee http://pythonpaste.org/webob/reference.html#headers\n" ]
[ 1 ]
[]
[]
[ "middleware", "pylons", "python", "rest" ]
stackoverflow_0002771974_middleware_pylons_python_rest.txt
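Building on the middleware above, a sketch of making the override conditional on the request, which is closer to the RESTful behaviour described in the question (this would replace the __call__ body; Request and self.app come from the answer's class):

def __call__(self, environ, start_response):
    req = Request(environ)
    rsp = req.get_response(self.app)
    if 'application/json' in req.headers.get('Accept', ''):
        rsp.headers['Content-Type'] = 'application/json'
    return rsp(environ, start_response)

Note that WebOb's rsp.headers is case-insensitive, so the position of 'Content-Type' in the raw header list never needs to be known.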
Q: Python: create a function to modify a list by reference not value I'm doing some performance-critical Python work and want to create a function that removes a few elements from a list if they meet certain criteria. I'd rather not create any copies of the list because it's filled with a lot of really large objects. Functionality I want to implement: def listCleanup(listOfElements): i = 0 for element in listOfElements: if(element.meetsCriteria()): del(listOfElements[i]) i += 1 return listOfElements myList = range(10000) myList = listCleanup(listOfElements) I'm not familiar with the low-level workings of Python. Is myList being passed by value or by reference? How can I make this faster? Is it possible to somehow extend the list class and implement listCleanup() within that? myList = range(10000) myList.listCleanup() Thanks- Jonathan A: Python passes everything the same way, but calling it "by value" or "by reference" will not clear everything up, since Python's semantics are different than the languages for which those terms usually apply. If I was to describe it, I would say that all passing was by value, and that the value was an object reference. (This is why I didn't want to say it!) If you want to filter out some stuff from a list, you build a new list foo = range(100000) new_foo = [] for item in foo: if item % 3 != 0: # Things divisible by 3 don't get through new_foo.append(item) or, using the list comprehension syntax new_foo = [item for item in foo if item % 3 != 0] Python will not copy the objects in the list, but rather both foo and new_foo will reference the same objects. (Python never implicitly copies any objects.) You have suggested you have performance concerns about this operation. Using repeated del statements from the old list will result in not only code that is less idiomatic and more confusing to deal with, but it will also introduce quadratic performance because the whole list must be reshuffled each time. To address performance: Get it up and running. You can't figure out what your performance is like unless you have code working. This will also tell you whether it is speed or space that you must optimize for; you mention concerns about both in your code, but oftentimes optimization involves getting one at the cost of the other. Profile. You can use the stdlib tools for performance in time. There are various third-party memory profilers that can be somewhat useful but aren't quite as nice to work with. Measure. Time or reprofile memory when you make a change to see if a change makes an improvement and if so what that improvement is. To make your code more memory-sensitive, you will often want a paradigm shift in how you store your data, not micro-optimizations like not building a second list to do filtering. (The same is true for time, really: changing to a better algorithm will almost always give the best speedup. However, it's harder to generalize about speed optimizations). Some common paradigm shifts to optimize memory consumption in Python include Using Generators. Generators are lazy iterables: they don't load a whole list into memory at once, they figure out what their next items are on the fly.
To use generators, the snippets above would look like foo = xrange(100000) # Like generators, xrange is lazy def filter_divisible_by_three(iterable): for item in iterable: if item % 3 != 0: yield item new_foo = filter_divisible_by_three(foo) or, using the generator expression syntax, new_foo = (item for item in foo if item % 3 != 0) Using numpy for homogenous sequences, especially ones that are numerical-mathy. This can also speed up code that does lots of vector operations. Storing data to disk, such as in a database. A: In Python, lists are always passed by reference. The size of the objects in the list doesn't affect the list's performance, because the list only stores references to the objects. However, the number of items in the list does affect the performance of some operations - such as removing an element, which is O(n). As written, listCleanup is worst-case O(n**2), since you have the O(n) del operation within a loop that is potentially O(n) itself. If the order of the elements doesn't matter, you may be able to use the built-in set type instead of a list. The set has O(1) deletions and insertions. However, you will have to ensure that your objects are immutable and hashable. Otherwise, you're better off recreating the list. That's O(n), and your algorithm needs to be at least O(n) since you need to examine every element. You can filter the list in one line like this: listOfElements[:] = [el for el in listOfElements if not el.meetsCriteria()] A: Looks like premature optimization. You should try to get a better understanding of how python works before trying to optimize. In this particular case you don't need to worry about object size. Copying a list using a list comprehension or slice will only perform a surface copy (copy references to objects even if the term does not really apply well to python). But the number of items in the list may matter because del is O(n). There may be other solutions, like replacing an item with None or a conventional Null object, or using another data structure like a set or a dictionary where the cost of deleting an item is much lower. A: I don't think anyone mentioned actually using filter. Since a lot of the answers came from well respected people, I'm sure that I'm the one that's missing something. Could someone explain what would be wrong with this: new_list = filter(lambda o: o.meetsCriteria(), myList) A: modifying your data structure as you're iterating over it is like shooting yourself in the foot... iteration fails. you might as well take others' advice and just make a new list: myList = [element for element in listOfElements if not element.meetsCriteria()] the old list -- if there are no other references to it -- will be deallocated and the memory reclaimed. better yet, don't even make a copy of the list. change the above to a generator expression for a more memory-friendly version: myList = (element for element in listOfElements if not element.meetsCriteria()) all Python object access is by reference. objects are created and variables are just references to those objects. however, if someone wanted to ask the purist question, "what type of call semantics does Python use, call-by-reference or call-by-value?" the answer will have to be, "Neither... and both." the reason is because calling conventions are less important to Python than object type. if an object is mutable, it can be modified regardless of what scope you're in... as long as you have a valid object reference, the object can be changed.
if the object is immutable, then that object cannot be changed no matter where you are or what reference you have. A: Deleting list elements in-situ is possible, but not by going forwards through the list. Your code just plain doesn't work -- as the list shrinks, you can miss out examining elements. You need to go backwards, so that the shrinking part is behind you, with rather horrid code. Before I show you that, there are some preliminary considerations: First, how did that rubbish get into the list? Prevention is better than cure. Second, how many elements in the list, and what percentage are likely to need deletion? The higher the percentage, the greater the likelihood that it's better to create a new list. OK, if you still want to do it in-situ, contemplate this: def list_cleanup_fail(alist, is_bad): i = 0 for element in alist: print "i=%d alist=%r alist[i]=%d element=%d" % (i, alist, alist[i], element) if is_bad(element): del alist[i] i += 1 def list_cleanup_ok(alist, is_bad): for i in xrange(len(alist) - 1, -1, -1): print "i=%d alist=%r alist[i]=%d" % (i, alist, alist[i]) if is_bad(alist[i]): del alist[i] def is_not_mult_of_3(x): return x % 3 != 0 for func in (list_cleanup_fail, list_cleanup_ok): print print func.__name__ mylist = range(11) func(mylist, is_not_mult_of_3) print "result", mylist and here is the output: list_cleanup_fail i=0 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=0 element=0 i=1 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=1 element=1 i=2 alist=[0, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=3 element=3 i=3 alist=[0, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=4 element=4 i=4 alist=[0, 2, 3, 5, 6, 7, 8, 9, 10] alist[i]=6 element=6 i=5 alist=[0, 2, 3, 5, 6, 7, 8, 9, 10] alist[i]=7 element=7 i=6 alist=[0, 2, 3, 5, 6, 8, 9, 10] alist[i]=9 element=9 i=7 alist=[0, 2, 3, 5, 6, 8, 9, 10] alist[i]=10 element=10 result [0, 2, 3, 5, 6, 8, 9] list_cleanup_ok i=10 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=10 i=9 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] alist[i]=9 i=8 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] alist[i]=8 i=7 alist=[0, 1, 2, 3, 4, 5, 6, 7, 9] alist[i]=7 i=6 alist=[0, 1, 2, 3, 4, 5, 6, 9] alist[i]=6 i=5 alist=[0, 1, 2, 3, 4, 5, 6, 9] alist[i]=5 i=4 alist=[0, 1, 2, 3, 4, 6, 9] alist[i]=4 i=3 alist=[0, 1, 2, 3, 6, 9] alist[i]=3 i=2 alist=[0, 1, 2, 3, 6, 9] alist[i]=2 i=1 alist=[0, 1, 3, 6, 9] alist[i]=1 i=0 alist=[0, 3, 6, 9] alist[i]=0 result [0, 3, 6, 9] A: Just to be clear: def listCleanup(listOfElements): i = 0 for element in listOfElements: if(element.meetsCriteria()): del(listOfElements[i]) i += 1 return listOfElements myList = range(10000) myList = listCleanup(listOfElements) is the same as def listCleanup(listOfElements): i = 0 for element in listOfElements: if(element.meetsCriteria()): del(listOfElements[i]) i += 1 myList = range(10000) listCleanup(listOfElements) ?
Python: create a function to modify a list by reference not value
I'm doing some performance-critical Python work and want to create a function that removes a few elements from a list if they meet certain criteria. I'd rather not create any copies of the list because it's filled with a lot of really large objects. Functionality I want to implement: def listCleanup(listOfElements): i = 0 for element in listOfElements: if(element.meetsCriteria()): del(listOfElements[i]) i += 1 return listOfElements myList = range(10000) myList = listCleanup(listOfElements) I'm not familiar with the low-level workings of Python. Is myList being passed by value or by reference? How can I make this faster? Is it possible to somehow extend the list class and implement listCleanup() within that? myList = range(10000) myList.listCleanup() Thanks- Jonathan
[ "Python passes everything the same way, but calling it \"by value\" or \"by reference\" will not clear everything up, since Python's semantics are different than the languages for which those terms usually apply. If I was to describe it, I would say that all passing was by value, and that the value was an object reference. (This is why I didn't want to say it!)\nIf you want to filter out some stuff from a list, you build a new list\nfoo = range(100000)\nnew_foo = []\nfor item in foo:\n if item % 3 != 0: # Things divisible by 3 don't get through\n new_foo.append(item)\n\nor, using the list comprehension syntax\n new_foo = [item for item in foo if item % 3 != 0]\n\nPython will not copy the objects in the list, but rather both foo and new_foo will reference the same objects. (Python never implicitly copies any objects.)\n\nYou have suggested you have performance concerns about this operation. Using repeated del statements from the old list will result in not only code that is less idiomatic and more confusing to deal with, but it will also introduce quadratic performance because the whole list must be reshuffled each time. \nTo address performance:\n\nGet it up and running. You can't figure out what your performance is like unless you have code working. This will also tell you whether it is speed or space that you must optimize for; you mention concerns about both in your code, but oftentimes optimization involves getting one at the cost of the other.\nProfile. You can use the stdlib tools for performance in time. There are various third-party memory profilers that can be somewhat useful but aren't quite as nice to work with.\nMeasure. Time or reprofile memory when you make a change to see if a change makes an improvement and if so what that improvement is.\nTo make your code more memory-sensitive, you will often want a paradigm shift in how you store your data, not micro-optimizations like not building a second list to do filtering. (The same is true for time, really: changing to a better algorithm will almost always give the best speedup. However, it's harder to generalize about speed optimizations).\nSome common paradigm shifts to optimize memory consumption in Python include\n\nUsing Generators. Generators are lazy iterables: they don't load a whole list into memory at once, they figure out what their next items are on the fly. To use generators, the snippets above would look like \nfoo = xrange(100000) # Like generators, xrange is lazy\ndef filter_divisible_by_three(iterable):\n for item in iterable:\n if item % 3 != 0:\n yield item\n\nnew_foo = filter_divisible_by_three(foo)\n\nor, using the generator expression syntax, \nnew_foo = (item for item in foo if item % 3 != 0)\n\nUsing numpy for homogenous sequences, especially ones that are numerical-mathy. This can also speed up code that does lots of vector operations.\nStoring data to disk, such as in a database.\n\n\n", "In Python, lists are always passed by reference.\nThe size of the objects in the list doesn't affect the list's performance, because the list only stores references to the objects. However, the number of items in the list does affect the performance of some operations - such as removing an element, which is O(n).\nAs written, listCleanup is worst-case O(n**2), since you have the O(n) del operation within a loop that is potentially O(n) itself.\nIf the order of the elements doesn't matter, you may be able to use the built-in set type instead of a list. The set has O(1) deletions and insertions. 
However, you will have to ensure that your objects are immutable and hashable.\nOtherwise, you're better off recreating the list. That's O(n), and your algorithm needs to be at least O(n) since you need to examine every element. You can filter the list in one line like this:\nlistOfElements[:] = [el for el in listOfElements if not el.meetsCriteria()]\n\n", "Looks like premature optimization. You should try to get a better understanding of how python works before trying to optimize.\nIn this particular case you don't need to worry about object size. Copying a list using a list comprehension or slice will only perform a surface copy (copy references to objects even if the term does not really apply well to python). But the number of items in the list may matter because del is O(n). There may be other solutions, like replacing an item with None or a conventional Null object, or using another data structure like a set or a dictionary where the cost of deleting an item is much lower.\n", "I don't think anyone mentioned actually using filter. Since a lot of the answers came from well respected people, I'm sure that I'm the one that's missing something. Could someone explain what would be wrong with this:\nnew_list = filter(lambda o: o.meetsCriteria(), myList)\n", "modifying your data structure as you're iterating over it is like shooting yourself in the foot... iteration fails. you might as well take others' advice and just make a new list:\nmyList = [element for element in listOfElements if not element.meetsCriteria()]\n\nthe old list -- if there are no other references to it -- will be deallocated and the memory reclaimed. better yet, don't even make a copy of the list. change the above to a generator expression for a more memory-friendly version:\nmyList = (element for element in listOfElements if not element.meetsCriteria())\n\nall Python object access is by reference. objects are created and variables are just references to those objects. however, if someone wanted to ask the purist question, \"what type of call semantics does Python use, call-by-reference or call-by-value?\" the answer will have to be, \"Neither... and both.\" the reason is because calling conventions are less important to Python than object type.\nif an object is mutable, it can be modified regardless of what scope you're in... as long as you have a valid object reference, the object can be changed. if the object is immutable, then that object cannot be changed no matter where you are or what reference you have.\n", "Deleting list elements in-situ is possible, but not by going forwards through the list. Your code just plain doesn't work -- as the list shrinks, you can miss out examining elements. You need to go backwards, so that the shrinking part is behind you, with rather horrid code. Before I show you that, there are some preliminary considerations:\nFirst, how did that rubbish get into the list? Prevention is better than cure.\nSecond, how many elements in the list, and what percentage are likely to need deletion? 
The higher the percentage, the greater the likelihood that it's better to create a new list.\nOK, if you still want to do it in-situ, contemplate this:\ndef list_cleanup_fail(alist, is_bad):\n i = 0\n for element in alist:\n print \"i=%d alist=%r alist[i]=%d element=%d\" % (i, alist, alist[i], element)\n if is_bad(element):\n del alist[i]\n i += 1\n\ndef list_cleanup_ok(alist, is_bad):\n for i in xrange(len(alist) - 1, -1, -1):\n print \"i=%d alist=%r alist[i]=%d\" % (i, alist, alist[i])\n if is_bad(alist[i]):\n del alist[i]\n\ndef is_not_mult_of_3(x):\n return x % 3 != 0\n\nfor func in (list_cleanup_fail, list_cleanup_ok):\n print\n print func.__name__\n mylist = range(11)\n func(mylist, is_not_mult_of_3)\n print \"result\", mylist\n\nand here is the output:\nlist_cleanup_fail\ni=0 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=0 element=0\ni=1 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=1 element=1\ni=2 alist=[0, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=3 element=3\ni=3 alist=[0, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=4 element=4\ni=4 alist=[0, 2, 3, 5, 6, 7, 8, 9, 10] alist[i]=6 element=6\ni=5 alist=[0, 2, 3, 5, 6, 7, 8, 9, 10] alist[i]=7 element=7\ni=6 alist=[0, 2, 3, 5, 6, 8, 9, 10] alist[i]=9 element=9\ni=7 alist=[0, 2, 3, 5, 6, 8, 9, 10] alist[i]=10 element=10\nresult [0, 2, 3, 5, 6, 8, 9]\n\nlist_cleanup_ok\ni=10 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] alist[i]=10\ni=9 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] alist[i]=9\ni=8 alist=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] alist[i]=8\ni=7 alist=[0, 1, 2, 3, 4, 5, 6, 7, 9] alist[i]=7\ni=6 alist=[0, 1, 2, 3, 4, 5, 6, 9] alist[i]=6\ni=5 alist=[0, 1, 2, 3, 4, 5, 6, 9] alist[i]=5\ni=4 alist=[0, 1, 2, 3, 4, 6, 9] alist[i]=4\ni=3 alist=[0, 1, 2, 3, 6, 9] alist[i]=3\ni=2 alist=[0, 1, 2, 3, 6, 9] alist[i]=2\ni=1 alist=[0, 1, 3, 6, 9] alist[i]=1\ni=0 alist=[0, 3, 6, 9] alist[i]=0\nresult [0, 3, 6, 9]\n\n", "Just to be clear: \ndef listCleanup(listOfElements):\n i = 0\n for element in listOfElements:\n if(element.meetsCriteria()):\n del(listOfElements[i])\n i += 1\n return listOfElements\n\nmyList = range(10000)\nmyList = listCleanup(listOfElements)\n\nis the same as\ndef listCleanup(listOfElements):\n i = 0\n for element in listOfElements:\n if(element.meetsCriteria()):\n del(listOfElements[i])\n i += 1\n\nmyList = range(10000)\nlistCleanup(listOfElements)\n\n?\n" ]
[ 30, 6, 2, 2, 1, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0002770038_list_python.txt
Q: Getting unpredictable data into a tabular format The situation: Each page I scrape has <input> elements with a title= and a value= I don't know what is going to be on the page. I want to have all my collected data in a single table at the end, with a column for each title. So basically, I need each row of data to line up with all the others, and if a row doesn't have a certain element, then it should be blank (but there must be something there to keep the alignment). eg. First page has: {animal: cat, colour: blue, fruit: lemon, day: monday} Second page has: {animal: fish, colour: green, day: saturday} Third page has: {animal: dog, number: 10, colour: yellow, fruit: mango, day: tuesday} Then my resulting table should be: animal | number | colour | fruit | day cat | none | blue | lemon | monday fish | none | green | none | saturday dog | 10 | yellow | mango | tuesday Although it would be good to keep the order of the title value pairs, which I know dictionaries wont do. So basically, I need to generate columns from all the titles (kept in order but somehow merged together) What would be the best way of going about this without knowing all the possible titles and explicitly specifying an order for the values to be put in? A: You need a multipass algorithm. Remember all the scraped pages in a list of dicts. In the first pass, go over this list and collect all the titles in a set(), and create an ordering (for example, convert to list sort them alphabetically). In the second pass you print the table and use your generated ordering as column names, extracting the values from the dictionaries as needed (defaulting to empty to handle missing values), for example with dict.get(name, ""). A: I would suggest that you could use optional parameters, or alternatively use overloaded constructors to populate the values: Page(string animal = string.empty, int number = -999, string colour = string.empty, day = string.empty ) Either that or store each key/value pair as type object and then cast it from your pages.
Getting unpredictable data into a tabular format
The situation: Each page I scrape has <input> elements with a title= and a value=. I don't know what is going to be on the page. I want to have all my collected data in a single table at the end, with a column for each title. So basically, I need each row of data to line up with all the others, and if a row doesn't have a certain element, then it should be blank (but there must be something there to keep the alignment). eg. First page has: {animal: cat, colour: blue, fruit: lemon, day: monday} Second page has: {animal: fish, colour: green, day: saturday} Third page has: {animal: dog, number: 10, colour: yellow, fruit: mango, day: tuesday} Then my resulting table should be: animal | number | colour | fruit | day cat | none | blue | lemon | monday fish | none | green | none | saturday dog | 10 | yellow | mango | tuesday Although it would be good to keep the order of the title/value pairs, which I know dictionaries won't do. So basically, I need to generate columns from all the titles (kept in order but somehow merged together). What would be the best way of going about this without knowing all the possible titles and explicitly specifying an order for the values to be put in?
[ "You need a multipass algorithm. Remember all the scraped pages in a list of dicts. In the first pass, go over this list and collect all the titles in a set(), and create an ordering (for example, convert to list sort them alphabetically). \nIn the second pass you print the table and use your generated ordering as column names, extracting the values from the dictionaries as needed (defaulting to empty to handle missing values), for example with dict.get(name, \"\").\n", "I would suggest that you could use optional parameters, or alternatively use overloaded constructors to populate the values:\nPage(string animal = string.empty, \nint number = -999, string colour = string.empty, day = string.empty )\n\nEither that or store each key/value pair as type object and then cast it from your pages.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "tabular" ]
stackoverflow_0002772294_python_tabular.txt
Q: python get time in minutes import datetime start = datetime.datetime(2009, 1, 31) end = datetime.datetime(2009, 2, 1) print end-start >>1 day, 0:00:00//output How to get the output in minutes Thanks, A: import datetime start = datetime.datetime(2009, 1, 31) end = datetime.datetime(2009, 2, 1) diff = end-start print (diff.days * 1440) + (diff.seconds / 60) >> 1440.0 (I'm assuming you don't need microsecond resolution here - but if you do, just add in a third term using diff.microseconds with the proper divisor to convert to minutes.) and after the release of the python 2.7 you can use the method total_seconds print (diff.total_seconds() / 60)
python get time in minutes
import datetime start = datetime.datetime(2009, 1, 31) end = datetime.datetime(2009, 2, 1) print end-start >> 1 day, 0:00:00 # output How do I get the output in minutes? Thanks,
[ "import datetime\nstart = datetime.datetime(2009, 1, 31)\nend = datetime.datetime(2009, 2, 1)\ndiff = end-start\nprint (diff.days * 1440) + (diff.seconds / 60)\n>> 1440.0\n\n(I'm assuming you don't need microsecond resolution here - but if you do, just add in a third term using diff.microseconds with the proper divisor to convert to minutes.)\nand after the release of the python 2.7 you can use the method total_seconds\nprint (diff.total_seconds() / 60)\n\n" ]
[ 4 ]
[]
[]
[ "datetime", "python", "python_2.4" ]
stackoverflow_0002772376_datetime_python_python_2.4.txt
Q: Python: for statement behavior My question concerns the output of this statement: for x in range(4), y in range(4): print x print y Results in: [0, 1, 2, 3] 2 True 2 It seems there is a comparison involved, I just can't figure out why the output is structured like this. A: My guess is that you're running this from an interactive console, and already had y defined with a value of 2 (otherwise, you'd get NameError: name 'y' is not defined). That would lead to the output you observed. This is due to for x in range(4), y in range(4): actually being equivalent to the following when evaluated: for x in (range(4), y in range(4)): which reduces to... for x in ([0,1,2,3], 2 in range(4)): which again reduces to... for x in ([0,1,2,3], True): This then results in 2 iterations of the for loop, since it iterates over each element of the tuple: x = [0,1,2,3] x = True. (And of course, y is still 2.) A: You've created a weird, weird thing there. >>> y = 2 >>> range(4), y in range(4) ([0, 1, 2, 3], True) The y in range(4) is a membership test. The range(4), y in range(4) is a pair of items; a tuple. The variable x is set to range(4), then the result of y in range(4). The variable y is just laying around with a value; it is not set by the for statement. This only works hacking around on the command line typing random stuff with y left laying around. This isn't sensible Python code at all. [And yes, the word in has two meanings. So do ()'s and several other pieces of syntax.] A: You seem to have y defined prior to running this code. What you're iterating over is a two-item tuple: first item is range-generated list, second is True, which is result of the y in range(4): >>> y = 2 >>> for x in range(4), y in range(4): print x, 'x' print y, 'y' [0, 1, 2, 3] x 2 y True x 2 y What I suspect you were trying to do is to iterate over two variables from two lists. Use zip for this. A: Dav nailed down perfectly why the syntax you wrote doesn't work. Here are the syntaxes that do work for what you're probably trying to do: If you want all 4 x 4 combinations for x and y, you want 2 nested loops: for x in range(4): for y in range(4): print x, y Or if you really want to use one loop: import itertools for (x, y) in itertools.product(range(4), range(4)): print x, y itertools.product() generates all possible combinations: This is less readable than 2 loops in this simple case, but the itertools module has many other powerful functions and is worth knowing... If you want x and y to advance in parallel over two sequences (aka "lock-step" iteration): for (x, y) in zip(range(4), range(4)): print x, y # `zip(range(4), range(4))` is silly since you get x == y; # would be useful for different sequences, e.g. # zip(range(4), 'abcd') [Background: The name zip comes from Haskell; think about how a Zipper takes one tooth from here and one from there: zip() cuts off to the length of the shortest sequence; the itertools module has other variants...]
Python: for statement behavior
My question concerns the output of this statement: for x in range(4), y in range(4): print x print y Results in: [0, 1, 2, 3] 2 True 2 It seems there is a comparison involved, I just can't figure out why the output is structured like this.
[ "My guess is that you're running this from an interactive console, and already had y defined with a value of 2 (otherwise, you'd get NameError: name 'y' is not defined). That would lead to the output you observed.\nThis is due to for x in range(4), y in range(4): actually being equivalent to the following when evaluated:\nfor x in (range(4), y in range(4)):\n\nwhich reduces to...\nfor x in ([0,1,2,3], 2 in range(4)):\n\nwhich again reduces to...\nfor x in ([0,1,2,3], True):\n\nThis then results in 2 iterations of the for loop, since it iterates over each element of the tuple:\n\nx = [0,1,2,3]\nx = True.\n\n(And of course, y is still 2.)\n", "You've created a weird, weird thing there.\n>>> y = 2\n>>> range(4), y in range(4)\n([0, 1, 2, 3], True)\n\nThe y in range(4) is a membership test.\nThe range(4), y in range(4) is a pair of items; a tuple.\nThe variable x is set to range(4), then the result of y in range(4).\nThe variable y is just laying around with a value; it is not set by the for statement.\nThis only works hacking around on the command line typing random stuff with y left laying around.\nThis isn't sensible Python code at all.\n[And yes, the word in has two meanings. So do ()'s and several other pieces of syntax.]\n", "You seem to have y defined prior to running this code. What you're iterating over is a two-item tuple: first item is range-generated list, second is True, which is result of the y in range(4):\n>>> y = 2\n>>> for x in range(4), y in range(4):\n print x, 'x'\n print y, 'y'\n\n\n[0, 1, 2, 3] x\n2 y\nTrue x\n2 y\n\nWhat I suspect you were trying to do is to iterate over two variables from two lists. Use zip for this.\n", "Dav nailed down perfectly why the syntax you wrote doesn't work.\nHere are the syntaxes that do work for what you're probably trying to do:\n\nIf you want all 4 x 4 combinations for x and y, you want 2 nested loops:\nfor x in range(4):\n for y in range(4):\n print x, y\n\nOr if you really want to use one loop:\nimport itertools\nfor (x, y) in itertools.product(range(4), range(4)):\n print x, y\n\nitertools.product() generates all possible combinations:\n\nThis is less readable than 2 loops in this simple case, but the itertools module has many other powerful functions and is worth knowing...\n\nIf you want x and y to advance in parallel over two sequences (aka \"lock-step\" iteration):\nfor (x, y) in zip(range(4), range(4)):\n print x, y\n# `zip(range(4), range(4))` is silly since you get x == y;\n# would be useful for different sequences, e.g.\n# zip(range(4), 'abcd')\n\n[Background: The name zip comes from Haskell; think about how a Zipper takes one tooth from here and one from there:\n\nzip() cuts off to the length of the shortest sequence; the itertools module has other variants...]\n" ]
[ 6, 3, 3, 1 ]
[]
[]
[ "python", "syntax" ]
stackoverflow_0002772264_python_syntax.txt
Q: Programmatically sync the db in Django I'm trying to sync my db from a view, something like this: from django import http from django.core import management def syncdb(request): management.call_command('syncdb') return http.HttpResponse('Database synced.') The issue is, it will block the dev server by asking for user input from the terminal. How can I pass it the '--noinput' option to prevent asking me anything? I have other ways of marking users as super-user, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging on to the server via ssh. Any help is appreciated. A: management.call_command('syncdb', interactive=False) A: Works like this (at least with Django 1.1.): from django.core.management.commands import syncdb syncdb.Command().execute(noinput=True)
Programmatically sync the db in Django
I'm trying to sync my db from a view, something like this: from django import http from django.core import management def syncdb(request): management.call_command('syncdb') return http.HttpResponse('Database synced.') The issue is, it will block the dev server by asking for user input from the terminal. How can I pass it the '--noinput' option to prevent asking me anything? I have other ways of marking users as super-user, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging on to the server via ssh. Any help is appreciated.
[ "management.call_command('syncdb', interactive=False)\n\n", "Works like this (at least with Django 1.1.):\nfrom django.core.management.commands import syncdb\nsyncdb.Command().execute(noinput=True)\n\n" ]
[ 20, 4 ]
[]
[]
[ "django", "django_admin", "python" ]
stackoverflow_0002772990_django_django_admin_python.txt
Q: Mapping functions of 2D numpy arrays I have a function foo that takes a NxM numpy array as an argument and returns a scalar value. I have a AxNxM numpy array data, over which I'd like to map foo to give me a resultant numpy array of length A. Curently, I'm doing this: result = numpy.array([foo(x) for x in data]) It works, but it seems like I'm not taking advantage of the numpy magic (and speed). Is there a better way? I've looked at numpy.vectorize, and numpy.apply_along_axis, but neither works for a function of 2D arrays. EDIT: I'm doing boosted regression on 24x24 image patches, so my AxNxM is something like 1000x24x24. What I called foo above applies a Haar-like feature to a patch (so, not terribly computationally intensive). A: If NxM is big (say, 100), they the cost of iterating over A will be amortized into basically nothing. Say the array is 1000 X 100 X 100. Iterating is O(1000), but the cumulative cost of the inside function is O(1000 X 100 X 100) - 10,000 times slower. (Note, my terminology is a bit wonky, but I do know what I'm talking about) I'm not sure, but you could try this: result = numpy.empty(data.shape[0]) for i in range(len(data)): result[i] = foo(data[i]) You would save a big of memory allocation on building the list ... but the loop overhead would be greater. Or you could write a parallel version of the loop, and split it across multiple processes. That could be a lot faster, depending on how intensive foo is (as it would have to offset the data handling). A: You can achieve that by reshaping your 3D array as a 2D array with the same leading dimension, and wrap your function foo with a function that works on 1D arrays by reshaping them as required by foo. An example (using trace instead of foo): from numpy import * def apply2d_along_first(func2d, arr3d): a, n, m = arr3d.shape def func1d(arr1d): return func2d(arr1d.reshape((n,m))) arr2d = arr3d.reshape((a,n*m)) return apply_along_axis(func1d, -1, arr2d) A, N, M = 3, 4, 5 data = arange(A*N*M).reshape((A,N,M)) print data print apply2d_along_first(trace, data) Output: [[[ 0 1 2 3 4] [ 5 6 7 8 9] [10 11 12 13 14] [15 16 17 18 19]] [[20 21 22 23 24] [25 26 27 28 29] [30 31 32 33 34] [35 36 37 38 39]] [[40 41 42 43 44] [45 46 47 48 49] [50 51 52 53 54] [55 56 57 58 59]]] [ 36 116 196]
Mapping functions of 2D numpy arrays
I have a function foo that takes an NxM numpy array as an argument and returns a scalar value. I have an AxNxM numpy array data, over which I'd like to map foo to give me a resultant numpy array of length A. Currently, I'm doing this: result = numpy.array([foo(x) for x in data]) It works, but it seems like I'm not taking advantage of the numpy magic (and speed). Is there a better way? I've looked at numpy.vectorize, and numpy.apply_along_axis, but neither works for a function of 2D arrays. EDIT: I'm doing boosted regression on 24x24 image patches, so my AxNxM is something like 1000x24x24. What I called foo above applies a Haar-like feature to a patch (so, not terribly computationally intensive).
[ "If NxM is big (say, 100), they the cost of iterating over A will be amortized into basically nothing.\nSay the array is 1000 X 100 X 100.\nIterating is O(1000), but the cumulative cost of the inside function is O(1000 X 100 X 100) - 10,000 times slower. (Note, my terminology is a bit wonky, but I do know what I'm talking about)\nI'm not sure, but you could try this:\nresult = numpy.empty(data.shape[0])\nfor i in range(len(data)):\n result[i] = foo(data[i])\n\nYou would save a big of memory allocation on building the list ... but the loop overhead would be greater.\nOr you could write a parallel version of the loop, and split it across multiple processes. That could be a lot faster, depending on how intensive foo is (as it would have to offset the data handling).\n", "You can achieve that by reshaping your 3D array as a 2D array with the same leading dimension, and wrap your function foo with a function that works on 1D arrays by reshaping them as required by foo. An example (using trace instead of foo):\nfrom numpy import *\n\ndef apply2d_along_first(func2d, arr3d):\n a, n, m = arr3d.shape\n def func1d(arr1d):\n return func2d(arr1d.reshape((n,m)))\n arr2d = arr3d.reshape((a,n*m))\n return apply_along_axis(func1d, -1, arr2d)\n\nA, N, M = 3, 4, 5\ndata = arange(A*N*M).reshape((A,N,M))\n\nprint data\nprint apply2d_along_first(trace, data)\n\nOutput:\n[[[ 0 1 2 3 4]\n [ 5 6 7 8 9]\n [10 11 12 13 14]\n [15 16 17 18 19]]\n\n [[20 21 22 23 24]\n [25 26 27 28 29]\n [30 31 32 33 34]\n [35 36 37 38 39]]\n\n [[40 41 42 43 44]\n [45 46 47 48 49]\n [50 51 52 53 54]\n [55 56 57 58 59]]]\n[ 36 116 196]\n\n" ]
[ 2, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0002772661_numpy_python.txt
Q: Getting the indices of all non-None items from a sub-list in Python? As per the title, I have a nested lists like so (the nested list is a fixed length): # ID, Name, Value list1 = [[ 1, "foo", 10], [ 2, "bar", None], [ 3, "fizz", 57], [ 4, "buzz", None]] I'd like to return a list (the number of items equal to the length of a sub-list from list1), where the sub-lists are the indices of rows without None as their Xth item, i.e.: [[non-None ID indices], [non-None Name indices], [non-None Value indices]] Using list1 as an example, the result should be: [[0, 1, 2, 3], [0, 1, 2, 3], [0, 2]] My current implementation is: indices = [[] for _ in range(len(list1[0]))] for i, row in enumerate(list1): for j in range(len(row)): if not isinstance(row[j], types.NoneType): indices[j].append(i) ...which works, but can be slow (the lengths of the lists are in the hundreds of thousands). Is there a better/more efficient way to do this? EDIT: I've refactored the above for loops into nested list comprehensions (similar to SilentGhost's answer). The following line gives the same result as the my original implementation, but runs approximately 10x faster. [[i for i in range(len(list1)) if list1[i][j] is not None] for j in range(len(log[0]))] A: >>> [[i for i, j in enumerate(c) if j is not None] for c in zip(*list1)] [[0, 1, 2, 3], [0, 1, 2, 3], [0, 2]] in python-2.x you could use itertools.izip instead of zip to avoid generating intermediate list. A: [[i for i in range(len(list1)) if list1[i] is not None] for _ in range(len(log[0]))] The above seems to be about 10x faster than my original post. A: import numpy as np map(lambda a: np.not_equal(a, None).nonzero()[0], np.transpose(list1)) # -> [array([0, 1, 2, 3]), array([0, 1, 2, 3]), array([0, 2])]
Getting the indices of all non-None items from a sub-list in Python?
As per the title, I have a nested list like so (the nested list is a fixed length): # ID, Name, Value list1 = [[ 1, "foo", 10], [ 2, "bar", None], [ 3, "fizz", 57], [ 4, "buzz", None]] I'd like to return a list (the number of items equal to the length of a sub-list from list1), where the sub-lists are the indices of rows without None as their Xth item, i.e.: [[non-None ID indices], [non-None Name indices], [non-None Value indices]] Using list1 as an example, the result should be: [[0, 1, 2, 3], [0, 1, 2, 3], [0, 2]] My current implementation is: indices = [[] for _ in range(len(list1[0]))] for i, row in enumerate(list1): for j in range(len(row)): if not isinstance(row[j], types.NoneType): indices[j].append(i) ...which works, but can be slow (the lengths of the lists are in the hundreds of thousands). Is there a better/more efficient way to do this? EDIT: I've refactored the above for loops into nested list comprehensions (similar to SilentGhost's answer). The following line gives the same result as my original implementation, but runs approximately 10x faster. [[i for i in range(len(list1)) if list1[i][j] is not None] for j in range(len(list1[0]))]
[ ">>> [[i for i, j in enumerate(c) if j is not None] for c in zip(*list1)]\n[[0, 1, 2, 3], [0, 1, 2, 3], [0, 2]]\n\nin python-2.x you could use itertools.izip instead of zip to avoid generating intermediate list.\n", "[[i for i in range(len(list1)) if list1[i] is not None] for _ in range(len(log[0]))]\n\nThe above seems to be about 10x faster than my original post.\n", "import numpy as np\n\nmap(lambda a: np.not_equal(a, None).nonzero()[0], np.transpose(list1))\n# -> [array([0, 1, 2, 3]), array([0, 1, 2, 3]), array([0, 2])]\n\n" ]
[ 7, 1, 0 ]
[]
[]
[ "list", "list_comprehension", "python" ]
stackoverflow_0002772528_list_list_comprehension_python.txt
Q: List comprehension from multiple sources in Python? Is it possible to replace the following with a list comprehension? res = [] for a, _, c in myList: for i in c: res.append((a, i)) For example: # Input myList = [("Foo", None, [1, 2, 3]), ("Bar", None, ["i", "j"])] # Output res = [("Foo", 1), ("Foo", 2), ("Foo", 3), ("Bar", "i"), ("Bar", "j")] A: >>> [(i, j) for i, _, k in myList for j in k] [('Foo', 1), ('Foo', 2), ('Foo', 3), ('Bar', 'i'), ('Bar', 'j')]
List comprehension from multiple sources in Python?
Is it possible to replace the following with a list comprehension? res = [] for a, _, c in myList: for i in c: res.append((a, i)) For example: # Input myList = [("Foo", None, [1, 2, 3]), ("Bar", None, ["i", "j"])] # Output res = [("Foo", 1), ("Foo", 2), ("Foo", 3), ("Bar", "i"), ("Bar", "j")]
[ ">>> [(i, j) for i, _, k in myList for j in k]\n[('Foo', 1), ('Foo', 2), ('Foo', 3), ('Bar', 'i'), ('Bar', 'j')]\n\n" ]
[ 7 ]
[]
[]
[ "list", "list_comprehension", "python" ]
stackoverflow_0002773295_list_list_comprehension_python.txt
Q: python: open file, feed line to list, process list data I want to process the data in the file "output.log" and feed it to graphdata['eth0] I have done this but it process only the first line: logread = open("output.log", "r").readlines() for line in logread: print "line", line i = line.rstrip("\n") b = float(i) colors = [ (0.2, 03, .65), (0.5, 0.7, .1), (.35, .2, .45), ] graphData = {} graphData['eth0'] = [b] cairoplot.dot_line_plot("./blog", graphData, 500, 500, axis=True, grid=True, dots=True, series_colors=colors) A: logread = open("output.log", "r").readlines() for line in logread: print "line", line i = line.rstrip("\n") b = float(i) colors = [ (0.2, 03, .65), (0.5, 0.7, .1), (.35, .2, .45), ] graphData = {} graphData['eth0'] = [b] cairoplot.dot_line_plot("./blog", graphData, 500, 500, axis=True, grid=True, dots=True, series_colors=colors) A: Not entirely sure, bit it looks like you're re-initing the array each time. Can you feed it in one big list? A: graphData = {} I believe that is a dictionary. Is that what you intended? If you're looking for a list/array you can use [] instead of {}. What a previous poster said sounds correct. Every time through you are setting graphData = {} and therefore overwriting anything from the past. array.append(x) will append something to an array. If you want all lines displayed all happily at the end you could set graphData = [] before the loop. Then each time through the loop do the graphData.append(line). Then after the loop you can set graph_data_dict = {} graph_data_dict['eth0'] = graph_data_array
python: open file, feed line to list, process list data
I want to process the data in the file "output.log" and feed it to graphData['eth0'] I have done this, but it processes only the first line: logread = open("output.log", "r").readlines() for line in logread: print "line", line i = line.rstrip("\n") b = float(i) colors = [ (0.2, 03, .65), (0.5, 0.7, .1), (.35, .2, .45), ] graphData = {} graphData['eth0'] = [b] cairoplot.dot_line_plot("./blog", graphData, 500, 500, axis=True, grid=True, dots=True, series_colors=colors)
[ "logread = open(\"output.log\", \"r\").readlines()\nfor line in logread:\n print \"line\", line\n i = line.rstrip(\"\\n\")\n b = float(i)\n colors = [ (0.2, 03, .65), (0.5, 0.7, .1), (.35, .2, .45), ]\n graphData = {}\n graphData['eth0'] = [b]\n cairoplot.dot_line_plot(\"./blog\", graphData, 500, 500, axis=True, grid=True, dots=True, series_colors=colors)\n\n", "Not entirely sure, bit it looks like you're re-initing the array each time. Can you feed it in one big list?\n", "graphData = {}\n\nI believe that is a dictionary. Is that what you intended?\nIf you're looking for a list/array you can use [] instead of {}. What a previous poster said sounds correct. Every time through you are setting graphData = {} and therefore overwriting anything from the past.\narray.append(x)\n\nwill append something to an array.\nIf you want all lines displayed all happily at the end you could set \n graphData = [] \nbefore the loop. Then each time through the loop do the\ngraphData.append(line). \n\nThen after the loop you can set \n graph_data_dict = {}\n graph_data_dict['eth0'] = graph_data_array\n" ]
[ 0, 0, 0 ]
[]
[]
[ "cairo", "cairoplot", "python" ]
stackoverflow_0002773416_cairo_cairoplot_python.txt
Q: Accelerometer data analysis I would like to know if there are some libraries/algorithms/techniques (python, if at all possible) that help to extract features from accelerometer data (extracted from and android phone, btw), like periodicity of movements, energy of acceleration and the like. Has anyone done this kind of task before? Thank you very much in advance :) A: I dont know about android but a lot of work has been done on Python/Symbain Accelerometer stuff. You can find a link here
Accelerometer data analysis
I would like to know if there are some libraries/algorithms/techniques (python, if at all possible) that help to extract features from accelerometer data (extracted from and android phone, btw), like periodicity of movements, energy of acceleration and the like. Has anyone done this kind of task before? Thank you very much in advance :)
[ "I dont know about android but a lot of work has been done on Python/Symbain Accelerometer stuff.\nYou can find a link here\n" ]
[ 0 ]
[]
[]
[ "accelerometer", "android", "data_analysis", "python" ]
stackoverflow_0002773668_accelerometer_android_data_analysis_python.txt
Q: Customizing Django form widgets? - Django I'm having a little problem here! I have discovered the following as being the globally accepted method for customizing Django admin field. from django import forms from django.utils.safestring import mark_safe class AdminImageWidget(forms.FileInput): """ A ImageField Widget for admin that shows a thumbnail. """ def __init__(self, attrs={}): super(AdminImageWidget, self).__init__(attrs) def render(self, name, value, attrs=None): output = [] if value and hasattr(value, "url"): output.append(('<a target="_blank" href="%s">' '<img src="%s" style="height: 28px;" /></a> ' % (value.url, value.url))) output.append(super(AdminImageWidget, self).render(name, value, attrs)) return mark_safe(u''.join(output)) I need to have access to other field of the model in order to decide how to display the field! For example: If I am keeping track of a value, let us call it "sales". If I wish to customize how sales is displayed depending on another field, let us call it "conversion rate". I have no obvious way of accessing the conversion rate field when overriding the sales widget! Any ideas to work around this would be highly appreciated! Thanks :) A: You are correct that the widgets themselves are independent. My first thought for doing something more complex is to either provide a custom admin template that does what you want, or to pass in a piece of javascript code to handle the interrelated fields (much like how prepopulated fields work). A: you probably want to use a custom form for the admin
Customizing Django form widgets? - Django
I'm having a little problem here! I have discovered the following as being the globally accepted method for customizing Django admin field. from django import forms from django.utils.safestring import mark_safe class AdminImageWidget(forms.FileInput): """ A ImageField Widget for admin that shows a thumbnail. """ def __init__(self, attrs={}): super(AdminImageWidget, self).__init__(attrs) def render(self, name, value, attrs=None): output = [] if value and hasattr(value, "url"): output.append(('<a target="_blank" href="%s">' '<img src="%s" style="height: 28px;" /></a> ' % (value.url, value.url))) output.append(super(AdminImageWidget, self).render(name, value, attrs)) return mark_safe(u''.join(output)) I need to have access to other field of the model in order to decide how to display the field! For example: If I am keeping track of a value, let us call it "sales". If I wish to customize how sales is displayed depending on another field, let us call it "conversion rate". I have no obvious way of accessing the conversion rate field when overriding the sales widget! Any ideas to work around this would be highly appreciated! Thanks :)
[ "You are correct that the widgets themselves are independent. My first thought for doing something more complex is to either provide a custom admin template that does what you want, or to pass in a piece of javascript code to handle the interrelated fields (much like how prepopulated fields work).\n", "you probably want to use a custom form for the admin\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_admin", "django_models", "field", "python" ]
stackoverflow_0002766839_django_django_admin_django_models_field_python.txt
Q: Python Locking Implementation (with threading module) This is probably a rudimentary question, but I'm new to threaded programming in Python and am not entirely sure what the correct practice is. Should I be creating a single lock object (either globally or being passed around) and using that everywhere that I need to do locking? Or, should I be creating multiple lock instances in each of the classes where I will be employing them. Take these 2 rudimentary code samples, which direction is best to go? The main difference being that a single lock instance is used in both class A and B in the second, while multiple instances are used in the first. Sample 1 class A(): def __init__(self, theList): self.theList = theList self.lock = threading.Lock() def poll(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.append(something) finally: self.lock.release() class B(threading.Thread): def __init__(self,theList): self.theList = theList self.lock = threading.Lock() self.start() def run(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.remove(something) finally: self.lock.release() if __name__ == "__main__": aList = [] for x in range(10): B(aList) A(aList).poll() Sample 2 class A(): def __init__(self, theList,lock): self.theList = theList self.lock = lock def poll(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.append(something) finally: self.lock.release() class B(threading.Thread): def __init__(self,theList,lock): self.theList = theList self.lock = lock self.start() def run(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.remove(something) finally: self.lock.release() if __name__ == "__main__": lock = threading.Lock() aList = [] for x in range(10): B(aList,lock) A(aList,lock).poll() A: In the general case, a single global lock is less efficient (more contention) but safer (no risk of deadlock) as long as it's a RLock (reentrant) rather than a plain Lock. The potential problems come when a thread that's executing while holding a lock tries to acquire another (or the same) lock, for example by calling another method that contains the acquire call. If a thread that's already holding a lock tries to acquire it again, it will block forever if the lock's a plain Lock, but proceed smoothly if it's a slightly more complex RLock -- that's why the latter is called reentrant, because the thread holding it can "enter" (acquire the lock) again. Essentially, a RLock keeps track of which thread holds it, and how many time the thread has acquired the lock, while the simpler Lock does not keep such information around. With multiple locks, the deadlock problem comes when one thread tries to acquire lock A then lock B, while another tries to acquire first lock B, then lock A. If that occurs, then sooner or later you'll be in a situation where the first lock holds A, the second one holds B, and each tries to acquire the lock that the other one is holding -- so both block forever. One way to prevent multiple-lock deadlocks is to make sure that locks are always acquired in the same order, whatever thread is doing the acquiring. However, when each instance has its own lock, that's exceedingly difficult to organize with any clarity and simplicity. A: If you use a separate lock object in each class then you run a risk of deadlocking, e.g. 
if one operation claims the lock for A and then claims the lock for B while a different operation claims B and then A. If you use a single lock then you're forcing code to single thread when different operations could be run in parallel. That isn't always as serious in Python (which has a global lock in any case) as in other languages, but say you were to hold a global lock while writing to a file Python would release the GIL but you'd have blocked everything else. So it's a tradeoff. I'd say go for little locks as that way you maximise the chance for parallel execution, but take care never to claim more than one lock at a time, and try not to hold onto a lock for any longer than you absolutely have to. So far as your specific examples go, the first one is just plain broken. If you lock operations on theList then you must use the same lock every time or you aren't locking anything. That may not matter here as list.append and list.remove are effectively atomic anyway, but if you do need to lock access to the list you need to be sure to use the same lock every time. The best way to do that is to hold the list and a lock as attributes of a class and force all access to the list to go through methods of the containing class. Then pass the container class around not the list or the lock.
Python Locking Implementation (with threading module)
This is probably a rudimentary question, but I'm new to threaded programming in Python and am not entirely sure what the correct practice is. Should I be creating a single lock object (either globally or being passed around) and using that everywhere that I need to do locking? Or, should I be creating multiple lock instances in each of the classes where I will be employing them. Take these 2 rudimentary code samples, which direction is best to go? The main difference being that a single lock instance is used in both class A and B in the second, while multiple instances are used in the first. Sample 1 class A(): def __init__(self, theList): self.theList = theList self.lock = threading.Lock() def poll(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.append(something) finally: self.lock.release() class B(threading.Thread): def __init__(self,theList): self.theList = theList self.lock = threading.Lock() self.start() def run(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.remove(something) finally: self.lock.release() if __name__ == "__main__": aList = [] for x in range(10): B(aList) A(aList).poll() Sample 2 class A(): def __init__(self, theList,lock): self.theList = theList self.lock = lock def poll(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.append(something) finally: self.lock.release() class B(threading.Thread): def __init__(self,theList,lock): self.theList = theList self.lock = lock self.start() def run(self): while True: # do some stuff that eventually needs to work with theList self.lock.acquire() try: self.theList.remove(something) finally: self.lock.release() if __name__ == "__main__": lock = threading.Lock() aList = [] for x in range(10): B(aList,lock) A(aList,lock).poll()
[ "In the general case, a single global lock is less efficient (more contention) but safer (no risk of deadlock) as long as it's a RLock (reentrant) rather than a plain Lock.\nThe potential problems come when a thread that's executing while holding a lock tries to acquire another (or the same) lock, for example by calling another method that contains the acquire call. If a thread that's already holding a lock tries to acquire it again, it will block forever if the lock's a plain Lock, but proceed smoothly if it's a slightly more complex RLock -- that's why the latter is called reentrant, because the thread holding it can \"enter\" (acquire the lock) again. Essentially, a RLock keeps track of which thread holds it, and how many time the thread has acquired the lock, while the simpler Lock does not keep such information around.\nWith multiple locks, the deadlock problem comes when one thread tries to acquire lock A then lock B, while another tries to acquire first lock B, then lock A. If that occurs, then sooner or later you'll be in a situation where the first lock holds A, the second one holds B, and each tries to acquire the lock that the other one is holding -- so both block forever.\nOne way to prevent multiple-lock deadlocks is to make sure that locks are always acquired in the same order, whatever thread is doing the acquiring. However, when each instance has its own lock, that's exceedingly difficult to organize with any clarity and simplicity.\n", "If you use a separate lock object in each class then you run a risk of deadlocking, e.g. if one operation claims the lock for A and then claims the lock for B while a different operation claims B and then A.\nIf you use a single lock then you're forcing code to single thread when different operations could be run in parallel. That isn't always as serious in Python (which has a global lock in any case) as in other languages, but say you were to hold a global lock while writing to a file Python would release the GIL but you'd have blocked everything else.\nSo it's a tradeoff. I'd say go for little locks as that way you maximise the chance for parallel execution, but take care never to claim more than one lock at a time, and try not to hold onto a lock for any longer than you absolutely have to.\nSo far as your specific examples go, the first one is just plain broken. If you lock operations on theList then you must use the same lock every time or you aren't locking anything. That may not matter here as list.append and list.remove are effectively atomic anyway, but if you do need to lock access to the list you need to be sure to use the same lock every time. The best way to do that is to hold the list and a lock as attributes of a class and force all access to the list to go through methods of the containing class. Then pass the container class around not the list or the lock.\n" ]
[ 9, 9 ]
[]
[]
[ "locking", "multithreading", "python" ]
stackoverflow_0002773935_locking_multithreading_python.txt
Q: Creating Python C module from Fortran sources on Ubuntu 10.04 LTS In a project I work on we use a Python C module compiled from Fortran with f2py. I've had no issues building it on Windows 7 32bit (using mingw32) and on the servers it's built on 32bit Linux. But I've recently installed Ubuntu 10.04 LTS 64bit on my laptop that I use for development, and when I build it I get a lot of warnings (even though I've apparently installed all gcc/fortran libraries/compilers), but it does finish the build. However when I try to use the built module in the application, most of it seems to run well but then it crashes with an error: * glibc detected * /home/botondus/Envs/gasit/bin/python: free(): invalid next size (fast): 0x0000000006a44760 *** Warnings on running f2py -c -m module_name ./fortran/source.f90 customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler Could not locate executable g77 Found executable /usr/bin/f77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 Could not locate executable pgf77 customize AbsoftFCompiler Could not locate executable f90 absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found customize NAGFCompiler Found executable /usr/bin/f95 customize VastFCompiler customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_ext I have tried building a 32bit version by installing the gfortran multilib packages and running f2py with -m32 option (but with no success): f2py -c -m module_name ./fortran/source.f90 --f77flags="-m32" --f90flags="-m32" Any suggestions on what I could try to either build 32bit version or correctly build the 64bit version? Edit: It looks like it crashes right at the end of a subroutine. The 'write' executes fine... which is strange. 
write(6,*)'Eh=',Eh end subroutine calcolo_involucro The full backtrace is very long and I'm not sure if it's any help, but here it is: *** glibc detected *** /home/botondus/Envs/gasit/bin/python: free(): invalid next size (fast): 0x0000000007884690 *** ======= Backtrace: ========= /lib/libc.so.6(+0x775b6)[0x7fe24f8f05b6] /lib/libc.so.6(cfree+0x73)[0x7fe24f8f6e53] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x4183c)[0x7fe24a18183c] /home/botondus/Envs/gasit/bin/python[0x46a50d] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x4fbd8)[0x7fe24a18fbd8] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x5aded)[0x7fe24a19aded] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x516e)[0x4a7c5e] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x5a60)[0x4a8550] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python[0x537620] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python[0x427dff] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python[0x477bff] /home/botondus/Envs/gasit/bin/python[0x46f47f] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4888)[0x4a7378] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python[0x537620] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_CallObjectWithKeywords+0x43)[0x4a1b03] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x2ee94)[0x7fe24a16ee94] /home/botondus/Envs/gasit/bin/python(_PyObject_Str+0x61)[0x454a81] /home/botondus/Envs/gasit/bin/python(PyObject_Str+0xa)[0x454b3a] /home/botondus/Envs/gasit/bin/python[0x461ad3] /home/botondus/Envs/gasit/bin/python[0x46f3b3] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4888)[0x4a7378] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x5a60)[0x4a8550] ======= Memory map: ======== 00400000-0061c000 r-xp 00000000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0081b000-0081c000 r--p 0021b000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0081c000-0087e000 rw-p 0021c000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0087e000-0088d000 rw-p 00000000 00:00 0 01877000-07a83000 rw-p 00000000 00:00 0 [heap] 7fe240000000-7fe240021000 rw-p 00000000 00:00 0 7fe240021000-7fe244000000 ---p 00000000 00:00 0 7fe247631000-7fe2476b1000 r-xp 00000000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2476b1000-7fe2478b1000 ---p 00080000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b1000-7fe2478b6000 r--p 00080000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b6000-7fe2478b7000 rw-p 00085000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b7000-7fe2478bb000 r-xp 00000000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe2478bb000-7fe247aba000 ---p 00004000 08:03 263882 
/usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247aba000-7fe247abb000 r--p 00003000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247abb000-7fe247abc000 rw-p 00004000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247abc000-7fe247abf000 r-xp 00000000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247abf000-7fe247cbf000 ---p 00003000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cbf000-7fe247cc0000 r--p 00003000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cc0000-7fe247cc1000 rw-p 00004000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cc1000-7fe247cc5000 r-xp 00000000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247cc5000-7fe247ec4000 ---p 00004000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec4000-7fe247ec5000 r--p 00003000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec5000-7fe247ec6000 rw-p 00004000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec6000-7fe24800c000 r-xp 00000000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe24800c000-7fe24820b000 ---p 00146000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe24820b000-7fe248213000 r--p 00145000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe248213000-7fe248215000 rw-p 0014d000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe248215000-7fe248216000 rw-p 00000000 00:00 0 7fe248216000-7fe248229000 r-xp 00000000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248229000-7fe248428000 ---p 00013000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248428000-7fe248429000 r--p 00012000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248429000-7fe24842a000 rw-p 00013000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe24842a000-7fe248464000 r-xp 00000000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248464000-7fe248663000 ---p 0003a000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248663000-7fe248664000 r--p 00039000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248664000-7fe248665000 rw-p 0003a000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248665000-7fe24876e000 r-xp 00000000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24876e000-7fe24896d000 ---p 00109000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24896d000-7fe24896e000 r--p 00108000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24896e000-7fe248999000 rw-p 00109000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe248999000-7fe2489a7000 rw-p 00000000 00:00 0 7fe2489a7000-7fe2489bd000 r-xp 00000000 08:03 132934 /lib/libgcc_s.so.1 A: Fortunately managed to solve this. It seems I didn't notice that the numpy package version in the repositories for Ubuntu 10.04 are only v1.3.0. I removed numpy, then built v1.4.1 from source. After that re-running f2py did give the same warnings, however using the module does not produce the crash anymore.
Creating Python C module from Fortran sources on Ubuntu 10.04 LTS
In a project I work on we use a Python C module compiled from Fortran with f2py. I've had no issues building it on Windows 7 32bit (using mingw32) and on the servers it's built on 32bit Linux. But I've recently installed Ubuntu 10.04 LTS 64bit on my laptop that I use for development, and when I build it I get a lot of warnings (even though I've apparently installed all gcc/fortran libraries/compilers), but it does finish the build. However when I try to use the built module in the application, most of it seems to run well but then it crashes with an error: * glibc detected * /home/botondus/Envs/gasit/bin/python: free(): invalid next size (fast): 0x0000000006a44760 *** Warnings on running f2py -c -m module_name ./fortran/source.f90 customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler Could not locate executable g77 Found executable /usr/bin/f77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 Could not locate executable pgf77 customize AbsoftFCompiler Could not locate executable f90 absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found absoft: no Fortran 90 compiler found customize NAGFCompiler Found executable /usr/bin/f95 customize VastFCompiler customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_ext I have tried building a 32bit version by installing the gfortran multilib packages and running f2py with -m32 option (but with no success): f2py -c -m module_name ./fortran/source.f90 --f77flags="-m32" --f90flags="-m32" Any suggestions on what I could try to either build 32bit version or correctly build the 64bit version? Edit: It looks like it crashes right at the end of a subroutine. The 'write' executes fine... which is strange. 
write(6,*)'Eh=',Eh end subroutine calcolo_involucro The full backtrace is very long and I'm not sure if it's any help, but here it is: *** glibc detected *** /home/botondus/Envs/gasit/bin/python: free(): invalid next size (fast): 0x0000000007884690 *** ======= Backtrace: ========= /lib/libc.so.6(+0x775b6)[0x7fe24f8f05b6] /lib/libc.so.6(cfree+0x73)[0x7fe24f8f6e53] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x4183c)[0x7fe24a18183c] /home/botondus/Envs/gasit/bin/python[0x46a50d] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x4fbd8)[0x7fe24a18fbd8] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x5aded)[0x7fe24a19aded] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x516e)[0x4a7c5e] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x5a60)[0x4a8550] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python[0x537620] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python[0x427dff] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python[0x477bff] /home/botondus/Envs/gasit/bin/python[0x46f47f] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4888)[0x4a7378] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python[0x537620] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_CallObjectWithKeywords+0x43)[0x4a1b03] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x2ee94)[0x7fe24a16ee94] /home/botondus/Envs/gasit/bin/python(_PyObject_Str+0x61)[0x454a81] /home/botondus/Envs/gasit/bin/python(PyObject_Str+0xa)[0x454b3a] /home/botondus/Envs/gasit/bin/python[0x461ad3] /home/botondus/Envs/gasit/bin/python[0x46f3b3] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4888)[0x4a7378] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x5a60)[0x4a8550] ======= Memory map: ======== 00400000-0061c000 r-xp 00000000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0081b000-0081c000 r--p 0021b000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0081c000-0087e000 rw-p 0021c000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0087e000-0088d000 rw-p 00000000 00:00 0 01877000-07a83000 rw-p 00000000 00:00 0 [heap] 7fe240000000-7fe240021000 rw-p 00000000 00:00 0 7fe240021000-7fe244000000 ---p 00000000 00:00 0 7fe247631000-7fe2476b1000 r-xp 00000000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2476b1000-7fe2478b1000 ---p 00080000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b1000-7fe2478b6000 r--p 00080000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b6000-7fe2478b7000 rw-p 00085000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b7000-7fe2478bb000 r-xp 00000000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe2478bb000-7fe247aba000 ---p 00004000 08:03 263882 
/usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247aba000-7fe247abb000 r--p 00003000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247abb000-7fe247abc000 rw-p 00004000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247abc000-7fe247abf000 r-xp 00000000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247abf000-7fe247cbf000 ---p 00003000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cbf000-7fe247cc0000 r--p 00003000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cc0000-7fe247cc1000 rw-p 00004000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cc1000-7fe247cc5000 r-xp 00000000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247cc5000-7fe247ec4000 ---p 00004000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec4000-7fe247ec5000 r--p 00003000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec5000-7fe247ec6000 rw-p 00004000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec6000-7fe24800c000 r-xp 00000000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe24800c000-7fe24820b000 ---p 00146000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe24820b000-7fe248213000 r--p 00145000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe248213000-7fe248215000 rw-p 0014d000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe248215000-7fe248216000 rw-p 00000000 00:00 0 7fe248216000-7fe248229000 r-xp 00000000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248229000-7fe248428000 ---p 00013000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248428000-7fe248429000 r--p 00012000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248429000-7fe24842a000 rw-p 00013000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe24842a000-7fe248464000 r-xp 00000000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248464000-7fe248663000 ---p 0003a000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248663000-7fe248664000 r--p 00039000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248664000-7fe248665000 rw-p 0003a000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248665000-7fe24876e000 r-xp 00000000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24876e000-7fe24896d000 ---p 00109000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24896d000-7fe24896e000 r--p 00108000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24896e000-7fe248999000 rw-p 00109000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe248999000-7fe2489a7000 rw-p 00000000 00:00 0 7fe2489a7000-7fe2489bd000 r-xp 00000000 08:03 132934 /lib/libgcc_s.so.1
[ "Fortunately managed to solve this.\nIt seems I didn't notice that the numpy package version in the repositories for Ubuntu 10.04 are only v1.3.0. I removed numpy, then built v1.4.1 from source.\nAfter that re-running f2py did give the same warnings, however using the module does not produce the crash anymore.\n" ]
[ 0 ]
[]
[]
[ "32bit_64bit", "f2py", "fortran", "python" ]
stackoverflow_0002768404_32bit_64bit_f2py_fortran_python.txt
Q: Internet Explorer URL blocking with Python? I need to be able to block the URLs that are stored in a text file on the hard disk, using Python. If the URL the user tries to visit is in the file, they should be redirected to another page instead. How is this done? A: There are several proxies written in Python: you can pick one of them and modify it so that it proxies most URLs normally but redirects those in your text file. You'll also need to set IE to use that proxy, of course. A: Doing this at the machine level is a weak solution; it would be pretty easy for a technically inclined user to bypass. Even with a server-side proxy it will be very easy to bypass unless you firewall normal HTTP traffic; at a bare minimum, block ports 80 and 443. You could program a proxy in Python as Alex suggested, but this is a pretty common problem and there are plenty of off-the-shelf solutions. That being said, I think that restricting web access will do nothing but aggravate your users.
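To make the proxy suggestion above concrete, here is a minimal sketch using only Python 2's standard library. The blocklist file name, redirect target, and port are illustrative assumptions, and this naive relay ignores request headers, POST requests, and HTTPS; it is a starting point, not the off-the-shelf solution the second answer recommends:

import BaseHTTPServer
import urllib2

# Hypothetical blocklist (one URL per line) and hypothetical landing page.
BLOCKED = set(line.strip() for line in open('blocked_urls.txt') if line.strip())
REDIRECT_TO = 'http://intranet/blocked.html'

class BlockingProxy(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # When a browser is configured to use a proxy, it puts the full URL
        # in the request line, so self.path is the complete URL here.
        if self.path in BLOCKED:
            self.send_response(302)                   # redirect blocked URLs
            self.send_header('Location', REDIRECT_TO)
            self.end_headers()
        else:
            body = urllib2.urlopen(self.path).read()  # naively fetch and relay
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

BaseHTTPServer.HTTPServer(('', 8080), BlockingProxy).serve_forever()

IE would then be pointed at localhost:8080 in its connection settings, which is exactly the part a determined user can undo, as the second answer warns.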
Internet Explorer URL blocking with Python?
I need to be able to block the urls that are stored in a text file on the hard disk using Python. If the url the user tries to visit is in the file, it redirects them to another page instead. How is this done?
[ "There are several proxies written in Python: you can pick one of them and modify it so that it proxies most URLs normally but redirects those in your text file. You'll also need to set IE to use that proxy, of course.\n", "Doing this at the machine level is a weak solution, it would be pretty easy for a technically inclined user to bypass.\nEven with a server side proxy it will be very easy to bypass unless you firewall normal http traffic, at a bare minimum block ports 80, 443.\nYou could program a proxy in python as Alex suggested, but this is a pretty common problem and there are plenty of off the shelf solutions.\nThat being said, I think that restricting web access will do nothing but aggravate your users.\n" ]
[ 3, 1 ]
[]
[]
[ "internet_explorer", "python", "url", "windows" ]
stackoverflow_0002774006_internet_explorer_python_url_windows.txt
Q: Python/Numpy: Divide array I have some data represented in a 1300x1341 matrix. I would like to split this matrix into several pieces (e.g. 9) so that I can loop over and process them. The data needs to stay ordered in the sense that x[0,1] stays below (or above if you like) x[0,0] and beside x[1,1]. Just as if you had imaged the data, you could draw 2 vertical and 2 horizontal lines over the image to illustrate the 9 parts. If I use numpy's reshape (e.g. matrix.reshape(9,260,745) or any other combination of 9,260,745) it doesn't yield the required structure, since the above-mentioned ordering is lost... Did I misunderstand the reshape method, or can it be done this way? What other pythonic/numpy way is there to do this? A: Sounds like you need to use numpy.split(), which has its documentation here ... or perhaps its sibling numpy.array_split() here. They are for splitting an array into equal subsections without re-arranging the numbers like reshape does. I haven't tested this, but something like: numpy.array_split(numpy.zeros((1300,1341)), 9) should do the trick. A: reshape, to quote its docs, Gives a new shape to an array without changing its data. In other words, it does not move the array's data around at all -- it just affects the array's dimensions. You, on the other hand, seem to require slicing; again quoting: It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. So for example thearray[0:260, 0:745] is the "upper leftmost" part, thearray[260:520, 0:745] the upper left-of-center part, and so forth. You could have references to the various parts in a list (or dict with appropriate keys) to process them separately.
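A sketch of the two-axis split for the 3x3 case described above; array_split (rather than split) is used because 1300 does not divide evenly by 3:

import numpy as np

x = np.zeros((1300, 1341))   # stand-in for the real data
# Cut into 3 bands of rows, then cut each band into 3 blocks of columns;
# the resulting list is ordered row-major, so the spatial ordering is kept.
blocks = [block
          for band in np.array_split(x, 3, axis=0)
          for block in np.array_split(band, 3, axis=1)]
print len(blocks), blocks[0].shape   # 9 blocks, the first one 434x447

Each element of blocks is one tile of the matrix, so looping over blocks processes the nine parts in reading order.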
Python/Numpy: Divide array
I have some data represented in a 1300x1341 matrix. I would like to split this matrix in several pieces (e.g. 9) so that I can loop over and process them. The data needs to stay ordered in the sense that x[0,1] stays below (or above if you like) x[0,0] and besides x[1,1]. Just like if you had imaged the data, you could draw 2 vertical and 2 horizontal lines over the image to illustrate the 9 parts. If I use numpys reshape (eg. matrix.reshape(9,260,745) or any other combination of 9,260,745) it doesn't yield the required structure since the above mentioned ordering is lost... Did I misunderstand the reshape method or can it be done this way? What other pythonic/numpy way is there to do this?
[ "Sounds like you need to use numpy.split() which has its documentation here ... or perhaps its sibling numpy.array_split() here. They are for splitting an array into equal subsections without re-arranging the numbers like reshape does,\nI haven't tested this but something like:\nnumpy.array_split(numpy.zeros((1300,1341)), 9)\n\nshould do the trick.\n", "reshape, to quote its docs,\n\nGives a new shape to an array without\n changing its data.\n\nIn other words, it does not move the array's data around at all -- it just affects the array's dimension. You, on the other hand, seem to require slicing; again quoting:\n\nIt is possible to slice and stride\n arrays to extract arrays of the same\n number of dimensions, but of different\n sizes than the original. The slicing\n and striding works exactly the same\n way it does for lists and tuples\n except that they can be applied to\n multiple dimensions as well.\n\nSo for example thearray[0:260, 0:745] is the \"upper leftmost part, thearray[260:520, 0:745] the upper left-of-center part, and so forth. You could have references to the various parts in a list (or dict with appropriate keys) to process them separately.\n" ]
[ 5, 2 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0002773632_numpy_python.txt
Q: Comparing dicts and update a list of result I have a list of dicts and I want to compare each dict in that list with a dict in a resulting list, add it to the result list if it's not there, and if it's there, update a counter associated with that dict. At first I wanted to use the solution described at Python : List of dict, if exists increment a dict value, if not append a new dict but I got an error because one dict cannot be used as a key in another dict. So the data structure I opted for is a list where each entry is a dict and an int: r = [[{'src': '', 'dst': '', 'cmd': ''}, 0]] The original dataset (that should be compared to the resulting dataset) is a list of dicts: d1 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'} d2 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'} d3 = {'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'} d4 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'} o = [d1, d2, d3, d4] The result should be: r = [[{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}, 2], [{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}, 1], [{'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}, 1]] What is the best way to accomplish this? I have a few code examples, but none is really good and most do not work correctly. Thanks for any input on this! UPDATE The final code after Tamás's comments is: from collections import namedtuple, defaultdict DataClass = namedtuple("DataClass", "src dst cmd") d1 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1') d2 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2') d3 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1') d4 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1') ds = d1, d2, d3, d4 r = defaultdict(int) for d in ds: r[d] += 1 print "list to compare" for d in ds: print d print "result after merge" for k, v in r.iteritems(): print("%s: %s" % (k, v)) A: Well, if your original dicts contain only src, dst and cmd, you can use named tuples instead, which are hashable, so you can use named tuples in a dict as keys. from collections import namedtuple DataClass = namedtuple("DataClass", "src dst cmd") d1 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1') (Sorry for the silly class name; since I don't know what your dicts represent, I couldn't come up with a better name). You can even create DataClass instances from dicts: d1 = DataClass(**d1_as_dict) At this point, your main counting loop simplifies to this: from collections import defaultdict, namedtuple r = defaultdict(int) for obj in [d1, d2, d3, d4]: r[obj] += 1 If, for some reason, you are stuck with Python <= 2.5, there is a drop-in namedtuple replacement class here. A: The namedtuple is an excellent idea, if applicable. But if you want to stick with dicts, that is also of course possible, just substantially less efficient. For example: def addadict(r, newd): for i, (d, count) in enumerate(r): if d == newd: r[i] = [d, count+1] break else: r.append([newd, 1])
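If the dicts cannot be replaced by named tuples (for example, they arrive from code you don't control), one alternative sketch is to count on a hashable snapshot of each dict; this assumes the dict values are themselves hashable, which holds for the strings above:

from collections import defaultdict

counts = defaultdict(int)
for d in o:                                  # o is the original list of dicts
    counts[frozenset(d.iteritems())] += 1    # a frozenset of items is hashable

# Rebuild the requested list-of-[dict, count] shape.
r = [[dict(items), n] for items, n in counts.iteritems()]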
Comparing dicts and update a list of result
I have a list of dicts and I want to compare each dict in that list with a dict in a resulting list, add it to the result list if it's not there, and if it's there, update a counter associated with that dict. At first I wanted to use the solution described at Python : List of dict, if exists increment a dict value, if not append a new dict but I got an error where one dict can not be used as a key to another dict. So the data structure I opted for is a list where each entry is a dict and an int: r = [[{'src': '', 'dst': '', 'cmd': ''}, 0]] The original dataset (that should be compared to the resulting dataset) is a list of dicts: d1 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'} d2 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'} d3 = {'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'} d4 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'} o = [d1, d2, d3, d4] The result should be: r = [[{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}, 2], [{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}, 1], [{'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}, 1]] What is the best way to accomplish this? I have a few code examples but none is really good and most is not working correctly. Thanks for any input on this! UPDATE The final code after Tamås comments is: from collections import namedtuple, defaultdict DataClass = namedtuple("DataClass", "src dst cmd") d1 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1') d2 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2') d3 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1') d4 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1') ds = d1, d2, d3, d4 r = defaultdict(int) for d in ds: r[d] += 1 print "list to compare" for d in ds: print d print "result after merge" for k, v in r.iteritems(): print("%s: %s" % (k, v))
[ "Well, if your original dicts contain only src, dst and cmd, you can use named tuples instead, which are hashable, so you can use named tuples in a dict as keys.\nfrom collections import namedtuple\n\nDataClass = namedtuple(\"DataClass\", \"src dst cmd\")\nd1 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1')\n\n(Sorry for the silly class name; since I don't know what your dicts represent, I couldn't come up with a better name). You can even create DataClass instances from dicts:\nd1 = DataClass(**d1_as_dict)\n\nAt this point, your main counting loop simplifies to this:\nfrom collections import defaultdict, namedtuple\n\nr = defaultdict(int)\nfor obj in [d1, d2, d3, d4]:\n r[obj] += 1\n\nIf, for some reason, you are stuck with Python <= 2.5, there is a drop-in namedtuple replacement class here.\n", "The namedtuple is an excellent idea, if applicable. But if you want to stick with dicts, that is also of course possible, just substantially less efficient. For example:\ndef addadict(r, newd):\n for i, (d, count) in enumerate(r):\n if d == newd:\n r[i] = [d, count+1]\n break\n else:\n r.append([newd, 1])\n\n" ]
[ 1, 1 ]
[]
[]
[ "compare", "dictionary", "python" ]
stackoverflow_0002773522_compare_dictionary_python.txt
Q: Can distribute setuptools be used to port packages implemented in python 2 to 3 I found some info on porting packages from Python 2 to 3 using distribute/setuptools at the link below. http://packages.python.org/distribute/python3.html I have a C API which can be built using Python 2.x, but I need to build it for Python 3.x. Can it be done using Distribute? Does anyone have any ideas on this? A: No, it cannot be done using Distribute. Distribute just calls the 2to3 script in the build phase, but 2to3 can convert only between Python 2.x source files and Python 3.x source files. For the C API, you have to do it the hard way by manually tweaking your code to compile with both Python APIs. A very incomplete list of C API changes between Python 2.x and Python 3.x is to be found here. The same document also outlines the major differences between Python 2.x and 3.x on the Python source code level. A: Distribute uses Python's 2to3 tool to automatically (try to) convert Python 2 code to Python 3 code. However, that only works for code written in Python. C code needs to be ported by hand. The good news is that Python's C API has not changed much between Python 2.6 and 3.1. The main difference is that Python 3 now uses Unicode for all strings and has a separate bytes type for handling raw binary data.
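For the pure-Python half of such a package, the Distribute hook mentioned in the answers is switched on from setup.py; a minimal sketch (package and file names are placeholders) looks like the following, while the C sources still have to be ported by hand:

from setuptools import setup, Extension

setup(
    name='mypackage',                       # hypothetical package name
    version='1.0',
    packages=['mypackage'],
    ext_modules=[Extension('mypackage._capi', ['src/capi.c'])],
    use_2to3=True,  # Distribute runs 2to3 over the .py files at build time
)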
Can distribute setuptools be used to port packages implemented in python 2 to 3
Found some info on porting packages from python 2 to 3 using distribute setuptools in below link. http://packages.python.org/distribute/python3.html I have a C api which could be build using python 2.x, but i need to build it in python 3.x. Can it be done using distribute. Do anyone have idea on this?
[ "No, it cannot be done using Distribute. Distribute just calls the 2to3 script in the build phase, but 2to3 can convert only between Python 2.x source files and Python 3.x source files. For the C API, you have to do it the hard way by manually tweaking your code to compile with both Python APIs.\nA very incomplete list of C API changes between Python 2.x and Python 3.x is to be found here. The same document also outlines the major differences between Python 2.x and 3.x on the Python source code level.\n", "distriubte uses Python's 2to3 tool to automatically (try to) convert Python 2 code to Python 3 code. However, that only works for code written in Python. C code needs to be ported by hand.\nThe good news is that Python's C API has not changed much between Python 2.6 and 3.1. The main difference is that Python 3 now uses Unicode for all strings and has a separate bytes type for handling raw binary data.\n" ]
[ 3, 0 ]
[]
[]
[ "build", "c", "package", "python", "python_3.x" ]
stackoverflow_0002774654_build_c_package_python_python_3.x.txt
Q: How do I strip the comma from the end of a string in Python? How do I strip comma from the end of a string? I tried awk = subprocess.Popen([r"awk", "{print $10}"], stdin=subprocess.PIPE) awk_stdin = awk.communicate(uptime_stdout)[0] print awk_stdin temp = awk_stdin t = temp.strip(",") also tried t = temp.rstrip(","), both don't work. This is the code: uptime = subprocess.Popen([r"uptime"], stdout=subprocess.PIPE) uptime_stdout = uptime.communicate()[0] print uptime_stdout awk = subprocess.Popen([r"awk", "{print $11}"], stdin=subprocess.PIPE) awk_stdin = awk.communicate(uptime_stdout)[0] print repr(awk_stdin) temp = awk_stdin tem = temp.rstrip("\n") logfile = open('/usr/src/python/uptime.log', 'a') logfile.write(tem + "\n") logfile.close() This is the output: 17:07:32 up 27 days, 37 min, 2 users, load average: 5.23, 5.09, 4.79 5.23, None Traceback (most recent call last): File "uptime.py", line 21, in ? tem = temp.rstrip("\n") AttributeError: 'NoneType' object has no attribute 'rstrip' A: Err, how about the venerable: if len(str) > 0: if str[-1:] == ",": str = str[:-1] On second thought, rstrip itself should work fine, so there's something about the string you're getting from awk that's not quite what you expect. We'll need to see that. I suspect it's because your string doesn't actually end with a comma. When you run: str = "hello," print str.rstrip(",") str = "hello,\n" print str.rstrip(",") print str.rstrip(",\n") the output is: hello hello, hello In other words, if there's a newline at the end of the string as well as a comma, you'll need to rstrip both characters with ",\n". Okay, based on your comment, here's what you're trying: uptime = subprocess.Popen([r"uptime"], stdout=subprocess.PIPE) uptime_stdout = uptime.communicate()[0] print uptime_stdout awk = subprocess.Popen([r"awk", "{print $11}"], stdin=subprocess.PIPE) awk_stdin = awk.communicate(uptime_stdout)[0] print repr(awk_stdin) temp = awk_stdin tem = temp.rstrip("\n") logfile = open('/usr/src/python/uptime.log', 'a') logfile.write(tem + "\n") logfile.close() What are you actually getting from your two print statements and what is being appended to the log file? My particular uptime doesn't have a $11: 23:43:10 up 5:10, 0 users, load average: 0.00, 0.00, 0.00 but yours may be different. Still, we need to see the output of your script. A: When you say awk = subprocess.Popen([r"awk", "{print $11}"], stdin=subprocess.PIPE) awk_stdout = awk.communicate(uptime_stdout)[0] then the output of the awk process is printed to stdout (e.g. a terminal). awk_stdout is set to None. awk_stdout.rstrip('\n') raises an AttributeError because None has no attribute called rstrip. When you say awk = subprocess.Popen([r"awk", "{print $11}"], stdin=subprocess.PIPE, stdout=subprocess.PIPE) awk_stdout = awk.communicate(uptime_stdout)[0] then nothing is printed to stdout (e.g. the terminal), and awk_stdout gets the output of the awk command as a string. A: Remove all commas at the end of the string: str = '1234,,,' str = str.rstrip(',') A: I think you'll find that awk_stdin actually ends with a newline (print repr(awk_stdin) to show it clearly), so you'll need to rstrip that away, before rstrip'ping the comma (or, you could do both at once with a RE, but the basic idea is that the comma isn't actually the last character in that string!-). 
A: If you have whitespace/non-printing characters, then try something like this: a_string = 'abcdef,\n' a_string.strip().rstrip(',') if a_string.strip().endswith(',') else a_string.strip() This saves you the trouble of checking string lengths and figuring out slice indexes. Of course, if you do not need to do anything different for strings that do not end in a comma, then you could just do: a_string.strip().rstrip(',')
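Pulling the answers together: the traceback in the question happens because the second Popen lacks stdout=subprocess.PIPE, so communicate() returns None, and the awk field carries a trailing newline as well as the comma. A corrected sketch of the pipeline (field numbers in uptime output vary between systems, as noted above):

import subprocess

uptime = subprocess.Popen(["uptime"], stdout=subprocess.PIPE)
uptime_stdout = uptime.communicate()[0]
awk = subprocess.Popen(["awk", "{print $11}"], stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE)  # capture awk's output too
awk_stdout = awk.communicate(uptime_stdout)[0]
value = awk_stdout.rstrip(",\n")  # strips any mix of trailing commas/newlines
print repr(value)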
How do I strip the comma from the end of a string in Python?
How do I strip comma from the end of a string? I tried awk = subprocess.Popen([r"awk", "{print $10}"], stdin=subprocess.PIPE) awk_stdin = awk.communicate(uptime_stdout)[0] print awk_stdin temp = awk_stdin t = temp.strip(",") also tried t = temp.rstrip(","), both don't work. This is the code: uptime = subprocess.Popen([r"uptime"], stdout=subprocess.PIPE) uptime_stdout = uptime.communicate()[0] print uptime_stdout awk = subprocess.Popen([r"awk", "{print $11}"], stdin=subprocess.PIPE) awk_stdin = awk.communicate(uptime_stdout)[0] print repr(awk_stdin) temp = awk_stdin tem = temp.rstrip("\n") logfile = open('/usr/src/python/uptime.log', 'a') logfile.write(tem + "\n") logfile.close() This is the output: 17:07:32 up 27 days, 37 min, 2 users, load average: 5.23, 5.09, 4.79 5.23, None Traceback (most recent call last): File "uptime.py", line 21, in ? tem = temp.rstrip("\n") AttributeError: 'NoneType' object has no attribute 'rstrip'
[ "Err, how about the venerable:\nif len(str) > 0:\n if str[-1:] == \",\":\n str = str[:-1]\n\nOn second thought, rstrip itself should work fine, so there's something about the string you're getting from awk that's not quite what you expect. We'll need to see that.\n\nI suspect it's because your string doesn't actually end with a comma. When you run:\nstr = \"hello,\"\nprint str.rstrip(\",\")\n\nstr = \"hello,\\n\"\nprint str.rstrip(\",\")\nprint str.rstrip(\",\\n\")\n\nthe output is:\nhello\nhello,\n\nhello\n\nIn other words, if there's a newline at the end of the string as well as a comma, you'll need to rstrip both characters with \",\\n\".\n\nOkay, based on your comment, here's what you're trying:\nuptime = subprocess.Popen([r\"uptime\"], stdout=subprocess.PIPE)\nuptime_stdout = uptime.communicate()[0]\nprint uptime_stdout\nawk = subprocess.Popen([r\"awk\", \"{print $11}\"], stdin=subprocess.PIPE)\nawk_stdin = awk.communicate(uptime_stdout)[0]\nprint repr(awk_stdin)\ntemp = awk_stdin\ntem = temp.rstrip(\"\\n\")\nlogfile = open('/usr/src/python/uptime.log', 'a')\nlogfile.write(tem + \"\\n\")\nlogfile.close()\n\nWhat are you actually getting from your two print statements and what is being appended to the log file?\nMy particular uptime doesn't have a $11:\n23:43:10 up 5:10, 0 users, load average: 0.00, 0.00, 0.00\n\nbut yours may be different.\nStill, we need to see the output of your script.\n", "When you say\nawk = subprocess.Popen([r\"awk\", \"{print $11}\"], stdin=subprocess.PIPE)\nawk_stdout = awk.communicate(uptime_stdout)[0]\n\nthen the output of the awk process is printed to stdout (e.g. a terminal).\nawk_stdout is set to None. awk_stdout.rstrip('\\n') raises an AttributeError because None has no attribute called rstrip.\nWhen you say\nawk = subprocess.Popen([r\"awk\", \"{print $11}\"], stdin=subprocess.PIPE,\n stdout=subprocess.PIPE)\nawk_stdout = awk.communicate(uptime_stdout)[0]\n\nthen nothing is printed to stdout (e.g. the terminal), and awk_stdout gets the output of the awk command as a string.\n", "Remove all commas at the end of the string:\nstr = '1234,,,'\nstr = str.rstrip(',')\n\n", "I think you'll find that awk_stdin actually ends with a newline (print repr(awk_stdin) to show it clearly), so you'll need to rstrip that away, before rstrip'ping the comma (or, you could do both at once with a RE, but the basic idea is that the comma isn't actually the last character in that string!-).\n", "If you have whitespace/non printing characters then try something like this:\na_string = 'abcdef,\\n'\na_string.strip().rstrip(',') if a_string.strip().endswith(',') else a_string.strip()\n\nsaves you the trouble of checking string lengths and figuring out slice indexes.\nOf course if you do not need to do anything different for strings that do not end in a comma then you could just do:\na_string.strip().rstrip(',')\n\n" ]
[ 7, 7, 2, 1, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0002774558_python_string.txt
Q: merging indexed array in Python Suppose that I have two numpy arrays of the form x = [[1,2] [2,4] [3,6] [4,NaN] [5,10]] y = [[0,-5] [1,0] [2,5] [5,20] [6,25]] is there an efficient way to merge them such that I have xmy = [[0, NaN, -5 ] [1, 2, 0 ] [2, 4, 5 ] [3, 6, NaN] [4, NaN, NaN] [5, 10, 20 ] [6, NaN, 25 ] I can implement a simple function using search to find the index but this is not elegant and potentially inefficient for a lot of arrays and large dimensions. Any pointer is appreciated. A: See numpy.lib.recfunctions.join_by It only works on structured arrays or recarrays, so there are a couple of kinks. First you need to be at least somewhat familiar with structured arrays. See here if you're not. import numpy as np import numpy.lib.recfunctions # Define the starting arrays as structured arrays with two fields ('key' and 'field') dtype = [('key', np.int), ('field', np.float)] x = np.array([(1, 2), (2, 4), (3, 6), (4, np.NaN), (5, 10)], dtype=dtype) y = np.array([(0, -5), (1, 0), (2, 5), (5, 20), (6, 25)], dtype=dtype) # You want an outer join, rather than the default inner join # (all values are returned, not just ones with a common key) join = np.lib.recfunctions.join_by('key', x, y, jointype='outer') # Now we have a structured array with three fields: 'key', 'field1', and 'field2' # (since 'field' was in both arrays, it renamed x['field'] to 'field1', and # y['field'] to 'field2') # This returns a masked array, if you want it filled with # NaN's, do the following... join.fill_value = np.NaN join = join.filled() # Just displaying it... Keep in mind that as a structured array, # it has one dimension, where each row contains the 3 fields for row in join: print row This outputs: (0, nan, -5.0) (1, 2.0, 0.0) (2, 4.0, 5.0) (3, 6.0, nan) (4, nan, nan) (5, 10.0, 20.0) (6, nan, 25.0) Hope that helps! Edit1: Added example Edit2: Really shouldn't join with floats... Changed 'key' field to an int.
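If the structured-array machinery above feels heavy, a plain-dictionary sketch of the same outer join (assuming two-column arrays keyed on the first column, as in the question, and bearing in mind the note above about joining on floats) is:

import numpy as np

dx = dict((row[0], row[1]) for row in x)  # key -> value for each input array
dy = dict((row[0], row[1]) for row in y)
keys = sorted(set(dx) | set(dy))          # union of keys gives the outer join
xmy = np.array([[k, dx.get(k, np.nan), dy.get(k, np.nan)] for k in keys])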
merging indexed array in Python
Suppose that I have two numpy arrays of the form x = [[1,2] [2,4] [3,6] [4,NaN] [5,10]] y = [[0,-5] [1,0] [2,5] [5,20] [6,25]] is there an efficient way to merge them such that I have xmy = [[0, NaN, -5 ] [1, 2, 0 ] [2, 4, 5 ] [3, 6, NaN] [4, NaN, NaN] [5, 10, 20 ] [6, NaN, 25 ] I can implement a simple function using search to find the index but this is not elegant and potentially inefficient for a lot of arrays and large dimensions. Any pointer is appreciated.
[ "See numpy.lib.recfunctions.join_by \nIt only works on structured arrays or recarrays, so there are a couple of kinks. \nFirst you need to be at least somewhat familiar with structured arrays. See here if you're not.\nimport numpy as np\nimport numpy.lib.recfunctions\n\n# Define the starting arrays as structured arrays with two fields ('key' and 'field')\ndtype = [('key', np.int), ('field', np.float)]\nx = np.array([(1, 2),\n (2, 4),\n (3, 6),\n (4, np.NaN),\n (5, 10)],\n dtype=dtype)\n\ny = np.array([(0, -5),\n (1, 0),\n (2, 5),\n (5, 20),\n (6, 25)],\n dtype=dtype)\n\n# You want an outer join, rather than the default inner join\n# (all values are returned, not just ones with a common key)\njoin = np.lib.recfunctions.join_by('key', x, y, jointype='outer')\n\n# Now we have a structured array with three fields: 'key', 'field1', and 'field2'\n# (since 'field' was in both arrays, it renamed x['field'] to 'field1', and\n# y['field'] to 'field2')\n\n# This returns a masked array, if you want it filled with\n# NaN's, do the following...\njoin.fill_value = np.NaN\njoin = join.filled()\n\n# Just displaying it... Keep in mind that as a structured array,\n# it has one dimension, where each row contains the 3 fields\nfor row in join: \n print row\n\nThis outputs:\n(0, nan, -5.0)\n(1, 2.0, 0.0)\n(2, 4.0, 5.0)\n(3, 6.0, nan)\n(4, nan, nan)\n(5, 10.0, 20.0)\n(6, nan, 25.0)\n\nHope that helps!\nEdit1: Added example\nEdit2: Really shouldn't join with floats... Changed 'key' field to an int.\n" ]
[ 10 ]
[]
[]
[ "arrays", "numpy", "python", "scipy" ]
stackoverflow_0002774949_arrays_numpy_python_scipy.txt
Q: Intellisense on custom types in Iron Python I'm just starting to play around with IronPython and am having a hard time using it with custom types created in C#. I can get IronPython to load in assemblies from C# classes, but I'm struggling without the help of IntelliSense. If I have a class in C# as defined below, how can I make it so that IronPython will be able to see the methods/properties that are available in it? public class Person { public string Name { get; set; } public int Age { get; set; } public double Weight { get; set; } public double Height { get; set; } public double CalculateBMI() { return Weight/Math.Pow(Height, 2); } } In IronPython I'd instantiate a Person object as follows: newPerson = Person() newPerson.Name = 'John' newPerson.Age = 25 newPerson.Weight = 75 newPerson.Height = 1.70 newPerson.CalculateBMI() The thing that is annoying me is that I want to be able to say newPerson = Person() And then be able to see all the methods and properties associated with the Person object whenever I type: newPerson. Does anyone have any ideas whether this can be done? A: If you want to do it from an editor/IDE, IronPython Tools for Visual Studio has that capability (and much more). If you don't have VS 2010 Pro, you can install it into the Integrated Shell. If you want to do it from the console, I don't believe that it's possible yet.
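Short of IDE support, one console-level fallback is Python's own introspection, which IronPython applies to CLR types as well; this is a sketch assuming the Person assembly has already been loaded (for example via clr.AddReference):

newPerson = Person()
# dir() lists the members the .NET type exposes; filtering out the underscore
# names leaves Name, Age, Weight, Height and CalculateBMI from the C# class.
print [m for m in dir(newPerson) if not m.startswith('_')]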
Intellisense on custom types in Iron Python
I'm just starting to play around with IronPython and am having a hard time using it with custom types created in C#. I can get IronPython to load in assemblies from C# classes, but I'm struggling without the help of intellisense. If I have a class in C# as defined below, how can I make it so that IronPython will be able to see the methods/properties that are available in it? public class Person { public string Name { get; set; } public int Age{ get; set; } public double Weight{ get; set; } public double Height { get; set; } public double CalculateBMI() { return Weight/Math.Pow(Height, 2); } } In Iron python I'd instance a Person object as follows: newPerson = Person() newPerson.Name = 'John' newPerson.Age = 25 newPerson.Weight = 75 newPerson.Height = 1.70 newPerson.CalculateBMI() The thing that is annoying me is that I want to be able to say newPerson = Person() And then be able to see all the methods and properties associated with the person object whenever I type: newPerson. Anyone have any ideas if this can be done?
[ "If you want to do it from an editor/IDE, IronPython Tools for Visual Studio has that capability (and much more). If you don't have VS 2010 Pro, you can install it into the Integrated Shell.\nIf you want to do it from the console, I don't believe that it's possible yet.\n" ]
[ 2 ]
[]
[]
[ "c#", "intellisense", "ironpython", "python" ]
stackoverflow_0002772230_c#_intellisense_ironpython_python.txt
Q: How do I check the methods that an object has, in Python? For example, a list. l1 = [1, 5 , 7] How do I check the methods that it has? (l1.append, for example) Or a string... string.lower( A: You can use dir to get a list the methods of any object. This is very useful in the interactive prompt: >>> dir(l1) ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] The interesting methods are usually those not starting with underscores. You can write your own version of dir that ignores names starting with underscores if you wish: >>> mydir = lambda a:[x for x in dir(a) if not x.startswith('_')] >>> mydir([]) ['append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] A: You might want to look at the getmembers function in the inspect module In [1]: import inspect In [2]: inspect? Type: module Base Class: <type 'module'> String Form: <module 'inspect' from '/usr/lib/python2.6/inspect.pyc'> Namespace: Interactive File: /usr/lib/python2.6/inspect.py Docstring: Get useful information from live Python objects. This module encapsulates the interface provided by the internal special attributes (func_*, co_*, im_*, tb_*, etc.) in a friendlier fashion. It also provides some help for examining source code and class layout. Here are some of the useful functions provided by this module: ismodule(), isclass(), ismethod(), isfunction(), isgeneratorfunction(), isgenerator(), istraceback(), isframe(), iscode(), isbuiltin(), isroutine() - check object types getmembers() - get members of an object that satisfy a given condition getfile(), getsourcefile(), getsource() - find an object's source code getdoc(), getcomments() - get documentation on an object getmodule() - determine the module that an object came from getclasstree() - arrange classes so as to represent their hierarchy getargspec(), getargvalues() - get info about function arguments formatargspec(), formatargvalues() - format an argument spec getouterframes(), getinnerframes() - get info about frames currentframe() - get the current stack frame stack(), trace() - get info about frames on the stack or in a traceback In [3]: l1=[1,5,7] In [4]: inspect.getmembers(l1) Out[4]: [('__add__', <method-wrapper '__add__' of list object at 0xa38716c>), ('__class__', <type 'list'>), ('__contains__', <method-wrapper '__contains__' of list object at 0xa38716c>), ('__delattr__', <method-wrapper '__delattr__' of list object at 0xa38716c>), ('__delitem__', <method-wrapper '__delitem__' of list object at 0xa38716c>), ('__delslice__', <method-wrapper '__delslice__' of list object at 0xa38716c>), ('__doc__', "list() -> new list\nlist(sequence) -> new list initialized from sequence's items"), ('__eq__', <method-wrapper '__eq__' of list object at 0xa38716c>), ('__format__', <built-in method __format__ of list object at 0xa38716c>), ('__ge__', <method-wrapper '__ge__' of list object at 0xa38716c>), ('__getattribute__', <method-wrapper '__getattribute__' of list object at 0xa38716c>), ('__getitem__', <built-in method __getitem__ of list object at 0xa38716c>), ('__getslice__', <method-wrapper 
'__getslice__' of list object at 0xa38716c>), ('__gt__', <method-wrapper '__gt__' of list object at 0xa38716c>), ('__hash__', None), ('__iadd__', <method-wrapper '__iadd__' of list object at 0xa38716c>), ('__imul__', <method-wrapper '__imul__' of list object at 0xa38716c>), ('__init__', <method-wrapper '__init__' of list object at 0xa38716c>), ('__iter__', <method-wrapper '__iter__' of list object at 0xa38716c>), ('__le__', <method-wrapper '__le__' of list object at 0xa38716c>), ('__len__', <method-wrapper '__len__' of list object at 0xa38716c>), ('__lt__', <method-wrapper '__lt__' of list object at 0xa38716c>), ('__mul__', <method-wrapper '__mul__' of list object at 0xa38716c>), ('__ne__', <method-wrapper '__ne__' of list object at 0xa38716c>), ('__new__', <built-in method __new__ of type object at 0x822be40>), ('__reduce__', <built-in method __reduce__ of list object at 0xa38716c>), ('__reduce_ex__', <built-in method __reduce_ex__ of list object at 0xa38716c>), ('__repr__', <method-wrapper '__repr__' of list object at 0xa38716c>), ('__reversed__', <built-in method __reversed__ of list object at 0xa38716c>), ('__rmul__', <method-wrapper '__rmul__' of list object at 0xa38716c>), ('__setattr__', <method-wrapper '__setattr__' of list object at 0xa38716c>), ('__setitem__', <method-wrapper '__setitem__' of list object at 0xa38716c>), ('__setslice__', <method-wrapper '__setslice__' of list object at 0xa38716c>), ('__sizeof__', <built-in method __sizeof__ of list object at 0xa38716c>), ('__str__', <method-wrapper '__str__' of list object at 0xa38716c>), ('__subclasshook__', <built-in method __subclasshook__ of type object at 0x822be40>), ('append', <built-in method append of list object at 0xa38716c>), ('count', <built-in method count of list object at 0xa38716c>), ('extend', <built-in method extend of list object at 0xa38716c>), ('index', <built-in method index of list object at 0xa38716c>), ('insert', <built-in method insert of list object at 0xa38716c>), ('pop', <built-in method pop of list object at 0xa38716c>), ('remove', <built-in method remove of list object at 0xa38716c>), ('reverse', <built-in method reverse of list object at 0xa38716c>), ('sort', <built-in method sort of list object at 0xa38716c>)] A: Interactive Python has a help function you can use with anything: >>> help(list) Help on class list in module __builtin__: class list(object) | list() -> new list | list(sequence) -> new list initialized from sequence´s items | | Methods defined here: | | __add__(...) | x.__add__(y) <==> x+y | | __contains__(...) | x.__contains__(y) <==> y in x | | __delitem__(...) | x.__delitem__(y) <==> del x[y] | | __delslice__(...) | x.__delslice__(i, j) <==> del x[i:j] | | Use of negative indices is not supported. | | __eq__(...) | x.__eq__(y) <==> x==y | | __ge__(...) | x.__ge__(y) <==> x>=y | | __getattribute__(...) | x.__getattribute__('name') <==> x.name | | __getitem__(...) | x.__getitem__(y) <==> x[y] | | __getslice__(...) | x.__getslice__(i, j) <==> x[i:j] | | Use of negative indices is not supported. | | __gt__(...) | x.__gt__(y) <==> x>y | | __iadd__(...) | x.__iadd__(y) <==> x+=y | | __imul__(...) | x.__imul__(y) <==> x*=y | | __init__(...) | x.__init__(...) initializes x; see x.__class__.__doc__ for signature | | __iter__(...) | x.__iter__() <==> iter(x) | | __le__(...) | x.__le__(y) <==> x<=y | | __len__(...) | x.__len__() <==> len(x) | | __lt__(...) | x.__lt__(y) <==> x<y | | __mul__(...) | x.__mul__(n) <==> x*n | | __ne__(...) | x.__ne__(y) <==> x!=y | | __repr__(...) 
| x.__repr__() <==> repr(x) | | __reversed__(...) | L.__reversed__() -- return a reverse iterator over the list | | __rmul__(...) | x.__rmul__(n) <==> n*x | | __setitem__(...) | x.__setitem__(i, y) <==> x[i]=y | | __setslice__(...) | x.__setslice__(i, j, y) <==> x[i:j]=y | | Use of negative indices is not supported. | | __sizeof__(...) | L.__sizeof__() -- size of L in memory, in bytes | | append(...) | L.append(object) -- append object to end | | count(...) | L.count(value) -> integer -- return number of occurrences of value | | extend(...) | L.extend(iterable) -- extend list by appending elements from the iterable | | index(...) | L.index(value, [start, [stop]]) -> integer -- return first index of value. | Raises ValueError if the value is not present. | | insert(...) | L.insert(index, object) -- insert object before index | | pop(...) | L.pop([index]) -> item -- remove and return item at index (default last). | Raises IndexError if list is empty or index is out of range. | | remove(...) | L.remove(value) -- remove first occurrence of value. | Raises ValueError if the value is not present. | | reverse(...) | L.reverse() -- reverse *IN PLACE* | | sort(...) | L.sort(cmp=None, key=None, reverse=False) -- stable sort *IN PLACE*; | cmp(x, y) -> -1, 0, 1 | | ---------------------------------------------------------------------- | Data and other attributes defined here: | | __hash__ = None | | __new__ = <built-in method __new__ of type object at 0x1E1CF100> | T.__new__(S, ...) -> a new object with type S, a subtype of T A: If you install IPython, then you can do this: % ipython Python 2.6.4 (r264:75706, Nov 2 2009, 14:38:03) Type "copyright", "credits" or "license" for more information. IPython 0.10 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: l1=[1,5,7] In [2]: l1. l1.__add__ l1.__getslice__ l1.__new__ l1.append l1.__class__ l1.__gt__ l1.__reduce__ l1.count l1.__contains__ l1.__hash__ l1.__reduce_ex__ l1.extend l1.__delattr__ l1.__iadd__ l1.__repr__ l1.index l1.__delitem__ l1.__imul__ l1.__reversed__ l1.insert l1.__delslice__ l1.__init__ l1.__rmul__ l1.pop l1.__doc__ l1.__iter__ l1.__setattr__ l1.remove l1.__eq__ l1.__le__ l1.__setitem__ l1.reverse l1.__format__ l1.__len__ l1.__setslice__ l1.sort l1.__ge__ l1.__lt__ l1.__sizeof__ l1.__getattribute__ l1.__mul__ l1.__str__ l1.__getitem__ l1.__ne__ l1.__subclasshook__ In [2]: l1. On the last line, you type the object name, a period, and then press TAB. IPython then lists all the attributes of the object. I find IPython an invaluable tool for exploring object attributes. It is far more convenient to use than the standard Python interactive prompt. Among other nifty things, putting a question mark after an object gives you its doc string: In [6]: d.update? Type: builtin_function_or_method Base Class: <type 'builtin_function_or_method'> String Form: <built-in method update of dict object at 0xa3c024c> Namespace: Interactive Docstring: D.update(E, **F) -> None. Update D from dict/iterable E and F. If E has a .keys() method, does: for k in E: D[k] = E[k] If E lacks .keys() method, does: for (k, v) in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] And, when available, two question marks gives you its source code: In [18]: np.sum?? 
Type: function Base Class: <type 'function'> String Form: <function sum at 0x9c501ec> Namespace: Interactive File: /usr/lib/python2.6/dist-packages/numpy/core/fromnumeric.py Definition: np.sum(a, axis=None, dtype=None, out=None) Source: def sum(a, axis=None, dtype=None, out=None): ... if isinstance(a, _gentype): res = _sum_(a) if out is not None: out[...] = res return out return res try: sum = a.sum except AttributeError: return _wrapit(a, 'sum', axis, dtype, out) return sum(axis, dtype, out) A: As it happens, all the members of list instances are methods. If that weren't the case, you could use this: l1 = [1, 5, 7] print [name for name in dir(l1) if type(getattr(l1, name)) == type(l1.append)] That will exclude members that aren't methods. A: If the object (which often might be a module) has many methods or attributes, using dir or the TAB completion of IPython can get too complex to keep track of. In such cases I use filter, like in the following example: filter(lambda s: 'sin' in s.lower(), dir(numpy)) which results in ['arcsin', 'arcsinh', 'csingle', 'isinf', 'isposinf', 'sin', 'sinc', 'single', 'singlecomplex', 'sinh'] I find that very handy for exploring unknown objects that I expect must have a method (or attribute) with a given string as part of its name.
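A compact variant of the ideas above that keeps only the members that are actually callable:

l1 = [1, 5, 7]
# getattr fetches each member by name; callable() keeps only the methods.
print [name for name in dir(l1) if callable(getattr(l1, name))]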
How do I check the methods that an object has, in Python?
For example, a list. l1 = [1, 5 , 7] How do I check the methods that it has? (l1.append, for example) Or a string... string.lower(
[ "You can use dir to get a list the methods of any object. This is very useful in the interactive prompt:\n>>> dir(l1)\n['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__',\n'__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__',\n'__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__',\n'__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__setslice__',\n'__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']\n\nThe interesting methods are usually those not starting with underscores. You can write your own version of dir that ignores names starting with underscores if you wish:\n>>> mydir = lambda a:[x for x in dir(a) if not x.startswith('_')]\n>>> mydir([])\n['append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']\n\n", "You might want to look at the getmembers function in the inspect module\nIn [1]: import inspect\n\nIn [2]: inspect?\nType: module\nBase Class: <type 'module'>\nString Form: <module 'inspect' from '/usr/lib/python2.6/inspect.pyc'>\nNamespace: Interactive\nFile: /usr/lib/python2.6/inspect.py\nDocstring:\n Get useful information from live Python objects.\n\n This module encapsulates the interface provided by the internal special\n attributes (func_*, co_*, im_*, tb_*, etc.) in a friendlier fashion.\n It also provides some help for examining source code and class layout.\n\n Here are some of the useful functions provided by this module:\n\n ismodule(), isclass(), ismethod(), isfunction(), isgeneratorfunction(),\n isgenerator(), istraceback(), isframe(), iscode(), isbuiltin(),\n isroutine() - check object types\n getmembers() - get members of an object that satisfy a given condition\n\n getfile(), getsourcefile(), getsource() - find an object's source code\n getdoc(), getcomments() - get documentation on an object\n getmodule() - determine the module that an object came from\n getclasstree() - arrange classes so as to represent their hierarchy\n\n getargspec(), getargvalues() - get info about function arguments\n formatargspec(), formatargvalues() - format an argument spec\n getouterframes(), getinnerframes() - get info about frames\n currentframe() - get the current stack frame\n stack(), trace() - get info about frames on the stack or in a traceback\n\nIn [3]: l1=[1,5,7]\n\nIn [4]: inspect.getmembers(l1)\nOut[4]: \n[('__add__', <method-wrapper '__add__' of list object at 0xa38716c>),\n ('__class__', <type 'list'>),\n ('__contains__', <method-wrapper '__contains__' of list object at 0xa38716c>),\n ('__delattr__', <method-wrapper '__delattr__' of list object at 0xa38716c>),\n ('__delitem__', <method-wrapper '__delitem__' of list object at 0xa38716c>),\n ('__delslice__', <method-wrapper '__delslice__' of list object at 0xa38716c>),\n ('__doc__',\n \"list() -> new list\\nlist(sequence) -> new list initialized from sequence's items\"),\n ('__eq__', <method-wrapper '__eq__' of list object at 0xa38716c>),\n ('__format__', <built-in method __format__ of list object at 0xa38716c>),\n ('__ge__', <method-wrapper '__ge__' of list object at 0xa38716c>),\n ('__getattribute__',\n <method-wrapper '__getattribute__' of list object at 0xa38716c>),\n ('__getitem__', <built-in method __getitem__ of list object at 0xa38716c>),\n ('__getslice__', <method-wrapper '__getslice__' of list object at 0xa38716c>),\n ('__gt__', <method-wrapper '__gt__' of 
list object at 0xa38716c>),\n ('__hash__', None),\n ('__iadd__', <method-wrapper '__iadd__' of list object at 0xa38716c>),\n ('__imul__', <method-wrapper '__imul__' of list object at 0xa38716c>),\n ('__init__', <method-wrapper '__init__' of list object at 0xa38716c>),\n ('__iter__', <method-wrapper '__iter__' of list object at 0xa38716c>),\n ('__le__', <method-wrapper '__le__' of list object at 0xa38716c>),\n ('__len__', <method-wrapper '__len__' of list object at 0xa38716c>),\n ('__lt__', <method-wrapper '__lt__' of list object at 0xa38716c>),\n ('__mul__', <method-wrapper '__mul__' of list object at 0xa38716c>),\n ('__ne__', <method-wrapper '__ne__' of list object at 0xa38716c>),\n ('__new__', <built-in method __new__ of type object at 0x822be40>),\n ('__reduce__', <built-in method __reduce__ of list object at 0xa38716c>),\n ('__reduce_ex__',\n <built-in method __reduce_ex__ of list object at 0xa38716c>),\n ('__repr__', <method-wrapper '__repr__' of list object at 0xa38716c>),\n ('__reversed__', <built-in method __reversed__ of list object at 0xa38716c>),\n ('__rmul__', <method-wrapper '__rmul__' of list object at 0xa38716c>),\n ('__setattr__', <method-wrapper '__setattr__' of list object at 0xa38716c>),\n ('__setitem__', <method-wrapper '__setitem__' of list object at 0xa38716c>),\n ('__setslice__', <method-wrapper '__setslice__' of list object at 0xa38716c>),\n ('__sizeof__', <built-in method __sizeof__ of list object at 0xa38716c>),\n ('__str__', <method-wrapper '__str__' of list object at 0xa38716c>),\n ('__subclasshook__',\n <built-in method __subclasshook__ of type object at 0x822be40>),\n ('append', <built-in method append of list object at 0xa38716c>),\n ('count', <built-in method count of list object at 0xa38716c>),\n ('extend', <built-in method extend of list object at 0xa38716c>),\n ('index', <built-in method index of list object at 0xa38716c>),\n ('insert', <built-in method insert of list object at 0xa38716c>),\n ('pop', <built-in method pop of list object at 0xa38716c>),\n ('remove', <built-in method remove of list object at 0xa38716c>),\n ('reverse', <built-in method reverse of list object at 0xa38716c>),\n ('sort', <built-in method sort of list object at 0xa38716c>)]\n\n", "Interactive Python has a help function you can use with anything:\n>>> help(list)\nHelp on class list in module __builtin__:\n\nclass list(object)\n | list() -> new list\n | list(sequence) -> new list initialized from sequence´s items\n |\n | Methods defined here:\n |\n | __add__(...)\n | x.__add__(y) <==> x+y\n |\n | __contains__(...)\n | x.__contains__(y) <==> y in x\n |\n | __delitem__(...)\n | x.__delitem__(y) <==> del x[y]\n |\n | __delslice__(...)\n | x.__delslice__(i, j) <==> del x[i:j]\n |\n | Use of negative indices is not supported.\n |\n | __eq__(...)\n | x.__eq__(y) <==> x==y\n |\n | __ge__(...)\n | x.__ge__(y) <==> x>=y\n |\n | __getattribute__(...)\n | x.__getattribute__('name') <==> x.name\n |\n | __getitem__(...)\n | x.__getitem__(y) <==> x[y]\n |\n | __getslice__(...)\n | x.__getslice__(i, j) <==> x[i:j]\n |\n | Use of negative indices is not supported.\n |\n | __gt__(...)\n | x.__gt__(y) <==> x>y\n |\n | __iadd__(...)\n | x.__iadd__(y) <==> x+=y\n |\n | __imul__(...)\n | x.__imul__(y) <==> x*=y\n |\n | __init__(...)\n | x.__init__(...) 
initializes x; see x.__class__.__doc__ for signature\n |\n | __iter__(...)\n | x.__iter__() <==> iter(x)\n |\n | __le__(...)\n | x.__le__(y) <==> x<=y\n |\n | __len__(...)\n | x.__len__() <==> len(x)\n |\n | __lt__(...)\n | x.__lt__(y) <==> x<y\n |\n | __mul__(...)\n | x.__mul__(n) <==> x*n\n |\n | __ne__(...)\n | x.__ne__(y) <==> x!=y\n |\n | __repr__(...)\n | x.__repr__() <==> repr(x)\n |\n | __reversed__(...)\n | L.__reversed__() -- return a reverse iterator over the list\n |\n | __rmul__(...)\n | x.__rmul__(n) <==> n*x\n |\n | __setitem__(...)\n | x.__setitem__(i, y) <==> x[i]=y\n |\n | __setslice__(...)\n | x.__setslice__(i, j, y) <==> x[i:j]=y\n |\n | Use of negative indices is not supported.\n |\n | __sizeof__(...)\n | L.__sizeof__() -- size of L in memory, in bytes\n |\n | append(...)\n | L.append(object) -- append object to end\n |\n | count(...)\n | L.count(value) -> integer -- return number of occurrences of value\n |\n | extend(...)\n | L.extend(iterable) -- extend list by appending elements from the iterable\n |\n | index(...)\n | L.index(value, [start, [stop]]) -> integer -- return first index of value.\n | Raises ValueError if the value is not present.\n |\n | insert(...)\n | L.insert(index, object) -- insert object before index\n |\n | pop(...)\n | L.pop([index]) -> item -- remove and return item at index (default last).\n | Raises IndexError if list is empty or index is out of range.\n |\n | remove(...)\n | L.remove(value) -- remove first occurrence of value.\n | Raises ValueError if the value is not present.\n |\n | reverse(...)\n | L.reverse() -- reverse *IN PLACE*\n |\n | sort(...)\n | L.sort(cmp=None, key=None, reverse=False) -- stable sort *IN PLACE*;\n | cmp(x, y) -> -1, 0, 1\n |\n | ----------------------------------------------------------------------\n | Data and other attributes defined here:\n |\n | __hash__ = None\n |\n | __new__ = <built-in method __new__ of type object at 0x1E1CF100>\n | T.__new__(S, ...) -> a new object with type S, a subtype of T\n\n", "If you install IPython, then you can do this:\n% ipython\nPython 2.6.4 (r264:75706, Nov 2 2009, 14:38:03) \nType \"copyright\", \"credits\" or \"license\" for more information.\n\nIPython 0.10 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object'. ?object also works, ?? prints more.\n\nIn [1]: l1=[1,5,7]\n\nIn [2]: l1.\nl1.__add__ l1.__getslice__ l1.__new__ l1.append\nl1.__class__ l1.__gt__ l1.__reduce__ l1.count\nl1.__contains__ l1.__hash__ l1.__reduce_ex__ l1.extend\nl1.__delattr__ l1.__iadd__ l1.__repr__ l1.index\nl1.__delitem__ l1.__imul__ l1.__reversed__ l1.insert\nl1.__delslice__ l1.__init__ l1.__rmul__ l1.pop\nl1.__doc__ l1.__iter__ l1.__setattr__ l1.remove\nl1.__eq__ l1.__le__ l1.__setitem__ l1.reverse\nl1.__format__ l1.__len__ l1.__setslice__ l1.sort\nl1.__ge__ l1.__lt__ l1.__sizeof__ \nl1.__getattribute__ l1.__mul__ l1.__str__ \nl1.__getitem__ l1.__ne__ l1.__subclasshook__ \n\nIn [2]: l1.\n\nOn the last line, you type the object name, a period, and then press TAB. IPython then lists all the attributes of the object. \nI find IPython an invaluable tool for exploring object attributes. It is far more convenient to use than the standard Python interactive prompt. 
Among other nifty things, putting a question mark after an object gives you its doc string:\nIn [6]: d.update?\nType: builtin_function_or_method\nBase Class: <type 'builtin_function_or_method'>\nString Form: <built-in method update of dict object at 0xa3c024c>\nNamespace: Interactive\nDocstring:\n D.update(E, **F) -> None. Update D from dict/iterable E and F.\n If E has a .keys() method, does: for k in E: D[k] = E[k]\n If E lacks .keys() method, does: for (k, v) in E: D[k] = v\n In either case, this is followed by: for k in F: D[k] = F[k]\n\nAnd, when available, two question marks gives you its source code:\nIn [18]: np.sum??\nType: function\nBase Class: <type 'function'>\nString Form: <function sum at 0x9c501ec>\nNamespace: Interactive\nFile: /usr/lib/python2.6/dist-packages/numpy/core/fromnumeric.py\nDefinition: np.sum(a, axis=None, dtype=None, out=None)\nSource:\ndef sum(a, axis=None, dtype=None, out=None):\n...\n if isinstance(a, _gentype):\n res = _sum_(a)\n if out is not None:\n out[...] = res\n return out\n return res\n try:\n sum = a.sum\n except AttributeError:\n return _wrapit(a, 'sum', axis, dtype, out)\n return sum(axis, dtype, out)\n\n", "As it happens, all the members of list instances are methods. If that weren't the case, you could use this:\nl1 = [1, 5 , 7]\nprint [name for name in dir(l1) if type(getattr(l1, name) == type(l1.append))]\n\nThat will exclude members that aren't methods.\n", "If the object (which often might be a module) has many methods or attributes using dir or the TAB completion of ipython can get to complex to keep track.\nIn such cases I use filter like in the following example:\nfilter(lambda s: 'sin' in s.lower(), dir(numpy))\n\nwhich results in\n['arcsin',\n 'arcsinh',\n 'csingle',\n 'isinf',\n 'isposinf',\n 'sin',\n 'sinc',\n 'single',\n 'singlecomplex',\n 'sinh']\n\nI find that very handy to explore unknown objects from which I expect that they must have a method (or attribute) which should have as part of its name.\n" ]
[ 21, 4, 2, 1, 1, 0 ]
[]
[]
[ "methods", "object", "python" ]
stackoverflow_0001897960_methods_object_python.txt
Q: A question about DOM parser used with Python I'm using the following Python code to search for a node in an XML file and change the value of an attribute of one of its children. Changes are happening correctly when the node is displayed using toxml(). But when it is written to a file, the attributes rearrange themselves (as seen in the Source and the Final XML below). Could anyone explain how and why this happens? Python code: #!/usr/bin/env python import xml from xml.dom.minidom import parse dom=parse("max.xml") #print "Please enter the store name:" for sku in dom.getElementsByTagName("node"): if sku.getAttribute("name") == "store": sku.childNodes[1].childNodes[5].setAttribute("value","Delhi,India") print sku.toxml() xml.dom.ext.PrettyPrint(dom, open("new.xml", "w")) A part of the Source XML: <node name='store' node_id='515' module='mpx.lib.node.simple_value.SimpleValue' config_builder='' inherant='false' description='Configurable Value'> <match> <property name='1' value='point'/> <property name='2' value='0'/> <property name='val' value='Store# 09204 Staten Island, NY'/> <property name='3' value='str'/> </match> </node> Final XML: <node config_builder="" description="Configurable Value" inherant="false" module="mpx.lib.node.simple_value.SimpleValue" name="store" node_id="515"> <match> <property name="1" value="point"/> <property name="2" value="0"/> <property name="val" value="Delhi,India"/> <property name="3" value="str"/> </match> </node> A: Per XML's standards for the DOM, attributes are not held as an ordered collection; in Python's xml.dom implementations, they're a NamedNodeMap, whose docs say: The order you get the attributes in is arbitrary but will be consistent for the life of a DOM. In particular, there's no promise that this arbitrary order will be the same as the (semantically irrelevant) order found in the XML source that was parsed to build this DOM. A: There is no guarantee on the ordering of sub-elements or attributes in the XML spec; you should NOT rely on ordering of attributes or sub-elements in your business logic, as it is guaranteed to not work as expected across the various parsers. As a side note, I think ElementTree is a much better way to manipulate the DOM than minidom, especially since, if you are using 2.5.x or newer, it is built in.
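A sketch of the ElementTree route suggested in the second answer; no parser guarantees attribute order on output either, but the manipulation code is more direct than indexing into childNodes:

import xml.etree.ElementTree as ET

tree = ET.parse("max.xml")
for node in tree.getiterator("node"):     # iter("node") on newer versions
    if node.get("name") == "store":
        for prop in node.findall("match/property"):
            if prop.get("name") == "val":
                prop.set("value", "Delhi,India")  # update the attribute
tree.write("new.xml")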
A question about DOM parser used with Python
I'm using the following python code to search for a node in an XML file and changing the value of an attribute of one of it's children.Changes are happening correctly when the node is displayed using toxml().But, when it is written to a file, the attributes rearrange themselves(as seen in the Source and the Final XML below). Could anyone explain how and why this happen? Python code: #!/usr/bin/env python import xml from xml.dom.minidom import parse dom=parse("max.xml") #print "Please enter the store name:" for sku in dom.getElementsByTagName("node"): if sku.getAttribute("name") == "store": sku.childNodes[1].childNodes[5].setAttribute("value","Delhi,India") print sku.toxml() xml.dom.ext.PrettyPrint(dom, open("new.xml", "w")) a part of the Source XML: <node name='store' node_id='515' module='mpx.lib.node.simple_value.SimpleValue' config_builder='' inherant='false' description='Configurable Value'> <match> <property name='1' value='point'/> <property name='2' value='0'/> <property name='val' value='Store# 09204 Staten Island, NY'/> <property name='3' value='str'/> </match> </node> Final XML : <node config_builder="" description="Configurable Value" inherant="false" module="mpx.lib.node.simple_value.SimpleValue" name="store" node_id="515"> <match> <property name="1" value="point"/> <property name="2" value="0"/> <property name="val" value="Delhi,India"/> <property name="3" value="str"/> </match> </node>
[ "Per XML's standards for the DOM, attributes are not held as an ordered collection; in Python's xml.dom implementations, they're a NamedNodeMap, whose docs say:\n\nThe order you get the attributes in is\n arbitrary but will be consistent for\n the life of a DOM\n\nIn particular, there's no promise that this arbitrary order will be the same as the (semantically irrelevant) order found in the XML source that was parsed to build this DOM.\n", "There is no guarantee on the ordering of sub-elements or attibutes in the XML spec. you should NOT rely on ordering of attributes or sub-elements in your business logic, it is guaranteed to not work as expected with all the various parsers. As a side note, I think ElementTree is a much better way to manipulate the DOM than minidom, especially if you are using 2.5.x or newer it is built-in.\n" ]
[ 2, 1 ]
[]
[]
[ "dom", "python", "xml" ]
stackoverflow_0002775202_dom_python_xml.txt
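A short sketch of the ElementTree approach recommended in the second answer above. The file names and the <node>/<match>/<property> layout are taken from the question; everything else here is an assumption, and as the answers note, older Python versions make no promise about attribute order on output either way.

import xml.etree.ElementTree as ET

# max.xml / new.xml and the element layout come from the question
tree = ET.parse("max.xml")
for node in tree.iter("node"):          # use getiterator() on Python < 2.7
    if node.get("name") == "store":
        for prop in node.iter("property"):
            if prop.get("name") == "val":
                # update the attribute by name instead of indexing childNodes positionally
                prop.set("value", "Delhi,India")
tree.write("new.xml")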
Q: Help with Python structure in *nixes I came from a Windows background whern it comes to development environments. I'm used to run .exe's from everything I need to run and just forget. I usually code in php, javascript, css, html and python. Now, I have to use Linux at my work, in a non changeable Ubuntu 8.04, with permissions to upgrade my system using company's repositories only. I need to install Python 2.4.3 to start coding in an old legacy system. I had Python 2.5. I downloaded Python 2.4.3 tarballs, ran ./configure make and such. Everything worked out, but now the "default" installation is my system is Python2.4 instead of of Python2.5. I want help from you to change it back, and if possible, some material to read about symlinks, multiple Python installations, virtualenvs and such: everything I need to know before installing/upgrading Python modules. I installed for example the ElementTree package and don't even know in which Python installation it was installed. Thanks in advance! A: You may have installed Python 2.4 in /usr/local/bin, which, in turn, may come in your $PATH before /usr/bin where 2.5 lives. There are various possible remediations, if that is the case: simplest is probably to rm the link named /usr/local/bin/python (leaving only the "system" one named /usr/bin/python). You will then have to use explicitly python2.4 to invoke the 2.4 installation, while just python will go to the system-installed Python 2.5 installation. A: If you have root access you could just create a new simlink. sudo mv /usr/bin/python /usr/bin/python2.4 sudo ln -s /usr/bin/python25 /usr/bin/python I don't have too much experience with ubuntu, but i guess it shouldn't brake anything. To learn more about ln read man ln. A: For which version of Python will run when you invoke the python command you will have to manually change the symlink that /usr/bin/python points to, but that won't change what the packaging system considers the "default version of Python" and means you will still have to install version-specific libraries if they are different for a specific version. Luckily, those packages have an easy naming convention, instead of just python-<foo> they are python2.4-<foo> and installing those will put them in the right path (specifically the right site-packages directory). EDIT: apparently python isn't managed by the alternatives system, silly Debian/Ubuntu A: Running sudo apt-get install --reinstall python-minimal python python2.5 should restore the default Python installation. Unlike Windows Ubuntu comes with quite a lot of software packaged by the distributor, and it is a good idea to stay with this packages if possible instead of downloading software from the net. Ubuntu 8.04 has Python 2.4.5 (package python2.4), maybe that works for you. If you need to install Python from source use ./configure --prefix=/usr/local/ instead of a plain ./configure. This makes python to be install at /usr/local/ so it doesn't overwrite the distribution's files A: Piggybacking off of @rebus: sudo ln -s /usr/bin/python2.5 /usr/bin/python Seems to have worked.
Help with Python structure in *nixes
I come from a Windows background when it comes to development environments. I'm used to running .exe installers for everything I need and then forgetting about them. I usually code in PHP, JavaScript, CSS, HTML and Python. Now I have to use Linux at my work, on a non-changeable Ubuntu 8.04, with permission to upgrade my system using the company's repositories only. I need to install Python 2.4.3 to start coding on an old legacy system. I had Python 2.5. I downloaded the Python 2.4.3 tarball and ran ./configure, make and such. Everything worked out, but now the "default" installation on my system is Python 2.4 instead of Python 2.5. I want help changing it back and, if possible, some material to read about symlinks, multiple Python installations, virtualenvs and such: everything I need to know before installing/upgrading Python modules. For example, I installed the ElementTree package and don't even know in which Python installation it ended up. Thanks in advance!
[ "You may have installed Python 2.4 in /usr/local/bin, which, in turn, may come in your $PATH before /usr/bin where 2.5 lives. There are various possible remediations, if that is the case: simplest is probably to rm the link named /usr/local/bin/python (leaving only the \"system\" one named /usr/bin/python). You will then have to use explicitly python2.4 to invoke the 2.4 installation, while just python will go to the system-installed Python 2.5 installation.\n", "If you have root access you could just create a new simlink.\nsudo mv /usr/bin/python /usr/bin/python2.4\nsudo ln -s /usr/bin/python25 /usr/bin/python\n\nI don't have too much experience with ubuntu, but i guess it shouldn't brake anything.\nTo learn more about ln read man ln.\n", "For which version of Python will run when you invoke the python command you will have to manually change the symlink that /usr/bin/python points to, but that won't change what the packaging system considers the \"default version of Python\" and means you will still have to install version-specific libraries if they are different for a specific version. Luckily, those packages have an easy naming convention, instead of just python-<foo> they are python2.4-<foo> and installing those will put them in the right path (specifically the right site-packages directory).\nEDIT: apparently python isn't managed by the alternatives system, silly Debian/Ubuntu\n", "Running\n sudo apt-get install --reinstall python-minimal python python2.5\n\nshould restore the default Python installation.\nUnlike Windows Ubuntu comes with quite a lot of software packaged by the distributor, and it is a good idea to stay with this packages if possible instead of downloading software from the net. Ubuntu 8.04 has Python 2.4.5 (package python2.4), maybe that works for you.\nIf you need to install Python from source use\n ./configure --prefix=/usr/local/\n\ninstead of a plain ./configure. This makes python to be install at /usr/local/ so it doesn't overwrite the distribution's files\n", "Piggybacking off of @rebus:\nsudo ln -s /usr/bin/python2.5 /usr/bin/python\n\nSeems to have worked.\n" ]
[ 2, 1, 1, 1, 0 ]
[]
[]
[ "linux", "python" ]
stackoverflow_0002775213_linux_python.txt
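When several interpreters are installed side by side, as in the question above, a few lines of Python show exactly which installation a script is running under and where it looks for modules. This is a generic sketch, nothing Ubuntu-specific:

import sys

print(sys.executable)    # full path of the interpreter actually running
print(sys.version)       # its version string
print(sys.prefix)        # installation prefix, e.g. /usr or /usr/local

# imports are resolved by scanning sys.path in order; the first hit wins,
# which tells you into which installation a package (e.g. ElementTree) went
for entry in sys.path:
    print(entry)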
Q: Django ImageField validation & PIL On sunday, I had problems with python modules, when I installed stackless python. Now I have compiled and installed : setuptools & python-mysqldb and i got my django project up and running again. (i also reinstalled django-1.1), Then I compiled and installed, jpeg, freetype2 and PIL. I also started using mod_wsgi instead of mod_python. But when uploading imagefield in form I get validationerror: Upload a valid image. The file you uploaded was either not an image or a corrupted image. Searchmonkey shows that it comes from field.py imagefield validation. before raising this error it imports Image from PIL, opens file and verfies it. I tried importing PIL from python prompt manually - it worked just fine. Same with Image.open and Image.verify. So what could be causing this problem? Alan A: Might want to check out this blog post and see if it addresses your problem. http://www.chipx86.com/blog/2008/07/25/django-tips-pil-imagefield-and-unit-tests/
Django ImageField validation & PIL
On Sunday I had problems with Python modules when I installed Stackless Python. Now I have compiled and installed setuptools & python-mysqldb and got my Django project up and running again (I also reinstalled Django 1.1). Then I compiled and installed jpeg, freetype2 and PIL. I also started using mod_wsgi instead of mod_python. But when uploading an image through a form's ImageField I get a ValidationError: Upload a valid image. The file you uploaded was either not an image or a corrupted image. Searchmonkey shows that it comes from the ImageField validation in field.py. Before raising this error it imports Image from PIL, opens the file and verifies it. I tried importing PIL from the Python prompt manually - it worked just fine. Same with Image.open and Image.verify. So what could be causing this problem? Alan
[ "Might want to check out this blog post and see if it addresses your problem.\nhttp://www.chipx86.com/blog/2008/07/25/django-tips-pil-imagefield-and-unit-tests/\n" ]
[ 0 ]
[]
[]
[ "django", "mod_wsgi", "python", "python_imaging_library", "python_stackless" ]
stackoverflow_0001292061_django_mod_wsgi_python_python_imaging_library_python_stackless.txt
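One quick check, independent of Django, is whether the PIL build the server runs can actually decode JPEGs - PIL compiled without libjpeg produces exactly this "not an image or a corrupted image" ValidationError. A hedged sketch: the file path is a placeholder, and depending on how PIL was installed the import may be "import Image" or "from PIL import Image".

import Image  # or: from PIL import Image

im = Image.open("/tmp/sample.jpg")   # placeholder path to a known-good JPEG
im.verify()                          # raises if the decoder is missing or the file is bad
print("%s %s %s" % (im.format, im.size, im.mode))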
Q: Recurrent yearly date alert in Python A user can set a day alert for a birthday. (We do not care about the year of birth.) He also picks whether he wants to be alerted 0, 1, 2, or 7 days (Delta) before the D day. Users have a timezone setting. I want the server to send the alerts at 8 am on the D day - delta +- user timezone. Example: 12 Jun, with "alert me 3 days before", will give 9 Jun. My idea was to have a trigger_datetime extra field saved on the 'recurrent event' object. This way a cron job running every hour on my server will just check for all events matching its current time (hour, day and month) and send the alert. The problem: from one year to the next the trigger_date could change! If the alert is set on 1st of March, with a one-day delay that could be either 28 or 29 of February. Maybe I should not use the trigger-date trick and use some other kind of scheme. All plans are welcome. A: Although using plain datetime python module you will be able to implement all you need, a much more powerful python-dateutil extension is available, especially if you need to work with recurring events. The code below should give you an indication of how to achieve your goal: from datetime import * from dateutil.rrule import rrule, YEARLY # GLOBAL CONFIG td_8am = timedelta(seconds=3600*8) td_jobfrequency = timedelta(seconds=3600) # hourly job # USER DATA:: # birthday: assumed to be retrieved from some data source bday = date(1960, 5, 12) # reminder delta: number of days before the b-day td_delta = timedelta(days=6) # difference between the user TZ and the server TZ tz_diff = timedelta(seconds=3600*5) # example: +5h # from current time minus the job periodicity and the delta sday = date.today() # occr will return the first birthday from today on occr = rrule(YEARLY, bymonth=bday.month, bymonthday=bday.day, dtstart=sday, count=1)[0] # adjust: subtract the reminder delta, fixed 8h (8am) and tz difference occr -= (td_delta + td_8am + tz_diff) # send the reminder when the adjusted occurance is within this job time period if datetime.now() - td_jobfrequency < occr < datetime.now(): print occr, '@todo: send the reminder' else: print occr, 'no reminder' And I suggest that you do not store the next year reminder date, because the delta may change, or timezone may change, and even birthday itself, so you would need to recompute it. The method above basically computes the reminder date (and time) on the fly. One more thing I can suggest is to store for which date (birthday including the year) the last reminder has been sent. So if there is a system downtime you would not miss the reminders but send all that have not been sent. You will need to adapt the code to have this additional check and update. A: Assuming you cannot use an existing library to provide the functionality you require the Python standard library module datetime in particular the timedelta object should give you the primitives to implement the behaviour you need: from datetime import timedelta, date start_date = date(2010, 6, 12) notification_date = start_date + timedelta(days=365) - timedelta(days=3) print notification_date This code snippet prints: 2011-06-09, timedelta also accepts hours so the precision you require is possible. You could run code like I have shown above on a regular basis on all your events to see if any of them should send an alert. In your case since you want this even to recur you could just update the start year when the alert is fired setting it up for the next year. A: You can do it in two parts. Cron Job that triggers the application hourly,halfhourly or some other suitable frequency. The Application checks if its time for any alerts to be triggered. Adds those alerts to a sending list. Sends email alerts to those customers. A: Try out something like this. Select * from alerts where date in (GetDate(), GetDate()+2, GetDATE()+5, GETDATE()+7) and date-GetDate() =alertFrequency Send the alerts to all the results of the above query A: How about running a query like SELECT `user` FROM `mytable` WHERE now() >= (`birthday` - INTERVAL `delta` DAY); daily?
Recurrent yearly date alert in Python
A user can set a day alert for a birthday. (We do not care about the year of birth.) He also picks whether he wants to be alerted 0, 1, 2, or 7 days (Delta) before the D day. Users have a timezone setting. I want the server to send the alerts at 8 am on the D day - delta +- user timezone. Example: 12 Jun, with "alert me 3 days before", will give 9 Jun. My idea was to have a trigger_datetime extra field saved on the 'recurrent event' object. This way a cron job running every hour on my server will just check for all events matching its current time (hour, day and month) and send the alert. The problem: from one year to the next the trigger_date could change! If the alert is set on 1st of March, with a one-day delay that could be either 28 or 29 of February. Maybe I should not use the trigger-date trick and use some other kind of scheme. All plans are welcome.
[ "Although using plain datetime python module you will be able to implement all you need, a much more powerful python-dateutil extension is available, especially if you need to work with recurring events. The code below should give you an indication of how to achieve your goal:\nfrom datetime import *\nfrom dateutil.rrule import rrule, YEARLY\n\n# GLOBAL CONFIG\ntd_8am = timedelta(seconds=3600*8)\ntd_jobfrequency = timedelta(seconds=3600) # hourly job\n\n\n# USER DATA::\n# birthday: assumed to be retrieved from some data source\nbday = date(1960, 5, 12)\n# reminder delta: number of days before the b-day\ntd_delta = timedelta(days=6)\n# difference between the user TZ and the server TZ\ntz_diff = timedelta(seconds=3600*5) # example: +5h\n\n\n# from current time minus the job periodicity and the delta\nsday = date.today()\n# occr will return the first birthday from today on\noccr = rrule(YEARLY, bymonth=bday.month, bymonthday=bday.day, dtstart=sday, count=1)[0]\n\n# adjust: subtract the reminder delta, fixed 8h (8am) and tz difference\noccr -= (td_delta + td_8am + tz_diff)\n\n# send the reminder when the adjusted occurance is within this job time period\nif datetime.now() - td_jobfrequency < occr < datetime.now():\n print occr, '@todo: send the reminder'\nelse:\n print occr, 'no reminder'\n\nAnd I suggest that you do not store the next year reminder date, because the delta may change, or timezone may change, and even birthday itself, so you would need to recompute it. The method above basically computes the reminder date (and time) on the fly.\nOne more thing I can suggest is to store for which date (birthday including the year) the last reminder has been sent. So if there is a system downtime you would not miss the reminders but send all that have not been sent. You will need to adapt the code to have this additional check and update.\n", "Assuming you cannot use an existing library to provide the functionality you require the Python standard library module datetime in particular the timedelta object should give you the primitives to implement the behaviour you need:\nfrom datetime import timedelta, date\nstart_date = date(2010, 6, 12)\nnotification_date = start_date + timedelta(days=365) - timedelta(days=3)\nprint notification_date\n\nThis code snippet prints: 2011-06-09, timedelta also accepts hours so the precision you require is possible. You could run code like I have shown above on a regular basis on all your events to see if any of them should send an alert. In your case since you want this even to recur you could just update the start year when the alert is fired setting it up for the next year.\n", "You can do it in two parts.\n\nCron Job that triggers the application hourly,halfhourly or some other suitable frequency.\nThe Application checks if its time for any alerts to be triggered. Adds those alerts to a sending list. Sends email alerts to those customers.\n\n", "Try out something like this.\nSelect * from alerts where\ndate in (GetDate(), GetDate()+2, GetDATE()+5, GETDATE()+7)\nand date-GetDate() =alertFrequency\nSend the alerts to all the results of the above query\n", "How about running a query like\nSELECT `user` FROM `mytable` WHERE now() >= (`birthday` - INTERVAL `delta` DAY);\n\ndaily?\n" ]
[ 3, 2, 0, 0, 0 ]
[]
[]
[ "algorithm", "date", "python", "timezone" ]
stackoverflow_0002776311_algorithm_date_python_timezone.txt
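The 29 February worry from the question disappears if the trigger date is computed on the fly each year, as the accepted answer recommends, because timedelta arithmetic already knows about leap years. A minimal sketch, assuming a 1 March birthday and a one-day delta:

from datetime import date, timedelta

def trigger_date(month, day, delta_days, year):
    # build this year's occurrence, then step back by the delta;
    # the calendar (including leap years) is handled by timedelta
    return date(year, month, day) - timedelta(days=delta_days)

print(trigger_date(3, 1, 1, 2011))  # 2011-02-28
print(trigger_date(3, 1, 1, 2012))  # 2012-02-29 (leap year)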
Q: traversing an object tree I'm trying to find information on different ways to traverse an object tree in python. I don't know much about the language in general yet, so any suggestions/techniques would be welcome. Thanks so much jml A: See the inspect module. It has functions for accessing/listing all kinds of object information. A: i found out how to do it. basically myobject.membername1.membername2
traversing an object tree
I'm trying to find information on different ways to traverse an object tree in python. I don't know much about the language in general yet, so any suggestions/techniques would be welcome. Thanks so much jml
[ "See the inspect module. It has functions for accessing/listing all kinds of object information.\n", "i found out how to do it. basically myobject.membername1.membername2\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002776663_python.txt
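A small illustration of both suggestions above: inspect.getmembers() to discover an object's attributes, and plain dotted access to walk down a known path. The classes here are made up for the example:

import inspect

class Leaf(object):
    value = 42

class Root(object):
    child = Leaf()

root = Root()
for name, member in inspect.getmembers(root):
    if not name.startswith("_"):          # skip dunder internals
        print("%s -> %r" % (name, member))

print(root.child.value)                   # dotted traversal: prints 42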
Q: How do you store accented characters coming from a web service into a database? I have the following word that I fetch via a web service: André From Python, the value looks like: "Andr\u00c3\u00a9". The input is then decoded using json.loads: >>> import json >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}') >>> {u'name': u'Andr\xc3\xa9'} When I store the above in a utf8 MySQL database, the data is stored like the following using Django: SomeObject.objects.create(name=u'Andr\xc3\xa9') Querying the name column from a mysql shell or displaying it in a web page gives: André The web page displays in utf8: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> My database is configured in utf8: mysql> SHOW VARIABLES LIKE 'collation%'; +----------------------+-----------------+ | Variable_name | Value | +----------------------+-----------------+ | collation_connection | utf8_general_ci | | collation_database | utf8_unicode_ci | | collation_server | utf8_unicode_ci | +----------------------+-----------------+ 3 rows in set (0.00 sec) mysql> SHOW VARIABLES LIKE 'character_set%'; +--------------------------+----------------------------+ | Variable_name | Value | +--------------------------+----------------------------+ | character_set_client | utf8 | | character_set_connection | utf8 | | character_set_database | utf8 | | character_set_filesystem | binary | | character_set_results | utf8 | | character_set_server | utf8 | | character_set_system | utf8 | | character_sets_dir | /usr/share/mysql/charsets/ | +--------------------------+----------------------------+ 8 rows in set (0.00 sec) How can I retrieve the word André from a web service, store it properly in a database with no data loss and display it on a web page in its original form? A: The fault is already in the string you pass to json.loads(). \u00c3 is "A tilde" and \00a9 is the copyright sign. Correct for é would be \u00e9. Probably the string has been encoded in UTF-8 by the sender and decoded as ISO-8859-1 by the receiver. For example, if you run the following Python script: # -*- encoding: utf-8 -*- import json data = {'name': u'André'} print('data: {0}'.format(repr(data))) code = json.dumps(data) print('code: {0}'.format(repr(code))) conv = json.loads(code) print('conv: {0}'.format(repr(conv))) name = conv['name'] print(u'Name is {0}'.format(name)) The output should look like: data: {'name': u'Andr\xe9'} code: '{"name": "Andr\\u00e9"}' conv: {u'name': u'Andr\xe9'} Name is André Managing unicode in Python 2.x can sometimes become a nuisance. Unfortunately, Django does not yet support Python 3.
How do you store accented characters coming from a web service into a database?
I have the following word that I fetch via a web service: André From Python, the value looks like: "Andr\u00c3\u00a9". The input is then decoded using json.loads: >>> import json >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}') >>> {u'name': u'Andr\xc3\xa9'} When I store the above in a utf8 MySQL database, the data is stored like the following using Django: SomeObject.objects.create(name=u'Andr\xc3\xa9') Querying the name column from a mysql shell or displaying it in a web page gives: André The web page displays in utf8: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> My database is configured in utf8: mysql> SHOW VARIABLES LIKE 'collation%'; +----------------------+-----------------+ | Variable_name | Value | +----------------------+-----------------+ | collation_connection | utf8_general_ci | | collation_database | utf8_unicode_ci | | collation_server | utf8_unicode_ci | +----------------------+-----------------+ 3 rows in set (0.00 sec) mysql> SHOW VARIABLES LIKE 'character_set%'; +--------------------------+----------------------------+ | Variable_name | Value | +--------------------------+----------------------------+ | character_set_client | utf8 | | character_set_connection | utf8 | | character_set_database | utf8 | | character_set_filesystem | binary | | character_set_results | utf8 | | character_set_server | utf8 | | character_set_system | utf8 | | character_sets_dir | /usr/share/mysql/charsets/ | +--------------------------+----------------------------+ 8 rows in set (0.00 sec) How can I retrieve the word André from a web service, store it properly in a database with no data loss and display it on a web page in its original form?
[ "The fault is already in the string you pass to json.loads(). \\u00c3 is \"A tilde\" and \\00a9 is the copyright sign. Correct for é would be \\u00e9.\nProbably the string has been encoded in UTF-8 by the sender and decoded as ISO-8859-1 by the receiver.\nFor example, if you run the following Python script:\n# -*- encoding: utf-8 -*-\n\nimport json\n\ndata = {'name': u'André'}\nprint('data: {0}'.format(repr(data)))\n\ncode = json.dumps(data)\nprint('code: {0}'.format(repr(code)))\n\nconv = json.loads(code)\nprint('conv: {0}'.format(repr(conv)))\n\nname = conv['name']\nprint(u'Name is {0}'.format(name))\n\nThe output should look like:\ndata: {'name': u'Andr\\xe9'}\ncode: '{\"name\": \"Andr\\\\u00e9\"}'\nconv: {u'name': u'Andr\\xe9'}\nName is André\n\nManaging unicode in Python 2.x can sometimes become a nuisance. Unfortunately, Django does not yet support Python 3.\n" ]
[ 6 ]
[]
[]
[ "django", "mysql", "python" ]
stackoverflow_0002775751_django_mysql_python.txt
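When the mis-decoded text has already been received, the UTF-8-read-as-Latin-1 mix-up described in the answer can usually be reversed by re-encoding. A hedged sketch that only works when the bytes survive the round trip:

# u'Andr\xc3\xa9' is 'André' whose UTF-8 bytes were wrongly decoded as Latin-1
broken = u'Andr\xc3\xa9'

fixed = broken.encode('latin-1').decode('utf-8')
print(fixed == u'Andr\xe9')  # True -- back to the intended 'André'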
Q: if else-if making code look ugly any cleaner solution? I have around 20 functions (is_func1, is_func2, is_func3...) returning a boolean. I assume there is only one function which returns true and I want that! I am doing: if is_func1(param1, param2): # I pass 1 to following abc(1) # I pass 1 some_list.append(1) elif is_func2(param1, param2): # I pass 2 to following abc(2) # I pass 1 some_list.append(2) ... . . elif is_func20(param1, param2): ... Please note: param1 and param2 are different for each, abc and some_list take parameters depending on the function. The code looks big and there is repetition in calling abc and some_list, I can pull this logic into a function! but is there any other cleaner solution? I can think of putting functions in a data structure and loop to call them. A: What about functionList = [is_func1, is_func2, ..., is_func20] for index, func in enumerate(functionList): if(func(param1, param2)): abc(index+1) some_list.append(index+1) break A: I can think of putting functions in a data structure and loop to call them. Yes, probably you should do that since your code needs to be refactored, and a data driven design is a good choice. An example similar to BlueRaja's answer, # arg1, arg2 and ret can have any values on each record data = ((isfunc1, arg1, arg2, ret), (isfunc2, arg1, arg2, ret), (isfunc3, arg1, arg2, ret), ...) for d in data: if d[0](d[1], d[2]): abc(d[3]) some_list.append(d[3]) break A: If each branch of your event dispatcher is in fact different, then there just isn't any way to get around writing the individual branch handlers, and there isn't any way to get around polling the different cases and choosing a branch. A: Try this: value = 1 if is_func1(param1, param2) else \ 2 if is_func2(param1x, param2x) else \ ... else \ 20 if is_func20(param1z, param2z) else 0 abc(value) some_list.append(value) Bear in mind that this statement was cobbled together using various websites as a reference for Python syntax, so please don't shoot me if it doesn't compile. The basic gist is to produce a single value that corresponds to the function called (1 for is_func1, 2 for is_func2, etc.) then use that value in the abc and some_list.append functions. Going on what I was able to read about Python boolean expression evaluation, this should properly short-circuit the evaluation so that the functions stop being called as soon as one evaluates to true. A: This looks like a good case to apply the Chain of Responsibility pattern. I know how to give the example with objects, not functions, so I'll do that: class HandleWithFunc1: def __init__(self, otherHandler): self.otherHandler = otherHandler def Handle(param1, param2): if ( should I handle with func1? ): #Handle with func1 return if otherHandler == None: raise "Nobody handled the call!" otherHandler.Handle(param1, param2) class HandleWithFunc2: def __init__(self, otherHandler): self.otherHandler = otherHandler def Handle(param1, param2): if ( should I handle with func1? ): #Handle with func1 return if otherHandler == None: raise "Nobody handled the call!" otherHandler.Handle(param1, param2) So you create all your classes like a chain: handle = HandleWithFunc1(HandleWithFunc2()) then: handle.Handle(param1, param2) This code is prone to refactoring, here only to illustrate the usage A: I modified BlueRaja's answer for different parameters... function_list = {is_func01: (pa1, pa2, ...), is_func02: (pa1, pa2, pa3, ...), .... is_func20: (pa1, ...)} for func, pa_list in function_list.items(): if(func(*pa_list)): abc(pa_list_dependent_parameters) some_list.append(pa_list_dependent_parameters) break I don't see why it shouldn't work. A: I've not used python before, but can you refer to functions by a variable? If so, you can create an enum with entries representing each function, test all the functions in a loop, and set a variable to the 'true' function's enum. Then you can do a switch statement on the enum. Still, that won't 'clean up' the code much: when you have n options and need to drive down to the correct one, you'll need n blocks of code to handle it. A: I'm not sure if it would be cleaner, but I think it's quite an interesting solution. First of all you should define a new function, let it be semi_func, which will call abc and some_list.append to make the code DRY. Then set a new variable to act as a binary result of all boolean functions, so that is_func1 is the 20th bit of it, is_func2 the 19th and so on. 32 bits of integer type should be enough to handle all 20 results. While setting value to this result variable you should use shift left to add new functions: result = is_func1(param1, param2) << 1 result = (result | is_func2(param1, param2)) << 1 ... result = (result | is_func20(param1, param2)) For ease of access define new constants like IS_FUNC20_TRUE = 1 IS_FUNC19_TRUE = 2 IS_FUNC18_TRUE = 4 ... values should be powers of 2 And in the end use a switch/case statement to call semi_func.
if else-if making code look ugly any cleaner solution?
I have around 20 functions (is_func1, is_func2, is_func3...) returning a boolean. I assume there is only one function which returns true and I want that! I am doing: if is_func1(param1, param2): # I pass 1 to following abc(1) # I pass 1 some_list.append(1) elif is_func2(param1, param2): # I pass 2 to following abc(2) # I pass 1 some_list.append(2) ... . . elif is_func20(param1, param2): ... Please note: param1 and param2 are different for each, abc and some_list take parameters depending on the function. The code looks big and there is repetition in calling abc and some_list, I can pull this logic into a function! but is there any other cleaner solution? I can think of putting functions in a data structure and loop to call them.
[ "What about\nfunctionList = [is_func1, is_func2, ..., is_func20]\nfor index, func in enumerate(functionList):\n if(func(param1, param2)):\n abc(index+1)\n some_list.append(index+1)\n break\n\n", "\nI can think of putting functions in a data structure and loop to call them.\n\nYes, probably you should do that since your code needs to be refactored,\nand a data driven design is a good choice.\nAn example similar to BlueRaja's answer,\n# arg1, arg2 and ret can have any values on each record\ndata = ((isfunc1, arg1, arg2, ret),\n (isfunc2, arg1, arg2, ret),\n (isfunc3, arg1, arg2, ret),\n ...)\n\nfor d in data:\n if d[0](d[1], d[2]):\n abc(d[3])\n some_list.append(d[3])\n break\n\n", "If each branch of your event dispatcher is in fact different, then there just isn't any way to get around writing the individual branch handlers, and there isn't any way to get around polling the different cases and choosing a branch.\n", "Try this:\nvalue = 1 if is_func1(param1, param2) else \\\n 2 if is_func2(param1x, param2x) else \\\n ... else \\\n 20 if is_func20(param1z, param2z) else 0\n\nabc(value)\nsome_list.append(value)\n\nBear in mind that this statement was cobbled together using various websites as a reference for Python syntax, so please don't shoot me if it doesn't compile.\nThe basic gist is to produce a single value that corresponds to the function called (1 for is_func1, 2 for is_func2, etc.) then use that value in the abc and some_list.append functions. Going on what I was able to read about Python boolean expression evaluation, this should properly short-circuit the evaluation so that the functions stop being called as soon as one evaluates to true.\n", "This looks a good case to apply Chain of responsibility pattern.\nI know how to give the example with objects, not functions, so I'll do that:\nclass HandleWithFunc1\n def __init__(self, otherHandler):\n self.otherHandler = otherHandler\n\n def Handle(param1, param2):\n if ( should I handle with func1? ):\n #Handle with func1\n return\n if otherHandler == None:\n raise \"Nobody handled the call!\"\n\n otherHandler.Handle(param1, param2)\n\nclass HandleWithFunc2:\n def __init__(self, otherHandler):\n self.otherHandler = otherHandler\n\n def Handle(param1, param2):\n if ( should I handle with func1? 
):\n #Handle with func1\n return\n if otherHandler == None:\n raise \"Nobody handled the call!\"\n\n otherHandler.Handle(param1, param2)\n\nSo you create all your classes like a chain:\nhandle = HandleWithFunc1(HandleWithFunc2())\n\nthen:\nhandle.Handle(param1, param2)\n\nThis code is prone to refactoring, here only to illustrate the usage\n", "I modified BlueRaja answer for different parameters...\nfunction_list = {is_func01: (pa1, pa2, ...),\n is_func02: (pa1, pa2, pa3, ...), \n ....\n is_func20: (pa1, ...)}\n\nfor func, pa_list in function_list.items:\n if(func(*pa_list)):\n abc(pa_list_dependent_parameters)\n some_list.append(pa_list_dependent_parameters)\n break\n\nI don't see why it shouldn't work.\n", "I've not used python before, but can you refer to functions by a variable?\nIf so, you can create an enum with entries representing each function, test all the functions in a loop, and set a variable to the 'true' function's enum.\nThen you can do a switch statement on the enum.\nStill, that won't 'clean up' the code much: when you have n options and need to drive down to the correct one, you'll need n blocks of code to handle it.\n", "I'm not sure if it would be cleaner, but I think is's quite interesting solution.\nFirst of all you should define new function, let it be semi_func, which will call abc and some_list.append do make code DRY.\nThen set new variable to act as a binary result of all boolean functions, so the is_func1 is 20th bit of it, is_func2 is 19th and so on.\n32 bits of integer type should be enough to handle all 20 results.\nWhile setting value to this result variable you should use shift left to add new functions:\nresult = is_func1(param1, param2) << 1\nresult = (result | is_func2(param1, param2)) << 1\n...\nresult = (result | is_func20(param1, param2))\n\nFor ease access define new constants like\nIS_FUNC20_TRUE = 1\nIS_FUNC19_TRUE = 2\nIS_FUNC18_TRUE = 4\n... values should be powers of 2\n\nAnd in the end use switch/sase statement to call semi_func.\n" ]
[ 4, 2, 1, 1, 1, 1, 0, 0 ]
[ "I know I will be modded down for being offtopic, but still. If you find anything that can be done with standard control constructs off-putting, then you need to use a different language, such as Common Lisp, which allows for macros, in effect makes it possible to create your own control constructs. (Having recently discovered anaphoric macros, I just have to recommend this.)\nThis specific case would be a perfect example where a macro would help, but only assuming you are doing it at multiple places in your code, otherwise it's probably not worth improving at all. And in fact, Common Lisp already has a macro like that, it's called cond.\nAnyway, in Python, I think you should go along with a list of functions and a loop.\n" ]
[ -3 ]
[ "python" ]
stackoverflow_0002776125_python.txt
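A compact variant of the dispatch-table answers above, using next() on a generator (Python 2.6+). The predicates here are stand-ins, and like the accepted answer this assumes the functions share their arguments, which the question notes is not quite true in the real code:

def is_func1(a, b): return False
def is_func2(a, b): return a > b
def is_func3(a, b): return False

predicates = [is_func1, is_func2, is_func3]   # ... up to is_func20

# 1-based index of the first predicate that returns True, or None
match = next((i for i, p in enumerate(predicates, 1) if p(1, 0)), None)
print(match)  # 2
# in the original code: abc(match); some_list.append(match)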
Q: python httplib and broken tcp connection How do I find out if a connection has been broken using the httplib library? Seems like something so basic yet I can't find the answer on here or google. A: While Connecting You get one of these: http://docs.python.org/library/httplib.html#httplib.HTTPException you could do something like this. >>> import httplib >>> conn = httplib.HTTPConnection("www.python.org") >>> try: >>> conn.request("GET", "/index.html") >>> except Exception as e: >>> #take action according to the error. >>> print(type(e)) >>> r1 = conn.getresponse() >>> print r1.status, r1.reason Example taken from www.python.org and edited
python httplib and broken tcp connection
How do I find out if a connection has been broken using the httplib library? It seems like something so basic, yet I can't find the answer here or on Google.
[ "While Connecting You get one of these:\nhttp://docs.python.org/library/httplib.html#httplib.HTTPException\nyou could do something like this.\n>>> import httplib\n>>> conn = httplib.HTTPConnection(\"www.python.org\")\n>>> try:\n>>> conn.request(\"GET\", \"/index.html\")\n>>> except Exception as e:\n>>> #take action according to the error.\n>>> print(type(e))\n>>> r1 = conn.getresponse()\n>>> print r1.status, r1.reason\n\nExample taken from www.python.org and edited\n" ]
[ 4 ]
[]
[]
[ "http", "httplib", "python", "sockets", "tcp" ]
stackoverflow_0002777435_http_httplib_python_sockets_tcp.txt
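In practice a broken TCP connection surfaces as a socket.error (or an httplib.HTTPException subclass) raised from request(), getresponse() or read(). A sketch along the lines of the answer, using the docs' python.org example host:

import httplib
import socket

conn = httplib.HTTPConnection("www.python.org", timeout=10)
try:
    conn.request("GET", "/")
    resp = conn.getresponse()
    body = resp.read()
except (socket.error, httplib.HTTPException) as e:
    # covers resets, timeouts, and protocol-level failures
    print("connection broken: %r" % e)
else:
    print("%s %s (%d bytes)" % (resp.status, resp.reason, len(body)))
finally:
    conn.close()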
Q: Python operators returning ints Is there any way to have Python operators line "==" and ">" return ints instead of bools. I know that I could use the int function (int(1 == 1)) or add 0 ((1 == 1) + 0) but I was wondering if there was an easy way to do it. Like when you want division to return floats you could type from __future__ import division. Is there any way to do this with operators returning ints? Or could I make a class extending __future__._Feature that would do what I want? A: You cant override the built-in comparison functions. In some sense the comparison operators are already returning int. bool is a subclass of int, so you can do anything to it that you can do to a int. The question then becomes why would you want to have comparisons return int objects, not bool objects? A: You can have the comparison operators of your custom classes return whatever you like -- simply implement the relevant methods (__eq__, __ne__, __gt__, __lt__, __ge__, __le__) to return what you want. For objects that you don't control you cannot change this, but there should be no need to: bools are ints, because of the Liskov substitution principle. Code that notices a difference between the bool returned by the builtin types' __eq__ methods and any other integer is using the result wrong. The __future__ module isn't relevant here; you can't use it to do whatever you want, you can only use it to change specific settings that were added to Python. You can turn division into true division with the __future__ import because that's what was added to Python. The only way to add more __future__ imports is by modifying Python itself. A: Based on your clarification, you might change your comparison operator to something like: stack.push(1 if stack.pop() > stack.pop() else 0) This will convert the boolean result of > to 1 or 0 as you would like. Also, be careful about calling stack.pop() twice in the same expression. You don't know (for sure) what order the arguments will be evaluated in, and different implementations of Python may very well pop the arguments in a different order. You will need to use temporary variables: x = stack.pop() y = stack.pop() stack.push(1 if x > y else 0) A: On your own objects, it is easy to override each comparison operator. For built-ins, the override methods are "read only" so all my attempts to set them don't pan out. >>> class foo: def __lt__(self, other): return cmp(5, other) >>> f = foo() >>> f<3 1 >>> f<7 -1 >>> f<5 0 >>> j="" >>> j.__lt__=lambda other: cmp(5, other) Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'str' object attribute '__lt__' is read-only A: Cast your bool to an int? >>> int(True) 1 >>> int(False) 0 Or cast that to a str? >>> str(int(True)) '1' >>> str(int(False)) '0' A: No, you can't. When Guido unified types and classes, he found a way to override the behavior of built-in types (due to the way he implemented things), but he declared it a bug and plugged the loophole. Changing the behavior of built-in types (except for your example - importing division from future, which is there for a good reason) is forbidden. Sorry, but I can't find the mailing list post. I remember it though, as it was quite interesting.
Python operators returning ints
Is there any way to have Python operators like "==" and ">" return ints instead of bools? I know that I could use the int function (int(1 == 1)) or add 0 ((1 == 1) + 0), but I was wondering if there is an easier way to do it. Like when you want division to return floats, you can type from __future__ import division. Is there any way to do this with operators returning ints? Or could I make a class extending __future__._Feature that would do what I want?
[ "You cant override the built-in comparison functions. In some sense the comparison operators are already returning int. bool is a subclass of int, so you can do anything to it that you can do to a int. The question then becomes why would you want to have comparisons return int objects, not bool objects?\n", "You can have the comparison operators of your custom classes return whatever you like -- simply implement the relevant methods (__eq__, __ne__, __gt__, __lt__, __ge__, __le__) to return what you want. For objects that you don't control you cannot change this, but there should be no need to: bools are ints, because of the Liskov substitution principle. Code that notices a difference between the bool returned by the builtin types' __eq__ methods and any other integer is using the result wrong.\nThe __future__ module isn't relevant here; you can't use it to do whatever you want, you can only use it to change specific settings that were added to Python. You can turn division into true division with the __future__ import because that's what was added to Python. The only way to add more __future__ imports is by modifying Python itself.\n", "Based on your clarification, you might change your comparison operator to something like:\nstack.push(1 if stack.pop() > stack.pop() else 0)\n\nThis will convert the boolean result of > to 1 or 0 as you would like.\nAlso, be careful about calling stack.pop() twice in the same expression. You don't know (for sure) what order the arguments will be evaluated in, and different implementations of Python may very well pop the arguments in a different order. You will need to use temporary variables:\nx = stack.pop()\ny = stack.pop()\nstack.push(1 if x > y else 0)\n\n", "On your own objects, it is easy to override each comparison operator. For built-ins, the override methods are \"read only\" so all my attempts to set them don't pan out. \n>>> class foo:\n def __lt__(self, other):\n return cmp(5, other)\n\n>>> f = foo()\n>>> f<3\n1\n>>> f<7\n-1\n>>> f<5\n0\n\n>>> j=\"\"\n>>> j.__lt__=lambda other: cmp(5, other)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\nAttributeError: 'str' object attribute '__lt__' is read-only\n\n", "Cast your bool to an int?\n>>> int(True)\n1\n>>> int(False)\n0\nOr cast that to a str?\n>>> str(int(True))\n'1'\n>>> str(int(False))\n'0'\n", "No, you can't. When Guido unified types and classes, he found a way to override the behavior of built-in types (due to the way he implemented things), but he declared it a bug and plugged the loophole. Changing the behavior of built-in types (except for your example - importing division from future, which is there for a good reason) is forbidden.\nSorry, but I can't find the mailing list post. I remember it though, as it was quite interesting.\n" ]
[ 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "boolean", "int", "operators", "python" ]
stackoverflow_0002777366_boolean_int_operators_python.txt
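A few lines demonstrating the point in the accepted answer: bool already is an int subclass, so comparison results can be used arithmetically, and int() makes the conversion explicit for the stack-machine case discussed above.

print(isinstance(True, int))   # True: bool subclasses int
print((3 > 2) + (1 == 1))      # 2 -- True behaves as 1 in arithmetic

x, y = 5, 3
result = int(x > y)            # explicit 1/0, e.g. for pushing onto a stack
print(result)                  # 1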
Q: Python Drawing Portion of Image When Mouse Hover My question relates to Python GTK I have an image -a JPG - which I draw onto a drawing area. I want to reveal a portion of the image -say a 10pix by 10 px square -only where the mouse pointer is currently at. Everything 10 x 10 px square away from the mouse should hidden i.e. black. I'm am new to PyGtk please can anyone help? Thanks A: #!/usr/bin/python import os import sys import gtk MASK_COLOR = 0x000000 def composite(source, start_x=345, start_y=345): width = 50 height = 50 alpha = 255 dest = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8 ,800,800) dest.fill(MASK_COLOR) source.composite(dest, start_x, start_y, width, height, 0, 0, 1, 1, gtk.gdk.INTERP_NEAREST, alpha) return dest def it_moved(widget, event, window, masked, original): r = window.get_display().get_window_at_pointer() masked.set_from_pixbuf(composite(original.get_pixbuf(), r[1], r[2])) return True if __name__ == '__main__': window = gtk.Window() eb = gtk.EventBox() original = gtk.Image() original.set_from_file(sys.argv[1]) masked = gtk.Image() masked.set_from_pixbuf(composite(original.get_pixbuf())) eb.add(masked) eb.set_property('events', gtk.gdk.POINTER_MOTION_MASK) eb.connect('motion_notify_event', it_moved, window, masked, original) window.add(eb) window.set_size_request(800,800) window.show_all() gtk.main() This should do something like you describe. I chose to show a 50x50 area since yours was kinda small to see under the pointer. I didn't hide that either.
Python Drawing Portion of Image When Mouse Hover
My question relates to Python GTK. I have an image - a JPG - which I draw onto a drawing area. I want to reveal a portion of the image - say a 10 px by 10 px square - only where the mouse pointer currently is. Everything outside that 10 px by 10 px square around the mouse should be hidden, i.e. black. I am new to PyGTK; can anyone help? Thanks
[ "#!/usr/bin/python \n\nimport os\nimport sys\nimport gtk\n\nMASK_COLOR = 0x000000\n\ndef composite(source, start_x=345, start_y=345):\n width = 50 \n height = 50 \n alpha = 255 \n dest = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8 ,800,800)\n dest.fill(MASK_COLOR) \n source.composite(dest, \n start_x, \n start_y,\n width,\n height,\n 0,\n 0,\n 1,\n 1,\n gtk.gdk.INTERP_NEAREST,\n alpha)\n\n return dest\n\n\ndef it_moved(widget, event, window, masked, original):\n r = window.get_display().get_window_at_pointer()\n masked.set_from_pixbuf(composite(original.get_pixbuf(), r[1], r[2]))\n return True\n\n\nif __name__ == '__main__':\n window = gtk.Window()\n eb = gtk.EventBox()\n original = gtk.Image()\n original.set_from_file(sys.argv[1])\n\n masked = gtk.Image()\n masked.set_from_pixbuf(composite(original.get_pixbuf()))\n\n eb.add(masked)\n eb.set_property('events', gtk.gdk.POINTER_MOTION_MASK)\n eb.connect('motion_notify_event', it_moved, window, masked, original)\n window.add(eb)\n window.set_size_request(800,800)\n window.show_all()\n gtk.main()\n\nThis should do something like you describe. I chose to show a 50x50 area since yours was kinda small to see under the pointer. I didn't hide that either.\n" ]
[ 2 ]
[]
[]
[ "image_processing", "pygtk", "python" ]
stackoverflow_0002336916_image_processing_pygtk_python.txt
Q: Shutting Down SSH Tunnel in Paramiko Programmatically We are attempting to use the paramiko module for creating SSH tunnels on demand to arbitrary servers for purposes of querying remote databases. We attempted to use the forward.py demo that ships with paramiko but the big limitation is there does not seem to be an easy way to close an SSH tunnel and the SSH connection once the socket server is started up. The limitation we have is that we cannot activate this from a shell and then kill the shell manually to stop the listner. We need to open the SSH connection, tunnel, perform some actions through the tunnel, close the tunnel, and close the SSH connection within python. I've seen references to a server.shutdown() method but it isn't clear how to implement it correctly. A: I'm not sure what you mean by "implement it correctly" -- you just need to keep track of the server object and call shutdown on it when you want. In forward.py, the server isn't kept track of, because the last line of forward_tunnel is ForwardServer(('', local_port), SubHander).serve_forever() so the server object is not easily reachable any more. But you can just change that to, e.g.: global theserver theserver = ForwardServer(('', local_port), SubHander) theserver.serve_forever() and run the forward_tunnel function in a separate thread, so that the main function gets control back (while the serve_forever is running in said separate thread) and can call theserver.shutdown() whenever that's appropriate and needed.
Shutting Down SSH Tunnel in Paramiko Programmatically
We are attempting to use the paramiko module for creating SSH tunnels on demand to arbitrary servers for purposes of querying remote databases. We attempted to use the forward.py demo that ships with paramiko but the big limitation is there does not seem to be an easy way to close an SSH tunnel and the SSH connection once the socket server is started up. The limitation we have is that we cannot activate this from a shell and then kill the shell manually to stop the listner. We need to open the SSH connection, tunnel, perform some actions through the tunnel, close the tunnel, and close the SSH connection within python. I've seen references to a server.shutdown() method but it isn't clear how to implement it correctly.
[ "I'm not sure what you mean by \"implement it correctly\" -- you just need to keep track of the server object and call shutdown on it when you want. In forward.py, the server isn't kept track of, because the last line of forward_tunnel is\nForwardServer(('', local_port), SubHander).serve_forever()\n\nso the server object is not easily reachable any more. But you can just change that to, e.g.:\nglobal theserver\ntheserver = ForwardServer(('', local_port), SubHander)\ntheserver.serve_forever()\n\nand run the forward_tunnel function in a separate thread, so that the main function gets control back (while the serve_forever is running in said separate thread) and can call theserver.shutdown() whenever that's appropriate and needed.\n" ]
[ 5 ]
[]
[]
[ "paramiko", "python", "tunnel" ]
stackoverflow_0002777884_paramiko_python_tunnel.txt
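The shutdown() mechanism the answer relies on comes from SocketServer, which forward.py's ForwardServer subclasses, so it can be demonstrated standalone: serve_forever() runs in a worker thread and shutdown() makes it return. A minimal sketch (Python 2 module names); with paramiko you would additionally close the SSH transport afterwards:

import threading
import SocketServer

class EchoHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"hello\n")

server = SocketServer.TCPServer(("127.0.0.1", 0), EchoHandler)
worker = threading.Thread(target=server.serve_forever)
worker.start()

# ... run queries through the tunnel here ...

server.shutdown()       # returns once serve_forever() has exited
server.server_close()
worker.join()
# with paramiko: transport.close() to tear down the SSH connection too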
Q: python webtest port configuration? I am attempting to write some tests using webtest to test out my python GAE application. The problem I am running into is that the application is listening on port 8080 but I cannot configure webtest to hit that port. For example, I want to use app.get('/getreport') to hit http://localhost:8080/getreport. Obviously, it hits just thits http:// localhost/getreport. Is there a way to set up webtest to hit a particular port? A: With paste.proxy.TransparentProxy you can test anything that responds to an http request... from webtest import TestApp from paste.proxy import TransparentProxy testapp = TestApp(TransparentProxy()) res = testapp.get("http://google.com") assert res.status=="200 OK","failure....." A: In config, and I quote, port Required? No, defaults is "80" Defines the port number to use for executing requests, e.g. "8080". Edit: the user clarified that they mean this webtest (pythonpaste's), not the widely used Canoo application. I wouldn't have guessed, because pythonpaste's webtest is a very different kettle of fish, and I quote...: With this you can test your web applications without starting an HTTP server, and without poking into the web framework shortcutting pieces of your application that need to be tested. The tests WebTest runs are entirely equivalent to how a WSGI HTTP server would call an application No HTTP server being started, there is no concept of "port" -- things run in-process, at WSGI level, without actual TCP/IP and HTTP in play. So, the "application" is not listening on port 8080 (or any other port), but rather its WSGI entry points are called directly, "just as if" an HTTP server was calling them. If you want to test an actual running HTTP server, then you need Canoo's webtest (or other equivalent frameworks), not pythonpaste's -- the latter will make for faster testing by avoiding any socket-layer and HTTP-layer overhead, but you can't test a separate, existing, running server (such as GAE's SDK's) in this way. A: I think you're misunderstanding what WebTest does. Something like app.get('/getreport') shouldn't make any kind of request to localhost on any port. The beauty of WebTest is that it doesn't require your app to actually be running on any server. Here's a quote from the "What This Does" section of the WebTest docs: With this you can test your web applications without starting an HTTP server, and without poking into the web framework shortcutting pieces of your application that need to be tested. The tests WebTest runs are entirely equivalent to how a WSGI HTTP server would call an application.
python webtest port configuration?
I am attempting to write some tests using webtest to test out my Python GAE application. The problem I am running into is that the application is listening on port 8080 but I cannot configure webtest to hit that port. For example, I want to use app.get('/getreport') to hit http://localhost:8080/getreport. Obviously, it just hits http://localhost/getreport. Is there a way to set up webtest to hit a particular port?
[ "With paste.proxy.TransparentProxy you can test anything that responds to an http request...\nfrom webtest import TestApp\nfrom paste.proxy import TransparentProxy\ntestapp = TestApp(TransparentProxy())\nres = testapp.get(\"http://google.com\")\nassert res.status==\"200 OK\",\"failure.....\"\n\n", "In config, and I quote,\nport\n\n\nRequired? No, defaults is \"80\"\nDefines the port number to use for\n executing requests, e.g. \"8080\".\n\nEdit: the user clarified that they mean this webtest (pythonpaste's), not the widely used Canoo application. I wouldn't have guessed, because pythonpaste's webtest is a very different kettle of fish, and I quote...:\n\nWith this you can test your web\n applications without starting an HTTP\n server, and without poking into the\n web framework shortcutting pieces of\n your application that need to be\n tested. The tests WebTest runs are\n entirely equivalent to how a WSGI HTTP\n server would call an application\n\nNo HTTP server being started, there is no concept of \"port\" -- things run in-process, at WSGI level, without actual TCP/IP and HTTP in play. So, the \"application\" is not listening on port 8080 (or any other port), but rather its WSGI entry points are called directly, \"just as if\" an HTTP server was calling them.\nIf you want to test an actual running HTTP server, then you need Canoo's webtest (or other equivalent frameworks), not pythonpaste's -- the latter will make for faster testing by avoiding any socket-layer and HTTP-layer overhead, but you can't test a separate, existing, running server (such as GAE's SDK's) in this way.\n", "I think you're misunderstanding what WebTest does. Something like app.get('/getreport') shouldn't make any kind of request to localhost on any port. The beauty of WebTest is that it doesn't require your app to actually be running on any server.\nHere's a quote from the \"What This Does\" section of the WebTest docs:\n\nWith this you can test your web applications without starting an HTTP server, and without poking into the web framework shortcutting pieces of your application that need to be tested. The tests WebTest runs are entirely equivalent to how a WSGI HTTP server would call an application.\n\n" ]
[ 4, 2, 2 ]
[]
[]
[ "google_app_engine", "python", "webtest" ]
stackoverflow_0002774249_google_app_engine_python_webtest.txt
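A minimal runnable WebTest example making the "no server, no port" point concrete; the trivial WSGI app here is a stand-in for the real GAE application:

from webtest import TestApp

def wsgi_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['report data']

app = TestApp(wsgi_app)
res = app.get('/getreport')        # called in-process; nothing listens on 8080
assert res.status == '200 OK'
assert 'report data' in res        # TestResponse supports "in" checks on the body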
Q: Content-Length header not returned from Pylons response I'm still struggling to Stream a file to the HTTP response in Pylons. In addition to the original problem, I'm finding that I cannot return the Content-Length header, so that for large files the client cannot estimate how long the download will take. I've tried response.content_length = 12345 and I've tried response.headers['Content-Length'] = 12345 In both cases the HTTP response (viewed in Fiddler) simply does not contain the Content-Length header. How do I get Pylons to return this header? (Oh, and if you have any ideas on making it stream the file please reply to the original question - I'm all out of ideas there.) Edit: while not a generic solution, for serving static files FileApp allows sending the Content-Length header. For dynamic content it looks like Alex Martelli's answer is the only option. A: There's a bit of middleware code here that ensures all responses get a content length header if they're missing it. You could tweak it so that you set some other header in your response (say 'X-The-Content-Length') and the middleware uses that to make the content length if the latter's missing. I view the whole thing as a workaround for what I consider a pylons bug (its cavalier attitude to content length!) but apparently the pylons authors disagree with me on that score, so it's nice to at least have workarounds for it!-) A: Try: response.headerlist.append((str("Content-Length"), str(" 123456")))
Content-Length header not returned from Pylons response
I'm still struggling to Stream a file to the HTTP response in Pylons. In addition to the original problem, I'm finding that I cannot return the Content-Length header, so that for large files the client cannot estimate how long the download will take. I've tried response.content_length = 12345 and I've tried response.headers['Content-Length'] = 12345 In both cases the HTTP response (viewed in Fiddler) simply does not contain the Content-Length header. How do I get Pylons to return this header? (Oh, and if you have any ideas on making it stream the file please reply to the original question - I'm all out of ideas there.) Edit: while not a generic solution, for serving static files FileApp allows sending the Content-Length header. For dynamic content it looks like Alex Martelli's answer is the only option.
[ "There's a bit of middleware code here that ensures all responses get a content length header if they're missing it. You could tweak it so that you set some other header in your response (say 'X-The-Content-Length') and the middleware uses that to make the content length if the latter's missing. I view the whole thing as a workaround for what I consider a pylons bug (its cavalier attitude to content length!) but apparently the pylons authors disagree with me on that score, so it's nice to at least have workarounds for it!-)\n", "Try:\nresponse.headerlist.append((str(\"Content-Length\"), str(\" 123456\")))\n\n" ]
[ 1, 0 ]
[]
[]
[ "content_length", "http", "pylons", "python" ]
stackoverflow_0002777866_content_length_http_pylons_python.txt
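A hedged sketch of the middleware workaround mentioned in the first answer (my own illustration, not the linked code): buffer the body, then set Content-Length from its real size. Buffering defeats streaming and this assumes the app does not use the legacy write() callable, so it only suits responses that fit comfortably in memory.

def add_content_length(app):
    def middleware(environ, start_response):
        captured = {}
        def capture(status, headers, exc_info=None):
            captured['status'] = status
            captured['headers'] = headers
        body = ''.join(app(environ, capture))  # buffer the whole response
        headers = [(k, v) for (k, v) in captured['headers']
                   if k.lower() != 'content-length']
        headers.append(('Content-Length', str(len(body))))
        start_response(captured['status'], headers)
        return [body]
    return middleware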
Q: Embed FCKeditor in python app I have a python application which needs a GUI HTML editor. I know FCKeditor is nice, so how do I embed FCKeditor in a Python desktop app? A: To embed FCKeditor (or maybe better the current CKeditor?), you basically need to embed a full-fledged browser (with Javascript) -- I believe wxPython may currently be the best bet for that, as I hear it has wxIE for Windows and wxWebKitCtrl for the Mac (I don't know if old summer-of-code ideas about making something suitable for Linux ever panned out, though). Most "HTML viewer" widgets, in most GUIs, don't support Javascript, and that's a must for (F?)CKeditor. A: In order of difficulty: If you just need to support Windows, you can embed IE in wx - see the docs and demos. wxWebKit is looking a bit more mature, but it's still in development. You could just use the web-browser using webbrowser.open(url). Things will be very crude, and interaction will be a pain. A fourth option - you could try out pyjamas for your whole GUI, then run it all in a web-browser.
Embed FCKeditor in python app
I have a python application which needs a GUI HTML editor. I know FCKeditor is nice, so how do I embed FCKeditor in a Python desktop app?
[ "To embed FCKeditor (or maybe better the current CKeditor?), you basically need to embed a full-fledged browser (with Javascript) -- I believe wxPython may currently be the best bet for that, as I hear it has wxIE for Windows and wxWebKitCtrl for the Mac (I don't know if old summer-of-code ideas about making something suitable for Linux ever panned out, though). Most \"HTML viewer\" widgets, in most GUIs, don't support Javascript, and that's a must for (F?)CKeditor.\n", "In order of difficulty:\nIf you just need to support Windows, you can embed IE in wx - see the docs and demos.\nwxWebKit is looking a bit more mature, but it's still in development.\nYou could just use the web-browser using webbrowser.open(url). Things will be very crude, and interaction will be a pain.\nA fourth option - you could try out pyjamas for your whole GUI, then run it all in a web-browser.\n" ]
[ 1, 0 ]
[]
[]
[ "editor", "html", "python" ]
stackoverflow_0002777919_editor_html_python.txt
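On current wxPython releases the cross-platform wx.html2.WebView widget provides a JavaScript-capable embedded browser, which is the hard requirement noted in the first answer. A rough sketch; the editor page URL is a placeholder you would supply:

import wx
import wx.html2

app = wx.App(False)
frame = wx.Frame(None, title='HTML editor host')
browser = wx.html2.WebView.New(frame)
browser.LoadURL('file:///path/to/ckeditor_page.html')  # hypothetical local page that loads the editor
frame.Show()
app.MainLoop()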
Q: Difference between URLLIB2 call in IDLE and from Django? The following piece of code works as expected when running in a local install of django apache 2.2 fx = urllib2.Request(f); fx.add_header('User-Agent','Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.36 Safari/525.19'); url_opened = urllib2.urlopen(fx); However when I enter that code into IDLE on the same machine I get the following error: url_opened = urllib2.urlopen(fx); File "C:\Python25\lib\urllib2.py", line 124, in urlopen return _opener.open(url, data) File "C:\Python25\lib\urllib2.py", line 387, in open response = meth(req, response) File "C:\Python25\lib\urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python25\lib\urllib2.py", line 425, in error return self._call_chain(*args) File "C:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "C:\Python25\lib\urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 407: Proxy Authentication Required Any ideas? A: urllib and urllib2 I think look at environment variables for proxies if one isn't set programatically. Maybe the proxy environment variables haven't been set properly in IDLE? Compare the output of the following from IDLE to the Django program: import os, pprint for k in os.environ: if 'proxy' in k.lower(): # look for proxy environment variables print k, os.environ[k] EDIT: Quoting http://docs.python.org/library/urllib2.html#urllib2.ProxyHandler: Cause requests to go through a proxy. If proxies is given, it must be a dictionary mapping protocol names to URLs of proxies. The default is to read the list of proxies from the environment variables. If no proxy environment variables are set, in a Windows environment, proxy settings are obtained from the Internet Settings section and in a Mac OS X environment, proxy information is retrieved from the OS X System Configuration Framework. To disable autodetected proxy pass an empty dictionary. Maybe Django creates a ProxyHandler? Try calling urllib2.ProxyHandler() in IDLE. A: Maybe the Django version has already prepped urllib2 with the needed credentials for the proxy, while the IDLE version hasn't?
Difference between URLLIB2 call in IDLE and from Django?
The following piece of code works as expected when running in a local install of django apache 2.2 fx = urllib2.Request(f); fx.add_header('User-Agent','Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.36 Safari/525.19'); url_opened = urllib2.urlopen(fx); However when I enter that code into IDLE on the same machine I get the following error: url_opened = urllib2.urlopen(fx); File "C:\Python25\lib\urllib2.py", line 124, in urlopen return _opener.open(url, data) File "C:\Python25\lib\urllib2.py", line 387, in open response = meth(req, response) File "C:\Python25\lib\urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python25\lib\urllib2.py", line 425, in error return self._call_chain(*args) File "C:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "C:\Python25\lib\urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 407: Proxy Authentication Required Any ideas?
[ "urllib and urllib2 I think look at environment variables for proxies if one isn't set programatically. Maybe the proxy environment variables haven't been set properly in IDLE? \nCompare the output of the following from IDLE to the Django program:\nimport os, pprint\nfor k in os.environ:\n if 'proxy' in k.lower(): # look for proxy environment variables\n print k, os.environ[k]\n\nEDIT: Quoting http://docs.python.org/library/urllib2.html#urllib2.ProxyHandler:\nCause requests to go through a proxy. If proxies is given, it must be a \ndictionary mapping protocol names to URLs of proxies. The default is to read the \nlist of proxies from the environment variables. If no proxy environment \nvariables are set, in a Windows environment, proxy settings are obtained from \nthe Internet Settings section and in a Mac OS X environment, proxy \ninformation is retrieved from the OS X System Configuration Framework.\n\nTo disable autodetected proxy pass an empty dictionary.\nMaybe Django creates a ProxyHandler? Try calling urllib2.ProxyHandler() in IDLE.\n", "Maybe the Django version has already prepped urllib2 with the needed credentials for the proxy, while the IDLE version hasn't?\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002778000_django_python.txt
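One quick way to test the proxy theory from the first answer inside IDLE: install an opener built from an empty ProxyHandler, which bypasses any auto-detected proxy, or supply one with credentials instead. A sketch (the proxy URL is a placeholder):

import urllib2

# Disable proxies entirely:
urllib2.install_opener(urllib2.build_opener(urllib2.ProxyHandler({})))

# ...or route through an authenticated proxy instead:
# handler = urllib2.ProxyHandler({'http': 'http://user:password@proxy.example.com:8080'})
# urllib2.install_opener(urllib2.build_opener(handler))

print urllib2.urlopen('http://www.google.com/').read()[:60]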
Q: Convert a sequence of sequences to a dictionary and vice-versa One way to manually persist a dictionary to a database is to flatten it into a sequence of sequences and pass the sequence as an argument to cursor.executemany(). The opposite is also useful, i.e. reading rows from a database and turning them into dictionaries for later use. What's the best way to go from myseq to mydict and from mydict to myseq? >>> myseq = ((0,1,2,3), (4,5,6,7), (8,9,10,11)) >>> mydict = {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)} A: mydict = dict((s[0], s[1:]) for s in myseq) myseq = tuple(sorted((k,) + v for k, v in mydict.iteritems())) A: >>> mydict = dict((t[0], t[1:]) for t in myseq)) >>> myseq = tuple(((key,) + values) for (key, values) in mydict.items()) The ordering of tuples in myseq is not preserved, since dictionaries are unordered.
Convert a sequence of sequences to a dictionary and vice-versa
One way to manually persist a dictionary to a database is to flatten it into a sequence of sequences and pass the sequence as an argument to cursor.executemany(). The opposite is also useful, i.e. reading rows from a database and turning them into dictionaries for later use. What's the best way to go from myseq to mydict and from mydict to myseq? >>> myseq = ((0,1,2,3), (4,5,6,7), (8,9,10,11)) >>> mydict = {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)}
[ "mydict = dict((s[0], s[1:]) for s in myseq)\n\nmyseq = tuple(sorted((k,) + v for k, v in mydict.iteritems()))\n\n", ">>> mydict = dict((t[0], t[1:]) for t in myseq))\n\n>>> myseq = tuple(((key,) + values) for (key, values) in mydict.items())\n\nThe ordering of tuples in myseq is not preserved, since dictionaries are unordered.\n" ]
[ 5, 2 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0002778452_dictionary_list_python.txt
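Tying the two answers back to the executemany() use case from the question, a small self-contained sketch with sqlite3 standing in for the real database:

import sqlite3

mydict = {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)}
myseq = tuple(sorted((k,) + v for k, v in mydict.iteritems()))

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (k INTEGER PRIMARY KEY, a, b, c)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?, ?)', myseq)

rows = conn.execute('SELECT * FROM t').fetchall()
assert dict((r[0], tuple(r[1:])) for r in rows) == mydict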
Q: MongoDB lists with pagination? For documents with lists that need pagination, is it better to embed or use references? I'm reading about the custom type "SONManipulator" and it appears to transform everything on retrieval, even the sub-docs. I want to keep the list in the document sorted; should this impact anything? A: I don't fully understand your question, but it is generally better to embed documents for performance reasons. That is one of the major advantages of MongoDB's approach, data locality. The pymongo lib uses SON sorted dict implementation which will maintain the ordering of your document keys. If your document contains a list/array of elements and you are concerned about the ordering of the elements, fear not because the array order is maintained as well.
MongoDB lists with pagination?
For documents with lists that need pagination, is it better to embed or use references? I'm reading about the custom type "SONManipulator" and it appears to transform everything on retrieval, even the sub-docs. I want to keep the list in the document sorted; should this impact anything?
[ "I don't fully understand your question, but it is generally better to embed documents for performance reasons. That is one of the major advantages of MongoDB's approach, data locality. The pymongo lib uses SON sorted dict implementation which will maintain the ordering of your document keys.\nIf you document contains a list/array of elements and you are concerned about the ordering of the elements, fear not because the array order is maintained as well.\n" ]
[ 1 ]
[]
[]
[ "mongodb", "python" ]
stackoverflow_0002777859_mongodb_python.txt
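For the pagination half of the question: with embedded lists, reasonably recent MongoDB servers can page through an array server-side via the $slice projection operator, so the whole list never travels over the wire. A hedged sketch; the collection, field and id are invented:

import pymongo

db = pymongo.Connection().blog  # pymongo API of this era
page, per_page = 2, 10
doc = db.posts.find_one(
    {'_id': 'some-post'},
    {'comments': {'$slice': [page * per_page, per_page]}})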
Q: Stream a file to the HTTP response in Pylons I have a Pylons controller action that needs to return a file to the client. (The file is outside the web root, so I can't just link directly to it.) The simplest way is, of course, this: with open(filepath, 'rb') as f: response.write(f.read()) That works, but it's obviously inefficient for large files. What's the best way to do this? I haven't been able to find any convenient methods in Pylons to stream the contents of the file. Do I really have to write the code to read a chunk at a time myself from scratch? A: The correct tool to use is shutil.copyfileobj, which copies from one to the other a chunk at a time. Example usage: import shutil with open(filepath, 'r') as f: shutil.copyfileobj(f, response) This will not result in very large memory usage, and does not require implementing the code yourself. The usual care with exceptions should be taken - if you handle signals (such as SIGCHLD) you have to handle EINTR because the writes to response could be interrupted, and IOError/OSError can occur for various reasons when doing I/O. A: I finally got it to work using the FileApp class, thanks to Chris AtLee and THC4k (from this answer). This method also allowed me to set the Content-Length header, something Pylons has a lot of trouble with, which enables the browser to show an estimate of the time remaining. Here's the complete code: def _send_file_response(self, filepath): user_filename = '_'.join(filepath.split('/')[-2:]) file_size = os.path.getsize(filepath) headers = [('Content-Disposition', 'attachment; filename=\"' + user_filename + '\"'), ('Content-Type', 'text/plain'), ('Content-Length', str(file_size))] from paste.fileapp import FileApp fapp = FileApp(filepath, headers=headers) return fapp(request.environ, self.start_response) A: The key here is that WSGI, and pylons by extension, work with iterable responses. So you should be able to write some code like (warning, untested code below!): def file_streamer(): with open(filepath, 'rb') as f: while True: block = f.read(4096) if not block: break yield block response.app_iter = file_streamer() Also, paste.fileapp.FileApp is designed to be able to return file data for you, so you can also try: return FileApp(filepath) in your controller method.
Stream a file to the HTTP response in Pylons
I have a Pylons controller action that needs to return a file to the client. (The file is outside the web root, so I can't just link directly to it.) The simplest way is, of course, this: with open(filepath, 'rb') as f: response.write(f.read()) That works, but it's obviously inefficient for large files. What's the best way to do this? I haven't been able to find any convenient methods in Pylons to stream the contents of the file. Do I really have to write the code to read a chunk at a time myself from scratch?
[ "The correct tool to use is shutil.copyfileobj, which copies from one to the other a chunk at a time.\nExample usage:\nimport shutil\nwith open(filepath, 'r') as f:\n shutil.copyfileobj(f, response)\n\nThis will not result in very large memory usage, and does not require implementing the code yourself.\nThe usual care with exceptions should be taken - if you handle signals (such as SIGCHLD) you have to handle EINTR because the writes to response could be interrupted, and IOError/OSError can occur for various reasons when doing I/O.\n", "I finally got it to work using the FileApp class, thanks to Chris AtLee and THC4k (from this answer). This method also allowed me to set the Content-Length header, something Pylons has a lot of trouble with, which enables the browser to show an estimate of the time remaining.\nHere's the complete code:\ndef _send_file_response(self, filepath):\n user_filename = '_'.join(filepath.split('/')[-2:])\n file_size = os.path.getsize(filepath)\n\n headers = [('Content-Disposition', 'attachment; filename=\\\"' + user_filename + '\\\"'),\n ('Content-Type', 'text/plain'),\n ('Content-Length', str(file_size))]\n\n from paste.fileapp import FileApp\n fapp = FileApp(filepath, headers=headers)\n\n return fapp(request.environ, self.start_response)\n\n", "The key here is that WSGI, and pylons by extension, work with iterable responses. So you should be able to write some code like (warning, untested code below!):\ndef file_streamer():\n with open(filepath, 'rb') as f:\n while True:\n block = f.read(4096)\n if not block:\n break\n yield block\nresponse.app_iter = file_streamer()\n\nAlso, paste.fileapp.FileApp is designed to be able to return file data for you, so you can also try:\nreturn FileApp(filepath)\n\nin your controller method.\n" ]
[ 8, 4, 1 ]
[]
[]
[ "http", "pylons", "python" ]
stackoverflow_0002413707_http_pylons_python.txt
Q: Python: Problems finding string in website source code I open a website with urlopen. I then put the website sourcecode into a variable like so source = website.read() When I just print the source it comes out formatted correctly, however when I try to iterate through each line each character is its own line. For example, when I just print it looks like this <HTML> title</html> When I do this for line in source: print line it looks like this < H T M L ... etc I need to find a string that starts with "var" and then print that entire line. A: Use readlines() instead of read() to get a list of lines. A: Or use: for line in source.split("\n"): ...
Python: Problems finding string in website source code
I open a website with urlopen. I then put the website sourcecode into a variable like so source = website.read() When I just print the source it comes out formatted correctly, however when I try to iterate through each line each character is its own line. For example, when I just print it looks like this <HTML> title</html> When I do this for line in source: print line it looks like this < H T M L ... etc I need to find a string that starts with "var" and then print that entire line.
[ "Use readlines() instead of read() to get a list of lines.\n", "Or use:\nfor line in source.split(\"\\n\"):\n ...\n\n" ]
[ 5, 1 ]
[]
[]
[ "python" ]
stackoverflow_0002779115_python.txt
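Putting the answers together for the stated goal: split the page into real lines, then keep the ones that start with "var".

import urllib2

source = urllib2.urlopen('http://www.example.com/').read()
for line in source.splitlines():  # handles \n and \r\n endings
    if line.lstrip().startswith('var'):
        print line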
Q: How can I find "week" in django's calendar app? MyCalendar.py Code: from django import template import calendar import datetime date = datetime.date.today() week = ??? ... The question is that I want to get the week which contains today's date. How can I do this? Thanks for the help! Ver: Django-1.0 Python-2.6.4 A: After reading your comment I think this is what you want: import datetime today = datetime.date.today() weekday = today.weekday() start_delta = datetime.timedelta(days=weekday) start_of_week = today - start_delta week_dates = [start_of_week + datetime.timedelta(days=i) for i in range(7)] print week_dates Prints: [datetime.date(2010, 5, 3), datetime.date(2010, 5, 4), datetime.date(2010, 5, 5), datetime.date(2010, 5, 6), datetime.date(2010, 5, 7), datetime.date(2010, 5, 8), datetime.date(2010, 5, 9)] A: If you want the week number i.e. 0-53 try the isocalendar method. date=datetime.date.today() week=date.isocalendar()[1] http://docs.python.org/library/datetime.html#datetime.datetime.isocalendar
How can I find "week" in django's calendar app?
MyCalendar.py Code: from django import template import calendar import datetime date = datetime.date.today() week = ??? ... The question is that I want to get the week which contains today's date. How can I do this? Thanks for the help! Ver: Django-1.0 Python-2.6.4
[ "After reading your comment I think this is what you want:\nimport datetime\n\ntoday = datetime.date.today()\nweekday = today.weekday()\nstart_delta = datetime.timedelta(days=weekday)\nstart_of_week = today - start_delta\nweek_dates = [start_of_week + datetime.timedelta(days=i) for i in range(7)]\nprint week_dates\n\nPrints:\n[datetime.date(2010, 5, 3), datetime.date(2010, 5, 4), datetime.date(2010, 5, 5), datetime.date(2010, 5, 6), datetime.date(2010, 5, 7), datetime.date(2010, 5, 8), datetime.date(2010, 5, 9)]\n\n", "If you want the week number i.e. 0-53 try the isocalendar method.\ndate=datetime.date.today()\nweek=date.isocalendar()[1]\n\nhttp://docs.python.org/library/datetime.html#datetime.datetime.isocalendar\n" ]
[ 11, 3 ]
[]
[]
[ "calendar", "django", "python" ]
stackoverflow_0002778715_calendar_django_python.txt
Q: Run and terminate a program (Python under Windows) I'd like to create a small script that basically does this: run program1.exe --> kill program1.exe after n seconds --> run program1.exe again. I know some basic Python and would read up on this, but I'm in a bit of a hurry and just need this to get done ASAP. If someone has a script/idea or could help me out with just the syntax I need to open and kill the .exe file, please... I don't mind solutions in other languages either. I'm sorry if this is a bit "please write my code"-ish, that's not something I typically do. A: Read up on the subprocess module. import subprocess, time p = subprocess.Popen(['program1.exe']) time.sleep(1) # Parameter is in seconds p.terminate() p.wait()
Run and terminate a program (Python under Windows)
I'd like to create a small script that basically does this: run program1.exe --> kill program1.exe after n seconds --> run program1.exe again. I know some basic Python and would read up on this, but I'm in a bit of a hurry and just need this to get done ASAP. If someone has a script/idea or could help me out with just the syntax I need to open and kill the .exe file, please... I don't mind solutions in other languages either. I'm sorry if this is a bit "please write my code"-ish, that's not something I typically do.
[ "Read up on the subprocess module.\nimport subprocess, time\np = subprocess.Popen(['program1.exe'])\ntime.sleep(1) # Parameter is in seconds\np.terminate()\np.wait()\n\n" ]
[ 3 ]
[]
[]
[ "autorun", "python", "windows" ]
stackoverflow_0002779464_autorun_python_windows.txt
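The full run / kill-after-n-seconds / run-again cycle from the question, sketched around the answer's snippet (the program name and delay are placeholders):

import subprocess
import time

n = 60  # seconds to let the program run
while True:
    p = subprocess.Popen(['program1.exe'])
    time.sleep(n)
    p.terminate()  # TerminateProcess() on Windows
    p.wait()       # reap it before relaunching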
Q: Python faster way to read fixed length fields form a file into dictionary I have a file of names and addresses as follows (example line) OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT And I want to read it into a dictionary of name and value. Here self.field_list is a list of the name, length and start point of the fixed fields in the file. What ways are there to speed up this method? (python 2.6) def line_to_dictionary(self, file_line,rec_num): file_line = file_line.lower() # Make it all lowercase return_rec = {} # Return record as a dictionary for (field_start, field_length, field_name) in self.field_list: field_data = file_line[field_start:field_start+field_length] if self.strip_fields == True: # Strip off white spaces first field_data = field_data.strip() if field_data != '': # Only add non-empty fields to dictionary return_rec[field_name] = field_data # Set hidden fields # return_rec['_rec_num_'] = rec_num return_rec['_dataset_name_'] = self.name return return_rec A: struct.unpack() combined with s specifiers with lengths will tear the string apart faster than slicing. A: Edit: Just saw your remark below about commas. The approach below is fast when it comes to file reading, but it is delimiter-based, and would fail in your case. It's useful in other cases, though. If you want to read the file really fast, you can use a dedicated module, such as the almost standard Numpy: data = numpy.loadtxt('file_name.txt', dtype=('S10', 'S8'), delimiter=',') # dtype must be adapted to your column sizes loadtxt() also allows you to process fields on the fly (with the converters argument). Numpy also allows you to give names to columns (see the doc), so that you can do: data['name'][42] # Name # 42 The structure obtained is like an Excel array; it is quite memory efficient, compared to a dictionary. If you really need to use a dictionary, you can use a dedicated loop over the data array read quickly by Numpy, in a way similar to what you have done. A: If you want to get some speed up, you can also store field_start+field_length directly in self.field_list, instead of storing field_length. Also, if field_data != '' can more simply be written as if field_data (if this gives any speed up, it is marginal, though). I would say that your method is quite fast, compared to what standard Python can do (i.e., without using non-standard, dedicated modules). A: If your lines include commas like the example, you can use line.split(',') instead of several slices. This may prove to be faster. A: You'll want to use the csv module. It handle not only csv, but any csv-like format which yours seems to be.
Python faster way to read fixed length fields from a file into dictionary
I have a file of names and addresses as follows (example line) OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT And I want to read it into a dictionary of name and value. Here self.field_list is a list of the name, length and start point of the fixed fields in the file. What ways are there to speed up this method? (python 2.6) def line_to_dictionary(self, file_line,rec_num): file_line = file_line.lower() # Make it all lowercase return_rec = {} # Return record as a dictionary for (field_start, field_length, field_name) in self.field_list: field_data = file_line[field_start:field_start+field_length] if self.strip_fields == True: # Strip off white spaces first field_data = field_data.strip() if field_data != '': # Only add non-empty fields to dictionary return_rec[field_name] = field_data # Set hidden fields # return_rec['_rec_num_'] = rec_num return_rec['_dataset_name_'] = self.name return return_rec
[ "struct.unpack() combined with s specifiers with lengths will tear the string apart faster than slicing.\n", "Edit: Just saw your remark below about commas. The approach below is fast when it comes to file reading, but it is delimiter-based, and would fail in your case. It's useful in other cases, though.\nIf you want to read the file really fast, you can use a dedicated module, such as the almost standard Numpy:\ndata = numpy.loadtxt('file_name.txt', dtype=('S10', 'S8'), delimiter=',') # dtype must be adapted to your column sizes\n\nloadtxt() also allows you to process fields on the fly (with the converters argument). Numpy also allows you to give names to columns (see the doc), so that you can do:\ndata['name'][42] # Name # 42\n\nThe structure obtained is like an Excel array; it is quite memory efficient, compared to a dictionary.\nIf you really need to use a dictionary, you can use a dedicated loop over the data array read quickly by Numpy, in a way similar to what you have done.\n", "If you want to get some speed up, you can also store field_start+field_length directly in self.field_list, instead of storing field_length.\nAlso, if field_data != '' can more simply be written as if field_data (if this gives any speed up, it is marginal, though).\nI would say that your method is quite fast, compared to what standard Python can do (i.e., without using non-standard, dedicated modules).\n", "If your lines include commas like the example, you can use line.split(',') instead of several slices. This may prove to be faster.\n", "You'll want to use the csv module.\nIt handle not only csv, but any csv-like format which yours seems to be.\n" ]
[ 2, 2, 0, 0, 0 ]
[]
[]
[ "dictionary", "file", "performance", "python" ]
stackoverflow_0002778724_dictionary_file_performance_python.txt
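A sketch of the struct idea from the first answer. The format string below is hand-built for the sample line ('x' pad bytes skip the comma separators); real code would derive it from self.field_list:

import struct

unpacker = struct.Struct('6s x 8s x 2s x 17s')
names = ('first', 'last', 'num', 'street')

line = 'OSCAR ,CANNONS ,8 ,STIEGLITZ CIRCUIT'
values = unpacker.unpack(line[:unpacker.size])
record = dict((n, v.strip()) for n, v in zip(names, values) if v.strip())
# record == {'first': 'OSCAR', 'last': 'CANNONS', 'num': '8', 'street': 'STIEGLITZ CIRCUIT'}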
Q: How to programmatically pause spotify when a call comes in on skype Skype has an inbuilt function where iTunes playback is paused and resumed automatically when a call comes in. It would be nice to have something similar for Spotify. Both provide a python API so this would seem the obvious route to go down. A: I've had a stab at doing this in python. It runs in the background as a daemon, pausing/resuming spotify when a call comes. It uses the Python libraries for Skype & Spotify: http://code.google.com/p/pytify/ https://developer.skype.com/wiki/Skype4Py import Skype4Py import time from pytify import Spotify # Create Skype object skype = Skype4Py.Skype() skype.Attach() # Create Spotify object spotify = Spotify() spotifyPlaying = spotify.isPlaying() # Create handler for when Skype call status changes def on_call_status(call, status): if status == Skype4Py.clsInProgress: # Save current spotify state global spotifyPlaying spotifyPlaying = spotify.isPlaying() if spotify.isPlaying(): print "Call started, pausing spotify" # Call started, pause Spotify spotify.stop() elif status == Skype4Py.clsFinished: # Call finished, resume Spotify if it was playing if spotifyPlaying and not spotify.isPlaying(): print "Call finished, resuming spotify" spotify.playpause() skype.OnCallStatus = on_call_status while True: time.sleep(10)
How to programmatically pause spotify when a call comes in on skype
Skype has an inbuilt function where iTunes playback is paused and resumed automatically when a call comes in. It would be nice to have something similar for Spotify. Both provide a python API so this would seem the obvious route to go down.
[ "I've had a stab at doing this in python. It runs in the background as a daemon, pausing/resuming spotify when a call comes. It uses the Python libraries for Skype & Spotify:\nhttp://code.google.com/p/pytify/\nhttps://developer.skype.com/wiki/Skype4Py\nimport Skype4Py\nimport time\nfrom pytify import Spotify\n\n# Create Skype object\nskype = Skype4Py.Skype()\nskype.Attach()\n\n# Create Spotify object\nspotify = Spotify()\nspotifyPlaying = spotify.isPlaying()\n\n# Create handler for when Skype call status changes\ndef on_call_status(call, status):\n if status == Skype4Py.clsInProgress:\n # Save current spotify state\n global spotifyPlaying\n spotifyPlaying = spotify.isPlaying()\n\n if spotify.isPlaying():\n print \"Call started, pausing spotify\"\n # Call started, pause Spotify\n spotify.stop()\n\n elif status == Skype4Py.clsFinished:\n # Call finished, resume Spotify if it was playing\n if spotifyPlaying and not spotify.isPlaying():\n print \"Call finished, resuming spotify\"\n spotify.playpause() \n\nskype.OnCallStatus = on_call_status\n\nwhile True:\n time.sleep(10)\n\n" ]
[ 6 ]
[]
[]
[ "python", "skype", "spotify" ]
stackoverflow_0002779578_python_skype_spotify.txt
Q: Parsing groupings of strings (Python) I have a string that looks something like this: [["Name1","ID1","DDY1", "CALL1", "WHEN1"], ["Name2","ID2","DDY2", "CALL2", "WHEN2"],...]; This string was taken from a website. There can be any number of groupings. How could I parse this string and print just the Name variables of each grouping? A: Hope I understood well. >>> import json >>> a = json.loads('[["Name1","ID1","DDY1", "CALL1", "WHEN1"], ["Name2","ID2","DDY2", "CALL2", "WHEN2"]]') >>> [x[0] for x in a] [u'Name1', u'Name2'] >>> A: import ast y = ast.literal_eval(input) [x[0] for x in y] Thanks to @stephan for pointing me in the right direction with ast.literal_eval. As described by the documentation: Safely evaluate an expression node or a string containing a Python expression. The string or node provided may only consist of the following Python literal structures: strings, numbers, tuples, lists, dicts, booleans, and None. This can be used for safely evaluating strings containing Python expressions from untrusted sources without the need to parse the values oneself. Note: this is new functionality in Python 2.6.
Parsing groupings of strings (Python)
I have a string that looks something like this: [["Name1","ID1","DDY1", "CALL1", "WHEN1"], ["Name2","ID2","DDY2", "CALL2", "WHEN2"],...]; This string was taken from a website. There can be any number of groupings. How could I parse this string and print just the Name variables of each grouping?
[ "Hope I understood well.\n>>> import json\n>>> a = json.loads('[[\"Name1\",\"ID1\",\"DDY1\", \"CALL1\", \"WHEN1\"], [\"Name2\",\"ID2\",\"DDY2\", \"CALL2\", \"WHEN2\"]]')\n>>> [x[0] for x in a]\n[u'Name1', u'Name2']\n>>> \n\n", "import ast\ny = ast.literal_eval(input)\n[x[0] for x in y]\n\nThanks to @stephan for pointing me in the right direction with ast.literal_eval. As described by the documentaion:\n\nSafely evaluate an expression node or\n a string containing a Python\n expression. The string or node\n provided may only consist of the\n following Python literal structures:\n strings, numbers, tuples, lists,\n dicts, booleans, and None.\nThis can be used for safely evaluating\n strings containing Python expressions\n from untrusted sources without the\n need to parse the values oneself.\n\nNote: this is new functionality in Python 2.6.\n" ]
[ 5, 3 ]
[]
[]
[ "python" ]
stackoverflow_0002779584_python.txt
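One wrinkle neither answer spells out: as quoted in the question, the scraped string ends with a semicolon (it looks like a JavaScript array literal), and both json.loads and ast.literal_eval reject that, so trim it first. A sketch:

import ast

raw = '[["Name1","ID1","DDY1", "CALL1", "WHEN1"], ["Name2","ID2","DDY2", "CALL2", "WHEN2"]];'
data = ast.literal_eval(raw.rstrip().rstrip(';'))
names = [group[0] for group in data]  # ['Name1', 'Name2']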
Q: Is it OK to set "Cache-Control: public" when sending “304 Not Modified” for images stored in the datastore After asking a question about sending “304 Not Modified” for images stored in the in the Google App Engine datastore, I now have a question about Cache-Control. My app now sends Last-Modified and Etag, but by default GAE alsto sends Cache-Control: no-cache. According to this page: The “no-cache” directive, according to the RFC, tells the browser that it should revalidate with the server before serving the page from the cache. [...] In practice, IE and Firefox have started treating the no-cache directive as if it instructs the browser not to even cache the page. As I DO want browsers to cache the image, I've added the following line to my code: self.response.headers['Cache-Control'] = "public" According to the same page as before: The “cache-control: public” directive [...] tells the browser and proxies [...] that the page may be cached. This is good for non-sensitive pages, as caching improves performance. The question is if this could be harmful to the application in some way? Would it be best to send Cache-Control: must-revalidate to "force" the browser to revalidate (I suppose that is the behavior that was originally the reason behind sending Cache-Control: no-cache) This directive insists that the browser must revalidate the page against the server before serving it from cache. Note that it implicitly lets the browser cache the page. A: It isn't necessary to set Cache-Control: public unless your content is protected by HTTP authentication or SSL. Try setting Cache-Control: max-age=nn (where nn is an integer number of seconds that you'd like caches to consider the response fresh for). AppEngine should remove the no-cache. A: See http://www.kyle-jensen.com/proxy-caching-on-google-appengine, gives a good explaination of setting cache-control headers for GAE. A: This can not be harmful to your application, the only risk described in that page amounts to public proxies (such as the ones used by ISPs) caching your image. If the image is confidential or user specific, you don't want this to happen. In all other cases, the caching is exactly what you want.
Is it OK to set "Cache-Control: public" when sending “304 Not Modified” for images stored in the datastore
After asking a question about sending “304 Not Modified” for images stored in the Google App Engine datastore, I now have a question about Cache-Control. My app now sends Last-Modified and Etag, but by default GAE also sends Cache-Control: no-cache. According to this page: The “no-cache” directive, according to the RFC, tells the browser that it should revalidate with the server before serving the page from the cache. [...] In practice, IE and Firefox have started treating the no-cache directive as if it instructs the browser not to even cache the page. As I DO want browsers to cache the image, I've added the following line to my code: self.response.headers['Cache-Control'] = "public" According to the same page as before: The “cache-control: public” directive [...] tells the browser and proxies [...] that the page may be cached. This is good for non-sensitive pages, as caching improves performance. The question is whether this could be harmful to the application in some way. Would it be best to send Cache-Control: must-revalidate to "force" the browser to revalidate (I suppose that is the behavior that was originally the reason behind sending Cache-Control: no-cache)? This directive insists that the browser must revalidate the page against the server before serving it from cache. Note that it implicitly lets the browser cache the page.
[ "It isn't necessary to set Cache-Control: public unless your content is protected by HTTP authentication or SSL. \nTry setting Cache-Control: max-age=nn (where nn is an integer number of seconds that you'd like caches to consider the response fresh for). AppEngine should remove the no-cache.\n", "See http://www.kyle-jensen.com/proxy-caching-on-google-appengine, gives a good explaination of setting cache-control headers for GAE.\n", "This can not be harmful to your application, the only risk described in that page amounts to public proxies (such as the ones used by ISPs) caching your image. If the image is confidential or user specific, you don't want this to happen. In all other cases, the caching is exactly what you want.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "caching", "google_app_engine", "http", "python" ]
stackoverflow_0002754644_caching_google_app_engine_http_python.txt
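A hedged sketch pulling the advice together for a GAE webapp handler; the Image model and its fields are invented for illustration. Per the first answer, setting an explicit max-age should displace the default no-cache:

from google.appengine.ext import webapp

class ImageHandler(webapp.RequestHandler):
    def get(self, key_name):
        img = Image.get_by_key_name(key_name)  # hypothetical datastore model
        self.response.headers['Content-Type'] = 'image/png'
        self.response.headers['Cache-Control'] = 'public, max-age=86400'
        self.response.headers['Last-Modified'] = img.updated.strftime('%a, %d %b %Y %H:%M:%S GMT')
        self.response.out.write(img.data)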
Q: How to build and deploy Python web applications I have a Python web application consisting of several Python packages. What is the best way of building and deploying this to the servers? Currently I'm deploying the packages with Capistrano, installing the packages into a virtualenv with bash, and configuring the servers with puppet, but I would like to go for a more Python based solution. I've been looking a bit into zc.buildout, but it's not clear for me what I can/should use it for. A: Depends on what Your infrastructure is. We're just using debian packages and buildbot to make them. On other setups, I use Fabric scripts. As for format, I'm just using tbz2 files, but I've heard about people just depoloying eggs. I'd strongly recommend having proper build and having BuildBot/Hudson to build packages, as using SCM beats the purpose and encourage bad practices. A: Paver is a rake/make work alike for python. I don't know if this is what your looking for, still haven't found anything equivalent to puppet for python... A: Would SCons do what you want? http://www.scons.org/ A: pyinstall looks like it should be a simpler solution for you. At least as far as packaging the python stuff and installing in virtualenv goes. I don't know of a pythonic way to do server configuration... A: I use Mercurial as my SCM system, and also for deployment too. It's just a matter of cloning the repository from another one, and then a pull/update or a fetch will get it up to date. I use several instances of the repository - one on the development server, one (or more, depending upon circumstance) on my local machine, one on the production server, and one 'Master' repository that is available to the greater internet (although only by SSH). The only thing it doesn't do is automatically update the database if it is changed, but with incoming hooks I could probably do this too.
How to build and deploy Python web applications
I have a Python web application consisting of several Python packages. What is the best way of building and deploying this to the servers? Currently I'm deploying the packages with Capistrano, installing the packages into a virtualenv with bash, and configuring the servers with puppet, but I would like to go for a more Python based solution. I've been looking a bit into zc.buildout, but it's not clear for me what I can/should use it for.
[ "Depends on what Your infrastructure is. We're just using debian packages and buildbot to make them.\nOn other setups, I use Fabric scripts. As for format, I'm just using tbz2 files, but I've heard about people just depoloying eggs.\nI'd strongly recommend having proper build and having BuildBot/Hudson to build packages, as using SCM beats the purpose and encourage bad practices. \n", "Paver is a rake/make work alike for python. I don't know if this is what your looking for, still haven't found anything equivalent to puppet for python...\n", "Would SCons do what you want?\nhttp://www.scons.org/\n", "pyinstall looks like it should be a simpler solution for you. At least as far as packaging the python stuff and installing in virtualenv goes. I don't know of a pythonic way to do server configuration...\n", "I use Mercurial as my SCM system, and also for deployment too. It's just a matter of cloning the repository from another one, and then a pull/update or a fetch will get it up to date.\nI use several instances of the repository - one on the development server, one (or more, depending upon circumstance) on my local machine, one on the production server, and one 'Master' repository that is available to the greater internet (although only by SSH).\nThe only thing it doesn't do is automatically update the database if it is changed, but with incoming hooks I could probably do this too.\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "deployment", "python" ]
stackoverflow_0000166334_deployment_python.txt
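The Fabric scripts mentioned in the first answer might look roughly like this hypothetical sketch (Fabric 1.x API; hosts, paths and commands are made up):

from fabric.api import cd, env, run

env.hosts = ['web1.example.com', 'web2.example.com']

def deploy():
    with cd('/srv/myapp'):
        run('hg pull -u')                               # update the checkout
        run('env/bin/pip install -r requirements.txt')  # refresh the virtualenv
        run('touch app.wsgi')                           # nudge the app server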
Q: Setting the first day of the week with the wx.lib.calendar.Calendar control? As per the title, is it possible to change the first day of the week (Monday instead of Sunday)? A: Use the wx.CAL_MONDAY_FIRST value in the style argument of the constructir. Update Hier is code generated by wxGlade: wx.calendar.CalendarCtrl(self, -1, style=wx.calendar.CAL_MONDAY_FIRST) It has Monday in the leftmost column. A: Found it: you have to call cal.SetBusType().
Setting the first day of the week with the wx.lib.calendar.Calendar control?
As per the title, is it possible to change the first day of the week (Monday instead of Sunday)?
[ "Use the wx.CAL_MONDAY_FIRST value in the style argument of the constructir.\nUpdate\nHier is code generated by wxGlade:\nwx.calendar.CalendarCtrl(self, -1, style=wx.calendar.CAL_MONDAY_FIRST)\n\nIt has Monday in the leftmost column.\n", "Found it: you have to call cal.SetBusType().\n" ]
[ 0, 0 ]
[]
[]
[ "calendar", "python", "wxpython" ]
stackoverflow_0002779883_calendar_python_wxpython.txt
Q: Getting error on inserting tuple values in postgreSQL table using python I want to keep last.fm's user recent music tracks list to postgresql database table using pylast interface.But when I tried to insert values to the table it shows errors.Code example: import pylast import psycopg2 import re from md5 import md5 import sys import codecs import psycopg2.extensions psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) user_name = raw_input("Enter last.fm username: ") user_password = raw_input("Enter last.fm password: ") api_key = '*********' api_secret = '********' #Lastfm network authentication md5_user_password = md5(user_password).hexdigest() network = pylast.get_lastfm_network(api_key, api_secret,user_name,md5_user_password) used=pylast.User(user_name, network) recent_tracks=used.get_recent_tracks(10) # Database connection try: conn=psycopg2.connect("dbname='**' user='postgres' host='localhost' password='*'") conn.set_client_encoding('UNICODE') except: print "I am unable to connect to the database, exiting." sys.exit() cur=conn.cursor() for i, artist in enumerate(recent_tracks): for key in sorted(artist): cur.execute(""" INSERT INTO u_recent_track(Playback_date,Time_stamp,Track) VALUES (%s,%s,%s)""", (key, artist[key])) conn.commit() cur.execute("SELECT * FROM u_recent_track;") cur.fetchone() for row in cur: print ' '.join(row[1:]) cur.close() conn.close() Here "recent_tracks" tuple have the values for example: artist 0 - playback_date : 5 May 2010, 11:14 - timestamp : 1273058099 - track : Brian Eno - Web I want to store these value under u_recent_track(Tid, Playback_date, Time_stamp, Track).Can anybody have idea how to sort out this problem? when I tried to run, it shows error: Traceback (most recent call last): File "F:\JavaWorkspace\Test\src\recent_track_database.py", line 50, in <module> VALUES (%s,%s,%s)""", (key, artist[key])) IndexError: tuple index out of range A: sorted(artist) returns a ordered list of artist, when you're iterating over it it returns still elements of artist. So when you're trying to access artist[key] it is actually trying to access an element of artist indexed by the index, which is an element of artist itself. Tuples do not work this way. It seems you're using python2.5 or lower and therefore you could do: cur.executemany(""" INSERT INTO u_recent_track(Playback_date,Time_stamp,Track) VALUES (%(playback_date)s,%(timestamp)s,%(track)s)""", recent_tracks) conn.commit() This should work. A: (Playback_date,Time_stamp,Track) indicates you want to insert three values into a row. VALUES (%s,%s) should therefore be VALUES (%s,%s,%s) and (key, artist[key]) should be a tuple with 3 elements, not 2. Try: for track in recent_tracks: cur.execute(""" INSERT INTO u_recent_track(Playback_date,Time_stamp,Track) VALUES (%s,%s,%s)""", (track.get_date(), track.get_timestamp(), track.get_track())) conn.commit() PS. This is where I'm getting my information about the pylast API. PPS. If my reading of the documentation is correct, track.get_track() will return a Track object. It has methods like get_album, get_artist, get_id and get_title. Exactly what do you want stored in the Track column of the u_recent_track database table? A: This error isn't anything to do with Postgres, but with the artist variable. You're firstly saying: for key in sorted(artist): implying that it's a list, but then you're accessing it as if it were a dictionary, which is raising an error. Which is it? Can you show an example of the full contents?
Getting error on inserting tuple values in postgreSQL table using python
I want to keep last.fm's user recent music tracks list in a PostgreSQL database table using the pylast interface. But when I tried to insert values into the table it shows errors. Code example: import pylast import psycopg2 import re from md5 import md5 import sys import codecs import psycopg2.extensions psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) user_name = raw_input("Enter last.fm username: ") user_password = raw_input("Enter last.fm password: ") api_key = '*********' api_secret = '********' #Lastfm network authentication md5_user_password = md5(user_password).hexdigest() network = pylast.get_lastfm_network(api_key, api_secret,user_name,md5_user_password) used=pylast.User(user_name, network) recent_tracks=used.get_recent_tracks(10) # Database connection try: conn=psycopg2.connect("dbname='**' user='postgres' host='localhost' password='*'") conn.set_client_encoding('UNICODE') except: print "I am unable to connect to the database, exiting." sys.exit() cur=conn.cursor() for i, artist in enumerate(recent_tracks): for key in sorted(artist): cur.execute(""" INSERT INTO u_recent_track(Playback_date,Time_stamp,Track) VALUES (%s,%s,%s)""", (key, artist[key])) conn.commit() cur.execute("SELECT * FROM u_recent_track;") cur.fetchone() for row in cur: print ' '.join(row[1:]) cur.close() conn.close() Here the "recent_tracks" tuple has the values, for example: artist 0 - playback_date : 5 May 2010, 11:14 - timestamp : 1273058099 - track : Brian Eno - Web I want to store these values under u_recent_track(Tid, Playback_date, Time_stamp, Track). Does anybody have an idea how to sort out this problem? When I tried to run it, it shows the error: Traceback (most recent call last): File "F:\JavaWorkspace\Test\src\recent_track_database.py", line 50, in <module> VALUES (%s,%s,%s)""", (key, artist[key])) IndexError: tuple index out of range
[ "sorted(artist) returns a ordered list of artist, when you're iterating over it it returns still elements of artist. So when you're trying to access artist[key] it is actually trying to access an element of artist indexed by the index, which is an element of artist itself. Tuples do not work this way.\nIt seems you're using python2.5 or lower and therefore you could do:\ncur.executemany(\"\"\"\n INSERT INTO u_recent_track(Playback_date,Time_stamp,Track) \n VALUES (%(playback_date)s,%(timestamp)s,%(track)s)\"\"\", recent_tracks)\nconn.commit()\n\nThis should work.\n", "(Playback_date,Time_stamp,Track) indicates you want to insert three values into a row.\nVALUES (%s,%s) should therefore be VALUES (%s,%s,%s)\nand (key, artist[key]) should be a tuple with 3 elements, not 2.\nTry:\nfor track in recent_tracks:\n cur.execute(\"\"\"\n INSERT INTO u_recent_track(Playback_date,Time_stamp,Track) \n VALUES (%s,%s,%s)\"\"\", (track.get_date(), track.get_timestamp(), track.get_track()))\n conn.commit()\n\nPS. This is where I'm getting my information about the pylast API. \nPPS. If my reading of the documentation is correct, track.get_track() will return a Track object. It has methods like get_album, get_artist, get_id and get_title. Exactly what do you want stored in the Track column of the u_recent_track database table?\n", "This error isn't anything to do with Postgres, but with the artist variable. You're firstly saying:\nfor key in sorted(artist):\n\nimplying that it's a list, but then you're accessing it as if it were a dictionary, which is raising an error. Which is it? Can you show an example of the full contents?\n" ]
[ 1, 0, 0 ]
[]
[]
[ "last.fm", "python" ]
stackoverflow_0002780579_last.fm_python.txt
Q: Extract data from PostgreSQL DB without using pg_dump There is a PostgreSQL database on which I only have limited access (e.g, I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV: for table_name, file_name in zip(table_names, file_names): cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz"""%(table_name,dir_name,file_name) os.system(cmd) I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries---one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks! A: It puzzles me the bit about "I do not have the permissions needed to just dump a table as SQL from within psql." pg_dump runs standalone, outside psql (both are clients) and if you have permission to connect to the database and select a table, I'd guess you'd also be able to dump it using pg_dump -t <table>. Am I missing something? A: If you use psycopg2 you can use cursor.description to check column names, and use fetched data type to convert it to required string like data to acceptable format. This code creates INSERT statements that you can use not only with PostgreSQL, but also with other databases (then you probably will have to change date format): cursor.execute("SELECT * FROM %s" % (table_name)) column_names = [] columns_descr = cursor.description for c in columns_descr: column_names.append(c[0]) insert_prefix = 'insert into %s (%s) values ' % (table_name, ', '.join(column_names)) rows = cursor.fetchall() for row in rows: row_data = [] for rd in row: if rd is None: row_data.append('NULL') elif isinstance(rd, datetime.datetime): row_data.append("'%s'" % (rd.strftime('%Y-%m-%d %H:%M:%S') )) else: row_data.append(repr(rd)) print('%s (%s);' % (insert_prefix, ', '.join(row_data))) In psycopg2 there is even support for COPY. Look at: COPY-related methods on their docs If you prefer using metadata then you can use my recipe: Dump PostgreSQL db schema to text. It is based on Extracting META information from PostgreSQL by Lorenzo Alberton A: You could use these queries (gotten by using "psql --echo-hidden" and "\d ") to get the base metadata: -- GET OID SET oid FROM pg_class WHERE relname = <YOUR_TABLE_NAME> -- GET METADATA SELECT a.attname, pg_catalog.format_type(a.atttypid, a.atttypmod), (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128) FROM pg_catalog.pg_attrdef d WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef), a.attnotnull, a.attnum FROM pg_catalog.pg_attribute a WHERE a.attrelid = <YOUR_TABLES_OID_FROM_PG_CLASS> AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum; This gives you the name, data type, default, null flag and field order within the row. To get the actual data, your best bet is still CSV--the built in COPY table TO STDOUT WITH CSV HEADER is very robust. But if you are worried about encoding, be sure to get the value of server_encoding and client_encoding just before dumping the CSV data. That combined with the metadata from the above query should give enough information to properly interpret a CSV dump.
Extract data from PostgreSQL DB without using pg_dump
There is a PostgreSQL database on which I only have limited access (e.g, I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV: for table_name, file_name in zip(table_names, file_names): cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz"""%(table_name,dir_name,file_name) os.system(cmd) I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries---one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!
[ "It puzzles me the bit about \"I do not have the permissions needed to just dump a table as SQL from within psql.\" pg_dump runs standalone, outside psql (both are clients) and if you have permission to connect to the database and select a table, I'd guess you'd also be able to dump it using pg_dump -t <table>. Am I missing something?\n", "If you use psycopg2 you can use cursor.description to check column names, and use fetched data type to convert it to required string like data to acceptable format.\nThis code creates INSERT statements that you can use not only with PostgreSQL, but also with other databases (then you probably will have to change date format):\ncursor.execute(\"SELECT * FROM %s\" % (table_name))\ncolumn_names = []\ncolumns_descr = cursor.description\nfor c in columns_descr:\n column_names.append(c[0])\ninsert_prefix = 'insert into %s (%s) values ' % (table_name, ', '.join(column_names))\nrows = cursor.fetchall()\nfor row in rows:\n row_data = []\n for rd in row:\n if rd is None:\n row_data.append('NULL')\n elif isinstance(rd, datetime.datetime):\n row_data.append(\"'%s'\" % (rd.strftime('%Y-%m-%d %H:%M:%S') ))\n else:\n row_data.append(repr(rd))\n print('%s (%s);' % (insert_prefix, ', '.join(row_data)))\n\nIn psycopg2 there is even support for COPY. Look at: COPY-related methods on their docs\nIf you prefer using metadata then you can use my recipe: Dump PostgreSQL db schema to text. It is based on Extracting META information from PostgreSQL by Lorenzo Alberton\n", "You could use these queries (gotten by using \"psql --echo-hidden\" and \"\\d \") to get the base metadata:\n-- GET OID\nSET oid FROM pg_class WHERE relname = <YOUR_TABLE_NAME>\n\n-- GET METADATA\nSELECT a.attname,\n pg_catalog.format_type(a.atttypid, a.atttypmod),\n (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)\n FROM pg_catalog.pg_attrdef d\n WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),\n a.attnotnull, a.attnum\nFROM pg_catalog.pg_attribute a\nWHERE a.attrelid = <YOUR_TABLES_OID_FROM_PG_CLASS> AND a.attnum > 0 AND NOT a.attisdropped\nORDER BY a.attnum;\n\nThis gives you the name, data type, default, null flag and field order within the row. To get the actual data, your best bet is still CSV--the built in COPY table TO STDOUT WITH CSV HEADER is very robust. But if you are worried about encoding, be sure to get the value of server_encoding and client_encoding just before dumping the CSV data. That combined with the metadata from the above query should give enough information to properly interpret a CSV dump.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "postgresql", "python", "sql", "xml" ]
stackoverflow_0002770792_postgresql_python_sql_xml.txt
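The "COPY-related methods" pointer in the second answer, made concrete: psycopg2's copy_expert streams the same CSV the question gets by shelling out to psql. A hedged sketch; connection details and table names are placeholders:

import gzip
import psycopg2

conn = psycopg2.connect('dbname=remote_db')
cur = conn.cursor()
table_names = ['some_table']  # as in the question
for table_name in table_names:
    out = gzip.open('%s.gz' % table_name, 'wb')
    cur.copy_expert('COPY (SELECT * FROM %s) TO STDOUT WITH CSV HEADER' % table_name, out)
    out.close()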
Q: stop minidom converting < > to < > Im trying to output some data from my google app engine datastore to xml so that a flash file can read it, The problem is when using CDATA tags the outputted xml contains &lt; instead of < e.g <name>&lt;![CDATA][name]]&gt;</name> here is my python which outputs the xml: doc = Document() feed = doc.createElement("feed") doc.appendChild(feed) tags_element = doc.createElement("names") feed.appendChild(tags_element) copen = "<![CDATA][" cclose = "]]>" tags = db.GqlQuery("SELECT * FROM Tag ORDER BY date DESC") for tag in tags: tag_element = doc.createElement("name") tags_element.appendChild(tag_element) the_tag = doc.createTextNode("%s%s%s" % (copen,str(tag.thetag), cclose)) tag_element.appendChild(the_tag) self.response.headers["Content-Type"] = "application/xml" self.response.out.write(doc.toprettyxml(indent=" ")) i know this is an encoding issue just can't seem to get to the route of the problem, thanks in advance A: It seems the createCDATASection method works for me. for tag in tags: tag_element = doc.createCDATASection(tag.thetag) tags_element.appendChild(tag_element) A: To do what you are attempting, you need to actually add a CDATA-block using the appropriate minidom methods. It's not an encoding issue, per se, but when you use createTextNode it encodes XML control characters to actual text characters for you, in order to be helpful, no doubt. A: createTextNode converts reserved chars (< > &) into entities.
stop minidom converting < > to &lt; &gt;
I'm trying to output some data from my Google App Engine datastore to XML so that a Flash file can read it. The problem is that when using CDATA tags the outputted XML contains &lt; instead of <, e.g. <name>&lt;![CDATA][name]]&gt;</name> Here is my Python which outputs the XML: doc = Document() feed = doc.createElement("feed") doc.appendChild(feed) tags_element = doc.createElement("names") feed.appendChild(tags_element) copen = "<![CDATA][" cclose = "]]>" tags = db.GqlQuery("SELECT * FROM Tag ORDER BY date DESC") for tag in tags: tag_element = doc.createElement("name") tags_element.appendChild(tag_element) the_tag = doc.createTextNode("%s%s%s" % (copen,str(tag.thetag), cclose)) tag_element.appendChild(the_tag) self.response.headers["Content-Type"] = "application/xml" self.response.out.write(doc.toprettyxml(indent=" ")) I know this is an encoding issue but I just can't seem to get to the root of the problem. Thanks in advance.
[ "It seems the createCDATASection method works for me.\nfor tag in tags:\n tag_element = doc.createCDATASection(tag.thetag)\n tags_element.appendChild(tag_element)\n\n", "To do what you are attempting, you need to actually add a CDATA-block using the appropriate minidom methods. It's not an encoding issue, per se, but when you use createTextNode it encodes XML control characters to actual text characters for you, in order to be helpful, no doubt.\n", "createTextNode converts reserved chars (< > &) into entities.\n" ]
[ 8, 0, 0 ]
[]
[]
[ "google_app_engine", "minidom", "python", "xml" ]
stackoverflow_0002780506_google_app_engine_minidom_python_xml.txt
Q: Why does giving a unicode string to the str function throw an exception in Python? For example, the following: str(u'לשום') will throw an error. How can I prevent this? A: Calling str() on a unicode is the same as calling .encode(sys.getdefaultencoding()) on it. If the unicode contains characters that can't be encoded in the default encoding then it will throw a UnicodeEncodeError. The fix is to explicitly encode the unicode in a useful encoding, such as 'utf-8'. A: If you're running on Python 3, the u'' notation is a syntax error. Is this your problem? Because in Python <3, your code is absolutely correct, and since 'test' is plain ASCII there are no decoding issues.
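A small sketch of the distinction described in the first answer (Python 2; the coding declaration is needed for the Hebrew literal):

    # -*- coding: utf-8 -*-
    s = u'לשום'
    print s.encode('utf-8')       # explicit codec: works
    try:
        str(s)                    # implicit .encode(sys.getdefaultencoding())
    except UnicodeEncodeError, e:
        print 'str() failed:', e

With the usual ASCII default encoding, only the explicit encode succeeds.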
Why does giving a unicode string to the str function throw an exception in Python?
For example, the following: str(u'לשום') will throw an error. How can I prevent this?
[ "Calling str() on a unicode is the same as calling .encode(sys.getdefaultencoding()) on it. If the unicode contains characters that can't be encoded in the default encoding then it will throw a UnicodeEncodeError. The fix is to explicitly encode the unicode in a useful encoding, such as 'utf-8'.\n", "If you're running on Python 3, the u'' notation is a syntax error. Is this your problem? Because in Python <3, your code is absolutely correct, and since 'test' is plain ASCII there are no decoding issues.\n" ]
[ 7, 0 ]
[]
[]
[ "python", "string", "unicode" ]
stackoverflow_0002780413_python_string_unicode.txt
Q: Can Distribute (setuptools) be used to convert Python 2 packages to Python 3? Possible Duplicate: Can distribute setuptools be used to port packages implemented in python 2 to 3 Also, does the tool make it easy?
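As a sketch of how that looks in practice, Distribute exposes the 2to3 hook through a setup() flag; the package name here is a hypothetical placeholder:

    from setuptools import setup  # provided by Distribute

    setup(
        name='mypackage',         # hypothetical package
        version='0.1',
        packages=['mypackage'],
        use_2to3=True,            # run 2to3 at build/install time on Python 3
    )

On Python 2 the flag is a no-op; on Python 3 the sources are converted at build time, which is why the unit-test caveat in the answer below matters.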
Can Distribute (setuptools) be used to convert Python 2 packages to Python 3?
Possible Duplicate: Can distribute setuptools be used to port packages implemented in python 2 to 3 Also, does the tool make it easy?
[ "The only thing that Distribute does is that it calls the 2to3 script (supplied with Python 3) that converts a Python 2.x source code to Python 3 using some automatic transformations. Basically, you write your code using Python 2.x and let Distribute convert it to Python 3 when your package is installed on Python 3.\nThere are several things that Distribute won't do for you, though:\n\nIt won't check whether the conversion succeeded or not. You should have a fairly exhaustive set of unit tests to ensure that the behaviour of the converted package is correct as not all the transformations can be done automatically by 2to3, and some other transformations it will do might not make sense. Read this case study for more information about porting a real Python package to Python 3 and in particular this section about things not handled by 2to3.\nIt won't convert modules written using the C API of Python (see this question), you will have to convert these yourself.\n\n" ]
[ 2 ]
[]
[]
[ "events", "package", "python", "python_3.x" ]
stackoverflow_0002781001_events_package_python_python_3.x.txt
Q: Django 1.2 + South 0.7 + django-annoying's AutoOneToOneField leads to TypeError: 'LegacyConnection' object is not iterable I'm using Django 1.2 trunk with South 0.7 and an AutoOneToOneField copied from django-annoying. South complained that the field does not have rules defined and the new version of South no longer has an automatic field type parser. So I read the South documentation and wrote the following definition (basically an exact copy of the OneToOneField rules): rules = [ ( (AutoOneToOneField), [], { "to": ["rel.to", {}], "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}], "related_name": ["rel.related_name", {"default": None}], "db_index": ["db_index", {"default": True}], }, ) ] from south.modelsinspector import add_introspection_rules add_introspection_rules(rules, ["^myapp"]) Now South raises the following error when I do a schemamigration. Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "django/core/management/base.py", line 223, in execute output = self.handle(*args, **options) File "South-0.7-py2.6.egg/south/management/commands/schemamigration.py", line 92, in handle (k, v) for k, v in freezer.freeze_apps([migrations.app_label()]).items() File "South-0.7-py2.6.egg/south/creator/freezer.py", line 33, in freeze_apps model_defs[model_key(model)] = prep_for_freeze(model) File "South-0.7-py2.6.egg/south/creator/freezer.py", line 65, in prep_for_freeze fields = modelsinspector.get_model_fields(model, m2m=True) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 322, in get_model_fields args, kwargs = introspector(field) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 271, in introspector arg_defs, kwarg_defs = matching_details(field) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 187, in matching_details if any([isinstance(field, x) for x in classes]): TypeError: 'LegacyConnection' object is not iterable Is this related to a recent change in Django 1.2 trunk? How do I fix this? I use this field as follows: class Bar(models.Model): foo = AutoOneToOneField("foo.Foo", primary_key=True, related_name="bar") For reference the field code from django-tagging: class AutoSingleRelatedObjectDescriptor(SingleRelatedObjectDescriptor): def __get__(self, instance, instance_type=None): try: return super(AutoSingleRelatedObjectDescriptor, self).__get__(instance, instance_type) except self.related.model.DoesNotExist: obj = self.related.model(**{self.related.field.name: instance}) obj.save() return obj class AutoOneToOneField(OneToOneField): def contribute_to_related_class(self, cls, related): setattr(cls, related.get_accessor_name(), AutoSingleRelatedObjectDescriptor(related)) A: Try to change this line (AutoOneToOneField), To this: (AutoOneToOneField,), A tuple declared like you did is not iterable. A: Solved the problem by removing the rules and adding the following method to AutoOneToOneField: def south_field_triple(self): "Returns a suitable description of this field for South." from south.modelsinspector import introspector field_class = OneToOneField.__module__ + "." + OneToOneField.__name__ args, kwargs = introspector(self) return (field_class, args, kwargs) A: Your rule has a simple Python-related problem: in a tuple, you must add a trailing comma if there is only a single item inside. So change (AutoOneToOneField), to (AutoOneToOneField,), But to be honest, I didn't know that a method could be used inside the field instead of rules. I will apply your patch and submit it to the django-annoying repository.
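As a side note, the tuple bug called out in the first answer is a one-character fix; a trimmed sketch of the corrected rules entry (kwargs shortened for brevity):

    rules = [
        (
            (AutoOneToOneField,),        # trailing comma makes this a real tuple
            [],
            {"to": ["rel.to", {}]},
        )
    ]

Without the comma, (AutoOneToOneField) is just the class itself, which is what South then tries, and fails, to iterate.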
Django 1.2 + South 0.7 + django-annoying's AutoOneToOneField leads to TypeError: 'LegacyConnection' object is not iterable
I'm using Django 1.2 trunk with South 0.7 and an AutoOneToOneField copied from django-annoying. South complained that the field does not have rules defined and the new version of South no longer has an automatic field type parser. So I read the South documentation and wrote the following definition (basically an exact copy of the OneToOneField rules): rules = [ ( (AutoOneToOneField), [], { "to": ["rel.to", {}], "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}], "related_name": ["rel.related_name", {"default": None}], "db_index": ["db_index", {"default": True}], }, ) ] from south.modelsinspector import add_introspection_rules add_introspection_rules(rules, ["^myapp"]) Now South raises the following error when I do a schemamigration. Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "django/core/management/base.py", line 223, in execute output = self.handle(*args, **options) File "South-0.7-py2.6.egg/south/management/commands/schemamigration.py", line 92, in handle (k, v) for k, v in freezer.freeze_apps([migrations.app_label()]).items() File "South-0.7-py2.6.egg/south/creator/freezer.py", line 33, in freeze_apps model_defs[model_key(model)] = prep_for_freeze(model) File "South-0.7-py2.6.egg/south/creator/freezer.py", line 65, in prep_for_freeze fields = modelsinspector.get_model_fields(model, m2m=True) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 322, in get_model_fields args, kwargs = introspector(field) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 271, in introspector arg_defs, kwarg_defs = matching_details(field) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 187, in matching_details if any([isinstance(field, x) for x in classes]): TypeError: 'LegacyConnection' object is not iterable Is this related to a recent change in Django 1.2 trunk? How do I fix this? I use this field as follows: class Bar(models.Model): foo = AutoOneToOneField("foo.Foo", primary_key=True, related_name="bar") For reference the field code from django-tagging: class AutoSingleRelatedObjectDescriptor(SingleRelatedObjectDescriptor): def __get__(self, instance, instance_type=None): try: return super(AutoSingleRelatedObjectDescriptor, self).__get__(instance, instance_type) except self.related.model.DoesNotExist: obj = self.related.model(**{self.related.field.name: instance}) obj.save() return obj class AutoOneToOneField(OneToOneField): def contribute_to_related_class(self, cls, related): setattr(cls, related.get_accessor_name(), AutoSingleRelatedObjectDescriptor(related))
[ "Try to change this line\n(AutoOneToOneField),\n\nTo this:\n(AutoOneToOneField,),\n\nA tuple declared like you did, is not iterable.\n", "Solved the problem by removing the rules and adding the following method to AutoOneToOneField:\ndef south_field_triple(self):\n \"Returns a suitable description of this field for South.\"\n from south.modelsinspector import introspector\n field_class = OneToOneField.__module__ + \".\" + OneToOneField.__name__\n args, kwargs = introspector(self)\n return (field_class, args, kwargs)\n\n", "Your rule have simple python related problem.. In tuple, you must add comma if only single item inside.\nSo change (AutoOneToOneField), to (AutoOneToOneField,),\nBut to be honest, i didn't know that I can use method inside field instead of rules. I will apply your patch and submit to django-annoying repository.\n" ]
[ 5, 3, 1 ]
[]
[]
[ "django", "django_models", "django_south", "python" ]
stackoverflow_0002781210_django_django_models_django_south_python.txt
Q: cgi.FieldStorage always empty - never returns POSTed form Data This problem is probably embarrassingly simple. I'm trying to give python a spin. I thought a good way to start doing that would be to create a simple cgi script to process some form data and do some magic. My python script is executed properly by apache using mod_python, and will print out whatever I want it to print out. My only problem is that cgi.FieldStorage() is always empty. I've tried using both POST and GET. Each trial I fill out both form fields. <form action="pythonScript.py" method="POST" name="ARGH"> <input name="TaskName" type="text" /> <input name="TaskNumber" type="text" /> <input type="submit" /> </form> If I change the form to point to a perl script it reports the form data properly. The python page always gives me the same result: number of keys: 0 #!/usr/bin/python import cgi def index(req): pageContent = """<html><head><title>A page from""" pageContent += """Python</title></head><body>""" form = cgi.FieldStorage() keys = form.keys() keys.sort() pageContent += "<br />number of keys: "+str(len(keys)) for key in keys: pageContent += fieldStorage[ key ].value pageContent += """</body></html>""" return pageContent I'm using Python 2.5.2 and Apache/2.2.3. This is what's in my apache conf file (and my script is in /var/www/python): <Directory /var/www/python/> Options FollowSymLinks +ExecCGI Order allow,deny allow from all AddHandler mod_python .py PythonHandler mod_python.publisher </Directory> A: Your problem is that you're mixing two different approaches: CGI and mod_python. You declare your script as a mod_python publisher, which is why its index method gets called -- and which also makes it a module, not a script. If you were using CGI, you would remove the mod_python directives from your Apache configuration, just leave the ExecCGI, and either rename the script to have the .cgi extension or set the handler for the .py extension to be CGI, as well. Then your script would be executed as a script, which means the index function you defined in your script wouldn't be executed unless you called it from the toplevel of the script. As I recall -- but it's been a long time since I bothered with mod_python -- if you want to use mod_python instead, you should be using mod_python.util.FieldStorage instead of cgi.FieldStorage to access the POST data. All that said, a much better choice for bare-bones web stuff is WSGI, for example through mod_wsgi. And usually a better choice than bare-bones web stuff is using a web framework, like Django, TurboGears, Pylons, or one of the many others listed on, for example, the web frameworks page on wiki.python.org
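A hedged sketch of the plain-CGI route the answer describes (mod_python directives removed, ExecCGI kept): the form is read at the top level of the script rather than inside a publisher's index() function:

    #!/usr/bin/python
    import cgi

    form = cgi.FieldStorage()
    print "Content-Type: text/html"
    print
    print "<html><body>"
    print "<br />number of keys: %d" % len(form.keys())
    for key in sorted(form.keys()):
        print cgi.escape(form[key].value)
    print "</body></html>"

If you stay with mod_python.publisher instead, the equivalent would be mod_python.util.FieldStorage(req) inside index(req), as the answer notes.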
cgi.FieldStorage always empty - never returns POSTed form Data
This problem is probably embarrassingly simple. I'm trying to give python a spin. I thought a good way to start doing that would be to create a simple cgi script to process some form data and do some magic. My python script is executed properly by apache using mod_python, and will print out whatever I want it to print out. My only problem is that cgi.FieldStorage() is always empty. I've tried using both POST and GET. Each trial I fill out both form fields. <form action="pythonScript.py" method="POST" name="ARGH"> <input name="TaskName" type="text" /> <input name="TaskNumber" type="text" /> <input type="submit" /> </form> If I change the form to point to a perl script it reports the form data properly. The python page always gives me the same result: number of keys: 0 #!/usr/bin/python import cgi def index(req): pageContent = """<html><head><title>A page from""" pageContent += """Python</title></head><body>""" form = cgi.FieldStorage() keys = form.keys() keys.sort() pageContent += "<br />number of keys: "+str(len(keys)) for key in keys: pageContent += fieldStorage[ key ].value pageContent += """</body></html>""" return pageContent I'm using Python 2.5.2 and Apache/2.2.3. This is what's in my apache conf file (and my script is in /var/www/python): <Directory /var/www/python/> Options FollowSymLinks +ExecCGI Order allow,deny allow from all AddHandler mod_python .py PythonHandler mod_python.publisher </Directory>
[ "Your problem is that you're mixing two different approaches: CGI and mod_python. You declare your script as a mod_python publisher, which is why its index method gets called -- and which also makes it a module, not a script.\nIf you were using CGI, you would remove the mod_python directives from your Apache configuration, just leave the ExecCGI, and either rename the script to have the .cgi extension or set the handler for the .py extension to be CGI, as well. Then your script would be executed as a script, which means the index function you defined in your script wouldn't be executed unless you called it from the toplevel of the script.\nAs I recall -- but it's been a long time since I bothered with mod_python -- if you want to use mod_python instead, you should be using mod_python.util.FieldStorage instead of cgi.FieldStorage to access the POST data.\nAll that said, a much better choice for bare-bones web stuff is WSGI, for example through mod_wsgi. And usually a better choice than bare-bones web stuff is using a web framework, like Django, TurboGears, Pylons, or one of the many others listed on, for example, the web frameworks page on wiki.python.org\n" ]
[ 6 ]
[]
[]
[ "apache", "cgi", "mod_python", "python" ]
stackoverflow_0002781493_apache_cgi_mod_python_python.txt
Q: Informational messages in python unit testing I'm using Python's unittest module for unit testing. I'd like to be able to report informational messages as part of the unit test output—other than pass/fail status. Specifically in my case, I want to report whether the module under test is using the pure Python implementation or the C extension. Is there a mechanism in unittest to output informational messages as part of the test report? Can it be done in alternative Python unit test frameworks? A: Yes, you can use nosetests, which has a plugin system which allows this. For example the TestResult api allows to provide extended reporting: Provides a TextTestResult that extends unittest’s _TextTestResult to provide support for error classes (such as the builtin skip and deprecated classes), and hooks for plugins to take over or extend reporting. A: I wasn't aware of nosetests and it might make sense to use that plugin for a variety of reasons. I would use the logging module myself.
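A minimal sketch of the logging approach from the second answer; the module and extension names are hypothetical placeholders:

    import logging
    import unittest

    log = logging.getLogger('tests')

    try:
        import _mymodule as impl      # hypothetical C extension
        IMPL = 'C extension'
    except ImportError:
        import mymodule as impl       # hypothetical pure-Python fallback
        IMPL = 'pure Python'

    class ImplTest(unittest.TestCase):
        def test_reports_implementation(self):
            # An informational line alongside the pass/fail output
            log.info('module under test: %s implementation', IMPL)
            self.assertTrue(impl is not None)

    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        unittest.main()

The INFO line is printed with the test run but does not affect the result.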
Informational messages in python unit testing
I'm using Python's unittest module for unit testing. I'd like to be able to report informational messages as part of the unit test output—other than pass/fail status. Specifically in my case, I want to report whether the module under test is using the pure Python implementation or the C extension. Is there a mechanism in unittest to output informational messages as part of the test report? Can it be done in alternative Python unit test frameworks?
[ "Yes, you can use nosetests, which has a plugin system which allows this. For example the TestResult api allows to provide extended reporting:\n\nProvides a TextTestResult that extends unittest’s _TextTestResult to provide support for error classes (such as the builtin skip and deprecated classes), and hooks for plugins to take over or extend reporting.\n\n", "I wasn't aware of nosetests and it might make sense to use that plugin for a variety of reasons. I would use the logging module myself.\n" ]
[ 3, 1 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0002781186_python_unit_testing.txt
Q: Two Dimensional Python Array: Sort second by First I have a Multidimensional array in Python. How do I go about sorting the second array, by the first - all the while keeping it in the same order? A: I'm not sure from your answer if this is what you want, but take a look and see. If I have a multidimensional array x: >>> x = [[100,50,39,69,22,23,19,80,94,72],range(10)] >>> print x [[100, 50, 39, 69, 22, 23, 19, 80, 94, 72], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]] and I want to sort the second subarray by the first subarray, I could do the following: >>> x[1].sort(key = x[0].__getitem__) >>> print x [[100, 50, 39, 69, 22, 23, 19, 80, 94, 72], [6, 4, 5, 2, 1, 3, 9, 7, 8, 0]] Is that what you're looking for?
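If the intent is instead to reorder both sublists together (pairs kept aligned, sorted by the first list's values), a zip-based variant also works; note this differs from the answer below, which leaves the first sublist untouched:

    x = [[100, 50, 39, 69], [0, 1, 2, 3]]
    pairs = sorted(zip(x[0], x[1]))              # sort pairs by first element
    x[0], x[1] = [list(t) for t in zip(*pairs)]
    print x   # [[39, 50, 69, 100], [2, 1, 3, 0]]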
Two Dimensional Python Array: Sort second by First
I have a Multidimensional array in Python. How do I go about sorting the second array, by the first - all the while keeping it in the same order?
[ "I'm not sure from your answer if this is what you want, but take a look and see. If I have a multidimensional array x:\n>>> x = [[100,50,39,69,22,23,19,80,94,72],range(10)]\n>>> print x\n[[100, 50, 39, 69, 22, 23, 19, 80, 94, 72], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]\n\nand I want to sort the second subarray by the first subarray, I could do the following:\n>>> x[1].sort(key = x[0].__getitem__)\n>>> print x\n[[100, 50, 39, 69, 22, 23, 19, 80, 94, 72], [6, 4, 5, 2, 1, 3, 9, 7, 8, 0]]\n\nIs that what you're looking for?\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0002781622_python.txt
Q: Why does Python output a string and a unicode of the same value differently? I'm using Python 2.6.5 and when I run the following in the Python shell, I get: >>> print u'Andr\xc3\xa9' André >>> print 'Andr\xc3\xa9' André >>> What's the explanation for the above? Given u'Andr\xc3\xa9', how can I display the above value properly in an html page so that it shows André instead of André? A: '\xc3\xa9' is the UTF-8 encoding of the unicode character u'\u00e9' (which can also be specified as u'\xe9'). So you can use u'Andr\u00e9' or u'Andr\xe9'. You can convert from one to the other: >>> 'Andr\xc3\xa9'.decode('utf-8') u'Andr\xe9' >>> u'Andr\xe9'.encode('utf-8') 'Andr\xc3\xa9' Note that the reason print 'Andr\xc3\xa9' gave you the expected result is only because your system's default encoding is UTF-8. For example, on Windows I get: >>> print 'Andr\xc3\xa9' Andr├⌐ As for outputting HTML, it depends on which web framework you use and what encoding you output in the HTML page. Some frameworks (e.g. Django) will convert unicode values to the correct encoding automatically, while others will require you to do so manually. A: Try this: >>> unicode('Andr\xc3\xa9', 'utf-8') u'Andr\xe9' >>> print u'Andr\xe9' André That may answer your question. EDIT: or see the above answer
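For the HTML case, a small sketch of the decode-once/encode-once discipline, assuming the page is served with charset=utf-8:

    raw = 'Andr\xc3\xa9'             # UTF-8 bytes
    name = raw.decode('utf-8')       # u'Andr\xe9', a proper unicode object
    page = u'<p>%s</p>' % name
    print page.encode('utf-8')       # encode exactly once, at the output boundary

Encoding an already-UTF-8 byte string a second time is what produces the André mojibake in the question.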
Why does Python output a string and a unicode of the same value differently?
I'm using Python 2.6.5 and when I run the following in the Python shell, I get: >>> print u'Andr\xc3\xa9' André >>> print 'Andr\xc3\xa9' André >>> What's the explanation for the above? Given u'Andr\xc3\xa9', how can I display the above value properly in an html page so that it shows André instead of André?
[ "'\\xc3\\xa9' is the UTF-8 encoding of the unicode character u'\\u00e9' (which can also be specified as u'\\xe9'). So you can use u'Andr\\u00e9' or u'Andr\\xe9'.\nYou can convert from one to the other:\n>>> 'Andr\\xc3\\xa9'.decode('utf-8')\nu'Andr\\xe9'\n>>> u'Andr\\xe9'.encode('utf-8')\n'Andr\\xc3\\xa9'\n\nNote that the reason print 'Andr\\xc3\\xa9' gave you the expected result is only because your system's default encoding is UTF-8. For example, on Windows I get:\n>>> print 'Andr\\xc3\\xa9'\nAndr├⌐\n\nAs for outputting HTML, it depends on which web framework you use and what encoding you output in the HTML page. Some frameworks (e.g. Django) will convert unicode values to the correct encoding automatically, while others will require you to do so manually.\n", "Try this:\n>>> unicode('Andr\\xc3\\xa9', 'utf-8')\nu'Andr\\xe9'\n>>> print u'Andr\\xe9'\nAndré\n\nThat may answer your question.\nEDIT: or see the above answer\n" ]
[ 11, 1 ]
[ "I am not sure, but I would guess that different codecs are applied by the print operation. Probably some utf-8 vs. unicode issue. \nFor HTML, you would need to encode certain characters using the HTML syntax for unicode.\nI think that the Python codecs module might be able to help you.\n" ]
[ -2 ]
[ "python", "unicode" ]
stackoverflow_0002782085_python_unicode.txt
Q: Remove n characters from the start of a string I want to remove the first characters from a string. Is there a function that works like this? >>> a = "BarackObama" >>> print myfunction(4,a) ckObama >>> b = "The world is mine" >>> print myfunction(6,b) rld is mine A: Yes, just use slices: >>> a = "BarackObama" >>> a[4:] 'ckObama' Documentation is here http://docs.python.org/tutorial/introduction.html#strings A: The function could be: def cutit(s,n): return s[n:] and then you call it like this: name = "MyFullName" print cutit(name, 2) # prints "FullName" A: Use slicing. >>> a = "BarackObama" >>> a[4:] 'ckObama' >>> b = "The world is mine" >>> b[6:10] 'rld ' >>> b[:9] 'The world' >>> b[:3] 'The' >>> b[:-3] 'The world is m' You can read about this and most other language features in the official tutorial: http://docs.python.org/tut/ A: a = 'BarackObama' a[4:] # ckObama b = 'The world is mine' b[6:] # rld is mine
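Wrapping the slice in a function with the exact signature the question asks for:

    def myfunction(n, s):
        """Return s with its first n characters removed."""
        return s[n:]

    print myfunction(4, "BarackObama")         # ckObama
    print myfunction(6, "The world is mine")   # rld is mine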
Remove n characters from the start of a string
I want to remove the first characters from a string. Is there a function that works like this? >>> a = "BarackObama" >>> print myfunction(4,a) ckObama >>> b = "The world is mine" >>> print myfunction(6,b) rld is mine
[ "Yes, just use slices:\n >> a = \"BarackObama\"\n >> a[4:]\n 'ckObama'\n\nDocumentation is here http://docs.python.org/tutorial/introduction.html#strings\n", "The function could be:\ndef cutit(s,n): \n return s[n:]\n\nand then you call it like this:\nname = \"MyFullName\"\n\nprint cutit(name, 2) # prints \"FullName\"\n\n", "Use slicing.\n>>> a = \"BarackObama\"\n>>> a[4:]\n'ckObama'\n>>> b = \"The world is mine\"\n>>> b[6:10]\n'rld '\n>>> b[:9]\n'The world'\n>>> b[:3]\n'The'\n>>>b[:-3]\n'The world is m'\n\nYou can read about this and most other language features in the official tutorial: http://docs.python.org/tut/\n", "a = 'BarackObama'\na[4:] # ckObama\nb = 'The world is mine'\nb[6:] # rld is mine\n\n" ]
[ 18, 13, 8, 4 ]
[]
[]
[ "python", "string" ]
stackoverflow_0002782318_python_string.txt
Q: libxml2 install error: command 'gcc' failed with exit status 1? What are the dependencies of libxml2? I am trying to install libxml2 on Ubuntu 9.10 and getting errors: $ sudo python setup.py develop It's a very lengthy error message, but the last error is Setup script exited with error: Command 'gcc' failed with exit status 1. Can anybody tell me why I am getting this error? What are the dependencies or libraries required to install this? Scenario: I am trying to set up the reddit.com clone, and when I run the develop command, it generates the huge error stated above. A: I encountered the same problem today on CentOS 5.4. If you experience such an error on this system (and on RHEL and probably Fedora) you have to install libxml2-devel and/or libxslt-devel. P.S. I know it's not an answer for this question in general, however it may be helpful for someone, so I decided to write it down here. A: What is wrong with $ sudo apt-get install libxml2-dev or the graphical equivalents? A: As Dirk says, you can install the libxml2 package with apt. If you are building from source for some reason a good starting point is this command $ sudo apt-get build-dep libxml2
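Once the -dev packages are in place and the build succeeds, a quick sanity check that the binding compiled and sees the libraries it was built against (attribute names as documented by lxml):

    import lxml.etree
    print lxml.etree.LXML_VERSION     # version tuple of the binding
    print lxml.etree.LIBXML_VERSION   # the libxml2 it was compiled against

If the compile itself still fails with the errors quoted below, the headers are missing or mismatched, and the -dev/-devel packages above are the usual fix.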
libxml2 install error: command 'gcc' failed with exit status 1? What are the dependencies of libxml2?
I am trying to install libxml2 on Ubuntu 9.10 and getting errors: $ sudo python setup.py develop It's a very lengthy error message, but the last error is Setup script exited with error: Command 'gcc' failed with exit status 1. Can anybody tell me why I am getting this error? What are the dependencies or libraries required to install this? Scenario: I am trying to set up the reddit.com clone, and when I run the develop command, it generates the huge error stated above.
[ "I encountered the same problem today on Centos 5.4. If you experience such an error on this system (and on RHEL and probably Fedora) you have to install libxml2-devel and/or libxslt-devel.\nP.S. I know it's not an answer for this question in general however it maybe helpful for someone so I decided to write down it here.\n", "What is wrong with\n$ sudo apt-get install libxml2-dev \n\nor the graphical equivalents?\n", "As Dirk says, you can install the libxml2 package with apt.\nIf you are building from source for some reason a good starting point is this command\n$ sudo apt-get build-dep libxml2\n\n" ]
[ 3, 1, 0 ]
[ "This is the entire error:\nsrc/lxml/lxml.etree.c:134800: error: ‘struct __pyx_obj_4lxml_5etree__BaseContext’ has no member named ‘_namespaces’\nsrc/lxml/lxml.etree.c:134802: error: ‘struct __pyx_obj_4lxml_5etree__BaseContext’ has no member named ‘_utf_refs’\nsrc/lxml/lxml.etree.c:134807: error: ‘struct __pyx_obj_4lxml_5etree__BaseContext’ has no member named ‘_exc’\n[... the same ‘has no member named’ error repeats for hundreds of lines, for _doc, _extensions, _global_namespaces, _function_cache, _eval_context_dict, _temp_refs and _temp_documents, and again for the _XPathEvaluatorBase, XPath, _XSLTResolverContext, _XSLTContext and XSLT types ...]\nsrc/lxml/lxml.etree.c:139524: error: ‘tagMatches’ undeclared (first use in this function)\nsrc/lxml/lxml.etree.c:141405: error: ‘xmlParserVersion’ undeclared (first use in this function)\nsrc/lxml/lxml.etree.c:141555: error: ‘LIBXML_VERSION’ undeclared (first use in this function)\nsrc/lxml/lxml.etree.c:142383: error: ‘XML_PARSE_NOENT’ undeclared (first use in this function)\nsrc/lxml/lxml.etree.c:142643: error: ‘XML_PARSE_RECOVER’ undeclared (first use in this function)\nsrc/lxml/lxml.etree.c:143135: error\n" ]
[ -1 ]
[ "python", "ubuntu" ]
stackoverflow_0002330062_python_ubuntu.txt
Q: Datastore query outputting for Django form instance I'm using google appengine and Django. I'm using the djangoforms module and wanted to specify the form instance with the information that comes from the query below. userquery = db.GqlQuery("SELECT * FROM User WHERE googleaccount = :1", users.get_current_user()) form = forms.AccountForm(data=request.POST or None,instance=?????) I've found a snippet in a sample app that does this trick, but I can't modify it to work with the query I need. gift = User.get(db.Key.from_path(User.kind(), int(gift_id))) if gift is None: return http.HttpResponseNotFound('No gift exists with that key (%r)' % gift_id) form = RegisterForm(data=request.POST or None, instance=gift) Could anyone help me? A: If you know the userquery will only have one User object in it (or if you only care about the first one if there are duplicates), you can modify your code like so: userquery = db.GqlQuery("SELECT * FROM User WHERE googleaccount = :1", users.get_current_user()) user = userquery.get() # Gets the first User instance from the query, or None form = forms.AccountForm(data=request.POST or None, instance=user)
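One caveat left implicit here: get() returns None when no matching User exists, and a form built with instance=None will typically create a brand-new entity on save. A minimal guard, sketched against the same names used in the question (this guard is not part of the original answer):
from google.appengine.ext import db

user = db.GqlQuery("SELECT * FROM User WHERE googleaccount = :1",
                   users.get_current_user()).get()
if user is None:
    # No existing record: bail out rather than silently creating one
    return http.HttpResponseNotFound('No User record for this account')
form = forms.AccountForm(data=request.POST or None, instance=user)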
Datastore query outputting for Django form instance
I'm using google appengine and Django. I'm using the djangoforms module and wanted to specify the form instance with the information that comes from the query below. userquery = db.GqlQuery("SELECT * FROM User WHERE googleaccount = :1", users.get_current_user()) form = forms.AccountForm(data=request.POST or None,instance=?????) I've found a snippet in a sample app that does this trick, but I can't modify it to work with the query I need. gift = User.get(db.Key.from_path(User.kind(), int(gift_id))) if gift is None: return http.HttpResponseNotFound('No gift exists with that key (%r)' % gift_id) form = RegisterForm(data=request.POST or None, instance=gift) Could anyone help me?
[ "If you know the userquery will only have one User object in it (or if you only care about the first one if there are duplicates), you can modify your code like so:\nuserquery = db.GqlQuery(\"SELECT * FROM User WHERE googleaccount = :1\", users.get_current_user())\nuser = userquery.get() # Gets the first User instance from the query, or None\nform = forms.AccountForm(data=request.POST or None, instance=user)\n\n" ]
[ 2 ]
[]
[]
[ "django", "forms", "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0002782137_django_forms_google_app_engine_google_cloud_datastore_python.txt
Q: name of the class that contains the method code I'm trying to find the name of the class that contains method code. In the example underneath I use self.__class__.__name__, but of course this returns the name of the class of which self is an instance and not class that contains the test() method code. b.test() will print 'B' while I would like to get 'A'. I looked into the inspect module documentation but did not find anything directly useful. class A: def __init__(self): pass def test(self): print self.__class__.__name__ class B(A): def __init__(self): A.__init__(self) a = A() b = B() a.test() b.test() A: In Python 3.x, you can simply use __class__.__name__. The __class__ name is mildly magic, and not the same thing as the __class__ attribute of self. In Python 2.x, there is no good way to get at that information. You can use stack inspection to get the code object, then walk the class hierarchy looking for the right method, but it's slow and tedious and will probably break when you don't want it to. You can also use a metaclass or a class decorator to post-process the class in some way, but both of those are rather intrusive approaches. And you can do something really ugly, like accessing self.__nonexistant_attribute, catching the AttributeError and extracting the class name from the mangled name. None of those approaches are really worth it if you just want to avoid typing the name twice; at least forgetting to update the name can be made a little more obvious by doing something like: class C: ... def report_name(self): print C.__name__ A: inspect.getmro gives you a tuple of the classes where the method might come from, in order. As soon as you find one of them that has the method's name in its dict, you're done: for c in inspect.getmro(self.__class__): if 'test' in vars(c): break return c.__name__ A: Use __dict__ of class object itself: class A(object): def foo(self): pass class B(A): pass def find_decl_class(cls, method): if method in cls.__dict__: return cls for b in cls.__bases__: decl = find_decl_class(b, method) if decl: return decl print 'foo' in A.__dict__ print 'foo' in B.__dict__ print find_decl_class(B, 'foo').__name__ Will print True, False, A A: You can use (abuse?) private name mangling to accomplish this effect. If you look up an attribute on self that starts with __ from inside a method, python changes the name from __attribute to _classThisMethodWasDefinedIn__attribute. Just somehow stash the classname you want in mangled-form where the method can see it. As an example, we can define a __new__ method on the base class that does it: def mangle(cls, attrname): if not attrname.startswith('__'): raise ValueError('attrname must start with __') return '_%s%s' % (cls.__name__, attrname) class A(object): def __new__(cls, *args, **kwargs): obj = object.__new__(cls) for c in cls.mro(): setattr(obj, mangle(c, '__defn_classname'), c.__name__) return obj def __init__(self): pass def test(self): print self.__defn_classname class B(A): def __init__(self): A.__init__(self) a = A() b = B() a.test() b.test() which prints: A A A: You can do >>> class A(object): ... def __init__(self): ... pass ... def test(self): ... for b in self.__class__.__bases__: ... if hasattr(b, 'test'): ... return b.__name__ ... return self.__class__.__name__ ... >>> class B(A): ... def __init__(self): ... A.__init__(self) ... >>> B().test() 'A' >>> A().test() 'A' >>> Keep in mind that you could simplify it by using __class__.__base__, but if you use multiple inheritance, this version will work better. 
It simply checks first on its baseclasses for test. It's not the prettiest, but it works.
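On Python 3.3 and later there is a simpler route than stack inspection: a function's __qualname__ records the class it was defined in. A minimal sketch (it names the class explicitly, the same caveat as the report_name example above):
class A:
    def test(self):
        # A.test is a plain function in Python 3; its __qualname__ is 'A.test'
        print(A.test.__qualname__.rsplit('.', 1)[0])

class B(A):
    pass

A().test()  # prints 'A'
B().test()  # prints 'A' -- the defining class, not the instance's class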
name of the class that contains the method code
I'm trying to find the name of the class that contains method code. In the example underneath I use self.__class__.__name__, but of course this returns the name of the class of which self is an instance and not class that contains the test() method code. b.test() will print 'B' while I would like to get 'A'. I looked into the inspect module documentation but did not find anything directly useful. class A: def __init__(self): pass def test(self): print self.__class__.__name__ class B(A): def __init__(self): A.__init__(self) a = A() b = B() a.test() b.test()
[ "In Python 3.x, you can simply use __class__.__name__. The __class__ name is mildly magic, and not the same thing as the __class__ attribute of self.\nIn Python 2.x, there is no good way to get at that information. You can use stack inspection to get the code object, then walk the class hierarchy looking for the right method, but it's slow and tedious and will probably break when you don't want it to. You can also use a metaclass or a class decorator to post-process the class in some way, but both of those are rather intrusive approaches. And you can do something really ugly, like accessing self.__nonexistant_attribute, catching the AttributeError and extracting the class name from the mangled name. None of those approaches are really worth it if you just want to avoid typing the name twice; at least forgetting to update the name can be made a little more obvious by doing something like:\nclass C:\n ...\n def report_name(self):\n print C.__name__\n\n", "inspect.getmro gives you a tuple of the classes where the method might come from, in order. As soon as you find one of them that has the method's name in its dict, you're done:\nfor c in inspect.getmro(self.__class__):\n if 'test' in vars(c): break\nreturn c.__name__\n\n", "Use __dict__ of class object itself:\nclass A(object):\n def foo(self):\n pass\n\nclass B(A):\n pass\n\ndef find_decl_class(cls, method):\n if method in cls.__dict__:\n return cls\n for b in cls.__bases__:\n decl = find_decl_class(b, method)\n if decl:\n return decl\n\nprint 'foo' in A.__dict__\nprint 'foo' in B.__dict__\nprint find_decl_class(B, 'foo').__name__\n\nWill print True, False, A\n", "You can use (abuse?) private name mangling to accomplish this effect. If you look up an attribute on self that starts with __ from inside a method, python changes the name from __attribute to _classThisMethodWasDefinedIn__attribute. \nJust somehow stash the classname you want in mangled-form where the method can see it. As an example, we can define a __new__ method on the base class that does it:\ndef mangle(cls, attrname):\n if not attrname.startswith('__'):\n raise ValueError('attrname must start with __')\n return '_%s%s' % (cls.__name__, attrname)\n\nclass A(object):\n\n def __new__(cls, *args, **kwargs):\n obj = object.__new__(cls)\n for c in cls.mro():\n setattr(obj, mangle(c, '__defn_classname'), c.__name__)\n return obj\n\n def __init__(self):\n pass\n\n def test(self):\n print self.__defn_classname\n\nclass B(A):\n\n def __init__(self):\n A.__init__(self)\n\n\n\na = A()\nb = B()\n\na.test()\nb.test()\n\nwhich prints:\nA\nA\n\n", "You can do \n>>> class A(object):\n... def __init__(self):\n... pass\n... def test(self):\n... for b in self.__class__.__bases__:\n... if hasattr(b, 'test'):\n... return b.__name__\n... return self.__class__.__name__\n... \n>>> class B(A):\n... def __init__(self):\n... A.__init__(self)\n...\n>>> B().test()\n'A'\n>>> A().test()\n'A'\n>>> \n\nKeep in mind that you could simplify it by using __class__.__base__, but if you use multiple inheritance, this version will work better.\nIt simply checks first on its baseclasses for test. It's not the prettiest, but it works.\n" ]
[ 7, 4, 2, 2, 0 ]
[]
[]
[ "introspection", "python" ]
stackoverflow_0002781701_introspection_python.txt
Q: Python: Hack to call a method on an object that isn't of its class Assume you define a class, which has a method which does some complicated processing: class A(object): def my_method(self): # Some complicated processing is done here return self And now you want to use that method on some object from another class entirely. Like, you want to do A.my_method(7). This is what you'd get: TypeError: unbound method my_method() must be called with A instance as first argument (got int instance instead). Now, is there any possibility to hack things so you could call that method on 7? I'd want to avoid moving the function or rewriting it. (Note that the method's logic does depend on self.) One note: I know that some people will want to say, "You're doing it wrong! You're abusing Python! You shouldn't do it!" So yes, I know, this is a terrible terrible thing I want to do. I'm asking if someone knows how to do it, not how to preach to me that I shouldn't do it. A: Of course I wouldn't recommend doing this in real code, but yes, sure, you can reach inside of classes and use its methods as functions: class A(object): def my_method(self): # Some complicated processing is done here return 'Hi' print(A.__dict__['my_method'](7)) # Hi A: You can't. The restriction has actually been lifted in Python 3000, but I presume you are not using that. However, why can't you do something like: def method_implementation(self, x,y): # do whatever class A(): def method(self, x, y): return method_implementation(self, x, y) If you are really in the mood for python abuse, write a descriptor class that implements the behavior. Something like class Hack: def __init__(self, fn): self.fn = fn def __get__(self, obj, cls): if obj is None: # called staticly return self.fn else: def inner(*args, **kwargs): return self.fn(obj, *args, **kwargs) return inner Note that this is completely untested, will probably break some corner cases, and is all around evil. A: def some_method(self): # Some complicated processing is done here return self class A(object): my_method = some_method a = A() print some_method print a.my_method print A.my_method print A.my_method.im_func print A.__dict__['my_method'] prints: <function some_method at 0x719f0> <bound method A.some_method of <__main__.A object at 0x757b0>> <unbound method A.some_method> <function some_method at 0x719f0> <function some_method at 0x719f0> It sounds like you're looking up a method on a class and getting an unbound method. An unbound method expects a object of the appropriate type as the first argument. If you want to apply the function as a function, you've got to get a handle to the function version of it instead. A: That's what's called a staticmethod: class A(object): @staticmethod def my_method(a, b, c): return a, b, c However in staticmethods, you do not get a reference to self. If you'd like a reference to the class not the instance (instance implies reference to self), you can use a classmethod: class A(object): classvar = "var" @classmethod def my_method(cls, a, b, c): print cls.classvar return a, b, c But you'll only get access to class variables, not to instance variables (those typically created/defined inside the __init__ constructor). 
If that's not good enough, then you will need to somehow pass a "bound" method or pass "self" into the method like so: class A(object): def my_method(self): # use self and manipulate the object inst = A() A.my_method(inst) As some people have already said, it's not a bad idea to just inherit one class from the other: class A(object): ... methods ... class B(A): def my_method(self): ... use self newA = B() A: You could just put that method into a superclass of the two objects that need to call it, couldn't you? If it's so critical that you can't copy it, nor can you change it to not use self, that's the only other option I can see. A: >>> class A(): ... me = 'i am A' ... >>> class B(): ... me = 'i am B' ... >>> def get_name(self): ... print self.me ... >>> A.name = get_name >>> a=A() >>> a.name() i am A >>> >>> B.name = get_name >>> b=B() >>> b.name() i am B >>> A: Why can't you do this class A(object): def my_method(self,arg=None): if (arg!=None): #Do Some Complicated Processing with both objects and return something else: # Some complicated processing is done here return self A: In Python functions are not required to be enclosed in classes. It sounds like what you need is a utility function, so just define it as such: def my_function(object): # Some complicated processing is done here return object my_function(7) my_function("Seven") As long as your processing is using methods and attributes available on all objects that you pass to my_function through the magic of duck typing everything will work fine.
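A closing note on versions: on Python 3 the TypeError from the question cannot occur at all, because unbound methods were removed; looking the method up on the class yields a plain function that will accept any first argument. A minimal demonstration of the mechanics (not an endorsement of the hack):
class A:
    def my_method(self):
        # 'self' is simply whatever was passed in -- here, the int 7
        return self

print(A.my_method(7))  # prints 7 on Python 3, no TypeError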
Python: Hack to call a method on an object that isn't of its class
Assume you define a class, which has a method which does some complicated processing: class A(object): def my_method(self): # Some complicated processing is done here return self And now you want to use that method on some object from another class entirely. Like, you want to do A.my_method(7). This is what you'd get: TypeError: unbound method my_method() must be called with A instance as first argument (got int instance instead). Now, is there any possibility to hack things so you could call that method on 7? I'd want to avoid moving the function or rewriting it. (Note that the method's logic does depend on self.) One note: I know that some people will want to say, "You're doing it wrong! You're abusing Python! You shouldn't do it!" So yes, I know, this is a terrible terrible thing I want to do. I'm asking if someone knows how to do it, not how to preach to me that I shouldn't do it.
[ "Of course I wouldn't recommend doing this in real code, but yes, sure, you can reach inside of classes and use its methods as functions:\nclass A(object):\n def my_method(self):\n # Some complicated processing is done here\n return 'Hi'\n\nprint(A.__dict__['my_method'](7))\n# Hi\n\n", "You can't. The restriction has actually been lifted in Python 3000, but I presume you are not using that.\nHowever, why can't you do something like:\ndef method_implementation(self, x,y):\n # do whatever\n\nclass A():\n def method(self, x, y):\n return method_implementation(self, x, y)\n\nIf you are really in the mood for python abuse, write a descriptor class that implements the behavior. Something like\nclass Hack:\n def __init__(self, fn):\n self.fn = fn\n def __get__(self, obj, cls):\n if obj is None: # called staticly\n return self.fn\n else:\n def inner(*args, **kwargs):\n return self.fn(obj, *args, **kwargs)\n return inner\n\nNote that this is completely untested, will probably break some corner cases, and is all around evil.\n", "def some_method(self):\n # Some complicated processing is done here\n return self\n\nclass A(object):\n my_method = some_method\na = A()\n\nprint some_method\nprint a.my_method\nprint A.my_method\nprint A.my_method.im_func\nprint A.__dict__['my_method']\n\nprints:\n<function some_method at 0x719f0>\n<bound method A.some_method of <__main__.A object at 0x757b0>>\n<unbound method A.some_method>\n<function some_method at 0x719f0>\n<function some_method at 0x719f0>\n\nIt sounds like you're looking up a method on a class and getting an unbound method. An unbound method expects a object of the appropriate type as the first argument.\nIf you want to apply the function as a function, you've got to get a handle to the function version of it instead.\n", "That's what's called a staticmethod:\nclass A(object):\n @staticmethod\n def my_method(a, b, c):\n return a, b, c\n\nHowever in staticmethods, you do not get a reference to self.\nIf you'd like a reference to the class not the instance (instance implies reference to self), you can use a classmethod:\nclass A(object):\n classvar = \"var\"\n\n @classmethod\n def my_method(cls, a, b, c):\n print cls.classvar\n return a, b, c\n\nBut you'll only get access to class variables, not to instance variables (those typically created/defined inside the __init__ constructor).\nIf that's not good enough, then you will need to somehow pass a \"bound\" method or pass \"self\" into the method like so:\nclass A(object):\n def my_method(self):\n # use self and manipulate the object\n\ninst = A()\nA.my_method(inst)\n\nAs some people have already said, it's not a bad idea to just inherit one class from the other:\nclass A(object):\n ... methods ...\n\nclass B(A):\n def my_method(self):\n ... use self\n\nnewA = B()\n\n", "You could just put that method into a superclass of the two objects that need to call it, couldn't you? If its so critical that you can't copy it, nor can you change it to not use self, thats the only other option I can see. \n", ">>> class A():\n... me = 'i am A'\n... \n>>> class B():\n... me = 'i am B'\n... \n>>> def get_name(self):\n... print self.me\n... 
\n>>> A.name = get_name\n>>> a=A()\n>>> a.name()\ni am A\n>>> \n>>> B.name = get_name\n>>> b=B()\n>>> b.name()\ni am B\n>>> \n\n", "Why cant you do this\nclass A(object):\n def my_method(self,arg=None):\n if (arg!=None):\n #Do Some Complicated Processing with both objects and return something \n else:\n # Some complicated processing is done here\n return self\n\n", "In Python functions are not required to be enclosed in classes. It sounds like what you need is utility function, so just define it as such:\ndef my_function(object):\n # Some complicated processing is done here\n return object\n\nmy_function(7)\nmy_function(\"Seven\")\n\nAs long as your processing is using methods and attribute available on all objects that you pass to my_function through the magic of duck typing everything will work fine.\n" ]
[ 6, 2, 2, 0, 0, 0, 0, 0 ]
[]
[]
[ "class", "methods", "object", "python" ]
stackoverflow_0002782516_class_methods_object_python.txt
Q: Mutable global variables don't get hidden in Python functions, right? Please see the following code: def good(): foo[0] = 9 # why this foo isn't local variable who hides the global one def bad(): foo = [9, 2, 3] # foo is local, who hides the global one for func in [good, bad]: foo = [1,2,3] print('Before "{}": {}'.format(func.__name__, foo)) func() print('After "{}": {}'.format(func.__name__, foo)) The result is as below: # python3 foo.py Before "good": [1, 2, 3] After "good": [9, 2, 3] Before "bad" : [1, 2, 3] After "bad" : [1, 2, 3] A: Because you're not setting foo, you're getting something in foo (foo[0] to be exact). In bad you create a new variable foo. In good you do something like foo.set(0, 9) (set item 0 to value 9). Which is using a variable, and not defining a new name. A: Variables will look to their inner scope first, then to outer scopes in Python. For instance: FOO = 'global' def printfoo(): print FOO # prints 'global' def printfoolocal(): FOO = 'local' print FOO # prints 'local' If you'd like to modify a globally scoped variable, you need to use the global keyword def modifyfoo(): global FOO FOO = 'modified' print FOO # prints 'modified' A: To put it simply, Python first looks in the local variables and then in the global ones (whether reading or creating a variable). So, in good, you use the foo variable: there is no local foo variable, only a global one => you take the global one and modify it. In bad, you create a new (local) variable, so the global one is not modified. You can specify that a variable is global with the global keyword: def good2(): global foo foo = [9, 2, 3] A: If, like good, you want to replace the contents of the list foo, then you could assign to a slice of the whole list like: def good2(): foo[:] = [9, 2, 3] Just like where good assigned to one element of the list, this replaces the whole contents. In bad you were binding a new list to the name foo.
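One wrinkle the answers skirt around: locality is decided at compile time from any assignment anywhere in the function body, not at the point of use, which is why mixing a read with a later write (without global) fails with UnboundLocalError. A small sketch:
foo = [1, 2, 3]

def ugly():
    print(foo)  # UnboundLocalError: foo is local to ugly() ...
    foo = [9]   # ... because this assignment makes it local everywhere

def fixed():
    global foo
    print(foo)  # [1, 2, 3]
    foo = [9]   # rebinds the module-level name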
Mutable global variables don't get hidden in Python functions, right?
Please see the following code: def good(): foo[0] = 9 # why this foo isn't local variable who hides the global one def bad(): foo = [9, 2, 3] # foo is local, who hides the global one for func in [good, bad]: foo = [1,2,3] print('Before "{}": {}'.format(func.__name__, foo)) func() print('After "{}": {}'.format(func.__name__, foo)) The result is as below: # python3 foo.py Before "good": [1, 2, 3] After "good": [9, 2, 3] Before "bad" : [1, 2, 3] After "bad" : [1, 2, 3]
[ "Because you're not setting foo, you're getting something in foo (foo[0] to be exact).\nIn bad you create a new variable foo. In good you do something like foo.set(0, 9) (set item 0 to value 9). Which is using a variable, and not defining a new name.\n", "Variables will look to their inner scope first then to outer scopes in python. For instance:\nFOO = 'global'\n\ndef printfoo():\n print FOO\n# prints 'global'\n\ndef printfoolocal():\n FOO = 'local'\n print FOO\n# prints 'local'\n\nIf you'd like to modify a globally scoped variable, you need to use the global keyword\ndef modifyfoo():\n global FOO\n FOO = 'modified'\nprint FOO\n# prints 'modified'\n\n", "To make simple, python first look in the local variables and after in the global ones. (For reading or for creating a variable)\nSo, in good, you take the foo variable : No local foo variable and a global foo variable => you take the global one and you modify it.\nIn bad, you create a new (local) variable, so the global one is not modified.\nYou can specify that a variable is global with the global keyword :\ndef good2():\n global foo\n foo = [9, 2, 3]\n\n", "If, like good, you want to replace the contents of the list foo, then you could assign to a slice of the whole list like:\ndef good2():\n foo[:] = [9, 2, 3]\n\nJust like where good assigned to one element of the list, this replaces the whole contents.\nIn bad you were binding a new list to the name foo.\n" ]
[ 7, 0, 0, 0 ]
[]
[]
[ "python", "scope" ]
stackoverflow_0002781690_python_scope.txt
Q: Break up a polygon into smaller ones I am working with geodjango and I want to break up a 2D rectangular polygon into smaller ones. My input is a big rectangle and I want to subdivide it into smaller rectangles. The sum of the smaller rectangles must be the original rectangle. All subrectangles should be of equal size. How can I do that? Thank you. A: Pick any point inside the rectangle Draw two lines through it parallel to the edges of the rectangle. Now you've divided your rectangle into four smaller ones.
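Since the question asks for equal-size pieces, the general form of that idea is a regular n-by-m grid over the rectangle's bounding box. A minimal sketch using GeoDjango's Polygon.from_bbox constructor (the grid helper itself is illustrative, not from the original answer):
from django.contrib.gis.geos import Polygon

def grid(xmin, ymin, xmax, ymax, nx, ny):
    """Split the bounding box into nx * ny equal sub-rectangles."""
    dx = (xmax - xmin) / float(nx)
    dy = (ymax - ymin) / float(ny)
    return [Polygon.from_bbox((xmin + i * dx, ymin + j * dy,
                               xmin + (i + 1) * dx, ymin + (j + 1) * dy))
            for i in range(nx) for j in range(ny)]

cells = grid(0, 0, 10, 4, 5, 2)  # ten 2x2 rectangles covering the original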
Break up a polygon into smaller ones
I am working with geodjango and I want to break up a 2D rectangular polygon into smaller ones. My input is a big rectangle and I want to subdivide it into smaller rectangles. The sum of the smaller rectangles must be the original rectangle. All subrectangles should be of equal size. How can I do that? Thank you.
[ "\nPick any point inside the rectangle\nDraw two lines through it parallel to the edges of the rectangle. Now you've divided your rectangle into four smaller ones.\n\n" ]
[ 2 ]
[]
[]
[ "geodjango", "geometry", "gis", "python" ]
stackoverflow_0002783075_geodjango_geometry_gis_python.txt
Q: Can I use django.contrib.gis on GAE? Can I use GeoDjango with GAE / BigTable? A: Another limitation is that the GEOS and GDAL libs aren't available on App Engine. A: No. You can't use Django models on App Engine, and therefore, can't use anything else that uses them, such as django.contrib.gis. A: You might be interested in geohash: read a previous answer of mine.
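To make the geohash suggestion concrete: a geohash turns latitude/longitude into a string whose shared prefixes imply spatial proximity, which reduces "nearby" lookups to the datastore's ordinary string-prefix trick. A sketch assuming the third-party python-geohash package (geohash.encode is its call; the Place model and its property name are made up for illustration):
import geohash  # third-party: python-geohash
from google.appengine.ext import db

class Place(db.Model):
    geo = db.StringProperty()

Place(geo=geohash.encode(48.8584, 2.2945)).put()

prefix = geohash.encode(48.8584, 2.2945)[:6]  # roughly km-scale cell
nearby = Place.all().filter('geo >=', prefix).filter('geo <', prefix + u'\ufffd')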
Can I use django.contrib.gis on GAE?
Can I use GeoDjango with GAE / BigTable?
[ "Another limitation is that the GEOS and GDAL libs aren't available on App Engine.\n", "No. You can't use Django models on App Engine, and therefore, can't use anything else that uses them, such as django.contrib.gis.\n", "You might be interested in geohash: read a previous answer of mine.\n" ]
[ 5, 4, 0 ]
[]
[]
[ "django", "gis", "google_app_engine", "python" ]
stackoverflow_0002774723_django_gis_google_app_engine_python.txt
Q: Compare string with all values in list I am trying to fumble through python, and learn the best way to do things. I have a string where I am doing a compare with another string to see if there is a match: if paid[j].find(d)>=0: #BLAH BLAH If d were a list, what is the most efficient way to see if the string contained in paid[j] has a match to any value in d? A: If you only want to know if any item of d is contained in paid[j], as you literally say: if any(x in paid[j] for x in d): ... If you also want to know which items of d are contained in paid[j]: contained = [x for x in d if x in paid[j]] contained will be an empty list if no items of d are contained in paid[j]. There are other solutions yet if what you want is yet another alternative, e.g., get the first item of d contained in paid[j] (and None if no item is so contained): firstone = next((x for x in d if x in paid[j]), None) BTW, since in a comment you mention sentences and words, maybe you don't necessarily want a string check (which is what all of my examples are doing), because they can't consider word boundaries -- e.g., each example will say that 'cat' is in 'obfuscate' (because, 'obfuscate' contains 'cat' as a substring). To allow checks on word boundaries, rather than simple substring checks, you might productively use regular expressions... but I suggest you open a separate question on that, if that's what you require -- all of the code snippets in this answer, depending on your exact requirements, will work equally well if you change the predicate x in paid[j] into some more sophisticated predicate such as somere.search(paid[j]) for an appropriate RE object somere. (Python 2.6 or better -- slight differences in 2.5 and earlier). If your intention is something else again, such as getting one or all of the indices in d of the items satisfying your constraint, there are easy solutions for those different problems, too... but, if what you actually require is so far away from what you said, I'd better stop guessing and hope you clarify;-). A: I assume you mean list and not array? There is such a thing as an array in Python, but more often than not you want a list instead of an array. The way to check if a list contains a value is to use in: if paid[j] in d: # ... A: In Python you may use the in operator. You can do stuff like this: >>> "c" in "abc" True Taking this further, you can check for complex structures, like tuples: >>> (2, 4, 8) in ((1, 2, 3), (2, 4, 8)) True A: for word in d: if word in paid[j]: do_something() will try all the words in the list d and check if they can be found in the string paid[j]. This is not very efficient since paid[j] has to be scanned again for each word in d. You could also use two sets, one composed of the words in the sentence, one of your list, and then look at the intersection of the sets. sentence = "words don't come easy" d = ["come", "together", "easy", "does", "it"] s1 = set(sentence.split()) s2 = set(d) print (s1.intersection(s2)) Output: {'come', 'easy'}
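The first answer mentions word-boundary matching with regular expressions but stops short of showing it; a minimal sketch of that variant (standard re module, \b anchors):
import re

d = ['cat', 'dog']
sentence = 'we obfuscate nothing about the dog'

pattern = re.compile(r'\b(?:%s)\b' % '|'.join(re.escape(x) for x in d))
print(bool(pattern.search(sentence)))  # True: 'dog' matches as a whole word,
                                       # while 'cat' inside 'obfuscate' would not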
Compare string with all values in list
I am trying to fumble through python, and learn the best way to do things. I have a string where I am doing a compare with another string to see if there is a match: if paid[j].find(d)>=0: #BLAH BLAH If d were a list, what is the most efficient way to see if the string contained in paid[j] has a match to any value in d?
[ "If you only want to know if any item of d is contained in paid[j], as you literally say:\nif any(x in paid[j] for x in d): ...\n\nIf you also want to know which items of d are contained in paid[j]:\ncontained = [x for x in d if x in paid[j]]\n\ncontained will be an empty list if no items of d are contained in paid[j].\nThere are other solutions yet if what you want is yet another alternative, e.g., get the first item of d contained in paid[j] (and None if no item is so contained):\nfirstone = next((x for x in d if x in paid[j]), None)\n\nBTW, since in a comment you mention sentences and words, maybe you don't necessarily want a string check (which is what all of my examples are doing), because they can't consider word boundaries -- e.g., each example will say that 'cat' is in 'obfuscate' (because, 'obfuscate' contains 'cat' as a substring). To allow checks on word boundaries, rather than simple substring checks, you might productively use regular expressions... but I suggest you open a separate question on that, if that's what you require -- all of the code snippets in this answer, depending on your exact requirements, will work equally well if you change the predicate x in paid[j] into some more sophisticated predicate such as somere.search(paid[j]) for an appropriate RE object somere.\n(Python 2.6 or better -- slight differences in 2.5 and earlier).\nIf your intention is something else again, such as getting one or all of the indices in d of the items satisfying your constrain, there are easy solutions for those different problems, too... but, if what you actually require is so far away from what you said, I'd better stop guessing and hope you clarify;-).\n", "I assume you mean list and not array? There is such a thing as an array in Python, but more often than not you want a list instead of an array.\nThe way to check if a list contains a value is to use in:\nif paid[j] in d:\n # ...\n\n", "In Python you may use the in operator. You can do stuff like this:\n>>> \"c\" in \"abc\"\nTrue\n\nTaking this further, you can check for complex structures, like tuples:\n>>> (2, 4, 8) in ((1, 2, 3), (2, 4, 8))\nTrue\n\n", "for word in d:\n if d in paid[j]:\n do_something()\n\nwill try all the words in the list d and check if they can be found in the string paid[j].\nThis is not very efficient since paid[j] has to be scanned again for each word in d. You could also use two sets, one composed of the words in the sentence, one of your list, and then look at the intersection of the sets.\nsentence = \"words don't come easy\"\nd = [\"come\", \"together\", \"easy\", \"does\", \"it\"]\n\ns1 = set(sentence.split())\ns2 = set(d)\n\nprint (s1.intersection(s2))\n\nOutput:\n{'come', 'easy'}\n\n" ]
[ 56, 11, 10, 2 ]
[]
[]
[ "python" ]
stackoverflow_0002783969_python.txt
Q: Python - copy by reference Is there any possibility to copy variable by reference no matter if its int or class instance? My goal is to have two lists of the same objects and when one changes, change is visible in second. In other words i need pointers:/ I simply want int, float and other standard types which are normally copied by value, force to copy by reference. This will make my code more consistent. If there is no such possibility, class wrapper is the best solution. A: Python always works by reference, unless you explicitly ask for a copy (a slice of a built-in list is deemed to "ask for a copy" -- but a slice of a numpy array also works by reference). However, exactly because of that, alist=anotherlist; alist.sort() means the single list objects (with two equivalent names alist and anotherlist) gets sorted -- you can't maintain two different orderings at the same time on the same list object. So, in this case, you must explicitly ask for a copy (e.g. alist=list(anotherlist)) -- and once you've done that there is no more connection between the two distinct list objects. You can't have it both ways: either you work by reference (and have a single list object and thus a single ordering!), or you make a copy (in which case you end up with two separate list objects). You could take advantage of the fact that the copies discussed so far are shallow -- the objects (items) that the two lists refer to are the same... until and unless you perform removals, additions, or reassignments of items on either list (mutation of mutable items on the other hand don't alter this connection: it's a completely separate and drastically different situation from any of the above, since removals, additions and reassignments are operation on the list, while calling a mutating method on an item is an operation on the item -- items are oblivious to any operation on one or more lists referring to them, lists are oblivious to any operation on one or more of the items they refer to). There's not much you can do about removals and additions, except keeping two lists wrapped and synced up in a single object as suggested in other answers; but for reassignments of items, if that's all you require, you could turn those into mutation of items by adding one level of indirection -- instead of having a list directly referring to the items, have it refer e.g. to one-item sublists. For example: >>> alist = list([x] for x in 'ciao') >>> blist = list(alist) >>> blist.sort() >>> alist [['c'], ['i'], ['a'], ['o']] >>> blist [['a'], ['c'], ['i'], ['o']] >>> blist[-1][0] = 'z' >>> blist [['a'], ['c'], ['i'], ['z']] >>> alist [['c'], ['i'], ['a'], ['z']] Whether this concept of an extra indirection level can help at all with what you're exactly trying to do, only you can tell, since we don't really know what it is that you are trying to do;-). A: You can wrap you immutable objects in a class: class MutableWrapper(object): def __init__(self, value): self.value = value a = MutableWrapper(10) b = a a.value = 20 assert b.value == 20 A: (edited to show example of dereferencing to same memory location) Luper Rouch's approach is spot-on when dealing with lists of mixed-types. Just wrap the immutable types with containers. If you really insist on C-style arrays where the elements are constrained to being a single type (array of ints, array of chars, etc), you could use the ctypes module. It'll give you access to c data types and pointers in addition to an FFI to use DLLs. 
from ctypes import * containerTYPE = POINTER( c_uint ) * 10 #Array of pointers to UINTs of size 10 containerA = containerTYPE() containerB = containerTYPE() for i in range( 10 ): val = c_uint( i ) containerA[ i ] = pointer( val ) containerB[ -1 - i ] = pointer( val ) print "A\tB" for i in range( 10 ): print containerA[ i ].contents.value, "\t", containerB[ i ].contents.value for i in range( 10 ): #affects both arrays derefed = containerA[ i ].contents derefed.value = i * 2 print print "A\tB" for i in range( 10 ): print containerA[ i ].contents.value, "\t", containerB[ i ].contents.value Result: A B 0 9 1 8 2 7 3 6 4 5 5 4 6 3 7 2 8 1 9 0 A B 0 18 2 16 4 14 6 12 8 10 10 8 12 6 14 4 16 2 18 0 A: I am not sure what API you must provide. It's possible you want something like import bisect class DualLists(object): def __init__(self, iterable=[]): self.insertion_order = list(iterable) self.sorted = sorted(self.insertion_order) def append(self, item): self.insertion_order.append(item) bisect.insort(self.sorted, item) >>> d = DualLists() >>> d.append(4) >>> d.append(6) >>> d.append(1) >>> d.insertion_order [4, 6, 1] >>> d.sorted [1, 4, 6] Note that the third-party package blist provides a more efficient sorted list type than using the bisect module with the built-in list type can provide. The operation of this class might also be better served by using a database, such as the one accessed by the built-in sqlite3 module. A: There is probably a more elegant, pythonic way to handle this than using pointers. Could you provide a little more context into what you are trying to do. Based on what you've given so far, I would subclass the built-in list type and have it store an alternative version of itself. Override the list methods to operate on itself, and the alternative version of itself where it makes sense. Where it doesn't make sense, like in the sort() function, define a second function for the alternate list. This only really makes sense if the sort is unreasonably expensive; otherwise, I would just maintain one list and sort on demand. class MyList(list): def __init__(self, li): super(MyList, self).__init__(li) self.altlist = list(li) def append(self, x): super(MyList, self).append(x) self.altlist.append(x) def sortalt(self): ... ...
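For completeness, the standard library's copy module is the explicit way to control this: copy.copy gives the shallow copy discussed above, while copy.deepcopy severs item-level sharing as well. A quick sketch:
import copy

a = [[1], [2]]
b = copy.copy(a)      # new list object, same item objects
c = copy.deepcopy(a)  # new list object, new item objects

a[0].append(9)
print(b[0])  # [1, 9] -- the shallow copy still shares items with a
print(c[0])  # [1]    -- the deep copy does not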
Python - copy by reference
Is there any possibility to copy variable by reference no matter if its int or class instance? My goal is to have two lists of the same objects and when one changes, change is visible in second. In other words i need pointers:/ I simply want int, float and other standard types which are normally copied by value, force to copy by reference. This will make my code more consistent. If there is no such possibility, class wrapper is the best solution.
[ "Python always works by reference, unless you explicitly ask for a copy (a slice of a built-in list is deemed to \"ask for a copy\" -- but a slice of a numpy array also works by reference). However, exactly because of that, alist=anotherlist; alist.sort() means the single list objects (with two equivalent names alist and anotherlist) gets sorted -- you can't maintain two different orderings at the same time on the same list object.\nSo, in this case, you must explicitly ask for a copy (e.g. alist=list(anotherlist)) -- and once you've done that there is no more connection between the two distinct list objects. You can't have it both ways: either you work by reference (and have a single list object and thus a single ordering!), or you make a copy (in which case you end up with two separate list objects).\nYou could take advantage of the fact that the copies discussed so far are shallow -- the objects (items) that the two lists refer to are the same... until and unless you perform removals, additions, or reassignments of items on either list (mutation of mutable items on the other hand don't alter this connection: it's a completely separate and drastically different situation from any of the above, since removals, additions and reassignments are operation on the list, while calling a mutating method on an item is an operation on the item -- items are oblivious to any operation on one or more lists referring to them, lists are oblivious to any operation on one or more of the items they refer to).\nThere's not much you can do about removals and additions, except keeping two lists wrapped and synced up in a single object as suggested in other answers; but for reassignments of items, if that's all you require, you could turn those into mutation of items by adding one level of indirection -- instead of having a list directly referring to the items, have it refer e.g. to one-item sublists. For example:\n>>> alist = list([x] for x in 'ciao')\n>>> blist = list(alist)\n>>> blist.sort()\n>>> alist\n[['c'], ['i'], ['a'], ['o']]\n>>> blist\n[['a'], ['c'], ['i'], ['o']]\n>>> blist[-1][0] = 'z'\n>>> blist\n[['a'], ['c'], ['i'], ['z']]\n>>> alist\n[['c'], ['i'], ['a'], ['z']]\n\nWhether this concept of an extra indirection level can help at all with what you're exactly trying to do, only you can tell, since we don't really know what it is that you are trying to do;-).\n", "You can wrap you immutable objects in a class:\nclass MutableWrapper(object):\n\n def __init__(self, value):\n self.value = value\n\na = MutableWrapper(10)\nb = a\na.value = 20\nassert b.value == 20\n\n", "(edited to show example of dereferencing to same memory location)\nLuper Rouch's approach is spot-on when dealing with lists of mixed-types. Just wrap the immutable types with containers.\nIf you really insist on C-style arrays where the elements are constrained to being a single type (array of ints, array of chars, etc), you could use the ctypes module. 
It'll give you access to c data types and pointers in addition to an FFI to use DLLs.\nfrom ctypes import *\ncontainerTYPE = POINTER( c_uint ) * 10 #Array of pointers to UINTs of size 10\ncontainerA = containerTYPE()\ncontainerB = containerTYPE()\n\nfor i in range( 10 ):\n val = c_uint( i )\n containerA[ i ] = pointer( val ) \n containerB[ -1 - i ] = pointer( val ) \n\nprint \"A\\tB\"\nfor i in range( 10 ):\n print containerA[ i ].contents.value, \"\\t\", containerB[ i ].contents.value\n\nfor i in range( 10 ): #affects both arrays\n derefed = containerA[ i ].contents\n derefed.value = i * 2\n\nprint\nprint \"A\\tB\"\nfor i in range( 10 ):\n print containerA[ i ].contents.value, \"\\t\", containerB[ i ].contents.value\n\nResult:\nA B\n0 9\n1 8\n2 7\n3 6\n4 5\n5 4\n6 3\n7 2\n8 1\n9 0\n\nA B\n0 18\n2 16\n4 14\n6 12\n8 10\n10 8\n12 6\n14 4\n16 2\n18 0\n\n", "I am not sure what API you must provide. It's possible you want something like \nimport bisect\n\nclass DualLists(object):\n def __init__(self, iterable=[]):\n self.insertion_order = list(iterable)\n self.sorted = sorted(self.insertion_order)\n\n def append(self, item):\n self.insertion_order.append(item)\n bisect.insort(self.sorted, item)\n\n>>> d = DualLists()\n>>> d.append(4)\n>>> d.append(6)\n>>> d.append(1)\n>>> d.insertion_order\n[4, 6, 1]\n>>> d.sorted\n[1, 4, 6]\n\nNote that the third-party package blist provides a more efficient sorted list type than using the bisect module with the built-in list type can provide. The operation of this class might also be better served by using a database, such as the one accessed by the built-in sqlite3 module.\n", "There is probably a more elegant, pythonic way to handle this than using pointers. Could you provide a little more context into what you are trying to do.\nBased on what you've given so far, I would subclass the built-in list type and have it store an alternative version of itself. Override the list methods to operate on itself, and the alternative version of itself where it makes sense. Where it doesn't make sense, like in the sort() function, define a second function for the alternate list.\nThis only really makes sense if the sort is unreasonably expensive; otherwise, I would just maintain one list and sort on demand.\nclass MyList(list):\n\n def __init__(self, li):\n super(MyList, self).__init__(li)\n self.altlist = list(li)\n\n def append(self, x):\n super(MyList, self).append(x)\n self.altlist.append(x)\n\n def sortalt(self):\n ...\n\n ...\n\n" ]
[ 9, 8, 1, 0, 0 ]
[]
[]
[ "python", "reference" ]
stackoverflow_0002783489_python_reference.txt
Q: Python - How to catch outside exceptions inside methods I want to know if there would be a way to catch exceptions inside called methods. Example: def foo(value): print value foo(x) This would throw a NameError exception, because x is not declared. I'd like to catch this NameError exception inside the foo method. Is there a way? A: The NameError occurs when x is attempted to be evaluated. foo is never entered, so you can't catch the NameError inside foo. I think what you think is that when you do foo(x), foo is entered, and then x is looked up. You'd like to say, "I don't know what x is", instead of letting a NameError get raised. Unfortunately (for what you want to do), Python, like pretty much every other programming language, evaluates its arguments before they are passed to the function. There's no way to stop the NameError of a value passed into foo from inside foo. A: Not exactly, but there is a way to catch every exception that isn't handled: >>> import sys >>> >>> def handler(type, value, traceback): >>> print "Blocked:", value >>> sys.excepthook = handler >>> >>> def foo(value): >>> print value >>> >>> foo(x) Blocked: name 'x' is not defined Unfortunately, sys.excepthook is only called "just before the program exits," so you can't return control to your program, much less insert the exception into foo().
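Since the name is evaluated before foo is ever entered, the idiomatic place to catch the error is the call site; a minimal sketch:
def foo(value):
    print(value)

try:
    foo(x)
except NameError as e:
    print('caught outside foo: %s' % e)  # caught outside foo: name 'x' is not defined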
Python - How to catch outside exceptions inside methods
I want to know if there would be a way to catch exceptions inside called methods. Example: def foo(value): print value foo(x) This would throw a NameError exception, because x is not declared. I'd like to catch this NameError exception inside the foo method. Is there a way?
[ "The NameError occurs when x is attempted to be evaluated. foo is never entered, so you can't catch the NameError inside foo.\nI think what you think is that when you do foo(x), foo is entered, and then x is looked up. You'd like to say, \"I don't know what x is\", instead of letting a NameError get raised.\nUnfortunately (for what you want to do), Python, like pretty much every other programming language, evaluates its arguments before they are passed to the function. There's no way to stop the NameError of a value passed into foo from inside foo.\n", "Not exactly, but there is a way to catch every exception that isn't handled:\n>>> import sys\n>>> \n>>> def handler(type, value, traceback):\n>>> print \"Blocked:\", value\n>>> sys.excepthook = handler\n>>> \n>>> def foo(value):\n>>> print value\n>>> \n>>> foo(x)\nBlocked: name 'x' is not defined\n\nUnfortunately, sys.excepthook is only called \"just before the program exits,\" so you can't return control to your program, much less insert the exception into foo().\n" ]
[ 9, 1 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0002784695_exception_python.txt
Q: Is it possible to use P4Python API with IronPython? Is it possible to use P4Python (the perforce python api) with IronPython? I'd like to use the python api because it seems much faster than using the p4.net implementation of the Perforce API, but when I try to import p4 into IronPython I receive the following error. IronPython 2.6.1 (2.6.10920.0) on .NET 4.0.30128.1 Type "help", "copyright", "credits" or "license" for more information. import P4 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\IronPython 2.6 for .NET 4.0\lib\site-packages\P4.py", line 210, in <module> ImportError: No module named P4API A: I guess P4API is a CPython extension, so it does not work in IronPython. In that case, try ironclad.
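If ironclad doesn't pan out, a pragmatic fallback that works wherever .NET runs is shelling out to the p4 command-line client ("p4 info" is a real command); sketched here with subprocess, assuming IronPython's implementation of that module behaves in your environment:
import subprocess

def p4(*args):
    # assumes the p4 client is on PATH and P4PORT/P4USER are configured
    proc = subprocess.Popen(('p4',) + args, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out

print p4('info')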
Is it possible to use P4Python API with IronPython?
Is it possible to use P4Python (the perforce python api) with IronPython? I'd like to use the python api because it seems much faster than using the p4.net implementation of the Perforce API, but when I try to import p4 into IronPython I receive the following error. IronPython 2.6.1 (2.6.10920.0) on .NET 4.0.30128.1 Type "help", "copyright", "credits" or "license" for more information. import P4 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\IronPython 2.6 for .NET 4.0\lib\site-packages\P4.py", line 210, in <module> ImportError: No module named P4API
[ "I guess P4API is CPython extension so it does not work in IronPython. In that case, try ironclad.\n" ]
[ 1 ]
[]
[]
[ "ironpython", "p4python", "perforce", "python" ]
stackoverflow_0002783285_ironpython_p4python_perforce_python.txt
Q: Assign variable with variable in function Let's say we have def Foo(Bar=0,Song=0): print(Bar) print(Song) And I want to assign any one of the two parameters in the function with the variables Sing and SongVal: Sing = Song SongVal = 2 So that it can be run like: Foo(Sing=SongVal) Where Sing would assign the Song parameter to the SongVal, which is 2. The result should be printed like so: 0 2 So should I rewrite my function, or is it possible to do it the way I want to? (With the code above you get an error saying Foo has no parameter Sing, which I understand why; is there any way to overcome this without rewriting the function too much?) Thanks in advance! A: What you're looking for is the **kwargs way of passing arbitrary keyword arguments: kwargs = {Sing: SongVal} Foo(**kwargs) See section 4.7 of the tutorial at www.python.org for more examples.
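For the answer's snippet to run as written, Sing must hold the target parameter's name as a string; spelled out end to end:
def Foo(Bar=0, Song=0):
    print(Bar)
    print(Song)

Sing = 'Song'  # the parameter's name, as a string -- not the variable Song
SongVal = 2
Foo(**{Sing: SongVal})  # prints 0, then 2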
Assign variable with variable in function
Let's say we have def Foo(Bar=0,Song=0): print(Bar) print(Song) And I want to assign any one of the two parameters in the function with the variables Sing and SongVal: Sing = Song SongVal = 2 So that it can be run like: Foo(Sing=SongVal) Where Sing would assign the Song parameter to the SongVal, which is 2. The result should be printed like so: 0 2 So should I rewrite my function, or is it possible to do it the way I want to? (With the code above you get an error saying Foo has no parameter Sing, which I understand why; is there any way to overcome this without rewriting the function too much?) Thanks in advance!
[ "What you're looking for is the **kwargs way of passing arbitrary keyword arguments:\nkwargs = {Sing: SongVal}\nfoo(**kwargs)\n\nSee section 4.7 of the tutorial at www.python.org for more examples.\n" ]
[ 4 ]
[]
[]
[ "function", "python" ]
stackoverflow_0002784977_function_python.txt
Q: What are some strategies for maintaining a common database schema with a team of developers and no DBA? I'm curious about how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA? What I mean, basically, is that if someone wants to make a change to the database, what are some strategies for doing that? (i.e. I've created a 'Car' model and now I want to apply the appropriate DDL to the database, etc..) We're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way to create the models using our ORM, but we recently ditched this because: We couldn't track changes using the ORM The state of the ORM wasn't in sync with the database (e.g. lots of differences primarily related to indexes and unique constraints) There was no way to audit database changes unless the developer documented the database change via email to the team. Our solution to this problem was to basically have a "gatekeeper" individual who checks every change into the database and applies all accepted database changes to an accepted_db_changes.sql file, whereby the developers who need to make any database changes put their requests into a proposed_db_changes.sql file. We check this file in, and, when it's updated, we all apply the change to our personal database on our development machine. We don't create indexes or constraints on the models, they are applied explicitly on the database. I would like to know what are some strategies to maintain database schemas and if ours seems reasonable. Thanks! A: The solution is rather administrative than technical :) The general rule is easy, there should only be tree-like dependencies in the project: - There should always be a single master source of schema, stored together with the project source code in the version control - Everything affected by the change in the master source should be automatically re-generated every time the master source is updated, no manual intervention allowed, ever; if automatic generation does not work -- fix either the master source or the generator, don't manually update the source code - All re-generations should be performed by the same person who updated the master source and all changes including the master source change should be considered a single transaction (single source control commit, single build/deployment for every affected environment including DBs update) Being enforced, this gives a 100% reliable result. There are essentially 4 possible choices of the master source 1) DB metadata, sources are generated after DB update by some tool connecting to the live DB 2) Source code, some tool is generating SQL scheme from the sources, annotated in a special way and then SQL is run on the DB 3) DDL, both SQL schema and source code are generated by some tool 4) some other description is used (say a text file read by a special Perl script generating both SQL schema and the source code) 1,2,3 are equally good, providing that the tool you need exists and is not overly expensive 4 is a universal approach, but it should be applied from the very beginning of the project and has an overhead of a couple thousand lines of code in a strange language to maintain A: Have you tried the SQLalchemy Migrate tools? They are specifically designed to auto-migrate your database design changes. A: So am I correct in assuming you are designing your db directly on the physical db? I used to do this many years ago but the quality of the resultant db was pretty poor. 
If you use a modelling tool (personally I think Sybase pdesigner is still best of breed, but look around) everybody can make changes to the model and just sync their local db’s as required (it will also pick up documentation tasks). So, re bobah’s post, the master is the pdesigner model rather than a physical db. Is your accepted_db_changes.sql file one humongous list of change scripts? I’m not sure I like the idea of changing the file name, etc. Given that the difference between the two db versions is a sequential list of alter scripts, how about a model like: Ver1 (folder) Change 1-1.sql Change 1-2.sql Change 1-3.sql Ver2 (folder) Change 2-1.sql Change 2-2.sql Change 2-3.sql Where each change (new file) is reviewed before committing. A general rule should be to make a conscious effort to automate as much of the db deployment in your dev environments as possible; we have definitely got a respectable ROI on this work. You can use tools like redgate to generate your ddl (it has an api, not sure if it works with SQLAlchemy though). IMO, DB changes should be trivial, if you are finding they are blocking then look at what you can automate. A: You might find the book Refactoring Databases helpful as it contains general strategies for managing databases, not just how to refactor them. His system expects that every developer will have their own copy of the database as well as some general test database used before deploying to production. Your situation is one of the easier situations the book describes as you don't have a number of separate applications using the database (although you do need someone who knows how to describe database migrations). The biggest thing is to be able to build the database from information in source control and have changes described by small migrations (see @WoLpH's answer) rather than just making the change in the database. Also you will find things easier if you have at least ORM <-> database tests to check that they are still in sync.
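To make the sqlalchemy-migrate suggestion concrete: each versioned change script in that tool is a module exposing an upgrade/downgrade pair. A minimal sketch (the car table and its columns are illustrative, echoing the question's 'Car' model):
from sqlalchemy import Table, Column, Integer, String, MetaData

meta = MetaData()
car = Table('car', meta,
            Column('id', Integer, primary_key=True),
            Column('model', String(50)))

def upgrade(migrate_engine):
    meta.bind = migrate_engine
    car.create()

def downgrade(migrate_engine):
    meta.bind = migrate_engine
    car.drop()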
What are some strategies for maintaining a common database schema with a team of developers and no DBA?
I'm curious about how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA? What I mean, basically, is that if someone wants to make a change to the database, what are some strategies for doing that? (i.e. I've created a 'Car' model and now I want to apply the appropriate DDL to the database, etc..) We're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way as to create the database schema using our ORM, but we recently ditched this because: We couldn't track changes using the ORM The state of the ORM wasn't in sync with the database (e.g. lots of differences primarily related to indexes and unique constraints) There was no way to audit database changes unless the developer documented the database change via email to the team. Our solution to this problem was to basically have a "gatekeeper" individual who checks every change into the database and applies all accepted database changes to an accepted_db_changes.sql file, whereby the developers who need to make any database changes put their requests into a proposed_db_changes.sql file. We check this file in, and, when it's updated, we all apply the change to our personal database on our development machine. We don't create indexes or constraints on the models; they are applied explicitly on the database. I would like to know what some strategies are for maintaining database schemas and whether ours seems reasonable. Thanks!
[ "The solution is administrative rather than technical :)\nThe general rule is easy: there should only be tree-like dependencies in the project:\n- There should always be a single master source of the schema, stored together with the project source code in version control\n- Everything affected by a change in the master source should be automatically re-generated every time the master source is updated, with no manual intervention allowed, ever; if automatic generation does not work -- fix either the master source or the generator, don't manually update the source code\n- All re-generations should be performed by the same person who updated the master source, and all changes including the master source change should be considered a single transaction (single source control commit, single build/deployment for every affected environment including DB updates)\nEnforced consistently, this gives a 100% reliable result.\nThere are essentially 4 possible choices of the master source:\n1) DB metadata, sources are generated after a DB update by some tool connecting to the live DB\n2) Source code, some tool generates the SQL schema from the sources, annotated in a special way, and then the SQL is run on the DB\n3) DDL, both the SQL schema and the source code are generated by some tool\n4) some other description is used (say a text file read by a special Perl script generating both the SQL schema and the source code)\n1, 2 and 3 are equally good, provided that the tool you need exists and is not overly expensive\n4 is a universal approach, but it should be applied from the very beginning of the project and has an overhead of a couple thousand lines of code in a strange language to maintain\n", "Have you tried the SQLAlchemy Migrate tools?\nThey are specifically designed to auto-migrate your database design changes.\n", "So am I correct in assuming you are designing your db directly on the physical db? I used to do this many years ago but the quality of the resultant db was pretty poor. If you use a modelling tool (personally I think Sybase pdesigner is still best of breed, but look around) everybody can make changes to the model and just sync their local dbs as required (it will also pick up documentation tasks). So, re bobah’s post, the master is the pdesigner model rather than a physical db.\nIs your accepted_db_changes.sql file one humongous list of change scripts? I’m not sure I like the idea of changing the file name, etc. Given that the difference between two db versions is a sequential list of alter scripts, how about a model like:\nVer1 (folder)\n Change 1-1.sql\n Change 1-2.sql\n Change 1-3.sql\nVer2 (folder)\n Change 2-1.sql\n Change 2-2.sql\n Change 2-3.sql\n\nWhere each change (new file) is reviewed before committing.\nA general rule should be to make a conscious effort to automate as much of the db deployment in your dev environments as possible; we have definitely got a respectable ROI on this work. You can use tools like Redgate to generate your DDL (it has an API; not sure if it works with SQLAlchemy though). IMO, DB changes should be trivial; if you are finding they are blocking, then look at what you can automate.\n", "You might find the book Refactoring Databases helpful, as it contains general strategies for managing databases, not just how to refactor them.\nHis system expects that every developer will have their own copy of the database as well as some general test database used before deploying to production. Your situation is one of the easier situations the book describes, as you don't have a number of separate applications using the database (although you do need someone who knows how to describe database migrations). The biggest thing is to be able to build the database from information in source control and have changes described by small migrations (see @WoLpH's answer) rather than just making the change in the database. Also you will find things easier if you have at least ORM <-> database tests to check that they are still in sync.\n" ]
[ 2, 1, 1, 1 ]
[]
[]
[ "database", "database_schema", "postgresql", "python", "sqlalchemy" ]
stackoverflow_0002748946_database_database_schema_postgresql_python_sqlalchemy.txt
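A minimal sketch of the per-version change-script idea discussed above (the Ver1/Change 1-1.sql folder layout), written in Python since that is the asker's stack. Everything named here is hypothetical: the migrations/ directory, the schema_version bookkeeping table, and the use of sqlite3 exist only so the sketch runs as-is; swap in your own DB-API driver and naming scheme, and note that the naive sorted() ordering assumes zero-padded script numbers.

import os
import sqlite3

def applied_versions(conn):
    # Bookkeeping table recording which change scripts have already run.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    return set(row[0] for row in conn.execute("SELECT name FROM schema_version"))

def apply_pending(conn, root="migrations"):
    done = applied_versions(conn)
    # Walk Ver1, Ver2, ... folders and their change scripts in sorted order.
    for folder in sorted(os.listdir(root)):
        for script in sorted(os.listdir(os.path.join(root, folder))):
            name = folder + "/" + script
            if name in done:
                continue
            with open(os.path.join(root, folder, script)) as f:
                conn.executescript(f.read())  # executescript is sqlite3-specific
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
            conn.commit()

if __name__ == "__main__":
    apply_pending(sqlite3.connect("dev.db"))

Each developer can then sync their personal database by pulling new scripts and running this once; the schema_version table makes the run idempotent.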
Q: Capture global touch events (Symbian) Basically I wanted what the pys60 module keycapture does (global capture of keystrokes) but I wanted to do this with the touchscreen. So if the program is running, all touch events can be intercepted and logged by the program. How is this possible? A: Not quite sure if I understand, but I have seen a plug-in style touch-screen interceptor that uses a FEP (Front End Processor). This is how some people override the standard touch-screen keyboard. http://www.google.com/search?hl=en&source=hp&q=S60+Front+End+Processor&aq=f&aqi=&aql=&oq=&gs_rfai=
Capture global touch events (Symbian)
Basically I wanted what the pys60 module keycapture does (global capture of keystrokes) but I wanted to do this with the touchscreen. So if the program is running, all touch events can be intercepted and logged by the program. How is this possible?
[ "Not quite sure if I understand, but I have seen a plug-in style touch-screen interceptor that uses a FEP (Front End Processor). This is how some people override the standard touch-screen keyboard.\nhttp://www.google.com/search?hl=en&source=hp&q=S60+Front+End+Processor&aq=f&aqi=&aql=&oq=&gs_rfai=\n" ]
[ 0 ]
[]
[]
[ "nokia", "pys60", "python", "s60", "symbian" ]
stackoverflow_0002744691_nokia_pys60_python_s60_symbian.txt
Q: SQLAlchemy introspection of ORM classes/objects I am looking for a way to introspect SQLAlchemy ORM classes/entities to determine the types and other constraints (like maximum lengths) of an entity's properties. For example, if I have a declarative class: class User(Base): __tablename__ = "USER_TABLE" id = sa.Column(sa.types.Integer, primary_key=True) fullname = sa.Column(sa.types.String(100)) username = sa.Column(sa.types.String(20), nullable=False) password = sa.Column(sa.types.String(20), nullable=False) created_timestamp = sa.Column(sa.types.DateTime, nullable=False) I would want to be able to find out that the 'fullname' field should be a String with a maximum length of 100, and is nullable. And the 'created_timestamp' field is a DateTime and is not nullable. A: Something like: table = User.__table__ field = table.c["fullname"] print "Type", field.type print "Length", field.type.length print "Nullable", field.nullable EDIT: The upcoming 0.8 version has a New Class Inspection System: New Class Inspection System Status: completed, needs docs Lots of SQLAlchemy users are writing systems that require the ability to inspect the attributes of a mapped class, including being able to get at the primary key columns, object relationships, plain attributes, and so forth, typically for the purpose of building data-marshalling systems, like JSON/XML conversion schemes and of course form libraries galore. Originally, the Table and Column model were the original inspection points, which have a well-documented system. While SQLAlchemy ORM models are also fully introspectable, this has never been a fully stable and supported feature, and users tended to not have a clear idea how to get at this information. 0.8 has a plan to produce a consistent, stable and fully documented API for this purpose, which would provide an inspection system that works on classes, instances, and possibly other things as well. While many elements of this system are already available, the plan is to lock down the API including various accessors available from such objects as Mapper, InstanceState, and MapperProperty: (follow the link for more info)
SQLAlchemy introspection of ORM classes/objects
I am looking for a way to introspect SQLAlchemy ORM classes/entities to determine the types and other constraints (like maximum lengths) of an entity's properties. For example, if I have a declarative class: class User(Base): __tablename__ = "USER_TABLE" id = sa.Column(sa.types.Integer, primary_key=True) fullname = sa.Column(sa.types.String(100)) username = sa.Column(sa.types.String(20), nullable=False) password = sa.Column(sa.types.String(20), nullable=False) created_timestamp = sa.Column(sa.types.DateTime, nullable=False) I would want to be able to find out that the 'fullname' field should be a String with a maximum length of 100, and is nullable. And the 'created_timestamp' field is a DateTime and is not nullable.
[ "Something like:\ntable = User.__table__\nfield = table.c[\"fullname\"]\nprint \"Type\", field.type\nprint \"Length\", field.type.length\nprint \"Nullable\", field.nullable\n\nEDIT:\nThe upcoming 0.8 version has a New Class Inspection System:\n\nNew Class Inspection System\nStatus: completed, needs docs\nLots of SQLAlchemy users are writing systems that require the ability\n to inspect the attributes of a mapped class, including being able to\n get at the primary key columns, object relationships, plain\n attributes, and so forth, typically for the purpose of building\n data-marshalling systems, like JSON/XML conversion schemes and of\n course form libraries galore.\nOriginally, the Table and Column model were the original inspection\n points, which have a well-documented system. While SQLAlchemy ORM\n models are also fully introspectable, this has never been a fully\n stable and supported feature, and users tended to not have a clear\n idea how to get at this information.\n0.8 has a plan to produce a consistent, stable and fully documented API for this purpose, which would provide an inspection system that\n works on classes, instances, and possibly other things as well. While\n many elements of this system are already available, the plan is to\n lock down the API including various accessors available from such\n objects as Mapper, InstanceState, and MapperProperty:\n\n(follow the link for more info)\n" ]
[ 11 ]
[]
[]
[ "introspection", "python", "sqlalchemy" ]
stackoverflow_0002784986_introspection_python_sqlalchemy.txt
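To make the answer above concrete, here is a self-contained sketch assuming SQLAlchemy is installed (declarative_base lives in sqlalchemy.orm on 2.0+; the import below matches older releases). It shows both the __table__ route and the inspect() entry point from the 0.8 inspection system; the trimmed User model is a stand-in for the one in the question.

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "USER_TABLE"
    id = sa.Column(sa.Integer, primary_key=True)
    fullname = sa.Column(sa.String(100))
    username = sa.Column(sa.String(20), nullable=False)

for col in User.__table__.columns:
    # .length only exists on sized types such as String, hence getattr
    print(col.name, col.type, getattr(col.type, "length", None), col.nullable)

mapper = sa.inspect(User)  # SQLAlchemy 0.8+ inspection system
print([attr.key for attr in mapper.column_attrs])

Running this prints each column's name, type, length (None for Integer) and nullability, which covers the constraints the question asks about.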
Q: Include upper bound in range() How can I include the upper bound in the range() function? I can't just add 1, because my for-loop looks like: for x in range(1, math.floor(math.sqrt(x))): y = math.sqrt(n - x * x) But as I understand it, it will actually be 1 <= x < M, where I need 1 <= x <= M. Adding 1 will completely change the result. I am trying to rewrite my old program from C# to Python. That's how it looked in C#: for (int x = 1; x <= Math.Floor(Math.Sqrt(n)); x++) double y = Math.Sqrt(n - x * x); A: Just add one to the second argument of your range function: range(1,math.floor(math.sqrt(x))+1) You could also use this: range(math.floor(math.sqrt(x))) and then add one inside your loop. The former will be faster, however. As an additional note, unless you're working with Python 3, you should be using xrange instead of range, for idiom/efficiency. More idiomatically, you could also call int instead of math.floor. A: Why exactly can't you simply add one to the upper bound of your range call? Also, it seems like you want to refer to n in your first line, i.e.: for x in range(1,math.floor(math.sqrt(n)) + 1): ...assuming you want to have the same behavior as your C# snippet.
Include upper bound in range()
How can I include the upper bound in the range() function? I can't just add 1, because my for-loop looks like: for x in range(1, math.floor(math.sqrt(x))): y = math.sqrt(n - x * x) But as I understand it, it will actually be 1 <= x < M, where I need 1 <= x <= M. Adding 1 will completely change the result. I am trying to rewrite my old program from C# to Python. That's how it looked in C#: for (int x = 1; x <= Math.Floor(Math.Sqrt(n)); x++) double y = Math.Sqrt(n - x * x);
[ "Just add one to the second argument of your range function:\nrange(1,math.floor(math.sqrt(x))+1)\nYou could also use this:\nrange(math.floor(math.sqrt(x)))\nand then add one inside your loop. The former will be faster, however.\nAs an additional note, unless you're working with Python 3, you should be using xrange instead of range, for idiom/efficiency. More idiomatically, you could also call int instead of math.floor.\n", "Why exactly can't you simply add one to the upper bound of your range call?\nAlso, it seems like you want to refer to n in your first line, i.e.:\nfor x in range(1,math.floor(math.sqrt(n)) + 1):\n\n...assuming you want to have the same behavior as your C# snippet.\n" ]
[ 8, 4 ]
[]
[]
[ "python" ]
stackoverflow_0002785370_python.txt
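For completeness, a runnable version of the accepted fix; int() is used instead of math.floor so range() always receives an integer (math.floor returns a float on Python 2), and n = 25 is an arbitrary example value:

import math

n = 25  # arbitrary example value
for x in range(1, int(math.sqrt(n)) + 1):  # + 1 makes the upper bound inclusive
    y = math.sqrt(n - x * x)
    print(x, y)

This iterates x over 1, 2, 3, 4, 5 -- the same inclusive range as the C# loop.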