Dataset columns: content (string, 85–101k chars), title (string, 0–150 chars), question (string, 15–48k chars), answers (list), answers_scores (list), non_answers (list), non_answers_scores (list), tags (list), name (string, 35–137 chars).
Getting python MySQLdb to run on Ubuntu
I created a VirtualBox VM with a fresh install of Ubuntu 9.10. I am trying to get MySQLdb to run on Python, but I'm failing at import MySQLdb. I first tried sudo easy_install MySQL_python-1.2.3c1-py2.6-linux-i686.egg and then sudo apt-get install python-mysqldb. Both apparently installed OK, but Python gives me the following error message on the import line: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/MySQL_python-1.2.3c1-py2.6-linux-i686.egg/MySQLdb/__init__.py", line 19, in <module> File "/usr/local/lib/python2.6/dist-packages/MySQL_python-1.2.3c1-py2.6-linux-i686.egg/_mysql.py", line 7, in <module> File "/usr/local/lib/python2.6/dist-packages/MySQL_python-1.2.3c1-py2.6-linux-i686.egg/_mysql.py", line 6, in __bootstrap__ ImportError: libmysqlclient_r.so.15: cannot open shared object file: No such file or directory I have already installed MySQL and it is running, if that matters at all. I tried following this, but failed at step 2.
[ "Your MySQLdb egg installation looks like it is not working properly. You should go into /usr/local/lib/python2.6/dist-packages and remove it.\nThe Ubuntu python-mysqldb package should work fine. Unless you have a good reason, you should stick to your distribution's package manager when installing new software.\n" ]
[ 7 ]
[]
[]
[ "mysql", "python", "ubuntu_9.10" ]
stackoverflow_0002198260_mysql_python_ubuntu_9.10.txt
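A quick diagnostic for the situation in the record above (a minimal sketch; the path in the comment is only an example of what a healthy package install looks like): after removing the broken egg, check which MySQLdb actually gets imported.

import MySQLdb
# If the Ubuntu package is the one in use, __file__ should point under
# /usr/lib/... or /usr/share/..., not under the dist-packages egg directory.
print MySQLdb.__file__
print MySQLdb.version_info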
No procedural code and no MVC in Ruby and Python?
I know that these two languages require OOP, but inside the classes, could you have procedural code? And for the MVC part, that pattern comes with their frameworks, right? Nothing to do with the actual language itself?
[ "Neither of these languages require OOP, especially Python. In Python you can write as many ordinary functions as you want, and there are plenty of modules which don't define any classes. In Ruby you can do the same thing, except instead of functions you have methods on a module.\nYou are correct that MVC is related to the frameworks and not to the languages themselves.\n", "They don't require OOP, but they do require procedural code, as they are imperative languages, and not functional ones. You can use some functional techniques.\nThere are plenty of frameworks that use MVC for both languages, yes.\n" ]
[ 4, 3 ]
[]
[]
[ "model_view_controller", "oop", "python", "ruby" ]
stackoverflow_0002198562_model_view_controller_oop_python_ruby.txt
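To make the first answer above concrete, here is a sketch of a plain Python module with no OOP at all - just ordinary module-level functions (the file name is hypothetical):

# greetings.py - a perfectly valid Python module that defines no classes
def greet(name):
    return "Hello, %s!" % name

def greet_all(names):
    # plain procedural code: a list traversal, no objects required
    return [greet(n) for n in names]

if __name__ == '__main__':
    print greet_all(['Alice', 'Bob'])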
referencing static methods from class variable
I know it's weird to have such a case, but somehow I have it: class foo #static method @staticmethod def test(): pass # class variable c = {'name' : <i want to reference test method here.>} What's the way to do it? Just for the record: I believe this should be considered a Python worst practice. Using static methods is not really the Pythonic way, if ever...
[ "class Foo:\n # static method\n @staticmethod\n def test():\n pass\n\n # class variable\n c = {'name' : test }\n\n", "The problem is static methods in python are descriptor objects. So in the following code:\nclass Foo:\n # static method\n @staticmethod\n def test():\n pass\n\n # class variable\n c = {'name' : test }\n\nFoo.c['name'] is the descriptor object, thus is not callable. You would have to type Foo.c['name'].__get__(None, Foo)() to correctly call test() here. If you're unfamiliar with descriptors in python, have a look at the glossary, and there's plenty of docs on the web. Also, have a look at this thread, which seems to be close to your use-case.\nTo keep things simple, you could probably create that c class attribute just outside of the class definition:\nclass Foo(object):\n @staticmethod\n def test():\n pass\n\nFoo.c = {'name': Foo.test}\n\nor, if you feel like it, dive in the documentation of __metaclass__.\n" ]
[ 5, 4 ]
[]
[]
[ "class_variables", "python", "static_methods" ]
stackoverflow_0002194185_class_variables_python_static_methods.txt
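A short sketch demonstrating both points made in the answers above, in Python 2: the staticmethod descriptor stored in the class-body dict is not directly callable, while rebinding after the class definition yields a plain function.

class Foo(object):
    @staticmethod
    def test():
        return 'called'
    c = {'name': test}    # stores the staticmethod descriptor object

print callable(Foo.c['name'])               # False - it is a descriptor
print Foo.c['name'].__get__(None, Foo)()    # 'called' - invoked via the descriptor

Foo.c = {'name': Foo.test}                  # rebound outside the class body
print Foo.c['name']()                       # 'called' - now a plain function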
encoding mp3 from an audio stream of PyTTS
I work on text-to-speech, transforming text into audio mp3 files, using Python 2.5. I use pyTTS as a Python text-to-speech module to transform text into audio .wav files (in pyTTS it is not possible to encode in mp3 format directly). After that, I encode these wav files in mp3 format using the lame command line encoder. Now, the problem is that I would like to insert (at a particular point of an audio mp3 file, between two words) a particular external sound file (like a warning sound) or, if possible, a generated warning sound. The questions are: 1) I have seen that PyTTS has the ability to save an audio stream to a file or to a memory stream, using two functions: tts.SpeakToWave(file, text) or tts.SpeakToMemory(text) Exploiting the tts.SpeakToMemory(text) function, and using PyMedia, I have been able to save an mp3 directly, but the mp3 file (when played back) sounds incomprehensible, like Donald Duck! :-) Here is a snippet of the code: params = {'id': acodec.getCodecID('mp3'), 'bitrate': 128000, 'sample_rate': 44100, 'ext': 'mp3', 'channels': 2} m = tts.SpeakToMemory(p.Text) soundBytes = m.GetData() enc = acodec.Encoder(params) frames = enc.encode(soundBytes) f = file("test.mp3", 'wb') for frame in frames: f.write(frame) f.close() I cannot understand where the problem is. This possibility (if it worked correctly) would be good for skipping the wav file transformation step. 2) As a second problem, I need to concatenate the audio mp3 file (obtained from the text-to-speech module) with a particular warning sound. Obviously, it would be great if I could concatenate the audio memory stream of the text (after the text-to-speech module) and the stream of a warning sound, before encoding the whole audio memory stream into a single mp3 file. I have also seen that the tksnack libraries can concatenate audio, but they are not able to write mp3 files. I hope to have been clear. :-) Many thanks for your answers to my questions. Giulio
[ "I don't think PyTTS produces default PCM data (i.e. 44100 Hz, stereo, 16-bit). You should check the format like this:\nmemStream = tts.SpeakToMemory(\"some text\")\nformat = memStream.Format.GetWaveFormatEx()\n\n...and hand it over correctly to acodec. Therefore you can use the attributes format.Channels, format.BitsPerSample and format.SamplesPerSec.\nAs to your second question, if the sounds are in the same format, you should be able to simply pass them all to enc.encode, one after another.\n", "can't provide a definitive answer here, sorry. But there is some trial and error: I'd look at the docuemtation of the pymedia module to check if tehre are any quality configurations that you can set. \nAnd the other thign is that unlike wave or raw audio, you won't be able to simply concatenate mp3 encoded audio: whatever the solution you reach, you will have to concatenate/mix your sounds while they are uncompressed (unencoded), and afterwards generate the mp3 encoded audio.\nAlso, sometimes we just have the feeling that recordign a fiel to disk and reconvertignit, instead of doing it in \"one step\" is awkward - while in pratie, the software does exsactly that behind the scenes,even if we don't specify a file ourselves. If you are on a Unix-like system you can always create a FIFO special file (with the mkfifo command) and send yoru .wav data there for encodin in a separate process (using lame): for your programs it will look like you are using an intermediate file, but you actually won't.\n" ]
[ 1, 0 ]
[]
[]
[ "encoder", "mp3", "python", "text_to_speech" ]
stackoverflow_0002199151_encoder_mp3_python_text_to_speech.txt
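Combining the hints from the first answer with the question's own snippet, a sketch of how the encoder parameters might be derived from the actual TTS output format instead of being hard-coded (untested; it assumes the PyTTS and PyMedia APIs exactly as they appear above):

m = tts.SpeakToMemory(p.Text)
fmt = m.Format.GetWaveFormatEx()

# Build the acodec params from what the TTS engine really produced,
# rather than assuming 44100 Hz stereo.
params = {'id': acodec.getCodecID('mp3'),
          'bitrate': 128000,
          'sample_rate': fmt.SamplesPerSec,
          'ext': 'mp3',
          'channels': fmt.Channels}

enc = acodec.Encoder(params)
frames = enc.encode(m.GetData())
f = open('test.mp3', 'wb')
for frame in frames:
    f.write(frame)
f.close()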
How do I turn a dictionary into a string?
params = {'fruit':'orange', 'color':'red', 'size':'5'} How can I turn that into a string: fruit=orange&color=red&size=5
[ "You can do it like this:\n'&'.join('%s=%s' % (k,v) for k,v in params.items())\n\nIf you are building strings for a URL it would be better to use urllib as this will escape correctly for you too:\n>>> params = { 'foo' : 'bar+baz', 'qux' : 'quux' }\n>>> urllib.urlencode(params)\n'qux=quux&foo=bar%2Bbaz'\n\n", "If you are encoding url parameters, you can use urllib.urlencode to accomplish this. For example:\nimport urllib\nparams = {'fruit':'orange', 'color':'red', 'size':'5'}\nencoding = urllib.urlencode(params)\n\nthis will also run the dict through urllib.quote_plus.\n", "Are you asking about this?\n>>> params = {'fruit':'orange', 'color':'red', 'size':'5'}\n>>> import urllib\n>>> urllib.urlencode( params )\n'color=red&fruit=orange&size=5'\n\n", "If you want to create a querystring, Python comes with batteries included:\n import urllib\n print(urllib.urlencode({'fruit':'orange', 'color':'red', 'size':'5'}))\n\n" ]
[ 11, 3, 3, 2 ]
[]
[]
[ "dictionary", "python", "string" ]
stackoverflow_0002199303_dictionary_python_string.txt
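For reference, the same batteries in Python 3, where urllib.urlencode moved to urllib.parse.urlencode (the answers above are Python 2):

from urllib.parse import urlencode

params = {'fruit': 'orange', 'color': 'red', 'size': '5'}
print(urlencode(params))   # e.g. color=red&fruit=orange&size=5 (order may vary)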
Problem writing a database query
I have two models, Location and Event, that are linked by ForeignKey on the Event model. The models break down as follows: class Location(models.Model): city = models.CharField('city', max_length=25) slug = models.SlugField('slug') class Event(models.Model): location = models.ForeignKey(Location) title = models.CharField('title', max_length=75) start_date = models.DateTimeField('start_date') end_date = models.DateTimeField('end_date') Each location has multiple events that are ordered by start_date descending. The query I am trying to make retrieves the next upcoming event for each of the locations. Ideally I would like to make this in a single query (I don't want to run a query for every location, as this would cause a lot of unnecessary hits to the database). I have tried using Django's ORM, and I've also tried using raw SQL, but I've hit a bit of a roadblock. Any help would be greatly appreciated. Update I have come up with a potential solution, though I'm not convinced that it's the best method. It works, which should be enough, but I'm curious to see what the best method of doing this would be. Anyway, the code I've written reads thus: l = Location.objects.select_related() qs = None # Concatenate the related event querysets into a new queryset for e in l: if qs is None: qs = e.event_set.all() else: qs = qs | e.event_set.all() # Order a slice of the queryset by start_date ascending qs = sorted(qs[:l.count()], key=lambda s: s.start_date)
[ "select id, (\n select * from event \n where location=location.id \n and start_date>NOW() \n order by start_date asc \n limit 1\n )\n from location\n\n", "\"Ideally I would like to make this in a single query (I don't want to run a query for every location, as this would cause a lot of unnecessary hits to the database).\"\nThis is a false assumption.\n1) Django's ORM uses a cache. It may not query the database as often as you think. Databases have a cache. The cost of a query may not be what you think, either.\n2) You have select_related. The ORM can do the join for you. http://docs.djangoproject.com/en/dev/ref/models/querysets/#id4\nJust write the simplest possible loop for fetching Locations and Events. In the very unlikely event that this is the slowest part of your application (and you can prove that it's slowest), then add select_related and see how much that improves things. \nUntil you can prove that this specific query is killing your application, move on and don't worry about \"hits to the database\".\nNext event in each location?\n[ l.event_set.order_by( start_date ).all()[0] for l in Location.objects.select_related().all() ]\n\nOr perhaps\nevents = []\nfor l in Location.objects.select_related().all():\n events.append( l.event_set.order_by( start_date ).all()[0] )\n\nAnd return that to the template to be rendered.\nDon't dismiss this until you've benchmarked it and proven that it is the bottleneck in your application.\n", "I think you should look at django aggregationlink text, so your result will be agragated by location with a condition to filter future/expired events and min(start_time) returns next event time\n", "Something like the following may be useful:\nSELECT * FROM EVENTS V\n WHERE (V.LOCATION, V.START_DATE) IN\n (SELECT E.LOCATION, MIN(E.START_DATE)\n FROM EVENTS E\n WHERE E.START_DATE >= NOW\n GROUP BY E.LOCATION)\n\nShare and enjoy.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "django", "python", "sql" ]
stackoverflow_0002198820_django_python_sql.txt
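A sketch of the aggregation approach suggested in the third answer, for Django 1.1+ (model and field names taken from the question; the Q-object fan-out used for the second query is an assumption about how you might fetch the rows):

from datetime import datetime
from django.db.models import Min, Q

# One query: the earliest upcoming start_date for every location.
upcoming = (Event.objects
            .filter(start_date__gte=datetime.now())
            .values('location')
            .annotate(next_start=Min('start_date')))

# A second query fetches the matching Event rows via an OR of conditions.
cond = Q(pk__isnull=True)   # always-false seed for the OR chain
for row in upcoming:
    cond = cond | Q(location=row['location'], start_date=row['next_start'])
next_events = Event.objects.filter(cond)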
matching multiple lines in a python regular expression
I want to extract the data between <tr> tags from an HTML page. I used the following code, but I didn't get any result. The HTML between the <tr> tags spans multiple lines: category = re.findall('<tr>(.*?)</tr>', data) Please suggest a fix for this problem.
[ "just to clear up the issue. Despite all those links to re.M it wouldn't work here as simple skimming of the its explanation would reveal. You'd need re.S, if you wouldn't try to parse html, of course:\n>>> doc = \"\"\"<table border=\"1\">\n <tr>\n <td>row 1, cell 1</td>\n <td>row 1, cell 2</td>\n </tr>\n <tr>\n <td>row 2, cell 1</td>\n <td>row 2, cell 2</td>\n </tr>\n</table>\"\"\"\n\n>>> re.findall('<tr>(.*?)</tr>', doc, re.S)\n['\\n <td>row 1, cell 1</td>\\n <td>row 1, cell 2</td>\\n ', \n '\\n <td>row 2, cell 1</td>\\n <td>row 2, cell 2</td>\\n ']\n>>> re.findall('<tr>(.*?)</tr>', doc, re.M)\n[]\n\n", "Don't use regex, use a HTML parser such as BeautifulSoup:\nhtml = '<html><body>foo<tr>bar</tr>baz<tr>qux</tr></body></html>'\n\nimport BeautifulSoup\nsoup = BeautifulSoup.BeautifulSoup(html)\nprint soup.findAll(\"tr\")\n\nResult:\n[<tr>bar</tr>, <tr>qux</tr>]\n\nIf you just want the contents, without the tr tags:\nfor tr in soup.findAll(\"tr\"):\n print tr.contents\n\nResult:\nbar\nqux\n\nUsing an HTML parser isn't as scary as it sounds! And it will work more reliably than any regex that will be posted here.\n", "Do not use regular expressions to parse HTML. Use an HTML parser such as lxml or BeautifulSoup.\n", "pat=re.compile('<tr>(.*?)</tr>',re.DOTALL|re.M)\nprint pat.findall(data)\n\nOr non regex way,\nfor item in data.split(\"</tr>\"):\n if \"<tr>\" in item:\n print item[item.find(\"<tr>\")+len(\"<tr>\"):]\n\n", "As other have suggested the specific problem you are having can be resolved by allowing multi-line matching using re.MULTILINE\nHowever you are going down a treacherous patch parsing HTML with regular expressions. Use an XML/HTML parser instead, BeautifulSoup works great for this!\ndoc = \"\"\"<table border=\"1\">\n <tr>\n <td>row 1, cell 1</td>\n <td>row 1, cell 2</td>\n </tr>\n <tr>\n <td>row 2, cell 1</td>\n <td>row 2, cell 2</td>\n </tr>\n</table>\"\"\"\n\nfrom BeautifulSoup import BeautifulSoup\nsoup = BeautifulSoup(doc)\nall_trs = soup.findAll(\"tr\")\n\n" ]
[ 18, 5, 2, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002199552_python.txt
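One more variant worth knowing: re.S can be embedded in the pattern itself as (?s), which helps when you cannot pass flags separately:

import re
# (?s) is the inline form of re.S / re.DOTALL: it lets . match newlines too.
category = re.findall('(?s)<tr>(.*?)</tr>', data)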
PyQt debugging in main loop
Can I debug a PyQt application while the main loop is running? Pdb, NetBeans and PyDev all "freeze" when sys.exit(app.exec_()) is executed. I'm probably missing something obvious. Or what could the problem be? I apologize for my "creepy" English. Thanks.
[ "I'm assuming your main() function looks something like this:\ndef __name__ == '__main__':\n app = QtGui.QApplication(sys.argv)\n myapp = MyApplication()\n myapp.show()\n sys.exit(app.exec_())\n\nIf not, post some example code to help determine what coudl be wrong.\nIf that is what your code looks like, you can debug any part of you program using IDLE (included in Python install). Once in IDLE, goto Debug-->Debugger to turn DEBUGGING ON. Then open your .py file, and run it (F5). You can set breakpoints by right-clicking on any line in the file, and choosing Set Breakpoint.\nCheck this other SO question for more info and good links to alternative debuggers/IDEs:\nCleanest way to run/debug python programs in windows\n" ]
[ 1 ]
[]
[]
[ "debugging", "pyqt", "python" ]
stackoverflow_0002199703_debugging_pyqt_python.txt
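One technique that often works despite the running event loop (a sketch with hypothetical class and slot names, assuming PyQt4): drop into pdb from inside a slot, so the breakpoint fires when Qt calls back into your Python code. Run the application from a terminal so pdb has a console to attach to.

import pdb
from PyQt4 import QtGui

class MyWindow(QtGui.QMainWindow):   # hypothetical window class
    def on_button_clicked(self):     # hypothetical slot, for illustration
        pdb.set_trace()   # the console debugger opens when the slot runs
        # ... inspect application state here ...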
pydoc fails under Windows and Python 2.6.4
When trying to use pydoc under Windows and python.org 2.6.4 I get the following error: C:\>pydoc sys 'import site' failed; use -v for traceback Traceback (most recent call last): File "C:\programs\Python26\Lib\pydoc.py", line 55, in ? import sys, imp, os, re, types, inspect, __builtin__, pkgutil File "C:\programs\Python26\Lib\os.py", line 758 bs = b"" ^ SyntaxError: invalid syntax What could be wrong here?
[ "Typical windows problem: I had a program installed lately which brought its own Python 2.4. This installation overwrote the Windows file handlers for python scripts, but did not appear on the PATH. So scripts started from the console ran in the old-version python, but calling \"python\" ran the 2.6 version.\nThx to Nadia for the first hint. \n", "how about setting your PATH\nc:\\> set PATH=C:\\Python26\\Lib;%PATH%\nc:\\> pydoc.py sys\n\n" ]
[ 2, 0 ]
[]
[]
[ "pydoc", "python" ]
stackoverflow_0002199739_pydoc_python.txt
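A quick diagnostic for this class of problem - print which interpreter actually runs a given script, so a stray old Python shows up immediately (the file name is hypothetical):

# whichpython.py - run it both via file association (whichpython.py) and
# explicitly (python whichpython.py); if the outputs differ, you have the
# mixed-version problem described above.
import sys
print sys.version
print sys.executable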
extend Python namedtuple with many @properties?
How can namedtuples be extended or subclassed with many additional @properties ? For a few, one can just write the text below; but there are many, so I'm looking for a generator or property factory. One way would be to generate text from _fields and exec it; another would be an add_fields with the same effect at runtime. (My @props are to get rows and fields in a database scattered across several tables, so that rec.pname is persontable[rec.personid].pname; but namedtuples-with-smart-fields would have other uses too.) """ extend namedtuple with many @properties ? """ from collections import namedtuple Person = namedtuple( "Person", "pname paddr" ) # ... persontable = [ Person( "Smith", "NY" ), Person( "Jones", "IL" ) ] class Top( namedtuple( "Top_", "topid amount personid" )): """ @property .person -> persontable[personid] .pname -> person.pname ... """ __slots__ = () @property def person(self): return persontable[self.personid] # def add_fields( self, Top.person, Person._fields ) with the same effect as these ? @property def pname(self): return self.person.pname @property def paddr(self): return self.person.paddr # ... many more rec = Top( 0, 42, 1 ) print rec.person, rec.pname, rec.paddr
[ "The answer to your question\n\nHow can namedtuples be extended or\n subclassed with additional @properties\n ?\n\nis: exactly the way you're doing it! What error are you getting? To see a simpler case,\n>>> class x(collections.namedtuple('y', 'a b c')):\n... @property\n... def d(self): return 23\n... \n>>> a=x(1, 2, 3)\n>>> a.d\n23\n>>> \n\n", "How about this?\nclass Top( namedtuple( \"Top_\", \"topid amount personid\" )): \n \"\"\" @property \n .person -> persontable[personid] \n .pname -> person.pname ... \n \"\"\" \n __slots__ = () \n @property \n def person(self): \n return persontable[self.personid] \n\n def __getattr__(self,attr):\n if attr in Person._fields:\n return getattr(self.person, attr)\n raise AttributeError(\"no such attribute '%s'\" % attr)\n\n" ]
[ 18, 2 ]
[ "Here's one approach, a little language: \nturn this into Python text like the above, and exec it.\n(Expanding text-to-text is easy to do, and easy to test —\nyou can look at the intermediate text.)\nI'm sure there are similar if not-so-little such, links please ?\n# example of a little language for describing multi-table databases 3feb\n# why ?\n# less clutter, toprec.pname -> persontable[toprec.personid].pname\n# describe in one place: easier to understand, easier to change\n\nTop:\n topid amount personid\n person: persontable[self.personid] + Person\n # toprec.person = persontable[self.personid]\n # pname = person.pname\n # locid = person.locid\n # todo: chaining, toprec.city -> toprec.person.loc.city\n\nPerson:\n personid pname locid\n loc: loctable[self.locid] + Loc\n\nLoc:\n locid zipcode province city\n\n" ]
[ -1 ]
[ "namedtuple", "properties", "python" ]
stackoverflow_0002193009_namedtuple_properties_python.txt
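For the "many fields" case the question asks about, a sketch of a runtime property factory along the lines of the question's own add_fields idea - it generates one delegating property per Person field instead of writing each by hand (add_delegates and make_prop are names invented here):

def add_delegates(cls, attrname, fields):
    # make_prop is needed so each property closes over its own field name.
    def make_prop(field):
        return property(lambda self: getattr(getattr(self, attrname), field))
    for field in fields:
        setattr(cls, field, make_prop(field))

add_delegates(Top, 'person', Person._fields)

rec = Top(0, 42, 1)
print rec.person, rec.pname, rec.paddr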
In Django, my request.session is not carrying over...does anyone know why?
In one view, I set: request.session.set_expiry(999) request.session['test'] = '123' In another view, I do: print request.session['test'] and it cannot be found. (error) It's very simple, I just have 2 views. It seems that once I leave a view and come back to it...it's gone! Why?
[ "Could it be related to this?, just found it at http://code.djangoproject.com/wiki/NewbieMistakes\nAppending to a list in session doesn't work \nProblem \nIf you have a list in your session, append operations don't get saved to the object.\nSolution \nCopy the list out of the session object, append to it, then copy it back in:\nsessionlist = request.session['my_list']\nsessionlist.append(new_object)\nrequest.session['my_list'] = sessionlist\n\n", "Are you, by any chance, setting the session itself to an empty dictionary, somewhere?\n" ]
[ 18, 1 ]
[]
[]
[ "django", "python", "session" ]
stackoverflow_0002199150_django_python_session.txt
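A related tip from the Django documentation: instead of copying the list out and back, you can mutate it in place and then mark the session as dirty, since Django only saves automatically when a session key itself is reassigned:

request.session['my_list'].append(new_object)
request.session.modified = True   # force Django to save the mutated session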
Images caching in browser - app-engine-patch application
I have a little problem with caching images in the browser for my app-engine application. I'm sending Last-Modified, Expires and Cache-Control headers, but the image is loaded from the server every time. Here is the header part of the code: response['Content-Type'] = 'image/jpg' response['Last-Modified'] = current_time.strftime('%a, %d %b %Y %H:%M:%S GMT') response['Expires'] = current_time + timedelta(days=30) response['Cache-Control'] = 'public, max-age=2592000'
[ "Here is an example code for my fix copy in dpaste here\ndef view_image(request, key):\n data = memcache.get(key) \n if data is not None: \n if(request.META.get('HTTP_IF_MODIFIED_SINCE') >= data['Last-Modified']): \n data.status_code = 304 \n return data \n else: \n image_content_blob = #some code to get the image from the data store \n current_time = datetime.utcnow()\n response = HttpResponse()\n last_modified = current_time - timedelta(days=1)\n response['Content-Type'] = 'image/jpg'\n response['Last-Modified'] = last_modified.strftime('%a, %d %b %Y %H:%M:%S GMT')\n response['Expires'] = current_time + timedelta(days=30)\n response['Cache-Control'] = 'public, max-age=315360000'\n response['Date'] = current_time\n response.content = image_content_blob\n\n memcache.add(image_key, response, 86400)\n return response\n\n" ]
[ 7 ]
[]
[]
[ "app_engine_patch", "browser", "caching", "google_app_engine", "python" ]
stackoverflow_0002185449_app_engine_patch_browser_caching_google_app_engine_python.txt
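One caveat about the fix above: it compares If-Modified-Since values as raw strings, which only works while both sides use the identical date format. A more robust sketch (using only the standard library; not_modified is a helper name invented here) parses both sides first:

import email.utils

def not_modified(request, last_modified_str):
    # Compare HTTP dates as parsed time tuples, not as strings.
    # email.utils.parsedate returns None for missing or unparseable values.
    ims = email.utils.parsedate(request.META.get('HTTP_IF_MODIFIED_SINCE', ''))
    last = email.utils.parsedate(last_modified_str)
    return ims is not None and last is not None and ims >= last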
Access contents of PyBuffer from C
I have created a buffer object in Python like so: f = io.open('some_file', 'rb') byte_stream = buffer(f.read(4096)) I'm now passing byte_stream as a parameter to a C function, through SWIG. I have a typemap for converting the data which looks like this: %typemap(in) unsigned char * byte_stream { PyObject *buf = $input; //some code to read the contents of buf } I have tried a few different things but can't get to the actual content/value of my byte_stream. How do I convert or access the content of my byte_stream using the C API? There are many different methods for converting C data to a buffer, but none that I can find for going the other way around. I have tried looking at this object in gdb, but neither it, nor the values it points to, contain my data. (I'm using buffers because I want to avoid the overhead of converting the data to a string when reading it from the file.) I'm using Python 2.6 on Linux. -- Thanks Pavel
[ "\nI'm using buffers because I want to\n avoid the overhead of converting the\n data to a string when reading it from\n the file\n\nYou are not avoiding anything. The string is already built by the read() method. Calling buffer() just builds an additional buffer object pointing to that string.\nAs for getting at the memory pointed to by the buffer object, try PyObject_AsReadBuffer(). See also http://docs.python.org/c-api/objbuffer.html.\n", "As soon as you use the read method on your file object, the data will be converted to a str object; calling the buffer method does not convert it into a stream of any kind. If you want to avoid the overhead of creating the string object, you could simply pass the file object to your C code and then use it via its C API.\n" ]
[ 2, 1 ]
[]
[]
[ "pybuffer", "python", "python_c_api", "swig" ]
stackoverflow_0002128195_pybuffer_python_python_c_api_swig.txt
which is more efficient for buffer manipulations: python strings or array()
I am building a routine that processes disk buffers for forensic purposes. Am I better off using Python strings or the array() type? My first thought was to use strings, but I'm trying to avoid Unicode problems, so perhaps array('c') is better?
[ "Write the code using what is most natural (strings), find out if it's too slow and then improve it. \nArrays can be used as drop-in replacements for str in most cases, as long as you restrict yourself to index and slice access. Both are fixed-length. Both should have about the same memory requirements. Arrays are mutable, in case you need to change the buffers. Arrays can read directly from files, so there's no speed penalty involved when reading. \nI don't understand how you avoid Unicode problems by using arrays, though. str is just an array of bytes and doesn't know anything about the encoding of the string.\nI assume that the \"disk buffers\" you mention can be rather large, so you might think about using mmap:\n\nMemory-mapped file objects behave like both strings and like file objects. Unlike normal string objects, however, these are mutable. You can use mmap objects in most places where strings are expected; for example, you can use the re module to search through a memory-mapped file. Since they’re mutable, you can change a single character by doing obj[index] = 'a', or change a substring by assigning to a slice: obj[i1:i2] = '...'. You can also read and write data starting at the current file position, and seek() through the file to different positions.\n\n", "If you need to alter the buffer in-place (it's not clear if you do, since you use the ambiguous term \"to process\"), arrays will likely be better, since strings are immutable. In Python 2.6 or better, however, bytearrays can be the best of both worlds -- mutable and rich of methods and usable with regular expressions too.\nFor read-only operations, strings have the edge over array (thanks to many more methods, plus extras such as regular expressions, available on them), if you're stuck with old Python versions and so cannot use bytearray. Unicode is not an issue in either case (in Python 2; in Python 3, definitely go for bytearray!-).\n" ]
[ 9, 6 ]
[]
[]
[ "arrays", "performance", "python" ]
stackoverflow_0002200027_arrays_performance_python.txt
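A minimal sketch of the mmap suggestion from the first answer - the mapped file can be sliced and searched like a string without reading the whole buffer into memory (the file name and the boot-signature search are just illustrations):

import mmap

f = open('disk_buffer.bin', 'rb')
mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)   # map the whole file
print mm[0:16]              # slice it like a string
print mm.find('\x55\xaa')   # string-style search, e.g. for a boot signature
mm.close()
f.close()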
How bad is it to override a method from a third-party module?
How bad is it to redefine a class method from another, third-party module, in Python? In fact, users can create NumPy matrices that contain numbers with uncertainty; ideally, I would like their code to run unmodified (compared to when the code manipulates float matrices); in particular, it would be great if the inverse of matrix m could still be obtained with m.I, despite the fact that m.I has to be calculated with my own code (the original I method does not work, in general). How bad is it to redefine numpy.matrix.I? For one thing, it does tamper with third-party code, which I don't like, as it may not be robust (what if other modules do the same?…). Another problem is that the new numpy.matrix.I is a wrapper that involves a small overhead when the original numpy.matrix.I can actually be applied in order to obtain the inverse matrix. Is subclassing NumPy matrices and only changing their I method better? this would force users to update their code and create matrices of numbers with uncertainty with m = matrix_with_uncert(…) (instead of keeping using numpy.matrix(…), as for a matrix of floats), but maybe this is an inconvenience that should be accepted for the sake of robustness? Matrix inversions could still be performed with m.I, which is good… On the other hand, it would be nice if users could build all their matrices (of floats or of numbers with uncertainties) with numpy.matrix() directly, without having to bother checking for data types. Any comment, or additional approach would be welcome!
[ "Subclassing (which does involve overriding, as the term is generally used) is generally much preferable to \"monkey-patching\" (stuffing altered methods into existing classes or modules), even when the latter is available (built-in types, meaning ones implemented in C, can protect themselves against monkey-patching, and most of them do).\nFor example, if your functionality relies on monkey-patching, it will break and stop upgrades if at any time the class you're monkey patching gets upgraded to be implemented in C (for speed or specifically to defend against monkey patching). Maintainers of third party packages hate monkey-patching because it means they get bogus bug reports from hapless users who (unbeknownst to them) are in fact using a buggy monkey-patch which breaks the third party package, where the latter (unless broken monkey-wise) is flawless. You've already remarked on the possible performance hit.\nConceptually, a \"matrix of numbers with uncertainty\" is a different concept from a \"matrix of numbers\". Subclassing expresses this cleanly, monkey-patching tries to hide it. That's really the root of what's wrong with monkey-patching in general: a covert channel operating through global, hidden means, without clarity and transparency. All the many practical problems descend in a sense from this root conceptual problem.\nI strongly urge you to reject monkey-patching in favor of clean solutions such as subclassing.\n", "In general, it's perfectly acceptable to override methods that are ...\n\nIntentionally permit overrides\nIn a way they document (satisfying LSP won't hurt)\n\nIf both conditions are met, then overriding should be safe.\n", "Depends on what you mean with \"redefine\". Obviously you can use your own version of it, no problem at all. Also you can redefine it by subclassing if it's a method.\nYou can also make a new method, and patch it into the class, a practice known as monkey_patching. Like so:\nfrom amodule import aclass\n\ndef newfunction(self, param):\n do_something()\n\naclass.oldfunction = newfunction\n\nThis will make all instances of aclass use your new function instead of the old one, including instances in any \"fourth-party\" modules. This works and is highly useful, but it's regarded as very ugly and a last resort option. This is because there is nothing in the aclass code to suggest that you have overridden the method, so it's hard to debug. And even worse it gets when two modules monkeypatch the same thing. Then you really get confused.\n" ]
[ 12, 1, 1 ]
[]
[]
[ "numpy", "overriding", "python" ]
stackoverflow_0002200880_numpy_overriding_python.txt
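To make the subclassing option discussed above concrete, here is a minimal sketch of a numpy.matrix subclass with its I property overridden. The class name MatrixWithUncert and the pass-through body of I are illustrative assumptions; a real version would compute the inverse while propagating the uncertainties:

    import numpy as np

    class MatrixWithUncert(np.matrix):
        # Minimal subclass: constructed exactly like np.matrix, custom inverse.
        @property
        def I(self):
            # Placeholder: delegate to the parent's inverse. A real
            # implementation would propagate uncertainties here.
            return super(MatrixWithUncert, self).I

    m = MatrixWithUncert([[1.0, 2.0], [3.0, 4.0]])
    print m.I   # still spelled m.I, exactly as with a plain np.matrix

Users do have to write MatrixWithUncert(...) instead of numpy.matrix(...), but as the accepted answer argues, making the different concept visible is the point.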
Q: Terracotta for Python world? Would you know if something similar to Terracotta (in Java world) exists for Python world? Twisted ? Or something else. A: I think Twisted is the best alternative you can find. Let me warn you that it will give you some headaches, as it forces you to code in a completely different way. But once you understand it, it's not that hard.... http://twistedmatrix.com/projects/core/documentation/howto/index.html A: Pyro can be used similarly in python as terracotta in java A: If you want to use something that gives you a Distributed Hashtable/Tuple Space type of implementation, Entangled seems to be a Python implementation. I'm sure there are others out there though if you google for them. A: try to run your code with JPython and use Terracotta ;)
Terracotta for Python world?
Would you know if something similar to Terracotta (in Java world) exists for Python world? Twisted ? Or something else.
[ "I think Twisted is the best alternative you can find.\nLet me warn you that it will give you some headaches, as it forces you to code in a completely different way. But once you understand it, it's not that hard....\nhttp://twistedmatrix.com/projects/core/documentation/howto/index.html\n", "Pyro can be used similarly in python as terracotta in java\n", "If you want to use something that gives you a Distributed Hashtable/Tuple Space type of implementation, Entangled seems to be a Python implementation. I'm sure there are others out there though if you google for them.\n", "try to run your code with JPython and use Terracotta ;)\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "java", "python", "terracotta" ]
stackoverflow_0001393689_java_python_terracotta.txt
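As a rough illustration of the Pyro suggestion above, here is a minimal remote-object sketch. It assumes the modern Pyro4 API (the question's era used Pyro 3, whose spelling differs), and the Greeter class is invented for the example:

    import Pyro4

    @Pyro4.expose
    class Greeter(object):
        def hello(self, name):
            return "Hello, %s" % name

    daemon = Pyro4.Daemon()              # hosts objects on the network
    uri = daemon.register(Greeter())     # clients connect via Pyro4.Proxy(uri)
    print uri
    daemon.requestLoop()                 # serve calls until interrupted

Unlike Terracotta's transparent clustering, this is explicit remote invocation; shared state has to be modeled as a served object.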
Q: Need code to get all files in a directory and paste them to another location? I need Python code to get all picture files in a directory and paste them into another directory. For example: check whether picture files exist in "c:\capture"; if they do, get them all and paste them into c:\backup\paste.1. Then sleep 30 minutes and check for new files in c:\capture; if there are any, get them all and paste them into c:\backup\paste.2. Sorry for my bad English. Thanks. A: I'm not sure what exactly you want to do, but are you looking for os.listdir(), os.rename() and the other functions of the os module? The shutil module might also be useful, depending on the specifics of what you want to do.
Need code to get all files in a directory and paste them to another location?
I need Python code to get all picture files in a directory and paste them into another directory. For example: check whether picture files exist in "c:\capture"; if they do, get them all and paste them into c:\backup\paste.1. Then sleep 30 minutes and check for new files in c:\capture; if there are any, get them all and paste them into c:\backup\paste.2. Sorry for my bad English. Thanks.
[ "I'm not sure what exactly you want to do, by are you looking for os.listdir(), os.rename() and the other functions of the os module?\nThe shutil module might also be useful, depending on the specifics of what you want to do.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0002201173_python.txt
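Putting the answer's pointers together, here is a minimal sketch of the requested loop using os, shutil and time. The picture-extension list and the backup folder naming (paste.1, paste.2, ...) are assumptions taken from the question:

    import os, shutil, time

    SRC = r'c:\capture'
    EXTS = ('.jpg', '.jpeg', '.png', '.gif', '.bmp')   # assumed "pic" types

    batch = 1
    while True:
        pics = [f for f in os.listdir(SRC) if f.lower().endswith(EXTS)]
        if pics:
            dest = r'c:\backup\paste.%d' % batch
            if not os.path.isdir(dest):
                os.makedirs(dest)
            for name in pics:
                shutil.move(os.path.join(SRC, name), os.path.join(dest, name))
            batch += 1
        time.sleep(30 * 60)   # wait 30 minutes before checking again

shutil.move is used so already-handled pictures do not show up as "new files" on the next pass; use shutil.copy instead if the originals must stay in c:\capture.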
Q: Does a library to prevent duplicate form submissions exist for django? I am trying to find a way to prevent users from double-submitting my forms. I have javascript that disables the submit button, but there is still an occasional user who finds a way to double-submit. I have a vision of a re-usable library that I could create to protect from this. In my ideal library, the code block would look something like this: try: with acquire_lock({'field1':'abc', 'field2':'def'}) as lock: response = #do some credit card processing lock.response = response except SubmissionWasDuplicate, e: response = e.response The lock table would look something like this: duplicate_submission_locks submission_hash # a MD5 of the submitted arguments response # pickled data created_at # used for sweeping this table lock_expired # boolean signifying if the lock has expired Does anyone know if this already exists? It doesn't seem too difficult to write, so if it doesn't exist I may write it myself. A: You can use a session to store the hash import hashlib def contact(request): if request.method == 'POST': form = MyForm(request.POST) #join all the fields in one string hashstring=hashlib.sha1(fieldsstring) if request.session.get('sesionform')!=hashstring: if form.is_valid() : request.session['sesionform'] = hashstring #do some stuff... return HttpResponseRedirect('/thanks/') # Redirect after POST else raise SubmissionWasDuplicate("duplicate") else: form = MyForm() With this approach (not deleting the session cookie) the user can't re-store the data util the session expires, by the way, i'm assuming that exist something who identify the user who send the data A: One easy solution to this problem is to add a unique hash to each form. Then you can have a rolling table of current forms. When a form is submitted, or the hash gets too old, you can expire it out of your table, and reject any form which does not have a matching hash in your table. The HTTPRedirect is the correct way to do it, as previously mentioned. Unfortunately, even Django's own built in admin is prone to problems related to this issue. In some cases, the cross-site scripting framework can assist to prevent some of this, but I'm afraid the current production versions just don't have this built in. A: To be honest, your best bet (easy and good practice) is to issue a HTTPRedirect() to the thank you page, and if the thank you page is the same one as the form, that's OK. You can still do this. A: Kristian Damian's answer is really a great suggestion. I just thought of a slight variation on that theme, but it might have more overhead. You could try implementing something that is used in django-piston for BaseHandler objects, which is a method called exists() that checks to see if what you are submitting is already in the database. From handler.py (BaseHandler): def exists(self, **kwargs): if not self.has_model(): raise NotImplementedError try: self.model.objects.get(**kwargs) return True except self.model.DoesNotExist: return False So let's say make that a function called request_exists(), instead of a method: if form.is_valid() if request_exists(request): # gracefully reject dupe submission else: # do stuff to save the request ... # and ALWAYS redirect after a POST!! return HttpResponseRedirect('/thanks/') A: It is always good to use the redirect-after-post method. This prevents user from accidently resubmitting the form using refresh function from the browser. It is also helpful even when you use the hash method. It's because without redirect after a POST, in case of hitting Back/Refresh button, user will see a question message about resubmitting the form, which can confuse her. If you do a GET redirect after every POST, then hitting Back/Refresh won't display this wierd (for usual user) message. So for full protection use Hash+redirect-after-post.
Does a library to prevent duplicate form submissions exist for django?
I am trying to find a way to prevent users from double-submitting my forms. I have javascript that disables the submit button, but there is still an occasional user who finds a way to double-submit. I have a vision of a re-usable library that I could create to protect from this. In my ideal library, the code block would look something like this: try: with acquire_lock({'field1':'abc', 'field2':'def'}) as lock: response = #do some credit card processing lock.response = response except SubmissionWasDuplicate, e: response = e.response The lock table would look something like this: duplicate_submission_locks submission_hash # a MD5 of the submitted arguments response # pickled data created_at # used for sweeping this table lock_expired # boolean signifying if the lock has expired Does anyone know if this already exists? It doesn't seem too difficult to write, so if it doesn't exist I may write it myself.
[ "You can use a session to store the hash \nimport hashlib\n\ndef contact(request):\n if request.method == 'POST':\n form = MyForm(request.POST)\n #join all the fields in one string\n hashstring=hashlib.sha1(fieldsstring)\n if request.session.get('sesionform')!=hashstring:\n if form.is_valid() : \n request.session['sesionform'] = hashstring\n #do some stuff...\n return HttpResponseRedirect('/thanks/') # Redirect after POST \n else\n raise SubmissionWasDuplicate(\"duplicate\")\n else:\n form = MyForm() \n\nWith this approach (not deleting the session cookie) the user can't re-store the data util the session expires, by the way, i'm assuming that exist something who identify the user who send the data\n", "One easy solution to this problem is to add a unique hash to each form. Then you can have a rolling table of current forms. When a form is submitted, or the hash gets too old, you can expire it out of your table, and reject any form which does not have a matching hash in your table.\nThe HTTPRedirect is the correct way to do it, as previously mentioned.\nUnfortunately, even Django's own built in admin is prone to problems related to this issue. In some cases, the cross-site scripting framework can assist to prevent some of this, but I'm afraid the current production versions just don't have this built in.\n", "To be honest, your best bet (easy and good practice) is to issue a HTTPRedirect() to the thank you page, and if the thank you page is the same one as the form, that's OK. You can still do this.\n", "Kristian Damian's answer is really a great suggestion. I just thought of a slight variation on that theme, but it might have more overhead.\nYou could try implementing something that is used in django-piston for BaseHandler objects, which is a method called exists() that checks to see if what you are submitting is already in the database.\nFrom handler.py (BaseHandler):\ndef exists(self, **kwargs):\n if not self.has_model():\n raise NotImplementedError\n\n try:\n self.model.objects.get(**kwargs)\n return True\n except self.model.DoesNotExist:\n return False\n\nSo let's say make that a function called request_exists(), instead of a method:\nif form.is_valid()\n if request_exists(request):\n # gracefully reject dupe submission\n else:\n # do stuff to save the request\n ...\n # and ALWAYS redirect after a POST!!\n return HttpResponseRedirect('/thanks/') \n\n", "It is always good to use the redirect-after-post method. This prevents user from accidently resubmitting the form using refresh function from the browser. It is also helpful even when you use the hash method. It's because without redirect after a POST, in case of hitting Back/Refresh button, user will see a question message about resubmitting the form, which can confuse her. \nIf you do a GET redirect after every POST, then hitting Back/Refresh won't display this wierd (for usual user) message. So for full protection use Hash+redirect-after-post.\n" ]
[ 12, 6, 3, 3, 2 ]
[]
[]
[ "code_reuse", "django", "python" ]
stackoverflow_0002136954_code_reuse_django_python.txt
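Since the last answer above describes the redirect-after-POST (Post/Redirect/Get) pattern only in prose, here is a minimal Django view sketch of it; MyForm, the template name and the /thanks/ URL are placeholders:

    from django.http import HttpResponseRedirect
    from django.shortcuts import render_to_response

    def contact(request):
        if request.method == 'POST':
            form = MyForm(request.POST)     # MyForm is a placeholder form class
            if form.is_valid():
                # ... process the submission exactly once ...
                return HttpResponseRedirect('/thanks/')   # a GET follows the POST
        else:
            form = MyForm()
        return render_to_response('contact.html', {'form': form})

Because the browser ends up on a page reached by GET, hitting Back or Refresh never re-fires the POST.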
Q: How do I access my classes from the python console on MAC OSX? I'm trying to access my classes via from project import * But from the python console something seems to be off with the paths. How do I set the correct paths to my project so I can import classes? My models are stored in: /Users/username/project/project/model from project import * And the error reads: ImportError: No module named project Thanks. A: You have the following choices Start your python session in the /User/username/project folder Change your import line to from project.project import * Set the PYTHONPATH environment variable to /User/username/project (setenv PYTHONPATH /User/username/project) Append /User/username/project to sys.path import sys sys.path.append('/User/username/project') A: Most likely you will have to set the PYTHONPATH env variable, or change in the correct directory. I assume you do not start your console from: /Users/username/project You have several options now: Change to that directory Set the PYTHONPATH env variable to that directory (however that is done in MacOSX) Use the site module to add the path: python docs A: This might be a silly suggestion, but do you have a __init__.py file in the module you're importing? if not, then create an empty one. You're also going to need to run from project import * from the /Users/name/project/ directory. ie: you'll need to start the python CLI from /Users/name/project/. If that isn't suitable then, as already suggested, you can change where python looks for modules. As a sidenote, using from module import * is commonly seen as bad form. Try to specify what you want imported.
How do I access my classes from the python console on MAC OSX?
I'm trying to access my classes via from project import * But from the python console something seems to be off with the paths. How do I set the correct paths to my project so I can import classes? My models are stored in: /Users/username/project/project/model from project import * And the error reads: ImportError: No module named project Thanks.
[ "You have the following choices\n\nStart your python session in the /User/username/project folder\nChange your import line to from project.project import *\nSet the PYTHONPATH environment variable to /User/username/project (setenv PYTHONPATH /User/username/project)\nAppend /User/username/project to sys.path\n\nimport sys\nsys.path.append('/User/username/project')\n\n", "Most likely you will have to set the PYTHONPATH env variable, or change in the correct directory.\nI assume you do not start your console from: /Users/username/project\nYou have several options now:\n\nChange to that directory\nSet the PYTHONPATH env variable to that directory (however that is done in MacOSX)\nUse the site module to add the path: python docs\n\n", "This might be a silly suggestion, but do you have a __init__.py file in the module you're importing? if not, then create an empty one. You're also going to need to run from project import * from the /Users/name/project/ directory. ie: you'll need to start the python CLI from /Users/name/project/. If that isnt suitable thenas already suggested you can change where python looks for modules.\nAs a sidenote, using from module import * is commonly seen as bad form. Try to specify what you want imported.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "macos", "pylons", "python" ]
stackoverflow_0002201461_macos_pylons_python.txt
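A short interactive sketch combining the answers above; the path follows the question's layout, and the package directory also needs an __init__.py file:

    import sys
    sys.path.append('/Users/username/project')   # the folder that *contains* the package
    from project import *                         # works once project/__init__.py exists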
Q: Replace newlines in a Unicode string I am trying to replace newline characters in a unicode string and seem to be missing some magic codes. My particular example is that I am working on AppEngine and trying to put titles from HTML pages into a db.StringProperty() in my model. So I do something like: link.title = unicode(page_title,"utf-8").replace('\n','').replace('\r','') and I get: Property title is not multi-line Are there other codes I should be using for the replace? A: Try ''.join(unicode(page_title, 'utf-8').splitlines()). splitlines() should let the standard library take care of all the possible crazy Unicode line breaks, and then you just join them all back together with the empty string to get a single-line version. A: Python uses these characters for splitting in unicode.splitlines(): U+000A LINE FEED (\n) U+000D CARRIAGE RETURN (\r) U+001C FILE SEPARATOR U+001D GROUP SEPARATOR U+001E RECORD SEPARATOR U+0085 NEXT LINE U+2028 LINE SEPARATOR U+2029 PARAGRAPH SEPARATOR As Hank says, using splitlines() will let Python take care of all of the details for you, but if you need to do it manually, then this should be the complete list. A: It would be useful to print the repr() of the page_title that is seen to be multiline, but the obvious candidate would be '\r'.
Replace newlines in a Unicode string
I am trying to replace newline characters in a unicode string and seem to be missing some magic codes. My particular example is that I am working on AppEngine and trying to put titles from HTML pages into a db.StringProperty() in my model. So I do something like: link.title = unicode(page_title,"utf-8").replace('\n','').replace('\r','') and I get: Property title is not multi-line Are there other codes I should be using for the replace?
[ "Try ''.join(unicode(page_title, 'utf-8').splitlines()). splitlines() should let the standard library take care of all the possible crazy Unicode line breaks, and then you just join them all back together with the empty string to get a single-line version.\n", "Python uses these characters for splitting in unicode.splitlines():\n\nU+000A LINE FEED (\\n)\nU+000D CARRIAGE RETURN (\\r)\nU+001C FILE SEPARATOR\nU+001D GROUP SEPARATOR\nU+001E RECORD SEPARATOR\nU+0085 NEXT LINE\nU+2028 LINE SEPARATOR\nU+2029 PARAGRAPH SEPARATOR\n\nAs Hank says, using splitlines() will let Python take care of all of the details for you, but if you need to do it manually, then this should be the complete list.\n", "It would be useful to print the repr() of the page_title that is seen to be multiline, but the obvious candidate would be '\\r'.\n" ]
[ 22, 11, 0 ]
[]
[]
[ "google_app_engine", "python", "unicode" ]
stackoverflow_0002201633_google_app_engine_python_unicode.txt
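A quick demonstration of the splitlines() approach from the answers above, including a Unicode-only line separator (U+2028) that chained replace('\n','').replace('\r','') calls would miss:

    title = u'Breaking\u2028News:\rPython\nRocks'
    print ''.join(title.splitlines())   # BreakingNews:PythonRocks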
Q: How to actually build 64-bit Python on OS X 10.6.2 Why? I want to do this because installation of SciPy recommends it, and I thought it would be a good learning experience. This question has been asked before (e.g. here). The preferred answer seems to be to use MacPorts, but as I say, I'd like to understand how it's done. Anyway, I grab the source (Python-2.6.4.tgz) and unzip. I read the instructions on how to build a 64-bit "framework" build. As I understand it, I should run ./configure --enable-framework --enable-universalsdk=/ --with-univeral-archs=intel configure runs for a while...and finishes. When I do make, it's obviously got a problem: $ make gcc -c -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c In file included from //usr/include/architecture/i386/math.h:626, from //usr/include/math.h:28, from Include/pyport.h:235, from Include/Python.h:58, from ./Modules/python.c:3: //usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. gcc is being called with the wrong arguments. Do I have the wrong arguments to configure, or should I set compiler flags in the environment, or what? Edit: I don't see any errors in the output from configure...and I see this line: checking for OSX 10.5 SDK or later... yes it ends with creating Modules/Setup creating Modules/Setup.local creating Makefile Edit2: I thought I copied from the readme... I did! There's a typo in the readme spec! My age-related dyslexia is acting up again. ;) A: Your ./configure option is not correct. --enable-universalsdk should be set to the correct SDK, not /! That's why gcc got confused, see the option -isysroot. So, check what SDKs you have in /Developer/SDKs, and set the correct one. Moreover, your gcc is called only with -arch ppc -arch i386, which do not include -arch x86_64 which is the intel 64 bit flag. A: In order to choose an answer as correct, I'm paraphrasing comments above: As noticed by Virgil Dupras, there was a typo in this flag: --with-universal-archs=intel It originates from the file Mac/readme, but I should've caught it before posting. Also I recommend that you read Ned Deily's very helpful comments. Check out those guys and vote 'em up.
How to actually build 64-bit Python on OS X 10.6.2
Why? I want to do this because installation of SciPy recommends it, and I thought it would be a good learning experience. This question has been asked before (e.g. here). The preferred answer seems to be to use MacPorts, but as I say, I'd like to understand how it's done. Anyway, I grab the source (Python-2.6.4.tgz) and unzip. I read the instructions on how to build a 64-bit "framework" build. As I understand it, I should run ./configure --enable-framework --enable-universalsdk=/ --with-univeral-archs=intel configure runs for a while...and finishes. When I do make, it's obviously got a problem: $ make gcc -c -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c In file included from //usr/include/architecture/i386/math.h:626, from //usr/include/math.h:28, from Include/pyport.h:235, from Include/Python.h:58, from ./Modules/python.c:3: //usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. gcc is being called with the wrong arguments. Do I have the wrong arguments to configure, or should I set compiler flags in the environment, or what? Edit: I don't see any errors in the output from configure...and I see this line: checking for OSX 10.5 SDK or later... yes it ends with creating Modules/Setup creating Modules/Setup.local creating Makefile Edit2: I thought I copied from the readme... I did! There's a typo in the readme spec! My age-related dyslexia is acting up again. ;)
[ "Your ./configure option is not correct. --enable-universalsdk should be set to the correct SDK, not /! \nThat's why gcc got confused, see the option -isysroot.\nSo, check what SDKs you have in /Developer/SDKs, and set the correct one.\nMoreover, your gcc is called only with -arch ppc -arch i386, which do not include -arch x86_64 which is the intel 64 bit flag. \n", "In order to choose an answer as correct, I'm paraphrasing comments above:\nAs noticed by Virgil Dupras, there was a typo in this flag:\n--with-universal-archs=intel\n\nIt originates from the file Mac/readme, but I should've caught it before posting.\nAlso I recommend that you read Ned Deily's very helpful comments. Check out those guys and vote 'em up.\n" ]
[ 1, 1 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0002177705_macos_python.txt
Q: Implementing preg_match_all in Python I basically want the same functionality of preg_match_all()from PHP in a Python way. If I have a regex pattern and a string, is there a way to search the string and get back a dictionary of each occurrence of a vowel, along with its position in the string? Example: s = "supercalifragilisticexpialidocious" Would return: { 'u' : 1, 'e' : 3, 'a' : 6, 'i' : 8, 'a' : 11, 'i' : 13, 'i' : 15 } A: You can do this faster without regexp [(x,i) for i,x in enumerate(s) if x in "aeiou"] Here are some timings: For s = "supercalifragilisticexpialidocious" timeit [(m.group(0), m.start()) for m in re.finditer('[aeiou]',s)] 10000 loops, best of 3: 27.5 µs per loop timeit [(x,i) for i,x in enumerate(s) if x in "aeiou"] 100000 loops, best of 3: 14.4 µs per loop For s = "supercalifragilisticexpialidocious"*100 timeit [(m.group(0), m.start()) for m in re.finditer('[aeiou]',s)] 100 loops, best of 3: 2.01 ms per loop timeit [(x,i) for i,x in enumerate(s) if x in "aeiou"] 1000 loops, best of 3: 1.24 ms per loop A: What you ask for can't be a dictionary, since it has multiple identical keys. However, you can put it in a list of tuples like this: >>> [(m.group(0), m.start()) for m in re.finditer('[aeiou]',s)] [('u', 1), ('e', 3), ('a', 6), ('i', 8), ('a', 11), ('i', 13), ('i', 15), ('i', 18), ('e', 20), ('i', 23), ('a', 24), ('i', 26), ('o', 28), ('i', 30), ('o', 31), ('u', 32)] A: Like this, for example: import re def findall(pattern, string): res = {} for match in re.finditer(pattern, string): res[match.group(0)] = match.start() return res print findall("[aeiou]", "Test this thang") Note that re.finditer only finds non-overlapping matches. And the dict keys will be overwritten, so if you want the first match, you'll have to replace the innermost loop by: for match in re.finditer(pattern, string): if match.group(0) not in res: # <-- don't overwrite key res[match.group(0)] = match.start()
Implementing preg_match_all in Python
I basically want the same functionality of preg_match_all()from PHP in a Python way. If I have a regex pattern and a string, is there a way to search the string and get back a dictionary of each occurrence of a vowel, along with its position in the string? Example: s = "supercalifragilisticexpialidocious" Would return: { 'u' : 1, 'e' : 3, 'a' : 6, 'i' : 8, 'a' : 11, 'i' : 13, 'i' : 15 }
[ "You can do this faster without regexp\n[(x,i) for i,x in enumerate(s) if x in \"aeiou\"]\n\nHere are some timings:\nFor s = \"supercalifragilisticexpialidocious\"\ntimeit [(m.group(0), m.start()) for m in re.finditer('[aeiou]',s)]\n10000 loops, best of 3: 27.5 µs per loop\n\ntimeit [(x,i) for i,x in enumerate(s) if x in \"aeiou\"]\n100000 loops, best of 3: 14.4 µs per loop\n\nFor s = \"supercalifragilisticexpialidocious\"*100\ntimeit [(m.group(0), m.start()) for m in re.finditer('[aeiou]',s)]\n100 loops, best of 3: 2.01 ms per loop\n\ntimeit [(x,i) for i,x in enumerate(s) if x in \"aeiou\"]\n1000 loops, best of 3: 1.24 ms per loop\n\n", "What you ask for can't be a dictionary, since it has multiple identical keys. However, you can put it in a list of tuples like this:\n>>> [(m.group(0), m.start()) for m in re.finditer('[aeiou]',s)]\n[('u', 1), ('e', 3), ('a', 6), ('i', 8), ('a', 11), ('i', 13), ('i', 15), ('i', 18), ('e', 20), ('i', 23), ('a', 24), ('i', 26), ('o', 28), ('i', 30), ('o', 31), ('u', 32)]\n\n", "Like this, for example:\nimport re\n\ndef findall(pattern, string):\n res = {}\n for match in re.finditer(pattern, string):\n res[match.group(0)] = match.start()\n return res\n\nprint findall(\"[aeiou]\", \"Test this thang\")\n\nNote that re.finditer only finds non-overlapping matches. And the dict keys will be overwritten, so if you want the first match, you'll have to replace the innermost loop by:\n for match in re.finditer(pattern, string):\n if match.group(0) not in res: # <-- don't overwrite key\n res[match.group(0)] = match.start()\n\n" ]
[ 6, 5, 0 ]
[]
[]
[ "php", "python", "regex" ]
stackoverflow_0002202360_php_python_regex.txt
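Since a plain dict cannot hold the duplicate keys shown in the question, one alternative to the tuple-list answers above is a dict mapping each vowel to all of its positions; this is a sketch using the question's own test string:

    from collections import defaultdict

    s = "supercalifragilisticexpialidocious"
    positions = defaultdict(list)
    for i, ch in enumerate(s):
        if ch in "aeiou":
            positions[ch].append(i)

    print positions['i']   # [8, 13, 15, 18, 23, 26, 30]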
Q: How to catch 404 error in urllib.urlretrieve Background: I am using urllib.urlretrieve, as opposed to any other function in the urllib* modules, because of the hook function support (see reporthook below) .. which is used to display a textual progress bar. This is Python >=2.6. >>> urllib.urlretrieve(url[, filename[, reporthook[, data]]]) However, urlretrieve is so dumb that it leaves no way to detect the status of the HTTP request (eg: was it 404 or 200?). >>> fn, h = urllib.urlretrieve('http://google.com/foo/bar') >>> h.items() [('date', 'Thu, 20 Aug 2009 20:07:40 GMT'), ('expires', '-1'), ('content-type', 'text/html; charset=ISO-8859-1'), ('server', 'gws'), ('cache-control', 'private, max-age=0')] >>> h.status '' >>> What is the best known way to download a remote HTTP file with hook-like support (to show progress bar) and a decent HTTP error handling? A: Check out urllib.urlretrieve's complete code: def urlretrieve(url, filename=None, reporthook=None, data=None): global _urlopener if not _urlopener: _urlopener = FancyURLopener() return _urlopener.retrieve(url, filename, reporthook, data) In other words, you can use urllib.FancyURLopener (it's part of the public urllib API). You can override http_error_default to detect 404s: class MyURLopener(urllib.FancyURLopener): def http_error_default(self, url, fp, errcode, errmsg, headers): # handle errors the way you'd like to fn, h = MyURLopener().retrieve(url, reporthook=my_report_hook) A: You should use: import urllib2 try: resp = urllib2.urlopen("http://www.google.com/this-gives-a-404/") except urllib2.URLError, e: if not hasattr(e, "code"): raise resp = e print "Gave", resp.code, resp.msg print "=" * 80 print resp.read(80) Edit: The rationale here is that unless you expect the exceptional state, it is an exception for it to happen, and you probably didn't even think about it -- so instead of letting your code continue to run while it was unsuccessful, the default behavior is--quite sensibly--to inhibit its execution. A: The URL Opener object's "retreive" method supports the reporthook and throws an exception on 404. http://docs.python.org/library/urllib.html#url-opener-objects
How to catch 404 error in urllib.urlretrieve
Background: I am using urllib.urlretrieve, as opposed to any other function in the urllib* modules, because of the hook function support (see reporthook below) .. which is used to display a textual progress bar. This is Python >=2.6. >>> urllib.urlretrieve(url[, filename[, reporthook[, data]]]) However, urlretrieve is so dumb that it leaves no way to detect the status of the HTTP request (eg: was it 404 or 200?). >>> fn, h = urllib.urlretrieve('http://google.com/foo/bar') >>> h.items() [('date', 'Thu, 20 Aug 2009 20:07:40 GMT'), ('expires', '-1'), ('content-type', 'text/html; charset=ISO-8859-1'), ('server', 'gws'), ('cache-control', 'private, max-age=0')] >>> h.status '' >>> What is the best known way to download a remote HTTP file with hook-like support (to show progress bar) and a decent HTTP error handling?
[ "Check out urllib.urlretrieve's complete code:\ndef urlretrieve(url, filename=None, reporthook=None, data=None):\n global _urlopener\n if not _urlopener:\n _urlopener = FancyURLopener()\n return _urlopener.retrieve(url, filename, reporthook, data)\n\nIn other words, you can use urllib.FancyURLopener (it's part of the public urllib API). You can override http_error_default to detect 404s:\nclass MyURLopener(urllib.FancyURLopener):\n def http_error_default(self, url, fp, errcode, errmsg, headers):\n # handle errors the way you'd like to\n\nfn, h = MyURLopener().retrieve(url, reporthook=my_report_hook)\n\n", "You should use:\nimport urllib2\n\ntry:\n resp = urllib2.urlopen(\"http://www.google.com/this-gives-a-404/\")\nexcept urllib2.URLError, e:\n if not hasattr(e, \"code\"):\n raise\n resp = e\n\nprint \"Gave\", resp.code, resp.msg\nprint \"=\" * 80\nprint resp.read(80)\n\nEdit: The rationale here is that unless you expect the exceptional state, it is an exception for it to happen, and you probably didn't even think about it -- so instead of letting your code continue to run while it was unsuccessful, the default behavior is--quite sensibly--to inhibit its execution.\n", "The URL Opener object's \"retreive\" method supports the reporthook and throws an exception on 404.\nhttp://docs.python.org/library/urllib.html#url-opener-objects\n" ]
[ 28, 15, 2 ]
[]
[]
[ "http", "python", "url", "urllib" ]
stackoverflow_0001308542_http_python_url_urllib.txt
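Combining the two answers above into one runnable sketch — a FancyURLopener subclass that raises on any HTTP error while still supporting the reporthook progress callback. The URL, filename and the choice of IOError are illustrative:

    import urllib

    class RaisingOpener(urllib.FancyURLopener):
        def http_error_default(self, url, fp, errcode, errmsg, headers):
            raise IOError('HTTP %d on %s: %s' % (errcode, url, errmsg))

    def progress(block_count, block_size, total_size):
        print '%d of %d bytes' % (block_count * block_size, total_size)

    try:
        fn, headers = RaisingOpener().retrieve('http://google.com/foo/bar',
                                               'page.html', reporthook=progress)
    except IOError, e:
        print 'download failed:', e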
Q: Python: deferToThread XMLRPC Server - Twisted - Cherrypy? This question is related to others I have asked on here, mainly regarding sorting huge sets of data in memory. Basically this is what I want / have: Twisted XMLRPC server running. This server keeps several (32) instances of Foo class in memory. Each Foo class contains a list bar (which will contain several million records). There is a service that retrieves data from a database, and passes it to the XMLRPC server. The data is basically a dictionary, with keys corresponding to each Foo instance, and values are a list of dictionaries, like so: data = {'foo1':[{'k1':'v1', 'k2':'v2'}, {'k1':'v1', 'k2':'v2'}], 'foo2':...} Each Foo instance is then passed the value corresponding to it's key, and the Foo.bar dictionaries are updated and sorted. class XMLRPCController(xmlrpc.XMLRPC): def __init__(self): ... self.foos = {'foo1':Foo(), 'foo2':Foo(), 'foo3':Foo()} ... def update(self, data): for k, v in data: threads.deferToThread(self.foos[k].processData, v) def getData(self, fookey): # return first 10 records of specified Foo.bar return self.foos[fookey].bar[0:10] class Foo(): def __init__(self): bar = [] def processData(self, new_bar_data): for record in new_bar_data: # do processing, and add record, then sort # BUNCH OF PROCESSING CODE self.bar.sort(reverse=True) The problem is that when the update function is called in the XMLRPCController with a lot of records (say 100K +) it stops responding to my getData calls until all 32 Foo instances have completed the process_data method. I thought deferToThread would work, but I think I am misunderstanding where the problem is. Any suggestions... I am open to using something else, like Cherrypy if it supports this required behavior. EDIT @Troy: This is how the reactor is set up reactor.listenTCP(port_no, server.Site(XMLRPCController) reactor.run() As far as GIL, would it be a viable option to change sys.setcheckinterval() value to something smaller, so the lock on the data is released so it can be read? A: The easiest way to get the app to be responsive is to break up the CPU-intensive processing in smaller chunks, while letting the twisted reactor run in between. For example by calling reactor.callLater(0, process_next_chunk) to advance to next chunk. Effectively implementing cooperative multitasking by yourself. Another way would be to use separate processes to do the work, then you will benefit from multiple cores. Take a look at Ampoule: https://launchpad.net/ampoule It provides an API similar to deferToThread. A: I don't know how long your processData method runs nor how you're setting up your twisted reactor. By default, the twisted reactor has a thread pool of between 0 and 10 threads. You may be trying to defer as many as 32 long-running calculations to as many as 10 threads. This is sub-optimal. You also need to ask what role the GIL is playing in updating all these collections. Edit: Before you make any serious changes to your program (like calling sys.setcheckinterval()) you should probably run it using the profiler or the python trace module. These should tell you what methods are using all your time. Without the right information, you can't make the right changes.
Python: deferToThread XMLRPC Server - Twisted - Cherrypy?
This question is related to others I have asked on here, mainly regarding sorting huge sets of data in memory. Basically this is what I want / have: Twisted XMLRPC server running. This server keeps several (32) instances of Foo class in memory. Each Foo class contains a list bar (which will contain several million records). There is a service that retrieves data from a database, and passes it to the XMLRPC server. The data is basically a dictionary, with keys corresponding to each Foo instance, and values are a list of dictionaries, like so: data = {'foo1':[{'k1':'v1', 'k2':'v2'}, {'k1':'v1', 'k2':'v2'}], 'foo2':...} Each Foo instance is then passed the value corresponding to it's key, and the Foo.bar dictionaries are updated and sorted. class XMLRPCController(xmlrpc.XMLRPC): def __init__(self): ... self.foos = {'foo1':Foo(), 'foo2':Foo(), 'foo3':Foo()} ... def update(self, data): for k, v in data: threads.deferToThread(self.foos[k].processData, v) def getData(self, fookey): # return first 10 records of specified Foo.bar return self.foos[fookey].bar[0:10] class Foo(): def __init__(self): bar = [] def processData(self, new_bar_data): for record in new_bar_data: # do processing, and add record, then sort # BUNCH OF PROCESSING CODE self.bar.sort(reverse=True) The problem is that when the update function is called in the XMLRPCController with a lot of records (say 100K +) it stops responding to my getData calls until all 32 Foo instances have completed the process_data method. I thought deferToThread would work, but I think I am misunderstanding where the problem is. Any suggestions... I am open to using something else, like Cherrypy if it supports this required behavior. EDIT @Troy: This is how the reactor is set up reactor.listenTCP(port_no, server.Site(XMLRPCController) reactor.run() As far as GIL, would it be a viable option to change sys.setcheckinterval() value to something smaller, so the lock on the data is released so it can be read?
[ "The easiest way to get the app to be responsive is to break up the CPU-intensive processing in smaller chunks, while letting the twisted reactor run in between. For example by calling reactor.callLater(0, process_next_chunk) to advance to next chunk. Effectively implementing cooperative multitasking by yourself.\nAnother way would be to use separate processes to do the work, then you will benefit from multiple cores. Take a look at Ampoule: https://launchpad.net/ampoule It provides an API similar to deferToThread.\n", "I don't know how long your processData method runs nor how you're setting up your twisted reactor. By default, the twisted reactor has a thread pool of between 0 and 10 threads. You may be trying to defer as many as 32 long-running calculations to as many as 10 threads. This is sub-optimal.\nYou also need to ask what role the GIL is playing in updating all these collections.\nEdit:\nBefore you make any serious changes to your program (like calling sys.setcheckinterval()) you should probably run it using the profiler or the python trace module. These should tell you what methods are using all your time. Without the right information, you can't make the right changes.\n" ]
[ 1, 0 ]
[]
[]
[ "cherrypy", "multithreading", "python", "twisted" ]
stackoverflow_0002202231_cherrypy_multithreading_python_twisted.txt
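A rough sketch of the cooperative-chunking idea from the first answer above — processing a large update in small slices so the reactor can keep serving getData calls in between. The chunk size and per-record work are placeholders:

    from twisted.internet import reactor

    def process_in_chunks(foo, records, chunk_size=1000):
        chunk, rest = records[:chunk_size], records[chunk_size:]
        for record in chunk:
            foo.bar.append(record)        # stand-in for the real per-record work
        if rest:
            reactor.callLater(0, process_in_chunks, foo, rest, chunk_size)
        else:
            foo.bar.sort(reverse=True)    # sort once, after the last chunk

Note that the single final sort can itself block for multi-million-record lists; for true parallelism the Ampoule suggestion above is the better fit.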
Q: Yield multiple objects at a time from an iterable object? How can I yield multiple items at a time from an iterable object? For example, with a sequence of arbitrary length, how can I iterate through the items in the sequence, in groups of X consecutive items per iteration? A: Your question is a bit vague, but check out the grouper recipe in the itertools documentation. def grouper(n, iterable, fillvalue=None): "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return izip_longest(fillvalue=fillvalue, *args) (Zipping the same iterator several times with [iter(iterable)]*n is an old trick, but encapsulating it in this function avoids confusing code, and it is the same exact form and interface many people will use. It's a somewhat common need and it's a bit of a shame it isn't actually in the itertools module.) A: Here's another approach that works on older version of Python that don't have izip_longest: def grouper(n, seq): result = [] for x in seq: result.append(x) if len(result) >= n: yield tuple(result) del result[:] if result: yield tuple(result) No filler, so the last group might have fewer than n elements.
Yield multiple objects at a time from an iterable object?
How can I yield multiple items at a time from an iterable object? For example, with a sequence of arbitrary length, how can I iterate through the items in the sequence, in groups of X consecutive items per iteration?
[ "Your question is a bit vague, but check out the grouper recipe in the itertools documentation.\ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\n(Zipping the same iterator several times with [iter(iterable)]*n is an old trick, but encapsulating it in this function avoids confusing code, and it is the same exact form and interface many people will use. It's a somewhat common need and it's a bit of a shame it isn't actually in the itertools module.)\n", "Here's another approach that works on older version of Python that don't have izip_longest:\ndef grouper(n, seq):\n result = []\n for x in seq:\n result.append(x)\n if len(result) >= n:\n yield tuple(result)\n del result[:]\n if result:\n yield tuple(result)\n\nNo filler, so the last group might have fewer than n elements.\n" ]
[ 7, 2 ]
[]
[]
[ "grouping", "iterator", "python", "yield" ]
stackoverflow_0002202461_grouping_iterator_python_yield.txt
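For quick reference, here is the itertools grouper recipe from the first answer together with a usage example showing the fill behaviour:

    from itertools import izip_longest

    def grouper(n, iterable, fillvalue=None):
        args = [iter(iterable)] * n
        return izip_longest(fillvalue=fillvalue, *args)

    print list(grouper(3, 'ABCDEFG', 'x'))
    # [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]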
Q: S60 camera focusing In my project I have to use the mobile camera from my own program. I use Python with the S60 platform on a Nokia 6220 Classic, which has a 5MP camera. The problem is that the photo quality is very, very low; it seems that auto-focusing doesn't work. I'd like to know whether anyone here has done something like this before. I can buy a new phone if I need to. The main problem is photo quality: I'm going to photograph paper with text. A: This is just a guess. Are you maybe getting low-res images that are suitable for sending via MMS? I would look at the API.
S60 camera focusing
In my project I have to use the mobile camera from my own program. I use Python with the S60 platform on a Nokia 6220 Classic, which has a 5MP camera. The problem is that the photo quality is very, very low; it seems that auto-focusing doesn't work. I'd like to know whether anyone here has done something like this before. I can buy a new phone if I need to. The main problem is photo quality: I'm going to photograph paper with text.
[ "This is just a guess. Are you maybe getting low-res images that are suitable for sending via MMS? I would look at the API.\n" ]
[ 0 ]
[]
[]
[ "camera", "python", "s60" ]
stackoverflow_0002202991_camera_python_s60.txt
Q: Python: How do you call a method when you only have the string name of the method? This is for use in a JSON API. I don't want to have: if method_str == 'method_1': method_1() if method_str == 'method_2': method_2() For obvious reasons this is not optimal. How would I use map strings to methods like this in a reusable way (also note that I need to pass in arguments to the called functions). Here is an example: INCOMING JSON: { 'method': 'say_something', 'args': [ 135487, 'a_465cc1' ] 'kwargs': { 'message': 'Hello World', 'volume': 'Loud' } } # JSON would be turned into Python with Python's built in json module. Resulting call: # Either this say_something(135487, 'a_465cc1', message='Hello World', volume='Loud') # Or this (this is more preferable of course) say_something(*args, **kwargs) A: For methods of instances, use getattr >>> class MyClass(object): ... def sayhello(self): ... print "Hello World!" ... >>> m=MyClass() >>> getattr(m,"sayhello")() Hello World! >>> For functions you can look in the global dict >>> def sayhello(): ... print "Hello World!" ... >>> globals().get("sayhello")() Hello World! In this case, since there is no function called prove_riemann_hypothesis the default function (sayhello) is used >>> globals().get("prove_riemann_hypothesis", sayhello)() Hello World! The problem with this approach is that you are sharing the namespace with whatever else is in there. You might want to guard against the json calling methods it is not supposed to. A good way to do this is to decorate your functions like this >>> json_functions={} >>> def make_available_to_json(f): ... json_functions[f.__name__]=f ... return f ... >>> @make_available_to_json ... def sayhello(): ... print "Hello World!" ... >>> json_functions.get("sayhello")() Hello World! >>> json_functions["sayhello"]() Hello World! >>> json_functions.get("prove_riemann_hypothesis", sayhello)() Hello World! A: Use getattr. For example: class Test(object): def say_hello(self): print 'Hell no, world!!111' def test(self): getattr(self, 'say_hello')() A: The clean, safe way to do this is to make a dict mapping names to functions. If these are actually methods, the best way is still to make such a dict, though getattr is also available. Using globals or eval is unsafe and dirty. A: Assuming the functions are all global variables (they are, unless they were defined inside another functions), they can be accessed with the globals() function. globals() returns a dictionary of all global variables, including functions. For example: $ python Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> def some_function(): ... print "Hello World!" ... >>> globals() {'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, 'some_function': <function some_function at 0x6326b0>, '__package__': None} >>> globals()['some_function']() Hello World!
Python: How do you call a method when you only have the string name of the method?
This is for use in a JSON API. I don't want to have: if method_str == 'method_1': method_1() if method_str == 'method_2': method_2() For obvious reasons this is not optimal. How would I use map strings to methods like this in a reusable way (also note that I need to pass in arguments to the called functions). Here is an example: INCOMING JSON: { 'method': 'say_something', 'args': [ 135487, 'a_465cc1' ] 'kwargs': { 'message': 'Hello World', 'volume': 'Loud' } } # JSON would be turned into Python with Python's built in json module. Resulting call: # Either this say_something(135487, 'a_465cc1', message='Hello World', volume='Loud') # Or this (this is more preferable of course) say_something(*args, **kwargs)
[ "For methods of instances, use getattr\n>>> class MyClass(object):\n... def sayhello(self):\n... print \"Hello World!\"\n... \n>>> m=MyClass()\n>>> getattr(m,\"sayhello\")()\nHello World!\n>>> \n\nFor functions you can look in the global dict\n>>> def sayhello():\n... print \"Hello World!\"\n... \n>>> globals().get(\"sayhello\")()\nHello World!\n\nIn this case, since there is no function called prove_riemann_hypothesis the default function (sayhello) is used\n>>> globals().get(\"prove_riemann_hypothesis\", sayhello)()\nHello World!\n\nThe problem with this approach is that you are sharing the namespace with whatever else is in there. You might want to guard against the json calling methods it is not supposed to. A good way to do this is to decorate your functions like this\n>>> json_functions={}\n>>> def make_available_to_json(f):\n... json_functions[f.__name__]=f\n... return f\n...\n>>> @make_available_to_json\n... def sayhello():\n... print \"Hello World!\"\n...\n>>> json_functions.get(\"sayhello\")()\nHello World!\n>>> json_functions[\"sayhello\"]()\nHello World!\n>>> json_functions.get(\"prove_riemann_hypothesis\", sayhello)()\nHello World!\n\n", "Use getattr.\nFor example:\nclass Test(object):\n def say_hello(self):\n print 'Hell no, world!!111'\n def test(self):\n getattr(self, 'say_hello')()\n\n", "The clean, safe way to do this is to make a dict mapping names to functions. If these are actually methods, the best way is still to make such a dict, though getattr is also available. Using globals or eval is unsafe and dirty.\n", "Assuming the functions are all global variables (they are, unless they were defined inside another functions), they can be accessed with the globals() function. globals() returns a dictionary of all global variables, including functions.\nFor example:\n$ python\nPython 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) \n[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> def some_function():\n... print \"Hello World!\"\n... \n>>> globals()\n{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, 'some_function': <function some_function at 0x6326b0>, '__package__': None}\n>>> globals()['some_function']()\nHello World!\n\n" ]
[ 25, 6, 6, 1 ]
[]
[]
[ "api", "json", "python", "serialization" ]
stackoverflow_0002203438_api_json_python_serialization.txt
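Tying the registry idea from the first answer back to the question's JSON payload, here is an end-to-end dispatch sketch; say_something and its parameters are invented for the example:

    import json

    registry = {}
    def expose(f):                    # whitelist decorator, as in the answer above
        registry[f.__name__] = f
        return f

    @expose
    def say_something(num, code, message=None, volume=None):
        print num, code, message, volume

    def dispatch(raw):
        req = json.loads(raw)
        func = registry[req['method']]            # KeyError => unknown/unexposed method
        # json gives unicode keys; older Python 2 needs str keyword names
        kwargs = dict((str(k), v) for k, v in req.get('kwargs', {}).items())
        return func(*req.get('args', []), **kwargs)

    dispatch('{"method": "say_something", "args": [135487, "a_465cc1"],'
             ' "kwargs": {"message": "Hello World", "volume": "Loud"}}')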
Q: Distinguishing parent model's children with Django inheritance Basically I have a Base class called "Program". I then have more specific program model types that use Program as a base class. For 99% of my needs, I don't care whether or not a Program is one of the specific child types. Of course there's that 1% of the time that I do want to know if it's one of the children. The problem is that if I have let's say, a SwimProgram model and a CampProgram model using Program as their base, that it's problematic to find out what they are without a bunch of try/except blocks. What I want is something like the following: program = models.Program.objects.get(id=15) if program.swimprogram: ## do stuff elif program.campprogram: ## do stuff else: ## do other stuff Of course this throws DoesNotExist exceptions. I could either use try/excepts which are uglier, or I could have Program have a 'type' field that the children set on save. Both are doable, but I'm curious if anyone has any better methods. A: Have you tried hasattr()? Something like this: if hasattr(program, 'swimprogram'): # ... elif hasattr(program, 'campprogram'): # ... If you are unsure about this approach, try it out in a simple test app first. Here are two simple models that should show if it will work for you and the version of django that you are using (tested in django-1.1.1). class Archive(models.Model): pub_date = models.DateField() def __unicode__(self): return "Archive: %s" % self.pub_date class ArchiveB(Archive): def __unicode__(self): return "ArchiveB: %s" % self.pub_date And then giving it a spin in the shell: > a_id = Archive.objects.create(pub_date="2010-10-10").id > b_id = ArchiveB.objects.create(pub_date="2011-11-11").id > a = Archive.objects.get(id=a_id) > b = Archive.objects.get(id=b_id) > (a, b) # they both look like archive objects (<Archive: Archive: 2010-10-10>, <Archive: Archive: 2011-11-11>) > hasattr(a, 'archiveb') False > hasattr(b, 'archiveb') # but only one has access to an ArchiveB True A: A couple of weeks ago, someone on the django-developers mailing list introduced a very interesting extension to Django's ORM that makes QuerySets return subclassed objects instead of objects of the parent class. You can read all about it here: http://bserve.webhop.org/wiki/django_polymorphic I haven't tried it myself yet (but certainly will), but it seems to fit your use case. A: // Update As pointed out in the comments to this post I got the question wrong. The answer below will not solve the problem. Hi f4nt, the easiest way I can think of right now would be the following: program = models.Program.objects.get(id=15) if program.__class__.__name__ == 'ModelA': # to something if program.__class__.__name__ == 'ModelB': # to something To make this a bit better you could write a method in the base model: class MyModel(models.Model): def instanceOfModel(self, model_name): return self.__class__.__name__ == model_name That way the code from above would look like this: program = models.Program.objects.get(id=15) if program.instanceOfModel('ModelA'): # to something if program.instanceOfModel('ModelB'): # to something But as you can imagine this is ugly. You could look into the content type framework which might help you to do the same except more elegent. Hope that helps!
Distinguishing parent model's children with Django inheritance
Basically I have a Base class called "Program". I then have more specific program model types that use Program as a base class. For 99% of my needs, I don't care whether or not a Program is one of the specific child types. Of course there's that 1% of the time that I do want to know if it's one of the children. The problem is that if I have let's say, a SwimProgram model and a CampProgram model using Program as their base, that it's problematic to find out what they are without a bunch of try/except blocks. What I want is something like the following: program = models.Program.objects.get(id=15) if program.swimprogram: ## do stuff elif program.campprogram: ## do stuff else: ## do other stuff Of course this throws DoesNotExist exceptions. I could either use try/excepts which are uglier, or I could have Program have a 'type' field that the children set on save. Both are doable, but I'm curious if anyone has any better methods.
[ "Have you tried hasattr()? Something like this:\nif hasattr(program, 'swimprogram'):\n # ...\nelif hasattr(program, 'campprogram'):\n # ...\n\nIf you are unsure about this approach, try it out in a simple test app first. Here are two simple models that should show if it will work for you and the version of django that you are using (tested in django-1.1.1).\nclass Archive(models.Model):\n pub_date = models.DateField()\n\n def __unicode__(self):\n return \"Archive: %s\" % self.pub_date\n\nclass ArchiveB(Archive):\n def __unicode__(self):\n return \"ArchiveB: %s\" % self.pub_date\n\nAnd then giving it a spin in the shell:\n> a_id = Archive.objects.create(pub_date=\"2010-10-10\").id\n> b_id = ArchiveB.objects.create(pub_date=\"2011-11-11\").id\n> a = Archive.objects.get(id=a_id)\n> b = Archive.objects.get(id=b_id)\n> (a, b) # they both look like archive objects\n(<Archive: Archive: 2010-10-10>, <Archive: Archive: 2011-11-11>)\n> hasattr(a, 'archiveb')\nFalse\n> hasattr(b, 'archiveb') # but only one has access to an ArchiveB\nTrue\n\n", "A couple of weeks ago, someone on the django-developers mailing list introduced a very interesting extension to Django's ORM that makes QuerySets return subclassed objects instead of objects of the parent class. You can read all about it here:\nhttp://bserve.webhop.org/wiki/django_polymorphic\nI haven't tried it myself yet (but certainly will), but it seems to fit your use case.\n", "// Update\nAs pointed out in the comments to this post I got the question wrong. The answer below will not solve the problem.\n\nHi f4nt,\nthe easiest way I can think of right now would be the following:\nprogram = models.Program.objects.get(id=15)\n\nif program.__class__.__name__ == 'ModelA':\n # to something\n\nif program.__class__.__name__ == 'ModelB':\n # to something\n\nTo make this a bit better you could write a method in the base model:\nclass MyModel(models.Model):\n\n def instanceOfModel(self, model_name):\n return self.__class__.__name__ == model_name\n\nThat way the code from above would look like this:\nprogram = models.Program.objects.get(id=15)\n\nif program.instanceOfModel('ModelA'):\n # to something\n\nif program.instanceOfModel('ModelB'):\n # to something\n\nBut as you can imagine this is ugly. You could look into the content type framework which might help you to do the same except more elegent.\nHope that helps!\n" ]
[ 4, 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002202232_django_python.txt
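One way to package the hasattr approach from the first answer is a small helper on the base model. The field and the child-accessor names below are assumptions taken from the question's SwimProgram/CampProgram example:

    from django.db import models

    class Program(models.Model):
        name = models.CharField(max_length=100)   # illustrative field

        def as_child(self):
            # Return the most specific subclass instance, or self if this
            # row really is a plain Program. Django's multi-table inheritance
            # exposes children under lowercased class names.
            for attr in ('swimprogram', 'campprogram'):
                if hasattr(self, attr):
                    return getattr(self, attr)
            return self

Callers then write program.as_child() once, instead of scattering hasattr checks through the 1% of code that cares.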
Q: Trouble with rdfstore with mysql - how to debug? I have a mysql server running and can connect to it from my Django ORM. Can't connect using the rdflib functionality. How can I debug this problem? Thanks. rdflib 2.4.2, python 2.6, MySQL Community 5.1.42 Trace: configString = "host=localhost,user=root,password=...,db=..." print configString host=localhost,user=root,password=...,db=... store = plugin.get('MySQL', Store)('rdfstore') print store Traceback (most recent call last): File "D:\GR\Personal\Career\Python\medCE\semantix\foaf_rdf.py", line 26, in <module> print store File "C:\Program Files\Python26\lib\site-packages\rdflib\store\MySQL.py", line 1029, in __repr__ c=self._db.cursor() AttributeError: 'NoneType' object has no attribute 'cursor' rt = store.open(configString,create=False) table kb_7b066eca61_relations Doesn't exist table kb_7b066eca61_relations Doesn't exist print rt 0 if rt == 0: store.open(configString,create=True) Traceback (most recent call last): File "<stdin>", line 3, in <module> store.open(configString,create=True) File "C:\Program Files\Python26\lib\site-packages\rdflib\store\MySQL.py", line 602, in open host=configDict['host'], File "C:\Program Files\Python26\lib\site-packages\MySQLdb\__init__.py", line 74, in Connect return Connection(*args, **kwargs) File "C:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 170, in __init__ super(Connection, self).__init__(*args, **kwargs2) OperationalError: (1049, "Unknown database 'test'")
Trouble with rdfstore with mysql - how to debug?
I have a mysql server running and can connect to it from my Django ORM. Can't connect using the rdflib functionality. How can I debug this problem? Thanks. rdflib 2.4.2, python 2.6, MySQL Community 5.1.42 Trace: configString = "host=localhost,user=root,password=...,db=..." print configString host=localhost,user=root,password=...,db=... store = plugin.get('MySQL', Store)('rdfstore') print store Traceback (most recent call last): File "D:\GR\Personal\Career\Python\medCE\semantix\foaf_rdf.py", line 26, in <module> print store File "C:\Program Files\Python26\lib\site-packages\rdflib\store\MySQL.py", line 1029, in __repr__ c=self._db.cursor() AttributeError: 'NoneType' object has no attribute 'cursor' rt = store.open(configString,create=False) table kb_7b066eca61_relations Doesn't exist table kb_7b066eca61_relations Doesn't exist print rt 0 if rt == 0: store.open(configString,create=True) Traceback (most recent call last): File "<stdin>", line 3, in <module> store.open(configString,create=True) File "C:\Program Files\Python26\lib\site-packages\rdflib\store\MySQL.py", line 602, in open host=configDict['host'], File "C:\Program Files\Python26\lib\site-packages\MySQLdb\__init__.py", line 74, in Connect return Connection(*args, **kwargs) File "C:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 170, in __init__ super(Connection, self).__init__(*args, **kwargs2) OperationalError: (1049, "Unknown database 'test'")
[ "I commented code in the rdflib/store directory in MySQL.py and now it all works:\n# test_db = MySQLdb.connect(user=configDict['user'],\n# passwd=configDict['password'],\n# db='test',\n# port=configDict['port'],\n# host=configDict['host'],\n# #use_unicode=True,\n# #read_default_file='/etc/my-client.cnf'\n# )\n# c=test_db.cursor()\n# c.execute(\"\"\"SET AUTOCOMMIT=0\"\"\")\n# c.execute(\"\"\"SHOW DATABASES\"\"\")\n# if not (configDict['db'].encode('utf-8'),) in c.fetchall():\n# print >> sys.stderr, \"creating %s (doesn't exist)\"%(configDict['db'])\n# c.execute(\"\"\"CREATE DATABASE %s\"\"\"%(configDict['db'],))\n# test_db.commit()\n# c.close()\n# test_db.close()\n\n" ]
[ 1 ]
[]
[]
[ "mysql", "mysql_error_1049", "python", "rdflib", "rdfstore" ]
stackoverflow_0002197157_mysql_mysql_error_1049_python_rdflib_rdfstore.txt
Q: Code Changes While Keeping Large Objects In Memory in Python I have an application that starts by loading a large pickled trie (173M) from disk and then uses it to do some processing. I'm making frequent changes to the processing part, which is inconvenient because loading the trie takes 15 minutes or so. I'm looking for a way to eliminate the repeated loading during testing, since the trie never changes. One thing I can't do is use a smaller version of the trie. Ideas I've had so far are memcached and turning the trie into a web service that accepts a query and returns the data I need. What I'm looking for is the least-effort path to a situation in which I can repeatedly change and reload the processing code while maintaining access to the in-memory trie. A direct reference to the tree would be preferable since this would require minimal code changes, but really I'm looking to minimize overall effort. A: You could try using Python's built-in reload function or the livecoding project. A: The usual problem with reload is that instances stay bound to the old version of the class. If you are not keeping old instances around, reload is simple and works very well.
Code Changes While Keeping Large Objects In Memory in Python
I have an application that starts by loading a large pickled trie (173M) from disk and then uses it to do some processing. I'm making frequent changes to the processing part, which is inconvenient because loading the trie takes 15 minutes or so. I'm looking for a way to eliminate the repeated loading during testing, since the trie never changes. One thing I can't do is use a smaller version of the trie. Ideas I've had so far are memcached and turning the trie into a web service that accepts a query and returns the data I need. What I'm looking for is the least-effort path to a situation in which I can repeatedly change and reload the processing code while maintaining access to the in-memory trie. A direct reference to the tree would be preferable since this would require minimal code changes, but really I'm looking to minimize overall effort.
[ "You could try using Pythons built-in reload method or the livecoding project.\n", "The usual problem with reload is that instances stay bound to the old version of the class. If you are not keeping old instances around, reload is simple and works very well.\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002203492_python.txt
Q: Is this a bug in Django or what? Logging out of the Django Authentication system...will remove all sessions? I'm using sessions across my application. And using logins. When I do a simple: #log out the user. logout(request) ...the request.sessions get erased. What is this??! A: If by request.sessions you mean request.session, then it's a documented feature: http://docs.djangoproject.com/en/dev/topics/auth/#django.contrib.auth.logout A: I believe the correct syntax is: logout(request.user)
Is this a bug in Django or what? Logging out of the Django Authentication system...will remove all sessions?
I'm using sessions across my application. And using logins. When I do a simple: #log out the user. logout(request) ...the request.sessions get erased. What is this??!
[ "If by request.sessions you mean request.session, then it's a documented feature:\nhttp://docs.djangoproject.com/en/dev/topics/auth/#django.contrib.auth.logout\n", "I believe the correct syntax is:\nlogout(request.user)\n\n" ]
[ 4, 1 ]
[]
[]
[ "django", "python", "session" ]
stackoverflow_0002203604_django_python_session.txt
Q: How do I set the PDU mode of a modem using the python sms 0.3 module? I'm using the python sms 0.3 module to access my modem on a COM port. I'm trying to send an SMS but I'm getting the following error: sms.ModemError: ['\r\n', '+CMS ERROR: 304\r\n'] When I read the modem error codes, error code 304 is for PDU mode. I'm just wondering how I set the mode using sms 0.3. I'm using a USB modem, a Huawei E220. Gath
How do I set the PDU mode of a modem using the python sms 0.3 module?
I'm using the python sms 0.3 module to access my modem on a COM port. I'm trying to send an SMS but I'm getting the following error: sms.ModemError: ['\r\n', '+CMS ERROR: 304\r\n'] When I read the modem error codes, error code 304 is for PDU mode. I'm just wondering how I set the mode using sms 0.3. I'm using a USB modem, a Huawei E220. Gath
[ "Not sure if you can do it via the SMS library, but you can do it directly by sending via the serial port to the modem.:\n\"AT+CMGF=0\" sets PDU mode for SMS messages\n\"AT+CMGF=1\" sets text mode for SMS messages\n\"AT+CMGF?\" should give the current setting\n" ]
[ 0 ]
[]
[]
[ "modem", "python", "sms" ]
stackoverflow_0001415162_modem_python_sms.txt
Q: Python win32com: Excel set chart type to Line This VBA macro works: Sub Draw_Graph() Columns("A:B").Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=ActiveSheet.Range("$A:$B") ActiveChart.ChartType = xlLine End Sub This Python (near) equivalent almost works: from win32com import client excel=client.Dispatch("Excel.Application") excel.Visible=True book=excel.Workbooks.Open("myfile.csv", False, True) sheet=book.Worksheets(1) chart=book.Charts.Add() chart.SetSourceData(sheet.Range("$A:$B")) chart.ChartType=client.constants.xlLine Apart from the last bit - I can't get the chart type to be "xlLine" (plain line graph). Any ideas? A: Needed to run the 'makepy.py' to get it to work. http://docs.activestate.com/activepython/2.4/pywin32/html/com/win32com/HTML/QuickStartClientCom.html#UsingComConstants
Python win32com: Excel set chart type to Line
This VBA macro works: Sub Draw_Graph() Columns("A:B").Select ActiveSheet.Shapes.AddChart.Select ActiveChart.SetSourceData Source:=ActiveSheet.Range("$A:$B") ActiveChart.ChartType = xlLine End Sub This Python (near) equivalent almost works: from win32com import client excel=client.Dispatch("Excel.Application") excel.Visible=True book=excel.Workbooks.Open("myfile.csv", False, True) sheet=book.Worksheets(1) chart=book.Charts.Add() chart.SetSourceData(sheet.Range("$A:$B")) chart.ChartType=client.constants.xlLine Apart from the last bit - I can't get the chart type to be "xlLine" (plain line graph). Any ideas?
[ "Needed to run the 'makepy.py' to get it to work.\nhttp://docs.activestate.com/activepython/2.4/pywin32/html/com/win32com/HTML/QuickStartClientCom.html#UsingComConstants\n" ]
[ 1 ]
[]
[]
[ "charts", "excel", "python", "python_2.6", "win32com" ]
stackoverflow_0002204069_charts_excel_python_python_2.6_win32com.txt
Q: Loading Google App Engine app from one file for all the URLs or one file per URL for loading speed I have a small web app running on AppEngine and have all my URL processing in one file and the other processing done in another file that is imported at the top of the main python file. e.g. import wsgiref.handlers from wsgiref.handlers import format_date_time import logging import os import cgi import datetime from time import mktime #Google Libraries from django.utils import simplejson from google.appengine.ext import webapp from google.appengine.ext import db from google.appengine.ext.db import Error from google.appengine.ext.webapp import template from google.appengine.api import memcache #Model Libraries from Models import * from Render import * from Sound import * #Few classes to handle the URLs and since these are at the top of the file they are loaded first when any of the URLs are hit. I have done it this way because some URLs need the same libraries. My question is: if I carried on building my app this way, would it be better to split the URLs into their own files with the libraries they need, so that slowly but surely the libraries are moved into memory as more URLs are requested, or would it be better to do everything in one big hit when any of the URLs is hit? P.S. I appreciate that in the real world this is probably not an issue, but I am just curious.
Loading Google App Engine app from one file for all the URLs or one file per URL for loading speed
I have a small web app running on AppEngine and have all my URL processing in one file and the other processing done in another file that is imported at the top of the main python file. e.g. import wsgiref.handlers from wsgiref.handlers import format_date_time import logging import os import cgi import datetime from time import mktime #Google Libraries from django.utils import simplejson from google.appengine.ext import webapp from google.appengine.ext import db from google.appengine.ext.db import Error from google.appengine.ext.webapp import template from google.appengine.api import memcache #Model Libraries from Models import * from Render import * from Sound import * #Few classes to handle the URLs and since these are at the top of the file they are loaded first when any of the URLs are hit. I have done it this way because some URLs need the same libraries. My question is: if I carried on building my app this way, would it be better to split the URLs into their own files with the libraries they need, so that slowly but surely the libraries are moved into memory as more URLs are requested, or would it be better to do everything in one big hit when any of the URLs is hit? P.S. I appreciate that in the real world this is probably not an issue, but I am just curious.
[ "There's no need to split your handlers into separate files. However, if you're importing something that's going to use a lot of CPU when it's imported and won't be used by many of your handlers, it's best to move your imports inside your handler classes so you can take advantage of lazy loading.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "performance", "python" ]
stackoverflow_0002203348_google_app_engine_performance_python.txt
Q: How can I optimize multiple nested SELECTs in SQLite (w/Python)? I'm building a CGI script that polls a SQLite database and builds a table of statistics. The source database table is described below, as is the chunk of pertinent code. Everything works (functionally), but the CGI itself is very slow as I have multiple nested SELECT COUNT(id) calls. I figure my best shot at optimization is to ask the SO community as my time with Google has been relatively fruitless. The table: CREATE TABLE messages ( id TEXT PRIMARY KEY ON CONFLICT REPLACE, date TEXT, hour INTEGER, sender TEXT, size INTEGER, origin TEXT, destination TEXT, relay TEXT, day TEXT); (Yes, I know the table isn't normalized but it's populated with extracts from a mail log... I was happy enough to get the extract & populate working, let alone normalize it. I don't think the table structure has a lot to do with my question at this point, but I could be wrong.) Sample row: 476793200A7|Jan 29 06:04:47|6|admin@mydomain.com|4656|web02.mydomain.pvt|user@example.com|mail01.mydomain.pvt|Jan 29 And, the Python code that builds my tables: #!/usr/bin/python print 'Content-type: text/html\n\n' from datetime import date import re p = re.compile('(\w+) (\d+)') d_month = {'Jan':1,'Feb':2,'Mar':3,'Apr':4,'May':5,'Jun':6,'Jul':7,'Aug':8,'Sep':9,'Oct':10,'Nov':11,'Dec':12} l_wkday = ['Mo','Tu','We','Th','Fr','Sa','Su'] days = [] curs.execute('SELECT DISTINCT(day) FROM messages ORDER BY day') for day in curs.fetchall(): m = p.match(day[0]).group(1) m = d_month[m] d = p.match(day[0]).group(2) days.append([day[0],"%s (%s)" % (day[0],l_wkday[date.weekday(date(2010,int(m),int(d)))])]) curs.execute('SELECT DISTINCT(sender) FROM messages') senders = curs.fetchall() for sender in senders: curs.execute('SELECT COUNT(id) FROM messages WHERE sender=%s',(sender[0])) print ' <div id="'+sender[0]+'">' print ' <h1>Stats for Sender: '+sender[0]+'</h1>' print ' <table><caption>Total messages in database: %d</caption>' % curs.fetchone()[0] print ' <tr><td>&nbsp;</td><th colspan=24>Hour of Day</th></tr>' print ' <tr><td class="left">Day</td><th>%s</th></tr>' % '</th><th>'.join(map(str,range(24))) for day in days: print ' <tr><td>%s</td>' % day[1] for hour in range(24): sql = 'SELECT COUNT(id) FROM messages WHERE sender="%s" AND day="%s" AND hour="%s"' % (sender[0],day[0],str(hour)) curs.execute(sql) d = curs.fetchone()[0] print ' <td>%s</td>' % (d>0 and str(d) or '') print ' </tr>' print ' </table></div>' print ' </body>\n</html>\n' I'm not sure if there are any ways I can combine some of the queries, or approach it from a different angle to extract the data. I had also thought about building a second table with the counts in it and just updating it when the original table is updated. I've been staring at this for entirely too long today so I'm going to attack it fresh again tomorrow, hopefully with some insight from the experts ;) Edit: Using the GROUP BY answer provided below, I was able to get the data needed from the database in one query. I switched to Perl since Python's nested dict support just didn't work very well for the way I needed to approach this (building a set of HTML tables in a specific way).
Here's a snippet of the revised code: my %data; my $rows = $db->selectall_arrayref("SELECT COUNT(id),sender,day,hour FROM messages GROUP BY sender,day,hour ORDER BY sender,day,hour"); for my $row (@$rows) { my ($ct, $se, $dy, $hr) = @$row; $data{$se}{$dy}{$hr} = $ct; } for my $se (keys %data) { print "Sender: $se\n"; for my $dy (keys %{$data{$se}}) { print "Day: ",time2str('%a',str2time("$dy 2010"))," $dy\n"; for my $hr (keys %{$data{$se}{$dy}}) { print "Hour: $hr = ".$data{$se}{$dy}{$hr}."\n"; } } print "\n"; } What once executed in about 28.024s now takes 0.415s! A: First of all, you can use the GROUP BY clause: select count(*), sender from messages group by sender; and with this you execute one query for all senders instead of one query for each sender. Another possibility could be: select count(*), sender, day, hour from messages group by sender, day, hour order by sender, day, hour; I didn't test it, but at least now you know the GROUP BY clause exists. This should reduce the number of queries, and I think this is the first big step to increase performance. Second, create indexes based on the search columns, in your case sender, day and hour. If this isn't enough, use profiling tools to find where most of the time is spent. You should also consider using fetchmany instead of fetchall to keep memory consumption low. Remember that since the sqlite module is coded in C, use it as much as possible. A: For starters, create an index: CREATE INDEX messages_sender_by_day ON messages (sender, day); (You probably don't need to include "hour" in there.) If that doesn't help or you've already tried it, then please fix up your question a bit: give us some code to generate test data and SQL for all indexes on the table. Maintaining a count cache is fairly common, but I can't tell if that's needed here.
How can I optimize multiple nested SELECTs in SQLite (w/Python)?
I'm building a CGI script that polls a SQLite database and builds a table of statistics. The source database table is described below, as is the chunk of pertinent code. Everything works (functionally), but the CGI itself is very slow as I have multiple nested SELECT COUNT(id) calls. I figure my best shot at optimization is to ask the SO community as my time with Google has been relatively fruitless. The table: CREATE TABLE messages ( id TEXT PRIMARY KEY ON CONFLICT REPLACE, date TEXT, hour INTEGER, sender TEXT, size INTEGER, origin TEXT, destination TEXT, relay TEXT, day TEXT); (Yes, I know the table isn't normalized but it's populated with extracts from a mail log... I was happy enough to get the extract & populate working, let alone normalize it. I don't think the table structure has a lot to do with my question at this point, but I could be wrong.) Sample row: 476793200A7|Jan 29 06:04:47|6|admin@mydomain.com|4656|web02.mydomain.pvt|user@example.com|mail01.mydomain.pvt|Jan 29 And, the Python code that builds my tables: #!/usr/bin/python print 'Content-type: text/html\n\n' from datetime import date import re p = re.compile('(\w+) (\d+)') d_month = {'Jan':1,'Feb':2,'Mar':3,'Apr':4,'May':5,'Jun':6,'Jul':7,'Aug':8,'Sep':9,'Oct':10,'Nov':11,'Dec':12} l_wkday = ['Mo','Tu','We','Th','Fr','Sa','Su'] days = [] curs.execute('SELECT DISTINCT(day) FROM messages ORDER BY day') for day in curs.fetchall(): m = p.match(day[0]).group(1) m = d_month[m] d = p.match(day[0]).group(2) days.append([day[0],"%s (%s)" % (day[0],l_wkday[date.weekday(date(2010,int(m),int(d)))])]) curs.execute('SELECT DISTINCT(sender) FROM messages') senders = curs.fetchall() for sender in senders: curs.execute('SELECT COUNT(id) FROM messages WHERE sender=%s',(sender[0])) print ' <div id="'+sender[0]+'">' print ' <h1>Stats for Sender: '+sender[0]+'</h1>' print ' <table><caption>Total messages in database: %d</caption>' % curs.fetchone()[0] print ' <tr><td>&nbsp;</td><th colspan=24>Hour of Day</th></tr>' print ' <tr><td class="left">Day</td><th>%s</th></tr>' % '</th><th>'.join(map(str,range(24))) for day in days: print ' <tr><td>%s</td>' % day[1] for hour in range(24): sql = 'SELECT COUNT(id) FROM messages WHERE sender="%s" AND day="%s" AND hour="%s"' % (sender[0],day[0],str(hour)) curs.execute(sql) d = curs.fetchone()[0] print ' <td>%s</td>' % (d>0 and str(d) or '') print ' </tr>' print ' </table></div>' print ' </body>\n</html>\n' I'm not sure if there are any ways I can combine some of the queries, or approach it from a different angle to extract the data. I had also thought about building a second table with the counts in it and just updating it when the original table is updated. I've been staring at this for entirely too long today so I'm going to attack it fresh again tomorrow, hopefully with some insight from the experts ;) Edit: Using the GROUP BY answer provided below, I was able to get the data needed from the database in one query. I switched to Perl since Python's nested dict support just didn't work very well for the way I needed to approach this (building a set of HTML tables in a specific way).
Here's a snippet of the revised code: my %data; my $rows = $db->selectall_arrayref("SELECT COUNT(id),sender,day,hour FROM messages GROUP BY sender,day,hour ORDER BY sender,day,hour"); for my $row (@$rows) { my ($ct, $se, $dy, $hr) = @$row; $data{$se}{$dy}{$hr} = $ct; } for my $se (keys %data) { print "Sender: $se\n"; for my $dy (keys %{$data{$se}}) { print "Day: ",time2str('%a',str2time("$dy 2010"))," $dy\n"; for my $hr (keys %{$data{$se}{$dy}}) { print "Hour: $hr = ".$data{$se}{$dy}{$hr}."\n"; } } print "\n"; } What once executed in about 28.024s now takes 0.415s!
[ "first of all you can use the group by clause:\nselect count(*), sender from messages group by sender;\n\nand with this you execute one query for all senders instead of on query for each sender. Another possibility could be:\nselect count(*), sender, day, hour\n from messages group by sender, day, hour\n order by sender, day, hour;\n\ni didn't test it but at least now you know the existances of group by clause. this should reduce the number of queries and i think this is the first big step to increase performance.\nsecond, create indexes based on search columns, in your case sender, day and hour.\nif this isn't enough use profiling tools to find where the most the time is spent. you should also consider the use of fetchmany instead of fetchall to keep low memory consumption. remember that since sqlite module is coded in C use it as much as possible.\n", "For starters, create an index:\nCREATE INDEX messages_sender_by_day ON messages (sender, day);\n(You probably don't need to include \"hour\" in there.)\nIf that doesn't help or you've already tried it, then please fix up your question a bit: give us some code to generate test data and SQL for all indexes on the table.\nMaintaining a count cache is fairly common, but I can't tell if that's needed here.\n" ]
[ 3, 1 ]
[]
[]
[ "optimization", "perl", "python", "query_optimization", "sqlite" ]
stackoverflow_0002203709_optimization_perl_python_query_optimization_sqlite.txt
Q: Python: defining new functions on the fly using "with" I want to convert the following code: ... urls = [many urls] links = [] funcs = [] for url in urls: func = getFunc(url, links) funcs.append(func) ... def getFunc(url, links): def func(): page = open(url) link = searchForLink(page) links.append(link) return func into the much more convenient code: urls = [many urls] links = [] funcs = [] for url in urls: <STATEMENT>(funcs): page = open(url) link = searchForLink(page) links.append(link) I was hoping to do this with the with statement. As I commented below, I was hoping to achieve: def __enter__(): def func(): ..code in the for loop.. def __exit__(): funcs.append(func) Of course this doesn't work. List comprehensions are not good for cases where the action searchForLink is not just one function but many functions. It would turn into extremely unreadable code. For example even this would be problematic with list comprehensions: for url in urls: page = open(url) link1 = searchForLink(page) link2 = searchForLink(page) actionOnLink(link1) actionOnLink(link2) .... many more of these actions... links.append(link1) A: It makes no sense to use with here. Instead use a list comprehension: funcs = [getFunc(url, links) for url in urls] A: A bit unconventional, but you can have a decorator register the func and bind any loop variables as default arguments: urls = [many urls] links = [] funcs = [] for url in urls: @funcs.append def func(url=url): page = open(url) link = searchForLink(page) links.append(link) A: Lose the line <STATEMENT>(funcs): Edit: I mean: why would you do this? Why define a new function for each page? Why not just do this? urls = [many urls] links = [] for url in urls: page = open(url) link = searchForLink(page) links.append(link) A: There are only two ways to create functions: def and lambda. Lambdas are meant for tiny functions, so they may not be very appropriate for your case. However, if you really want to, you can enclose two lambdas within each other: urls = [many urls] links = [] funcs = [(lambda x: lambda: links.append(searchForLink(open(x))))(u) for u in urls] A little too LISPish for my taste. A: You should not use "with" to do this (even though, given that it's Python, you almost certainly could, using some bizarre side-effect and Python's dynamicism). The purpose of "with" in Python is, as described in the docs, "to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse." I think you're confusing Python's "with" with the Javascript/VisualBasic "with", which may be cosmetically similar but which is effectively unrelated. A: Good old itertools. from itertools import imap links.extend(imap(searchForLink, imap(open, urls))) Although, maybe you'd prefer functional. from functional import * funcs = [partial(compose(compose(links.append, searchForLink), open), url) for url in urls] for func in funcs: func() I don't think it's worthwhile creating a class for with use: it's more work to create __enter__ and __exit__ than it is to just write a helper function. A: You may be better off using generators to achieve the delayed computation you're after. def MakeLinks(urls): for url in urls: page = open(url) link = searchForLink(page) yield link links = MakeLinks(urls) When you want the links: for link in links: print link The urls will be looked up during this loop, and not all at once (which it looks like you're trying to avoid).
Python: defining new functions on the fly using "with"
I want to convert the following code: ... urls = [many urls] links = [] funcs = [] for url in urls: func = getFunc(url, links) funcs.append(func) ... def getFunc(url, links): def func(): page = open(url) link = searchForLink(page) links.append(link) return func into the much more convenient code: urls = [many urls] links = [] funcs = [] for url in urls: <STATEMENT>(funcs): page = open(url) link = searchForLink(page) links.append(link) I was hoping to do this with the with statement. As I commented below, I was hoping to achieve: def __enter__(): def func(): ..code in the for loop.. def __exit__(): funcs.append(func) Of course this doesn't work. List comprehensions are not good for cases where the action searchForLink is not just one function but many functions. It would turn into extremely unreadable code. For example even this would be problematic with list comprehensions: for url in urls: page = open(url) link1 = searchForLink(page) link2 = searchForLink(page) actionOnLink(link1) actionOnLink(link2) .... many more of these actions... links.append(link1)
[ "It makes no sense to use with here. Instead use a list comprehension:\nfuncs = [getFunc(url, links) for url in urls]\n\n", "A bit unconventional, but you can have a decorator register the func and bind any loop variables as default arguments:\nurls = [many urls]\nlinks = []\nfuncs = []\n\nfor url in urls:\n @funcs.append\n def func(url=url):\n page = open(url)\n link = searchForLink(page)\n links.append(link)\n\n", "Lose the line <STATEMENT>(funcs):\nEdit:\nI mean: why would you do this? Why define a new function for each page? Why not just do this?\nurls = [many urls]\nlinks = []\nfor url in urls:\n page = open(url)\n link = searchForLink(page)\n links.append(link) \n\n", "There are only two ways to create functions: def and lambda. Lambdas are meant for tiny functions, so they may not be very appropriate for your case. However, if you really want to, you can enclose two lambdas within each other:\nurls = [many urls]\nlinks = []\nfuncs = [(lambda x:\n lambda:\n links.append(searchForLink(open(x))))(u)\n for u in urls]\n\nA little too LISPish for my taste.\n", "You should not use \"with\" to do this (even though, given that it's Python, you almost certainly could, using some bizarre side-effect and Python's dynamicism). \nThe purpose of \"with\" in Python is, as described in the docs, \"to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.\"\nI think you're confusing Python's \"with\" with the Javascript/VisualBasic \"with\", which may be cosmetically similar but which is effectively unrelated.\n", "Good old itertools.\nfrom itertools import imap\nlinks.extend(imap(searchForLink, imap(open, urls)))\n\nAlthough, maybe you'd prefer functional.\nfrom functional import *\nfuncs = [partial(compose(compose(links.append, searchForLink), open), url) for url in urls]\nfor func in funcs: func()\n\nI don't think it's worthwhile creating a class for with use: it's more work to create __enter__ and __exit__ than it is to just write a helper function.\n", "You may be better using generators to achieve the delayed computation you're after.\ndef MakeLinks(urls):\n for url in urls:\n page = open(url)\n link = searchForLink(page)\n yield link\n\nlinks = MakeLinks(urls)\n\nWhen you want the links:\nfor link in links:\n print link\n\nThe urls will be looked up during this loop, and not all at once (which it looks like you're tring to avoid).\n" ]
[ 6, 4, 2, 2, 1, 1, 1 ]
[]
[]
[ "python", "with_statement" ]
stackoverflow_0002200026_python_with_statement.txt
Q: What's a good web framework and/or tool for a software developer? I'd like to make a website, it's not a huge project, but I'm a bit out of the web design loop. The last time I made a website was probably around 2002. I figure the web frameworks and tools have come a ways since then. It's mostly the design aspect that I'd like to make easier. I can do the backend in any language. My question is: What are some tools or web frameworks that make the design aspect of making a website easier? It could be a framework in php/python/ruby. As far as tools go, free/open source is preferred, but I wouldn't mind looking at good commercial alternatives. A: You'll get many different subjective answers for your question, but as for me I would recommend django. It is flexible, unlike a CMS, and the admin saves you a lot of pain. A: For PHP, I like the CMS Drupal and have found it to be very fast in getting a site up and running. Drupal also has a ton of modules to do almost anything you want. It is also very customizable (although that takes a little reading to figure out how to do it). Ruby's de facto standard web framework is Ruby on Rails. It's a straight web framework, not a CMS like Drupal, but it doesn't take very much work to get a simple site up and running. It uses convention over configuration to be that simple, so you've got to learn the conventions to really understand what's going on. I haven't used a Python web framework (except the one I wrote back in college), but I've heard good things about Django. If you have experience with Java, there's a Groovy framework called Grails that is similar to Ruby on Rails, but runs on Java servers. A: I once played around with CodeIgniter for a couple of weeks and found it pretty easy and fast to jump into. Check out this list of PHP frameworks: http://woork.blogspot.com/2008/11/20-great-php-framework-for-developers.html Joomla is also said to be amazing, although that's more of a Content Management System than just a framework. But it makes the design of the site really simple. A: It really depends on a couple things: What are you familiar with? You indicated that you've done some web development in the past. What did you use? If you were using classic ASP, then learning ASP.NET should be less of a jump for you. What are you trying to create? If all you need are static HTML files with a tiny bit of functionality, you could try learning PHP as it's pretty quick and easy to get going. If you need light database access, then maybe Ruby on Rails will be your cup of tea. With that being said, I'd recommend the following in no particular order (just because I've tried them and they're all pretty decent): Ruby on Rails ASP.NET / ASP.NET MVC PHP A: django on Google App Engine gets you free (up to a point) and scalable hosting
What's a good web framework and/or tool for a software developer?
I'd like to make a website, it's not a huge project, but I'm a bit out of the web design loop. The last time I made a website was probably around 2002. I figure the web frameworks and tools have come a ways since then. It's mostly the design aspect that I'd like to make easier. I can do the backend in any language. My question is: What are some tools or web frameworks that make the design aspect of making a website easier? It could be a framework in php/python/ruby. As far as tools go, free/open source is preferred, but I wouldn't mind looking at good commercial alternatives.
[ "You'll get many different subjective answers for your question, but as for me I would recommend django. It is flexible unlike CMS and the admin saves you alot of pain.\n", "For PHP, I like the CMS Drupal and have found it to be very fast in getting a site up and running. Drupal also has a ton of modules to do almost anything you want. It is also very customizable (although that takes a little reading to figure out how to do it).\nRuby's de facto standard web framework is Ruby on Rails. It's a straight web framework, not a CMS like Drupal, but it doesn't take very much work to get a simple site up and running. It uses convention over configuration to be that simple, so you've got to learn the conventions to really understand what's going on.\nI haven't used a Python web framework (except the one I wrote back in college), but I've heard good things about Django.\nIf you have experience with Java, there's a Groovy framework called Grails that is similar to Ruby on Rails, but runs on Java servers.\n", "I once played around with CodeIgniter for a couple of weeks and found it pretty easy and fast to jump into.\nCheck out this list of PHP frameworks:\nhttp://woork.blogspot.com/2008/11/20-great-php-framework-for-developers.html\nJoomla is also said to be amazing, although that's more of a Content Management System than just a framework. But it makes the design of the site really simple.\n", "It really depends on a couple things:\n\nWhat are you familiar with? You indicated that you've done some web development in the past. What did you use? If you were using classic ASP, then learning ASP.NET should be less of a jump for you.\nWhat are you trying to create? If all you need are static HTML files with a tiny bit of functionality, you could try learning PHP as it's pretty quick and easy to get going. If you need light database access, then maybe Ruby on Rails will be your cup of tea.\n\nWith that being said, I'd recommend the following in no particular order (just because I've tried them and they're all pretty decent):\n\nRuby on Rails\nASP.NET / ASP.NET MVC\nPHP\n\n", "django on Google App Engine gets you free(up to a point) and scalable hosting \n" ]
[ 5, 3, 1, 0, 0 ]
[]
[]
[ "php", "python", "ruby" ]
stackoverflow_0002204223_php_python_ruby.txt
Q: Using a PFX Certificate to connect to an HTTP site Here's the scenario: I need to connect to a web site to retrieve electronic lab results formatted in XML. In order to connect, I need to use a digital certificate. I've been able to get a version of this working in Perl. It looks like this: #!/usr/bin/env perl use strict; use WWW::Mechanize; $|++; my $username = 'xxx'; my $password = 'yyy'; $ENV{HTTPS_PKCS12_FILE} = 'CERTFILE.pfx'; $ENV{HTTPS_PKCS12_PASSWORD} = 'PathCert'; my $mech = WWW::Mechanize->new(); $mech->agent_alias('Windows IE 6'); $mech->get("https://www.example.org/xyz/,DanaInfo=999.33.1.10+"); $mech->get("https://www.example.org/xyz/isapi_pathnet.dll?Page=Login&Mode=Silent&UserID=xxx&Password=yyy,DanaInfo=999.33.1.10"); $mech->get("https://www.example.org/xyz/isapi_pathnet.dll?Page=HL7&Query=NewRequests,DanaInfo=999.33.1.10"); print $mech->content(); Now, this works when I run it from my workstation. However: If I compile it using perl2exe, it doesn't work. If I try to compile it with pp (e.g. "pp -r sslclient.pl"), all I get back is "500 SSL negotiation failed:" If I copy this whole directory to another computer, the script simply hangs at the first $mech->get() statement. What I really want is to find an equivalent to this in Python (the rest of my app is Python), but so far no luck. So, plenty of problems here. Anyone have any ideas?
Using a PFX Certificate to connect to an HTTP site
Here's the scenario: I need to connect to a web site to retrieve electronic lab results formatted in XML. In order to connect, I need to use a digital certificate. I've been able to get a version of this working in Perl. It looks like this: #!/usr/bin/env perl use strict; use WWW::Mechanize; $|++; my $username = 'xxx'; my $password = 'yyy'; $ENV{HTTPS_PKCS12_FILE} = 'CERTFILE.pfx'; $ENV{HTTPS_PKCS12_PASSWORD} = 'PathCert'; my $mech = WWW::Mechanize->new(); $mech->agent_alias('Windows IE 6'); $mech->get("https://www.example.org/xyz/,DanaInfo=999.33.1.10+"); $mech->get("https://www.example.org/xyz/isapi_pathnet.dll?Page=Login&Mode=Silent&UserID=xxx&Password=yyy,DanaInfo=999.33.1.10"); $mech->get("https://www.example.org/xyz/isapi_pathnet.dll?Page=HL7&Query=NewRequests,DanaInfo=999.33.1.10"); print $mech->content(); Now, this works when I run it from my workstation. However: If I compile it using perl2exe, it doesn't work. If I try to compile it with pp (e.g. "pp -r sslclient.pl"), all I get back is "500 SSL negotiation failed:" If I copy this whole directory to another computer, the script simply hangs at the first $mech->get() statement. What I really want is to find an equivalent to this in Python (the rest of my app is Python), but so far no luck. So, plenty of problems here. Anyone have any ideas?
[]
[]
[ "I have no idea what is going on with your perl issues. However, mechanize for Python can be found here.\n" ]
[ -1 ]
[ "certificate", "perl", "python", "ssl" ]
stackoverflow_0002204252_certificate_perl_python_ssl.txt
Q: Error "Could not locate a bind configured on mapper" for SQLAlchemy and pylons I'm not sure what I'm doing wrong here to warrant this message. Any help with my configuration would be appreciated. """The application's model objects""" import sqlalchemy as sa from sqlalchemy import orm from project.model import meta def now(): return datetime.datetime.now() def init_model(engine): """Call me before using any of the tables or classes in the model""" sm = orm.sessionmaker(autoflush=True, autocommit=True, bind=engine) meta.Session.configure(bind=engine) meta.engine = engine meta.Session = orm.scoped_session(sm) class User(object): pass t_user = sa.Table("User", meta.metadata, sa.Column("id", sa.types.Integer, primary_key=True), sa.Column("name", sa.types.String(100), nullable=False), sa.Column("first_name", sa.types.String(100), nullable=False), sa.Column("last_name", sa.types.String(100), nullable=False), sa.Column("email", sa.types.String(100), nullable=False), sa.Column("password", sa.types.String(32), nullable=False) ) orm.mapper(User,t_user) From the python console, I am executing: from project.model import * mr_jones = User() meta.Session.add(mr_jones) mr_jones.name = 'JR Jones' meta.Session.commit() And the error I receive is: sqlalchemy.exc.UnboundExecutionError: Could not locate a bind configured on mapper Mapper|User|User or this Session Thanks for your help. A: This issue was resolved. I didn't know that when using pylons from the CLI, I have to include the entire environment: from paste.deploy import appconfig from pylons import config from project.config.environment import load_environment conf = appconfig('config:development.ini', relative_to='.') load_environment(conf.global_conf, conf.local_conf) from project.model import * After this the database queries executed without a problem.
Error "Could not locate a bind configured on mapper" for SQLAlchemy and pylons
I'm not sure what I'm doing wrong here to warrant this message. Any help with my configuration would be appreciated. """The application's model objects""" import sqlalchemy as sa from sqlalchemy import orm from project.model import meta def now(): return datetime.datetime.now() def init_model(engine): """Call me before using any of the tables or classes in the model""" sm = orm.sessionmaker(autoflush=True, autocommit=True, bind=engine) meta.Session.configure(bind=engine) meta.engine = engine meta.Session = orm.scoped_session(sm) class User(object): pass t_user = sa.Table("User", meta.metadata, sa.Column("id", sa.types.Integer, primary_key=True), sa.Column("name", sa.types.String(100), nullable=False), sa.Column("first_name", sa.types.String(100), nullable=False), sa.Column("last_name", sa.types.String(100), nullable=False), sa.Column("email", sa.types.String(100), nullable=False), sa.Column("password", sa.types.String(32), nullable=False) ) orm.mapper(User,t_user) From the python console, I am executing: from project.model import * mr_jones = User() meta.Session.add(mr_jones) mr_jones.name = 'JR Jones' meta.Session.commit() And the error I receive is: sqlalchemy.exc.UnboundExecutionError: Could not locate a bind configured on mapper Mapper|User|User or this Session Thanks for your help.
[ "This issue was resolved. I didn't know that when using pylons from the CLI, I have to include the entire environment:\nfrom paste.deploy import appconfig\nfrom pylons import config\n\nfrom project.config.environment import load_environment\n\nconf = appconfig('config:development.ini', relative_to='.')\nload_environment(conf.global_conf, conf.local_conf)\n\nfrom project.model import *\n\nAfter this the database queries executed without a problem.\n" ]
[ 3 ]
[]
[]
[ "pylons", "python", "sqlalchemy" ]
stackoverflow_0002203496_pylons_python_sqlalchemy.txt
Q: I Keep receiving an "The MetaData is not bound to an Engine or Connection." when trying to create my SQL tables with pylons Here is my current code: def init_model(engine): global t_user t_user = sa.Table("User", meta.metadata, sa.Column("id", sa.types.Integer, primary_key=True), sa.Column("name", sa.types.String(100), nullable=False), sa.Column("first_name", sa.types.String(100), nullable=False), sa.Column("last_name", sa.types.String(100), nullable=False), sa.Column("email", sa.types.String(100), nullable=False), sa.Column("password", sa.types.String, nullable=False), autoload=True, autoload_with=engine ) orm.mapper(User, t_user) meta.Session.configure(bind=engine) meta.Session = orm.scoped_session(sm) meta.engine = engine I then try to execute: >>> meta.metadata.create_all(bind=meta.engine) And receive the error: raise exc.UnboundExecutionError(msg) sqlalchemy.exc.UnboundExecutionError: The MetaData is not bound to an Engine or Connection. Execution can not proceed without a database to execute against. Either execute with an explicit connection or assign the MetaData's .bind to enable implicit execution. In my development.ini I have: # SQLAlchemy database URL sqlalchemy.url = sqlite:///%(here)s/development.db I'm new to Python's pylons and have no idea how to resolve this message. This is probably an easy fix to the trained eye. Thank you. A: This issue was resolved. I didn't know that when using pylons from the CLI, I have to include the entire environment: from paste.deploy import appconfig from pylons import config from project.config.environment import load_environment conf = appconfig('config:development.ini', relative_to='.') load_environment(conf.global_conf, conf.local_conf) from project.model import * After this the database queries executed without a problem.
I Keep receiving an "The MetaData is not bound to an Engine or Connection." when trying to create my SQL tables with pylons
Here is my current code: def init_model(engine): global t_user t_user = sa.Table("User", meta.metadata, sa.Column("id", sa.types.Integer, primary_key=True), sa.Column("name", sa.types.String(100), nullable=False), sa.Column("first_name", sa.types.String(100), nullable=False), sa.Column("last_name", sa.types.String(100), nullable=False), sa.Column("email", sa.types.String(100), nullable=False), sa.Column("password", sa.types.String, nullable=False), autoload=True, autoload_with=engine ) orm.mapper(User, t_user) meta.Session.configure(bind=engine) meta.Session = orm.scoped_session(sm) meta.engine = engine I then try to execute: >>> meta.metadata.create_all(bind=meta.engine) And receive the error: raise exc.UnboundExecutionError(msg) sqlalchemy.exc.UnboundExecutionError: The MetaData is not bound to an Engine or Connection. Execution can not proceed without a database to execute against. Either execute with an explicit connection or assign the MetaData's .bind to enable implicit execution. In my development.ini I have: # SQLAlchemy database URL sqlalchemy.url = sqlite:///%(here)s/development.db I'm new to Python's pylons and have no idea how to resolve this message. This is probably an easy fix to the trained eye. Thank you.
[ "This issue was resolved. I didn't know that when using pylons from the CLI, I have to include the entire environment:\nfrom paste.deploy import appconfig\nfrom pylons import config\n\nfrom project.config.environment import load_environment\n\nconf = appconfig('config:development.ini', relative_to='.')\nload_environment(conf.global_conf, conf.local_conf)\n\nfrom project.model import *\n\nAfter this the database queries executed without a problem.\n" ]
[ 2 ]
[]
[]
[ "pylons", "python" ]
stackoverflow_0002202927_pylons_python.txt
Q: Why am I getting an error about my class defining __slots__ when trying to pickle an object? I'm trying to pickle an object of a (new-style) class I defined. But I'm getting the following error: >>> with open('temp/connection.pickle','w') as f: ... pickle.dump(c,f) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "/usr/lib/python2.5/pickle.py", line 1362, in dump Pickler(file, protocol).dump(obj) File "/usr/lib/python2.5/pickle.py", line 224, in dump self.save(obj) File "/usr/lib/python2.5/pickle.py", line 331, in save self.save_reduce(obj=obj, *rv) File "/usr/lib/python2.5/pickle.py", line 419, in save_reduce save(state) File "/usr/lib/python2.5/pickle.py", line 286, in save f(self, obj) # Call unbound method with explicit self File "/usr/lib/python2.5/pickle.py", line 649, in save_dict self._batch_setitems(obj.iteritems()) File "/usr/lib/python2.5/pickle.py", line 663, in _batch_setitems save(v) File "/usr/lib/python2.5/pickle.py", line 306, in save rv = reduce(self.proto) File "/usr/lib/python2.5/copy_reg.py", line 76, in _reduce_ex raise TypeError("a class that defines __slots__ without " TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled I didn't explicitly define __slots__ in my class. Did something I do implicitly define it? How do I work around this? Do I need to define __getstate__? Update: gnibbler chose a good example. The class of the object I'm trying to pickle wraps a socket. (It occurs to me now that) sockets define __slots__ and not __getstate__ for good reason. I assume once a process ends, another process can't unpickle and use the previous process's socket connection. So while I'm accepting Alex Martelli's excellent answer, I'm going to have to pursue a different strategy than pickling to "share" the object reference. A: The class defining __slots__ (and not __getstate__) can be either an ancestor class of yours, or a class (or ancestor class) of an attribute or item of yours, directly or indirectly: essentially, the class of any object in the directed graph of references with your object as root, since pickling needs to save the entire graph. A simple solution to your quandary is to use protocol -1, which means "the best protocol pickle can use"; the default is an ancient ASCII-based protocol which imposes this limitation about __slots__ vs __getstate__. Consider: >>> class sic(object): ... __slots__ = 'a', 'b' ... >>> import pickle >>> pickle.dumps(sic(), -1) '\x80\x02c__main__\nsic\nq\x00)\x81q\x01.' >>> pickle.dumps(sic()) Traceback (most recent call last): [snip snip] raise TypeError("a class that defines __slots__ without " TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled >>> As you see, protocol -1 takes the __slots__ in stride, while the default protocol gives the same exception you saw. The issues with protocol -1: it produces a binary string/file, rather than an ASCII one like the default protocol; the resulting pickled file would not be loadable by sufficiently ancient versions of Python. Advantages, besides the key one wrt __slots__, include more compact results, and better performance. If you're forced to use the default protocol, then you'll need to identify exactly which class is giving you trouble and exactly why.
We can discuss strategies if this is the case (but if you can possibly use the -1 protocol, that's so much better that it's not worth discussing;-) and simple code inspection looking for the troublesome class/object is proving too complicated (I have in mind some deepcopy-based tricks to get a usable representation of the whole graph, in case you're wondering). A: Perhaps an attribute of your instance is using __slots__ For example, socket has __slots__ so it can't be pickled You need to identify which attribute is causing the error and write your own __getstate__ and __setstate__ to ignore that attribute A: From PEP 307: The __getstate__ method should return a picklable value representing the object's state without referencing the object itself. If no __getstate__ method exists, a default implementation is used that returns self.__dict__.
Why am I getting an error about my class defining __slots__ when trying to pickle an object?
I'm trying to pickle an object of a (new-style) class I defined. But I'm getting the following error: >>> with open('temp/connection.pickle','w') as f: ... pickle.dump(c,f) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "/usr/lib/python2.5/pickle.py", line 1362, in dump Pickler(file, protocol).dump(obj) File "/usr/lib/python2.5/pickle.py", line 224, in dump self.save(obj) File "/usr/lib/python2.5/pickle.py", line 331, in save self.save_reduce(obj=obj, *rv) File "/usr/lib/python2.5/pickle.py", line 419, in save_reduce save(state) File "/usr/lib/python2.5/pickle.py", line 286, in save f(self, obj) # Call unbound method with explicit self File "/usr/lib/python2.5/pickle.py", line 649, in save_dict self._batch_setitems(obj.iteritems()) File "/usr/lib/python2.5/pickle.py", line 663, in _batch_setitems save(v) File "/usr/lib/python2.5/pickle.py", line 306, in save rv = reduce(self.proto) File "/usr/lib/python2.5/copy_reg.py", line 76, in _reduce_ex raise TypeError("a class that defines __slots__ without " TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled I didn't explicitly define __slots__ in my class. Did something I do implicitly define it? How do I work around this? Do I need to define __getstate__? Update: gnibbler chose a good example. The class of the object I'm trying to pickle wraps a socket. (It occurs to me now that) sockets define __slots__ and not __getstate__ for good reason. I assume once a process ends, another process can't unpickle and use the previous process's socket connection. So while I'm accepting Alex Martelli's excellent answer, I'm going to have to pursue a different strategy than pickling to "share" the object reference.
[ "The class defining __slots__ (and not __getstate__) can be either an ancestor class of yours, or a class (or ancestor class) of an attribute or item of yours, directly or indirectly: essentially, the class of any object in the directed graph of references with your object as root, since pickling needs to save the entire graph.\nA simple solution to your quandary is to use protocol -1, which means \"the best protocol pickle can use\"; the default is an ancient ASCII-based protocol which imposes this limitation about __slots__ vs __getstate__. Consider:\n>>> class sic(object):\n... __slots__ = 'a', 'b'\n... \n>>> import pickle\n>>> pickle.dumps(sic(), -1)\n'\\x80\\x02c__main__\\nsic\\nq\\x00)\\x81q\\x01.'\n>>> pickle.dumps(sic())\nTraceback (most recent call last):\n [snip snip]\n raise TypeError(\"a class that defines __slots__ without \"\nTypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled\n>>> \n\nAs you see, protocol -1 takes the __slots__ in stride, while the default protocol gives the same exception you saw.\nThe issues with protocol -1: it produces a binary string/file, rather than an ASCII one like the default protocol; the resulting pickled file would not be loadable by sufficiently ancient versions of Python. Advantages, besides the key one wrt __slots__, include more compact results, and better performance.\nIf you're forced to use the default protocol, then you'll need to identify exactly which class is giving you trouble and exactly why. We can discuss strategies if this is the case (but if you can possibly use the -1 protocol, that's so much better that it's not worth discussing;-) and simple code inspection looking for the troublesome class/object is proving too complicated (I have in mind some deepcopy-based tricks to get a usable representation of the whole graph, in case you're wondering).\n", "Perhaps an attribute of your instance is using __slots__\nFor example, socket has __slots__ so it can't be pickled\nYou need to identify which attribute is causing the error and write your own\n__getstate__ and __setstate__ to ignore that attribute\n", "From PEP 307:\n\nThe __getstate__ method should return a picklable value\n representing the object's state without referencing the object\n itself. If no __getstate__ method exists, a default\n implementation is used that returns self.__dict__.\n\n" ]
[ 32, 7, 2 ]
[]
[]
[ "pickle", "python", "slots" ]
stackoverflow_0002204155_pickle_python_slots.txt
Q: How to isolate a single color in an image I'm using the python OpenCV bindings and at the moment I'm trying to isolate a color range. That means I want to filter out everything that is not reddish. I tried to take only the red color channel but this includes the white areas in the image too. What is a good way to do that? A: Use a different color space: http://en.wikipedia.org/wiki/HSL_color_space A: Use the HSV colorspace. Select pixels that have an H value in the range that you consider to contain "red," and an S value large enough that you do not consider it to be neutral, maroon, brown, or pink. You might also need to throw out pixels with low V's. The H dimension is a circle, and red is right where the circle is split, so your H range will be in two parts, one near 255, the other near 0. A: How about using a formula like r' = r-(g+b)?
How to isolate a single color in an image
I'm using the python OpenCV bindings and at the moment I'm trying to isolate a color range. That means I want to filter out everything that is not reddish. I tried to take only the red color channel but this includes the white areas in the image too. What is a good way to do that?
[ "Use a different color space: http://en.wikipedia.org/wiki/HSL_color_space\n", "Use the HSV colorspace. Select pixels that have an H value in the range that you consider to contain \"red,\" and an S value large enough that you do not consider it to be neutral, maroon, brown, or pink. You might also need to throw out pixels with low V's. The H dimension is a circle, and red is right where the circle is split, so your H range will be in two parts, one near 255, the other near 0.\n", "How about using a formular like r' = r-(g+b)?\n" ]
[ 4, 1, 0 ]
[]
[]
[ "color_space", "image_processing", "opencv", "python" ]
stackoverflow_0000968317_color_space_image_processing_opencv_python.txt
Q: Transform items from iterable with a sequence of unary functions I frequently find myself needing to apply a sequence of unary functions to a sequence of the same length. My first thought is to go with map(), however this only takes a single function to be applied to all items in the sequence. In the following code for example, I wish to apply str.upper() to the first item, and int to the second item in each a. "transform" is a placeholder for the effect I'm after. COLS = tuple([transform((str.upper, int), a.split(",")) for a in "pid,5 user,8 program,28 dev,10 sent,9 received,15".split()]) Is there some standard library, or other nice implementation that can perform a transformation such as this neatly? A: What about...: def transform(functions, arguments): return [f(a) for f, a in zip(functions, arguments)] A: >>> s="pid,5 user,8 program,28 dev,10 sent,9 received,15".split() >>> [ ( m.upper(),int(n)) for m, n in [i.split(",") for i in s ] ] [('PID', 5), ('USER', 8), ('PROGRAM', 28), ('DEV', 10), ('SENT', 9), ('RECEIVED', 15)] A: I'm currently using this: def transform(unaries, iterable): return map(lambda a, b: a(b), unaries, iterable)
Transform items from iterable with a sequence of unary functions
I frequently find myself needing to apply a sequence of unary functions to a sequence of the same length. My first thought is to go with map(), however this only takes a single function to be applied to all items in the sequence. In the following code for example, I wish to apply str.upper() to the first item, and int to the second item in each a. "transform" is a placeholder for the effect I'm after. COLS = tuple([transform((str.upper, int), a.split(",")) for a in "pid,5 user,8 program,28 dev,10 sent,9 received,15".split()]) Is there some standard library, or other nice implementation that can perform a transformation such as this neatly?
[ "What about...:\ndef transform(functions, arguments):\n return [f(a) for f, a in zip(functions, arguments)]\n\n", ">>> s=\"pid,5 user,8 program,28 dev,10 sent,9 received,15\".split()\n>>> [ ( m.upper(),int(n)) for m, n in [i.split(\",\") for i in s ] ]\n[('PID', 5), ('USER', 8), ('PROGRAM', 28), ('DEV', 10), ('SENT', 9), ('RECEIVED', 15)]\n\n", "I'm currently using this:\ndef transform(unaries, iterable):\n return map(lambda a, b: a(b), unaries, iterable)\n\n" ]
[ 3, 1, 1 ]
[]
[]
[ "map", "predicate", "python", "transform", "unary_function" ]
stackoverflow_0002204733_map_predicate_python_transform_unary_function.txt
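Plugging the zip-based transform from the first answer back into the original COLS expression gives a complete, runnable example (Python 2 syntax, matching the question):

def transform(functions, arguments):
    return [f(a) for f, a in zip(functions, arguments)]

spec = "pid,5 user,8 program,28 dev,10 sent,9 received,15"
COLS = tuple(transform((str.upper, int), a.split(",")) for a in spec.split())
print COLS
# (['PID', 5], ['USER', 8], ['PROGRAM', 28], ['DEV', 10], ['SENT', 9], ['RECEIVED', 15])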
Q: Write timestamp to file every hour in Python I have a python script that is constantly grabbing data from Twitter and writing the messages to a file. The question that I have is every hour, I want my program to write the current time to the file. Below is my script. Currently, it gets into the timestamp function and just keeps printing out the time every 10 seconds. #! /usr/bin/env python import tweetstream import simplejson import urllib import time import datetime import sched class twit: def __init__(self,uname,pswd,filepath): self.uname=uname self.password=pswd self.filepath=open(filepath,"wb") def main(self): i=0 s = sched.scheduler(time.time, time.sleep) output=self.filepath #Grab every tweet using Streaming API with tweetstream.TweetStream(self.uname, self.password) as stream: for tweet in stream: if tweet.has_key("text"): try: #Write tweet to file and print it to STDOUT message=tweet['text']+ "\n" output.write(message) print tweet['user']['screen_name'] + ": " + tweet['text'], "\n" ################################ #Timestamp code #Timestamps should be placed once every hour s.enter(10, 1, t.timestamp, (s,)) s.run() except KeyError: pass def timestamp(self,sc): now = datetime.datetime.now() current_time= now.strftime("%Y-%m-%d %H:%M") print current_time self.filepath.write(current_time+"\n") if __name__=='__main__': t=twit("rohanbk","cookie","tweets.txt") t.main() Is there any way for my script to do it without constantly checking the time every other minute with an IF statement to see how much time has elapsed? Can I use a scheduled task like how I've done above with a slight modification to my current implementation? A: your code sc.enter(10, 1, t.timestamp, (sc,)) is asking to be scheduled again in 10 seconds. If you want to be scheduled once an hour, sc.enter(3600, 1, t.timestamp, (sc,)) seems better, since an hour is 3600 seconds, not 10! Also, the line s.enter(10, 1, t.timestamp, (s,)) gets a timestamp 10 seconds after every tweet written -- what's the point of that? Just schedule the first invocation of timestamp once, outside the loop, as well as changing its periodicity from 10 seconds to 3600.
Write timestamp to file every hour in Python
I have a python script that is constantly grabbing data from Twitter and writing the messages to a file. The question that I have is every hour, I want my program to write the current time to the file. Below is my script. Currently, it gets into the timestamp function and just keeps printing out the time every 10 seconds. #! /usr/bin/env python import tweetstream import simplejson import urllib import time import datetime import sched class twit: def __init__(self,uname,pswd,filepath): self.uname=uname self.password=pswd self.filepath=open(filepath,"wb") def main(self): i=0 s = sched.scheduler(time.time, time.sleep) output=self.filepath #Grab every tweet using Streaming API with tweetstream.TweetStream(self.uname, self.password) as stream: for tweet in stream: if tweet.has_key("text"): try: #Write tweet to file and print it to STDOUT message=tweet['text']+ "\n" output.write(message) print tweet['user']['screen_name'] + ": " + tweet['text'], "\n" ################################ #Timestamp code #Timestamps should be placed once every hour s.enter(10, 1, t.timestamp, (s,)) s.run() except KeyError: pass def timestamp(self,sc): now = datetime.datetime.now() current_time= now.strftime("%Y-%m-%d %H:%M") print current_time self.filepath.write(current_time+"\n") if __name__=='__main__': t=twit("rohanbk","cookie","tweets.txt") t.main() Is there any way for my script to do it without constantly checking the time every other minute with an IF statement to see how much time has elapsed? Can I use a scheduled task like how I've done above with a slight modification to my current implementation?
[ "your code\nsc.enter(10, 1, t.timestamp, (sc,)\n\nis asking to be scheduled again in 10 seconds. If you want to be scheduled once an hour,\nsc.enter(3600, 1, t.timestamp, (sc,)\n\nseems better, since an hour is 3600 seconds, not 10!\nAlso, the line\ns.enter(1, 1, t.timestamp, (s,))\n\ngets a timestamp 1 second after every tweet written -- what's the point of that? Just schedule the first invocation of timestamp once, outside the loop, as well as changing its periodicity from 10 seconds to 3600.\n" ]
[ 4 ]
[]
[]
[ "file_io", "python" ]
stackoverflow_0002204856_file_io_python.txt
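If sched feels awkward inside a blocking stream loop, one alternative sketch (an assumption on my part, not part of the answer above) is a self-rearming threading.Timer, so the hourly write never delays tweet processing:

import datetime
import threading

def write_timestamp(output):
    # Write the current time, then re-arm to fire again in one hour.
    now = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    output.write(now + "\n")
    timer = threading.Timer(3600, write_timestamp, (output,))
    timer.daemon = True   # don't keep the process alive just for timestamps
    timer.start()

# Call once before entering the tweet loop, e.g. write_timestamp(self.filepath)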
Q: Creating "classes" with Django I'm just learning Django so feel free to correct me in any of my assumptions. I probably just need my mindset adjusted. What I'm trying to do is creating a "class" in an OOP style. For example, let's say we're designing a bunch of Rooms. Each Room has Furniture. And each piece of Furniture has a Type and a Color. What I can see so far is that I can have class FurnitureType(models.Model): name = models.CharField(max_length=200) class FurnitureColor(models.Model): name = models.CharField(max_length=50) class FurniturePiece(models.Model): type = models.ForeignKey(FurnitureType) color = models.ForeignKey(FurnitureColor) sqft = models.IntegerField() name = models.CharField(max_length=200) class Room(models.Model): name = models.CharField(max_length=200) furnitures = models.ManyToManyField(FurniturePiece) The problem is that each FurniturePiece has to have a unique name if I'm picking it out of the Django admin interface. If one person creates "Green Couch" then no one else can have a "Green Couch". What I'm wondering is if a) I need to learn more about Django UI and this is the right way to design this in Django or b) I have a bad design for this domain The reason I want Furniture name to be unique is because 10 people could create a "Green Couch" each with a different sqft. A: I don't get the problem with unique name. You can just specify it to be unique: class FurniturePiece(models.Model): type = models.ForeignKey(FurnitureType) color = models.ForeignKey(FurnitureColor) sqft = models.IntegerField() name = models.CharField(max_length=200, unique=True) I don't know whether you have to learn about Django UI or not. I guess you have to learn how to define models. The admin interface is just a generated interface based on your models. You can change the interface in certain aspects without changing the models, but besides that, there is less to learn about the admin interface. I suggest you follow a tutorial like the djangobook, to get a good start with Django. I think, the problem that you have is not how to use Django but more that you don't know how to model your application in general. First you have to think about which entities do yo have (like Room, Furniture, etc.). Then think about what relations they have. Afterwards you can model them in Django. Of course in order to do this you have to know how to model the relations. The syntax might be Django specific but the logical relations are not. E.g. a many-to-many relation is not something Django specific, this is a term used in databases to express a certain relationship. Djangos models are just abstraction of the database design below. E.g you specified a many-to-many relationship between Room and FurniturePiece. Now the question: Is this what you want? It means that a piece of furniture can belong to more than one room. This sounds strange. So maybe you want to model it that a piece of furniture only belongs to one room. But a room should still have several pieces of furniture. We therefore define a relationship from FurniturePiece to Room. In Django, we can express this with: class FurniturePiece(models.Model): room = models.ForeignKey(Room) type = models.ForeignKey(FurnitureType) color = models.ForeignKey(FurnitureColor) sqft = models.IntegerField() name = models.CharField(max_length=200) Maybe you should first learn about relational databases to get the basics before you model your application with Django. It might be that this not necessary in order to create an application in Django. 
But it will definitely help you to understand what's going on, for every ORM not just Django's. A: Why does each FurniturePiece need to have a unique name? It seems to me that if you remove that constraint everything just works. (as an aside you seem to have accidentally dropped the models.Model base class for all but the Room model). A: This is how I would do it: class Room(models.Model): name = models.CharField(max_length=255) pieces = models.ManyToManyField('FurniturePiece') class FurniturePiece(models.Model): itemid = models.CharField(max_length=20, unique=True) # This is what I would require to be unique. name = models.CharField(max_length=255) type = models.ForeignKey('FurnitureType') # Note I put 'FurnitureType' in quotes because it hasn't been written yet (coming next). color = models.ForeignKey('FurnitureColor') # Same here. width_in_inches = models.PositiveIntegerField() length_in_inches = models.PositiveIntegerField() # Next is the property decorator which allows a method to be called without using () @property def sqft(self): return (self.length_in_inches * self.width_in_inches) / 144 # Obviously this is rough. class FurnitureType(models.Model): name = models.CharField(max_length=255) class FurnitureColor(models.Model): name = models.CharField(max_length=255) Envision objects as real life objects, and you'll have a deeper understanding of the code as well. The reason for my sqft method is that data is best when normalized as much as possible. If you have a width and length, then when somebody asks, you have length, width, sqft, and if you add height, volume as well.
Creating "classes" with Django
I'm just learning Django so feel free to correct me in any of my assumptions. I probably just need my mindset adjusted. What I'm trying to do is creating a "class" in an OOP style. For example, let's say we're designing a bunch of Rooms. Each Room has Furniture. And each piece of Furniture has a Type and a Color. What I can see so far is that I can have class FurnitureType(models.Model): name = models.CharField(max_length=200) class FurnitureColor(models.Model): name = models.CharField(max_length=50) class FurniturePiece(models.Model): type = models.ForeignKey(FurnitureType) color = models.ForeignKey(FurnitureColor) sqft = models.IntegerField() name = models.CharField(max_length=200) class Room(models.Model): name = models.CharField(max_length=200) furnitures = models.ManyToManyField(FurniturePiece) The problem is that each FurniturePiece has to have a unique name if I'm picking it out of the Django admin interface. If one person creates "Green Couch" then no one else can have a "Green Couch". What I'm wondering is if a) I need to learn more about Django UI and this is the right way to design this in Django or b) I have a bad design for this domain The reason I want Furniture name to be unique is because 10 people could create a "Green Couch" each with a different sqft.
[ "I don't get the problem with unique name. You can just specify it to be unique:\nclass FurniturePiece(models.Model):\n type = models.ForeignKey(FurnitureType)\n color = models.ForeignKey(FurnitureColor)\n sqft = models.IntegerField()\n name = models.CharField(max_length=200, unique=True)\n\nI don't know whether you have to learn about Django UI or not. I guess you have to learn how to define models. The admin interface is just a generated interface based on your models. You can change the interface in certain aspects without changing the models, but besides that, there is less to learn about the admin interface.\nI suggest you follow a tutorial like the djangobook, to get a good start with Django.\n\nI think, the problem that you have is not how to use Django but more that you don't know how to model your application in general.\n\nFirst you have to think about which entities do yo have (like Room, Furniture, etc.). \nThen think about what relations they have. \nAfterwards you can model them in Django. Of course in order to do this you have to know how to model the relations. The syntax might be Django specific but the logical relations are not. E.g. a many-to-many relation is not something Django specific, this is a term used in databases to express a certain relationship.\n\nDjangos models are just abstraction of the database design below.\n\nE.g you specified a many-to-many relationship between Room and FurniturePiece.\nNow the question: Is this what you want? It means that a piece of furniture can belong to more than one room. This sounds strange. So maybe you want to model it that a piece of furniture only belongs to one room. But a room should still have several pieces of furniture. We therefore define a relationship from FurniturePiece to Room.\nIn Django, we can express this with:\nclass FurniturePiece(models.Model):\n room = models.ForeignKey(Room)\n type = models.ForeignKey(FurnitureType)\n color = models.ForeignKey(FurnitureColor)\n sqft = models.IntegerField()\n name = models.CharField(max_length=200)\n\n\nMaybe you should first learn about relational databases to get the basics before you model your application with Django.\nIt might be that this not necessary in order to create an application in Django. But it will definitely help you to understand whats going on, for every ORM not just Django's.\n", "Why does each FurniturePiece need to have a unique name? 
It seems to me that if you remove that constraint everything just works.\n(as an aside you seem to have accidentally dropped the models.Model base class for all but the Room model).\n", "This is how I would do it:\nclass Room(models.Model):\n name = models.CharField(max_length=255)\n pieces = models.ManyToManyField('FurniturePiece')\n\nclass FurniturePiece(models.Model):\n itemid = models.CharField(max_length=20, unique=True) # This is what I would require to be unique.\n name = models.CharField(max_length=255)\n type = models.ForeignKey('FurnitureType') # Note I put 'FurnitureType' in quotes because it hasn't been written yet (coming next).\n color = models.ForeignKey('FurnitureColor') # Same here.\n width_in_inches = models.PositiveIntegerField()\n length_in_inches = models.PositiveIntegerField()\n\n # Next is the property decorator which allows a method to be called without using ()\n @property\n def sqft(self):\n return (self.length_in_inches * self.width_in_inches) / 144 # Obviously this is rough.\n\n\nclass FurnitureType(models.Model):\n name = models.CharField(max_length=255)\n\nclass FurnitureColor(models.Model):\n name = models.CharField(max_length=255)\n\nEnvision objects as real life objects, and you'll have a deeper understanding of the code as well. The reason for my sqft method is that data is best when normalized as much as possible. If you have a width and length, then when somebody asks, you have length, width, sqft, and if you add height, volume as well.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002204874_django_python.txt
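A quick sketch of what the unique=True suggestion from the first answer does at the ORM level; couch and green stand in for hypothetical FurnitureType and FurnitureColor instances, and a configured Django project is assumed:

from django.db import IntegrityError

FurniturePiece.objects.create(type=couch, color=green, sqft=12, name="Green Couch")
try:
    # A second piece with the same name violates the unique constraint.
    FurniturePiece.objects.create(type=couch, color=green, sqft=20, name="Green Couch")
except IntegrityError:
    print "second 'Green Couch' rejected by the database"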
Q: Restricting RSS elements by date with feedparser. [Python] I iterate an RSS feed like so, where _file is the feed d = feedparser.parse(_file) for element in d.entries: print repr(element.date) The date output comes out like so u'Thu, 16 Jul 2009 15:18:22 EDT' I can't seem to understand how to actually quantify the above date output so I can use it to limit feed elements. So what I am asking is: how can I get an actual time out of this, so I can say if it is greater than 7 days old, skip this element. A: feedparser is supposed to give you a struct_time object from Python's time module. I'm guessing it doesn't recognize that date format and so is giving you the raw string. See here on how to add support for parsing malformed timestamps: http://pythonhosted.org/feedparser/date-parsing.html If you manage to get it to give you the struct_time, you can read more about that here: http://docs.python.org/library/time.html#time.struct_time struct_time objects have everything you need. They have these members: time.struct_time(tm_year=2010, tm_mon=2, tm_mday=4, tm_hour=23, tm_min=44, tm_sec=19, tm_wday=3, tm_yday=35, tm_isdst=0) I generally convert the structs to seconds, like this: import time import calendar struct = time.localtime() seconds = calendar.timegm(struct) Then you can just do regular math to see how many seconds have elapsed, or use the datetime module to do timedeltas. A: one way >>> import time >>> t=time.strptime("Thu, 16 Jul 2009 15:18:22 EDT","%a, %d %b %Y %H:%M:%S %Z") >>> sevendays=86400*7 >>> current=time.strftime ("%s",time.localtime()) >>> if int(current) - time.mktime(t) > sevendays: print "more than 7 days" you can also see the datetime module and make use of timedelta() for date calculations. A: If you install the dateutil module: import dateutil.parser as dp import dateutil.tz as dtz import datetime date_string=u'Thu, 16 Jul 2009 15:18:22 EDT' adatetime=dp.parse(date_string) print(adatetime) # 2009-07-16 15:18:22-04:00 now=datetime.datetime.now(dtz.tzlocal()) print(now) # 2010-02-04 23:35:52.428766-05:00 aweekago=now-datetime.timedelta(days=7) print(aweekago) # 2010-01-28 23:35:52.428766-05:00 if adatetime<aweekago: print('old news') If you are using Ubuntu, dateutil is provided by the python-dateutil package.
Restricting RSS elements by date with feedparser. [Python]
I iterate an RSS feed like so, where _file is the feed d = feedparser.parse(_file) for element in d.entries: print repr(element.date) The date output comes out like so u'Thu, 16 Jul 2009 15:18:22 EDT' I can't seem to understand how to actually quantify the above date output so I can use it to limit feed elements. So what I am asking is: how can I get an actual time out of this, so I can say if it is greater than 7 days old, skip this element.
[ "feedparser is supposed to give you a struct_time object from Python's time module. I'm guessing it doesn't recognize that date format and so is giving you the raw string.\nSee here on how to add support for parsing malformed timestamps:\nhttp://pythonhosted.org/feedparser/date-parsing.html\nIf you manage to get it to give you the struct_time, you can read more about that here:\nhttp://docs.python.org/library/time.html#time.struct_time\nstruct_time objects have everything you need. They have these members:\ntime.struct_time(tm_year=2010, tm_mon=2, tm_mday=4, tm_hour=23, tm_min=44, tm_sec=19, tm_wday=3, tm_yday=35, tm_isdst=0)\nI generally convert the structs to seconds, like this:\nimport time\nimport calendar\n\nstruct = time.localtime()\nseconds = calendar.timegm(struct)\n\nThen you can just do regular math to see how many seconds have elapsed, or use the datetime module to do timedeltas.\n", "one way\n>>> import time\n>>> t=time.strptime(\"Thu, 16 Jul 2009 15:18:22 EDT\",\"%a, %d %b %Y %H:%M:%S %Z\")\n>>> sevendays=86400*7\n>>> current=time.strftime (\"%s\",time.localtime())\n>>> if int(current) - time.mktime(t) > sevendays:\n print \"more than 7 days\"\n\nyou can also see the datetime module and make use of timedelta() for date calculations.\n", "If you install the dateutil module:\nimport dateutil.parser as dp\nimport dateutil.tz as dtz\nimport datetime\n\ndate_string=u'Thu, 16 Jul 2009 15:18:22 EDT'\nadatetime=dp.parse(date_string)\nprint(adatetime) \n# 2009-07-16 15:18:22-04:00\n\nnow=datetime.datetime.now(dtz.tzlocal())\nprint(now)\n# 2010-02-04 23:35:52.428766-05:00\n\naweekago=now-datetime.timedelta(days=7)\nprint(aweekago)\n# 2010-01-28 23:35:52.428766-05:00\n\nif adatetime<aweekago:\n print('old news')\n\nIf you are using Ubuntu, dateutil is provided by the python-dateutil package.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "feedparser", "python" ]
stackoverflow_0002204858_feedparser_python.txt
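A sketch tying the answers together: feedparser usually exposes a pre-parsed UTC struct_time next to the raw string (the attribute name is updated_parsed in recent versions; older feeds/versions may offer date_parsed instead, so check yours), which turns the 7-day cutoff into a plain number comparison:

import calendar
import time
import feedparser

d = feedparser.parse(_file)
cutoff = time.time() - 7 * 86400   # 7 days ago, in seconds

for element in d.entries:
    parsed = getattr(element, 'updated_parsed', None)
    if parsed is None or calendar.timegm(parsed) < cutoff:
        continue   # skip unparsable or older-than-a-week entries
    print repr(element.date)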
Q: Which is web.py killer app? A killer app is an app that makes a library or framework famous. I think web.py is quite famous, but I don't know any big, widely used app written in web.py. Could you point out any? I've heard that the first version of youtube.com was coded using web.py but I'd like you to mention an open source one so I can see its code. A: From the web.py website, here is a list of "Real Web Apps" written in web.py. None of them has yet become the next twitter. redditriver.com: a mobile version of reddit.com webme: a blogging and podcasting system webr: a flickr powered photo gallery http://www.colr.org/ (v5): A site for playing with colors. todo: a simple web.py example where you can create, delete and edit-in-place an item music-share: a simple web app for music sharing (mp3 files). Google Modules : an iGoogle Gadget directory written in MVC style. Mailer : a very simple mass mailer. MLSS Admin : a system to rate, comment and accept candidates for conferences and likes. Wikitrivia : take randomly generated quizzes using Wikipedia. A: Well, when Reddit first moved from Common Lisp to Python they used web.py (src, src). That made a pretty big splash at the time, which might explain some of its popularity. It should be said, I guess, that Reddit has since abandoned web.py in favor of Pylons. A: A killer app isn't necessarily defined by whether it makes the framework/language famous. The Killer 'App' is an application that disrupts its industry in such a way that the rules or status quo of that industry is changed forever. E.g. the iPhone. It is 'killer' because it 'kills' off market leaders in its industry. A by-product of a killer app is that the technology used to bring it to market gets a fair amount of media attention. This inevitably happens because the majority of developers (in our case) who operate in the killer app's industry want to be up on the latest trends/concepts/etc... In the iPhone's case: It has sold millions; meaning the iPhone is a cash cow; therefore developers flock; developers need to then know what technology to use; more developers=more attention; etc... Obviously this is a short summary of what generally happens. Anyway... Pylons is pretty good. It's gaining quite a bit of attention because it is easily customizable, robust and focused on doing what is necessary, which is rapid development. There is also Turbo Gears, but the latest version is built on Pylons anyway. Hope this helps.
Which is web.py killer app?
A killer app is an app that makes a library or framework famous. I think web.py is quite famous, but I don't know any big, widely used app written in web.py. Could you point out any? I've heard that the first version of youtube.com was coded using web.py but I'd like you to mention an open source one so I can see its code.
[ "From web.py website here is a list of \"Real Web Apps\" written in web.py. None of them has yet become the next twitter.\n\nredditriver.com: a mobile version of reddit.com\nwebme: a blogging and podcasting system\nwebr: a flickr powered photo gallery\nhttp://www.colr.org/ (v5): A site for playing with colors.\ntodo: a simple web.py example where you can create, delete and edit-in-place an item\nmusic-share: a simple web app for music sharing (mp3 files).\nGoogle Modules : an iGoogle Gadget directory written in MVC style.\nMailer : a very simple mass mailer.\nMLSS Admin : a system to rate, comment and accept candidates for conferences and likes.\nWikitrivia : take randomly generated quizes generated using Wikipedia. \n\n", "Well, when Reddit first moved from Common Lisp to Python they used web.py (src, src). That made a pretty big splash at the time, which might explain some of its popularity. It should be said, I guess, that Reddit has since abandoned web.py in favor of Pylons.\n", "A killer app isn't necessarily defined by whether it makes the framework/language famous. The Killer 'App' is an application that disrupts its industry in such a way that the rules or status quo of that industry is changed forever. E.g. the iPhone. \nIt is 'killer' because it 'kills' off market leaders in its industry.\nA by-product of a killer app is that the technology used to bring it to market gets a fair amount of media attention. This is inevitably happens because the majority of developers(in our case) who operate in the killer app's industry want to be up on the latest trends/concepts/etc...\nIn the iPhone's case: It has sold millions; meaning the iPhone is a cash cow; therefore developers flock; developers need to then know what technology to use; more developers=more attention; etc...\nObviously this is a short summary on what generally happens.\nAnyway... Pylons is pretty good. Its gaining quite a bit of attention because it is easily customizable, robust and focused on doing what is necessary which is rapid development. \nThere is also Turbo Gears, but the latest version is built on Pylons anyway.\nHope this helps. \n" ]
[ 7, 4, 0 ]
[]
[]
[ "python", "web.py" ]
stackoverflow_0002187610_python_web.py.txt
Q: Translate a python dict into a Solr query string I'm just getting started with Python, and I'm stuck on the syntax that I need to convert a set of request.POST parameters to Solr's query syntax. The use case is a form defined like this: class SearchForm(forms.Form): text = forms.CharField() metadata = forms.CharField() figures = forms.CharField() Upon submission, the form needs to generate a url-encoded string to pass to a URL client that looks like this: text:myKeyword1+AND+metadata:myKeyword2+AND+figures:myKeyword3 Seems like there should be a simple way to do this. The closest I can get is for f, v in request.POST.iteritems(): if v: qstring += u'%s\u003A%s+and+' % (f,v) However in that case the colon (\u003A) generates a Solr error because it is not getting encoded properly. What is the clean way to do this? Thanks! A: I would recommend creating a function in the form to handle this rather than in the view. So after you call form.is_valid() to ensure the data is acceptable, you can invoke a form.generateSolrQuery() (or whatever name you would like). class SearchForm(forms.Form): text = forms.CharField() metadata = forms.CharField() figures = forms.CharField() def generateSolrQuery(self): return "+AND+".join(["%s\u003A%s" % (f,v) for f,v in self.cleaned_data.items()]) A: Isn't u'\u003A' simply the same as u':', which is escaped in URLs as %3A ? qstring = '+AND+'.join( [ u'%s%%3A%s'%(f,v) for f,v in request.POST.iteritems() if f] ) A: Use django-haystack. It takes care of all the interfacing with Solr - both indexing and querying.
Translate a python dict into a Solr query string
I'm just getting started with Python, and I'm stuck on the syntax that I need to convert a set of request.POST parameters to Solr's query syntax. The use case is a form defined like this: class SearchForm(forms.Form): text = forms.CharField() metadata = forms.CharField() figures = forms.CharField() Upon submission, the form needs to generate a url-encoded string to pass to a URL client that looks like this: text:myKeyword1+AND+metadata:myKeyword2+AND+figures:myKeyword3 Seems like there should be a simple way to do this. The closest I can get is for f, v in request.POST.iteritems(): if v: qstring += u'%s\u003A%s+and+' % (f,v) However in that case the colon (\u003A) generates a Solr error because it is not getting encoded properly. What is the clean way to do this? Thanks!
[ "I would recommend creating a function in the form to handle this rather than in the view. So after you call form.isValid() to ensure the data is acceptable. You can invoke a form.generateSolrQuery() (or whatever name you would like).\nclass SearchForm(forms.Form):\n text = forms.CharField()\n metadata = forms.CharField()\n figures = forms.CharField()\n\n def generateSolrQuery(self):\n return \"+and+\".join([\"%s\\u003A%s\" % (f,v) for f,v in self.cleaned_data.items()])\n\n", "Isn't u'\\u003A' simply the same as u':', which is escaped in URLs as %3A ?\nqstring = '+AND+'.join(\n [ u'%s%%3A%s'%(f,v) for f,v in request.POST.iteritems() if f]\n ) \n\n", "Use django-haystack. It takes care of all the interfacing with Solr - both indexing and querying.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "django", "python", "solr" ]
stackoverflow_0002203514_django_python_solr.txt
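A sketch of the view-side version with explicit URL quoting, so the colon becomes %3A and spaces become + without hand-rolled escapes (solr_query is a hypothetical helper name; AND is uppercase because Solr's boolean operators are case-sensitive):

import urllib

def solr_query(post):
    parts = [u'%s:%s' % (f, v) for f, v in post.iteritems() if v]
    return urllib.quote_plus(u' AND '.join(parts).encode('utf8'))

# solr_query({'text': 'myKeyword1', 'metadata': 'myKeyword2'})
# -> 'text%3AmyKeyword1+AND+metadata%3AmyKeyword2'
# (field order follows dict iteration, which is arbitrary in Python 2)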
Q: TypeErrors using metaclasses in conjunction with multiple inheritance I have two questions concerning metaclasses and multiple inheritance. The first is: Why do I get a TypeError for the class Derived but not for Derived2? class Metaclass(type): pass class Klass(object): __metaclass__ = Metaclass #class Derived(object, Klass): pass # if I uncomment this, I get a TypeError class OtherClass(object): pass class Derived2(OtherClass, Klass): pass # I do not get a TypeError for this The exact error message is: TypeError: Error when calling the metaclass bases Cannot create a consistent method resolution order (MRO) for bases object, Klass The second question is: Why does super not work in this case (if I use __init__ instead of __new__, super works again): class Metaclass(type): def __new__(self, name, bases, dict_): return super(Metaclass, self).__new__(name, bases, dict_) class Klass(object): __metaclass__ = Metaclass There I get: TypeError: Error when calling the metaclass bases type.__new__(X): X is not a type object (str) I'm using Python 2.6. A: The second question has already been well answered twice, though __new__ is actually a staticmethod, not a classmethod as erroneously claimed in a comment...: >>> class sic(object): ... def __new__(cls, *x): return object.__new__(cls, *x) ... >>> type(sic.__dict__['__new__']) <type 'staticmethod'> The first question (as somebody noted) has nothing to do with metaclasses: you simply can't multiply inherit from any two classes A and B in this order where B is a subclass of A. E.g.: >>> class cis(sic): pass ... >>> class oops(sic, cis): pass ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Error when calling the metaclass bases Cannot create a consistent method resolution order (MRO) for bases sic, cis The MRO guarantees that leftmost bases are visited before rightmost ones - but it also guarantees that among ancestors if x is a subclass of y then x is visited before y. It's impossible to satisfy both of these guarantees in this case. There's a good reason for these guarantees of course: without them (e.g. in old style classes, which only guarantee the left-right order in method resolution, not the subclass constraint) all overrides in x would be ignored in favor of the definitions in y, and that can't make much sense. Think about it: what does it mean to inherit from object first, and from some other class second? That object's (essentially nonexistent;-) definition of its several special methods must take precedence over the other class's, causing the other class's overrides to be ignored? A: For the first question, have a look at the description of MRO in python - specifically, the "bad Method Resolution order" section. Essentially, it's to do with the fact that python doesn't know whether to use object or Klass's methods. (It's nothing to do with the usage of metaclasses.) For the second question, it looks like you're misunderstanding how the __new__ function works. It doesn't take a reference to itself as the first argument - it takes a reference to the type of the class being instantiated. 
So your code should look like this: class Metaclass(type): def __new__(cls, name, bases, dictn): return type.__new__(cls, name, bases, dictn) A: For the second question, you need to pass self to __new__ like this: class Metaclass(type): def __new__(self, name, bases, dict_): return super(Metaclass, self).__new__(self, name, bases, dict_) class Klass(object): __metaclass__ = Metaclass I can't recall off the top of my head why this is, but I think it's because type.__new__ isn't a bound method and thus doesn't magically get the self argument. A: Why would you do? class Derived(object, Klass): Klass already derives from object. class Derived(Klass): Is the reasonable thing here.
TypeErrors using metaclasses in conjunction with multiple inheritance
I have two questions concerning metaclasses and multiple inheritance. The first is: Why do I get a TypeError for the class Derived but not for Derived2? class Metaclass(type): pass class Klass(object): __metaclass__ = Metaclass #class Derived(object, Klass): pass # if I uncomment this, I get a TypeError class OtherClass(object): pass class Derived2(OtherClass, Klass): pass # I do not get a TypeError for this The exact error message is: TypeError: Error when calling the metaclass bases Cannot create a consistent method resolution order (MRO) for bases object, Klass The second question is: Why does super not work in this case (if I use __init__ instead of __new__, super works again): class Metaclass(type): def __new__(self, name, bases, dict_): return super(Metaclass, self).__new__(name, bases, dict_) class Klass(object): __metaclass__ = Metaclass There I get: TypeError: Error when calling the metaclass bases type.__new__(X): X is not a type object (str) I'm using Python 2.6.
[ "The second question has already been well answered twice, though __new__ is actually a staticmethod, not a classmethod as erroneously claimed in a comment...:\n>>> class sic(object):\n... def __new__(cls, *x): return object.__new__(cls, *x)\n... \n>>> type(sic.__dict__['__new__'])\n<type 'staticmethod'>\n\nThe first question (as somebody noted) has nothing to do with metaclasses: you simply can't multiply inherit from any two classes A and B in this order where B is a subclass of A. E.g.:\n>>> class cis(sic): pass\n... \n>>> class oops(sic, cis): pass\n... \nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: Error when calling the metaclass bases\n Cannot create a consistent method resolution\norder (MRO) for bases sic, cis\n\nThe MRO guarantees that leftmost bases are visited before rightmost ones - but it also guarantees that among ancestors if x is a subclass of y then x is visited before y. It's impossible to satisfy both of these guarantees in this case. There's a good reason for these guarantees of course: without them (e.g. in old style classes, which only guarantee the left-right order in method resolution, not the subclass constraint) all overrides in x would be ignored in favor of the definitions in y, and that can't make much sense. Think about it: what does it mean to inherit from object first, and from some other class second? That object's (essentially nonexistent;-) definition of its several special methods must take precedence over the other class's, causing the other class's overrides to be ignored?\n", "For the first question, have a look at the description of MRO in python - specifically, the \"bad Method Resolution order\" section. Essentially, it's to do with the fact that python doesn't know whether to use object or Klass's methods. (It's nothing to do with the usage of metaclasses.)\nFor the second question, it looks like you're misunderstanding how the __new__ function works. It doesn't take a reference to itself as the first argument - it takes a reference to the type of the class being instantiated. So your code should look like this:\nclass Metaclass(type):\n def __new__(cls, name, bases, dictn):\n return type.__new__(cls, name, bases, dictn)\n\n", "For the second question, you need to pass self to __new__ like this:\nclass Metaclass(type):\n def __new__(self, name, bases, dict_):\n return super(Metaclass, self).__new__(self, name, bases, dict_)\n\nclass Klass(object):\n __metaclass__ = Metaclass\n\nI can't recall off the top of my head why this is, but I think it's because type.__new__ isn't a bound method and thus doesn't magically get the self argument.\n", "Why would you do?\nclass Derived(object, Klass):\n\nKlass already derives from object. \nclass Derived(Klass):\n\nIs the reasonable thing here.\n" ]
[ 7, 4, 0, 0 ]
[]
[]
[ "metaclass", "multiple_inheritance", "python", "python_2.x" ]
stackoverflow_0002203947_metaclass_multiple_inheritance_python_python_2.x.txt
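Combining the fixes from the answers, a corrected version for Python 2.6 (the parameter is conventionally named mcs rather than self, since the "instance" a metaclass's __new__ builds is itself a class):

class Metaclass(type):
    def __new__(mcs, name, bases, dict_):
        # __new__ is a staticmethod, so the (meta)class must also be
        # passed explicitly to the parent __new__.
        return super(Metaclass, mcs).__new__(mcs, name, bases, dict_)

class Klass(object):
    __metaclass__ = Metaclass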
Q: Using Mako In Windows I plan to use wsgi + mako in Windows. I installed Mako using C:\wsgi>c:\Python26\Scripts\easy_install.exe Mako No error. I get Finished processing dependencies for Mako at the end of the message. I check my Python directory and it has the following structure: C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg\EGG-INFO C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg\mako C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg\mako\ext I run the following code HelloWorld.py from mako.template import Template def application(environ, start_response): status = '200 OK' mytemplate = Template("hello, ${name}!") output = mytemplate.render(name="jack") response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] I get the following error log: [Fri Feb 05 16:11:19 2010] [error] [client 127.0.0.1] File "C:/wsgi/HelloWorld.py", line 1, in <module> [Fri Feb 05 16:11:19 2010] [error] [client 127.0.0.1] from mako.template import Template [Fri Feb 05 16:11:19 2010] [error] [client 127.0.0.1] ImportError: No module named mako.template Any advice? A: a few things to try make sure you're using python2.6 try import mako and see if you get a similar error if mako imports correctly look at the value of repr(mako) and make sure it corresponds to the path you have.
Using Mako In Windows
I plan to use wsgi + mako in Windows. I installed Mako using C:\wsgi>c:\Python26\Scripts\easy_install.exe Mako No error. I get Finished processing dependencies for Mako at the end of the message. I check my Python directory and it has the following structure: C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg\EGG-INFO C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg\mako C:\Python26\Lib\site-packages\mako-0.2.5-py2.6.egg\mako\ext I run the following code HelloWorld.py from mako.template import Template def application(environ, start_response): status = '200 OK' mytemplate = Template("hello, ${name}!") output = mytemplate.render(name="jack") response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] I get the following error log: [Fri Feb 05 16:11:19 2010] [error] [client 127.0.0.1] File "C:/wsgi/HelloWorld.py", line 1, in <module> [Fri Feb 05 16:11:19 2010] [error] [client 127.0.0.1] from mako.template import Template [Fri Feb 05 16:11:19 2010] [error] [client 127.0.0.1] ImportError: No module named mako.template Any advice?
[ "a few things to try\n\nmake sure you're using python2.6\ntry import mako and see if you get a similar error\nif mako imports correctly look at the value of repr(mako) and make sure it corresponds to the path you have.\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0002205836_python.txt
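A short diagnostic along the lines of the answer; save it as a script and run it with the same interpreter your WSGI container embeds (the paths printed should match the egg directory listed in the question):

import sys
print sys.version   # should report 2.6.x
print sys.path      # should include ...\site-packages\mako-0.2.5-py2.6.egg

import mako
print repr(mako)    # shows which mako package actually got imported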
Q: best (python) setup for cpu / memory intensive task I'm doing a simulation which generates thousands of result objects. Each object is around 1 MB, and all the result objects should be in memory to be queried for various ad hoc reports. And it takes 1~2 secs to make one result object. So it takes more than 5 minutes to get one simulation done even though I fully use my quad-core CPU with parallel execution. And the task process takes more than 4~5 GB of memory for one simulation set. The problem is, I want to run more simulation sets simultaneously and get it done more quickly. Currently, I'm doing this job using C# and IronPython on Windows Vista 64, on a quad-core CPU with 8 GB of memory. I'm going to order a new computer with 24 GB of memory and a better CPU, and ultimately I may buy a workstation with multiple CPUs and even more memory. So my question is, what is the best way to utilize the new hardware? I consider one of the combinations below. IronPython + C# on Windows 64 IronPython + C# (Mono) on Linux 64 Jython + Java on Windows 64 Jython + Java on Linux 64 The simulation engine is written in C# / Java, and I use Python to make reports. Which combination do you guys think is the best? Is there a big difference between the .NET and Java platforms when handling memory-consuming tasks? Is there a difference between Windows and Linux? I sometimes run my current C# + IronPython code on my Ubuntu laptop (32-bit, 2 GB RAM) and it seems pretty stable compared to the Windows .NET environment on the same-spec hardware. But I don't know how it will behave when the underlying hardware is much better. And I welcome any kind of suggestion regardless of the choices above. A: Since you can install all of those for free and it sounds like you already have the code implemented in both .Net and Java then I suggest you benchmark the program on all four platforms (windows/linux * java/.net). It sounds like all the heavy lifting is done in Java/C#, so I suspect the relative performance of Jython vs. IronPython is largely irrelevant. A: @Dave is spot on: if you really care, benchmark each combination and see. Personally I'd suggest you stick with the tool set that you are most comfortable with, be that Windows, Java, Linux, .Net or any random combination thereof. Your level of productivity in maintaining and developing your software usually trumps any minor performance gain you might get from switching OS or VM.
best (python) setup for cpu / memory intensive task
I'm doing a simulation which generates thousands of result objects. Each object is around 1 MB, and all the result objects should be in memory to be queried for various ad hoc reports. And it takes 1~2 secs to make one result object. So it takes more than 5 minutes to get one simulation done even though I fully use my quad-core CPU with parallel execution. And the task process takes more than 4~5 GB of memory for one simulation set. The problem is, I want to run more simulation sets simultaneously and get it done more quickly. Currently, I'm doing this job using C# and IronPython on Windows Vista 64, on a quad-core CPU with 8 GB of memory. I'm going to order a new computer with 24 GB of memory and a better CPU, and ultimately I may buy a workstation with multiple CPUs and even more memory. So my question is, what is the best way to utilize the new hardware? I consider one of the combinations below. IronPython + C# on Windows 64 IronPython + C# (Mono) on Linux 64 Jython + Java on Windows 64 Jython + Java on Linux 64 The simulation engine is written in C# / Java, and I use Python to make reports. Which combination do you guys think is the best? Is there a big difference between the .NET and Java platforms when handling memory-consuming tasks? Is there a difference between Windows and Linux? I sometimes run my current C# + IronPython code on my Ubuntu laptop (32-bit, 2 GB RAM) and it seems pretty stable compared to the Windows .NET environment on the same-spec hardware. But I don't know how it will behave when the underlying hardware is much better. And I welcome any kind of suggestion regardless of the choices above.
[ "Since you can install all of those for free and it sounds like you already have the code implemented in both .Net and Java then I suggest you benchmark the program on all four platforms (windows/linux * java/.net). \nIt sounds like all the heavy lifting is done in Java/C#, so I suspect the relative performance of Jython vs. IronPython is largely irrelevant.\n", "@Dave is spot on, if you really care benchmark each combination and see.\nPersonally I'd suggest you stick with the tool set that you are most comfortable with, be that Windows, Java, Linux, .Net or any random combination there of. Your level of productivity in maintaining and developing your software usually trumps any minor performance gain you might get from switching OS or VM.\n" ]
[ 2, 2 ]
[]
[]
[ ".net", "c#", "java", "memory", "python" ]
stackoverflow_0002205832_.net_c#_java_memory_python.txt
Q: Is there a way to turn XML into dictionary and lists? <things> <fruit>apple</fruit> <hardware>mouse</hardware> ... </things> Turn it into: {'things':[{'fruit':'apple'}, {'hardware':'mouse'}]} Is there an easy way to do this? Thanks. A: there's a nice recipe for that here: http://code.activestate.com/recipes/570085/ another good one is here: http://code.activestate.com/recipes/522991/
Is there a way to turn XML into dictionary and lists?
<things> <fruit>apple</fruit> <hardware>mouse</hardware> ... </things> Turn it into: {'things':[{'fruit':'apple'}, {'hardware':'mouse'}]} Is there an easy way to do this? Thanks.
[ "there's a nice recipe for that here:\nhttp://code.activestate.com/recipes/570085/\nanother good one is here:\nhttp://code.activestate.com/recipes/522991/\n" ]
[ 1 ]
[]
[]
[ "list", "python", "string", "xml" ]
stackoverflow_0002205987_list_python_string_xml.txt
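For the flat structure shown, a minimal standard-library version (a sketch only: it ignores attributes and deeper nesting):

import xml.etree.ElementTree as ET

def xml_to_dict(text):
    # One level of children, each becoming a single-key dict in a list.
    root = ET.fromstring(text)
    return {root.tag: [{child.tag: child.text} for child in root]}

print xml_to_dict("<things><fruit>apple</fruit>"
                  "<hardware>mouse</hardware></things>")
# {'things': [{'fruit': 'apple'}, {'hardware': 'mouse'}]}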
Q: Can I make a field in a database with this Django code, without using 'python manage.py startapp xx' from django.db import models class Person(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=30) Is this possible? A: If you only want to access a database from Python, I recommend you use SQLAlchemy instead.
Can I make a field in a database with this Django code, without using 'python manage.py startapp xx'
from django.db import models class Person(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=30) Is this possible?
[ "If you only want to access a database from Python, I recommend you use SQLAlchemy instead.\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002205857_django_python.txt
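A rough SQLAlchemy equivalent of the Person model above, per the answer; no startapp, settings.py, or project layout is needed (the table and database file names are assumptions):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    first_name = Column(String(30))
    last_name = Column(String(30))

engine = create_engine('sqlite:///people.db')
Base.metadata.create_all(engine)   # creates the person table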
Q: Print string in a form of Unicode codes How can I print a string as a sequence of unicode codes in Python? Input: "если" (in Russian). Output: "\u0435\u0441\u043b\u0438" A: This should work: >>> s = u'если' >>> print repr(s) u'\u0435\u0441\u043b\u0438' A: Code: txt = u"если" print repr(txt) Output: u'\u0435\u0441\u043b\u0438' A: a = u"\u0435\u0441\u043b\u0438" print "".join("\u{0:04x}".format(ord(c)) for c in a) A: If you need a specific encoding, you can use : txt = u'если' print txt.encode('utf8') print txt.encode('utf16')
Print string in a form of Unicode codes
How can I print a string as a sequence of unicode codes in Python? Input: "если" (in Russian). Output: "\u0435\u0441\u043b\u0438"
[ "This should work:\n>>> s = u'если'\n>>> print repr(s)\nu'\\u0435\\u0441\\u043b\\u0438'\n\n", "Code:\ntxt = u\"если\"\nprint repr(txt)\n\nOutput:\nu'\\u0435\\u0441\\u043b\\u0438'\n\n", "a = u\"\\u0435\\u0441\\u043b\\u0438\"\nprint \"\".join(\"\\u{0:04x}\".format(ord(c)) for c in a)\n\n", "If you need a specific encoding, you can use :\ntxt = u'если'\nprint txt.encode('utf8')\nprint txt.encode('utf16')\n\n" ]
[ 9, 3, 1, 0 ]
[]
[]
[ "python", "string", "unicode" ]
stackoverflow_0002206210_python_string_unicode.txt
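The unicode-escape codec gives the same output as the repr answers, without the surrounding u'...' quotes:

txt = u'если'
print txt.encode('unicode-escape')
# \u0435\u0441\u043b\u0438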
Q: Problems using the python WConio library I'm trying to use the WConio library for python, but when I import it, it gives this error: Traceback (most recent call last): File "WConioExample.py", line 15, in <module> import WConio File "d:\tools\development\python2.5\lib\site-packages\WConio.py", line 23, in from _WConio import * ImportError: DLL load failed with error code 193 I've installed WConio-1.5.win32-py2.5.exe and made sure the _WConio.pyd file exists. I'm using it on Win7. I have searched for this problem, but the results were not helpful. What can I do to solve this? A: Probably you've installed a 32-bit library on a 64-bit system: error code 193 (ERROR_BAD_EXE_FORMAT) means the DLL's bitness doesn't match the interpreter's.
Problems using the python WConio library
I'm trying to use the WConio library for python, but when I import it, it gives this error: Traceback (most recent call last): File "WConioExample.py", line 15, in <module> import WConio File "d:\tools\development\python2.5\lib\site-packages\WConio.py", line 23, in from _WConio import * ImportError: DLL load failed with error code 193 I've installed WConio-1.5.win32-py2.5.exe and made sure the _WConio.pyd file exists. I'm using it on Win7. I have searched for this problem, but the results were not helpful. What can I do to solve this?
[ "Probably, you've installed 32 bit library on 64 bit system.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0002206125_python.txt
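A quick way to confirm the suspected mismatch is to check the bitness of the interpreter that is importing WConio:

import struct
print struct.calcsize("P") * 8   # pointer size in bits: 32 or 64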
Q: moving a function out of my method in django Hey, I have a method in my view which uploads an image; the image is then saved to a db object. I want to remove this from my view and either put it in my model or a separate file. filename_bits = request.FILES['image'].name.split(".") filename_bits.reverse() extension = filename_bits[0] # create filename and open a destination filename = '_'+force_unicode(random.randint(0,10000000))+'_'+force_unicode(random.randint(0,10000000))+'.'+force_unicode(extension) path = 'assests/uploads/'+filename destination = open(path, 'wb+') # write to that destination for chunk in request.FILES['image'].chunks(): destination.write(chunk) destination.close() picture = CarPicture(car=car, path=path, filename=filename) picture.save() As you can see, it uploads a file (after creating random filenames) then creates a CarPicture object and saves it. Any ideas? Could this be done in the model by overriding the save method? A: You should consider using an ImageField in your model. For example in models.py you would have: def get_random_filename(car_picture, filename): extension = filename.split('.')[-1] return u'_%s_%s.%s' % (random.randint(0,10000000), random.randint(0,10000000), extension) class CarPicture(models.Model): title = models.TextField() image = models.ImageField(upload_to=get_random_filename) And in your views you would just have: picture = CarPicture(title="Some Title", image=request.FILES['image']) picture.save() This will store the image to disk, using the randomly generated filename, and set the image field in the database to the path of the image. You can also get the url for the image with: picture.image.url so a template might have: <img src="{{picture.image.url}}" title="{{picture.title}}"/>
moving a function out of my method in django
Hey, I have a method in my view which uploads an image; the image is then saved to a db object. I want to remove this from my view and either put it in my model or a separate file. filename_bits = request.FILES['image'].name.split(".") filename_bits.reverse() extension = filename_bits[0] # create filename and open a destination filename = '_'+force_unicode(random.randint(0,10000000))+'_'+force_unicode(random.randint(0,10000000))+'.'+force_unicode(extension) path = 'assests/uploads/'+filename destination = open(path, 'wb+') # write to that destination for chunk in request.FILES['image'].chunks(): destination.write(chunk) destination.close() picture = CarPicture(car=car, path=path, filename=filename) picture.save() As you can see, it uploads a file (after creating random filenames) then creates a CarPicture object and saves it. Any ideas? Could this be done in the model by overriding the save method?
[ "You should consider using an ImageField in your model. For example in models.py you would have:\ndef get_random_filename(car_picture, filename):\n extension = filename.split('.')[-1]\n return u'_%s_%s.%s' % (random.randint(0,10000000),\n random.randint(0,10000000),\n extension)\n\nclass CarPicture(models.Model):\n title = models.TextField()\n image = models.ImageField(upload_to=get_random_filename)\n\nAnd in your views you would just have:\npicture = CarPicture(title=\"Some Title\", image=request.FILES['image'])\npicture.save()\n\nThis will store the image to disk, using the randomly generated filename, and set the image field in the database to the path of the image. You can also get the url for the image with:\npicture.image.url()\n\nso a template might have:\n<img src=\"{{picture.image.url}}\" title=\"{{picture.image.title}}\"/>\n\n" ]
[ 3 ]
[]
[]
[ "django", "methods", "model", "python", "upload" ]
stackoverflow_0002206422_django_methods_model_python_upload.txt
Q: global variable from django to javascript I would like some variables from my settings.py to be available in every javascript running across my project. What is the most elegant way of achieving this? Right now I can think of two: write a context processor and declare those globals in a base template. All templates must extend the base template. declare those globals in a dynamically generated .js file (by some view) and load this file using <script> tag in a base template. All templates must extend the base template. Can I do it without a base template? A: I would use option 1. You should use a base template in any case, and a context processor is probably the best way of getting the variables into it. A: I'm not familiar with django, so this might be completely incorrect. Can you write out the variables to hidden HTML fields on the page? This will allow you to access them from JavaScript, or utilise them in form posts should you require that. A: I have the same problem, and I'm using an ugly solution with a “refactor later” note. I put a js_top block at the top of my base template, and then any template that needs additional includes or js variables set can use that block. So I have stuff such as this: {% block js_top %} <script src="/jquery.useless-plugin.js" type="text/javascript"></script> <script type="text/javascript"> var myVar = {{my_variable.property}}; </script> {% endblock %} Of course if you have need of a more robust and less “one-off” system, I'd go with the generated js.
global variable from django to javascript
I would like some variables from my settings.py to be available in every javascript running across my project. What is the most elegant way of achieving this? Right now I can think of two: write a context processor and declare those globals in a base template. All templates must extend the base template. declare those globals in a dynamically generated .js file (by some view) and load this file using <script> tag in a base template. All templates must extend the base template. Can I do it without a base template?
[ "I would use option 1. You should use a base template in any case, and a context processor is probably the best way of getting the variables into it.\n", "I'm not familiar with django, so this might be completely incorrect.\nCan you write out the variables to hidden HTML fields on the page. This will allow you to access them from JavaScript, or utilise them in form posts should you require that.\n", "I have the same problem, and I'm using an ugly solution with a “refactor later” note.\nI put a js_top block at the top of my base template, and then any template who needs additional includes or js variables set can use that block.\nSo I have stuff such as this: \n{% block js_top %}\n <script src=\"/jquery.useless-plugin.js\" type=\"textjavascript\"></script>\n <script type=\"textjavascript\">\n var myVar = {{my_variable.propriety}};\n </script>\n{% endblock %}\n\nOf course if you have need of a more robust and less “one-off” system, I'd go with the generated js.\n" ]
[ 6, 1, 0 ]
[]
[]
[ "django", "javascript", "python" ]
stackoverflow_0002206353_django_javascript_python.txt
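A sketch of option 1 (the names js_globals, API_ROOT, and the exported values are hypothetical): a context processor hands the settings to every template rendered with RequestContext, and the base template prints them into one script block.

# myapp/context_processors.py
from django.conf import settings

def js_globals(request):
    # Whitelist what gets exposed; never dump all of settings to the page.
    return {'MEDIA_URL': settings.MEDIA_URL,
            'API_ROOT': getattr(settings, 'API_ROOT', '/api/')}

# settings.py: add 'myapp.context_processors.js_globals' to
# TEMPLATE_CONTEXT_PROCESSORS (and render views with RequestContext).
# The base template then emits:
#   <script type="text/javascript">
#     var MEDIA_URL = "{{ MEDIA_URL }}";
#     var API_ROOT = "{{ API_ROOT }}";
#   </script>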
Q: Need help using M2Crypto.Engine to access USB Token I am using M2Crypto-0.20.2. I want to use engine_pkcs11 from the OpenSC project and the Aladdin PKI client for token-based authentication making xmlrpc calls over ssl. from M2Crypto import Engine Engine.load_dynamic() dynamic = Engine.Engine('dynamic') # Load the engine_pkcs from the OpenSC project dynamic.ctrl_cmd_string("SO_PATH", "/usr/local/ssl/lib/engines/engine_pkcs11.so") Engine.cleanup() Engine.load_dynamic() # Load the Aladdin PKI Client aladdin = Engine.Engine('dynamic') aladdin.ctrl_cmd_string("SO_PATH", "/usr/lib/libeTPkcs11.so") key = aladdin.load_private_key("PIN","password") This is the error I receive: key = pkcs.load_private_key("PIN","eT0ken") File "/usr/local/lib/python2.4/site-packages/M2Crypto/Engine.py", line 70, in load_private_key return self._engine_load_key(m2.engine_load_private_key, name, pin) File "/usr/local/lib/python2.4/site-packages/M2Crypto/Engine.py", line 60, in _engine_load_key raise EngineError(Err.get_error()) M2Crypto.Engine.EngineError: 23730:error:26096075:engine routines:ENGINE_load_private_key:not initialised:eng_pkey.c:112: For load_private_key(), what should be passed as the first argument? The M2Crypto documentation does not explain it. I don't get any errors loading the engines, but I'm not sure if I'm loading them correctly. It seems like the engine ID has to be a specific name but I don't find that list anywhere. 'dynamic' is working for me. Any help would be appreciated! A: Found it!!! Yes, exactly the way I came from. So, actually the ENGINE_init() is not implemented in M2Crypto.Engine. So, only one solution: patching!!! (very small...) So I've created a new Engine method (in Engine.py) def engine_initz(self): """Return engine name""" return m2.engine_initz(self._ptr) Why engine_initz? Because engine_init is already defined in SWIG/_engine.i: void engine_init(PyObject *engine_err) { Py_INCREF(engine_err); _engine_err = engine_err; } I don't really know what is done there, so I've preferred creating a new one... So I've just added the following to SWIG/_engine.i: %rename(engine_initz) ENGINE_init; extern int ENGINE_init(ENGINE *); Then recompile __m2crypto.so; now just add a "pkcs11.engine_initz()" before loading the private key, and it works..... A: Looking at the pastebin link Becky provided, I believe it translates to something like this in the new API: from M2Crypto import Engine, m2 dynamic = Engine.load_dynamic_engine("pkcs11", "/Users/martin/prefix/lib/engines/engine_pkcs11.so") pkcs11 = Engine.Engine("pkcs11") pkcs11.ctrl_cmd_string("MODULE_PATH", "/Library/OpenSC/lib/opensc-pkcs11.so") r = pkcs11.ctrl_cmd_string("PIN", sys.argv[1]) key = pkcs11.load_private_key("id_01") So I am betting that if you substitute "/Users/martin/prefix/lib/engines/engine_pkcs11.so" with "/usr/local/ssl/lib/engines/engine_pkcs11.so" and "/Library/OpenSC/lib/opensc-pkcs11.so" with "/usr/lib/libeTPkcs11.so" you might get it to work with Aladdin. A: I don't know what the engine_init code present in current M2Crypto is supposed to do, or why. 
Exposing ENGINE_init() as engine_init2 with the following patch to M2Crypto helps: Index: SWIG/_engine.i =================================================================== --- SWIG/_engine.i (revision 719) +++ SWIG/_engine.i (working copy) @@ -44,6 +44,9 @@ %rename(engine_free) ENGINE_free; extern int ENGINE_free(ENGINE *); +%rename(engine_init2) ENGINE_init; +extern int ENGINE_init(ENGINE *); + /* * Engine id/name functions */ After this, the following code takes me further (but urllib does not fully work for me currently): import sys, os, time, cgi, urllib, urlparse from M2Crypto import m2urllib2 as urllib2 from M2Crypto import m2, SSL, Engine # load dynamic engine e = Engine.load_dynamic_engine("pkcs11", "/Users/martin/prefix/lib/engines/engine_pkcs11.so") pk = Engine.Engine("pkcs11") pk.ctrl_cmd_string("MODULE_PATH", "/Library/OpenSC/lib/opensc-pkcs11.so") m2.engine_init2(m2.engine_by_id("pkcs11")) # This does the trick cert = e.load_certificate("slot_01-id_01") key = e.load_private_key("slot_01-id_01", sys.argv[1]) ctx = SSL.Context("sslv23") ctx.set_cipher_list("HIGH:!aNULL:!eNULL:@STRENGTH") ctx.set_session_id_ctx("foobar") m2.ssl_ctx_use_x509(ctx.ctx, cert.x509) m2.ssl_ctx_use_pkey_privkey(ctx.ctx, key.pkey) opener = urllib2.build_opener(ctx) urllib2.install_opener(opener) A: That is exactly the code I've tried. But it ended with the following error: Traceback (most recent call last): File "prog9.py", line 13, in <module> key = pkcs11.load_private_key("id_45") File "/usr/lib/pymodules/python2.5/M2Crypto/Engine.py", line 70, in load_private_key return self._engine_load_key(m2.engine_load_private_key, name, pin) File "/usr/lib/pymodules/python2.5/M2Crypto/Engine.py", line 60, in _engine_load_key raise EngineError(Err.get_error()) M2Crypto.Engine.EngineError: 11814:error:26096075:engine routines:ENGINE_load_private_key:not initialised:eng_pkey.c:112: I'm using OpenSC PKCS11 lib, not aladdin lib. But I don't think the problem is closed. A: I tried the code that Heikki suggested (minus one line) and got the same error as Erlo. For load_private_key(), how do I know what to put in for the argument? dynamic = Engine.load_dynamic_engine("pkcs11", "/usr/local/ssl/lib/engines/engine_pkcs11.so") # m2.engine_free(dynamic) this line gave me an error TypeError: in method 'engine_free', argument 1 of type 'ENGINE *' pkcs11 = Engine.Engine("pkcs11") pkcs11.ctrl_cmd_string("MODULE_PATH", "/usr/lib/libeTPkcs11.so") r = pkcs11.ctrl_cmd_string("PIN", "password") key = pkcs11.load_private_key("id_01") A: I think the problem is not really the "load_private_key()". It's like something is missing between "MODULE_PATH" definition and the load_private_key() call. What happens if you replace "/usr/lib/libeTPkcs11.so" with a wrong path? In my case I have no error related to this. I've run "pcscd" in the foreground with a high debug level, and there is no call to the smartcard during the python execution... So definitely, I don't understand what's wrong... The equivalent in "openssl" is using the "-pre" command. The "-pre" commands (as opposed to "-post") are commands sent to the engine before loading. Perhaps we need to call a method which "loads" the engine after all "ctrl_cmd_string" calls ?? ... Lost :-/
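Pulling the thread together, a hedged sketch of the sequence that appears to work, assuming M2Crypto has been rebuilt with the ENGINE_init patch above (exposed as m2.engine_init2); the engine/module paths, PIN and key id are illustrative and vary per token:

from M2Crypto import Engine, m2

e = Engine.load_dynamic_engine("pkcs11", "/usr/local/ssl/lib/engines/engine_pkcs11.so")
pkcs11 = Engine.Engine("pkcs11")
pkcs11.ctrl_cmd_string("MODULE_PATH", "/usr/lib/libeTPkcs11.so")  # vendor PKCS#11 module
pkcs11.ctrl_cmd_string("PIN", "password")

# The step behind the "not initialised" tracebacks above: the engine must
# actually be initialised before it can hand out key material.
m2.engine_init2(m2.engine_by_id("pkcs11"))

key = pkcs11.load_private_key("id_01")       # the token object's id/label
cert = pkcs11.load_certificate("id_01")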
Need help using M2Crypto.Engine to access USB Token
I am using M2Crypto-0.20.2. I want to use engine_pkcs11 from the OpenSC project and the Aladdin PKI client for token based authentication making xmlrpc calls over ssl. from M2Crypto import Engine Engine.load_dynamic() dynamic = Engine.Engine('dynamic') # Load the engine_pkcs from the OpenSC project dynamic.ctrl_cmd_string("SO_PATH", "/usr/local/ssl/lib/engines/engine_pkcs11.so") Engine.cleanup() Engine.load_dynamic() # Load the Aladdin PKI Client aladdin = Engine.Engine('dynamic') aladdin.ctrl_cmd_string("SO_PATH", "/usr/lib/libeTPkcs11.so") key = aladdin.load_private_key("PIN","password") This is the error I receive: key = pkcs.load_private_key("PIN","eT0ken") File "/usr/local/lib/python2.4/site-packages/M2Crypto/Engine.py", line 70, in load_private_key return self._engine_load_key(m2.engine_load_private_key, name, pin) File "/usr/local/lib/python2.4/site-packages/M2Crypto/Engine.py", line 60, in _engine_load_key raise EngineError(Err.get_error()) M2Crypto.Engine.EngineError: 23730:error:26096075:engine routines:ENGINE_load_private_key:not initialised:eng_pkey.c:112: For load_private_key(), what should be passed as the first argument? The M2Crypto documentation does not explain it. I don't get any errors loading the engines, but I'm not sure if I'm loading them correctly. It seems like the engine ID has to be a specific name but I don't find that list anywhere. 'dynamic' is working for me. Any help would be appreciated!
[ "Found !!!!\nYes, exactly the way where I came from.\nSo, actually the ENGINE_init() is not implemented in M2Crypto.Engine. So, only one solution: patching!!! (very small...) so I've created a new Engine method (in Engine.py)\ndef engine_initz(self):\n \"\"\"Return engine name\"\"\"\n return m2.engine_initz(self._ptr)\n\nWhy engine_initz ? because engine_init is already define in SWIG/_engine.i,:\nvoid engine_init(PyObject *engine_err) {\n Py_INCREF(engine_err);\n _engine_err = engine_err;\n}\n\nI don't really know what is done, so I've prefered creating a new one... So I've just added the following to SWIG/_engine.i:\n%rename(engine_initz) ENGINE_init;\nextern int ENGINE_init(ENGINE *);\n\nAnd recompile the __m2crypto.so, now just add a \"pkcs11.engine_initz()\" before launching the private key, and it works.....\n", "Looking at the pastebin link Becky provided, I believe it translates to something like this in the new API:\nfrom M2Crypto import Engine, m2\n\ndynamic = Engine.load_dynamic_engine(\"pkcs11\", \"/Users/martin/prefix/lib/engines/engine_pkcs11.so\")\n\npkcs11 = Engine.Engine(\"pkcs11\")\n\npkcs11.ctrl_cmd_string(\"MODULE_PATH\", \"/Library/OpenSC/lib/opensc-pkcs11.so\")\n\nr = pkcs11.ctrl_cmd_string(\"PIN\", sys.argv[1])\n\nkey = pkcs11.load_private_key(\"id_01\")\n\nSo I am betting that if you substitute \"/Users/martin/prefix/lib/engines/engine_pkcs11.so\" with \"/usr/local/ssl/lib/engines/engine_pkcs11.so\" and \"/Library/OpenSC/lib/opensc-pkcs11.so\" with \"/usr/lib/libeTPkcs11.so\" you might get it to work with Aladdin.\n", "I don't know what and why the engine_init code present in current M2Crypto is supposed to do. Exposing ENGINE_init() as engine_init2 with the following patch to M2Crypto helps:\nIndex: SWIG/_engine.i\n===================================================================\n--- SWIG/_engine.i (revision 719)\n+++ SWIG/_engine.i (working copy)\n@@ -44,6 +44,9 @@\n %rename(engine_free) ENGINE_free;\n extern int ENGINE_free(ENGINE *);\n\n+%rename(engine_init2) ENGINE_init;\n+extern int ENGINE_init(ENGINE *);\n+\n /*\n * Engine id/name functions\n */\n\nAfter this, the following code takes me further (but urllib does not fully work for me currently):\nimport sys, os, time, cgi, urllib, urlparse\nfrom M2Crypto import m2urllib2 as urllib2\nfrom M2Crypto import m2, SSL, Engine\n\n# load dynamic engine\ne = Engine.load_dynamic_engine(\"pkcs11\", \"/Users/martin/prefix/lib/engines/engine_pkcs11.so\")\npk = Engine.Engine(\"pkcs11\")\npk.ctrl_cmd_string(\"MODULE_PATH\", \"/Library/OpenSC/lib/opensc-pkcs11.so\")\n\nm2.engine_init2(m2.engine_by_id(\"pkcs11\")) # This makes the trick\n\ncert = e.load_certificate(\"slot_01-id_01\")\nkey = e.load_private_key(\"slot_01-id_01\", sys.argv[1])\n\nctx = SSL.Context(\"sslv23\")\nctx.set_cipher_list(\"HIGH:!aNULL:!eNULL:@STRENGTH\")\nctx.set_session_id_ctx(\"foobar\")\nm2.ssl_ctx_use_x509(ctx.ctx, cert.x509)\nm2.ssl_ctx_use_pkey_privkey(ctx.ctx, key.pkey)\n\nopener = urllib2.build_opener(ctx)\nurllib2.install_opener(opener)\n\n", "That is exactly the code I've tried. 
But it ended with the following error:\nTraceback (most recent call last):\n File \"prog9.py\", line 13, in <module>\n key = pkcs11.load_private_key(\"id_45\")\n File \"/usr/lib/pymodules/python2.5/M2Crypto/Engine.py\", line 70, in load_private_key\n return self._engine_load_key(m2.engine_load_private_key, name, pin)\n File \"/usr/lib/pymodules/python2.5/M2Crypto/Engine.py\", line 60, in _engine_load_key\n raise EngineError(Err.get_error())\nM2Crypto.Engine.EngineError: 11814:error:26096075:engine routines:ENGINE_load_private_key:not initialised:eng_pkey.c:112:\n\nI'm using OpenSC PKCS11 lib, not aladdin lib. But I don't think the problem is closed.\n", "I tried the code that Heikki suggested (minus one line) and got the same error as Erlo. For load_private_key(), how do I know what to put in for the argument? \ndynamic = Engine.load_dynamic_engine(\"pkcs11\", \"/usr/local/ssl/lib/engines/engine_pkcs11.so\")\n# m2.engine_free(dynamic) this line gave me an error TypeError: in method 'engine_free', argument 1 of type 'ENGINE *'\n\npkcs11 = Engine.Engine(\"pkcs11\")\npkcs11.ctrl_cmd_string(\"MODULE_PATH\", \"/usr/lib/libeTPkcs11.so\")\n\nr = pkcs11.ctrl_cmd_string(\"PIN\", \"password\")\n\nkey = pkcs11.load_private_key(\"id_01\")\n\n", "I think the problem is not really the \"load_private_key()\". It's like something is missing between \"MODULE_PATH\" definition and the load_private_key() call. What happens if you replace \"/usr/lib/libeTPkcs11.so\" with a wrong path? In my case I have no error related to this.\nI've run \"pcscd\" in the foreground with a high debug level, and there is no call to the smartcard during the python execution... So definitely, I don't understand what's wrong...\nThe equivalent in \"openssl\" is using the \"-pre\" command. The \"-pre\" commands (as opposed to \"-post\") are commands sent to the engine before loading. Perhaps we need to call a method which \"loads\" the engine after all \"ctrl_cmd_string\" calls ?? ...\nLost :-/\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "m2crypto", "python" ]
stackoverflow_0002195179_m2crypto_python.txt
Q: Debugging a subprocess.Popen call I have been using subprocess.Popen successfully in the past, when wrapping binaries with a python script to format arguments / customize etc... Developing a nth wrapper, I did as usual... but nothing happens. Here is the little code: print command p = subprocess.Popen(command, shell = True) result = p.communicate()[0] print vars(p) return result And here is the output: /usr/bin/sh /tmp/run/launch.sh {'_child_created': True, 'returncode': 0, 'stdout': None, 'stdin': None, 'pid': 21650, 'stderr': None, 'universal_newlines': False} As you can see, the goal is to create a shell script setting up everything I need, and then executing it. I would prefer to use real python code, but unfortunately launch.sh call 3rd party shell scripts that I have no wish to try and replicate (though I've been insisting for a python api for over a year now). The problem is that: the shell script is not executed (it should spawn process and output some little things) no python exception is raised there is nothing in the p object that indicates that an error occurred I have tried check_call without any success either... I am at a loss regarding what I should do, and would be very glad if someone could either point my mistake or direct me toward resolution... EDIT: Trying to run this on Linux (sh) shell is necessary for variable substitution in the scripts invoked EDIT 2: Following badp suggestion, I tweaked the code and added subprocess.Popen('ps', shell = True).communicate() Right after p = ... line that creates the process, here is the output: /usr/bin/sh /tmp/run/launch.sh PID TTY TIME CMD 29978 pts/0 00:00:01 zsh 1178 pts/0 00:00:01 python 1180 pts/0 00:00:00 sh <defunct> 1181 pts/0 00:00:00 ps None Apparently the process is launched (even though <defunct>) and one should also note that I have a little problem passing the parameters in... Thanks. A: I've finally found the answer to my question, thanks to badp and his suggestions for debugging. From the python page on the subprocess module: The executable argument specifies the program to execute. It is very seldom needed: Usually, the program to execute is defined by the args argument. If shell=True, the executable argument specifies which shell to use. On Unix, the default shell is /bin/sh. On Windows, the default shell is specified by the COMSPEC environment variable. The only reason you would need to specify shell=True on Windows is where the command you wish to execute is actually built in to the shell, eg dir, copy. You don’t need shell=True to run a batch file, nor to run a console-based executable. Since I am on Linux and using shell=True, my command is in fact a list of arguments to be executed by executable, which defaults to /bin/sh. Thus the full command executed was: /bin/sh /usr/bin/sh /tmp/run/launch.sh... which did not work so well. And I should have used either: subprocess.Popen('/tmp/run/launch.sh', shell=True) or subprocess.Popen('/tmp/run/launch.sh', executable = '/usr/bin/sh', shell=True) It's tricky that shell=True would actually modify the default executable value on Linux only... A: Try this: p = subprocess.Popen(command, shell = True, #is this even needed? stdin = subprocess.PIPE, stdout = subprocess.PIPE, # stderr = subprocess.STDOUT #uncomment if reqd ) Tested working on Windows with the ping command. This lets you communicate, which might help you find out why the script isn't launched in the first place :)
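A short sketch of the two correct invocations implied by the accepted answer (the extra --foo argument is illustrative):

import subprocess

# With shell=True, pass the whole command line as ONE string; on Linux it
# is executed as: /bin/sh -c '<string>'.
p = subprocess.Popen('/tmp/run/launch.sh --foo bar', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()

# Without a shell, pass an argument list instead -- no /bin/sh in between.
p = subprocess.Popen(['/usr/bin/sh', '/tmp/run/launch.sh', '--foo', 'bar'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print p.returncode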
Debugging a subprocess.Popen call
I have been using subprocess.Popen successfully in the past, when wrapping binaries with a python script to format arguments / customize etc... Developing a nth wrapper, I did as usual... but nothing happens. Here is the little code: print command p = subprocess.Popen(command, shell = True) result = p.communicate()[0] print vars(p) return result And here is the output: /usr/bin/sh /tmp/run/launch.sh {'_child_created': True, 'returncode': 0, 'stdout': None, 'stdin': None, 'pid': 21650, 'stderr': None, 'universal_newlines': False} As you can see, the goal is to create a shell script setting up everything I need, and then executing it. I would prefer to use real python code, but unfortunately launch.sh call 3rd party shell scripts that I have no wish to try and replicate (though I've been insisting for a python api for over a year now). The problem is that: the shell script is not executed (it should spawn process and output some little things) no python exception is raised there is nothing in the p object that indicates that an error occurred I have tried check_call without any success either... I am at a loss regarding what I should do, and would be very glad if someone could either point my mistake or direct me toward resolution... EDIT: Trying to run this on Linux (sh) shell is necessary for variable substitution in the scripts invoked EDIT 2: Following badp suggestion, I tweaked the code and added subprocess.Popen('ps', shell = True).communicate() Right after p = ... line that creates the process, here is the output: /usr/bin/sh /tmp/run/launch.sh PID TTY TIME CMD 29978 pts/0 00:00:01 zsh 1178 pts/0 00:00:01 python 1180 pts/0 00:00:00 sh <defunct> 1181 pts/0 00:00:00 ps None Apparently the process is launched (even though <defunct>) and one should also note that I have a little problem passing the parameters in... Thanks.
[ "I've finally found the answer to my question, thanks to badp and his suggestions for debugging.\nFrom the python page on the subprocess module:\n\nThe executable argument specifies the program to execute. It is very seldom needed: Usually, the program to execute is defined by the args argument. If shell=True, the executable argument specifies which shell to use. On Unix, the default shell is /bin/sh. On Windows, the default shell is specified by the COMSPEC environment variable. The only reason you would need to specify shell=True on Windows is where the command you wish to execute is actually built in to the shell, eg dir, copy. You don’t need shell=True to run a batch file, nor to run a console-based executable.\n\nSince I am on Linux and using shell=True, my command is in fact a list of arguments to be executed by executable, which defaults to /bin/sh. Thus the full command executed was: /bin/sh /usr/bin/sh /tmp/run/launch.sh... which did not work so well.\nAnd I should have used either:\nsubprocess.Popen('/tmp/run/launch.sh', shell=True)\n\nor\nsubprocess.Popen('/tmp/run/launch.sh', executable = '/usr/bin/sh', shell=True)\n\nIt's tricky that shell=True would actually modify the default executable value on Linux only...\n", "Try this:\np = subprocess.Popen(command,\n shell = True, #is this even needed?\n stdin = subprocess.PIPE,\n stdout = subprocess.PIPE,\n # stderr = subprocess.STDOUT #uncomment if reqd\n )\n\nTested working on Windows with the ping command. This lets you communicate, which might help you find out why the script isn't launched in the first place :)\n" ]
[ 6, 4 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0002206407_python_subprocess.txt
Q: Send invitation emails on post_save or all at once in django view? A web app I'm building needs to let users send join invitations to their friends. These invitations are stored in the database through the Invitation model. The user can send multiple invitations at once. What do you think is more appropriate: sending all emails at once in the back end view or sending one at a time in Invitation post_save? Is there a substantial performance overhead to send one email at a time? A: If this is a live application and user experience is important, then I suggest you avoid sending anything email-related in post_save handlers, or even in views. Reasons are: SMTP can go down, network connection can go down, network can be up but speed can be that of a snail etc. In each of those cases either your program breaks, or user waits and waits and waits... which is not good for business. The solution is to write/buy/find a separate email dispatcher that is able to handle all such situations gently, alert the administrator in case of trouble, switch SMTP gateways on the fly, additionally it could trace bounce-backs etc. Then, in your post_save handler, you only add something like this: email_dispatcher.add_to_queue(my_email) Regarding ready-made solutions - a quick scan of code.google.com resulted in http://code.google.com/p/django-mailer/ but I haven't used it so cannot make a recommendation.
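A hedged sketch of the queue-and-dispatch pattern the answer describes; QueuedMessage is a hypothetical queue model (django-mailer would play the same role), and the Invitation field names are guesses:

from django.db.models.signals import post_save
from myapp.models import Invitation, QueuedMessage  # QueuedMessage is illustrative

def queue_invitation_mail(sender, instance, created, **kwargs):
    # Only enqueue here; a cron job or management command does the SMTP
    # work later, so a dead mail server never blocks the request.
    if created:
        QueuedMessage.objects.create(to_address=instance.email,
                                     subject='You have been invited',
                                     body='...')

post_save.connect(queue_invitation_mail, sender=Invitation)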
Send invitation emails on post_save or all at once in django view?
A web app I'm building needs to let users send join invitations to their friends. These invitations are stored in the database through the Invitation model. The user can send multiple invitations at once. What do you think is more appropriate: sending all emails at once in the back end view or sending one at a time in Invitation post_save? Is there a substantial performance overhead to send one email at a time?
[ "If this is live application and user experience is important, then I suggest you avoid sending anything email-related in post_save handlers, or even in views.\nReasons are: SMTP can go down, network connection can go down, network can be up but speed can be that of a snail etc. In each of those cases either your program breaks, or user waits and waits and waits... which is not good for business.\nThe solution is to write/buy/find separate email dispatcher that is able to handle all such situations gently, alert administrator in case of trouble, switch SMTP gates on the fly, additionally it could trace bounce-back etc.\nThen, in your post_save handler, you only add something like this:\n email_dispatcher.add_to_queue(my_email)\n\nRegarding ready-made solutions - quick scan of code.google com resulted in http://code.google.com/p/django-mailer/ but I haven't used it so cannot make recommendation.\n" ]
[ 6 ]
[]
[]
[ "django", "email", "notifications", "python" ]
stackoverflow_0002207250_django_email_notifications_python.txt
Q: Python - best way to set a column in a 2d array to a specific value I have a 2d array, I would like to set a column to a particular value, my code is below. Is this the best way in python? rows = 5 cols = 10 data = (rows * cols) *[0] val = 10 set_col = 5 for row in range(rows): data[row * cols + set_col - 1] = val If I want to set a number of columns to a particular value, how could I extend this? I would like to use the python standard library only. Thanks A: The NumPy package provides a powerful N-dimensional array object. If data is a numpy array, then to set column set_col to value val: data[:, set_col] = val Complete Example: >>> import numpy as np >>> a = np.arange(10) >>> a.shape = (5,2) >>> a array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]) >>> a[:,1] = -1 >>> a array([[ 0, -1], [ 2, -1], [ 4, -1], [ 6, -1], [ 8, -1]]) A: A better solution would be: data = [[0] * cols for i in range(rows)] For the values of cols = 2, rows = 3 we'd get: data = [[0, 0], [0, 0], [0, 0]] Then you can access it as: v = data[row][col] Which leads to: val = 10 set_col = 5 for row in range(rows): data[row][set_col] = val Or the more Pythonic (thanks J.F. Sebastian): for row in data: row[set_col] = val A: There's nothing inherently wrong with the approach you're using, except that it would be clearer to name the variable set_col rather than set_row since you're setting a column. So to set a number of columns, just wrap it with another loop: for set_col in [...columns that have to be set...] One concern, though: your 2D array is unusual in that it's packed in a 1D array (Python can support 2D arrays via lists of lists as well), so I would wrap it all with methods or functions. A: In your case rows and columns are probably interchangeable, i.e. it's a matter of semantics which is which. If this is the case, then you could make columns occupy a sequence of cells in the data list, and then zero them using just: data[column_start:column_start+rows] = rows * [0] A: An earlier answer left out a range, so you could try the following: cols = 7 rows = 8 data = [[0] * cols for i in range(rows)] val = 10 set_col = 5 for row in data: row[set_col] = val to extend this to a number of columns you could store the column number and its value in a dict. So to set column 5 to 10 and column 2 to 7: cols = 7 rows = 8 data = [[0] * cols for i in range(rows)] valdict = {5:10, 2:7} for col, val in valdict.items(): for row in data: row[col] = val Swapping the rows and columns, as suggested in another answer, makes this slightly simpler: cols = 7 rows = 8 data = [[0] * rows for i in range(cols)] valdict = {5:10, 2:7} for col, val in valdict.items(): data[col] = [val] * rows
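One more stdlib-only sketch, for the flat-list layout used in the question itself: a column of a row-major rows x cols list is an extended slice, so it can be assigned in a single statement (0-based column indices assumed):

rows, cols = 5, 10
data = [0] * (rows * cols)

val, set_col = 10, 4              # column 5 in the question's 1-based numbering
data[set_col::cols] = [val] * rows

# Several columns at once:
for col, v in {4: 10, 7: -1}.items():
    data[col::cols] = [v] * rows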
Python - best way to set a column in a 2d array to a specific value
I have a 2d array, I would like to set a column to a particular value, my code is below. Is this the best way in python? rows = 5 cols = 10 data = (rows * cols) *[0] val = 10 set_col = 5 for row in range(rows): data[row * cols + set_col - 1] = val If I want to set a number of columns to a particular value, how could I extend this? I would like to use the python standard library only. Thanks
[ "NumPy package provides powerful N-dimensional array object. If data is a numpy array then to set set_col column to val value:\ndata[:, set_col] = val\n\nComplete Example: \n>>> import numpy as np\n>>> a = np.arange(10)\n>>> a.shape = (5,2)\n>>> a\narray([[0, 1],\n [2, 3],\n [4, 5],\n [6, 7],\n [8, 9]])\n>>> a[:,1] = -1\n>>> a\narray([[ 0, -1],\n [ 2, -1],\n [ 4, -1],\n [ 6, -1],\n [ 8, -1]])\n\n", "A better solution would be:\ndata = [[0] * cols for i in range(rows)]\n\nFor the values of cols = 2, rows = 3 we'd get:\ndata = [[0, 0],\n [0, 0],\n [0, 0]]\n\nThen you can access it as:\nv = data[row][col]\n\nWhich leads to:\nval = 10\nset_col = 5\n\nfor row in range(rows):\n data[row][set_col] = val\n\nOr the more Pythonic (thanks J.F. Sebastian):\nfor row in data:\n row[set_col] = val\n\n", "There's nothing inherently wrong with the way you're using, except that it would be clearer to name the variableset_col than set_row since you're setting a column.\nSo set a number of columns, just wrap it with another loop:\nfor set_col in [...columns that have to be set...]\n\nOne concern, though: your 2D array is unusual in that it's packed in a 1D array (Python can support 2D arrays via lists of lists as well), so I would wrap it all with methods or functions.\n", "In your case rows and columns are probably interchangeable, i.e. it's matter of semantics which is which. If this is the case, then you could make columns to occupy sequence of cells in data list, and then zero them using just:\n data[column_start:column_start+rows] = rows * [0]\n\n", "An earlier answer left out a range, so you could try the following:\ncols = 7\nrows = 8\n\ndata = [[0] * cols for i in range(rows)]\n\nval = 10\nset_col = 5\n\nfor row in data:\n row[set_col] = val\n\nto extend this to a number of columns you could store the column number and it's value in a dict. So to set colum 5 to 10 and column 2 to 7:\ncols = 7\nrows = 8\n\ndata = [[0] * cols for i in range(rows)]\n\nvaldict = {5:10, 2:7}\n\nfor col, val in valdict.items():\n for row in data:\n row[col] = val\n\nSwapping the rows and columns, as suggested in another answer, makes this slightly simpler:\ncols = 7\nrows = 8\n\ndata = [[0] * rows for i in range(cols)]\n\nvaldict = {5:10, 2:7}\n\nfor col, val in valdict.items():\n data[col] = [val] * rows\n\n" ]
[ 29, 15, 0, 0, 0 ]
[]
[]
[ "multidimensional_array", "python" ]
stackoverflow_0002207283_multidimensional_array_python.txt
Q: Python regex problem What I am trying to do: Parse a query for a leading or trailing ? which will result in a search on the rest of the string. "foobar?" or "?foobar" results in a search. "foobar" results in some other behavior. This code works as expected in the interpreter: >>> import re >>> print re.match(".+\?\s*$","foobar?") <_sre.SRE_Match object at 0xb77c4d40> >>> print re.match(".+\?\s*$","foobar") None This code from a Django app does not: doSearch = { "text":"Search for: ", "url":"http://www.google.com/#&q=QUERY", "words":["^\?\s*",".+\?\s*$"] } ... subQ = myCore.lookForPrefix(someQuery, doSearch["words"]) ... def lookForPrefix(query,listOfPrefixes): for l in listOfPrefixes: if re.match(l, query): return re.sub(l,'', query) return False The Django code never matches the trailing "?", all other regexes work fine. Any ideas about why not? A: The problem is in your second regex. It matches the whole query, so using re.sub() will replace it all with an empty string. I.e. lookForPrefix('foobar?',listOfPrefixes) will return ''. You are likely checking the return value in an if, so it evaluates the empty string as false. To solve this, you just need to change the second regex to \?\s*$ and use re.search() instead of re.match(), as the latter requires that your regex matches from the beginning of the string. doSearch = { "text":"Search for: ", "url":"http://www.google.com/#&q=QUERY", "words":["^\?\s*","\?\s*$"] } def lookForPrefix(query,listOfPrefixes): for l in listOfPrefixes: if re.search(l, query): return re.sub(l,'', query) return False The result: >>> lookForPrefix('?foobar', doSearch["words"]) 'foobar' >>> lookForPrefix('foobar?', doSearch["words"]) 'foobar' >>> lookForPrefix('foobar', doSearch["words"]) False EDIT: In fact, you might as well combine the two regexes into one: ^\?\s*|\?\s*$. That will work equally well. A: You probably want to use raw strings for regexes, such as: r'^\s\?'. Raw strings prevent problems with escaped characters becoming other values (r'\0' is the same as '\\0', but different from '\0' (a single null character)). Also r'^\?\s*|\?\s*$' will NOT work as intended by Max S. because the | is alternating between "\s*" and "\?". The regex proposed in the EDIT interprets to: question mark at the beginning of the line followed by any number of spaces OR a question mark, followed by any number of spaces and the end of the line. I believe Max S. intended: r'(^\?\s*)|(\?\s*$)', which interprets to: a question mark followed by any number of spaces at the beginning or end of the line.
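For completeness, a sketch of the combined raw-string version of the fix, using the pattern from the answers above:

import re

PAT = re.compile(r'(^\?\s*)|(\?\s*$)')

def lookForPrefix(query):
    if PAT.search(query):
        return PAT.sub('', query)
    return False

print lookForPrefix('?foobar')    # 'foobar'
print lookForPrefix('foobar?')    # 'foobar'
print lookForPrefix('foobar')     # False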
Python regex problem
What I am trying to do: Parse a query for a leading or trailing ? which will result in a search on the rest of the string. "foobar?" or "?foobar" results in a search. "foobar" results in some other behavior. This code works as expected in the interpreter: >>> import re >>> print re.match(".+\?\s*$","foobar?") <_sre.SRE_Match object at 0xb77c4d40> >>> print re.match(".+\?\s*$","foobar") None This code from a Django app does not: doSearch = { "text":"Search for: ", "url":"http://www.google.com/#&q=QUERY", "words":["^\?\s*",".+\?\s*$"] } ... subQ = myCore.lookForPrefix(someQuery, doSearch["words"]) ... def lookForPrefix(query,listOfPrefixes): for l in listOfPrefixes: if re.match(l, query): return re.sub(l,'', query) return False The Django code never matches the trailing "?", all other regexes work fine. Any ideas about why not?
[ "The problem is in your second regex. It matches the whole query, so using re.sub() will replace it all with an empty string. I.e. lookForPrefix('foobar?',listOfPrefixes) will return ''. You are likely checking the return value in an if, so it evaluates the empty string as false.\nTo solve this, you just need to change the second regex to \\?\\s*$ and use re.search() instead of re.match(), as the latter requires that your regex matches from the beginning of the string.\ndoSearch = { \"text\":\"Search for: \", \"url\":\"http://www.google.com/#&q=QUERY\", \"words\":[\"^\\?\\s*\",\"\\?\\s*$\"] }\n\ndef lookForPrefix(query,listOfPrefixes):\n for l in listOfPrefixes:\n if re.search(l, query):\n return re.sub(l,'', query)\n return False\n\nThe result:\n>>> lookForPrefix('?foobar', doSearch[\"words\"])\n'foobar'\n>>> lookForPrefix('foobar?', doSearch[\"words\"])\n'foobar'\n>>> lookForPrefix('foobar', doSearch[\"words\"])\nFalse\n\nEDIT: In fact, you might as well combine the two regexes into one: ^\\?\\s*|\\?\\s*$. That will work equally well.\n", "You probably want to use raw strings for regexes, such as: r'^\\s\\?'. Regular strings will prevent problems with escaped characters becoming other values (r'\\0' is the same as '\\0', but different from '\\0' (a single null character)).\nAlso r'^\\?\\s*|\\?\\s*$' will NOT work as intended by Max S. because the | is alternating between \"\\s* and \\?. The regex proposed in the EDIT interprets to: question mark at the beginning of the line followed by any number of spaces OR a question mark, followed by any number of spaces and the end of the line.\nI believe Max S. intended: r'(^\\?\\s*)|(\\?\\s*$)', which interprets to: a question mark followed by any number of spaces at the beginning or end of the line.\n" ]
[ 3, 0 ]
[]
[]
[ "django", "python", "regex" ]
stackoverflow_0002206026_django_python_regex.txt
Q: How to write dynamic Django models? What I want is advice on defining a re-usable Product model for a shopping site app. Right now I know that the store is going to trade in "clothing", so the product model will have a "season or collections" relationship, but in the future I should be able to use that app to trade in product X, e.g. "cars", which have "mechanical specifications" relationships. So I'm thinking of metamodels, creating a generic model defined by key/values, but how do I make the relationships? You are the expert community, and I hope you can help me see beyond this. A: One way to define relationships between one object and many other types of objects is to use a GenericForeignKey and the ContentType framework. I'd guess you would be looking for a Product with some more specific related object such as Jacket. It may look something like this: class Product(models.Model): price = models.FloatField() description = models.TextField() content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() details = generic.GenericForeignKey('content_type', 'object_id') class Jacket(models.Model): size = models.PositiveIntegerField() color = models.CharField(max_length=25) This allows you to define any number of models similar to Jacket which can be associated with your product. jacket = Jacket(size=69, color="pink") jacket.save() prod = Product(price=0.99) prod.details = jacket # It's like magic! prod.save() This does create the additional work of creating a ton of Models, but it also allows you to be very creative with what data you store and how you store it. A: One, popular, option would be to use something like tags. So you'd have the stuff that's common to all items, like an item ID, name, description, price. Then you'd have some more generic tags or described tags in another table, associated with those items. You could have a tag that represents the season, or automobile specifications, etc... Of course, it really depends on how you design the system to cope with that additional information.
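A short sketch of what querying the generic relation from the first answer could look like (Product and Jacket as defined there; the import path is illustrative):

from django.contrib.contenttypes.models import ContentType
from myshop.models import Product, Jacket   # wherever the answer's models live

jacket_type = ContentType.objects.get_for_model(Jacket)

# All products whose details generic key points at a Jacket:
for product in Product.objects.filter(content_type=jacket_type):
    print product.price, product.details.size, product.details.color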
How to write dynamic Django models?
What I want is advice on defining a re-usable Product model for a shopping site app. Right now I know that the store is going to trade in "clothing", so the product model will have a "season or collections" relationship, but in the future I should be able to use that app to trade in product X, e.g. "cars", which have "mechanical specifications" relationships. So I'm thinking of metamodels, creating a generic model defined by key/values, but how do I make the relationships? You are the expert community, and I hope you can help me see beyond this.
[ "One way to define relationships between one object and many other types of objects is to use a GenericForeignKey and the ContentType framework. I'd guess you would be looking for a Product with some more specific related object such as Jacket. It may look something like this:\nclass Product(models.Model):\n price = models.FloatField()\n description = models.TextField()\n content_type = models.ForeignKey(ContentType)\n object_id = models.PositiveIntegerField()\n details = generic.GenericForeignKey('content_type', 'object_id')\n\nclass Jacket(models.Model):\n size = models.PositiveIntegerField()\n color = models.CharField(max_length=25)\n\nThis allows you to define any number of models similar to Jacket which can be associated with your product.\njacket = Jacket(size=69, color=\"pink\")\njacket.save()\nprod = Product(price=0.99)\nprod.details = jacket # It's like magic!\nprod.save()\n\nThis does create the additional work of creating a ton of Models, but it also allows you to be very creative with what data you store and how you store it.\n", "One, popular, option would be to use something like tags. So you'd have the stuff that's common to all items, like an item ID, name, description, price. Then you'd have some more generic tags or described tags in another table, associated with those items. You could have a tag that represents the season, or automobile specifications, etc...\nOf course, it really depends on how you design the system to cope with that additional information.\n" ]
[ 3, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002207562_django_python.txt
Q: Using a method in a model to count objects filtered by the primary key I wanted to use a method as part of my model to count all the occurrences of the object in another table that references it as a foreign key. Will the below work? class Tile(models.Model): #... def popularity(self): return PlaylistItem.objects.filter(tile__exact=self.id).count() And the relevant information from the playlistitem model: class PlaylistItem(models.Model): #... tile = models.ForeignKey(Tile) A: When you create a ForeignKey, Django creates a backref on referenced model for you, so you could just do: def popularity(self): return self.playlistitem_set.count() See http://docs.djangoproject.com/en/1.1/topics/db/queries/#backwards-related-objects.
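On Django 1.1+, the same number can also be computed in bulk with aggregation, one query for a whole queryset instead of one COUNT per tile -- a sketch, using the default related name and an arbitrary annotation name:

from django.db.models import Count

tiles = Tile.objects.annotate(num_plays=Count('playlistitem'))
for tile in tiles:
    print tile.num_plays   # same value popularity() returns, without extra queries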
Using a method in a model to count objects filtered by the primary key
I wanted to use a method as part of my model to count all the occurrences of the object in another table that references it as a foreign key. Will the below work? class Tile(models.Model): #... def popularity(self): return PlaylistItem.objects.filter(tile__exact=self.id).count() And the relevant information from the playlistitem model: class PlaylistItem(models.Model): #... tile = models.ForeignKey(Tile)
[ "When you create a ForeignKey, Django creates a backref on referenced model for you, so you could just do:\ndef popularity(self):\n return self.playlistitem_set.count()\n\nSee http://docs.djangoproject.com/en/1.1/topics/db/queries/#backwards-related-objects.\n" ]
[ 4 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0002207875_django_django_models_python.txt
Q: Where would I import urllib2 for a class? I have a class that needs access to urllib2, the trivial example for me is: class foo(object): myStringHTML = urllib2.urlopen("http://www.google.com").read() How should I structure my code to include urllib2? In general, I want to store foo in a utility module with a number of other classes, and be able to import foo by itself from the module: from utilpackage import foo Is the correct style to put the import inside the class? This seems strange to me, but it works.... class import_u2_in_foo(object): import urllib2 myStringHtml = urllib2.urlopen("http://www.google.com").read() Or should I move foo into another package so I always use import foo # then foo.py contains import urllib2 class foo(object): myStringHtml = urllib2.urlopen("http://www.google.com").read() How should I structure my code here to be the most pythonic :)? A: You should import it in the utilpackage module, but only export the class foo from it: import urllib2 __all__ = ["foo"] class foo(object): myStringHtml = urllib2.urlopen("http://www.google.com").read() Then you can do from utilpackage import foo but not from utilpackage import urllib2 That's best practice for from-imports in my opinion. A: My general rule of thumb is that if the import is used by many things within the file, I will put it at the very top. If it's only used by one single function or one single scope, I will place it closer to where it's used, say in the class or function. However, this isn't a hard and fast rule, and I don't spend a lot of time thinking about the optimum place to put an import usually. Mostly, I put them at the top. The two biggest reasons I like to have imports closer to where I use them is: If the function imports the things it needs, then I can easily cut and paste that code out into another file, say for testing purposes or if it's boilerplate code like my "catch and syslog any exceptions that happen in this code" code. If a function is infrequently used, or requires a module that isn't commonly installed for an infrequently used part, I don't have to import it when it's not used or require that users install a module they don't need. A: Both class and import are normal Python instructions that have no magic attached (well, there are some things happening under the hood, but still they are just plain good Python instructions). Python goes through your source code from top to bottom, executing every instruction on it its way - no matter if it's class instruction that creates new object for class, or import instruction that imports new module into current scope. Moreover, because import instruction is as good instruction as any other, you can just put it inside class - you can think about classes in similar way as you think about .py files inside directories - and in the latter case you're probably used to that import can be put directly in main scope of .py file. Writing all that, I'm not sure what you would like to achieve by adding this to your class: myStringHtml = urllib2.urlopen("http://www.google.com").read() -- this instruction is parsed on class creation, and then all objects of the class are sharing the same myStringHtml value. Regarding the question about best place to put import urllib2, I also don't quite understand the problem, but for me it seems most natural to do it in this way: import foo # then foo.py contains import urllib2 class foo(object): myStringHtml = urllib2.urlopen("http://www.google.com").read()
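Tying the answers together, a sketch of utilpackage.py with the network call deferred out of the class body, so that importing foo never hits the network (the URL is the question's own example):

# utilpackage.py
import urllib2

__all__ = ["foo"]

class foo(object):
    def fetch_html(self):
        # Runs when called, not at class-creation (import) time.
        return urllib2.urlopen("http://www.google.com").read()

Callers still just do: from utilpackage import foo.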
Where would I import urllib2 for a class?
I have a class that needs access to urllib2, the trivial example for me is: class foo(object): myStringHTML = urllib2.urlopen("http://www.google.com").read() How should I structure my code to include urllib2? In general, I want to store foo in a utility module with a number of other classes, and be able to import foo by itself from the module: from utilpackage import foo Is the correct style to put the import inside the class? This seems strange to me, but it works.... class import_u2_in_foo(object): import urllib2 myStringHtml = urllib2.urlopen("http://www.google.com").read() Or should I move foo into another package so I always use import foo # then foo.py contains import urllib2 class foo(object): myStringHtml = urllib2.urlopen("http://www.google.com").read() How should I structure my code here to be the most pythonic :)?
[ "You should import it in the utilpackage module, but only export the class foo from it:\nimport urllib2\n\n__all__ = [\"foo\"]\n\nclass foo(object):\n myStringHtml = urllib2.urlopen(\"http://www.google.com\").read()\n\nThen you can do\nfrom utilpackage import foo\n\nbut not\nfrom utilpackage import urllib2\n\nThat's best practice for from-imports in my opinion.\n", "My general rule of thumb is that if the import is used by many things within the file, I will put it at the very top. If it's only used by one single function or one single scope, I will place it closer to where it's used, say in the class or function.\nHowever, this isn't a hard and fast rule, and I don't spend a lot of time thinking about the optimum place to put an import usually. Mostly, I put them at the top.\nThe two biggest reasons I like to have imports closer to where I use them is:\n\nIf the function imports the things it needs, then I can easily cut and paste that code out into another file, say for testing purposes or if it's boilerplate code like my \"catch and syslog any exceptions that happen in this code\" code.\nIf a function is infrequently used, or requires a module that isn't commonly installed for an infrequently used part, I don't have to import it when it's not used or require that users install a module they don't need.\n\n", "Both class and import are normal Python instructions that have no magic attached (well, there are some things happening under the hood, but still they are just plain good Python instructions).\nPython goes through your source code from top to bottom, executing every instruction on it its way - no matter if it's class instruction that creates new object for class, or import instruction that imports new module into current scope.\nMoreover, because import instruction is as good instruction as any other, you can just put it inside class - you can think about classes in similar way as you think about .py files inside directories - and in the latter case you're probably used to that import can be put directly in main scope of .py file.\nWriting all that, I'm not sure what you would like to achieve by adding this to your class:\n myStringHtml = urllib2.urlopen(\"http://www.google.com\").read()\n\n-- this instruction is parsed on class creation, and then all objects of the class are sharing the same myStringHtml value.\nRegarding the question about best place to put import urllib2, I also don't quite understand the problem, but for me it seems most natural to do it in this way:\n import foo\n\n # then foo.py contains\n import urllib2\n\n class foo(object):\n myStringHtml = urllib2.urlopen(\"http://www.google.com\").read()\n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "coding_style", "module", "python" ]
stackoverflow_0002205712_coding_style_module_python.txt
Q: What is the most lightweight way to transmit data over the internet using Python? I have two computers in geographically dispersed locations, both connected to the internet. On each computer I am running a Python program, and I would like to send and receive data from one to the other. I'd like to use the most simple approach possible, while remaining somewhat secure. I have considered the following solutions, but I'm not sure which is the simplest: HTTP server and client, using protobuf*; SOAP web service and client (pywebsvcs maybe?); Some sort of IPC over an SSH tunnel -- again, protobuf maybe? Like I said, I'd like the solution to be somewhat secure, but simplicity is the most important requirement. The data is very simple; object of type A, which contains a list of objects of type B, and some other fields. *I have used protobuf in the past, so the only difficulty would be setting up the HTTP server, which I guess would be cherrypy. A: Protocol buffers are "lightweight" in the sense that they produce very compact wire representation, thus saving bandwidth, memory, storage, etc -- while staying very general-purpose and cross-language. We use them a lot at Google, of course, but it's not clear whether you care about these performance characteristics at all -- you seem to use "lightweight" in a very different sense from this, strictly connected with (mental) load on you, the programmer, and not al all with (computational) load on computers and networks;-). If you don't care about spending much more bandwidth / memory / etc than you could, and neither do you care about the ability to code the participating subsystems in different languages, then protocol buffers may not be optimal for you. Neither is pickling, if I read your "somewhat secure" requirement correctly: unpickling a suitably constructed malicious pickled-string can execute arbitrary code on the unpickling machine. In fact, HTTP is not "somewhat secure" in a slightly different sense: there's nothing in that protocol to stop intruders from "sniffing" your traffic (so you should never use HTTP to send confidential payloads, unless maybe you use strong encryption on the payload before sending it and undo that after receiving it). For security (again depending on what meaning you put on the word) you need HTTPS or (simpler to set up, doesn't require you to purchase certificates!-) SSH tunnels. Once you do have an SSH tunnel established between two machines (for Python, paramiko can help, but even doing it via shell scripts or otherwise by directly controlling the ssh commandline client isn't too bad;-) you can run any protocol on it (HTTP is fine, for example), as the tunnel endpoints are made available as given numbered ports on which you can open socket. I would personally recommend JSON instead of XML for encoding the payloads -- see here for an XMLRPC-like JSON-based RPC server and client, for example -- but I guess that using the XMLRPC server and client that come with Python's standard library is even simpler, thus probably closer to what you're looking for. Why would you want cherrypy in addition? Is performance now suddenly trumping simplicity, for this aspect of the whole architecture only, while in every other case simplicity was picked over performance? That would seem a peculiarly contradictory set of architectural choices!-) A: The cheapest and simplest way to transmit would probably be XML-RPC. 
It runs over HTTP (so you can secure it that way), it's in the standard library, and unlike protobuf, you don't have to worry about creating and compiling your data type files (since both ends are running Python, the dynamic typing shouldn't be a problem). The only caveat is that any types not represented in XML-RPC must be pickled or otherwise serialized. A: Or you could go right down to the Sockets library and simply transmit the data in your own format. http://www.amk.ca/python/howto/sockets/ A: You could consider Pyro, be sure to read the Security chapter. Update: It seems simpler to set up than Protocol Buffers and may require less work if your requirements grow more complex in the future (they have a way of doing that... :-) A: Alex is right, of course. But, I'll chime in that I have been very happy in the past with pickling data and pushing it over SSH to another process for unpickling. It's just so easy. But, it's not suitable for many things. You really need to trust the incoming data, which in the case of my blog server receiving a pickled blog post (my client parses out the tags or the like), I definitely do trust the data -- it's authenticated as me already. Google, where Alex works, is an entirely different matter. :-)
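A minimal stdlib sketch of the XML-RPC suggestion (Python 2 module names; the host, port and payload shape are illustrative):

# server.py -- run on one machine
from SimpleXMLRPCServer import SimpleXMLRPCServer

def receive(a):
    # 'a' arrives as plain dicts/lists, e.g. {'name': ..., 'bs': [...]}
    return len(a['bs'])

server = SimpleXMLRPCServer(('0.0.0.0', 8000))
server.register_function(receive)
server.serve_forever()

# client.py -- run on the other machine
import xmlrpclib
proxy = xmlrpclib.ServerProxy('http://example.com:8000/')
print proxy.receive({'name': 'A1', 'bs': [{'x': 1}, {'x': 2}]})

Per the security notes above, point the client at a local SSH-tunnel port rather than exposing the server directly.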
What is the most lightweight way to transmit data over the internet using Python?
I have two computers in geographically dispersed locations, both connected to the internet. On each computer I am running a Python program, and I would like to send and receive data from one to the other. I'd like to use the most simple approach possible, while remaining somewhat secure. I have considered the following solutions, but I'm not sure which is the simplest: HTTP server and client, using protobuf*; SOAP web service and client (pywebsvcs maybe?); Some sort of IPC over an SSH tunnel -- again, protobuf maybe? Like I said, I'd like the solution to be somewhat secure, but simplicity is the most important requirement. The data is very simple; object of type A, which contains a list of objects of type B, and some other fields. *I have used protobuf in the past, so the only difficulty would be setting up the HTTP server, which I guess would be cherrypy.
[ "Protocol buffers are \"lightweight\" in the sense that they produce very compact wire representation, thus saving bandwidth, memory, storage, etc -- while staying very general-purpose and cross-language. We use them a lot at Google, of course, but it's not clear whether you care about these performance characteristics at all -- you seem to use \"lightweight\" in a very different sense from this, strictly connected with (mental) load on you, the programmer, and not al all with (computational) load on computers and networks;-).\nIf you don't care about spending much more bandwidth / memory / etc than you could, and neither do you care about the ability to code the participating subsystems in different languages, then protocol buffers may not be optimal for you.\nNeither is pickling, if I read your \"somewhat secure\" requirement correctly: unpickling a suitably constructed malicious pickled-string can execute arbitrary code on the unpickling machine. In fact, HTTP is not \"somewhat secure\" in a slightly different sense: there's nothing in that protocol to stop intruders from \"sniffing\" your traffic (so you should never use HTTP to send confidential payloads, unless maybe you use strong encryption on the payload before sending it and undo that after receiving it). For security (again depending on what meaning you put on the word) you need HTTPS or (simpler to set up, doesn't require you to purchase certificates!-) SSH tunnels.\nOnce you do have an SSH tunnel established between two machines (for Python, paramiko can help, but even doing it via shell scripts or otherwise by directly controlling the ssh commandline client isn't too bad;-) you can run any protocol on it (HTTP is fine, for example), as the tunnel endpoints are made available as given numbered ports on which you can open socket. I would personally recommend JSON instead of XML for encoding the payloads -- see here for an XMLRPC-like JSON-based RPC server and client, for example -- but I guess that using the XMLRPC server and client that come with Python's standard library is even simpler, thus probably closer to what you're looking for. Why would you want cherrypy in addition? Is performance now suddenly trumping simplicity, for this aspect of the whole architecture only, while in every other case simplicity was picked over performance? That would seem a peculiarly contradictory set of architectural choices!-)\n", "The cheapest and simplest way to transmit would probably be XML-RPC. It runs over HTTP (so you can secure it that way), it's in the standard library, and unlike protobuf, you don't have to worry about creating and compiling your data type files (since both ends are running Python, the dynamic typing shouldn't be a problem). The only caveat is that any types not represented in XML-RPC must be pickled or otherwise serialized.\n", "Or you could go right down to the Sockets library and simply transmit the data in your own format.\nhttp://www.amk.ca/python/howto/sockets/\n", "You could consider Pyro, be sure to read the Security chapter.\nUpdate: It seems simpler to set up than Protocol Buffers and may require less work if your requirements grow more complex in the future (they have a way of doing that... :-)\n", "Alex is right, of course. But, I'll chime in that I have been very happy in the past with pickling data and pushing it over SSH to another process for unpickling. It's just so easy.\nBut, it's not suitable for many things. 
You really need to trust the incoming data, which in the case of my blog server receiving a pickled blog post (my client parses out the tags or the like), I definitely do trust the data -- it's authenticated as me already.\nGoogle, where Alex works, is an entirely different matter. :-)\n" ]
[ 9, 3, 1, 0, 0 ]
[]
[]
[ "network_protocols", "python" ]
stackoverflow_0002199963_network_protocols_python.txt
Q: How to redirect to a query string URL containing non-ascii characters in DJANGO? How to redirect to a query string URL containing non-ascii characters in DJANGO? When I use return HttpResponseRedirect(u'/page/?title=' + query_string) where the query_string contains characters like 你好, I get an error 'ascii' codec can't encode characters in position 21-26: ordinal not in range(128), HTTP response headers must be in US-ASCII format ... A: HttpResponseRedirect((u'/page/?title=' + query_string).encode('utf-8')) is the first thing to try (since UTF8 is the only popular encoding that can handle all Unicode characters). That should definitely get rid of the exception you're observing -- the issue then moves to ensuring the handler for /page can properly deal with UTF-8 encoded queries (presumably by decoding them back into Unicode). However, that part is not, strictly speaking, germane to this specific question you're asking! A: django way: from django.http import HttpResponseRedirect from django.utils.http import urlquote return HttpResponseRedirect(u'/page/?title=%s' % urlquote(query_string))
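A sketch of the second answer in view form, showing what the quoting produces (the literal is the question's 你好 example):

from django.http import HttpResponseRedirect
from django.utils.http import urlquote

def redirect_to_page(request):
    query_string = u'\u4f60\u597d'   # u'你好'
    # urlquote percent-encodes the UTF-8 bytes, so the Location header
    # stays pure ASCII: /page/?title=%E4%BD%A0%E5%A5%BD
    return HttpResponseRedirect(u'/page/?title=%s' % urlquote(query_string))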
How to redirect to a query string URL containing non-ascii characters in DJANGO?
How to redirect to a query string URL containing non-ascii characters in DJANGO? When I use return HttpResponseRedirect(u'/page/?title=' + query_string) where the query_string contains characters like 你好, I get an error 'ascii' codec can't encode characters in position 21-26: ordinal not in range(128), HTTP response headers must be in US-ASCII format ...
[ "HttpResponseRedirect(((u'/page/?title=' + query_string).encode('utf-8'))\n\nis the first thing to try (since UTF8 is the only popular encoding that can handle all Unicode characters). That should definitely get rid of the exception you're observing -- the issue then moves to ensuring the handler for /page can properly deal with UTF-8 encoded queries (presumably by decoding them back into Unicode). However, that part is not, strictly speaking, germane to this specific question you're asking!\n", "django way:\nfrom django.http import HttpResponseRedirect\nfrom django.utils.http import urlquote\n\nreturn HttpResponseRedirect(u'/page/?title=%s' % urlquote(query_string))\n\n" ]
[ 6, 6 ]
[]
[]
[ "django", "httpresponse", "python", "unicode" ]
stackoverflow_0002204914_django_httpresponse_python_unicode.txt
Q: How to use Django templatetags in static media files? I'm using a flash gallery and the settings xml file is stored in /media/xml/gallery.xml In the gallery.xml file I want to add this snippet of code: <items> {% for image in images %} <item source="{{ MEDIA_URL }}{{ image.image }}" thumb="" description="{{ image.title }}" /> {% endfor %} </items> But the source="... renders as such: http://127.0.0.1:8000/media/images/gallery/%7B%7B%20MEDIA_URL%20%7D%7D%7B%7B%20image.image%20%7D%7D Is there a way I can work around this problem? Thanks for the help. A: You will have to serve this document via a Django view, and render it as a template. A: Static media is, by definition, static. If you want Django mechanisms to work then you need to process it using Django.
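A minimal sketch of the view-based route both answers point to; the url pattern, view name and Image model are illustrative:

# urls.py
#   (r'^media/xml/gallery\.xml$', 'gallery.views.gallery_xml'),

# gallery/views.py
from django.conf import settings
from django.shortcuts import render_to_response
from gallery.models import Image   # illustrative model

def gallery_xml(request):
    return render_to_response('gallery.xml',
                              {'images': Image.objects.all(),
                               'MEDIA_URL': settings.MEDIA_URL},
                              mimetype='text/xml')

The template is the gallery.xml from the question, now rendered with a real context instead of being served as a static file.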
How to use Django templatetags in static media files?
I'm using a flash gallery and the settings xml file is stored in /media/xml/gallery.xml In the gallery.xml file I want to add this snippet of code: <items> {% for image in images %} <item source="{{ MEDIA_URL }}{{ image.image }}" thumb="" description="{{ image.title }}" /> {% endfor %} </items> But the source="... renders as such: http://127.0.0.1:8000/media/images/gallery/%7B%7B%20MEDIA_URL%20%7D%7D%7B%7B%20image.image%20%7D%7D Is there a way I can work around this problem? Thanks for the help.
[ "You will have to serve this document via a Django view, and render it as a template.\n", "Static media is, by definition, static. If you want Django mechanisms to work then you need to process using Django.\n" ]
[ 2, 2 ]
[]
[]
[ "django", "media", "python", "tags", "templates" ]
stackoverflow_0002208551_django_media_python_tags_templates.txt
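A rough sketch of the view-based approach both answers point to; the Gallery model, template path and mimetype are assumptions about this project's layout.

    # views.py
    from django.shortcuts import render_to_response
    from django.template import RequestContext
    from myapp.models import Gallery  # hypothetical model holding the images

    def gallery_xml(request):
        images = Gallery.objects.all()
        # RequestContext runs the context processors, so {{ MEDIA_URL }}
        # resolves inside the template
        return render_to_response('xml/gallery.xml', {'images': images},
                                  context_instance=RequestContext(request),
                                  mimetype='text/xml')

The flash gallery is then pointed at this view's URL, and gallery.xml moves from /media/xml/ into a template directory.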
Q: How to make a multi choice form field on app engine I'm building an application on App Engine and I want to make a form field with multiple choices. Here is my form (it uses django.newforms from the app engine sdk (django 0.96)) : from google.appengine.ext.db import djangoforms from django import newforms class KeywordForm(djangoforms.ModelForm): class Meta: model = Keyword exclude = ['site', 'created_at', 'last_update'] choices = [ (1, 'value1'), (2, 'value2'), (3, 'value3'), (4, 'value4') ] server = newforms.fields.MultipleChoiceField(choices = choices) The problem is : when I submit the form (with one or more values selected) I get this validation error : "Enter a list of values." I don't understand why... any help on this problem would be much appreciated. Thanks ! :) Edit (extra information) : Here is the form validation code : form = forms.KeywordForm(data=self.request.POST) if form.is_valid(): ... self.request.POST : UnicodeMultiDict([(u'keyword', u'test'), (u'server[]', u'1'), (u'server[]', u'2')]) A: I've found a solution ! The problem is the self.request.POST dictionary provided to my form's constructor. Its format is not accepted by the MultipleChoiceField.clean() function, so I transformed it. Here is the working validation code : args = self.request.arguments() data = {} for i in args: data[i] = self.request.get_all(i) form = forms.KeywordForm(data=data) if form.is_valid(): [...]
How to make a multi choice form field on app engine
I'm building an application on App Engine and I want to make a form field with multiple choices. Here is my form (it uses django.newforms from the app engine sdk (django 0.96)) : from google.appengine.ext.db import djangoforms from django import newforms class KeywordForm(djangoforms.ModelForm): class Meta: model = Keyword exclude = ['site', 'created_at', 'last_update'] choices = [ (1, 'value1'), (2, 'value2'), (3, 'value3'), (4, 'value4') ] server = newforms.fields.MultipleChoiceField(choices = choices) The problem is : when I submit the form (with one or more values selected) I get this validation error : "Enter a list of values." I don't understand why... any help on this problem would be much appreciated. Thanks ! :) Edit (extra information) : Here is the form validation code : form = forms.KeywordForm(data=self.request.POST) if form.is_valid(): ... self.request.POST : UnicodeMultiDict([(u'keyword', u'test'), (u'server[]', u'1'), (u'server[]', u'2')])
[ "I've found a solution !\nThe problem is self.request.POST dictionnary provided to my form's constructor.\nIt's format is not appreciated by MultipleChoiceField.clean() function, so I transformed it.\nHere is the working validation code :\n args = self.request.arguments()\n data = {}\n for i in args:\n data[i] = self.request.get_all(i)\n form = forms.KeywordForm(data=data)\n if form.is_valid():\n [...]\n\n" ]
[ 2 ]
[]
[]
[ "django_forms", "google_app_engine", "python" ]
stackoverflow_0002208207_django_forms_google_app_engine_python.txt
Q: Django unable to open sqlite on SOME queries only? I have had no troubles locally, but pushing a new project to an existing machine (one running plenty of other django apps without trouble) gave this: OperationalError: unable to open database file What is more perplexing is: The sqlite file is read-write for all This error only happens on some queries! Others are fine. In a fresh db after running syncdb, my views work, but /admin/ triggers this. If I loaddata for some of my apps from data dumped from my local machine, some of the apps trigger this in my views, and others do not. I can find no correlation between any of the things that seem to trigger this. Why would it fail to open the database, aside from permissions? A: Write access on the directory. I thought I checked that first :-/
Django unable to open sqlite on SOME queries only?
I have had no troubles locally, but pushing a new project to an existing machine (one running plenty of other django apps without trouble) gave this: OperationalError: unable to open database file What is more perplexing is: The sqlite file is read-write for all This error only happens on some queries! Others are fine. In a fresh db after running syncdb, my views work, but /admin/ triggers this. If I loaddata for some of my apps from data dumped from my local machine, some of the apps trigger this in my views, and others do not. I can find no correlation between any of the things that seem to trigger this. Why would it fail to open the database, aside from permissions?
[ "Write access on the directory. I thought i checked that first :-/\n" ]
[ 0 ]
[]
[]
[ "django", "python", "sqlite" ]
stackoverflow_0002208643_django_python_sqlite.txt
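A quick way to confirm the accepted answer's point: SQLite needs write access to the directory as well (it creates a journal file next to the database), not only to the .db file. The path here is a placeholder.

    import os

    db_path = '/srv/myproject/site.db'  # hypothetical; use your DATABASE_NAME
    db_dir = os.path.dirname(db_path)

    # both must be True for the user the web server runs as
    print 'db file writable:', os.access(db_path, os.W_OK)
    print 'db dir writable: ', os.access(db_dir, os.W_OK)

Running it as the web server's user (for example sudo -u www-data python check.py) shows what Django actually sees.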
Q: Index xml files from an outside website Using Python and Django I would like to access this site http://www.reta-vortaro.de/revo/ It is a dictionary site for a language called Esperanto, I need to be able to search for a word, and get back its definition, it looks like each Esperanto root word has an xml file, I need to index each xml file and store the name of each xml file in a database. On my website I need to $_GET the word. I need to search for combinations of these root words with an xml file named after it. A: Most programming languages have access to both some sort of XML parser as well as some persistent embedded key-value store. Once you've decided on a programming language, just find one of each that you can feel comfortable with. A: I wonder if you have access to WSDL. You might be able to access the data that way. What exactly is the problem you are encountering? A: As soon as you need indexing and fast search, it might be worth looking at an XML database for storing your dictionary (especially for complex queries and big dictionaries). You can easily access most XML databases from PHP. A: I would consider such a workflow for you: Download all the files Load their contents and filenames into a database (any database will fit) Set up the sphinx search tool ( http://sphinx.pocoo.org/ ) Run sphinx to build an index for xml_contents Design your application to use sphinx for search in the index Delete all file contents, leaving only the filename and sphinx index in the database When using sphinx search, you'll get a filename and can work with it as you intended I'm not very familiar with sphinx and don't know whether it is capable of using files to build its index, which is why I suggest loading all the information into a database A: Have you tried asking the site's administrator for the data? Or perhaps he could set up a web service for you? A: Well, you can fetch each XML file with file_get_contents(), curl, wget or the tool that pleases you the most. Then, you can save the XML files on the file system, or even better use Oracle's Berkeley DBXML, with it you can actually save the XML in the Database and query it, kinda like if it was SQL. It has PHP bindings and lets you query with XQuery. I used it to replace an XML web service and it works like a charm, blazing fast. For PHP XML parsing I used to use Keith Devens' XML to Array parser, which is EASY, but it's now oldish. Now I use CakePHP's own, you may want to use PHP's SimpleXML. There are also JavaScript based parsers you could use on the client side of the app like jParse (jQuery). This is the page for PHP + dbXML but it seems to be down: http://phpdbxml.4641.org/ but you can download it from here: http://www.oracle.com/technology/software/products/berkeley-db/index.html (there's the license too). I hope it helps.
Index xml files from an outside website
Using Python and Django I would like to access this site http://www.reta-vortaro.de/revo/ It is a dictionary site for a language called Esperanto, I need to be able to search for a word, and get back its definition, it looks like each Esperanto root word has an xml file, I need to index each xml file and store the name of each xml file in a database. On my website I need to $_GET the word. I need to search for combinations of these root words with an xml file named after it.
[ "Most programming languages have access to both some sort of XML parser as well as some persistent embedded key-value store. Once you've decided on a programming language, just find one of each that you can feel comfortable with.\n", "Wonder, if you have access to WSDL. You might be able to access the data that way.\nWhat exactly is the problem you are encountering?\n", "As soon as you need indexing and fast search, it might worth looking for XML database for storing your dictionary (especially for complex queries and big dictionaries). You can easily access most XML databases from PHP.\n", "I would consider such workflow for you:\n\nDownload all the files\nLoad their contents and filenames into database (any database will fit) \nSetup sphinx search tool ( http://sphinx.pocoo.org/ )\nRun sphinx to build an index for xml_contents\nDesign your application to use sphinx for search in index\nDelete all file contains, leaving only filename and sphinx index in a database\nWhen using sphinx search, you'll get a filename, operate with it as you supposed before\n\nI'm not very familiar with sphinx and don't know is it capable to use files, to build it's index, that why I offer you to load all the information into database\n", "Have you tried asking the site's administrator for the data? Or perhaps he could set up a web service for you?\n", "Well, you can fetch each XML file with file_get_contents(), curl, wget or the tool that pleases you the most.\nThen, You can save the XML files on the file system, or even better use Oracle's Berkeley DBXML, with it you can actually save the XML in the Database and query it, kinda like if it was SQL. It has PHP bindings and lets you query with XQuery. I used it to replace a XML web service and works like a charm, blazing fast.\nFor PHP XML parsing I used to use Keith Devens' XML to Array parser, which is EASY, but it's now oldish Now I use CakePHP's own, you may want to use PHP's SimpleXML. There are also JavaScript based parsers you could use on the client side of the app like jParse (jQuery).\nThis is the Page for PHP + dbXML but seems to be down: http://phpdbxml.4641.org/ but you can download it from here: http://www.oracle.com/technology/software/products/berkeley-db/index.html (there's the license too).\nI hope it helps.\n" ]
[ 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "django", "python", "xml" ]
stackoverflow_0002115290_django_python_xml.txt
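A minimal Python sketch of the download-parse-store workflow the answers converge on. The URL layout and the element holding the root word are guesses about the site's XML format, so treat them as placeholders.

    import urllib
    import sqlite3
    from xml.etree import ElementTree

    conn = sqlite3.connect('revo_index.db')
    conn.execute('create table if not exists words (filename text, word text)')

    def index_file(filename):
        url = 'http://www.reta-vortaro.de/revo/xml/%s' % filename  # assumed path
        data = urllib.urlopen(url).read()
        root = ElementTree.fromstring(data)
        word = root.findtext('.//rad')  # assumed element for the root word
        if word:
            conn.execute('insert into words values (?, ?)', (filename, word))
            conn.commit()

A Django view can then look a word up in the words table and map it back to its file.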
Q: How to debug a MemoryError in Python? Tools for tracking memory use? I have a Python program that dies with a MemoryError when I feed it a large file. Are there any tools that I could use to figure out what's using the memory? This program ran fine on smaller input files. The program obviously needs some scalability improvements; I'm just trying to figure out where. "Benchmark before you optimize", as a wise person once said. (Just to forestall the inevitable "add more RAM" answer: This is running on a 32-bit WinXP box with 4GB RAM, so Python has access to 2GB of usable memory. Adding more memory is not technically possible. Reinstalling my PC with 64-bit Windows is not practical.) EDIT: Oops, this is a duplicate of Which Python memory profiler is recommended? A: Heapy is a memory profiler for Python, which is the type of tool you need. A: The simplest and most lightweight way would likely be to use the built-in memory query capabilities of Python, such as sys.getsizeof - just run it on your objects for a reduced problem (i.e. a smaller file) and see what takes a lot of memory. A: In your case, the answer is probably very simple: Do not read the whole file at once but process the file chunk by chunk. That may be very easy or complicated depending on your usage scenario. Just for example, an MD5 checksum computation can be done much more efficiently for huge files without reading the whole file in. The latter change has dramatically reduced memory consumption in some SCons usage scenarios but was almost impossible to trace with a memory profiler. If you still need a memory profiler: eliben already suggested sys.getsizeof. If that doesn't cut it, try Heapy or Pympler. A: You asked for a tool recommendation: Python Memory Validator allows you to monitor the memory usage, allocation locations, GC collections, object instances, memory snapshots, etc. of your Python application. Windows only. http://www.softwareverify.com/python/memory/index.html Disclaimer: I was involved in the creation of this software.
How to debug a MemoryError in Python? Tools for tracking memory use?
I have a Python program that dies with a MemoryError when I feed it a large file. Are there any tools that I could use to figure out what's using the memory? This program ran fine on smaller input files. The program obviously needs some scalability improvements; I'm just trying to figure out where. "Benchmark before you optimize", as a wise person once said. (Just to forestall the inevitable "add more RAM" answer: This is running on a 32-bit WinXP box with 4GB RAM, so Python has access to 2GB of usable memory. Adding more memory is not technically possible. Reinstalling my PC with 64-bit Windows is not practical.) EDIT: Oops, this is a duplicate of Which Python memory profiler is recommended?
[ "Heapy is a memory profiler for Python, which is the type of tool you need.\n", "The simplest and lightweight way would likely be to use the built in memory query capabilities of Python, such as sys.getsizeof - just run it on your objects for a reduced problem (i.e. a smaller file) and see what takes a lot of memory.\n", "In your case, the answer is probably very simple: Do not read the whole file at once but process the file chunk by chunk. That may be very easy or complicated depending on your usage scenario. Just for example, a MD5 checksum computation can be done much more efficiently for huge files without reading the whole file in. The latter change has dramatically reduced memory consumption in some SCons usage scenarios but was almost impossible to trace with a memory profiler.\nIf you still need a memory profiler: eliben already suggested sys.getsizeof. If that doesn't cut it, try Heapy or Pympler.\n", "You asked for a tool recommendation:\nPython Memory Validator allows you to monitor the memory usage, allocation locations, GC collections, object instances, memory snapshots, etc of your Python application. Windows only.\nhttp://www.softwareverify.com/python/memory/index.html\nDisclaimer: I was involved in the creation of this software.\n" ]
[ 10, 4, 2, 1 ]
[]
[]
[ "memory_management", "out_of_memory", "profiling", "python" ]
stackoverflow_0001681836_memory_management_out_of_memory_profiling_python.txt
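A sketch of the chunk-by-chunk idea from the third answer, with an MD5 checksum as the stand-in workload; the file is never held in memory all at once.

    import hashlib

    def md5_of_file(path, chunk_size=1024 * 1024):
        md5 = hashlib.md5()
        f = open(path, 'rb')
        try:
            while True:
                chunk = f.read(chunk_size)  # at most one chunk in memory
                if not chunk:
                    break
                md5.update(chunk)
        finally:
            f.close()
        return md5.hexdigest()

The same pattern (read a bounded chunk, process it, discard it) applies to whatever the MemoryError-prone program does with its large input.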
Q: How to do "performance-based" (benchmark) unit testing in Python Let's say that I've got my code base to as high a degree of unit test coverage as makes sense. (Beyond a certain point, increasing coverage doesn't have a good ROI.) Next I want to test performance. To benchmark code to make sure that new commits aren't slowing things down needlessly. I was very intrigued by Safari's zero tolerance policy for slowdowns from commits. I'm not sure that level of commitment to speed has a good ROI for most projects, but I'd at least like to be alerted that a speed regression has happened, and be able to make a judgment call about it. Environment is Python on Linux, and a suggestion that was also workable for BASH scripts would make me very happy. (But Python is the main focus.) A: You will want to do performance testing at a system level if possible - test your application as a whole, in context, with data and behaviour as close to production use as possible. This is not easy, and it will be even harder to automate it and get consistent results. Moreover, you can't use a VM for performance testing (unless your production environment runs in VMs, and even then, you'd need to run the VM on a host with nothing else on). When you say doing performance unit-testing, that may be valuable, but only if it is being used to diagnose a problem which really exists at a system level (not just in the developer's head). Also, performance of units in unit testing sometimes fails to reflect their performance in-context, so it may not be useful at all. A: While I agree that testing performance at a system level is ultimately more relevant, if you'd like to do UnitTest style load testing for Python, FunkLoad http://funkload.nuxeo.org/ does exactly that. Micro benchmarks have their place when you're trying to speed up a specific action in your codebase. And getting subsequent performance unit tests done is a useful way to ensure that this action that you just optimized does not unintentionally regress in performance upon future commits. A: MarkR is right... doing real-world performance testing is key, and may be somewhat dodgey in unit tests. Having said that, have a look at the cProfile module in the standard library. It will at least be useful for giving you a relative sense from commit-to-commit of how fast things are running, and you can run it within a unit test, though of course you'll get results in the details that include the overhead of the unit test framework itself. In all, though, if your objective is zero-tolerance, you'll need something much more robust than this... cProfile in a unit test won't cut it at all, and may be misleading. A: When I do performance testing, I generally have a test suite of data inputs, and measure how long it takes the program to process each one. You can log the performance on a daily or weekly basis, but I don't find it particularly useful to worry about performance until all the functionality is implemented. If performance is too poor, then I break out cProfile, run it with the same data inputs, and try to see where the bottlenecks are.
How to do "performance-based" (benchmark) unit testing in Python
Let's say that I've got my code base to as high a degree of unit test coverage as makes sense. (Beyond a certain point, increasing coverage doesn't have a good ROI.) Next I want to test performance. To benchmark code to make sure that new commits aren't slowing things down needlessly. I was very intrigued by Safari's zero tolerance policy for slowdowns from commits. I'm not sure that level of commitment to speed has a good ROI for most projects, but I'd at least like to be alerted that a speed regression has happened, and be able to make a judgment call about it. Environment is Python on Linux, and a suggestion that was also workable for BASH scripts would make me very happy. (But Python is the main focus.)
[ "You will want to do performance testing at a system level if possible - test your application as a whole, in context, with data and behaviour as close to production use as possible.\nThis is not easy, and it will be even harder to automate it and get consistent results.\nMoreover, you can't use a VM for performance testing (unless your production environment runs in VMs, and even then, you'd need to run the VM on a host with nothing else on).\nWhen you say doing performance unit-testing, that may be valuable, but only if it is being used to diagnose a problem which really exists at a system level (not just in the developer's head).\nAlso, performance of units in unit testing sometimes fails to reflect their performance in-context, so it may not be useful at all.\n", "While I agree that testing performance at a system level is ultimately more relevant, if you'd like to do UnitTest style load testing for Python, FunkLoad http://funkload.nuxeo.org/ does exactly that.\nMicro benchmarks have their place when you're trying to speed up a specific action in your codebase. And getting subsequent performance unit tests done is a useful way to ensure that this action that you just optimized does not unintentionally regress in performance upon future commits.\n", "MarkR is right... doing real-world performance testing is key, and may be somewhat dodgey in unit tests. Having said that, have a look at the cProfile module in the standard library. It will at least be useful for giving you a relative sense from commit-to-commit of how fast things are running, and you can run it within a unit test, though of course you'll get results in the details that include the overhead of the unit test framework itself.\nIn all, though, if your objective is zero-tolerance, you'll need something much more robust than this... cProfile in a unit test won't cut it at all, and may be misleading.\n", "When I do performance testing, I generally have a test suite of data inputs, and measure how long it takes the program to process each one.\nYou can log the performance on a daily or weekly basis, but I don't find it particularly useful to worry about performance until all the functionality is implemented.\nIf performance is too poor, then I break out cProfile, run it with the same data inputs, and try to see where the bottlenecks are.\n" ]
[ 7, 4, 2, 2 ]
[]
[]
[ "benchmarking", "linux", "python", "unit_testing" ]
stackoverflow_0000671503_benchmarking_linux_python_unit_testing.txt
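One way to turn "alert me on a speed regression" into a test, as a rough sketch; the function under test, the fixture and the two-second budget are all assumptions. Wall-clock timing is noisy, so budgets need generous margins.

    import time
    import unittest

    from myapp import process_file  # hypothetical function under test

    class PerformanceTest(unittest.TestCase):
        def test_process_file_speed(self):
            start = time.time()
            process_file('testdata/large_input.txt')  # hypothetical fixture
            elapsed = time.time() - start
            self.assertTrue(elapsed < 2.0,
                            'speed regression: %.2fs (budget 2.0s)' % elapsed)

    if __name__ == '__main__':
        unittest.main()

Logging elapsed time per commit instead of asserting on it is a softer variant that still exposes trends.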
Q: Access remote computer to launch Python script without disturbing user First, I'll admit this is cross-posted at SuperUser but I decided to also post here as my objective is programming related and this community might have better solution scenarios than just the one I'm thinking of. I have a Windows 7 computer that is acting as a Media Center so it is always on but not always in use. I would like to be able to log in to this machine remotely and launch some Python scripts that take time to compute. I sync the scripts and their output using dropbox. If I use Remote Desktop I can get a view of the desktop and use it but the media center view gets blocked (I get the log in screen). If I use LogMeIn, the media center application closes (not compatible with remote use) and both the remote and media center views are the same. Is there a way to access the computer remotely to launch and monitor the execution of these Python scripts while not disturbing the media center users? A: How about wrapping their execution within Remote WSH? Not sure about the monitoring part, although I suppose you could simply have your scripts create a log on the remote machine, and then you can "tail" it in realtime while the remote script is still running... A: For running remote python I like execnet. It gives you the option of running code on any "ssh"able connection or if you are on a local network a little bootstrap script you can run on the connecting machine (be careful as this is not secure if you intend to connect from elsewhere) I would run openssh and use execnet's ssh connector. You can make channels so the results of the scripts get sent back to you. It's not that difficult to set up, and once you do you have lots of flexibility in future, i.e. distributing your scripts on many more machines. A: Further searching yielded this which exactly addresses using Media Center and remote connections: http://www.missingremote.com/index.php?option=com_content&task=view&id=3692&Itemid=232 A: I use psexec to run remote commands on Windows machines.
Access remote computer to launch Python script without disturbing user
First, I'll admit this is cross-posted at SuperUser but I decided to also post here as my objective is programming related and this community might have better solution scenarios than just the one I'm thinking of. I have a Windows 7 computer that is acting as a Media Center so it is always on but not always in use. I would like to be able to log in to this machine remotely and launch some Python scripts that take time to compute. I sync the scripts and their output using dropbox. If I use Remote Desktop I can get a view of the desktop and use it but the media center view gets blocked (I get the log in screen). If I use LogMeIn, the media center application closes (not compatible with remote use) and both the remote and media center views are the same. Is there a way to access the computer remotely to launch and monitor the execution of these Python scripts while not disturbing the media center users?
[ "How about wrapping their execution within Remote WSH? Not sure about the monitoring part, although I suppose you could simply have your scripts create a log on the remote machine, and then you can \"tail\" it in realtime while the remote script is still running...\n", "For running remote python I like execnet. It gives you the option of running code on any \"ssh\"able connection or if you are on a local network a little bootstrap script you can run on the connecting machine (be careful as this is not secure if you intend to connect from elsewhere)\nI would run openssh and use execnets ssh connector. You can make channels so the results of the scripts get sent back to you. \nIt not that difficult to setup and once you do you have lots of flexibility in future, i.e distributing your scripts on many more machines.\n", "Further searching yielded this which exactly addresses using Media Center and remote connections:\nhttp://www.missingremote.com/index.php?option=com_content&task=view&id=3692&Itemid=232\n", "I use psexec to run remote commands on Windows machines.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "python", "remote_desktop", "windows_7" ]
stackoverflow_0002208944_python_remote_desktop_windows_7.txt
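A sketch of the execnet suggestion from the second answer: open an ssh gateway to the media-center box and run a snippet there, with results returned over a channel. The host, user and remote snippet are placeholders.

    import execnet

    gw = execnet.makegateway('ssh=user@mediacenter')  # hypothetical host
    channel = gw.remote_exec("""
        import platform
        # stand-in for the real long-running script
        channel.send('running on %s' % platform.node())
    """)
    print channel.receive()
    gw.exit()

Because everything runs over ssh, nothing touches the console session the media center is using.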
Q: Using Postgres in a web app: "transaction aborted" errors Recently I moved a web app I'm developing from MySQL to PostgreSQL for performance reasons (I need functionality PostGIS provides). Now I quite often encounter the following error: current transaction is aborted, commands ignored until end of transaction block The server application uses mod_python. The error occurs in the hailing function (i.e. the one that creates a new session for this specific client). Here goes the appropriate piece of code (the exception occurs on the line where sessionAppId is invoked): def hello(req): req.content_type = "text/json" req.headers_out.add('Cache-Control', "no-store, no-cache, must-revalidate") req.headers_out.add('Pragma', "no-cache") req.headers_out.add('Expires', "-1") instance = req.hostname.split(".")[0] cookieSecret = '....' # whatever :-) receivedCookies = Cookie.get_cookies(req, Cookie.SignedCookie, secret = cookieSecret) sessionList = receivedCookies.get('sessions', None) sessionId = str(uuid.uuid4()) if sessionList: if type(sessionList) is not Cookie.SignedCookie: return "{status: 'error', errno:1, errmsg:'Permission denied.'}" else: sessionList = sessionList.value.split(",") for x in sessionList[:]: revisionCookie = receivedCookies.get('rev_' + str(sessionAppId(x, instance)), None) # more processing here.... # ..... cursors[instance].execute("lock revision, app, timeout IN SHARE MODE") cursors[instance].execute("insert into app (type, active, active_revision, contents, z) values ('session', true, %s, %s, 0) returning id", (cRevision, sessionId)) sAppId = cursors[instance].fetchone()[0] cursors[instance].execute("insert into revision (app_id, type) values (%s, 'active')", (sAppId,)) cursors[instance].execute("insert into timeout (app_id, last_seen) values (%s, now())", (sAppId,)) connections[instance].commit() # ..... And here is sessionAppId itself: def sessionAppId(sessionId, instance): cursors[instance].execute("select id from app where type='session' and contents = %s", (sessionId, )) row = cursors[instance].fetchone() if row == None: return 0 else: return row[0] Some clarifications and additional questions: cursors[instance] and connections[instance] are the database connection and the cursor for the web app instance served on this domain name. I.e. the same server serves example1.com and example2.com, and uses these dictionaries to call the appropriate database depending on the server name the request came for. Do I really need to lock the tables in the hello() function? Most of the code in hello() is needed to maintain one separate session per browser tab. I couldn't find a way to do it with cookies only, since browser tabs opening a website share the pool of cookies. Is there a better way to do it? A: That error is caused by a preceding error. Look at this piece of code: >>> import psycopg2 >>> conn = psycopg2.connect('') >>> cur = conn.cursor() >>> cur.execute('select current _date') Traceback (most recent call last): File "<stdin>", line 1, in <module> psycopg2.ProgrammingError: syntax error at or near "_date" LINE 1: select current _date ^ >>> cur.execute('select current_date') Traceback (most recent call last): File "<stdin>", line 1, in <module> psycopg2.InternalError: current transaction is aborted, commands ignored until end of transaction block >>> conn.rollback() >>> cur.execute('select current_date') >>> cur.fetchall() [(datetime.date(2010, 2, 5),)] >>> If you are familiar with twisted, look at twisted.enterprise.adbapi for an example of how to handle cursors. Basically you should always commit or roll back your cursors: try: cur.execute("...") cur.fetchall() cur.close() connection.commit() except: connection.rollback()
Using Postgres in a web app: "transaction aborted" errors
Recently I moved a web app I'm developing from MySQL to PostgreSQL for performance reasons (I need functionality PostGIS provides). Now I quite often encounter the following error: current transaction is aborted, commands ignored until end of transaction block The server application uses mod_python. The error occurs in the hailing function (i.e. the one that creates a new session for this specific client). Here goes the appropriate piece of code (the exception occurs on the line where sessionAppId is invoked): def hello(req): req.content_type = "text/json" req.headers_out.add('Cache-Control', "no-store, no-cache, must-revalidate") req.headers_out.add('Pragma', "no-cache") req.headers_out.add('Expires', "-1") instance = req.hostname.split(".")[0] cookieSecret = '....' # whatever :-) receivedCookies = Cookie.get_cookies(req, Cookie.SignedCookie, secret = cookieSecret) sessionList = receivedCookies.get('sessions', None) sessionId = str(uuid.uuid4()) if sessionList: if type(sessionList) is not Cookie.SignedCookie: return "{status: 'error', errno:1, errmsg:'Permission denied.'}" else: sessionList = sessionList.value.split(",") for x in sessionList[:]: revisionCookie = receivedCookies.get('rev_' + str(sessionAppId(x, instance)), None) # more processing here.... # ..... cursors[instance].execute("lock revision, app, timeout IN SHARE MODE") cursors[instance].execute("insert into app (type, active, active_revision, contents, z) values ('session', true, %s, %s, 0) returning id", (cRevision, sessionId)) sAppId = cursors[instance].fetchone()[0] cursors[instance].execute("insert into revision (app_id, type) values (%s, 'active')", (sAppId,)) cursors[instance].execute("insert into timeout (app_id, last_seen) values (%s, now())", (sAppId,)) connections[instance].commit() # ..... And here is sessionAppId itself: def sessionAppId(sessionId, instance): cursors[instance].execute("select id from app where type='session' and contents = %s", (sessionId, )) row = cursors[instance].fetchone() if row == None: return 0 else: return row[0] Some clarifications and additional questions: cursors[instance] and connections[instance] are the database connection and the cursor for the web app instance served on this domain name. I.e. the same server serves example1.com and example2.com, and uses these dictionaries to call the appropriate database depending on the server name the request came for. Do I really need to lock the tables in the hello() function? Most of the code in hello() is needed to maintain one separate session per browser tab. I couldn't find a way to do it with cookies only, since browser tabs opening a website share the pool of cookies. Is there a better way to do it?
[ "that error is caused because of a precedent error. look at this piece of code:\n>>> import psycopg2\n>>> conn = psycopg2.connect('')\n>>> cur = conn.cursor()\n>>> cur.execute('select current _date')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\npsycopg2.ProgrammingError: syntax error at or near \"_date\"\nLINE 1: select current _date\n ^\n\n>>> cur.execute('select current_date')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\npsycopg2.InternalError: current transaction is aborted, commands ignored until end of transaction block\n\n>>> conn.rollback()\n>>> cur.execute('select current_date')\n>>> cur.fetchall()\n[(datetime.date(2010, 2, 5),)]\n>>> \n\nif you are familiar with twisted, look at twisted.enterprise.adbapi for an example how to handle cursors. basically you should always commit or rollback your cursors:\ntry:\n cur.execute(\"...\")\n cur.fetchall()\n cur.close()\n connection.commit()\nexcept:\n connection.rollback()\n\n" ]
[ 14 ]
[]
[]
[ "concurrency", "postgresql", "python" ]
stackoverflow_0002209169_concurrency_postgresql_python.txt
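A small helper that bakes the commit-or-rollback discipline from the answer into every handler, reusing the question's connections/cursors dictionaries; a sketch only, assuming one connection per instance as in the question.

    def run_in_transaction(instance, work):
        # work(cursor) runs inside a transaction on the given instance;
        # commit on success, roll back on any exception so the
        # connection never gets stuck in the aborted state
        conn = connections[instance]
        cur = cursors[instance]
        try:
            result = work(cur)
            conn.commit()
            return result
        except:
            conn.rollback()
            raise

hello() would then wrap its SQL in a function and pass it to run_in_transaction(instance, ...).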
Q: How do I properly setup my python paths and permissions for Django+mod_wsgi deployment? The issue I'm having is my wsgi file can't import the wsgi handlers properly. /var/log/apache2/error.log reports: ImportError: No module named django.core.handlers.wsgi Googling this brings up a couple results, mostly dealing with permissions errors because www-data can't read certain files and/or the pythonpath is not correct. Some of the solutions are vague or just don't work in my circumstance. Background information.. My /usr/lib directory.. /usr/lib/python2.4 /usr/lib/python2.5 /usr/lib/python2.6 /usr/lib/python-django The default python version is 2.5.2. If I open the interpreter as a regular user I can import django.core.handlers.wsgi with no issues. If I switch to www-data the python version is the same, and I can import the django.core.handlers.wsgi module no problem. In my bashrc, I set my PYTHONPATH to my home directory which contains all my django sites... export PYTHONPATH=/home/meder/django-sites/:$PYTHONPATH So the directory structure is: django-sites/ test test is the directory created by django-admin createproject. My virtualhost: <VirtualHost *:80> ServerName beta.blah.com WSGIScriptAlias / /home/meder/django-sites/test/apache/django.wsgi Alias /media /home/meder/django-sites/test/media/ </VirtualHost> The /home/meder/django-sites/test/apache/django.wsgi file itself: import os, sys sys.path.append('/usr/local/django') sys.path.append('/home/meder/django-sites') sys.path.append('/home/meder/django-sites/test') os.environ['DJANGO_SETTINGS_MODULE'] = 'test.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() Finally, my OS is Debian Lenny and I grabbed django 1.1.1 from backports. Hope that's enough information. Update #1 - per the first reply here's the result of ldd /usr/lib/apache2/modules/mod_wsgi.so: meder@site:/usr/lib/apache2/modules$ ldd mod_wsgi.so libpython2.5.so.1.0 => /usr/lib/libpython2.5.so.1.0 (0xb7d99000) libpthread.so.0 => /lib/libpthread.so.0 (0xb7d81000) libdl.so.2 => /lib/libdl.so.2 (0xb7d7c000) libutil.so.1 => /lib/libutil.so.1 (0xb7d78000) libm.so.6 => /lib/libm.so.6 (0xb7d52000) libc.so.6 => /lib/libc.so.6 (0xb7c14000) /lib/ld-linux.so.2 (0xb7efd000) So it is compiled against python 2.5 and not 2.4. A: Since I'm on Debian it appears that django is in /usr/lib/pymodules/python2.5 and not /usr/lib/python2.5/site-packages. I added sys.path.append('/usr/lib/pymodules/python2.5') to the top of my wsgi file and that did it, although I feel as though I should be fixing this in a more proper manner. A: I don't think your problem lies with the sys.path. I've always used Mod_WSGI with Django using the Daemonized process, like so, # Note these 2 lines WSGIDaemonProcess site-1 user=user-1 group=user-1 threads=25 WSGIProcessGroup site-1 Alias /media/ /usr/local/django/mysite/media/ <Directory /usr/local/django/mysite/media> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi <Directory /usr/local/django/mysite/apache> Order deny,allow Allow from all </Directory> If you note the first 2 lines - you can specify the group and the user which will be running this. In your case, you mention that www-data can import the django module but it doesn't work when Apache deploys it - perhaps the process is being run by nobody, or some other user/group that does not have privileges to import this module. Adding the DaemonProcess and Group lines should solve your problem. HTH. [1] For reference - here's the Django Mod_WSGI doc - http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango A: Sounds like your mod_wsgi is not compiled against Python 2.5 and instead compiled against Python 2.4 or 2.6. Run: ldd mod_wsgi.so on the mod_wsgi.so file where it is installed to work out what it is using. If it is different, you will need to recompile mod_wsgi from source code such that it uses the version you want to use.
How do I properly setup my python paths and permissions for Django+mod_wsgi deployment?
The issue I'm having is my wsgi file can't import the wsgi handlers properly. /var/log/apache2/error.log reports: ImportError: No module named django.core.handlers.wsgi Googling this brings up a couple results, mostly dealing with permissions errors because www-data can't read certain files and/or the pythonpath is not correct. Some of the solutions are vague or just don't work in my circumstance. Background information.. My /usr/lib directory.. /usr/lib/python2.4 /usr/lib/python2.5 /usr/lib/python2.6 /usr/lib/python-django The default python version is 2.5.2. If I open the interpreter as a regular user I can import django.core.handlers.wsgi with no issues. If I switch to www-data the python version is the same, and I can import the django.core.handlers.wsgi module no problem. In my bashrc, I set my PYTHONPATH to my home directory which contains all my django sites... export PYTHONPATH=/home/meder/django-sites/:$PYTHONPATH So the directory structure is: django-sites/ test test is the directory created by django-admin createproject. My virtualhost: <VirtualHost *:80> ServerName beta.blah.com WSGIScriptAlias / /home/meder/django-sites/test/apache/django.wsgi Alias /media /home/meder/django-sites/test/media/ </VirtualHost> The /home/meder/django-sites/test/apache/django.wsgi file itself: import os, sys sys.path.append('/usr/local/django') sys.path.append('/home/meder/django-sites') sys.path.append('/home/meder/django-sites/test') os.environ['DJANGO_SETTINGS_MODULE'] = 'test.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() Finally, my OS is Debian Lenny and I grabbed django 1.1.1 from backports. Hope that's enough information. Update #1 - per the first reply here's the result of ldd /usr/lib/apache2/modules/mod_wsgi.so: meder@site:/usr/lib/apache2/modules$ ldd mod_wsgi.so libpython2.5.so.1.0 => /usr/lib/libpython2.5.so.1.0 (0xb7d99000) libpthread.so.0 => /lib/libpthread.so.0 (0xb7d81000) libdl.so.2 => /lib/libdl.so.2 (0xb7d7c000) libutil.so.1 => /lib/libutil.so.1 (0xb7d78000) libm.so.6 => /lib/libm.so.6 (0xb7d52000) libc.so.6 => /lib/libc.so.6 (0xb7c14000) /lib/ld-linux.so.2 (0xb7efd000) So it is compiled against python 2.5 and not 2.4.
[ "Since I'm on Debian it appears that django is in /usr/lib/pymodules/python2.5 and not /usr/lib/python2.5/site-packages.\nI added\nsys.path.append('/usr/lib/pymodules/python2.5') \n\nto the top of my wsgi file and that did it, although I feel as though I should be fixing this in a more proper manner.\n", "I don't think your problem lies with the sys.path. I've always used Mod_WSGI with Django using the Daemonized process, like so,\n# Note these 2 lines\nWSGIDaemonProcess site-1 user=user-1 group=user-1 threads=25\nWSGIProcessGroup site-1\n\nAlias /media/ /usr/local/django/mysite/media/\n\n<Directory /usr/local/django/mysite/media>\nOrder deny,allow\nAllow from all\n</Directory>\n\nWSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi\n\n<Directory /usr/local/django/mysite/apache>\nOrder deny,allow\nAllow from all\n\n\nIf you note the first 2 lines - you can specify the group and the user which will be running this. In your case, you mention that www-data can import the django module but it doesn't work when Apache deploys it - perhaps the process is being run by nobody, or some other user/group that does not have privileges to import this module. Adding the DaemonProcess and Group lines should solve your problem.\nHTH.\n[1] For reference - here's the Django Mod_WSGI doc - http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango\n", "Sounds like you mod_wsgi is not compiled against Python 2.5 and instead compiled against Python 2.4 or 2.6. Run:\nldd mod_wsgi.so\n\non the mod_wsgi.so file where it is installed to work out what it is using.\nIf it is different, you will need to recompile mod_wsgi from source code such that it uses the version you want to use.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "django", "mod_wsgi", "python" ]
stackoverflow_0002205105_django_mod_wsgi_python.txt
Q: What is cross browser support for JavaScript 1.7's new features? Specifically array comprehensions and the "let" statement https://developer.mozilla.org/en/New_in_JavaScript_1.7 A lot of these new features are borrowed from Python, and would allow the creation of less verbose apps, which is always a good thing. How many times have you typed for (i = 0; i < arr.length; i++) { /* ... */ } for really simple operations? Wouldn't this be easier: [/* ... */ for each (i in arr)] I think brevity is a great thing. Basically, it all comes down to IE in the end, though. Does IE support these new features? What about other browsers? A: While this question is a bit old, and is marked "answered" - I found it on Google and the answers given are possibly inaccurate, or if not, definitely incomplete. It's very important to note that Javascript is NOT A STANDARD. Ken correctly mentioned that ECMAScript is the cross-browser standard that all browsers aim to comply with, but what he didn't clarify is that Javascript is NOT ECMAScript. To say Javascript "implements" ECMAScript means that Javascript includes ECMAScript, plus its own proprietary extra non-cross-browser features. The for each example given by nicholas is an example of a proprietary feature added by Mozilla that is not in any standard, and therefore unlikely to be adopted by any other browsers. Javascript 1.7 and 1.8 features are useful for extension development in XUL, but should never be used for cross-browser development - that's what standards are for. A: No, when they say "JavaScript", they mean it literally: the ECMAScript engine used by Gecko. JScript and other engines (AFAIK) don't support these features. EDIT: According to wikipedia, JavaScript 1.7 implements ECMAScript "Edition 3 plus all JavaScript 1.6 enhancements, plus Pythonic generators and array comprehensions ([a*a for (a in iter)]), block scope with let, destructuring assignment (var [a,b]=[1,2])". So these features are not part of ECMAScript. A: In addition to IE not supporting it, it seems like the webkit based browsers (Safari, Chrome), despite claiming to have JS 1.7 support (actually executing script tags declared as being in JS 1.7), do not actually support any of these features which means that for now, JS 1.7 with its very nice features is limited to Gecko browsers alone. And because Webkit still executes scripts tagged as 1.7 only, this also means that we can't even fail gracefully but we'll just produce syntax errors on these browsers when we are using any of the new keywords or syntax.
What is cross browser support for JavaScript 1.7's new features? Specifically array comprehensions and the "let" statement
https://developer.mozilla.org/en/New_in_JavaScript_1.7 A lot of these new features are borrowed from Python, and would allow the creation of less verbose apps, which is always a good thing. How many times have you typed for (i = 0; i < arr.length; i++) { /* ... */ } for really simple operations? Wouldn't this be easier: [/* ... */ for each (i in arr)] I think brevity is a great thing. Basically, it all comes down to IE in the end, though. Does IE support these new features? What about other browsers?
[ "While this question is a bit old, and is marked \"answered\" - I found it on Google and the answers given are possibly inaccurate, or if not, definitely incomplete.\nIt's very important to note that Javascript is NOT A STANDARD. Ken correctly mentioned that ECMAScript is the cross-browser standard that all browsers aim to comply with, but what he didn't clarify is that Javascript is NOT ECMAScript.\nTo say Javascript \"implements\" ECMAScript means that Javascript includes ECMAScript, plus it's own proprietary extra non-cross-browser features. The for each example given by nicholas is an example of a proprietary feature added by Mozilla that is not in any standard, and therefore unlikely to be adopted by any other browsers.\nJavascript 1.7 and 1.8 features are useful for extension development in XUL, but should never be used for cross-browser development - that's what standards are for.\n", "No, when they say \"JavaScript\", they mean it literally: the ECMAScript engine used by Gecko. JScript and other engines (AFAIK) don't support these features.\nEDIT: According to wikipedia, JavaScript 1.7 implements ECMAScript \"Edition 3 plus all JavaScript 1.6 enhancements, plus Pythonic generators and array comprehensions ([a*a for (a in iter)]), block scope with let, destructuring assignment (var [a,b]=[1,2])\". So these features are not part of ECMAScript.\n", "In addition to IE not supporting it, it seems like the webkit based browsers (Safari, Chrome), despite claiming to have JS 1.7 support (actually executing script tags declared as being in JS 1.7), do not actually support any of these features which means that for now, JS 1.7 with its very nice features is limited to Geko browsers alone.\nAnd because Webkit still executes scripts tagged as 1.7 only, this also means that we can't even fail gracefully but we'll just produce syntax errors on these browsers when we are using any of the new keywords or syntax.\n" ]
[ 33, 8, 1 ]
[]
[]
[ "arrays", "cross_browser", "internet_explorer", "javascript", "python" ]
stackoverflow_0001330498_arrays_cross_browser_internet_explorer_javascript_python.txt
Q: InternalError: current transaction is aborted, commands ignored until end of transaction block I'm getting this error when doing database calls in a subprocess using the multiprocessing library. Visit : Pastie InternalError: current transaction is aborted, commands ignored until end of transaction block This is to a Postgres database, using the psycopg2 driver in web.py. However if I use threading.Thread instead of multiprocessing.Process I don't get this error. Any idea how to fix this? A: multiprocessing works (on UNIX systems) by forking the current process. If you have an existing database connection, this will leave the two processes (the current one and the new one) with the same database connection. Trying to use it from both is bad. Create a new database connection in the child process instead.
InternalError: current transaction is aborted, commands ignored until end of transaction block
I'm getting this error when doing database calls in a subprocess using the multiprocessing library. Visit : Pastie InternalError: current transaction is aborted, commands ignored until end of transaction block This is to a Postgres database, using the psycopg2 driver in web.py. However if I use threading.Thread instead of multiprocessing.Process I don't get this error. Any idea how to fix this?
[ "multiprocessing works (on UNIX systems) by forking the current process. If you have an existing database connection, this will leave the two processes (the current one and the new one) with the same database connection. Trying to use it from both is bad. Create a new database connection in the child process instead.\n" ]
[ 9 ]
[]
[]
[ "multiprocessing", "postgresql", "psycopg2", "python", "web.py" ]
stackoverflow_0002209560_multiprocessing_postgresql_psycopg2_python_web.py.txt
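A sketch of the fix the answer describes: the child opens its own psycopg2 connection after the fork instead of reusing the parent's. The DSN and query are placeholders.

    import multiprocessing
    import psycopg2

    def worker(value):
        # fresh connection, created inside the child process
        conn = psycopg2.connect('dbname=mydb user=me')
        cur = conn.cursor()
        cur.execute('select %s', (value,))
        print cur.fetchone()
        conn.close()

    if __name__ == '__main__':
        p = multiprocessing.Process(target=worker, args=(1,))
        p.start()
        p.join()

The parent keeps its own connection untouched, so neither process ever sees the other's transaction state.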
Q: SQLAlchemy ORM Inserting Related Objects without Selecting Them The SQLAlchemy ORM tutorial describes the process of creating object relations roughly as follows. Let's pretend I have a table Articles, a table Keywords, and a table Articles_Keywords which creates a many-many relationship. article = meta.Session.query(Article).filter(id=1).one() keyword1 = meta.Session.query(Keyword).filter(id=1).one() keyword2 = meta.Session.query(Keyword).filter(id=2).one() article.keywords = [keyword1,keyword2] meta.Session.commit() I already have the primary key ID numbers for the keywords in question, so all I need to do is add those IDs to the Articles_Keywords table, linked to this article. The problem is that in order to do that with the ORM, I have to select all of the Keywords from the database, which adds a lot of overhead for seemingly no reason. Is there a way to create this relationship without running any SQL to select the keywords? A: Unfortunately the ORM cannot keep the object state sane without querying the database. However you can easily go around the ORM and insert into the association table. Assuming that Articles_Keywords is a Table object: meta.Session.execute(Articles_Keywords.delete(Articles_Keywords.c.article_id == 1)) meta.Session.execute(Articles_Keywords.insert(), [ {'article_id': 1, 'keyword_id': 1}, {'article_id': 1, 'keyword_id': 2}, ]) If you go around the ORM then already loaded collections aren't updated to reflect the changes in the database. If you need to refresh the collection call Session.expire on it. This will cause the collection to be reloaded next time it is accessed. meta.Session.expire(article, ['keywords']) A: Ants has the right answer, but also if you mapped article_keywords explicitly as an association object, you would just create new ArticleKeyword objects and add them straight in.
SQLAlchemy ORM Inserting Related Objects without Selecting Them
The SQLAlchemy ORM tutorial describes the process of creating object relations roughly as follows. Let's pretend I have a table Articles, a table Keywords, and a table Articles_Keywords which creates a many-many relationship. article = meta.Session.query(Article).filter(id=1).one() keyword1 = meta.Session.query(Keyword).filter(id=1).one() keyword2 = meta.Session.query(Keyword).filter(id=2).one() article.keywords = [keyword1,keyword2] meta.Session.commit() I already have the primary key ID numbers for the keywords in question, so all I need to do is add those IDs to the Articles_Keywords table, linked to this article. The problem is that in order to do that with the ORM, I have to select all of the Keywords from the database, which adds a lot of overhead for seemingly no reason. Is there a way to create this relationship without running any SQL to select the keywords?
[ "Unfortunately the ORM cannot keep the object state sane without querying the database. However you can easily go around the ORM and insert into the association table. Assuming that Articles_Keywords is a Table object:\n meta.Session.execute(Articles_Keywords.delete(Articles_Keywords.c.article_id == 1))\n meta.Session.execute(Articles_Keywords.insert(), [\n {'article_id': 1, 'keyword_id': 1},\n {'article_id': 1, 'keyword_id': 2},\n ])\n\nIf you go around the ORM then already loaded collections aren't updated to reflect the changes in the database. If you need to refresh the collection call Session.expire on it. This will cause the collection to be reloaded next time it is accessed.\nmeta.Session.expire(article, ['keywords'])\n\n", "Ants has the right answer, but also if you mapped article_keywords explicitly as an association object, you would just create new ArticleKeyword objects and add them straight in. \n" ]
[ 3, 1 ]
[]
[]
[ "orm", "performance", "python", "sqlalchemy" ]
stackoverflow_0002180522_orm_performance_python_sqlalchemy.txt
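A sketch of the association-object route the second answer mentions: map the join table to its own class so rows can be added directly from known IDs, with no SELECT on Keyword. The table variable and column names follow the question's description but are assumptions.

    from sqlalchemy.orm import mapper

    class ArticleKeyword(object):
        def __init__(self, article_id, keyword_id):
            self.article_id = article_id
            self.keyword_id = keyword_id

    # articles_keywords_table is the Table object for the join table
    mapper(ArticleKeyword, articles_keywords_table)

    meta.Session.add_all([ArticleKeyword(1, 1), ArticleKeyword(1, 2)])
    meta.Session.commit()

If Article.keywords is also mapped through the same table, expire that collection afterwards, as the first answer notes.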
Q: PIP install a Python Package without a setup.py file? I'm trying to figure out how I can install a python package that doesn't have a setup.py file with pip. (package in question is http://code.google.com/p/django-google-analytics/) Normally I would just check out the code from the repo and symlink into my site-packages, but I'm trying to get my whole environment frozen into a pip requirements file for easy deployment and testing. Any ideas? A: Fork the repo and add a working setup.py. Then send a pull request to the author. Oh, it's on Google Code. Well then, file a bug and post a patch. If the author refuses to make their code into an installable Python distribution (never happened to me), just host your fork somewhere and put that in your requirements file. A: You can't. PIP installs Python packages. That's not a Python package. I've heard that the Django community in general doesn't make many packages, which makes things like what you are trying to do tricky. But that could be wrong. If you want to freeze your environment you might want to look into Buildout. Another option in this case is to use an svn:external.
PIP install a Python Package without a setup.py file?
I'm trying to figure out how I can install a python package that doesn't have a setup.py file with pip. (package in question is http://code.google.com/p/django-google-analytics/) Normally I would just check out the code from the repo and symlink into my site-packages, but I'm trying to get my whole environment frozen into a pip requirements file for easy deployment and testing. Any ideas?
[ "Fork the repo and add a working setup.py. Then send a pull request to the author.\nOh, it's on Google Code. Well then, file a bug and post a patch.\nIf the author refuses to make their code into an installable Python distribution (never happened to me), just host your fork somewhere and put that in your requirements file.\n", "You can't. PIP installs Python packages. That's not a Python package. I've heard that the Django community in general doesn't make much packages, which makes things like what you are trying to do tricky. But that could be wrong.\nIf you want to freeze your environment you might want to look into Buildout. Other options in this case is to use an svn:external. \n" ]
[ 16, 7 ]
[]
[]
[ "easy_install", "pip", "python", "setuptools" ]
stackoverflow_0002204811_easy_install_pip_python_setuptools.txt
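For reference, the kind of minimal setup.py the first answer has in mind, enough for pip to install the fork; the name, version and package directory are guesses about django-google-analytics' layout.

    # setup.py, placed at the root of the forked checkout
    from distutils.core import setup

    setup(
        name='django-google-analytics',
        version='0.1',
        packages=['google_analytics'],  # hypothetical package directory
    )

With that committed, a requirements file line can point at the hosted fork (for example an -e VCS URL) and pip can build it.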
Q: How do I do this with Python list? (itemgetter?) [{'id':44}, {'name':'alexa'},{'color':'blue'}] I want to select whatever in the list that is "id". Basically, I want to print 44, since that's "id" in the list. A: That's a weird data structure... A list of one item dictionaries. key = 'id' l = [{'id':44}, {'name':'alexa'},{'color':'blue'}] print [ x[key] for x in l if key in x ][0] Assuming you can rely on key being present precisely once... Maybe you should just convert the list into a dictionary first: key = 'id' l = [{'id':44}, {'name':'alexa'},{'color':'blue'}] d = {} for x in l: d.update(x) print d[key] A: All the other answers solve your problem, I am just suggesting an alternative way of going about doing this. Instead of having a list of dicts where you query on the key and have to iterate over all list items to get values, just use a dict of lists. Each key would map to a list of values (or just one value if all your dicts had distinct sets of keys). So, data=[{'id':44}, {'name':'alexa'},{'color':'blue'}] becomes data={'id':[44], 'name':['alexa'], 'color':['blue']} and you can neatly access the value for 'id' using data['id'] (or data['id'][0] if you only need one value). If all your keys are distinct across the dicts (as in your example) you don't even have to have lists of values. data={'id':44, 'name':'alexa', 'color':'blue'} Not only does this make your code cleaner, it also speeds up your queries which no longer have to iterate over a list. A: You could do something like this: >>> KEY = 'id' >>> >>> my_list = [{'id':44}, {'name':'alexa'},{'color':'blue'}] >>> my_ids = [x[KEY] for x in my_list if KEY in x] >>> print my_ids [44] Which is obviously a list of the values you want. You can then print them as required. A: Probably this is the best solution: >>> L = [{'id':44}, {'name':'alexa'},{'color':'blue'}] >>> newd = {} >>> for d in L: ... newd.update(d) >>> newd['id'] 44 A: >>> from itertools import dropwhile >>> def find_value(l, key): ... return dropwhile(lambda x: key not in x, l).next()[key] >>> find_value([{'id':44}, {'name':'alexa'},{'color':'blue'}], "id") This will do a linear search, but only until the element is found. If you want to have proper error handling, use: def find_value(l, key): try: return dropwhile(lambda x: key not in x, l).next()[key] except StopIteration: raise ValueError(key) A: >>> L = [{'id':44}, {'name':'alexa'},{'color':'blue'}] >>> newd=dict(d.items()[0] for d in L) >>> newd['id'] 44
How do I do this with Python list? (itemgetter?)
[{'id':44}, {'name':'alexa'},{'color':'blue'}] I want to select whichever item in the list has the key "id". Basically, I want to print 44, since that's the value of "id" in the list.
[ "That's a weird data structure... A list of one item dictionaries.\nkey = 'id'\nl = [{'id':44}, {'name':'alexa'},{'color':'blue'}]\n\nprint [ x[key] for x in l if key in x ][0]\n\nAssuming you can rely on key being present precisely once...\nMaybe you should just convert the list into a dictionary first:\nkey = 'id'\nl = [{'id':44}, {'name':'alexa'},{'color':'blue'}]\n\nd = {}\nfor x in l:\n d.update(x)\nprint d[key]\n\n", "All the other answers solve your problem, I am just suggesting an alternative way of going about doing this.\nInstead of having a list of dicts where you query on the key and have to iterate over all list items to get values, just use a dict of lists. Each key would map to a list of values (or just one value if all your dicts had distinct sets of keys).\nSo,\ndata=[{'id':44}, {'name':'alexa'},{'color':'blue'}]\n\nbecomes\ndata={'id':[44], 'name':['alexa'], 'color':['blue']}\n\nand you can neatly access the value for 'id' using data['id'] (or data['id'][0] if you only need one value).\nIf all your keys are distinct across the dicts (as in your example) you don't even have to have lists of values.\ndata={'id':44, 'name':'alexa', 'color':'blue'}\n\nNot only does this make your code cleaner, it also speeds up your queries which no longer have to iterate over a list. \n", "You could do something like this:\n>>> KEY = 'id'\n>>>\n>>> my_list = [{'id':44}, {'name':'alexa'},{'color':'blue'}]\n>>> my_ids = [x[KEY] for x in my_list if KEY in x]\n>>> print my_ids\n[44]\n\nWhich is obviously a list of the values you want. You can then print them as required.\n", "Probably this is the best solution:\n>>> L = [{'id':44}, {'name':'alexa'},{'color':'blue'}]\n\n>>> newd = {}\n>>> for d in L:\n... newd.update(d)\n>>> newd['id']\n44\n\n", " >>> from itertools import dropwhile\n >>> def find_value(l, key):\n ... return dropwhile(lambda x: key not in x, l).next()[key]\n >>> find_value([{'id':44}, {'name':'alexa'},{'color':'blue'}], \"id\")\n\nThis will do a linear search, but only until the element is found.\nIf you want to have proper error handling, use:\ndef find_value(l, key):\n try:\n return dropwhile(lambda x: key not in x, l).next()[key]\n except StopIteration:\n raise ValueError(key)\n\n", ">>> L = [{'id':44}, {'name':'alexa'},{'color':'blue'}]\n>>> newd=dict(d.items()[0] for d in L)\n>>> newd['id']\n44\n\n" ]
[ 5, 4, 3, 3, 2, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0002206543_dictionary_list_python.txt
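Since the title of the thread above asks about itemgetter, it is worth noting why none of the answers use it: operator.itemgetter('id')(d) is just d['id'], and it raises KeyError on the dicts that lack the key. A plain loop that skips those dicts is a closer fit. A minimal Python 2 sketch, matching the answers above:

    # Return the value for `key` from the first one-item dict that has it.
    def find_value(items, key):
        for d in items:
            if key in d:
                return d[key]
        raise KeyError(key)

    L = [{'id': 44}, {'name': 'alexa'}, {'color': 'blue'}]
    print find_value(L, 'id')   # prints 44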
Q: How to use C++ operators within python using boost::python (pyopencv) I'm using the pyopencv bindings. This python lib uses boost::python to connect to OpenCV. Now I'm trying to use the SURF class but don't know how to handle the class operator in my python code. The C++ class is defined as: void SURF::operator()(const Mat& img, const Mat& mask, vector<KeyPoint>& keypoints) const {...} How can I pass my arguments to that class? Update: Thanks to interjay I can call the method but now I'm getting type errors. What is the Python equivalent of a boost::python::tuple? import pyopencv as cv img = cv.imread('myImage.jpg') surf = cv.SURF(); key = [] mask = cv.Mat() print surf(img, mask, key, False) Gives me that: Traceback (most recent call last): File "client.py", line 18, in <module> print surf(img, mask, key, False) Boost.Python.ArgumentError: Python argument types in SURF.__call__(SURF, Mat, Mat, list, bool) did not match C++ signature: __call__(cv::SURF inst, cv::Mat img, cv::Mat mask, boost::python::tuple keypoints, bool useProvidedKeypoints=False) __call__(cv::SURF inst, cv::Mat img, cv::Mat mask)
How to use C++ operators within python using boost::python (pyopencv)
I'm using the pyopencv bindings. This python lib uses boost::python to connect to OpenCV. Now I'm trying to use the SURF class but don't know how to handle the class operator in my python code. The C++ class is defined as: void SURF::operator()(const Mat& img, const Mat& mask, vector<KeyPoint>& keypoints) const {...} How can I pass my arguments to that class? Update: Thanks to interjay I can call the method but now I'm getting type errors. What is the Python equivalent of a boost::python::tuple? import pyopencv as cv img = cv.imread('myImage.jpg') surf = cv.SURF(); key = [] mask = cv.Mat() print surf(img, mask, key, False) Gives me that: Traceback (most recent call last): File "client.py", line 18, in <module> print surf(img, mask, key, False) Boost.Python.ArgumentError: Python argument types in SURF.__call__(SURF, Mat, Mat, list, bool) did not match C++ signature: __call__(cv::SURF inst, cv::Mat img, cv::Mat mask, boost::python::tuple keypoints, bool useProvidedKeypoints=False) __call__(cv::SURF inst, cv::Mat img, cv::Mat mask)
[ "You just call it as if it was a function. If surf_inst is an instance of the SURF class, you would call:\nnewKeyPoints = surf_inst(img, mask, keypoints)\n\nThe argument keypoints is expected to be a tuple, and img and mask should be an instance of the Mat class. The C++ function modifies its keypoints parameter. The Python version instead returns the modified keypoints.\nC++'s operator() is analogous to Python's __call__: It makes an object callable using the same syntax as a function call.\nEdit: For your second question: As you can see in the error, keypoints is supposed to be a tuple and you gave it a list. Try making it a tuple instead.\n" ]
[ 1 ]
[]
[]
[ "boost_python", "c++", "operator_overloading", "operators", "python" ]
stackoverflow_0002209889_boost_python_c++_operator_overloading_operators_python.txt
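Applying the answer above to the asker's own script gives the sketch below: the only change is swapping the list for a tuple, exactly as the ArgumentError demands. This is unverified against pyopencv, and whether the call returns new keypoints or mutates a structure may depend on the binding version.

    # Requires pyopencv and a readable 'myImage.jpg'.
    import pyopencv as cv

    img = cv.imread('myImage.jpg')
    surf = cv.SURF()
    key = ()          # a tuple now, not a list -- matches the C++ signature
    mask = cv.Mat()
    print surf(img, mask, key, False)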
Q: Python != operation vs "is not" In a comment on this question, I saw a statement that recommended using result is not None vs result != None I was wondering what the difference is, and why one might be recommended over the other? A: == is an equality test. It checks whether the right hand side and the left hand side are equal objects (according to their __eq__ or __cmp__ methods.) is is an identity test. It checks whether the right hand side and the left hand side are the very same object. No methodcalls are done, objects can't influence the is operation. You use is (and is not) for singletons, like None, where you don't care about objects that might want to pretend to be None or where you want to protect against objects breaking when being compared against None. A: First, let me go over a few terms. If you just want your question answered, scroll down to "Answering your question". Definitions Object identity: When you create an object, you can assign it to a variable. You can then also assign it to another variable. And another. >>> button = Button() >>> cancel = button >>> close = button >>> dismiss = button >>> print(cancel is close) True In this case, cancel, close, and dismiss all refer to the same object in memory. You only created one Button object, and all three variables refer to this one object. We say that cancel, close, and dismiss all refer to identical objects; that is, they refer to one single object. Object equality: When you compare two objects, you usually don't care that it refers to the exact same object in memory. With object equality, you can define your own rules for how two objects compare. When you write if a == b:, you are essentially saying if a.__eq__(b):. This lets you define a __eq__ method on a so that you can use your own comparison logic. Rationale for equality comparisons Rationale: Two objects have the exact same data, but are not identical. (They are not the same object in memory.) Example: Strings >>> greeting = "It's a beautiful day in the neighbourhood." >>> a = unicode(greeting) >>> b = unicode(greeting) >>> a is b False >>> a == b True Note: I use unicode strings here because Python is smart enough to reuse regular strings without creating new ones in memory. Here, I have two unicode strings, a and b. They have the exact same content, but they are not the same object in memory. However, when we compare them, we want them to compare equal. What's happening here is that the unicode object has implemented the __eq__ method. class unicode(object): # ... def __eq__(self, other): if len(self) != len(other): return False for i, j in zip(self, other): if i != j: return False return True Note: __eq__ on unicode is definitely implemented more efficiently than this. Rationale: Two objects have different data, but are considered the same object if some key data is the same. Example: Most types of model data >>> import datetime >>> a = Monitor() >>> a.make = "Dell" >>> a.model = "E770s" >>> a.owner = "Bob Jones" >>> a.warranty_expiration = datetime.date(2030, 12, 31) >>> b = Monitor() >>> b.make = "Dell" >>> b.model = "E770s" >>> b.owner = "Sam Johnson" >>> b.warranty_expiration = datetime.date(2005, 8, 22) >>> a is b False >>> a == b True Here, I have two Dell monitors, a and b. They have the same make and model. However, they neither have the same data nor are the same object in memory. However, when we compare them, we want them to compare equal. What's happening here is that the Monitor object implemented the __eq__ method. class Monitor(object): # ... 
def __eq__(self, other): return self.make == other.make and self.model == other.model Answering your question When comparing to None, always use is not. None is a singleton in Python - there is only ever one instance of it in memory. By comparing identity, this can be performed very quickly. Python checks whether the object you're referring to has the same memory address as the global None object - a very, very fast comparison of two numbers. By comparing equality, Python has to look up whether your object has an __eq__ method. If it does not, it examines each superclass looking for an __eq__ method. If it finds one, Python calls it. This is especially bad if the __eq__ method is slow and doesn't immediately return when it notices that the other object is None. Did you not implement __eq__? Then Python will probably find the __eq__ method on object and use that instead - which just checks for object identity anyway. When comparing most other things in Python, you will be using !=. A: Consider the following: class Bad(object): def __eq__(self, other): return True c = Bad() c is None # False, equivalent to id(c) == id(None) c == None # True, equivalent to c.__eq__(None) A: None is a singleton, therefore identity comparison will always work, whereas an object can fake the equality comparison via .__eq__(). A: >>> () is () True >>> 1 is 1 True >>> (1,) == (1,) True >>> (1,) is (1,) False >>> a = (1,) >>> b = a >>> a is b True Some objects are singletons, and thus is with them is equivalent to ==. Most are not.
Python != operation vs "is not"
In a comment on this question, I saw a statement that recommended using result is not None vs result != None I was wondering what the difference is, and why one might be recommended over the other?
[ "== is an equality test. It checks whether the right hand side and the left hand side are equal objects (according to their __eq__ or __cmp__ methods.)\nis is an identity test. It checks whether the right hand side and the left hand side are the very same object. No methodcalls are done, objects can't influence the is operation.\nYou use is (and is not) for singletons, like None, where you don't care about objects that might want to pretend to be None or where you want to protect against objects breaking when being compared against None.\n", "First, let me go over a few terms. If you just want your question answered, scroll down to \"Answering your question\".\nDefinitions\nObject identity: When you create an object, you can assign it to a variable. You can then also assign it to another variable. And another.\n>>> button = Button()\n>>> cancel = button\n>>> close = button\n>>> dismiss = button\n>>> print(cancel is close)\nTrue\n\nIn this case, cancel, close, and dismiss all refer to the same object in memory. You only created one Button object, and all three variables refer to this one object. We say that cancel, close, and dismiss all refer to identical objects; that is, they refer to one single object.\nObject equality: When you compare two objects, you usually don't care that it refers to the exact same object in memory. With object equality, you can define your own rules for how two objects compare. When you write if a == b:, you are essentially saying if a.__eq__(b):. This lets you define a __eq__ method on a so that you can use your own comparison logic.\nRationale for equality comparisons\nRationale: Two objects have the exact same data, but are not identical. (They are not the same object in memory.)\nExample: Strings\n>>> greeting = \"It's a beautiful day in the neighbourhood.\"\n>>> a = unicode(greeting)\n>>> b = unicode(greeting)\n>>> a is b\nFalse\n>>> a == b\nTrue\n\nNote: I use unicode strings here because Python is smart enough to reuse regular strings without creating new ones in memory.\nHere, I have two unicode strings, a and b. They have the exact same content, but they are not the same object in memory. However, when we compare them, we want them to compare equal. What's happening here is that the unicode object has implemented the __eq__ method.\nclass unicode(object):\n # ...\n\n def __eq__(self, other):\n if len(self) != len(other):\n return False\n\n for i, j in zip(self, other):\n if i != j:\n return False\n\n return True\n\nNote: __eq__ on unicode is definitely implemented more efficiently than this.\nRationale: Two objects have different data, but are considered the same object if some key data is the same.\nExample: Most types of model data\n>>> import datetime\n>>> a = Monitor()\n>>> a.make = \"Dell\"\n>>> a.model = \"E770s\"\n>>> a.owner = \"Bob Jones\"\n>>> a.warranty_expiration = datetime.date(2030, 12, 31)\n>>> b = Monitor()\n>>> b.make = \"Dell\"\n>>> b.model = \"E770s\"\n>>> b.owner = \"Sam Johnson\"\n>>> b.warranty_expiration = datetime.date(2005, 8, 22)\n>>> a is b\nFalse\n>>> a == b\nTrue\n\nHere, I have two Dell monitors, a and b. They have the same make and model. However, they neither have the same data nor are the same object in memory. However, when we compare them, we want them to compare equal. 
What's happening here is that the Monitor object implemented the __eq__ method.\nclass Monitor(object):\n # ...\n\n def __eq__(self, other):\n return self.make == other.make and self.model == other.model\n\nAnswering your question\nWhen comparing to None, always use is not. None is a singleton in Python - there is only ever one instance of it in memory.\nBy comparing identity, this can be performed very quickly. Python checks whether the object you're referring to has the same memory address as the global None object - a very, very fast comparison of two numbers.\nBy comparing equality, Python has to look up whether your object has an __eq__ method. If it does not, it examines each superclass looking for an __eq__ method. If it finds one, Python calls it. This is especially bad if the __eq__ method is slow and doesn't immediately return when it notices that the other object is None.\nDid you not implement __eq__? Then Python will probably find the __eq__ method on object and use that instead - which just checks for object identity anyway.\nWhen comparing most other things in Python, you will be using !=.\n", "Consider the following:\nclass Bad(object):\n def __eq__(self, other):\n return True\n\nc = Bad()\nc is None # False, equivalent to id(c) == id(None)\nc == None # True, equivalent to c.__eq__(None)\n\n", "None is a singleton, therefore identity comparison will always work, whereas an object can fake the equality comparison via .__eq__().\n", "\n>>> () is ()\nTrue\n>>> 1 is 1\nTrue\n>>> (1,) == (1,)\nTrue\n>>> (1,) is (1,)\nFalse\n>>> a = (1,)\n>>> b = a\n>>> a is b\nTrue\n\nSome objects are singletons, and thus is with them is equivalent to ==. Most are not.\n" ]
[ 365, 177, 48, 23, 11 ]
[]
[]
[ "operators", "python" ]
stackoverflow_0002209755_operators_python.txt
Q: print beautiful value with error I want to display in an HTML page some data with errors, for example: (value, error) -> string (123, 12) -> (12 +- 1) x 10^1 (4234.3, 2) -> (4234 +- 2) (0.02312, 0.003) -> (23 +- 3) x 10^-3 I've produced this: from math import log10 def format_value_error(value,error): E = int(log10(abs(error))) val = float(value) / 10**E err = float(error) / 10**E return "(%f +- %f) x 10^%d" % (val, err, E) but I'm having some difficulties with rounding. Are there any libraries with this functionality?
print beautiful value with error
I want to display in an HTML page some data with errors, for example: (value, error) -> string (123, 12) -> (12 +- 1) x 10^1 (4234.3, 2) -> (4234 +- 2) (0.02312, 0.003) -> (23 +- 3) x 10^-3 I've produced this: from math import log10 def format_value_error(value,error): E = int(log10(abs(error))) val = float(value) / 10**E err = float(error) / 10**E return "(%f +- %f) x 10^%d" % (val, err, E) but I'm having some difficulties with rounding. Are there any libraries with this functionality?
[ "I'm not sure exactly what you want, but I assume you just want to round the numbers you have to the nearest integer? If so, you can use the built-in function round: \n>>> int(round(1.5))\n2\n\nHere's the help:\n>>> help(round)\nHelp on built-in function round in module __builtin__:\n\nround(...)\n round(number[, ndigits]) -> floating point number\n\n Round a number to a given precision in decimal digits (default 0 digits).\n This always returns a floating point number. Precision may be negative.\n\nIf you want to round down you can use floor from math. I think you actually want to use this after taking the logarithm, rather than just casting to int, as int(-0.5) is 0, not -1 as you want. Here's a modified version of your program that I think does what you want:\nfrom math import log10, floor\ndef format_value_error(value,error):\n E = int(floor(log10(error)))\n val = int(round(float(value) / 10**E))\n err = int(round(float(error) / 10**E))\n return \"(%d +- %d) x 10^%d\" % (val, err, E)\n\nprint format_value_error(123, 12)\nprint format_value_error(4234.3, 2)\nprint format_value_error(0.02312, 0.003)\n\nThis gives the following output:\n(12 +- 1) x 10^1\n(4234 +- 2) x 10^0\n(23 +- 3) x 10^-3\n\nThis is very close to what you want. The only difference is that the text x 10^0 should not be printed, but I'm sure you can find a solution for this. :)\n", "You could use this in Mark's solution to blank the exponent for 10^0\n if E:\n return \"(%d +- %d) x 10^%d\" % (val, err, E)\n else:\n return \"(%d +- %d)\" % (val, err)\n\n" ]
[ 3, 0 ]
[]
[]
[ "math", "printing", "python", "statistics" ]
stackoverflow_0002210475_math_printing_python_statistics.txt
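For reference, the two answers above merge into one complete function: Mark's floor/round version plus the second answer's suppression of the x 10^0 suffix. Nothing here is new; it is just the combined result.

    from math import log10, floor

    def format_value_error(value, error):
        # Scale so the error keeps one significant digit, then round.
        E = int(floor(log10(abs(error))))
        val = int(round(float(value) / 10**E))
        err = int(round(float(error) / 10**E))
        if E:
            return "(%d +- %d) x 10^%d" % (val, err, E)
        return "(%d +- %d)" % (val, err)

    print format_value_error(123, 12)         # (12 +- 1) x 10^1
    print format_value_error(4234.3, 2)       # (4234 +- 2)
    print format_value_error(0.02312, 0.003)  # (23 +- 3) x 10^-3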
Q: Map list of tuples into a dictionary I've got a list of tuples extracted from a table in a DB which looks like (key , foreignkey , value). There is a many to one relationship between the key and foreignkeys and I'd like to convert it into a dict indexed by the foreignkey containing the sum of all values with that foreignkey, i.e. { foreignkey , sumof( value ) }. I wrote something that's rather verbose: myDict = {} for item in myTupleList: if item[1] in myDict: myDict [ item[1] ] += item[2] else: myDict [ item[1] ] = item[2] but after seeing this question's answer or these two there's got to be a more concise way of expressing what I'd like to do. And if this is a repeat, I missed it and will remove the question if you can provide the link. A: Assuming all your values are ints, you could use a defaultdict to make this easier: from collections import defaultdict myDict = defaultdict(int) for item in myTupleList: myDict[item[1]] += item[2] defaultdict is like a dictionary, except if you try to get a key that isn't there it fills in the value returned by the callable - in this case, int, which returns 0 when called with no arguments. UPDATE: Thanks to @gnibbler for reminding me, but tuples can be unpacked in a for loop: from collections import defaultdict myDict = defaultdict(int) for _, key, val in myTupleList: myDict[key] += val Here, the 3-item tuple gets unpacked into the variables _, key, and val. _ is a common placeholder name in Python, used to indicate that the value isn't really important. Using this, we can avoid the hairy item[1] and item[2] indexing. We can't rely on this if the tuples in myTupleList aren't all the same size, but I bet they are. (We also avoid the situation of someone looking at the code and thinking it's broken because the writer thought arrays were 1-indexed, which is what I thought when I first read the code. I wasn't alleviated of this until I read the question. In the above loop, however, it's obvious that myTupleList is a tuple of three elements, and we just don't need the first one.) A: from collections import defaultdict myDict = defaultdict(int) for _, key, value in myTupleList: myDict[key] += value A: Here's my (tongue in cheek) answer: myDict = reduce(lambda d, t: (d.__setitem__(t[1], d.get(t[1], 0) + t[2]), d)[1], myTupleList, {}) It is ugly and bad, but here is how it works. The first argument to reduce (because it isn't clear there) is lambda d, t: (d.__setitem__(t[1], d.get(t[1], 0) + t[2]), d)[1]. I will talk about this later, but for now, I'll just call it joe (no offense to any people named Joe intended). The reduce function basically works like this: joe(joe(joe({}, myTupleList[0]), myTupleList[1]), myTupleList[2]) And that's for a three element list. As you can see, it basically uses its first argument to sort of accumulate each result into the final answer. In this case, the final answer is the dictionary you wanted. Now for joe itself. Here is joe as a def: def joe(myDict, tupleItem): myDict[tupleItem[1]] = myDict.get(tupleItem[1], 0) + tupleItem[2] return myDict Unfortunately, no form of = or return is allowed in a Python lambda so that has to be gotten around. I get around the lack of = by calling the dicts __setitem__ function directly. I get around the lack of return in by creating a tuple with the return value of __setitem__ and the dictionary and then return the tuple element containing the dictionary. I will slowly alter joe so you can see how I accomplished this. 
First, remove the =: def joe(myDict, tupleItem): # Using __setitem__ to avoid using '=' myDict.__setitem__(tupleItem[1], myDict.get(tupleItem[1], 0) + tupleItem[2]) return myDict Next, make the entire expression evaluate to the value we want to return: def joe(myDict, tupleItem): return (myDict.__setitem__(tupleItem[1], myDict.get(tupleItem[1], 0) + tupleItem[2]), myDict)[1] I have run across this use-case for reduce and dict many times in my Python programming. In my opinion, dict could use a member function reduceto(keyfunc, reduce_func, iterable, default_val=None). keyfunc would take the current value from the iterable and return the key. reduce_func would take the existing value in the dictionary and the value from the iterable and return the new value for the dictionary. default_val would be what was passed into reduce_func if the dictionary was missing a key. The return value should be the dictionary itself so you could do things like: myDict = dict().reduceto(lambda t: t[1], lambda o, t: o + t, myTupleList, 0) A: Maybe not exactly readable but it should work: fks = dict([ (v[1], True) for v in myTupleList ]).keys() myDict = dict([ (fk, sum([ v[2] for v in myTupleList if v[1] == fk ])) for fk in fks ]) The first line finds all unique foreign keys. The second line builds your dictionary by first constructing a list of (fk, sum(all values for this fk))-pairs and turning that into a dictionary. A: Look at SQLAlchemy and see if that does all the mapping you need and perhaps more
Map list of tuples into a dictionary
I've got a list of tuples extracted from a table in a DB which looks like (key , foreignkey , value). There is a many to one relationship between the key and foreignkeys and I'd like to convert it into a dict indexed by the foreignkey containing the sum of all values with that foreignkey, i.e. { foreignkey , sumof( value ) }. I wrote something that's rather verbose: myDict = {} for item in myTupleList: if item[1] in myDict: myDict [ item[1] ] += item[2] else: myDict [ item[1] ] = item[2] but after seeing this question's answer or these two there's got to be a more concise way of expressing what I'd like to do. And if this is a repeat, I missed it and will remove the question if you can provide the link.
[ "Assuming all your values are ints, you could use a defaultdict to make this easier:\nfrom collections import defaultdict\n\nmyDict = defaultdict(int)\n\nfor item in myTupleList:\n myDict[item[1]] += item[2]\n\ndefaultdict is like a dictionary, except if you try to get a key that isn't there it fills in the value returned by the callable - in this case, int, which returns 0 when called with no arguments.\nUPDATE: Thanks to @gnibbler for reminding me, but tuples can be unpacked in a for loop:\nfrom collections import defaultdict\n\nmyDict = defaultdict(int)\n\nfor _, key, val in myTupleList:\n myDict[key] += val\n\nHere, the 3-item tuple gets unpacked into the variables _, key, and val. _ is a common placeholder name in Python, used to indicate that the value isn't really important. Using this, we can avoid the hairy item[1] and item[2] indexing. We can't rely on this if the tuples in myTupleList aren't all the same size, but I bet they are.\n(We also avoid the situation of someone looking at the code and thinking it's broken because the writer thought arrays were 1-indexed, which is what I thought when I first read the code. I wasn't alleviated of this until I read the question. In the above loop, however, it's obvious that myTupleList is a tuple of three elements, and we just don't need the first one.)\n", "from collections import defaultdict\n\nmyDict = defaultdict(int)\n\nfor _, key, value in myTupleList:\n myDict[key] += value\n\n", "Here's my (tongue in cheek) answer:\nmyDict = reduce(lambda d, t: (d.__setitem__(t[1], d.get(t[1], 0) + t[2]), d)[1], myTupleList, {})\n\nIt is ugly and bad, but here is how it works.\nThe first argument to reduce (because it isn't clear there) is lambda d, t: (d.__setitem__(t[1], d.get(t[1], 0) + t[2]), d)[1]. I will talk about this later, but for now, I'll just call it joe (no offense to any people named Joe intended). The reduce function basically works like this:\n joe(joe(joe({}, myTupleList[0]), myTupleList[1]), myTupleList[2])\n\nAnd that's for a three element list. As you can see, it basically uses its first argument to sort of accumulate each result into the final answer. In this case, the final answer is the dictionary you wanted.\nNow for joe itself. Here is joe as a def:\ndef joe(myDict, tupleItem):\n myDict[tupleItem[1]] = myDict.get(tupleItem[1], 0) + tupleItem[2]\n return myDict\n\nUnfortunately, no form of = or return is allowed in a Python lambda so that has to be gotten around. I get around the lack of = by calling the dicts __setitem__ function directly. I get around the lack of return in by creating a tuple with the return value of __setitem__ and the dictionary and then return the tuple element containing the dictionary. I will slowly alter joe so you can see how I accomplished this.\nFirst, remove the =:\ndef joe(myDict, tupleItem):\n # Using __setitem__ to avoid using '='\n myDict.__setitem__(tupleItem[1], myDict.get(tupleItem[1], 0) + tupleItem[2])\n return myDict\n\nNext, make the entire expression evaluate to the value we want to return:\ndef joe(myDict, tupleItem):\n return (myDict.__setitem__(tupleItem[1], myDict.get(tupleItem[1], 0) + tupleItem[2]),\n myDict)[1]\n\nI have run across this use-case for reduce and dict many times in my Python programming. In my opinion, dict could use a member function reduceto(keyfunc, reduce_func, iterable, default_val=None). keyfunc would take the current value from the iterable and return the key. 
reduce_func would take the existing value in the dictionary and the value from the iterable and return the new value for the dictionary. default_val would be what was passed into reduce_func if the dictionary was missing a key. The return value should be the dictionary itself so you could do things like:\nmyDict = dict().reduceto(lambda t: t[1], lambda o, t: o + t, myTupleList, 0)\n\n", "Maybe not exactly readable but it should work:\nfks = dict([ (v[1], True) for v in myTupleList ]).keys()\nmyDict = dict([ (fk, sum([ v[2] for v in myTupleList if v[1] == fk ])) for fk in fks ])\n\nThe first line finds all unique foreign keys. The second line builds your dictionary by first constructing a list of (fk, sum(all values for this fk))-pairs and turning that into a dictionary.\n", "Look at SQLAlchemy and see if that does all the mapping you need and perhaps more\n" ]
[ 9, 5, 4, 0, 0 ]
[]
[]
[ "dictionary", "list", "python", "tuples" ]
stackoverflow_0002210581_dictionary_list_python_tuples.txt
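One more variant worth recording next to the defaultdict answers above: a plain dict with .get(), which needs no imports and keeps the accepted answer's tuple unpacking. It assumes every tuple is (key, foreignkey, value), as in the question.

    sums = {}
    for _, fk, value in myTupleList:
        sums[fk] = sums.get(fk, 0) + value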
Q: How do I generate a table of contents for HTML text in Python? Assume that I have some HTML code, like this (generated from Markdown or Textile or something): <h1>A header</h1> <p>Foo</p> <h2>Another header</h2> <p>More content</p> <h2>Different header</h2> <h1>Another toplevel header <!-- and so on --> How could I generate a table of contents for it using Python? A: Use an HTML parser such as lxml or BeautifulSoup to find all header elements. A: Here's an example using lxml and xpath. from lxml import etree doc = etree.parse("test.xml") for node in doc.xpath('//h1|//h2|//h3|//h4|//h5'): print node.tag, node.text
How do I generate a table of contents for HTML text in Python?
Assume that I have some HTML code, like this (generated from Markdown or Textile or something): <h1>A header</h1> <p>Foo</p> <h2>Another header</h2> <p>More content</p> <h2>Different header</h2> <h1>Another toplevel header <!-- and so on --> How could I generate a table of contents for it using Python?
[ "Use an HTML parser such as lxml or BeautifulSoup to find all header elements.\n", "Here's an example using lxml and xpath.\nfrom lxml import etree\ndoc = etree.parse(\"test.xml\")\nfor node in doc.xpath('//h1|//h2|//h3|//h4|//h5'):\n print node.tag, node.text\n\n" ]
[ 6, 3 ]
[]
[]
[ "html", "python", "tableofcontents" ]
stackoverflow_0002210265_html_python_tableofcontents.txt
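To complement the lxml example above, here is a BeautifulSoup 3 sketch (the library current at the time of this thread) that prints an indented table of contents. The two-space indentation scheme is a choice of this sketch, not something the answers specify.

    from BeautifulSoup import BeautifulSoup

    html = """<h1>A header</h1><p>Foo</p><h2>Another header</h2>
    <p>More content</p><h2>Different header</h2><h1>Another toplevel header</h1>"""

    soup = BeautifulSoup(html)
    for tag in soup.findAll(['h1', 'h2', 'h3', 'h4', 'h5', 'h6']):
        level = int(tag.name[1])                  # 'h2' -> 2
        text = ''.join(tag.findAll(text=True))    # header's text content
        print '  ' * (level - 1) + text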
Q: Dynamically select database based on request I'm trying to keep my RESTful site DRY, and I can't come up with a good way to factor out the code to dynamically select from each "user's" separate database. We've got a separate database for each client. This comes in as a part of the URL, and is passed into each view as a keyword arg. I want to give each and every view the behavior of accessing the corresponding database WITHOUT having to make sure each programmer writing a view remembers to use Thing.objects.using(user).all() and t = Thing() t.save(using=user) every time. It seems like there ought to be some way to intercept the request and set the default database based on the args to the view before it hits the view, allowing us to use the usual Thing.objects.all() This would also have the advantage of factoring out all the user resolution code into a more appropriate place. A: We do this by the following technique. Apache picks off the first part of the path and routes this to a specific mod_wsgi Daemon. Each mod_wsgi daemon is a different customer's installation. We have many parallel customers, each with (nearly) identical code, all based off a single common installation of the base software. Each customer has a separate settings.py with their unique configuration. They don't (actually can't) know about each other because Apache has peeled off the top layer of the path for us.
Dynamically select database based on request
I'm trying to keep my RESTful site DRY, and I can't come up with a good way to factor out the code to dynamically select from each "user's" separate database. We've got a separate database for each client. This comes in as a part of the URL, and is passed into each view as a keyword arg. I want to give each and every view the behavior of accessing the corresponding database WITHOUT having to make sure each programmer writing a view remembers to use Thing.objects.using(user).all() and t = Thing() t.save(using=user) every time. It seems like there ought to be some way to intercept the request and set the default database based on the args to the view before it hits the view, allowing us to use the usual Thing.objects.all() This would also have the advantage of factoring out all the user resolution code into a more appropriate place.
[ "We do this by the following technique.\n\nApache picks off the first part of the path and routes this to a specific mod_wsgi Daemon.\nEach mod_wsgi daemon is a different customer's installation.\n\nWe have many parallel customers, each with (nearly) identical code, all based off a single common installation of the base software.\nEach customer has a separate settings.py with their unique configuration. \nThey don't (actually can't) know about each other because Apache has peeled off the top layer of the path for us.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002210914_django_python.txt
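The accepted answer solves this at the deployment layer. Since the question already uses .using(), Django 1.2's database routers offer an in-process alternative; the outline below is a hedged sketch of that pattern, and every name in it (the 'client' kwarg, both class names) is an assumption.

    import threading

    _local = threading.local()

    class ClientDatabaseMiddleware(object):
        # Stash the per-request database alias before the view runs.
        # Assumes the URLconf passes the client name as a view kwarg.
        def process_view(self, request, view_func, view_args, view_kwargs):
            _local.db = view_kwargs.get('client', 'default')

    class ClientDatabaseRouter(object):
        # Every ORM read/write is steered to the stashed alias.
        def db_for_read(self, model, **hints):
            return getattr(_local, 'db', 'default')

        def db_for_write(self, model, **hints):
            return getattr(_local, 'db', 'default')

The router would be registered via DATABASE_ROUTERS in settings.py, and each client alias must exist in the DATABASES dict.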
Q: Python Build Problem on Mac OS 10.6 / Snow Leopard I'm encountering a build problem for Python 2.6.4 on Snow Leopard. Mac OS X 10.6 Yonah CPU, 32-bit gcc-4.2.1 Update I Solved by removing all non-standard includes and libraries from CFLAGS (there happened to be a uuid/uuid.h in there ...). Still, it compiled despite the error describe below, with /usr/include/hfs/hfs_format.h:765 being a hot spot. For the curious or resourceful, the source file in question: $ cat /usr/include/hfs/hfs_format.h ... 748 #include <uuid/uuid.h> 749 750 /* JournalInfoBlock - Structure that describes where our journal lives */ 751 752 // the original size of the reserved field in the JournalInfoBlock was 753 // 32*sizeof(u_int32_t). To keep the total size of the structure the 754 // same we subtract the size of new fields (currently: ext_jnl_uuid and 755 // machine_uuid). If you add additional fields, place them before the 756 // reserved field and subtract their size in this macro. 757 // 758 #define JIB_RESERVED_SIZE ((32*sizeof(u_int32_t)) - sizeof(uuid_string_t) - 48) 759 760 struct JournalInfoBlock { 761 u_int32_t flags; 762 u_int32_t device_signature[8]; // signature used to locate device. 763 u_int64_t offset; // byte offset to the journal on the device 764 u_int64_t size; // size in bytes of the journal 765 uuid_string_t ext_jnl_uuid; 766 char machine_serial_num[48]; 767 char reserved[JIB_RESERVED_SIZE]; 768 } __attribute__((aligned(2), packed)); 769 typedef struct JournalInfoBlock JournalInfoBlock; ... I'm leaving the question open, since the build yielded too much warnings and this error and still puzzles me a bit ... Update II To get rid of the warnings regarding the deployment target, I edited the Makefile before compilation: $ cat Makefile ... 126 MACOSX_DEPLOYMENT_TARGET=10.3 # => 10.6 ... Original Question When trying to build Python 2.6.4 from source, I run into an error: $ uname -v $ Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; \ root:xnu-1486.2.11~1/RELEASE_I386 $ cd ~/src/Python-2.6.4 $ ./configure --enable-universalsdk=/ --prefix=$HOME checking for --with-universal-archs... 32-bit ... checking machine type as reported by uname -m... i386 ... checking for OSX 10.5 SDK or later... no // <- But I have XCode + the SDKs installed? ... $ make ... ... /usr/include/hfs/hfs_format.h:765: error: \ expected specifier-qualifier-list before ‘uuid_string_t’ It seems to root in Python/mactoolboxglue.c. Hints welcome! Also I get a lot of warnings of these kinds: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for \ Intel with Mac OS X Deployment Target < 10.4 is invalid. gcc -c -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -DNDEBUG -g \ -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include \ -DPy_BUILD_CORE -o Objects/structseq.o Objects/structseq.c In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from Include/pyport.h:235, from Include/Python.h:58, from Objects/structseq.c:4: A: Try adding --universal-archs=32-bit to the configure arguments. EDIT: You may also need to set environment variable MACOSX_DEPLOYMENT_TARGET=10.6 and explicitly use the 10.6 SDK: export MACOSX_DEPLOYMENT_TARGET=10.6 ./configure --universal-archs=32-bit --enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk ... There are still some configure issues with building Python on 10.6. If you re-use the build directory, make sure you clean out all of the cached files that previous runs of configure may have left behind. 
BTW, if you just need a 32-bit version, you could use the 2.6.4 OS X installer from python.org. A: It seems that you aren't the only one running into this error: Not sure if this has the same root cause : http://trac.macports.org/ticket/21282 Scroll down to the end comments,which seem to indicate a positive outcome. (repeated here for convenience): ...//Quote from trac.macports.org: "try to rename, move or delete /opt/local/include/uuid/uuid.h" ...//End Quote
Python Build Problem on Mac OS 10.6 / Snow Leopard
I'm encountering a build problem for Python 2.6.4 on Snow Leopard. Mac OS X 10.6 Yonah CPU, 32-bit gcc-4.2.1 Update I Solved by removing all non-standard includes and libraries from CFLAGS (there happened to be a uuid/uuid.h in there ...). Still, it compiled despite the error describe below, with /usr/include/hfs/hfs_format.h:765 being a hot spot. For the curious or resourceful, the source file in question: $ cat /usr/include/hfs/hfs_format.h ... 748 #include <uuid/uuid.h> 749 750 /* JournalInfoBlock - Structure that describes where our journal lives */ 751 752 // the original size of the reserved field in the JournalInfoBlock was 753 // 32*sizeof(u_int32_t). To keep the total size of the structure the 754 // same we subtract the size of new fields (currently: ext_jnl_uuid and 755 // machine_uuid). If you add additional fields, place them before the 756 // reserved field and subtract their size in this macro. 757 // 758 #define JIB_RESERVED_SIZE ((32*sizeof(u_int32_t)) - sizeof(uuid_string_t) - 48) 759 760 struct JournalInfoBlock { 761 u_int32_t flags; 762 u_int32_t device_signature[8]; // signature used to locate device. 763 u_int64_t offset; // byte offset to the journal on the device 764 u_int64_t size; // size in bytes of the journal 765 uuid_string_t ext_jnl_uuid; 766 char machine_serial_num[48]; 767 char reserved[JIB_RESERVED_SIZE]; 768 } __attribute__((aligned(2), packed)); 769 typedef struct JournalInfoBlock JournalInfoBlock; ... I'm leaving the question open, since the build yielded too much warnings and this error and still puzzles me a bit ... Update II To get rid of the warnings regarding the deployment target, I edited the Makefile before compilation: $ cat Makefile ... 126 MACOSX_DEPLOYMENT_TARGET=10.3 # => 10.6 ... Original Question When trying to build Python 2.6.4 from source, I run into an error: $ uname -v $ Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; \ root:xnu-1486.2.11~1/RELEASE_I386 $ cd ~/src/Python-2.6.4 $ ./configure --enable-universalsdk=/ --prefix=$HOME checking for --with-universal-archs... 32-bit ... checking machine type as reported by uname -m... i386 ... checking for OSX 10.5 SDK or later... no // <- But I have XCode + the SDKs installed? ... $ make ... ... /usr/include/hfs/hfs_format.h:765: error: \ expected specifier-qualifier-list before ‘uuid_string_t’ It seems to root in Python/mactoolboxglue.c. Hints welcome! Also I get a lot of warnings of these kinds: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for \ Intel with Mac OS X Deployment Target < 10.4 is invalid. gcc -c -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -DNDEBUG -g \ -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include \ -DPy_BUILD_CORE -o Objects/structseq.o Objects/structseq.c In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from Include/pyport.h:235, from Include/Python.h:58, from Objects/structseq.c:4:
[ "Try adding --universal-archs=32-bit to the configure arguments.\nEDIT: You may also need to set environment variable MACOSX_DEPLOYMENT_TARGET=10.6 and explicitly use the 10.6 SDK:\nexport MACOSX_DEPLOYMENT_TARGET=10.6\n./configure --universal-archs=32-bit --enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk ...\n\nThere are still some configure issues with building Python on 10.6. If you re-use the build directory, make sure you clean out all of the cached files that previous runs of configure may have left behind.\nBTW, if you just need a 32-bit version, you could use the 2.6.4 OS X installer from python.org.\n", "It seems that you aren't the only one running into this error:\nNot sure if this has the same root cause :\nhttp://trac.macports.org/ticket/21282\nScroll down to the end comments,which seem to indicate a positive outcome. (repeated here for convenience):\n...//Quote from trac.macports.org:\n\"try to rename, move or delete /opt/local/include/uuid/uuid.h\"\n...//End Quote\n" ]
[ 4, 1 ]
[]
[]
[ "32_bit", "build", "gcc", "macos", "python" ]
stackoverflow_0002211387_32_bit_build_gcc_macos_python.txt
Q: Python module matrix class that implements Modulo 2 arithmetic? I'm looking for a pure Python module that implements a matrix class where the underlying matrix operations are computed in modulo 2 arithmetic as in (x+y)%2 I need to do a lot of basic matrix manipulations ( transpose, multiplication, etc. ). Any help appreciated. Thanks in advance A: This might help you. Look for the Matrix module on that page. Here is the source. cheers
Python module matrix class that implements Modulo 2 arithmetic?
I'm looking for a pure Python module that implements a matrix class where the underlying matrix operations are computed in modulo 2 arithmetic as in (x+y)%2 I need to do a lot of basic matrix manipulations ( transpose, multiplication, etc. ). Any help appreciated. Thanks in advance
[ "This might help you. Look for the Matrix module on that page. Here is the source.\ncheers\n" ]
[ 1 ]
[]
[]
[ "math", "matrix", "python" ]
stackoverflow_0002211405_math_matrix_python.txt
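If no ready-made module fits, a pure-Python GF(2) matrix is short enough to write directly. The class below is a from-scratch sketch (not taken from the linked page) covering only the operations the question names: addition, multiplication, and transpose.

    class Mod2Matrix(object):
        def __init__(self, rows):
            # Reduce every entry mod 2 on the way in.
            self.rows = [[x % 2 for x in row] for row in rows]

        def __add__(self, other):
            # Addition mod 2 is XOR, entry by entry.
            return Mod2Matrix([[(a + b) % 2 for a, b in zip(r, s)]
                               for r, s in zip(self.rows, other.rows)])

        def __mul__(self, other):
            cols = zip(*other.rows)
            return Mod2Matrix([[sum(a * b for a, b in zip(row, col)) % 2
                                for col in cols] for row in self.rows])

        def transpose(self):
            return Mod2Matrix([list(col) for col in zip(*self.rows)])

    a = Mod2Matrix([[1, 1], [0, 1]])
    b = Mod2Matrix([[1, 0], [1, 1]])
    print (a * b).rows   # [[0, 1], [1, 1]]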
Q: Why can't my Django code run as a standalone Django script? Before you look at my code, see http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/ I want this to run as a standalone Django script. This is my code: from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet import os os.environ["DJANGO_SETTINGS_MODULE"] = "sphinx_test.settings" from django.core.management import setup_environ from sphinx_test import settings setup_environ(settings) DJANGO_SETTINGS_MODULE=sphinx_test.settings class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") import datetime class Group(models.Model): name = models.CharField(max_length=32) class Document(models.Model): group = models.ForeignKey(Group) date_added = models.DateTimeField(default=datetime.datetime.now) title = models.CharField(max_length=32) content = models.TextField() search = SphinxQuerySet(File,index="test1") class Meta: db_table = 'documents' and this is the traceback: Traceback (most recent call last): File "D:\zjm_code\sphinx_test\models.py", line 1, in <module> from django.db import models File "D:\Python25\Lib\site-packages\django\db\__init__.py", line 10, in <module> if not settings.DATABASE_ENGINE: File "D:\Python25\Lib\site-packages\django\utils\functional.py", line 269, in __getattr__ self._setup() File "D:\Python25\Lib\site-packages\django\conf\__init__.py", line 38, in _setup raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE) ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
Why can't my Django code run as a standalone Django script?
Before you look at my code, see http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/ I want this to run as a standalone Django script. This is my code: from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet import os os.environ["DJANGO_SETTINGS_MODULE"] = "sphinx_test.settings" from django.core.management import setup_environ from sphinx_test import settings setup_environ(settings) DJANGO_SETTINGS_MODULE=sphinx_test.settings class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") import datetime class Group(models.Model): name = models.CharField(max_length=32) class Document(models.Model): group = models.ForeignKey(Group) date_added = models.DateTimeField(default=datetime.datetime.now) title = models.CharField(max_length=32) content = models.TextField() search = SphinxQuerySet(File,index="test1") class Meta: db_table = 'documents' and this is the traceback: Traceback (most recent call last): File "D:\zjm_code\sphinx_test\models.py", line 1, in <module> from django.db import models File "D:\Python25\Lib\site-packages\django\db\__init__.py", line 10, in <module> if not settings.DATABASE_ENGINE: File "D:\Python25\Lib\site-packages\django\utils\functional.py", line 269, in __getattr__ self._setup() File "D:\Python25\Lib\site-packages\django\conf\__init__.py", line 38, in _setup raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE) ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
[ "The code that you use to set the django settings module has to come before any django-related code, including the django db imports at the top of the script.\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002211624_django_python.txt
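Applying the answer literally gives the skeleton below: settings wiring first, ORM imports after. It reuses the asker's own module names (sphinx_test), trims the models to one for brevity, and adds the app_label that a related thread later in this dump turns out to need.

    import os
    os.environ['DJANGO_SETTINGS_MODULE'] = 'sphinx_test.settings'

    from django.core.management import setup_environ
    from sphinx_test import settings
    setup_environ(settings)

    # Only now is it safe to import anything that touches the ORM.
    from django.db import models
    from djangosphinx.models import SphinxQuerySet

    class File(models.Model):
        name = models.CharField(max_length=200)
        tags = models.CharField(max_length=200)
        search = SphinxQuerySet(index="test1")

        class Meta:
            app_label = 'sphinx_test'   # models outside an app need this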
Q: selfClosingTags in BeautifulSoup Using BeautifulSoup to parse my XML import BeautifulSoup soup = BeautifulSoup.BeautifulStoneSoup( """<alan x="y" /><anne>hello</anne>""" ) # selfClosingTags=['alan']) print soup.prettify() This will output: <alan x="y"> <anne> hello </anne> </alan> ie, the anne tag is a child of the alan tag. If I pass selfClosingTags=['alan'] when I create the soup, I get: <alan x="y" /> <anne> hello </anne> Great! My question: why can't the presence of the /> be used to indicate a self closing tag? A: You are asking what was in the mind of an author, after having noted that he gives names like Beautiful[Stone]Soup to classes/modules :-) Here are two more examples of the behaviour of BeautifulStoneSoup: >>> soup = BeautifulSoup.BeautifulStoneSoup( """<alan x="y" ><anne>hello</anne>""" ) >>> print soup.prettify() <alan x="y"> <anne> hello </anne> </alan> >>> soup = BeautifulSoup.BeautifulStoneSoup( """<alan x="y" ><anne>hello</anne>""", selfClosingTags=['alan']) >>> print soup.prettify() <alan x="y" /> <anne> hello </anne> >>> My take: a self-closing tag is not legal if it is not defined to the parser. So the author had choices when deciding how to handle an illegal fragment like <alan x="y" /> ... (1) assume that the / was a mistake (2) treat alan as a self-closing tag quite independently of how it might be used elsewhere in the input (3) make 2 passes over the input nutting out in the first pass how each tag was used. Which choice do you prefer? A: I don't have a "why", but this might be of interest to you. If you use BeautifulSoup (no Stone) to parse XML with a self-closing tag, it works. Sort of: >>> soup = BeautifulSoup.BeautifulSoup( """<alan x="y" /><anne>hello</anne>""" ) # selfClosingTags=['alan']) >>> print soup.prettify() <alan x="y"> </alan> <anne> hello </anne> The nesting is right, even if alan is rendered as a start and an end tag.
selfClosingTags in BeautifulSoup
Using BeautifulSoup to parse my XML import BeautifulSoup soup = BeautifulSoup.BeautifulStoneSoup( """<alan x="y" /><anne>hello</anne>""" ) # selfClosingTags=['alan']) print soup.prettify() This will output: <alan x="y"> <anne> hello </anne> </alan> ie, the anne tag is a child of the alan tag. If I pass selfClosingTags=['alan'] when I create the soup, I get: <alan x="y" /> <anne> hello </anne> Great! My question: why can't the presence of the /> be used to indicate a self closing tag?
[ "You are asking what was in the mind of an author, after having noted that he gives names like Beautiful[Stone]Soup to classes/modules :-)\nHere are two more examples of the behaviour of BeautifulStoneSoup:\n>>> soup = BeautifulSoup.BeautifulStoneSoup(\n \"\"\"<alan x=\"y\" ><anne>hello</anne>\"\"\"\n )\n>>> print soup.prettify()\n<alan x=\"y\">\n <anne>\n hello\n </anne>\n</alan>\n\n>>> soup = BeautifulSoup.BeautifulStoneSoup(\n \"\"\"<alan x=\"y\" ><anne>hello</anne>\"\"\",\n selfClosingTags=['alan'])\n>>> print soup.prettify()\n<alan x=\"y\" />\n<anne>\n hello\n</anne>\n>>>\n\nMy take: a self-closing tag is not legal if it is not defined to the parser. So the author had choices when deciding how to handle an illegal fragment like <alan x=\"y\" /> ... (1) assume that the / was a mistake (2) treat alan as a self-closing tag quite independently of how it might be used elsewhere in the input (3) make 2 passes over the input nutting out in the first pass how each tag was used. Which choice do you prefer?\n", "I don't have a \"why\", but this might be of interest to you. If you use BeautifulSoup (no Stone) to parse XML with a self-closing tag, it works. Sort of:\n>>> soup = BeautifulSoup.BeautifulSoup( \"\"\"<alan x=\"y\" /><anne>hello</anne>\"\"\" ) # selfClosingTags=['alan'])\n>>> print soup.prettify()\n<alan x=\"y\">\n</alan>\n<anne>\n hello\n</anne>\n\nThe nesting is right, even if alan is rendered as a start and an end tag.\n" ]
[ 3, 1 ]
[]
[]
[ "beautifulsoup", "python", "xml" ]
stackoverflow_0002211589_beautifulsoup_python_xml.txt
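The parser will not infer self-closing tags from /> on its own, but the markup can be pre-scanned so the selfClosingTags list need not be maintained by hand. The regex below is a deliberately naive sketch; it will misfire on comments, CDATA sections, and quoted attribute values containing angle brackets.

    import re
    import BeautifulSoup

    markup = '<alan x="y" /><anne>hello</anne>'
    # Collect every tag name that appears in '<name ... />' form.
    self_closing = set(re.findall(r'<(\w+)[^<>]*/>', markup))

    soup = BeautifulSoup.BeautifulStoneSoup(
        markup, selfClosingTags=list(self_closing))
    print soup.prettify()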
Q: Django model with filterable attributes I've got two models. One represents a piece of equipment, the other represents a possible attribute the equipment has. Semantically, this might look like: Equipment: tractor, Attributes: wheels, towing Equipment: lawnmower, Attributes: wheels, blades Equipment: hedgetrimmer, Attributes: blades I want to make queries like, wheels = Attributes.objects.get(name='wheels') blades = Attributes.objects.get(name='blades') Equipment.objects.filter(has_attribute=wheels) \ .exclude(has_attribute=blades) How can I create Django models to do this? This seems simple, but I'm just too dense to see the right solution. One solution that popped into my head is to encode the list of Attribute IDs in an integer list like |109|14|3 and test for attributes using Equipment.objects.filter(attributes_contains='|%d|' % id) -- but this seems really wrong. A: Your second example is pretty close, but you need to understand how the QuerySet API works across relationships (i.e. joins). class Attribute(models.Model): name = models.CharField(max_length=20) class Equipment(models.Model): name = models.CharField(max_length=20) attributes = models.ManyToManyField(Attribute) equips = Equipment.objects.filter( attributes__name='wheels').exclude(attributes__name='blades') You can use Q objects in your QuerySet to do more interesting queries. And keep in mind you can always dump the SQL from a QuerySet like this: print equips.query.as_sql() Sometimes you'll want to see the exact SQL being generated to make sure you're using the API correctly.
Django model with filterable attributes
I've got two models. One represents a piece of equipment, the other represents a possible attribute the equipment has. Semantically, this might look like: Equipment: tractor, Attributes: wheels, towing Equipment: lawnmower, Attributes: wheels, blades Equipment: hedgetrimmer, Attributes: blades I want to make queries like, wheels = Attributes.objects.get(name='wheels') blades = Attributes.objects.get(name='blades') Equipment.objects.filter(has_attribute=wheels) \ .exclude(has_attribute=blades) How can I create Django models to do this? This seems simple, but I'm just too dense to see the right solution. One solution that popped into my head is to encode the list of Attribute IDs in an integer list like |109|14|3 and test for attributes using Equipment.objects.filter(attributes_contains='|%d|' % id) -- but this seems really wrong.
[ "Your second example is pretty close, but you need to understand how the QuerySet API works across relationships (i.e. joins).\nclass Attribute(models.Model):\n name = models.CharField(max_length=20)\n\nclass Equipment(models.Model):\n name = models.CharField(max_length=20)\n attributes = models.ManyToManyField(Attribute)\n\nequips = Equipment.objects.filter(\n attributes__name='wheels').exclude(attributes__name='blades')\n\nYou can use Q objects in your QuerySet to do more interesting queries.\nAnd keep in mind you can always dump the SQL from a QuerySet like this:\nprint equips.query.as_sql()\n\nSometimes you'll want to see the exact SQL being generated to make sure you're using the API correctly.\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "mysql", "python", "sql" ]
stackoverflow_0002211631_django_django_models_mysql_python_sql.txt
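The answer mentions Q objects in passing; spelled out for this schema, an OR-filter looks like the sketch below. The .distinct() call is an addition of this sketch, since joins across a ManyToManyField can otherwise yield duplicate Equipment rows.

    from django.db.models import Q

    # Equipment that has wheels OR towing, but never blades.
    equips = (Equipment.objects
              .filter(Q(attributes__name='wheels') |
                      Q(attributes__name='towing'))
              .exclude(attributes__name='blades')
              .distinct())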
Q: Why do I get 'list index out of range' in my Django code? IndexError: list index out of range This is my Django code: import os os.environ["DJANGO_SETTINGS_MODULE"] = "sphinx_test.settings" #from django.core.management import setup_environ #from sphinx_test import settings #setup_environ(settings) from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") import datetime class Group(models.Model): name = models.CharField(max_length=32) class Document(models.Model): group = models.ForeignKey(Group) date_added = models.DateTimeField(default=datetime.datetime.now) title = models.CharField(max_length=32) content = models.TextField() search = SphinxQuerySet(File,index="test1") class Meta: db_table = 'documents' and this is the traceback: Traceback (most recent call last): File "D:\zjm_code\sphinx_test\models.py", line 16, in <module> class File(models.Model): File "D:\Python25\Lib\site-packages\django\db\models\base.py", line 52, in __new__ kwargs = {"app_label": model_module.__name__.split('.')[-2]} IndexError: list index out of range
why 'list index out of range' in my django code;
IndexError: list index out of range this is my django code : import os os.environ["DJANGO_SETTINGS_MODULE"] = "sphinx_test.settings" #from django.core.management import setup_environ #from sphinx_test import settings #setup_environ(settings) from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") import datetime class Group(models.Model): name = models.CharField(max_length=32) class Document(models.Model): group = models.ForeignKey(Group) date_added = models.DateTimeField(default=datetime.datetime.now) title = models.CharField(max_length=32) content = models.TextField() search = SphinxQuerySet(File,index="test1") class Meta: db_table = 'documents' and Traceback (most recent call last): File "D:\zjm_code\sphinx_test\models.py", line 16, in <module> class File(models.Model): File "D:\Python25\Lib\site-packages\django\db\models\base.py", line 52, in __new__ kwargs = {"app_label": model_module.__name__.split('.')[-2]} IndexError: list index out of range
[ "You need to set Meta.app_label to something usable.\n", "That's odd, that part of the code is just supposed to determine your app name. See the section here starting line 45. What's your app name for this?\nYou may be able to avoid the error by setting app_label to the name of your app in the Meta section of your model. \n" ]
[ 3, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002211705_django_python.txt
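To make the app_label fix concrete, a minimal sketch for the Django 1.x era this thread dates from, reusing the question's own model and fields:

from django.db import models

class File(models.Model):
    name = models.CharField(max_length=200)
    tags = models.CharField(max_length=200)

    class Meta:
        # Django normally derives this from the module path
        # (model_module.__name__.split('.')[-2], per the traceback);
        # set it explicitly when the model lives outside a regular
        # app package, as in a standalone script.
        app_label = 'sphinx_test'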
Q: How to animate a graph I've drawn in PyQt? So I've managed to get a graph drawn up on my screen like such: class Window(QWidget): #stuff graphicsView = QGraphicsView(self) scene = QGraphicsScene(self) #draw our nodes and edges. for i in range(0, len(MAIN_WORLD.currentMax.tour) - 1): node = QGraphicsRectItem(MAIN_WORLD.currentMax.tour[i][0]/3, MAIN_WORLD.currentMax.tour[i][1]/3, 5, 5) edge = QGraphicsLineItem(MAIN_WORLD.currentMax.tour[i][0]/3, MAIN_WORLD.currentMax.tour[i][1]/3, MAIN_WORLD.currentMax.tour[i+1][0]/3, MAIN_WORLD.currentMax.tour[i+1][1]/3) scene.addItem(node) scene.addItem(edge) #now go back and draw our connecting edge. Connects end to home node. connectingEdge = QGraphicsLineItem(MAIN_WORLD.currentMax.tour[0][0]/3, MAIN_WORLD.currentMax.tour[0][1]/3, MAIN_WORLD.currentMax.tour[len(MAIN_WORLD.currentMax.tour) - 1][0]/3, MAIN_WORLD.currentMax.tour[len(MAIN_WORLD.currentMax.tour) - 1][1]/3) scene.addItem(connectingEdge) graphicsView.setScene(scene) hbox = QVBoxLayout(self) #some more stuff.. hbox.addWidget(graphicsView) self.setLayout(hbox) Now, the edges are going to be updating constantly, so I want to be able to remove those edges and redraw them. How can I do that? A: QGraphicsScene manages the drawing of the items you've added to it. If the position of the rectangles or lines has changed you can update them if you hold onto them: for i in range( ): nodes[i] = node = QGraphicsRectItem() scene.addItem(nodes[i]) Later, you can update a node's position: nodes[j].setRect(newx, newy, newwidth, newheight) Similarly for lines. If you need to remove one, you can use scene.removeItem(nodes[22])
How to animate a graph I've drawn in PyQt?
So I've managed to get a graph drawn up on my screen like such: class Window(QWidget): #stuff graphicsView = QGraphicsView(self) scene = QGraphicsScene(self) #draw our nodes and edges. for i in range(0, len(MAIN_WORLD.currentMax.tour) - 1): node = QGraphicsRectItem(MAIN_WORLD.currentMax.tour[i][0]/3, MAIN_WORLD.currentMax.tour[i][1]/3, 5, 5) edge = QGraphicsLineItem(MAIN_WORLD.currentMax.tour[i][0]/3, MAIN_WORLD.currentMax.tour[i][1]/3, MAIN_WORLD.currentMax.tour[i+1][0]/3, MAIN_WORLD.currentMax.tour[i+1][1]/3) scene.addItem(node) scene.addItem(edge) #now go back and draw our connecting edge. Connects end to home node. connectingEdge = QGraphicsLineItem(MAIN_WORLD.currentMax.tour[0][0]/3, MAIN_WORLD.currentMax.tour[0][1]/3, MAIN_WORLD.currentMax.tour[len(MAIN_WORLD.currentMax.tour) - 1][0]/3, MAIN_WORLD.currentMax.tour[len(MAIN_WORLD.currentMax.tour) - 1][1]/3) scene.addItem(connectingEdge) graphicsView.setScene(scene) hbox = QVBoxLayout(self) #some more stuff.. hbox.addWidget(graphicsView) self.setLayout(hbox) Now, the edges are going to be updating constantly, so I want to be able to remove those edges and redraw them. How can I do that?
[ "QGraphicsScene manages the drawing of the items you've added to it. If the position of the rectangles or lines has changed you can update them if you old onto them:\nfor i in range( ):\n nodes[i] = node = QGraphicsRectItem()\n scene.add(nodes[i])\n\nLater, you can update a node's position:\nnodes[j].setRect(newx, newy, newwidth, newheight)\n\nSimilarly for lines.\nIf you need to remove one, you can use\nscene.removeItem(nodes[22])\n\n" ]
[ 2 ]
[]
[]
[ "animation", "pyqt", "python" ]
stackoverflow_0002190210_animation_pyqt_python.txt
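Building on the answer's hold-a-reference approach, a small self-contained sketch of actually animating the lines with a QTimer; it assumes PyQt4 with new-style signals, and the coordinates and jiggle motion are made up for illustration:

import random
from PyQt4.QtCore import QTimer
from PyQt4.QtGui import (QApplication, QGraphicsLineItem,
                         QGraphicsScene, QGraphicsView)

app = QApplication([])
scene = QGraphicsScene()
view = QGraphicsView(scene)

# Hold onto the line items so they can be updated later.
edges = [QGraphicsLineItem(0, 0, 50, 50), QGraphicsLineItem(50, 50, 100, 0)]
for edge in edges:
    scene.addItem(edge)

def jiggle():
    # Move each edge's far endpoint a little; the scene repaints automatically.
    for edge in edges:
        line = edge.line()
        edge.setLine(line.x1(), line.y1(),
                     line.x2() + random.uniform(-2, 2),
                     line.y2() + random.uniform(-2, 2))

timer = QTimer()
timer.timeout.connect(jiggle)
timer.start(50)   # roughly twenty updates per second

view.show()
app.exec_()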
Q: what is the simplest way to create a table use django db api ,and base on 'Standalone Django scripts' we can call this 'Standalone Django table' i am not successful now . can you ??? thanks if you don't know 'Standalone Django scripts', look this http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/ 2.this is my code: from django.core.management import setup_environ from sphinx_test import settings setup_environ(settings) import sys sys.path.append('D:\zjm_code\sphinx_test') from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") #class Meta:#<----------- 1 # app_label = 'sphinx_test'#<------ 2 and my problem is when i add '#' in front of 1 and 2,it's error is : Traceback (most recent call last): File "D:\zjm_code\sphinx_test\books\models.py", line 17, in <module> class File(models.Model): File "D:\Python25\Lib\site-packages\django\db\models\base.py", line 52, in __new__ kwargs = {"app_label": model_module.__name__.split('.')[-2]} IndexError: list index out of range when i remove '#' in front of 1 and 2,it print nothing,and don't create table yet. why ?? A: The article you linked is a pretty damn good explanation of the simplest way to do it. Edit: Re-arranged this for clarity. Starting with a fresh app, create a model, sync the database to create the tables, and then use the setup_environ function from within your standalone script. Of course this is assuming that myapp is in your PYTHONPATH. If it isn't you must append the path to your app before try to import it: #!/usr/bin/env python from django.core.management import setup_environ # If myapp is not in your PYTHONPATH, append it to sys.path import sys sys.path.append('/path/to/myapp/') # This must be AFTER you update sys.path from myapp import settings setup_environ(settings) from myapp.models import Foo, Bar # do stuff foo = Foo.objects.get(id=1) bar = Bar.objects.filter(foo=foo.baz) Edit #2: In response to the updated code by the OP. You are trying to create a new model from within the standalone script, which is not the right approach. A standalone script should not be used to create new models or applications, but rather to reference already existing data. So using your example you would need to create a new app from within a project and then create another script to use as the standalone script. So I'll use the creation of standalone.py as an example. This is what file structure of D:\zjm_code\sphinx_test should look like: sphinx_test |-- __init__.py |-- manage.py |-- settings.py `-- urls.py So first you would need to create a new app from within this project folder. Let's call it file_test and create it with python manage.py startapp file_test. Now the file tree of D:\zjm_code\sphinx_test should look like this: sphinx_test |-- __init__.py |-- __init__.pyc |-- file_test | |-- __init__.py | |-- models.py | |-- tests.py | `-- views.py |-- manage.py |-- settings.py |-- settings.pyc `-- urls.py Now you can create your File model in file_test\models.py: from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") #class Meta:#<----------- 1 # app_label = 'sphinx_test'#<------ 2 After you create this model, this is where you would execute python manage.py syncdb to create the model tables in the database you have configured for this app. Then you can create standalone.py that has all the logic to work with the file_test.models.File model you just created: #!/path/to/python from django.core.management import setup_environ import sys sys.path.append('D:\zjm_code\sphinx_test') from sphinx_test import settings setup_environ(settings) # NOW you can import from your app from sphinx_test.file_test.models import File f = File(name='test', tags='abc,xyz,', search='foo') f.save() # confirm the data was saved if f.id: print 'success!' else: print 'fail!' You have now created a standalone script that can interact with the Django ORM without need for a webserver or a testserver instance running. This is why it is considered to be standalone, because the new script you created may be executed at the command-line alone.
what is the simplest way to create a table use django db api ,and base on 'Standalone Django scripts'
we can call this 'Standalone Django table' i am not successful now . can you ??? thanks if you don't know 'Standalone Django scripts', look this http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/ 2.this is my code: from django.core.management import setup_environ from sphinx_test import settings setup_environ(settings) import sys sys.path.append('D:\zjm_code\sphinx_test') from django.db import models from djangosphinx.models import SphinxSearch,SphinxQuerySet class File(models.Model): name = models.CharField(max_length=200) tags = models.CharField(max_length=200) objects = models.Manager() search = SphinxQuerySet(index="test1") #class Meta:#<----------- 1 # app_label = 'sphinx_test'#<------ 2 and my problem is when i add '#' in front of 1 and 2,it's error is : Traceback (most recent call last): File "D:\zjm_code\sphinx_test\books\models.py", line 17, in <module> class File(models.Model): File "D:\Python25\Lib\site-packages\django\db\models\base.py", line 52, in __new__ kwargs = {"app_label": model_module.__name__.split('.')[-2]} IndexError: list index out of range when i remove '#' in front of 1 and 2,it print nothing,and don't create table yet. why ??
[ "The article you linked is a pretty damn good explanation of the simplest way to do it.\nEdit: Re-arranged this for clarity.\nStarting with a fresh app, create a model, sync the database to create the tables, and then use the setup_environ function from within your standalone script.\nOf course this is assuming that myapp is in your PYTHONPATH. If it isn't you must append the path to your app before try to import it:\n#!/usr/bin/env python\n\nfrom django.core.management import setup_environ\n\n# If myapp is not in your PYTHONPATH, append it to sys.path\nimport sys\nsys.path.append('/path/to/myapp/')\n\n# This must be AFTER you update sys.path\nfrom myapp import settings\nsetup_environ(settings)\n\nfrom myapp.models import Foo, Bar\n\n# do stuff\nfoo = Foo.objects.get(id=1)\nbar = Bar.objects.filter(foo=foo.baz)\n\nEdit #2: In response to the updated code by the OP. You are trying to create a new model from within the standalone script, which is not the right approach. A standalone script should not be used to create new models or applications, but rather to reference already existing data.\nSo using your example you would need to create a new app from within a project and then create another script to use as the standalone script. So I'll use the creation of standalone.py as an example.\nThis is what file structure of D:\\zjm_code\\sphinx_test should look like:\nsphinx_test\n|-- __init__.py\n|-- manage.py\n|-- settings.py\n`-- urls.py\n\nSo first you would need to create a new app from within this project folder. Let's call it file_test and create it with python manage.py startapp file_test. Now the file tree of D:\\zjm_code\\sphinx_test should look like this:\nsphinx_test\n|-- __init__.py\n|-- __init__.pyc\n|-- file_test\n| |-- __init__.py\n| |-- models.py\n| |-- tests.py\n| `-- views.py\n|-- manage.py\n|-- settings.py\n|-- settings.pyc\n`-- urls.py\n\nNow you can create your File model in file_test\\models.py:\nfrom django.db import models\nfrom djangosphinx.models import SphinxSearch,SphinxQuerySet\n\nclass File(models.Model):\n name = models.CharField(max_length=200)\n tags = models.CharField(max_length=200) \n objects = models.Manager()\n search = SphinxQuerySet(index=\"test1\")\n #class Meta:#<----------- 1\n # app_label = 'sphinx_test'#<------ 2\n\nAfter you create this model, this is where you would execute python manage.py syncdb to create the model tables in the database you have configured for this app.\nThen you can create standalone.py that has all the logic to work with the file_test.models.File model you just created:\n#!/path/to/python\nfrom django.core.management import setup_environ\nimport sys\nsys.path.append('D:\\zjm_code\\sphinx_test')\n\nfrom sphinx_test import settings\nsetup_environ(settings)\n\n# NOW you can import from your app\nfrom sphinx_test.file_test.models import File\n\nf = File(name='test', tags='abc,xyz,', search='foo')\nf.save()\n\n# confirm the data was saved\nif f.id:\n print 'success!'\nelse:\n print 'fail!'\n\nYou have now created a standalone script that can interact with the Django ORM without need for a webserver or a testserver instance running. This is why it is considered to be standalone, because the new script you created may be executed at the command-line alone. \n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0002211816_django_python.txt
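As a footnote to the setup_environ recipe above, the question's own first lines hint at the other common bootstrap: setting DJANGO_SETTINGS_MODULE by hand. A sketch assuming the same D:\zjm_code\sphinx_test layout built in the answer, with file_test listed in INSTALLED_APPS:

import os
import sys

# Make the sphinx_test package importable, then point Django at its settings.
sys.path.append('D:\\zjm_code')   # parent directory of sphinx_test (assumed)
os.environ['DJANGO_SETTINGS_MODULE'] = 'sphinx_test.settings'

from sphinx_test.file_test.models import File
print File.objects.count()       # any ORM call works from this point on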
Q: Method of Multiple Assignment in Python I'm trying to prepare for a future in computer science, so I started with ECMAScript and I am now trying to learn more about Python. Coming from ECMAScript, seeing multiple assignments such as a, b, c = 1, 2, 3 leaves me bewildered for a moment, until I realize that there are multiple assignments going on. To make things a bit clearer, I'd really like to do (a, b, c) = (1, 2, 3) but I am not sure if this will be a measurable performance hit. From what I understand, tuples are essentially how multiple assignments work regardless, but there are a great many oddities in the world, so I try not to assume anything. Thanks in advance A: It's extremely easy to check, with the dis module: >>> import dis >>> dis.dis(compile('a,b,c=1,2,3','','exec')) 1 0 LOAD_CONST 4 ((1, 2, 3)) 3 UNPACK_SEQUENCE 3 6 STORE_NAME 0 (a) 9 STORE_NAME 1 (b) 12 STORE_NAME 2 (c) 15 LOAD_CONST 3 (None) 18 RETURN_VALUE >>> dis.dis(compile('(a,b,c)=(1,2,3)','','exec')) 1 0 LOAD_CONST 4 ((1, 2, 3)) 3 UNPACK_SEQUENCE 3 6 STORE_NAME 0 (a) 9 STORE_NAME 1 (b) 12 STORE_NAME 2 (c) 15 LOAD_CONST 3 (None) 18 RETURN_VALUE >>> See? Those totally redundant parentheses make absolutely no difference to the bytecode that's generated and executed -- just like, say, a+b and (a+b) will generate and execute exactly the same bytecode as each other. So, if you like to add redundant parentheses, knock yourself out -- people reading your code may not like them, but ones who are just executing it will never even notice. Only, why stop at just two pairs of redundant parentheses? See, >>> dis.dis(compile('(((a,b,c)))=(((1,2,3)))','','exec')) 1 0 LOAD_CONST 4 ((1, 2, 3)) 3 UNPACK_SEQUENCE 3 6 STORE_NAME 0 (a) 9 STORE_NAME 1 (b) 12 STORE_NAME 2 (c) 15 LOAD_CONST 3 (None) 18 RETURN_VALUE >>> six pairs of redundant parentheses (or any number, really) still produce exactly the same code. Once you leave the obvious minimum number of redundant parentheses (none at all: they're redundant, after all;-), exactly where do you stop?-) And why there, when it's "free" to add yet one more pair... or two... or three...?-) A: It should not have any effect on performance. The parenthesis do not make it a tuple, the comma's do. So (1,2,3) is exactly the same as 1,2,3 A: It also works for lists: a, b, c = [1, 2, 3] works just as well A: Multiple assignment is implemented as a combination of tuple packing and tuple unpacking, to my knowledge, so it should have the same effect.
Method of Multiple Assignment in Python
I'm trying to prepare for a future in computer science, so I started with ECMAScript and I am now trying to learn more about Python. Coming from ECMAScript, seeing multiple assignments such as a, b, c = 1, 2, 3 leaves me bewildered for a moment, until I realize that there are multiple assignments going on. To make things a bit clearer, I'd really like to do (a, b, c) = (1, 2, 3) but I am not sure if this will be a measurable performance hit. From what I understand, tuples are essentially how multiple assignments work regardless, but there are a great many oddities in the world, so I try not to assume anything. Thanks in advance
[ "It's extremely easy to check, with the dis module:\n>>> import dis\n>>> dis.dis(compile('a,b,c=1,2,3','','exec'))\n 1 0 LOAD_CONST 4 ((1, 2, 3))\n 3 UNPACK_SEQUENCE 3\n 6 STORE_NAME 0 (a)\n 9 STORE_NAME 1 (b)\n 12 STORE_NAME 2 (c)\n 15 LOAD_CONST 3 (None)\n 18 RETURN_VALUE \n>>> dis.dis(compile('(a,b,c)=(1,2,3)','','exec'))\n 1 0 LOAD_CONST 4 ((1, 2, 3))\n 3 UNPACK_SEQUENCE 3\n 6 STORE_NAME 0 (a)\n 9 STORE_NAME 1 (b)\n 12 STORE_NAME 2 (c)\n 15 LOAD_CONST 3 (None)\n 18 RETURN_VALUE \n>>> \n\nSee? Those totally redundant parentheses make absolutely no difference to the bytecode that's generated and executed -- just like, say, a+b and (a+b) will generate and execute exactly the same bytecode as each other. So, if you like to add redundant parentheses, knock yourself out -- people reading your code may not like them, but ones who are just executing it will never even notice. Only, why stop at just two pairs of redundant parentheses? See,\n>>> dis.dis(compile('(((a,b,c)))=(((1,2,3)))','','exec'))\n 1 0 LOAD_CONST 4 ((1, 2, 3))\n 3 UNPACK_SEQUENCE 3\n 6 STORE_NAME 0 (a)\n 9 STORE_NAME 1 (b)\n 12 STORE_NAME 2 (c)\n 15 LOAD_CONST 3 (None)\n 18 RETURN_VALUE \n>>> \n\nsix pairs of redundant parentheses (or any number, really) still produce exactly the same code. Once you leave the obvious minimum number of redundant parentheses (none at all: they're redundant, after all;-), exactly where do you stop?-) And why there, when it's \"free\" to add yet one more pair... or two... or three...?-)\n", "It should not have any effect on performance. The parenthesis do not make it a tuple, the comma's do. So (1,2,3) is exactly the same as 1,2,3\n", "It also works for lists:\na, b, c = [1, 2, 3]\n\nworks just as well\n", "Multiple assignment is implemented as a combination of tuple packing and tuple unpacking, to my knowledge, so it should have the same effect.\n" ]
[ 13, 11, 2, 1 ]
[]
[]
[ "python", "syntax", "variable_assignment" ]
stackoverflow_0002211822_python_syntax_variable_assignment.txt
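A small addendum illustrating the pack/unpack mechanism the dis listings expose -- the same machinery gives Python its in-place swap:

a, b = 1, 2
a, b = b, a              # the right side packs into a tuple, then unpacks
assert (a, b) == (2, 1)

t = 1, 2, 3              # the commas build the tuple; parentheses optional
x, y, z = t              # unpacking works from any iterable of matching length
assert (x, y, z) == (1, 2, 3)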
Q: Are "not in" and "is not" both operators? If so, are they in any way different than "not x in.." and "not x is.."? I've always preferred these: not 'x' in 'abc' not 'x' is 'a' (assuming, of course that everyone knows in and is out-prioritize not -- I probably should use parentheses) over the more (English) grammatical: 'x' not in 'abc' 'x' is not 'a' but didn't bother to think why until I realized they do not make syntactical sense 'x' == not 'a' 'x' not == 'a' both of course throw a syntax error. so I figured they were both two-word operators. However, the documentation only references is not and makes no mention of not in as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the Zen of Python (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not". A: It's easy to check if there's any difference, with the dis module: >>> dis.dis(compile('not a in b','','exec')) 1 0 LOAD_NAME 0 (a) 3 LOAD_NAME 1 (b) 6 COMPARE_OP 7 (not in) 9 POP_TOP 10 LOAD_CONST 0 (None) 13 RETURN_VALUE >>> dis.dis(compile('a not in b','','exec')) 1 0 LOAD_NAME 0 (a) 3 LOAD_NAME 1 (b) 6 COMPARE_OP 7 (not in) 9 POP_TOP 10 LOAD_CONST 0 (None) 13 RETURN_VALUE See? The awkward first form (who often confuses those who can't immediately tell the relative priority of not and in operators) gets compiled into exactly the same bytecode as the can't-confuse-anybody second for, with the single not in operator. Similarly: >>> dis.dis(compile('not a is b','','exec')) 1 0 LOAD_NAME 0 (a) 3 LOAD_NAME 1 (b) 6 COMPARE_OP 9 (is not) 9 POP_TOP 10 LOAD_CONST 0 (None) 13 RETURN_VALUE >>> dis.dis(compile('a is not b','','exec')) 1 0 LOAD_NAME 0 (a) 3 LOAD_NAME 1 (b) 6 COMPARE_OP 9 (is not) 9 POP_TOP 10 LOAD_CONST 0 (None) 13 RETURN_VALUE both these forms -- the goofy, confusing first one, and the elegant second one -- compare to exactly the same code, and both use the is not operator. Of course there's no good reason to use the forms that can confuse the reader (unless you like to set traps for those who read your code!-), but in terms of execution semantics and speed there's no difference whatsoever. A: From the python 2.6.4 docs at: http://docs.python.org/reference/expressions.html > The operator not in is defined to have the inverse true value of in. The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. x is not y yields the inverse truth value. eg: "x not in y" is exactly the same as "not x in y" and "x is not y" is the same as "not x is y". "x not == y" doesn't parse, but "x != y" does, so there's an equivalence there too ... HTH. A: The remainder of your question has been answered above, but I'll address the last question: the Zen of Python bit. "There should only be one way to do it" isn't mean't in a mathematical sense. If it were, there'd be no != operator, since that's just the inversion of ==. Similarly, no and and or --- you can, after all, just use a single nand command. There is a limit to the "one way" mantra: there should only be one high-level way of doing it. Of course, that high-level way can be decomposed --- you can write your own math.tan, and you never need to use urllib --- socket is always there for you. 
But just as urllib.open is a higher-level encapsulation of raw socket operations, so not in is a higher-level encapsulation of not and in. That's a bit banal, you might say. But you use x != y instead of not (x == y). A: Your documentation reference is nothing to do with syntax. Try this. Both is not and not in are two-word operators. A: x in a tests whether the element x is in a (which in this case would probably be a list or a string or some such) and returns True if it is. x not in a is similar, but returns False if x is in a. On the other hand, x is not a is analogous to x != a and not x is a. And like you said, x == not 5 will give you an error of sorts
Are "not in" and "is not" both operators? If so, are they in any way different than "not x in.." and "not x is.."?
I've always preferred these: not 'x' in 'abc' not 'x' is 'a' (assuming, of course that everyone knows in and is out-prioritize not -- I probably should use parentheses) over the more (English) grammatical: 'x' not in 'abc' 'x' is not 'a' but didn't bother to think why until I realized they do not make syntactical sense 'x' == not 'a' 'x' not == 'a' both of course throw a syntax error. so I figured they were both two-word operators. However, the documentation only references is not and makes no mention of not in as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the Zen of Python (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
[ "It's easy to check if there's any difference, with the dis module:\n>>> dis.dis(compile('not a in b','','exec'))\n 1 0 LOAD_NAME 0 (a)\n 3 LOAD_NAME 1 (b)\n 6 COMPARE_OP 7 (not in)\n 9 POP_TOP \n 10 LOAD_CONST 0 (None)\n 13 RETURN_VALUE \n>>> dis.dis(compile('a not in b','','exec'))\n 1 0 LOAD_NAME 0 (a)\n 3 LOAD_NAME 1 (b)\n 6 COMPARE_OP 7 (not in)\n 9 POP_TOP \n 10 LOAD_CONST 0 (None)\n 13 RETURN_VALUE \n\nSee? The awkward first form (who often confuses those who can't immediately tell the relative priority of not and in operators) gets compiled into exactly the same bytecode as the can't-confuse-anybody second for, with the single not in operator. Similarly:\n>>> dis.dis(compile('not a is b','','exec'))\n 1 0 LOAD_NAME 0 (a)\n 3 LOAD_NAME 1 (b)\n 6 COMPARE_OP 9 (is not)\n 9 POP_TOP \n 10 LOAD_CONST 0 (None)\n 13 RETURN_VALUE \n>>> dis.dis(compile('a is not b','','exec'))\n 1 0 LOAD_NAME 0 (a)\n 3 LOAD_NAME 1 (b)\n 6 COMPARE_OP 9 (is not)\n 9 POP_TOP \n 10 LOAD_CONST 0 (None)\n 13 RETURN_VALUE \n\nboth these forms -- the goofy, confusing first one, and the elegant second one -- compare to exactly the same code, and both use the is not operator.\nOf course there's no good reason to use the forms that can confuse the reader (unless you like to set traps for those who read your code!-), but in terms of execution semantics and speed there's no difference whatsoever.\n", "From the python 2.6.4 docs at: http://docs.python.org/reference/expressions.html\n>\n\nThe operator not in is defined to have\n the inverse true value of in.\nThe operators is and is not test for\n object identity: x is y is true if and\n only if x and y are the same object. x\n is not y yields the inverse truth\n value.\n\neg: \"x not in y\" is exactly the same as \"not x in y\" and \"x is not y\" is the same as \"not x is y\".\n\"x not == y\" doesn't parse, but \"x != y\" does, so there's an equivalence there too ...\nHTH.\n", "The remainder of your question has been answered above, but I'll address the last question: the Zen of Python bit.\n\"There should only be one way to do it\" isn't mean't in a mathematical sense. If it were, there'd be no != operator, since that's just the inversion of ==. Similarly, no and and or --- you can, after all, just use a single nand command. There is a limit to the \"one way\" mantra: there should only be one high-level way of doing it. Of course, that high-level way can be decomposed --- you can write your own math.tan, and you never need to use urllib --- socket is always there for you. But just as urllib.open is a higher-level encapsulation of raw socket operations, so not in is a higher-level encapsulation of not and in. That's a bit banal, you might say. But you use x != y instead of not (x == y).\n", "Your documentation reference is nothing to do with syntax. Try this. Both is not and not in are two-word operators.\n", "x in a tests whether the element x is in a (which in this case would probably be a list or a string or some such) and returns True if it is.\nx not in a is similar, but returns False if x is in a.\nOn the other hand, x is not a is analogous to x != a and not x is a.\nAnd like you said, x == not 5 will give you an error of sorts\n" ]
[ 7, 4, 2, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0002211706_python.txt
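For readers who want a runtime check to go with the bytecode comparison above, a quick sketch confirming the two spellings always agree (Python 2 syntax, to match the thread):

for x in ('a', 'q', None):
    assert (x not in ('a', 'b', 'c')) == (not x in ('a', 'b', 'c'))
    assert (x is not None) == (not x is None)
print 'both spellings agree'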
Q: Why does gethostbyaddr(gethostname()) return my IPv6 IP? I'm working on making a simple server application with python, and I'm trying to get the IP to bind the listening socket to. An example I looked at uses this: HOST = gethostbyaddr(gethostname()) With a little more processing after this, it should give me just the host IP as a string. This should return the IPv4 address. But when I run this code, it returns my IPv6 address. Why does it do this and how can I get my IPv4 address? If its relevant, I'm using windows vista and python 2.5 A: Getting your IP address is harder than you might think. Check this answer I gave for the one reliable way I've found. Here's what the answer says in case you don't like clicking on things: Use the netifaces module. Because networking is complex, using netifaces can be a little tricky, but here's how to do what you want: >>> import netifaces >>> netifaces.interfaces() ['lo', 'eth0'] >>> netifaces.ifaddresses('eth0') {17: [{'broadcast': 'ff:ff:ff:ff:ff:ff', 'addr': '00:11:2f:32:63:45'}], 2: [{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}], 10: [{'netmask': 'ffff:ffff:ffff:ffff::', 'addr': 'fe80::211:2fff:fe32:6345%eth0'}]} >>> for interface in netifaces.interfaces(): ... print netifaces.ifaddresses(interface)[netifaces.AF_INET] ... [{'peer': '127.0.0.1', 'netmask': '255.0.0.0', 'addr': '127.0.0.1'}] [{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}] >>> for interface in netifaces.interfaces(): ... for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]: ... print link['addr'] ... 127.0.0.1 10.0.0.2 This can be made a little more readable like this: from netifaces import interfaces, ifaddresses, AF_INET def ip4_addresses(): ip_list = [] for interface in interfaces(): for link in ifaddresses(interface)[AF_INET]: ip_list.append(link['addr']) return ip_list If you want IPv6 addresses, use AF_INET6 instead of AF_INET. If you're wondering why netifaces uses lists and dictionaries all over the place, it's because a single computer can have multiple NICs, and each NIC can have multiple addresses, and each address has its own set of options. A: IPv6 is taking precedence over IPv4 as it's the newer family, it's generally what you want if your hostname is associated with multiple families. You should be using getaddrinfo for family independent resolution, here is an example, import sys, socket; host = socket.gethostname(); result = socket.getaddrinfo(host, None); print "family:%i socktype:%i proto:%i canonname:%s sockaddr:%s"%result[0]; result = socket.getaddrinfo(host, None, socket.AF_INET); print "family:%i socktype:%i proto:%i canonname:%s sockaddr:%s"%result[0]; result = socket.getaddrinfo(host, None, socket.AF_INET6); print "family:%i socktype:%i proto:%i canonname:%s sockaddr:%s"%result[0]; Which on a dual-stack configured host gives me the following, family:10 socktype:1 proto:6 canonname: sockaddr:('2002:dce8:d28e::31', 0, 0, 0) family:2 socktype:1 proto:6 canonname: sockaddr:('10.6.28.31', 0) family:10 socktype:1 proto:6 canonname: sockaddr:('2002:dce8:d28e::31', 0, 0, 0) A: gethostbyaddr() takes an IP address as a parameter, not a hostname, so I'm surprised it's working at all without throwing an exception. If instead you meant gethostbyname(), then your results are more surprising, since that function claims not to support IPv6. Harley's answer explains how to correctly get your IP address.
Why does gethostbyaddr(gethostname()) return my IPv6 IP?
I'm working on making a simple server application with Python, and I'm trying to get the IP to bind the listening socket to. An example I looked at uses this: HOST = gethostbyaddr(gethostname()) With a little more processing after this, it should give me just the host IP as a string. This should return the IPv4 address. But when I run this code, it returns my IPv6 address. Why does it do this and how can I get my IPv4 address? If it's relevant, I'm using Windows Vista and Python 2.5
[ "Getting your IP address is harder than you might think.\nCheck this answer I gave for the one reliable way I've found.\nHere's what the answer says in case you don't like clicking on things:\nUse the netifaces module. Because networking is complex, using netifaces can be a little tricky, but here's how to do what you want:\n>>> import netifaces\n>>> netifaces.interfaces()\n['lo', 'eth0']\n>>> netifaces.ifaddresses('eth0')\n{17: [{'broadcast': 'ff:ff:ff:ff:ff:ff', 'addr': '00:11:2f:32:63:45'}], 2: [{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}], 10: [{'netmask': 'ffff:ffff:ffff:ffff::', 'addr': 'fe80::211:2fff:fe32:6345%eth0'}]}\n>>> for interface in netifaces.interfaces():\n... print netifaces.ifaddresses(interface)[netifaces.AF_INET]\n...\n[{'peer': '127.0.0.1', 'netmask': '255.0.0.0', 'addr': '127.0.0.1'}]\n[{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}]\n>>> for interface in netifaces.interfaces():\n... for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]:\n... print link['addr']\n...\n127.0.0.1\n10.0.0.2\n\nThis can be made a little more readable like this:\nfrom netifaces import interfaces, ifaddresses, AF_INET\n\ndef ip4_addresses():\n ip_list = []\n for interface in interfaces():\n for link in ifaddresses(interface)[AF_INET]:\n ip_list.append(link['addr'])\n return ip_list\n\nIf you want IPv6 addresses, use AF_INET6 instead of AF_INET. If you're wondering why netifaces uses lists and dictionaries all over the place, it's because a single computer can have multiple NICs, and each NIC can have multiple addresses, and each address has its own set of options.\n", "IPv6 is taking precedence over IPv4 as it's the newer family, it's generally what you want if your hostname is associated with multiple families. You should be using getaddrinfo for family independent resolution, here is an example,\nimport sys, socket;\nhost = socket.gethostname();\nresult = socket.getaddrinfo(host, None);\nprint \"family:%i socktype:%i proto:%i canonname:%s sockaddr:%s\"%result[0];\nresult = socket.getaddrinfo(host, None, socket.AF_INET);\nprint \"family:%i socktype:%i proto:%i canonname:%s sockaddr:%s\"%result[0];\nresult = socket.getaddrinfo(host, None, socket.AF_INET6);\nprint \"family:%i socktype:%i proto:%i canonname:%s sockaddr:%s\"%result[0];\n\nWhich on a dual-stack configured host gives me the following,\nfamily:10 socktype:1 proto:6 canonname: sockaddr:('2002:dce8:d28e::31', 0, 0, 0)\nfamily:2 socktype:1 proto:6 canonname: sockaddr:('10.6.28.31', 0)\nfamily:10 socktype:1 proto:6 canonname: sockaddr:('2002:dce8:d28e::31', 0, 0, 0)\n\n", "gethostbyaddr() takes an IP address as a parameter, not a hostname, so I'm surprised it's working at all without throwing an exception. If instead you meant gethostbyname(), then your results are more surprising, since that function claims not to support IPv6. Harley's answer explains how to correctly get your IP address.\n" ]
[ 11, 5, 2 ]
[]
[]
[ "ip_address", "ipv4", "ipv6", "python", "sockets" ]
stackoverflow_0000415407_ip_address_ipv4_ipv6_python_sockets.txt
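A further workaround, not mentioned in the answers above: connect() a UDP socket toward any routable address and read back the local endpoint. For UDP, connect() transmits nothing; it only asks the OS which local IPv4 it would use, though a usable route must exist:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('192.0.2.1', 80))   # TEST-NET address; no packet is actually sent
print s.getsockname()[0]       # the machine's outward-facing IPv4 address
s.close()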
Q: optional arguments when calling a function without modifying function definition I want to know how to call a function, passing a parameter that it might not be expecting. I faced this problem a few days ago, and found a way around it. But today, I decided I'd see if what I wanted to do was possible. Unfortunately, I don't remember the context which I used it in. So here is a stupid example in which there are plenty of better ways to do this, but just ignore that: def test(func, arg1, arg2): return func(arg1, arg2, flag1=True, flag2=False) #Only pass the flags if the function accepts them. def func1(a, b, flag1, flag2): ret = a if flag1: ret *= b if flag2: ret += b return ret def func2(a, b): return a*b print test(func1, 5, 6) #prints 30 The alternative I came up with looked like this: def test(func, arg1, arg2): numArgs = len(inspect.getargspec(func).args) if numArgs >= 4: return func(arg1, arg2, True, False) elif numArgs == 3: return func(arg1, arg2, True) else: return func(arg1, arg2) print test(func2, 5, 6) #prints 30 or a try..except.. block But there's got to be a better way of doing this without altering func1 and func2, right? (Edit): Making use of the solution provided by Max S, I'm thinking this is the best approach: def callWithOptionalArgs(func, *args): argSpec = inspect.getargspec(func) lenArgSpec = len(argSpec.args or ()) argsToPass = args[:lenArgSpec] #too many args defaults = argSpec.defaults or () lenDefaults = len(defaults) argsToPass += (None, )*(lenArgSpec-len(argsToPass)-lenDefaults) #too few args argsToPass += defaults[len(argsToPass)+len(defaults)-lenArgSpec:] #default args return func(*argsToPass) print callWithOptionalArgs(func1, 5, 6, True) #prints 30 print callWithOptionalArgs(func2, 5, 6, True) #prints 30 A: Inspecting the function is the only way to explicitly differentiate between functions with different numbers of arguments without altering or decorating the originals. The only change I would do to your wrapper is to generalize it for any number of arguments: def padArgsWithTrue(func, *args): passed_args = list(args) num_args = len(inspect.getargspec(func).args) passed_args += [True] * (num_args - len(args)) return func(*passed_args) print padArgsWithTrue(lambda x,y,z,w: (x*y, z, w), 5, 6) EDIT: Note that this does not accommodate functions with variable number of args or keyword args. You'll have to decide on a policy to deal with those before a complete solution could be written. A: If I understand you, you want to be able to call both func1 and func2 using test. def test(func, arg1, arg2): try: return func(arg1, arg2, flag1=True, flag2=False) except TypeError: return func(arg1, arg2) def func1(a, b, flag1, flag2): ret = a if flag1: ret *= b if flag2: ret += b return ret def func2(a, b): return a*b print test(func1, 5, 6) #prints 30 print test(func2, 5, 6) #prints 30
optional arguments when calling a function without modifying function definition
I want to know how to call a function, passing a parameter that it might not be expecting. I faced this problem a few days ago, and found a way around it. But today, I decided I'd see if what I wanted to do was possible. Unfortunately, I don't remember the context which I used it in. So here is a stupid example in which there are plenty of better ways to do this, but just ignore that: def test(func, arg1, arg2): return func(arg1, arg2, flag1=True, flag2=False) #Only pass the flags if the function accepts them. def func1(a, b, flag1, flag2): ret = a if flag1: ret *= b if flag2: ret += b return ret def func2(a, b): return a*b print test(func1, 5, 6) #prints 30 The alternative I came up with looked like this: def test(func, arg1, arg2): numArgs = len(inspect.getargspec(func).args) if numArgs >= 4: return func(arg1, arg2, True, False) elif numArgs == 3: return func(arg1, arg2, True) else: return func(arg1, arg2) print test(func2, 5, 6) #prints 30 or a try..except.. block But there's got to be a better way of doing this without altering func1 and func2, right? (Edit): Making use of the solution provided by Max S, I'm thinking this is the best approach: def callWithOptionalArgs(func, *args): argSpec = inspect.getargspec(func) lenArgSpec = len(argSpec.args or ()) argsToPass = args[:lenArgSpec] #too many args defaults = argSpec.defaults or () lenDefaults = len(defaults) argsToPass += (None, )*(lenArgSpec-len(argsToPass)-lenDefaults) #too few args argsToPass += defaults[len(argsToPass)+len(defaults)-lenArgSpec:] #default args return func(*argsToPass) print callWithOptionalArgs(func1, 5, 6, True) #prints 30 print callWithOptionalArgs(func2, 5, 6, True) #prints 30
[ "Inspecting the function is the only way to explicitly differentiate between functions with different numbers of arguments without altering or decorating the originals. The only change I would do to your wrapper is to generalize it for any number of arguments:\ndef padArgsWithTrue(func, *args):\n passed_args = list(args)\n num_args = len(inspect.getargspec(func).args)\n passed_args += [True] * (num_args - len(args))\n return func(*passed_args)\n\nprint padArgsWithTrue(lambda x,y,z,w: (x*y, z, w), 5, 6)\n\nEDIT: Note that this does not accommodate functions with variable number of args or keyword args. You'll have to decide on a policy to deal with those before a complete solution could be written.\n", "If I understand you, you want to be able to call both func1 and func2 using test.\ndef test(func, arg1, arg2):\n try:\n return func(arg1, arg2, flag1=True, flag2=False)\n except TypeError:\n return func(arg1, arg2)\n\n\ndef func1(a, b, flag1, flag2):\n ret = a\n if flag1:\n ret *= b\n if flag2:\n ret += b\n return ret\n\ndef func2(a, b):\n return a*b\n\nprint test(func1, 5, 6) #prints 30\nprint test(func2, 5, 6) #prints 30\n\n" ]
[ 1, 0 ]
[]
[]
[ "function_calls", "optional_parameters", "python" ]
stackoverflow_0002212185_function_calls_optional_parameters_python.txt
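The same getargspec idea generalizes to keyword arguments, which matches the flag1=True, flag2=False call in the question. A sketch reusing the question's func1 and func2 (the helper name is my own, and functions declaring **kwargs are not handled):

import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    # Pass through only the keyword arguments func actually declares.
    names = inspect.getargspec(func).args
    supported = dict((k, v) for k, v in kwargs.items() if k in names)
    return func(*args, **supported)

print call_with_supported_kwargs(func1, 5, 6, flag1=True, flag2=False)  # 30
print call_with_supported_kwargs(func2, 5, 6, flag1=True, flag2=False)  # 30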
Q: Accessing C header magic numbers/flags with Cython Some standard C libraries that I want to access with Cython have a ton of flags. The Cython docs state that I must replicate the parts of the header I need. Which is fine when it comes to function definitions. They are usually replicated everywhere, docs included. But what about all those magic numbers? If I want to call mmap, I can always find the function definition and paste it into a .pxd file: void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset) But calling it needs a ton of flags like PROT_READ, MAP_ANONYMOUS and so on. I have at least two problems with this: Firstly, it is annoying work to hunt down exactly where those numbers are defined. In fact I'd rather write a .c file and printf the values I need. Is there any better way of finding the value of a given flag such as PROT_READ? Secondly, how stable are these numbers? Having extracted all the values I need and hardcoded them into my Cython source, what are the chances that compiling on a different platform has switched around, let's say PROT_READ and PROT_EXEC? Even if the answer is that there are no good or proper ways to do it, I'd like to hear it. I can always accept that something is cumbersome as long as I know I'm not missing something. A: To use these constants from Cython, you don't need to figure exactly where they came from or what they are any more than you do from C. For example, your .pxd file can look like cdef extern from "foo.h": void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset) cdef int PROT_READ cdef int MAP_ANONYMOUS ... As long as the definitions are (directly or indirectly) included from foo.h, this should work fine. A: There are several possible alternatives: Use the flags from the Python mmap module. simple only works when there are existing Python bindings Use the Python mmap object in the first place, and hand it over to your Cython code even simpler opening might have some Python overhead Use the code generator of ctypeslib some docs on how to extract constants needs gccxml Just copy the numbers. That being said, the numbers are very, very stable. If they'd change, each and every C program using mmap would have to be recompiled, as the flags from the headers are contained in the binary. EDIT: mmap is part of POSIX, but a cursory read hasn't revealed whether the flags have to be the same value on all platforms. A: Write a file foo.c with this as the contents: #include <sys/mman.h> Then run cpp -dM foo.c | grep -v __ | awk '{if ($3) print $2, "=", $3}' > mman.py which will create a Python file that defines all the constants from mman.h. Obviously, you can do that for multiple includes if you want. The resulting file might need a bit of cleaning up, but it'll get you close.
Accessing C header magic numbers/flags with Cython
Some standard C libraries that I want to access with Cython have a ton of flags. The Cython docs state that I must replicate the parts of the header I need. Which is fine when it comes to function definitions. They are usually replicated everywhere, docs included. But what about all those magic numbers? If I want to call mmap, I can always find the function definition and paste it into a .pxd file: void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset) But calling it needs a ton of flags like PROT_READ, MAP_ANONYMOUS and so on. I have at least two problems with this: Firstly, it is annoying work to hunt down exactly where those numbers are defined. In fact I'd rather write a .c file and printf the values I need. Is there any better way of finding the value of a given flag such as PROT_READ? Secondly, how stable are these numbers? Having extracted all the values I need and hardcoded them into my Cython source, what are the chances that compiling on a different platform has switched around, let's say PROT_READ and PROT_EXEC? Even if the answer is that there are no good or proper ways to do it, I'd like to hear it. I can always accept that something is cumbersome as long as I know I'm not missing something.
[ "To use these constants from Cython, you don't need to figure exactly where they came from or what they are any more than you do from C. For example, your .pxd file can look like\ncdef extern from \"foo.h\":\n void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset)\n cdef int PROT_READ\n cdef int MAP_ANONYMOUS\n ...\n\nAs long as the definitions are (directly or indirectly) included from foo.h, this should work fine. \n", "There are several possible alternatives:\n\nUse the flags from the Python mmap module.\n\n\nsimple\nonly works when there are existing Python bindings\n\nUse the Python mmap object in the first place, and hand it over to your Cython code\n\n\neven simpler openening\nmight have some Python overhead\n\nUse the code generator of ctypeslib\n\nsome docs on how to extract constants\nneeds gccxml\n\nJust copy the numbers.\n\nThat being said, the numbers are very, very stable. If they'd change, each and every C program using mmap would have to be recompiled, as the flags from the headers are contained in the binary. \nEDIT: mmap is part of POSIX, but a cursory read hasn't revealed whether the flags have to be the same value on all platforms.\n", "Write a file foo.c with this as the contents:\n#include <sys/mman.h>\n\nThen run\ncpp -dM foo.c | grep -v __ | awk '{if ($3) print $2, \"=\", $3}' > mman.py\n\nwhich will create a python file that defines all the constants from mman.h\nObviously, you can do that for multiple includes if you want.\nThe resulting file might need a bit of cleaning up, but it'll get you close.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "cython", "header_files", "python" ]
stackoverflow_0002206557_cython_header_files_python.txt
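A quick interactive cross-check of the "use the Python mmap module" suggestion above -- on Unix builds the standard library re-exports many of these flags, so values can be eyeballed without writing any C (names vary by platform; MAP_ANONYMOUS, for instance, may be spelled MAP_ANON or be absent):

import mmap

print mmap.PROT_READ    # e.g. 1 on Linux; the value is platform-defined
print getattr(mmap, 'MAP_ANONYMOUS', getattr(mmap, 'MAP_ANON', 'n/a here'))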