| content (string, lengths 85 to 101k) | title (string, lengths 0 to 150) | question (string, lengths 15 to 48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, lengths 35 to 137) |
|---|---|---|---|---|---|---|---|---|
Q:
How can I change '&gt;' to '>' and '>' to '&gt;'?
print u'&lt;'
How can I print <
print '>'
How can I print &gt;
A:
You should use the HTMLParser module to decode HTML:
>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.unescape('alpha &lt; &#946;')
u'alpha < \u03b2'
To escape HTML, the cgi module is fine:
>>> import cgi
>>> cgi.escape(u'<a>bá</a>').encode('ascii', 'xmlcharrefreplace')
'&lt;a&gt;b&#225;&lt;/a&gt;'
|
How can I change '&gt;' to '>' and '>' to '&gt;'?
|
print u'&lt;'
How can I print <
print '>'
How can I print &gt;
|
[
"You should use the HTMLParser module to decode HTML:\n>>> import HTMLParser\n>>> h = HTMLParser.HTMLParser()\n>>> h.unescape('alpha &lt; &#946;')\nu'alpha < \\u03b2'\n\nTo escape HTML, the cgi module is fine:\n>>> import cgi\n>>> cgi.escape(u'<a>bá</a>').encode('ascii', 'xmlcharrefreplace')\n'&lt;a&gt;b&#225;&lt;/a&gt;'\n\n"
] |
[
17
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001977900_python.txt
|
Q:
A multi-part/threaded downloader via python?
I've seen a few threaded downloaders online, and even a few multi-part downloaders (HTTP).
I haven't seen them together as a class/function.
If any of you have a class/function lying around, that I can just drop into any of my applications where I need to grab multiple files, I'd be much obliged.
If there is a library/framework (or a program's back-end) that does this, please direct me towards it.
A:
Threadpool by Christopher Arndt may be what you're looking for. I've used this "easy to use object-oriented thread pool framework" for the exact purpose you describe and it works great. See the usage examples at the bottom on the linked page. And it really is easy to use: just define three functions (one of which is an optional exception handler in place of the default handler) and you are on your way.
from http://www.chrisarndt.de/projects/threadpool/:
Object-oriented, reusable design
Provides callback mechanism to process results as they are returned from the worker threads.
WorkRequest objects wrap the tasks assigned to the worker threads and allow for easy passing of arbitrary data to the callbacks.
The use of the Queue class solves most locking issues.
All worker threads are daemonic, so they exit when the main program exits, no need for joining.
Threads start running as soon as you create them. No need to start or stop them. You can increase or decrease the pool size at any time, superfluous threads will just exit when they finish their current task.
You don't need to keep a reference to a thread after you have assigned the last task to it. You just tell it: "don't come back looking for work, when you're done!"
Threads don't eat up cycles while waiting to be assigned a task, they just block when the task queue is empty (though they wake up every few seconds to check whether they are dismissed).
Also available at http://pypi.python.org/pypi/threadpool, via easy_install, or as a subversion checkout (see the project homepage).
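For a sense of what the multi-part piece involves, here is a minimal standard-library sketch (not the threadpool package itself) that fetches HTTP byte ranges in parallel. The size argument would normally come from a HEAD request's Content-Length, and error handling is omitted:
import threading
import urllib2

def download_range(url, start, end, results, index):
    # Fetch one byte range of the file (requires a server that honours Range)
    req = urllib2.Request(url)
    req.add_header('Range', 'bytes=%d-%d' % (start, end))
    results[index] = urllib2.urlopen(req).read()

def multipart_download(url, size, parts=4):
    # Split [0, size) into `parts` ranges and fetch them concurrently
    chunk = size // parts
    results = [None] * parts
    threads = []
    for i in range(parts):
        start = i * chunk
        end = size - 1 if i == parts - 1 else (i + 1) * chunk - 1
        t = threading.Thread(target=download_range,
                             args=(url, start, end, results, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return ''.join(results)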
|
A multi-part/threaded downloader via python?
|
I've seen a few threaded downloaders online, and even a few multi-part downloaders (HTTP).
I haven't seen them together as a class/function.
If any of you have a class/function lying around, that I can just drop into any of my applications where I need to grab multiple files, I'd be much obliged.
If there is a library/framework (or a program's back-end) that does this, please direct me towards it.
|
[
"Threadpool by Christopher Arndt may be what you're looking for. I've used this \"easy to use object-oriented thread pool framework\" for the exact purpose you describe and it works great. See the usage examples at the bottom on the linked page. And it really is easy to use: just define three functions (one of which is an optional exception handler in place of the default handler) and you are on your way. \nfrom http://www.chrisarndt.de/projects/threadpool/:\n\nObject-oriented, reusable design\nProvides callback mechanism to process results as they are returned from the worker threads.\nWorkRequest objects wrap the tasks assigned to the worker threads and allow for easy passing of arbitrary data to the callbacks.\nThe use of the Queue class solves most locking issues.\nAll worker threads are daemonic, so they exit when the main program exits, no need for joining.\nThreads start running as soon as you create them. No need to start or stop them. You can increase or decrease the pool size at any time, superfluous threads will just exit when they finish their current task.\nYou don't need to keep a reference to a thread after you have assigned the last task to it. You just tell it: \"don't come back looking for work, when you're done!\"\nThreads don't eat up cycles while waiting to be assigned a task, they just block when the task queue is empty (though they wake up every few seconds to check whether they are dismissed).\n\nAlso available at http://pypi.python.org/pypi/threadpool, easy_install, or as a subversion checkout (see project homepage).\n"
] |
[
1
] |
[] |
[] |
[
"download",
"multipart",
"multithreading",
"python",
"urllib"
] |
stackoverflow_0001979435_download_multipart_multithreading_python_urllib.txt
|
Q:
Python web server?
I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files; the traffic is not very high.
The user can configure the tool over HTTP. I don't want a GUI page, just an RPC interface - XML-RPC or something else?
Besides serving pages, the tool needs to run some background jobs, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here.
A:
What about the internal Python web server?
Just type "python web server" in Google, and use the 1st result...
A:
Well, I used web frameworks like TurboGears; my current projects are based on Pylons. The latter is fairly easy to learn, and both come with CherryPy.
To run some background jobs, you could implement that in Pylons too.
Just go to your config/environment.py and see this example:
(I implemented a queue here)
from faxserver.lib.myQueue import start_queue
...
def load_environment(global_conf, app_conf):
    ...
    start_queue()
It depends on your needs whether you simply use CherryPy or start to use something more complete like Pylons.
A:
Use the WSGI reference implementation wsgiref, already provided with Python.
Use REST protocols with JSON (not XML-RPC); it's simpler and faster than XML.
Background jobs are started with subprocess.
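To illustrate the wsgiref suggestion, a minimal sketch of a JSON-over-HTTP endpoint on the bundled reference server (the /config path and the payload are invented for the example):
import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Tiny REST-ish endpoint: report the tool's configuration as JSON
    if environ['PATH_INFO'] == '/config':
        body = json.dumps({'debug': False, 'port': 8000})
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [body]
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return ['not found']

make_server('', 8000, app).serve_forever()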
A:
Why don't you use open source build tools (continuous integration tools) like Cruise? Most of them come with a web server/XML interface and sometimes with fancy reports as well.
|
Python web server?
|
I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files; the traffic is not very high.
The user can configure the tool over HTTP. I don't want a GUI page, just an RPC interface - XML-RPC or something else?
Besides serving pages, the tool needs to run some background jobs, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here.
|
[
"what about the internal python webserver ?\njust type \"python web server\" in google, and host the 1st result...\n",
"Well, I used web frameworks like TurboGears, my current projects are based on Pylons. The last ist fairly easy to learn and both come with CherryPy. \nTo do some background job you could implement that in pylons too.\nJust go to your config/environment.py and see that example:\n(I implemented a queue here)\nfrom faxserver.lib.myQueue import start_queue\n...\ndef load_environment(global_conf, app_conf):\n ...\n start_queue()\n\nIt depends on your need if you simply use CherryPy or start to use something more complete like Pylons.\n",
"Use the WSGI Reference Implementation wsgiref already provided with Python\nUse REST protocols with JSON (not XML-RPC). It's simpler and faster than XML.\nBackground jobs are started with subprocess.\n",
"Why dont you use open source build tools (continuous integration tools) like Cruise. Most of them come with a web server/xml interface and sometimes with fancy reports as well. \n"
] |
[
3,
1,
1,
0
] |
[
"This sounds like a fun project. So, why don't write your own HTTP server? Its not so complicated after all, HTTP is a well-known and easy to implement protocol and you'll gain a lot of new knowledge!\nCheck documentation or manual pages (whatever you prefer) of socket(), bind(), listen(), accept() and so on.\n"
] |
[
-3
] |
[
"cherrypy",
"python"
] |
stackoverflow_0001978791_cherrypy_python.txt
|
Q:
How can I access an uploaded file in universal-newline mode?
I am working with a file uploaded using Django's forms.FileField. This returns an object of type InMemoryUploadedFile.
I need to access this file in universal-newline mode. Any ideas on how to do this without saving and then reopening the file?
Thanks
A:
If you are using Python 2.6 or higher, you can use the io.StringIO class after having read your file into memory (using the read() method). Example:
>>> import io
>>> s = u"a\r\nb\nc\rd"
>>> sio = io.StringIO(s, newline=None)
>>> sio.readlines()
[u'a\n', u'b\n', u'c\n', u'd']
To actually use this in your django view, you may need to convert the input file data to unicode:
stream = io.StringIO(unicode(request.FILES['foo'].read()), newline=None)
|
How can I access an uploaded file in universal-newline mode?
|
I am working with a file uploaded using Django's forms.FileField. This returns an object of type InMemoryUploadedFile.
I need to access this file in universal-newline mode. Any ideas on how to do this without saving and then reopening the file?
Thanks
|
[
"If you are using Python 2.6 or higher, you can use the io.StringIO class after having read your file into memory (using the read() method). Example:\n>>> import io\n>>> s = u\"a\\r\\nb\\nc\\rd\"\n>>> sio = io.StringIO(s, newline=None)\n>>> sio.readlines()\n[u'a\\n', u'b\\n', u'c\\n', u'd']\n\nTo actually use this in your django view, you may need to convert the input file data to unicode:\nstream = io.StringIO(unicode(request.FILES['foo'].read()), newline=None)\n\n"
] |
[
18
] |
[] |
[] |
[
"django",
"django_forms",
"python"
] |
stackoverflow_0001875956_django_django_forms_python.txt
|
Q:
Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx)
I have a web project where I must import text and images from a user-supplied document, and one of the possible formats is Microsoft Office 2007. There's also a need to generate documents in this format.
The server runs CentOS 5.2 and has PHP/Perl/Python installed. I can execute local binaries and shell scripts if I must. We use Apache 2.2 but will be switching over to Nginx once it goes live.
What are my options? Anyone had experience with this?
A:
The Office 2007 file formats are open and well documented. Roughly speaking, all of the new file formats ending in "x" are zip compressed XML documents. For example:
To open a Word 2007 XML file:
1. Create a temporary folder in which to store the file and its parts.
2. Save a Word 2007 document, containing text, pictures, and other elements, as a .docx file.
3. Add a .zip extension to the end of the file name.
4. Double-click the file. It will open in the ZIP application. You can see the parts that comprise the file.
5. Extract the parts to the folder that you created previously.
The other file formats are roughly similar. I don't know of any open source libraries for interacting with them as yet - but depending on your exact requirements, it doesn't look too difficult to read and write simple documents. Certainly it should be a lot easier than with the older formats.
If you need to read the older formats, OpenOffice has an API and can read and write Office 2003 and older documents with more or less success.
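As a quick illustration of the "zip of XML parts" point, a sketch that pulls the visible text out of a .docx using only the standard library (the file name is hypothetical):
import zipfile
from xml.etree import ElementTree

W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'

# A .docx is a zip archive; the main body text lives in word/document.xml
xml = zipfile.ZipFile('example.docx').read('word/document.xml')
tree = ElementTree.fromstring(xml)

# Print the text of every <w:t> (text run) element
for node in tree.getiterator(W + 't'):
    print node.text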
A:
The python docx module can generate formatted Microsoft Office docx files from pure Python. Out of the box, it does headers, paragraphs, tables, and bullets, but the makeelement() function can be extended to do arbitrary elements like images.
from docx import *
document = newdocument()
# This location is where most document content lives
docbody = document.xpath('/w:document/w:body', namespaces=wordnamespaces)[0]
# Append two headings and a paragraph
docbody.append(heading('Heading', 1))
docbody.append(heading('Subheading', 2))
docbody.append(paragraph('Some text'))
A:
I have successfully used the OpenXML Format SDK in a project to modify an Excel spreadsheet via code. This would require .NET and I'm not sure about how well it would work under Mono.
A:
You can probably check the code for Sphider. It indexes docs and pdfs, so I'm sure it can read them. It might also lead you in the right direction for other Office formats.
|
Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx)
|
I have a web project where I must import text and images from a user-supplied document, and one of the possible formats is Microsoft Office 2007. There's also a need to generate documents in this format.
The server runs CentOS 5.2 and has PHP/Perl/Python installed. I can execute local binaries and shell scripts if I must. We use Apache 2.2 but will be switching over to Nginx once it goes live.
What are my options? Anyone had experience with this?
|
[
"The Office 2007 file formats are open and well documented. Roughly speaking, all of the new file formats ending in \"x\" are zip compressed XML documents. For example:\n\nTo open a Word 2007 XML file Create a\n temporary folder in which to store the\n file and its parts.\nSave a Word 2007 document, containing\n text, pictures, and other elements, as\n a .docx file.\nAdd a .zip extension to the end of the\n file name.\nDouble-click the file. It will open in\n the ZIP application. You can see the\n parts that comprise the file.\nExtract the parts to the folder that\n you created previously.\n\nThe other file formats are roughly similar. I don't know of any open source libraries for interacting with them as yet - but depending on your exact requirements, it doesn't look too difficult to read and write simple documents. Certainly it should be a lot easier than with the older formats.\nIf you need to read the older formats, OpenOffice has an API and can read and write Office 2003 and older documents with more or less success.\n",
"The python docx module can generate formatted Microsoft office docx files from pure Python. Out of the box, it does headers, paragraphs, tables, and bullets, but the makeelement() module can be extended to do arbitrary elements like images.\nfrom docx import *\ndocument = newdocument()\n\n# This location is where most document content lives \ndocbody = document.xpath('/w:document/w:body',namespaces=wordnamespaces)[0]\n\n# Append two headings\ndocbody.append(heading('Heading',1) ) \ndocbody.append(heading('Subheading',2))\ndocbody.append(paragraph('Some text')\n\n",
"I have successfully used the OpenXML Format SDK in a project to modify an Excel spreadsheet via code. This would require .NET and I'm not sure about how well it would work under Mono.\n",
"You can probably check the code for Sphider. They docs and pdfs, so I'm sure they can read them. Might also lead you in the right direction for other Office formats.\n"
] |
[
18,
6,
3,
2
] |
[] |
[] |
[
"office_2007",
"parsing",
"perl",
"php",
"python"
] |
stackoverflow_0000173246_office_2007_parsing_perl_php_python.txt
|
Q:
django query question
Assume I have such simple model:
class Foo(models.Model):
    name = models.CharField(max_length=25)
    b_date = models.DateField()
Now assume that from my query Foo.objects.all(), I retrieve something like this:
[
{'name': 'zed', 'b_date': '2009-12-23'},
{'name': 'amy', 'b_date': '2009-12-6'},
{'name': 'joe', 'b_date': '2009-12-26'},
{'name': 'wayne', 'b_date': '2009-12-14'},
{'name': 'chris', 'b_date': '2009-12-9'},
]
Now I need to get the earliest date from b_date (which is '2009-12-6' in our case) and the latest one ('2009-12-23' for the example), and generate a list that starts from the beginning and iterates through to the end, such as:
what_I_want = ['2009-12-6','2009-12-7','2009-12-8' .............. '2009-12-22','2009-12-23']
How would you solve this in the most efficient way? Doing this either in the view or the template would be appreciated.
Regards
A:
Foo.objects.order_by('b_date').values_list('b_date', flat=True)
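If what you need is the gap-free list of dates rather than the raw values, one way (a sketch building on the queryset above) is to take the first and last dates and fill in the range in the view:
import datetime

dates = list(Foo.objects.order_by('b_date').values_list('b_date', flat=True))
first, last = dates[0], dates[-1]

# Walk day by day from the earliest b_date to the latest
what_I_want = []
d = first
while d <= last:
    what_I_want.append(d)
    d += datetime.timedelta(days=1)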
|
django query question
|
Assume I have such simple model:
class Foo(models.Model):
    name = models.CharField(max_length=25)
    b_date = models.DateField()
Now assume that from my query Foo.objects.all(), I retrieve something like this:
[
{'name': 'zed', 'b_date': '2009-12-23'},
{'name': 'amy', 'b_date': '2009-12-6'},
{'name': 'joe', 'b_date': '2009-12-26'},
{'name': 'wayne', 'b_date': '2009-12-14'},
{'name': 'chris', 'b_date': '2009-12-9'},
]
Now I need to get the earliest date from b_date (which is '2009-12-6' in our case) and the latest one ('2009-12-23' for the example), and generate a list that starts from the beginning and iterates through to the end, such as:
what_I_want = ['2009-12-6','2009-12-7','2009-12-8' .............. '2009-12-22','2009-12-23']
How would you solve this in the most efficient way? Doing this either in the view or the template would be appreciated.
Regards
|
[
"Foo.objects.order_by('b_date').values_list('b_date', flat=True)\n\n"
] |
[
7
] |
[] |
[] |
[
"django",
"django_templates",
"django_views",
"python"
] |
stackoverflow_0001979951_django_django_templates_django_views_python.txt
|
Q:
In Python, use "dict" with keywords or anonymous dictionaries?
Say you want to pass a dictionary of values to a function, or otherwise want to work with a short-lived dictionary that won't be reused. There are two easy ways to do this:
Use the dict() function to create a dictionary:
foo.update(dict(bar=42, baz='qux'))
Use an anonymous dictionary:
foo.update({'bar': 42, 'baz': 'qux'})
Which do you prefer? Are there reasons other than personal style for choosing one over the other?
A:
I prefer the anonymous dict option.
I don't like the dict() option for the same reason I don't like:
i = int("1")
With the dict() option you're needlessly calling a function which is adding overhead you don't need:
>>> from timeit import Timer
>>> Timer("mydict = {'a' : 1, 'b' : 2, 'c' : 'three'}").timeit()
0.91826782454194589
>>> Timer("mydict = dict(a=1, b=2, c='three')").timeit()
1.9494664824719337
A:
I think in this specific case I'd probably prefer this:
foo.update(bar=42, baz='qux')
In the more general case, I often prefer the literal syntax (what you call an anonymous dictionary, though it's just as anonymous to use {} as it is to use dict()). I think that speaks more clearly to the maintenance programmer (often me), partly because it stands out so nicely with syntax-highlighting text editors. It also ensures that when I have to add a key which is not representable as a Python name, like something with spaces, then I don't have to go and rewrite the whole line.
A:
My answer will largely talk about the design of APIs to use dicts vs. keyword args.
But it's also applicable the individual use of {...} vs. dict(...).
The bottom line: be consistent. If most of your code will refer to 'bar' as a string - keep it a string in {...}; if you normally refer to it as the identifier bar - use dict(bar=...).
Constraints
Before talking about style, note that the keyword bar=42 syntax works only for strings and only if they are valid identifiers. If you need arbitrary punctuation, spaces, unicode - or even non-string keys - the question is over => only the {'bar': 42} syntax will work.
This also means that when designing an API, you must allow full dicts, and not only keyword arguments - unless you are sure that only strings, and only valid identifiers are allowed.
(Technically, update(**{'spaces & punctuation': 42}) works. But it's ugly. And numbers/tuples/unicode won't work.)
Note that dict() and dict.update() combine both APIs: you can pass a single dict, you can pass keyword args, and you can even pass both (the latter, I think, is undocumented). So if you want to be nice, allow both:
def update(self, *args, **kwargs):
"""Callable as dict() - with either a mapping or keyword args:
.update(mapping)
.update(**kwargs)
"""
mapping = dict(*args, **kwargs)
# do something with `mapping`...
This is especially recommended for a method named .update(), to follow the least-surprise rule.
Style
I find it nice to distinguish internal from external strings. By internal I mean arbitrary identifiers denoting something only inside the program (variable names, object attributes) or possibly between several programs (DB columns, XML attribute names). They are normally only visible to developers. External strings are intended for human consumption.
[Some Python coders (me included) observe the convention of using 'single_quotes' for internal strings vs. "Double quotes" for external strings. This is definitely not universal, though.]
Your question is about the proper uses of barewords (a Perl term) - syntax sugar allowing you to omit the quotes altogether on internal strings. Some languages (notably LISP) allow them widely; the Pythonic opportunities to employ barewords are attribute access - foo.bar - and keyword arguments - update(bar=...).
The stylistic dilemma here is "Are your strings internal enough to look like identifiers?"
If the keys are external strings, the answer is definitely NO:
foo.update({"The answer to the big question": 42})
# which you later might access as:
foo["The answer to the big question"]
If the keys refer to Python identifiers (e.g. object attributes), then I'd say YES:
foo.update(dict(bar=42))
# As others mentioned, in that case the cleaner API (if possible)
# would be to receive them as **kwargs directly:
foo.update(bar=42)
# which you later might access as:
foo.bar
If the keys refer to identifiers outside your Python program, such as XML attr names or DB column names, using barewords may be a good or a bad choice - but it's best to choose one style and be consistent.
Consistency is good because there is a psychological barrier between identifiers and strings. It exists because strings rarely cross it - only when using introspection to do meta-programming. And syntax highlighting only reinforces it. So if you read the code and see a green 'bar' in one place and a black foo.bar in a second place, you won't immediately make a connection.
Another important rule of thumb is: barewords are good iff they are (mostly) fixed. E.g. if you mostly refer to fixed DB columns in your code, then using barewords to refer to them might be nice; but if half the time the column is a parameter, then it's better to use strings.
This is because parameter/constant is the most important difference people associate with the identifiers/strings barrier. The difference between column (variable) and "person" (constant) is the most readable way to convey this difference. Making them both identifiers would blur the distinction, as well as backfiring syntactically - you'd need to use **{column: value} and getattr(obj, column) etc. a lot.
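To make that last point concrete, a tiny (hypothetical) contrast between a fixed key and a parameterized one, reusing the answer's foo object:
# Fixed key: the bareword form reads naturally
foo.update(bar=42)

# Key held in a variable: you are forced back to dict and getattr syntax
column = 'bar'
foo.update(**{column: 42})
value = getattr(foo, column)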
A:
I prefer your "anonymous dictionary" method and I think this is purely a personal style thing. I just find the latter version more readable but it's also what I'm used to seeing.
A:
The dict() method has the added overhead of a function call.
>>> import timeit, dis
>>> timeit.Timer("{'bar': 42, 'baz': 'qux'}").repeat()
[0.59602910425766709, 0.60173793037941437, 0.59139834811408321]
>>> timeit.Timer("dict(bar=42, baz='qux')").repeat()
[0.98166498814792646, 0.97745355904172015, 0.99231773870701545]
>>> dis.dis(compile("{'bar': 42, 'baz': 'qux'}","","exec"))
1 0 BUILD_MAP 0
3 DUP_TOP
4 LOAD_CONST 0 (42)
7 ROT_TWO
8 LOAD_CONST 1 ('bar')
11 STORE_SUBSCR
12 DUP_TOP
13 LOAD_CONST 2 ('qux')
16 ROT_TWO
17 LOAD_CONST 3 ('baz')
20 STORE_SUBSCR
21 POP_TOP
22 LOAD_CONST 4 (None)
25 RETURN_VALUE
>>> dis.dis(compile("dict(bar=42, baz='qux')","","exec"))
1 0 LOAD_NAME 0 (dict)
3 LOAD_CONST 0 ('bar')
6 LOAD_CONST 1 (42)
9 LOAD_CONST 2 ('baz')
12 LOAD_CONST 3 ('qux')
15 CALL_FUNCTION 512
18 POP_TOP
19 LOAD_CONST 4 (None)
22 RETURN_VALUE
A:
I prefer the anonymous dictionary, too, just out of personal style.
A:
If I have a lot of arguments, sometimes it is nice to omit the quotes on the keys:
DoSomething(dict(
    Name = 'Joe',
    Age = 20,
    Gender = 'Male',
))
This is a very subjective question, BTW. :)
A:
I think the dict() function is really there for when you're creating a dict from something else, maybe something that easily produces the necessary keyword args. The anonymous method is best for 'dict literals' in the same way you'd use "" for strings, not str().
|
In Python, use "dict" with keywords or anonymous dictionaries?
|
Say you want to pass a dictionary of values to a function, or otherwise want to work with a short-lived dictionary that won't be reused. There are two easy ways to do this:
Use the dict() function to create a dictionary:
foo.update(dict(bar=42, baz='qux'))
Use an anonymous dictionary:
foo.update({'bar': 42, 'baz': 'qux'})
Which do you prefer? Are there reasons other than personal style for choosing one over the other?
|
[
"I prefer the anonymous dict option.\nI don't like the dict() option for the same reason I don't like:\n i = int(\"1\")\n\nWith the dict() option you're needlessly calling a function which is adding overhead you don't need:\n>>> from timeit import Timer\n>>> Timer(\"mydict = {'a' : 1, 'b' : 2, 'c' : 'three'}\").timeit()\n0.91826782454194589\n>>> Timer(\"mydict = dict(a=1, b=2, c='three')\").timeit()\n1.9494664824719337\n\n",
"I think in this specific case I'd probably prefer this:\nfoo.update(bar=42, baz='qux')\n\nIn the more general case, I often prefer the literal syntax (what you call an anonymous dictionary, though it's just as anonymous to use {} as it is to use dict()). I think that speaks more clearly to the maintenance programmer (often me), partly because it stands out so nicely with syntax-highlighting text editors. It also ensures that when I have to add a key which is not representable as a Python name, like something with spaces, then I don't have to go and rewrite the whole line.\n",
"My answer will largely talk about the design of APIs to use dicts vs. keyword args.\nBut it's also applicable the individual use of {...} vs. dict(...).\nThe bottom line: be consistent. If most of your code will refer to 'bar' as a string - keep it a string in {...}; if you normally refer to it the identifier bar - use dict(bar=...).\nConstraints\nBefore talking about style, note that the keyword bar=42 syntax works only for strings and only if they are valid identifiers. If you need arbitrary punctuation, spaces, unicode - or even non-string keys - the question is over => only the {'bar': 42} syntax will work.\nThis also means that when designing an API, you must allow full dicts, and not only keyword arguments - unless you are sure that only strings, and only valid identifiers are allowed.\n(Technically, update(**{'spaces & punctuation': 42}) works. But it's ugly. And numbers/tuples/unicode won't work.)\nNote that dict() and dict.update() combine both APIs: you can pass a single dict, you can pass keyword args, and you can even pass both (the later I think is undocumented). So if you want to be nice, allow both:\ndef update(self, *args, **kwargs):\n \"\"\"Callable as dict() - with either a mapping or keyword args:\n\n .update(mapping)\n .update(**kwargs)\n \"\"\"\n mapping = dict(*args, **kwargs)\n # do something with `mapping`...\n\nThis is especially recommended for a method named .update(), to follow the least-surprise rule.\nStyle\nI find it nice to distinguish internal from external strings. By internal I mean arbitrary identifiers denoting something only inside the program (variable names, object attributes) or possibly between several programs (DB columns, XML attribute names). They are normally only visible to developers. External strings are intended for human consumption.\n[Some Python coders (me included) observe the convention of using 'single_quotes' for internal strings vs. \"Double quotes\" for external strings. This is definitely not universal, though.]\nYour question is about the proper uses of barewords (Perl term) - syntax sugars allowing to omit the quotes quotes altogether on internal strings. Some languages (notably LISP) allow them widely; the Pythonic opportunities to employ barewords are attribute access - foo.bar and keyword arguments - update(bar=...).\nThe stylistic dilemma here is \"Are your strings internal enough to look like identifiers?\"\nIf the keys are external strings, the answer is definitely NO:\nfoo.update({\"The answer to the big question\": 42})\n\n# which you later might access as:\nfoo[\"The answer to the big question\"]\n\nIf the keys refer to Python identifiers (e.g. object attributes), then I'd say YES:\nfoo.update(dict(bar=42))\n# As others mentioned, in that case the cleaner API (if possible)\n# would be to receive them as **kwargs directly:\nfoo.update(bar=42)\n\n# which you later might access as:\nfoo.bar\n\nIf the keys refer to identifiers outside your Python program, such as XML attr names, or DB column names, using barewords may be good or bad choice - but you it's best to choose one style and be consistent.\nConsistency is good because there is a psychological barrier between identifiers and strings. It exists because strings rarely cross it - only when using introspection to do meta-programming. And syntax highlighting only reinforces it. 
So if you read the code and see a green 'bar' in one place and a black foo.bar in a second place, you won't immediately make a connection.\nAnother important rule of thumb is: Barewords are good iff they are (mostly) fixed. E.g. if you refer to fixed DB columns mostly in your code, than using barewords to refer to them might be nice; but if half the time the column is a parameter, then it's better to use strings.\nThis is because parameter/constant is the most important difference people associate with the identifiers/strings barrier. The difference between column (variable) and \"person\" (constant) is the most readable way to convey this difference. Making them both identifiers would blur the distinction, as well as backfiring syntactically - you'd need to use **{column: value} and getattr(obj, column) etc. a lot.\n",
"I prefer your \"anonymous dictionary\" method and I think this is purely a personal style thing. I just find the latter version more readable but it's also what I'm used to seeing. \n",
"The dict() method has the added overhead of a function call.\n>>>import timeit,dis\n>>> timeit.Timer(\"{'bar': 42, 'baz': 'qux'}\").repeat()\n[0.59602910425766709, 0.60173793037941437, 0.59139834811408321]\n>>> timeit.Timer(\"dict(bar=42, baz='qux')\").repeat()\n[0.98166498814792646, 0.97745355904172015, 0.99231773870701545]\n\n>>> dis.dis(compile(\"{'bar': 42, 'baz': 'qux'}\",\"\",\"exec\"))\n 1 0 BUILD_MAP 0\n 3 DUP_TOP\n 4 LOAD_CONST 0 (42)\n 7 ROT_TWO\n 8 LOAD_CONST 1 ('bar')\n 11 STORE_SUBSCR\n 12 DUP_TOP\n 13 LOAD_CONST 2 ('qux')\n 16 ROT_TWO\n 17 LOAD_CONST 3 ('baz')\n 20 STORE_SUBSCR\n 21 POP_TOP\n 22 LOAD_CONST 4 (None)\n 25 RETURN_VALUE\n\n>>> dis.dis(compile(\"dict(bar=42, baz='qux')\",\"\",\"exec\"))\n 1 0 LOAD_NAME 0 (dict)\n 3 LOAD_CONST 0 ('bar')\n 6 LOAD_CONST 1 (42)\n 9 LOAD_CONST 2 ('baz')\n 12 LOAD_CONST 3 ('qux')\n 15 CALL_FUNCTION 512\n 18 POP_TOP\n 19 LOAD_CONST 4 (None)\n 22 RETURN_VALUE\n\n",
"I prefer the anonymous dictionary, too, just out of personal style.\n",
"If I have a lot of arguments, sometimes it is nice to omit the quotes on the keys:\nDoSomething(dict(\n Name = 'Joe',\n Age = 20,\n Gender = 'Male',\n ))\n\nThis is a very subjective question, BTW. :)\n",
"I think the dict() function is really there for when you're creating a dict from something else, maybe something that easily produces the necessary keyword args. The anonymous method is best for 'dict literals' in the same way you'd use \"\" for strings, not str().\n"
] |
[
20,
6,
4,
2,
2,
1,
1,
1
] |
[
"Actually, if the receiving function will only receive a dictionary with not pre-dertermined keywords, I normally use the ** passing convention.\nIn this example, that would be:\nclass Foo(object):\n def update(self, **param_dict):\n for key in param_dict:\n ....\nfoo = Foo()\n....\nfoo.update(bar=42, baz='qux')\n\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0001929274_python.txt
|
Q:
Web2py Import Once per Session
I'm using Web2Py and I want to import my program just once per session, not every time the page is loaded. Is this possible? Such as "import Client" being used on the page but only imported once per session.
A:
In web2py your models and controllers are executed, not imported. They are executed every time a request arrives. If you press the button [compile] in admin, they will be bytecode compiled and some other optimizations are performed.
If your app (in models and controllers) does "import somemodule", then the import statement is executed at every request but "somemodule" is actually imported only the first time it is executed, as you asked.
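That caching is ordinary Python behaviour - a module's top-level code runs only on the first import, and later imports are cheap lookups in sys.modules. A quick demonstration (assuming a Client module is on the path, as in your example):
import sys

import Client   # first import: executes Client's top-level code
import Client   # second import: just a sys.modules lookup, nothing re-runs

print 'Client' in sys.modules   # True: the module object is cached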
|
Web2py Import Once per Session
|
I'm using Web2Py and I want to import my program just once per session, not every time the page is loaded. Is this possible? Such as "import Client" being used on the page but only imported once per session.
|
[
"In web2py your models and controllers are executed, not imported. They are executed every time a request arrives. If you press the button [compile] in admin, they will be bytecode compiled and some other optimizations are performs.\nIf your app (in models and controllers) does \"import somemodule\", then the import statement is executed at every request but \"somemodule\" is actually imported only the first time it is executed, as you asked.\n"
] |
[
6
] |
[] |
[] |
[
"python",
"web2py"
] |
stackoverflow_0001978426_python_web2py.txt
|
Q:
How to render a doctype with Python's xml.dom.minidom?
I tried:
document.doctype = xml.dom.minidom.DocumentType('html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd"')
There is no doctype in the output. How to fix without inserting it by hand?
A:
You shouldn't instantiate classes from minidom directly. It's not a supported part of the API, the ownerDocuments won't tie up and you can get some strange misbehaviours. Instead use the proper DOM Level 2 Core methods:
>>> imp= minidom.getDOMImplementation('')
>>> dt= imp.createDocumentType('html', '-//W3C//DTD XHTML 1.0 Strict//EN', 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd')
(‘DTD/xhtml1-strict.dtd’ is a commonly-used but wrong SystemId. That relative URL would only be valid inside the xhtml1 folder at w3.org.)
Now you've got a DocumentType node, you can add it to a document. According to the standard, the only guaranteed way of doing this is at document creation time:
>>> doc= imp.createDocument('http://www.w3.org/1999/xhtml', 'html', dt)
>>> print doc.toxml()
<?xml version="1.0" ?><!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'><html/>
If you want to change the doctype of an existing document, that's more trouble. The DOM standard doesn't require that DocumentType nodes with no ownerDocument be insertable into a document. However some DOMs allow it, eg. pxdom. minidom kind of allows it:
>>> doc= minidom.parseString('<html xmlns="http://www.w3.org/1999/xhtml"><head/><body/></html>')
>>> dt= minidom.getDOMImplementation('').createDocumentType('html', '-//W3C//DTD XHTML 1.0 Strict//EN', 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd')
>>> doc.insertBefore(dt, doc.documentElement)
<xml.dom.minidom.DocumentType instance>
>>> print doc.toxml()
<?xml version="1.0" ?><!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'><html xmlns="http://www.w3.org/1999/xhtml"><head/><body/></html>
but with bugs:
>>> doc.doctype
# None
>>> dt.ownerDocument
# None
which may or may not matter to you.
Technically, the only reliable way per the standard to set a doctype on an existing document is to create a new document and import the whole of the old document into it!
def setDoctype(document, doctype):
    imp= document.implementation
    newdocument= imp.createDocument(doctype.namespaceURI, doctype.name, doctype)
    newdocument.xmlVersion= document.xmlVersion
    refel= newdocument.documentElement
    for child in document.childNodes:
        if child.nodeType==child.ELEMENT_NODE:
            newdocument.replaceChild(
                newdocument.importNode(child, True), newdocument.documentElement
            )
            refel= None
        elif child.nodeType!=child.DOCUMENT_TYPE_NODE:
            newdocument.insertBefore(newdocument.importNode(child, True), refel)
    return newdocument
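A usage sketch for the function above, reusing the nodes from the earlier snippets (the printed output is what I'd expect, not verified against every minidom version):
from xml.dom import minidom

doc = minidom.parseString(
    '<html xmlns="http://www.w3.org/1999/xhtml"><head/><body/></html>')
dt = minidom.getDOMImplementation('').createDocumentType(
    'html', '-//W3C//DTD XHTML 1.0 Strict//EN',
    'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd')

# Build a fresh document that properly owns the doctype
doc = setDoctype(doc, dt)
print doc.doctype.publicId   # -//W3C//DTD XHTML 1.0 Strict//EN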
|
How to render a doctype with Python's xml.dom.minidom?
|
I tried:
document.doctype = xml.dom.minidom.DocumentType('html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd"')
There is no doctype in the output. How to fix without inserting it by hand?
|
[
"You shouldn't instantiate classes from minidom directly. It's not a supported part of the API, the ownerDocuments won't tie up and you can get some strange misbehaviours. Instead use the proper DOM Level 2 Core methods:\n>>> imp= minidom.getDOMImplementation('')\n>>> dt= imp.createDocumentType('html', '-//W3C//DTD XHTML 1.0 Strict//EN', 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd')\n\n(‘DTD/xhtml1-strict.dtd’ is a commonly-used but wrong SystemId. That relative URL would only be valid inside the xhtml1 folder at w3.org.)\nNow you've got a DocumentType node, you can add it to a document. According to the standard, the only guaranteed way of doing this is at document creation time:\n>>> doc= imp.createDocument('http://www.w3.org/1999/xhtml', 'html', dt)\n>>> print doc.toxml()\n<?xml version=\"1.0\" ?><!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'><html/>\n\nIf you want to change the doctype of an existing document, that's more trouble. The DOM standard doesn't require that DocumentType nodes with no ownerDocument be insertable into a document. However some DOMs allow it, eg. pxdom. minidom kind of allows it:\n>>> doc= minidom.parseString('<html xmlns=\"http://www.w3.org/1999/xhtml\"><head/><body/></html>')\n>>> dt= minidom.getDOMImplementation('').createDocumentType('html', '-//W3C//DTD XHTML 1.0 Strict//EN', 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd')\n>>> doc.insertBefore(dt, doc.documentElement)\n<xml.dom.minidom.DocumentType instance>\n>>> print doc.toxml()\n<?xml version=\"1.0\" ?><!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Strict//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd'><html xmlns=\"http://www.w3.org/1999/xhtml\"><head/><body/></html>\n\nbut with bugs:\n>>> doc.doctype\n# None\n>>> dt.ownerDocument\n# None\n\nwhich may or may not matter to you.\nTechnically, the only reliable way per the standard to set a doctype on an existing document is to create a new document and import the whole of the old document into it!\ndef setDoctype(document, doctype):\n imp= document.implementation\n newdocument= imp.createDocument(doctype.namespaceURI, doctype.name, doctype)\n newdocument.xmlVersion= document.xmlVersion\n refel= newdocument.documentElement\n for child in document.childNodes:\n if child.nodeType==child.ELEMENT_NODE:\n newdocument.replaceChild(\n newdocument.importNode(child, True), newdocument.documentElement\n )\n refel= None\n elif child.nodeType!=child.DOCUMENT_TYPE_NODE:\n newdocument.insertBefore(newdocument.importNode(child, True), refel)\n return newdocument\n\n"
] |
[
11
] |
[] |
[] |
[
"doctype",
"python",
"xml"
] |
stackoverflow_0001980380_doctype_python_xml.txt
|
Q:
Django template question
How can I achieve this using the Django template system:
Say I have 2 variables passed to the template system:
days=[1,2,3,4,5]
items=[ {'name': 'apple', 'day': 3}, {'name': 'orange', 'day': 5} ]
I want to have such output as a table:
1 2 3 4 5
apple n n y n n
orange n n n n y
As you can notice, it gives "n" to non-matching ones and "y" to matching ones.
A:
Why don't you define this logic in the django view, and then simply pass arrays of Ys and Ns to the template?
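A sketch of that view-side approach (the view and template names are invented): precompute one row of 'y'/'n' flags per item and hand the template plain lists:
from django.shortcuts import render_to_response

def item_table(request):
    days = [1, 2, 3, 4, 5]
    items = [{'name': 'apple', 'day': 3}, {'name': 'orange', 'day': 5}]

    # One (name, ['n', 'n', 'y', 'n', 'n']) pair per item
    rows = [(item['name'],
             ['y' if day == item['day'] else 'n' for day in days])
            for item in items]

    return render_to_response('table.html', {'days': days, 'rows': rows})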
A:
Here's what Ignacio meant. That said, I probably agree with Daniel that you should do this in the view.
<table>
{% for item in items %}
  <tr>
    <td>{{ item.name }}</td>
    {% for dday in days %}
    <td>
      {% ifequal dday item.day %}y{% else %}n{% endifequal %}
    </td>
    {% endfor %}
  </tr>
{% endfor %}
</table>
I've called the days loop variable 'dday' to make it clear that the lookup item.day here is actually getting item['day'].
A:
Two loops. The outer loop is through items, the inner through days. Test if outer[day] is equal to inner, and output y if so and n if not.
|
Django template question
|
How can I achieve this using the Django template system:
Say I have 2 variables passed to the template system:
days=[1,2,3,4,5]
items=[ {'name': 'apple', 'day': 3}, {'name': 'orange', 'day': 5} ]
I want to have such output as a table:
1 2 3 4 5
apple n n y n n
orange n n n n y
As you can notice, it gives "n" to non-matching ones and "y" to matching ones.
|
[
"Why don't you define this logic in the django view, and then simply pass arrays of Ys and Ns to the template?\n",
"Here's what Ignacio meant. That said, I probably agree with Daniel that you should do this in the view.\n<table>\n{% for item in items %}\n <tr>\n <td>{% item.name %}</td>\n {% for dday in days %}\n <td>\n {% ifequal dday item.day %}y{% else %}n{% endifequal %}\n </td>\n {% endfor %}\n </tr>\n{% endfor %}\n</table>\n\nI've called the days loop variable 'dday' to make it clear that the lookup item.day here is actually getting item['day'].\n",
"Two loops. The outer loop is through items, the inner through days. Test if outer[day] is equal to inner, and output y if so and n if not.\n"
] |
[
6,
6,
2
] |
[] |
[] |
[
"django",
"django_templates",
"python"
] |
stackoverflow_0001980600_django_django_templates_python.txt
|
Q:
Is there any reason to use threading.Lock over multiprocessing.Lock?
If a software project supports a version of Python that multiprocessing has been backported to, is there any reason to use threading.Lock over multiprocessing.Lock? Would a multiprocessing lock not be thread safe as well?
For that matter, is there a reason to use any synchronization primitives from threading that are also in multiprocessing?
A:
The threading module's synchronization primitives are lighter and faster than multiprocessing's, due to not having to deal with shared semaphores, etc. If you are using threads, use threading's locks; processes should use multiprocessing's locks.
A:
I would expect the multi-threading synchronization primitives to be quite a bit faster, as they can use a shared memory area easily. But I suppose you will have to perform speed tests to be sure of it. Also, you might have side-effects that are quite unwanted (and unspecified in the doc).
For example, a process-wide lock could very well block all threads of the process. And if it doesn't, releasing a lock might not wake up the threads of the process.
In short, if you want your code to work for sure, you should use the thread-synchronization primitives if you are using threads and the process-synchronization primitives if you are using processes. Otherwise, it might work on your platform only, or even just with your specific version of Python.
A:
The multiprocessing and threading packages have slightly different aims, though both are concurrency related. threading coordinates threads within one process, while multiprocessing provides a thread-like interface for coordinating multiple processes.
If your application doesn't spawn new processes which require data synchronization, multiprocessing is a bit more heavyweight, and the threading package should be better suited.
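If you want to check the overhead claim on your own platform, a small timeit sketch (absolute numbers will vary):
from timeit import Timer

t_thread = Timer("l.acquire(); l.release()",
                 "import threading; l = threading.Lock()")
t_proc = Timer("l.acquire(); l.release()",
               "import multiprocessing; l = multiprocessing.Lock()")

print t_thread.timeit()   # threading.Lock: a lightweight in-process lock
print t_proc.timeit()     # multiprocessing.Lock: backed by an OS semaphore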
|
Is there any reason to use threading.Lock over multiprocessing.Lock?
|
If a software project supports a version of Python that multiprocessing has been backported to, is there any reason to use threading.Lock over multiprocessing.Lock? Would a multiprocessing lock not be thread safe as well?
For that matter, is there a reason to use any synchronization primitives from threading that are also in multiprocessing?
|
[
"The threading module's synchronization primitive are lighter and faster than multiprocessing, due to the lack of dealing with shared semaphores, etc. If you are using threads; use threading's locks. Processes should use multiprocessing's locks.\n",
"I would expect the multi-threading synchronization primitives to be quite faster as they can use shared memory area easily. But I suppose you will have to perform speed test to be sure of it. Also, you might have side-effects that are quite unwanted (and unspecified in the doc).\nFor example, a process-wise lock could very well block all threads of the process. And if it doesn't, releasing a lock might not wake up the threads of the process.\nIn short, if you want your code to work for sure, you should use the thread-synchronization primitives if you are using threads and the process-synchronization primitives if you are using processes. Otherwise, it might work on your platform only, or even just with your specific version of Python.\n",
"multiprocessing and threading packages have slightly different aims, though both are concurrency related. threading coordinates threads within one process, while multiprocessing provide thread-like interface for coordinating multiple processes.\nIf your application doesn't spawn new processes which require data synchronization, multiprocessing is a bit more heavy weight, and threading package should be better suited.\n"
] |
[
21,
4,
3
] |
[] |
[] |
[
"locking",
"multiprocessing",
"multithreading",
"process",
"python"
] |
stackoverflow_0001980479_locking_multiprocessing_multithreading_process_python.txt
|
Q:
lisp-style `let` syntax in Python list-comprehensions
Consider the following code:
>>> colprint([
    (name, versions[name][0].summary or '')
    for name in sorted(versions.keys())
])
What this code does is to print the elements of the dictionary versions in ascending order of its keys, but since the value is another sorted list, only the summary of its first element (the 'max') is printed.
Since I am familiar with let from lisp, I rewrote the above as:
>>> colprint([
    (name, package.summary or '')
    for name in sorted(versions.keys())
    for package in [versions[name][0]]
])
Do you think this violates being Pythonic? Can it be improved?
Note: For the curious, colprint is defined here.
A:
Why not exploit tuples?
colprint([(name, version[0].summary or '')
for (name, version) in sorted(versions.iteritems())])
or, even
colprint(sorted([(name, version[0].summary or '')
for (name, version) in versions.iteritems()]))
Also, you may consider (in my first example) removing the []s, because that way you get a generator instead of a list (which may or may not be useful, since I'm guessing this'll print the whole array, so you won't be saving any evaluations).
A:
I wouldn't use the "tricky for clause" (or "let-equivalent") in most cases, but I would if it's the natural way to avoid repetition, especially costly repetition. E.g.
xs = [(y, y*1.2, y-3.4) for z in zs for y in [somefun(z)] ]
looks much better to me than calling somefun three times!-) So, it's worth keeping in mind, even if probably not worth using where it does not remove repetition.
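A runnable toy version of that comprehension, with a stand-in somefun, just to show the single evaluation per z:
def somefun(z):
    # Stand-in for an expensive computation
    return z * 10.0

zs = [1, 2, 3]
# y is bound once per z -- the 'let'-style trick
xs = [(y, y * 1.2, y - 3.4) for z in zs for y in [somefun(z)]]
print xs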
A:
So you're using "for x in [y]" as a substitute for "let x y".
Trying to emulate one language's syntax in another is never a good idea. I think that the original version is much clearer.
A:
As Tordek says, you can use items() or iteritems() in this case to avoid the issue:
colprint(sorted((name, packages[0].summary or '')
for (name, packages) in versions.items()))
Moving the sorting outside is a nice touch.
[Note that the use of items() changed the sorting order slightly - it used to be by name with ties resolved by original order (Python sort is stable), now it's by name with ties resolved by summary. Since the original order of a dict is random, the new behaviour is probably better.]
But for other uses (such as Alex Martelli's example), a "let"-alike might still be useful.
I've also once discovered the for var in [value] trick, but I now find it ugly.
A cleaner alternative might be a "pipeline" of comprehensions / generators, using the "decorate/undecorate" trick to pass the added value in a tuple:
# You could write this with keys() or items() -
# I'm just trying to exemplify the pipeline technique.
names_packages = ((name, versions[name][0])
                  for name in versions.keys())
names_summaries = ((name, package.summary or '')
                   for (name, package) in names_packages)
colprint(sorted(names_summaries))
Or applied to Alex's example:
ys = (somefun(z) for z in zs)
xs = [(y, y*1.2, y-3.4) for y in ys]
(in which you don't even need the original z values, so the intermediate values don't have to be tuples.)
See http://www.dabeaz.com/generators/ for more powerful examples of the "pipeline" technique...
A:
You can move the sorting to the end to avoid some intermediate lists.
This looks a bit nicer i guess:
colprint(sorted(
    (name, version[0].summary or '')
    for (name, version) in versions.iteritems()
))
Python3 can do even better:
colprint(sorted(
    (name, first_version.summary or '')
    for (name, (first_version, *_)) in versions.items()
))
|
lisp-style `let` syntax in Python list-comprehensions
|
Consider the following code:
>>> colprint([
    (name, versions[name][0].summary or '')
    for name in sorted(versions.keys())
])
What this code does is to print the elements of the dictionary versions in ascending order of its keys, but since the value is another sorted list, only the summary of its first element (the 'max') is printed.
Since I am familiar with let from lisp, I rewrote the above as:
>>> colprint([
    (name, package.summary or '')
    for name in sorted(versions.keys())
    for package in [versions[name][0]]
])
Do you think this violates being Pythonic? Can it be improved?
Note: For the curious, colprint is defined here.
|
[
"Why not exploit tuples?\ncolprint([(name, version[0].summary or '')\n for (name, version) in sorted(versions.iteritems())])\n\nor, even\ncolprint(sorted([(name, version[0].summary or '')\n for (name, version) in versions.iteritems()]))\n\nAlso, you may consider (in my first example) removing the []s, because that way you get a generator instead of a list (which may or may not be useful, since I'm guessing this'll print the whole array, so you won't be saving any evaluations).\n",
"I wouldn't use the \"tricky for clause\" (or \"let-equivalent\") in most cases, but I would if it's the natural way to avoid repetition, especially costly repetition. E.g.\nxs = [(y, y*1.2, y-3.4) for z in zs for y in [somefun(z)] ]\n\nlooks much better to me than calling somefun three times!-) So, it's worth keeping in mind, even if probably not worth using where it does not remove repetition.\n",
"So you're using \"for x in [y]\" as a substitute for \"let x y\".\nTrying to emulate language's syntax in another language is never a good idea. I think that the original version is much clearer.\n",
"As Tordek says, you can use items() or iteritems() in this case to avoid the issue:\ncolprint(sorted((name, packages[0].summary or '')\n for (name, packages) in versions.items()))\n\nMoving the sorting outside is a nice touch.\n[Note that the use of items() changed the sorting order slightly - it used to be by name with ties resolved by original order (Python sort is stable), now it's by name with ties resolved by summary. Since the original order of a dict is random, the new behaviour is probably better.]\nBut for other uses (such as Alex Martelli's example), a \"let\"-alike might still be useful.\nI've also once discovered the for var in [value] trick, but I now find it ugly.\nA cleaner alternative might be a \"pipeline\" of comprehensions / generators, using the \"decorate/undecorate\" trick to pass the added value in a tuple:\n# You could write this with keys() or items() - \n# I'm just trying to examplify the pipeline technique.\nnames_packages = ((name, versions[name][0]) \n for name in versions.keys())\n\nnames_summaries = ((name, package.summary or '')\n for (name, package) in names_packages)\n\ncolprint(sorted(names_summaries))\n\nOr applied to Alex's example:\nys = (somefun(z) for z in zs)\nxs = [(y, y*1.2, y-3.4) for y in ys]\n\n(in which you don't even need the original z values, so the intermediate values don't have to be tuples.)\nSee http://www.dabeaz.com/generators/ for more powerful examples of the \"pipeline\" technique...\n",
"You can move the sorting to the end to avoid some intermediate lists.\nThis looks a bit nicer i guess:\ncolprint(sorted(\n (name, version[0].summary or '')\n for (name,version) in versions.iteritems())\n ))\n\nPython3 can do even better:\ncolprint(sorted(\n (name, first_version.summary or '')\n for (name,(first_version,*_)) in versions.items())\n ))\n\n"
] |
[
7,
5,
4,
1,
0
] |
[] |
[] |
[
"lisp",
"python",
"refactoring"
] |
stackoverflow_0001910003_lisp_python_refactoring.txt
|
Q:
Loading an image in Python (Error)
I want to load an image but I get an error message.
My code:
from PIL import Image
im = Image.open("D:\Python26\PYTHON-PROGRAMME\bild.jpg")
im.show()
I get this error:
Traceback (most recent call last):
File "D:\Python26\PYTHON-PROGRAMME\00000000000000000", line 2, in <module>
im = Image.open("D:\Python26\PYTHON-PROGRAMME\bild.jpg")
File "D:\Python26\lib\site-packages\PIL\Image.py", line 1888, in open
fp = __builtin__.open(fp, "rb")
IOError: [Errno 22] invalid mode ('rb') or filename: 'D:\\Python26\\PYTHON-PROGRAMME\x08ild.jpg'
A:
You need to escape the backslashes:
im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg")
|
Loading an image in Python (Error)
|
I want to load an image but I get an error message.
My code:
from PIL import Image
im = Image.open("D:\Python26\PYTHON-PROGRAMME\bild.jpg")
im.show()
I get this error:
Traceback (most recent call last):
File "D:\Python26\PYTHON-PROGRAMME\00000000000000000", line 2, in <module>
im = Image.open("D:\Python26\PYTHON-PROGRAMME\bild.jpg")
File "D:\Python26\lib\site-packages\PIL\Image.py", line 1888, in open
fp = __builtin__.open(fp, "rb")
IOError: [Errno 22] invalid mode ('rb') or filename: 'D:\\Python26\\PYTHON-PROGRAMME\x08ild.jpg'
|
[
"You need to escape the backslashes:\nim = Image.open(\"D:\\\\Python26\\\\PYTHON-PROGRAMME\\\\bild.jpg\")\n\n"
] |
[
9
] |
[] |
[] |
[
"image",
"python"
] |
stackoverflow_0001981138_image_python.txt
|
Q:
Loading an image in Python (Error) part_2
This code didn't show my picture. The picture really exists :)
Does anybody know why this doesn't work?
Thanks in advance!
from PIL import Image
im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg")
im.show()
A:
You probably need to call the load() method to force the open() method to do its work. open is lazy.
Try:
from PIL import Image
im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg")
im.load()
im.show()
Idea #2:
Patch PIL's file Image.py to have a potentially more robust approach to using the Windows shell to display your image. In the method _showxv, replace the following lines:
if os.name == "nt":
    command = "start /wait %s && del /f %s" % (file, file)
with
if os.name == "nt":
    command = "%s" % file
I suspect that the problem with the existing implementation is that the del command after the && is running immediately after the start command rather than after the result of the start command finishes. Thus, the file has already been deleted by the time that the image viewer is ready to load and display it.
Do back up your copy of the code before patching it.
|
Loading an image in Python (Error) part_2
|
This code didn't show my picture. The picture really exists :)
Does anybody know why this doesn't work?
Thanks in advance!
from PIL import Image
im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg")
im.show()
|
[
"You probably need to call the load() method to force the open() method to do its work. open is lazy.\nTry:\nfrom PIL import Image\nim = Image.open(\"D:\\\\Python26\\\\PYTHON-PROGRAMME\\\\bild.jpg\")\nim.load()\nim.show()\n\nIdea #2:\nPatch PIL's file Image.py to have a potentially more robust approach to using the Windows shell to display your image. In the method _showxv, replace the following lines:\nif os.name == \"nt\":\n command = \"start /wait %s && del /f %s\" % (file, file)\n\nwith\nif os.name == \"nt\":\n command = \"%s\" % file\n\nI suspect that the problem with the existing implementation is that the del command after the && is running immediately after the start command rather than after the result of the start command finishes. Thus, the file has already been deleted by the time that the image viewer is ready to load and display it.\nDo back up your copy of the code before patching it.\n"
] |
[
1
] |
[] |
[] |
[
"image",
"python"
] |
stackoverflow_0001981335_image_python.txt
|
Q:
Python: find index of item containing X in list
I have a huge list of data, more than 1M records in a form similar (though this is a much simpler form) to this:
[
{'name': 'Colby Karnopp', 'ids': [441, 231, 822]},
{'name': 'Wilmer Lummus', 'ids': [438, 548, 469]},
{'name': 'Hope Teschner', 'ids': [735, 747, 488]},
{'name': 'Adolfo Fenrich', 'ids': [515, 213, 120]}
...
]
Given an id of 735, I want to find the index 2 for Hope Teschner since the given id falls within the list of ids for Hope. What is the best (performance-wise) way to do this?
Thanks for any tips.
EDIT
Probably should have mentioned this, but an id could show up more than once. In the case that a particular id does show up more than once, I want the lowest index for the given id.
The data in the list will be changing frequently, so I am hesitant to build a dictionary: since the indexes are the values in the dict, it would need to be modified or rebuilt with each update to the list, i.e. changing the position of one item in the list would require updating every value in the dictionary whose index is greater than the newly changed index.
EDIT EDIT
I just did some benchmarking and it seems that rebuilding the dictionary is quite fast even for 1M + records. I think I will pursue this solution for now.
A:
Simplest way to get the first index satisfying the condition (in Python 2.6 or better):
next((i for i, d in enumerate(hugelist) if 735 in d['ids']), None)
this gives None if no item satisfies the condition; more generally you could put as the second argument to the next built-in whatever you require in that case, or omit the second arg (and in that case you can remove one set of parentheses) if you're OK with getting a StopIteration exception when no item satisfies the condition (e.g., you know that situation is impossible).
If you need to do this kind of operation more than very few times between changes to the hugelist or its contents, then, as you indicate in the second edit to your question, building an auxiliary dict (from integer to index of first dict containing it) is preferable. Since you want the first applicable index, you want to iterate backwards (so hits that are closer to the start of hugelist will override ones that are further on) -- for example:
auxdict = {}
L = len(hugelist) - 1
for i, d in enumerate(reversed(hugelist)):
auxdict.update(dict.fromkeys(d['ids'], L-i))
[[You cannot use reversed(enumerate(... because enumerate returns an iterator, not a list, and reversed is optimized to only work on a sequence argument -- whence the need for L-i]].
You can build auxdict in other ways, including without the reversal, for example:
auxdict = {}
for i, d in enumerate(hugelist):
for item in d['ids']:
        if item not in auxdict: auxdict[item] = i
but this is likely to be substantially slower due to the huge number of if checks that execute in the inner loop. The direct dict constructor (taking a sequence of key, value pairs) is also likely to be slower due to the need for inner loops:
L = len(hugelist) - 1
auxdict = dict((item, L-i) for i, d in enumerate(reversed(hugelist)) for item in d['ids'])
However, these are just qualitative considerations -- consider running benchmarks over a few "typical / representative" examples of values you could have in hugelist (using timeit at the command line prompt, as I've often recommended) to measure the relative speeds of these approaches (as well as, how their runtimes compare to that of an unaided lookup as I showed at the start of this answer -- this ratio, plus the average number of lookups you expect to perform between successive hugelist changes, will help you select the overall strategy).
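For instance, such a timeit run might look like this (the module and function names here are hypothetical):
$ python -mtimeit -s'import mem' 'mem.find_first_index(735)'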
A:
Performance-wise, if you have 1M records you might want to switch to a database or a different data structure. With the given data structure this will be a linear-time operation. You could create an ID-to-records dict once, though, if you plan to do this query often.
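A minimal sketch of the ID-to-index dict this answer suggests, assuming the list of dicts from the question (all names illustrative):
id_to_index = {}
for idx, record in enumerate(records):
    for id_ in record['ids']:
        # setdefault keeps the first (lowest) index for duplicate ids
        id_to_index.setdefault(id_, idx)

id_to_index[735]  # -> 2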
A:
The best way would probably be to setup a reverse dict() from ids to names.
A:
Can two or more dicts share the same ID? If so, I presume you will need to return a list of indexes.
If you want to do a one-off search then you can do it with a list comprehension:
>>> x = [
... {'name': 'Colby Karnopp', 'ids': [441, 231, 822]},
... {'name': 'Wilmer Lummus', 'ids': [438, 548, 469]},
... {'name': 'Hope Teschner', 'ids': [735, 747, 488]},
... {'name': 'Adolfo Fenrich', 'ids': [515, 213, 120]},
...
... ]
>>> print [idx for (idx, d) in enumerate(x) if 735 in d['ids']]
[2]
However if you want to do this a lot and the list does not change much then it is much better to create an inverse index:
>>> indexes = dict((id, idx) for (idx,d) in enumerate(x) for id in d['ids'])
>>> indexes
{213: 3, 515: 3, 548: 1, 822: 0, 231: 0, 488: 2, 747: 2, 469: 1, 438: 1, 120: 3, 441: 0, 735: 2}
>>> indexes[735]
2
NB: the above code assumes that each ID is unique. If there are duplicates replace the dict with a collections.defaultdict(list).
NNB: the above code returns the index into the original list since that is what you asked for. However it is probably better to return the actual dict instead of the index unless you want to use the index to delete it from the list.
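To illustrate the NB about duplicates, a hedged sketch using collections.defaultdict (reusing the x list from above):
import collections

indexes = collections.defaultdict(list)
for idx, d in enumerate(x):
    for id_ in d['ids']:
        indexes[id_].append(idx)

indexes[735]       # -> [2], every index whose dict contains the id
min(indexes[735])  # the lowest index, as the question requires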
A:
If frequency of building the index is low:
Create a lookup array of index values into your main list, e.g.:
lookup = [-1] * MAX_ID  # MAX_ID is hypothetical: one slot per possible id; -1 means "not present"

def add_to_lookup(id_, index):
    lookup[id_] = index

mainlistindex = lookup[myvalue]
if mainlistindex != -1:
    name = mainlist[mainlistindex].name
If frequency is high, consider the sorting approach (I think this is what is meant by the Schwartzian Transform answer). This might be good if rebuilding your index whenever the source list changes costs you more than getting the data out of the manufactured index; slotting data into an existing list (one that, crucially, knows about the other possible matches for an id, for when the previous best match stops being associated with that id) will be faster than building a list from scratch on every delta.
EDIT
This assumes that your IDs are densely populated integers.
To increase performance in accessing the sorted list, it can be partitioned into blocks of say 400-600 entries to avoid repeatedly moving the entire list forwards or backwards one or a few positions, and searched with a binary algorithm.
A:
It seems that the data structure is ill-suited to its use. Changing the list is costly - both the change itself (if you do any insertions/deletions) and the resulting need to rebuild a dict, or do linear scans every time.
The question is: how is your list changing?
Perhaps instead of using indexes (which change frequently), you could use objects, and use pointers to the objects themselves instead of worrying about indexes?
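A brief sketch of the object-reference idea, with illustrative names: map each id straight to the record object, so reordering the list never invalidates the mapping:
class Record(object):
    def __init__(self, name, ids):
        self.name = name
        self.ids = ids

records = [Record('Hope Teschner', [735, 747, 488])]

by_id = {}
for rec in records:
    for id_ in rec.ids:
        by_id.setdefault(id_, rec)

by_id[735].name  # the record itself; no index bookkeeping needed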
|
Python: find index of item containing X in list
|
I have a huge list of data, more than 1M records in a form similar (though this is a much simpler form) to this:
[
{'name': 'Colby Karnopp', 'ids': [441, 231, 822]},
{'name': 'Wilmer Lummus', 'ids': [438, 548, 469]},
{'name': 'Hope Teschner', 'ids': [735, 747, 488]},
{'name': 'Adolfo Fenrich', 'ids': [515, 213, 120]}
...
]
Given an id of 735, I want to find the index 2 for Hope Teschner since the given id falls within the list of ids for Hope. What is the best (performance-wise) way to do this?
Thanks for any tips.
EDIT
Probably should have mentioned this, but an id could show up more than once. In the case that a particular id does show up more than once, I want the lowest index for the given id.
The data in the list will be changing frequently, so I am hesitant to build a dictionary: since the indexes are the values in the dict, it would need to be modified or rebuilt with each update to the list, i.e. changing the position of one item in the list would require updating every value in the dictionary whose index is greater than the newly changed index.
EDIT EDIT
I just did some benchmarking and it seems that rebuilding the dictionary is quite fast even for 1M + records. I think I will pursue this solution for now.
|
[
"Simplest way to get the first index satisfying the condition (in Python 2.6 or better:\nnext((i for i, d in enumerate(hugelist) if 735 in d['ids']), None)\n\nthis gives None if no item satisfies the condition; more generally you could put as the second argument to the next built-in whatever you require in that case, or omit the second arg (and in that case you can remove one set of parentheses) if you're OK with getting a StopIteration exception when no item satisfies the condition (e.g., you know that situation is impossible).\nIf you need to do this kind of operation more than very few times between changes to the hugelist or its contents, then, as you indicate in the second edit to your question, building an auxiliary dict (from integer to index of first dict containing it) is preferable. Since you want the first applicable index, you want to iterate backwards (so hits that are closer to the start of hugelist will override ones that are further on) -- for example:\nauxdict = {}\nL = len(hugelist) - 1\nfor i, d in enumerate(reversed(hugelist)):\n auxdict.update(dict.fromkeys(d['ids'], L-i))\n\n[[You cannot use reversed(enumerate(... because enumerate returns an iterator, not a list, and reversed is optimized to only work on a sequence argument -- whence the need for L-i]].\nYou can build auxdict in other ways, including without the reversal, for example:\nauxdict = {}\nfor i, d in enumerate(hugelist):\n for item in d['ids']:\n if item not in auxdict: auxdict[item] =i\n\nbut this is likely to be substantially slower due to the huge number of if that execute in the inner loop. The direct dict constructor (taking a sequence of key, value pairs) is also likely to be slower due to the need of inner loops:\nL = len(hugelist) - 1\nauxdict = dict((item, L-i) for i, d in enumerate(reversed(hugelist)) for item in d['ids'])\n\nHowever, these are just qualitative considerations -- consider running benchmarks over a few \"typical / representative\" examples of values you could have in hugelist (using timeit at the command line prompt, as I've often recommended) to measure the relative speeds of these approaches (as well as, how their runtimes compare to that of an unaided lookup as I showed at the start of this answer -- this ratio, plus the average number of lookups you expect to perform between successive hugelist changes, will help you select the overall strategy).\n",
"Performancewise, if you have 1M records you might want to switch to a database or a different data structure. With the given data structure this will be a linear time operation. You could create an ID to records dict once though if you plan to do this query often.\n",
"The best way would probably be to setup a reverse dict() from ids to names.\n",
"Can two or more dicts share the same ID? If so, I presume you will need to return a list of indexes.\nIf you want to do a one-off search then you can do it with a list comprehension:\n>>> x = [\n... {'name': 'Colby Karnopp', 'ids': [441, 231, 822]}, \n... {'name': 'Wilmer Lummus', 'ids': [438, 548, 469]},\n... {'name': 'Hope Teschner', 'ids': [735, 747, 488]}, \n... {'name': 'Adolfo Fenrich', 'ids': [515, 213, 120]},\n ...\n... ]\n\n>>> print [idx for (idx, d) in enumerate(x) if 735 in d['ids']]\n[2]\n\nHowever if you want to do this a lot and the list does not change much then it is much better to create an inverse index:\n>>> indexes = dict((id, idx) for (idx,d) in enumerate(x) for id in d['ids'])\n>>> indexes\n{213: 3, 515: 3, 548: 1, 822: 0, 231: 0, 488: 2, 747: 2, 469: 1, 438: 1, 120: 3, 441: 0, 735: 2}\n>>> indexes[735]\n2\n\nNB: the above code assumes that each ID is unique. If there are duplicates replace the dict with a collections.defaultdict(list).\nNNB: the above code returns the index into the original list since that is what you asked for. However it is probably better to return the actual dict instead of the index unless you want to use the index to delete it from the list. \n",
"If frequency of building the index is low:\nCreate a lookup array of index values into your main list, such that eg\nlookup = [-1,-1,-1...]\n\n...\ndef addtolookup\n...\n\nmainlistindex =lookup[myvalue]\nif mainlistindex!=-1:\n name=mainlist[mainlistindex].name\n\nIf frwquency is high, consider the sorting approach (I think this is what is meant by the Schwartzian Transform answer). This might be good if you are having more problems with the performance in rebuilding your tree whenever the source list changes than you are with performance getting the data out of the manufactured index; as slotting data into an existing list (that (crucially) knows about the other possible matches for an id for when previous best match string stops being associated with an id) will be faster than building a list from scratch on every delta.\nEDIT\nThis assumes that your IDs are densely populated integers.\nTo increase performance in accessing the sorted list, it can be partitioned into blocks of say 400-600 entries to avoid repeatedly moving the entire list forwards or backwards one or a few positions, and searched with a binary algorithm.\n",
"It seems that the data structure is ill-suited to its use. Changing the list is costly - both the change itself (if you do any insertions/delitions) and the resulting need to rebuild a dict, or do linear scans every time.\nThe question is: how is your list changing?\nPerhaps instead of using indexes (which change frequently), you could use objects, and use pointers to the objects themselves instead of worrying about indexes?\n"
] |
[
6,
3,
3,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001975856_python.txt
|
Q:
Protection from accidentally misnaming object attributes in Python?
A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him?
A:
"changed the value of an object's attribute" Can lead to problems. This is pretty well known. You know it, now, also. That doesn't indict the language. It simply says that you've learned an important lesson in dynamic language programming.
Unit testing absolutely discovers this. You are not forced to mock all library classes. Some folks say it's only a unit test when it's tested in complete isolation. This is silly. You have to trust the library modules -- it's a feature of your architecture. Rather than mock them, just use them. (It is important to write mocks for your own newly-developed libraries. It's also important to mock libraries that make expensive API calls.)
In most cases, you can (and should) test your classes with the real library modules. This will find the misspelled attribute name.
Also, now that you know that attributes are dynamic, it's really easy to verify that the attribute exists. How?
Use interactive Python to explore the classes before writing too much code.
Remember, Python is not Java and it's not C. You can execute Python interactively and determine immediately if you've spelled something wrong. Writing a lot of code without doing any interactive confirmation is -- simply -- the wrong way to use Python.
A little interactive exploration will find misspelled attribute names.
Finally -- for your own classes -- you can wrap updatable attributes as properties. This makes it easier to debug any misspelled attribute names. Again, you know to check for this. You can use interactive development to confirm the attribute names.
Fussing around with __setattr__ creates problems. In some cases, we actually need to add attributes to an object. Why? It's simpler than creating a whole subclass for one special case where we have to maintain more state information.
Other things you can say:
I was burned by a C program that absolutely could not be made to work because of ______. [Insert any known C-language problem you want here. No array bounds checking, for example] Does that make C fatally flawed?
I was burned by a DBA who changed a column name and all the SQL broke. It's painful to unit test all of it. Does that make the relational database fatally flawed?
I was burned by a sys admin who changed a directory's permissions and my application broke. It was nearly impossible to find. Does that make the OS fatally flawed?
I was burned by a COBOL program where someone changed the copybook, forgot to recompile the program, and we couldn't debug it because the source looked perfect. COBOL, however, actually is fatally flawed, so this isn't a good example.
A:
There are code analyzers like pylint that will warn you if you add an attribute outside of __init__. PyDev has nice support for it. Such errors are very easy to find with a debugger too.
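For example, pylint's "attribute defined outside __init__" check (W0201) would flag the typo in a sketch like this (class and attribute names hypothetical):
class Account(object):
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        # typo silently creates a new attribute; pylint reports W0201 here
        self.balnce = self.balance + amount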
A:
If the possibility to make mistakes is enough for him to consider a language "fatally flawed", I don't think you can convince him otherwise. The more you can do with a language, the more you can do wrong with the language. It's a caveat of flexibility—but that's true for any language.
A:
You can use the __slots__ class attribute to limit the attributes that instances have. Attempting to set an attribute that's not explicitly listed will raise an AttributeError. There are some complications that arise with subclassing. See the Python data model reference for details.
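A minimal, illustrative sketch of __slots__ in action:
class Point(object):
    __slots__ = ('x', 'y')  # only these attribute names are allowed

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
p.x = 10  # fine
p.z = 3   # raises AttributeError: 'Point' object has no attribute 'z'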
A:
A tool like pylint or pychecker may be able to detect this.
A:
He's effectively ruling out an entire class of programming languages -- dynamically-typed languages -- because of one hard lesson learned. He can use only statically-typed languages if he wishes and still have a very productive career as a programmer, but he is certainly going to have deep frustrations with them as well. Will he then conclude that they are fatally-flawed?
A:
I think your friend has misplaced his frustration in the language. His real problem is a lack of debugging techniques. Teach him how to break down a program into small pieces to examine the output. Like a manual unit test, this way any inconsistency is found and any assumptions are proven or discarded.
A:
I had a similar bad experience with Python when I first started ... took me 3 months to get over it. Having a tool which warns would have been nice back then ...
|
Protection from accidentally misnaming object attributes in Python?
|
A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him?
|
[
"\"changed the value of an object's attribute\" Can lead to problems. This is pretty well known. You know it, now, also. That doesn't indict the language. It simply says that you've learned an important lesson in dynamic language programming.\n\nUnit testing absolutely discovers this. You are not forced to mock all library classes. Some folks say it's only a unit test when it's tested in complete isolation. This is silly. You have to trust the library modules -- it's a feature of your architecture. Rather than mock them, just use them. (It is important to write mocks for your own newly-developed libraries. It's also important to mock libraries that make expensive API calls.)\nIn most cases, you can (and should) test your classes with the real library modules. This will find the misspelled attribute name.\nAlso, now that you know that attributes are dynamic, it's really easy to verify that the attribute exists. How?\nUse interactive Python to explore the classes before writing too much code.\nRemember, Python is not Java and it's not C. You can execute Python interactively and determine immediately if you've spelled something wrong. Writing a lot of code without doing any interactive confirmation is -- simply -- the wrong way to use Python.\nA little interactive exploration will find misspelled attribute names.\nFinally -- for your own classes -- you can wrap updatable attributes as properties. This makes it easier to debug any misspelled attribute names. Again, you know to check for this. You can use interactive development to confirm the attribute names.\n\nFussing around with __setattr__ creates problems. In some cases, we actually need to add attributes to an object. Why? It's simpler than creating a whole subclass for one special case where we have to maintain more state information.\n\nOther things you can say:\nI was burned by a C program that absolutely could not be made to work because of ______. [Insert any known C-language problem you want here. No array bounds checking, for example] Does that make C fatally flawed?\nI was burned by a DBA who changed a column name and all the SQL broke. It's painful to unit test all of it. Does that make the relational database fatally flawed? \nI was burned by a sys admin who changed a directory's permissions and my application broke. It was nearly impossible to find. Does that make the OS fatally flawed? \nI was burned by a COBOL program where someone changed the copybook, forgot to recompile the program, and we couldn't debug it because the source looked perfect. COBOL, however, actually is fatally flawed, so this isn't a good example.\n",
"There are code analyzers like pylint that will warn you if you add a attribute outside of __init__. PyDev has nice support for it. Such errors are very easy to find with a debugger too.\n",
"If the possibility to make mistakes is enough for him to consider a language \"fatally flawed\", I don't think you can convince him otherwise. The more you can do with a language, the more you can do wrong with the language. It's a caveat of flexibility—but that's true for any language.\n",
"You can use the __slots__ class attribute to limit the attributes that instances have. Attempting to set an attribute that's not expliticly listed will raise an AttributeError. There are some complications that arise with subclassing. See the Python data model reference for details.\n",
"A tool like pylint or pychecker may be able to detect this.\n",
"He's effectively ruling out an entire class of programming languages -- dynamically-typed languages -- because of one hard lesson learned. He can use only statically-typed languages if he wishes and still have a very productive career as a programmer, but he is certainly going to have deep frustrations with them as well. Will he then conclude that they are fatally-flawed?\n",
"I think your friend has misplaced his frustration in the language. His real problem is lack of debugging techniques. teach him how to break down a program into small pieces to examine the output. like a manual unit test, this way any inconsistency is found and any assumptions are proven or discarded.\n",
"I had a similar bad experience with Python when I first started ... took me 3 months to get over it. Having a tool which warns would be nice back then ...\n"
] |
[
12,
7,
5,
5,
2,
1,
1,
0
] |
[] |
[] |
[
"attributes",
"python"
] |
stackoverflow_0001981208_attributes_python.txt
|
Q:
Python func_dict used to memoize; other useful tricks?
A Python function object has an attribute dictionary called func_dict which is visible from outside the function and is mutable, but which is not modified when the function is called. (I learned this from answers to a question I asked yesterday (#1753232): thanks!) I was reading code (at http://pythonprogramming.jottit.com/functional_programming) which memoized the computation of Fibonacci numbers and thought, "Why not use the func_dict attribute for memoizing?" It worked (see below; the output's at the end of the code.). It's a little like having a class property available but having the initialization code outside the object (in this case, not a class but a function).
I wonder what similar (or dissimilar) tricks can be done using this attribute?
def fib(n):
if n in fib.cache:
print "found fib.cache[%d] = %d: " %(n, fib.cache[n])
return fib.cache[n]
else:
print "fib.cache[%d] = fib(%d) + fib(%d)" % (n, n-1, n-2)
fib.cache[n] = fib(n-1) + fib(n-2)
print "modified fib.cache: ", fib.cache
return fib.cache[n]
fib.cache = {0:0, 1:1}
for x in range(7):
print "==================>", x
print fib( x)
"""
==================> 0
found fib.cache[0] = 0:
0
==================> 1
found fib.cache[1] = 1:
1
==================> 2
fib.cache[2] = fib(1) + fib(0)
found fib.cache[1] = 1:
found fib.cache[0] = 0:
modified fib.cache: {0: 0, 1: 1, 2: 1}
1
==================> 3
fib.cache[3] = fib(2) + fib(1)
found fib.cache[2] = 1:
found fib.cache[1] = 1:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2}
2
==================> 4
fib.cache[4] = fib(3) + fib(2)
found fib.cache[3] = 2:
found fib.cache[2] = 1:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
3
==================> 5
fib.cache[5] = fib(4) + fib(3)
found fib.cache[4] = 3:
found fib.cache[3] = 2:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5}
5
==================> 6
fib.cache[6] = fib(5) + fib(4)
found fib.cache[5] = 5:
found fib.cache[4] = 3:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5, 6: 8}
8
"""
A:
Just be careful: the fib.cache trick only works if fib is indeed the name of the relevant function object in the scope that's active while it's executing (for example, when you decorate it as you have done, you must assign the starting value for the cache to the decorator wrapper, not to the decorated function -- and if it gets further decorated after that, things break).
This is a bit fragile compared to the standard memoization idiom:
def fib(n, _memo={0:1, 1:1}):
if n in _memo:
return _memo[n]
else:
_memo[n] = fib(n-1) + fib(n-2)
return _memo[n]
or the decorator equivalent. The standard idiom's also faster (though not by much) -- putting them both in mem.py under names fib1 (the .cache-trick one, without prints and undecorated) and fib2 (my version), we see...:
$ python -mtimeit -s'import mem' 'mem.fib1(20)'
1000000 loops, best of 3: 0.754 usec per loop
$ python -mtimeit -s'import mem' 'mem.fib2(20)'
1000000 loops, best of 3: 0.507 usec per loop
so the standard-idiom version saves about 33% of the time, but that's when almost all calls do actually hit the memoization cache (which is populated after the first one of those million loops) -- fib2's speed advantage is smaller on cache misses, since it comes from the higher speed of accessing _memo (a local variable) vs fib.cache (a global name, fib, and then an attribute thereof, cache), and cache accesses dominate on cache hits (there's nothing else;-) but there's a little extra work (equal for both functions) on cache misses.
Anyway, don't mean to rain on your parade, but when you find some new cool idea be sure to measure it against the "good old way" of doing things, both in terms of "robustness" and performance (if you're caching, presumably you care about performance;-).
A:
I've always used this technique for memoizing. It uses the fact that you can call an object that's not a function, so long as that object has its __call__ method defined. Neither behindthefall's method nor Alex Martelli's even occurred to me, for whatever reason. I would guess the performance is roughly the same as behindthefall's way, because it works roughly the same way. Maybe a little slower. But as the linked page points out, "the definition for fib() is now the "obvious" one, without the cacheing code obscuring the algorithm" which is kind of nice.
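A rough sketch of the callable-object memoizer described above (names are illustrative; single-argument functions only):
class Memoize(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, n):
        if n not in self.cache:
            self.cache[n] = self.func(n)
        return self.cache[n]

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib = Memoize(fib)  # rebinding the name makes the recursive calls hit the cache
print fib(30)       # -> 832040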
|
Python func_dict used to memoize; other useful tricks?
|
A Python function object has an attribute dictionary called func_dict which is visible from outside the function and is mutable, but which is not modified when the function is called. (I learned this from answers to a question I asked yesterday (#1753232): thanks!) I was reading code (at http://pythonprogramming.jottit.com/functional_programming) which memoized the computation of Fibonacci numbers and thought, "Why not use the func_dict attribute for memoizing?" It worked (see below; the output's at the end of the code.). It's a little like having a class property available but having the initialization code outside the object (in this case, not a class but a function).
I wonder what similar (or dissimilar) tricks can be done using this attribute?
def fib(n):
if n in fib.cache:
print "found fib.cache[%d] = %d: " %(n, fib.cache[n])
return fib.cache[n]
else:
print "fib.cache[%d] = fib(%d) + fib(%d)" % (n, n-1, n-2)
fib.cache[n] = fib(n-1) + fib(n-2)
print "modified fib.cache: ", fib.cache
return fib.cache[n]
fib.cache = {0:0, 1:1}
for x in range(7):
print "==================>", x
print fib( x)
"""
==================> 0
found fib.cache[0] = 0:
0
==================> 1
found fib.cache[1] = 1:
1
==================> 2
fib.cache[2] = fib(1) + fib(0)
found fib.cache[1] = 1:
found fib.cache[0] = 0:
modified fib.cache: {0: 0, 1: 1, 2: 1}
1
==================> 3
fib.cache[3] = fib(2) + fib(1)
found fib.cache[2] = 1:
found fib.cache[1] = 1:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2}
2
==================> 4
fib.cache[4] = fib(3) + fib(2)
found fib.cache[3] = 2:
found fib.cache[2] = 1:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
3
==================> 5
fib.cache[5] = fib(4) + fib(3)
found fib.cache[4] = 3:
found fib.cache[3] = 2:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5}
5
==================> 6
fib.cache[6] = fib(5) + fib(4)
found fib.cache[5] = 5:
found fib.cache[4] = 3:
modified fib.cache: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5, 6: 8}
8
"""
|
[
"Just be careful: the fib.cache trick only works if fib is indeed the name of the relevant function object in the scope that's active while it's executing (for example, when you decorate it as you have done, you must assign the starting value for the cache to the decorator wrapper, not to the decorated function -- and if it gets further decorated after that, things break).\nThis is a bit fragile compared to the standard memoization idiom:\ndef fib(n, _memo={0:1, 1:1}):\n if n in _memo:\n return _memo[n]\n else:\n _memo[n] = fib(n-1) + fib(n-2)\n return _memo[n]\n\nor the decorator equivalent. The standard idiom's also faster (though not by much) -- putting them both in mem.py under names fib1 (the .cache-trick one, without prints and undecorated) and fib2 (my version), we see...:\n$ python -mtimeit -s'import mem' 'mem.fib1(20)'\n1000000 loops, best of 3: 0.754 usec per loop\n$ python -mtimeit -s'import mem' 'mem.fib2(20)'\n1000000 loops, best of 3: 0.507 usec per loop\n\nso the standard-idiom version saves about 33% of the time, but that's when almost all calls do actually hit the memoization cache (which is populated after the first one of those million loops) -- fib2's speed advantage is smaller on cache misses, since it comes from the higher speed of accessing _memo (a local variable) vs fib.cache (a global name, fib, and then an attribute thereof, cache), and cache accesses dominate on cache hits (there's nothing else;-) but there's a little extra work (equal for both functions) on cache misses.\nAnyway, don't mean to rain on your parade, but when you find some new cool idea be sure to measure it against the \"good old way\" of doing things, both in terms of \"robustness\" and performance (if you're caching, presumably you care about performance;-).\n",
"I've always used this technique for memoizing. It uses the fact that you can call an object that's not a function, so long as that object has its __call__ method defined. Neither behindthefall's method, nor Alex Martelli's even occurred to me, for whatever reason. I would guess the performance is roughly the same as behindthefall's way, because it works roughly the same way. Maybe a little slower. But as the linked page points out, \"the definition for fib() is now the \"obvious\" one, without the cacheing code obscuring the algorithm\" which is kind of nice. \n"
] |
[
7,
1
] |
[] |
[] |
[
"dictionary",
"fibonacci",
"function",
"memoization",
"python"
] |
stackoverflow_0001760495_dictionary_fibonacci_function_memoization_python.txt
|
Q:
How to tell if Python SQLite database connection or cursor is closed?
Let's say that you have the following code:
import sqlite3
conn = sqlite3.connect('mydb')
cur = conn.cursor()
# some database actions
cur.close()
conn.close()
# more code below
If I try to use the conn or cur objects later on, how could I tell that they are closed? I cannot find a .isclosed() method or anything like it.
A:
You could wrap it in a try/except statement:
>>> conn = sqlite3.connect('mydb')
>>> conn.close()
>>> try:
... one_row = conn.execute("SELECT * FROM my_table LIMIT 1;")
... except sqlite3.ProgrammingError as e:
... print(e)
Cannot operate on a closed database.
This relies on a shortcut specific to sqlite3.
A:
How about making sure that the connection and cursor are never closed?
You could have a state based program that you can guarantee only calls close() at the right time.
Or wrap them in other objects that have a pass implementation of close(). Or add an _isclosed member set by close() and accessed by isclosed().
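A hedged sketch of such a wrapper (this is not part of sqlite3; all names are illustrative):
class ConnectionWrapper(object):
    def __init__(self, conn):
        self._conn = conn
        self._isclosed = False

    def close(self):
        self._conn.close()
        self._isclosed = True

    def isclosed(self):
        return self._isclosed

    def __getattr__(self, name):
        # delegate everything else to the wrapped connection
        return getattr(self._conn, name)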
|
How to tell if Python SQLite database connection or cursor is closed?
|
Let's say that you have the following code:
import sqlite3
conn = sqlite3.connect('mydb')
cur = conn.cursor()
# some database actions
cur.close()
conn.close()
# more code below
If I try to use the conn or cur objects later on, how could I tell that they are closed? I cannot find a .isclosed() method or anything like it.
|
[
"You could wrap in a try, except statement:\n>>> conn = sqlite3.connect('mydb')\n>>> conn.close()\n>>> try:\n... one_row = conn.execute(\"SELECT * FROM my_table LIMIT 1;\")\n... except sqlite3.ProgrammingError as e:\n... print(e)\nCannot operate on a closed database.\n\nThis relies on a shortcut specific to sqlite3.\n",
"How about making sure that the connection and cursor are never closed?\nYou could have a state based program that you can guarantee only calls close() at the right time.\nOr wrap them in other objects that have a pass implementation of close(). Or add an _isclosed member set by close() and accessed by isclosed().\n"
] |
[
12,
0
] |
[] |
[] |
[
"database_connection",
"python",
"sqlite"
] |
stackoverflow_0001981392_database_connection_python_sqlite.txt
|
Q:
Python change screen resolution virtual machine
In VirtualBox, the screen resolution can be anything - even something strange like 993x451, etc. I tried changing it using pywin32 but I failed:
>>> dm = win32api.EnumDisplaySettings(None, 0)
>>> dm.PelsHeight = 451
>>> dm.PelsWidth = 950
>>> win32api.ChangeDisplaySettings(dm, 0)
-2L
which ends up being:
DISP_CHANGE_BADMODE
any help?
A:
Have you configured the virtual machine to actually advertise this mode to the OS?
edit: VirtualBox automatically sets new resolutions if you change the size of the window. You can set video mode hints from the host OS I believe (look for it in the documentation), but you need guest additions installed. You can also add VESA modes when using the fallback VESA driver. Either way, it seems this all needs to happen from the host OS for the guest OS to be able to make use of it. And it doesn't look like there's an easy (non cmdline possibly not persistent) way to configure it, though YMMV.
I haven't tested it but the command should be:
VBoxManage controlvm
You can also set the maximum guest OS screen size, found this while looking into it a bit deeper:
VBoxManage setextradata global GUI/MaxGuestResolution xres,yres
HTH
A:
Do you have VirtualBox set to automatically resize the client window? That could cause some issues.
A:
The way I found to do this is to enable the automatic client resizing from the Guest OS. Then, in the host OS, programmatically resize the VM window. This will cause the resolution to change.
|
Python change screen resolution virtual machine
|
In VirtualBox, the screen resolution can be anything - even something strange like 993x451, etc. I tried changing it using pywin32 but I failed:
>>> dm = win32api.EnumDisplaySettings(None, 0)
>>> dm.PelsHeight = 451
>>> dm.PelsWidth = 950
>>> win32api.ChangeDisplaySettings(dm, 0)
-2L
which ends up being:
DISP_CHANGE_BADMODE
any help?
|
[
"Have you configured the virtual machine to actually advertise this mode to the OS?\nedit: VirtualBox automatically sets new resolutions if you change the size of the window. You can set video mode hints from the host OS I believe (look for it in the documentation), but you need guest additions installed. You can also add VESA modes when using the fallback VESA driver. Either way, it seems this all needs to happen from the host OS for the guest OS to be able to make use of it. And it doesn't look like there's an easy (non cmdline possibly not persistent) way to configure it, though YMMV.\nI haven't tested it but the command should be:\nVBoxManage controlvm \nYou can also set the maximum guest OS screen size, found this while looking into it a bit deeper:\nVBoxManage setextradata global GUI/MaxGuestResolution xres,yres\nHTH\n",
"Do you have VirtualBox set to automatically set the client window? that could cause some issues.\n",
"The way I found to do this is to enable the automatic client resizing from the Guest OS. Then, in the host OS, programatically resize the VM window. This will cause the resolution to change.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python",
"pywin32",
"resolution",
"virtualization",
"winapi"
] |
stackoverflow_0001349544_python_pywin32_resolution_virtualization_winapi.txt
|
Q:
nesting python list comprehensions to construct a list of lists
I'm a Python newb and am having trouble grokking nested list comprehensions. I'm trying to write some code to read in a file and construct, for each line, a list of its characters.
so if the file contains
xxxcd
cdcdjkhjasld
asdasdxasda
The resulting list would be:
[
['x','x','x','c','d']
['c','d','c','d','j','k','h','j','a','s','l','d']
['a','s','d','a','s','d','x','a','s','d','a']
]
I have written the following code, and it works, but I have a nagging feeling that I should be able to write a nested list comprehension to do this in fewer lines of code. Any suggestions would be appreciated.
data = []
f = open(file,'r')
for line in f:
line = line.strip().upper()
list = []
for c in line:
list.append(c)
data.append(list)
A:
This should help (you'll probably have to play around with it to strip the newlines or format it however you want, but the basic idea should work):
f = open(r"temp.txt")
[[c for c in line] for line in f]
A:
In your case, you can use the list constructor to handle the inner loop and use list comprehension for the outer loop. Something like:
f = open(file)
data = [list(line.strip().upper()) for line in f]
Given a string as input, the list constructor will create a list where each character of the string is a single element in the list.
The list comprehension is functionally equivalent to:
data = []
for line in f:
data.append(list(line.strip().upper()))
A:
Here is one level of list comprehension.
data = []
f = open(file,'r')
for line in f:
data.append([ch for ch in line.strip().upper()])
But we can do the whole thing on one go:
f = open(file, 'rt')
data = [list(line.strip().upper()) for line in f]
This is using list() to convert a string to a list of single-character strings. We could also use nested list comprehensions, and put the open() inline:
data = [[ch for ch in line.strip().upper()] for line in open(file, 'rt')]
At this point, though, I think the list comprehensions are detracting from easy readability of what is going on.
For complicated processing, such as lists inside lists, you might want to use a for loop for the outer layer and a list comprehension for the inner loop.
Also, as Chris Lutz said in a comment, in this case there really isn't a reason to explicitly split each line into character lists; you can always treat a string as a list, and you can use string methods with a string, but you can't use string methods with a list. (Well, you could use ''.join() to rejoin the list back to a string, but why not just leave it as a string?)
A:
data = [list(line.strip().upper()) for line in open(file,'r')]
A:
The only really significant difference between strings and lists of characters is that strings are immutable. You can iterate over and slice strings just as you would lists. And it's much more convenient to handle strings as strings, since they support string methods and lists don't.
So for most applications, I wouldn't bother converting the items in data to a list; I'd just do:
data = [line.strip() for line in open(filename, 'r')]
When I needed to manipulate strings in data as mutable lists, I'd use list to convert them, and join to put them back, e.g.:
data[2] = ''.join(sorted(list(data[2])))
Of course, if all you're going to do with these strings is modify them, then go ahead, store them as lists.
A:
First off you could combine the line.strip().upper() part with your outer for-loop, like this:
for line in [l.strip().upper() for l in f]:
# do stuff
Then you could make the iteration over the characters into a list comprehension, but it wouldn't be shorter or clearer. The neatest way to do what you do there is this:
list(someString)
Thus you could do:
data = [list(l.strip().upper()) for l in f]
I don't know if it states your intentions that well though. Error handling is also an issue, the whole expression will die if there is a problem on the way.
If you don't need to store the whole file and all the lines in memory, you could make it into a generator expression. This is very useful when processing huge files and you only need to process a chunk at a time. Generator expressions use parentheses instead, like so:
data = (list(l.strip().upper()) for l in f)
data will become a generator which runs the expression for each line in the file, but only when you iterate over it; compare that to a list comprehension which will create a huge list in memory. Note that data is not a list, but a generator, and more akin to an iterator in C++ or IEnumerator in C#.
A generator can be fed into a list easily: list(someGenerator) That would defeat the purpose somewhat but is sometimes a necessity.
A:
>>> f = file('teste.txt')
>>> print map(lambda x: [c for c in x][:-1], f)
[['x', 'x', 'x', 'c', 'd'], ['c', 'd', 'c', 'd', 'j', 'k', 'h', 'j', 'a', 's', 'l', 'd'], ['a', 's', 'd', 'a', 's', 'd', 'x', 'a', 's', 'd']]
|
nesting python list comprehensions to construct a list of lists
|
I'm a Python newb and am having trouble grokking nested list comprehensions. I'm trying to write some code to read in a file and construct, for each line, a list of its characters.
so if the file contains
xxxcd
cdcdjkhjasld
asdasdxasda
The resulting list would be:
[
['x','x','x','c','d']
['c','d','c','d','j','k','h','j','a','s','l','d']
['a','s','d','a','s','d','x','a','s','d','a']
]
I have written the following code, and it works, but I have a nagging feeling that I should be able to write a nested list comprehension to do this in fewer lines of code. Any suggestions would be appreciated.
data = []
f = open(file,'r')
for line in f:
line = line.strip().upper()
list = []
for c in line:
list.append(c)
data.append(list)
|
[
"This should help (you'll probably have to play around with it to strip the newlines or format it however you want, but the basic idea should work):\nf = open(r\"temp.txt\")\n[[c for c in line] for line in f]\n\n",
"In your case, you can use the list constructor to handle the inner loop and use list comprehension for the outer loop. Something like:\nf = open(file)\ndata = [list(line.strip().upper()) for line in f]\n\nGiven a string as input, the list constructor will create a list where each character of the string is a single element in the list.\nThe list comprehension is functionally equivalent to:\ndata = []\nfor line in f:\n data.append(list(line.strip().upper()))\n\n",
"Here is one level of list comprehension.\ndata = []\nf = open(file,'r')\n\nfor line in f:\n data.append([ch for ch in line.strip().upper()])\n\nBut we can do the whole thing on one go:\nf = open(file, 'rt')\ndata = [list(line.strip().upper()) for line in f]\n\nThis is using list() to convert a string to a list of single-character strings. We could also use nested list comprehensions, and put the open() inline:\ndata = [[ch for ch in line.strip().upper()] for line in open(file, 'rt')]\n\nAt this point, though, I think the list comprehensions are detracting from easy readability of what is going on.\nFor complicated processing, such as lists inside lists, you might want to use a for loop for the outer layer and a list comprehension for the inner loop.\nAlso, as Chris Lutz said in a comment, in this case there really isn't a reason to explicitly split each line into character lists; you can always treat a string as a list, and you can use string methods with a string, but you can't use string methods with a list. (Well, you could use ''.join() to rejoin the list back to a string, but why not just leave it as a string?)\n",
"data = [list(line.strip().upper()) for line in open(file,'r')]\n\n",
"The only really significant difference between strings and lists of characters is that strings are immutable. You can iterate over and slice strings just as you would lists. And it's much more convenient to handle strings as strings, since they support string methods and lists don't.\nSo for most applications, I wouldn't bother converting the items in data to a list; I'd just do:\ndata = [line.strip() for line in open(filename, 'r')]\n\nWhen I needed to manipulate strings in data as mutable lists, I'd use list to convert them, and join to put them back, e.g.:\ndata[2] = ''.join(sorted(list(data[2])))\n\nOf course, if all you're going to do with these strings is modify them, then go ahead, store them as lists.\n",
"First off you could combine the line.strip().upper() part with your outer for-loop, like this:\nfor line in [l.strip().upper() for l in f]:\n # do stuff\n\nThen you could make the iteration over the characters into a list comprehension, but it wouldn't be shorter or clearer. The neatest way to do what you do there is this:\nlist(someString)\n\nThus you could do:\ndata = [list(l.strip().upper()) for l in f]\n\nI don't know if it states your intentions that well though. Error handling is also an issue, the whole expression will die if there is a problem on the way.\n\nIf you don't need to store the whole file and all the lines in memory, you could make it into a generator expression. This is very useful when processing huge files and you only need to process a chunk at a time. Generator expressions use parentheses instead, like so:\ndata = (list(l.strip().upper()) for l in f)\n\ndata will become a generator which runs the expression for each line in the file, but only when you iterate over it; compare that to a list comprehension which will create a huge list in memory. Note that data is not a list, but a generator, and more a kin to a iterator in C++ or IEnumerator in C#.\nA generator can be fed into a list easily: list(someGenerator) That would defeat the purpose somewhat but is sometimes a necessity.\n",
">>> f = file('teste.txt')\n>>> print map(lambda x: [c for c in x][:-1], f)\n[['x', 'x', 'x', 'c', 'd'], ['c', 'd', 'c', 'd', 'j', 'k', 'h', 'j', 'a', 's', 'l', 'd'], ['a', 's', 'd', 'a', 's', 'd', 'x', 'a', 's', 'd']]\n\n"
] |
[
19,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"list_comprehension",
"python"
] |
stackoverflow_0001982134_list_comprehension_python.txt
|
Q:
Play Subset of audio file using Pyglet
How can I use the pyglet API for sound to play subsets of a sound file, e.g. from 1 second in to 3.5 seconds of a 6-second sound clip?
I can load a sound file and play it, and can seek to the start of the interval desired, but am wondering how to stop playback at the point indicated?
A:
It doesn't appear that pyglet has support for setting a stop time. Your options are:
Poll the current time and stop playback when you've reached your desired endpoint. This may not be precise enough for you.
Or, use a sound file library to extract the portion you want into a temporary sound file, then use pyglet to play that sound file in its entirety. Python has built-in support for .wav files (the "wave" module), or you could shell out to a command-line tool like "sox".
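A hedged sketch of the second option using the standard-library wave module (it handles .wav input only; the function name is illustrative):
import wave

def extract_clip(src, dst, start, end):
    # start/end are in seconds; writes frames [start, end) to a new file
    win = wave.open(src, 'rb')
    rate = win.getframerate()
    win.setpos(int(start * rate))
    frames = win.readframes(int((end - start) * rate))
    wout = wave.open(dst, 'wb')
    wout.setparams(win.getparams())  # nframes is corrected on close
    wout.writeframes(frames)
    wout.close()
    win.close()

extract_clip('full.wav', 'clip.wav', 1.0, 3.5)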
A:
This approach seems to work: rather than poll the current time manually to stop playback, use the pyglet clock scheduler to run a stop callback once after a given interval. This is precise enough for my use case ;-)
player = None
def stop_callback(dt):
if player != None:
player.stop()
def play_sound_interval(mp3File, start=None, end=None):
sound = pyglet.resource.media(mp3File)
global player
player = pyglet.media.ManagedSoundPlayer()
player.queue(sound)
if start != None:
player.seek(start)
if end != None and start != None:
pyglet.clock.schedule_once(stop_callback, end-start)
elif end != None and start == None:
pyglet.clock.schedule_once(stop_callback, end)
player.play()
|
Play Subset of audio file using Pyglet
|
How can I use the pyglet API for sound to play subsets of a sound file, e.g. from 1 second in to 3.5 seconds of a 6-second sound clip?
I can load a sound file and play it, and can seek to the start of the interval desired, but am wondering how to stop playback at the point indicated?
|
[
"It doesn't appear that pyglet has support for setting a stop time. Your options are:\n\nPoll the current time and stop playback when you've reached your desired endpoint. This may not be precise enough for you.\nOr, use a sound file library to extract the portion you want into a temporary sound file, then use pyglet to play that sound file in its entirety. Python has built-in support for .wav files (the \"wave\" module), or you could shell out to a command-line tool like \"sox\".\n\n",
"This approach seems to work: rather than poll the current time manually to stop playback, use the pyglet clock scheduler to run a stop callback once after a given interval. This is precise enough for my use case ;-)\nplayer = None\n\ndef stop_callback(dt):\n if player != None:\n player.stop()\n\ndef play_sound_interval(mp3File, start=None, end=None):\n sound = pyglet.resource.media(mp3File)\n global player\n player = pyglet.media.ManagedSoundPlayer()\n player.queue(sound)\n if start != None:\n player.seek(start)\n if end != None and start != None:\n pyglet.clock.schedule_once(stop_callback, end-start)\n elif end != None and start == None:\n pyglet.clock.schedule_once(stop_callback, end)\n player.play()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"audio",
"pyglet",
"python"
] |
stackoverflow_0001977521_audio_pyglet_python.txt
|
Q:
What does python print() function actually do?
I was looking at this question and started wondering what does the print actually do.
I have never found out how to use string.decode() and string.encode() to get a unicode string "out" in the python interactive shell in the same format as print does. No matter what I do, I get either
UnicodeEncodeError or
the escaped string with "\x##" notation...
This is python 2.x, but I'm already trying to mend my ways and actually call print() :)
Example:
>>> import sys
>>> a = '\xAA\xBB\xCC'
>>> print(a)
ª»Ì
>>> a.encode(sys.stdout.encoding)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xaa in position 0: ordinal not in range(128)
>>> a.decode(sys.stdout.encoding)
u'\xaa\xbb\xcc'
EDIT:
Why am I asking this? I am sick and tired of encode() errors, and I realized that print can somehow do it (at least in the interactive shell). There MUST BE A WAY to magically do the encoding PROPERLY, by digging up the info on what encoding to use from somewhere...
ADDITIONAL INFO:
I'm running Python 2.4.3 (#1, Sep 3 2009, 15:37:12) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
>>> sys.stdin.encoding
'ISO-8859-1'
>>> sys.stdout.encoding
'ISO-8859-1'
However, the results are the same with Python 2.6.2 (r262:71600, Sep 8 2009, 13:06:43) on the same linux box.
A:
EDIT: (Major changes between this edit and the previous one... Note: I'm using Python 2.6.4 on an Ubuntu box.)
Firstly, in my first attempt at an answer, I provided some general information on print and str which I'm going to leave below for the benefit of anybody having simpler issues with print and chancing upon this question. As for a new attempt at dealing with the issue experienced by the OP... Basically, I'm inclined to say that there's no silver bullet here and if print somehow manages to make sense of a weird string literal, then that's not reproducible behaviour. I'm led to this conclusion by the following funny interaction with Python in my terminal window:
>>> print '\xaa\xbb\xcc'
��
Have you tried to input ª»Ì directly from the terminal? At a Linux terminal using utf-8 as the encoding, this is actually read in as six bytes, which can then be made to look like three unicode chars with the help of the decode method:
>>> 'ª»Ì'
'\xc2\xaa\xc2\xbb\xc3\x8c'
>>> 'ª»Ì'.decode(sys.stdin.encoding)
u'\xaa\xbb\xcc'
So, the '\xaa\xbb\xcc' literal only makes sense if you decode it as a latin-1 literal (well, actually you could use a different encoding which agrees with latin-1 on the relevant characters). As for print 'just working' in your case, it certainly doesn't for me -- as mentioned above.
This is explained by the fact that when you use a string literal not prefixed with a u -- i.e. "asdf" rather than u"asdf" -- the resulting string will use some non-unicode encoding; or rather, as a matter of fact, the string object itself is going to be encoding-unaware, and you're going to have to treat it as if it was encoded with encoding x, for the correct value of x. This basic idea leads me to the following:
a = '\xAA\xBB\xCC'
a.decode('latin1')
# result: u'\xAA\xBB\xCC'
print(a.decode('latin1'))
# output: ª»Ì
Note the lack of decoding errors and the proper output (which I expect to stay proper on any other box). Apparently your string literal can be made sense of by Python, but not without some help.
Does this help? (At least in understanding how things work, if not in making the handling of encodings any easier...)
Now for some funny bits with some explanatory value (hopefully)! This works fine for me:
sys.stdout.write("\xAA\xBB\xCC".decode('latin1').encode(sys.stdout.encoding))
Skipping either the decode or the encode part results in a unicode-related exception. Theoretically speaking, this makes sense, as the first decode is needed to decide what characters there are in the given string (the only thing obvious on first sight is what bytes there are -- the Python 3 idea of having (unicode) strings for characters and bytes for, well, bytes, suddenly seems superbly reasonable), while the encode is needed so that the output respects the encoding of the output stream. Now this
sys.stdout.write("ąöî\n".decode(sys.stdin.encoding).encode(sys.stdout.encoding))
also works as expected, but the characters are actually coming from the keyboard and so are actually encoded with the stdin encoding... Also,
ord('ą'.decode('utf-8').encode('latin2'))
returns the correct 177 (my input encoding is utf-8), but '\xc4\x85'.encode('latin2') makes no sense to Python, as it has no clue as to how to make sense of '\xc4\x85' and figures that trying the 'ascii' codec is the best it can do.
The original answer:
The relevant bit of Python docs (for version 2.6.4) says that print(obj) is meant to print out the string given by str(obj). I suppose you could then wrap it in a call to unicode (as in unicode(str(obj))) to get a unicode string out -- or you could just use Python 3 and exchange this particular nuisance for a couple of different ones. ;-)
Incidentally, this shows that you can manipulate the result of printing an object just like you can manipulate the result of calling str on an object, that is by messing with the __str__ method. Example:
class Foo(object):
def __str__(self):
return "I'm a Foo!"
print Foo()
As for the actual implementation of print, I expect this won't be useful at all, but if you really want to know what's going on... It's in the file Python/bltinmodule.c in the Python sources (I'm looking at version 2.6.4). Search for a line beginning with builtin_print. It's actually entirely straightforward, no magic going on there. :-)
Hopefully this answers your question... But if you do have a more arcane problem which I'm missing entirely, do comment, I'll make a second attempt. Also, I'm assuming we're dealing with Python 2.x; otherwise I guess I wouldn't have a useful comment.
A:
print() uses sys.stdout.encoding to determine what the output console can understand and then uses this encoding in the call to str.encode().
[EDIT] If you look at the source, it gets sys.stdout and then calls:
PyFile_WriteObject(PyTuple_GetItem(args, i), file,
Py_PRINT_RAW);
I guess the magic is in Py_PRINT_RAW but the source just says:
if (flags & Py_PRINT_RAW) {
value = PyObject_Str(v);
}
So no magic here. A loop over the arguments with sys.stdout.write(str(item)) should do the trick.
A:
>>> import sys
>>> a = '\xAA\xBB\xCC'
>>> print(a)
ª»Ì
All print is doing here is writing raw bytes to sys.stdout. The string a is a string of bytes, not Unicode characters.
Why am I asking this? I am sick and tired of encode() errors and realized that since print can do it (at least in the interactive shell). I know that the MUST BE A WAY to magically do the encoding PROPERLY, by digging the info what encoding to use from somewhere...
Alas no, print is doing nothing at all magical here. You hand it some bytes, it dumps the bytes to stdout.
To use .encode() and .decode() properly, you need to understand the difference between bytes and characters, and I'm afraid you do have to figure out the correct encoding to use.
A:
import sys
source_file_encoding = 'latin-1' # if there is no -*- coding: ... -*- line
a = '\xaa\xbb\xcc' # raw bytes that represent string in source_file_encoding
# print bytes, my terminal tries to interpret it as 'utf-8'
sys.stdout.write(a+'\n')
# -> ��
ua = a.decode(source_file_encoding)
sys.stdout.write(ua.encode(sys.stdout.encoding)+'\n')
# -> ª»Ì
See Defining Python Source Code Encodings
|
What does python print() function actually do?
|
I was looking at this question and started wondering what does the print actually do.
I have never found out how to use string.decode() and string.encode() to get an unicode string "out" in the python interactive shell in the same format as the print does. No matter what I do, I get either
UnicodeEncodeError or
the escaped string with "\x##" notation...
This is python 2.x, but I'm already trying to mend my ways and actually call print() :)
Example:
>>> import sys
>>> a = '\xAA\xBB\xCC'
>>> print(a)
ª»Ì
>>> a.encode(sys.stdout.encoding)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xaa in position 0: ordinal not in range(128)
>>> a.decode(sys.stdout.encoding)
u'\xaa\xbb\xcc'
EDIT:
Why am I asking this? I am sick and tired of encode() errors and realized that since print can do it (at least in the interactive shell). I know that the MUST BE A WAY to magically do the encoding PROPERLY, by digging the info what encoding to use from somewhere...
ADDITIONAL INFO:
I'm running Python 2.4.3 (#1, Sep 3 2009, 15:37:12) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
>>> sys.stdin.encoding
'ISO-8859-1'
>>> sys.stdout.encoding
'ISO-8859-1'
However, the results are the same with Python 2.6.2 (r262:71600, Sep 8 2009, 13:06:43) on the same linux box.
|
[
"EDIT: (Major changes between this edit and the previous one... Note: I'm using Python 2.6.4 on an Ubuntu box.)\nFirstly, in my first attempt at an answer, I provided some general information on print and str which I'm going to leave below for the benefit of anybody having simpler issues with print and chancing upon this question. As for a new attempt at dealing with the issue experienced by the OP... Basically, I'm inclined to say that there's no silver bullet here and if print somehow manages to make sense of a weird string literal, then that's not reproducible behaviour. I'm led to this conclusion by the following funny interaction with Python in my terminal window:\n>>> print '\\xaa\\xbb\\xcc'\n��\n\nHave you tried to input ª»Ì directly from the terminal? At a Linux terminal using utf-8 as the encoding, this is actually read in as six bytes, which can then be made to look like three unicode chars with the help of the decode method:\n>>> 'ª»Ì'\n'\\xc2\\xaa\\xc2\\xbb\\xc3\\x8c'\n>>> 'ª»Ì'.decode(sys.stdin.encoding)\nu'\\xaa\\xbb\\xcc'\n\nSo, the '\\xaa\\xbb\\xcc' literal only makes sense if you decode it as a latin-1 literal (well, actually you could use a different encoding which agrees with latin-1 on the relevant characters). As for print 'just working' in your case, it certainly doesn't for me -- as mentioned above.\nThis is explained by the fact that when you use a string literal not prefixed with a u -- i.e. \"asdf\" rather than u\"asdf\" -- the resulting string will use some non-unicode encoding. No; as a matter of fact, the string object itself is going to be encoding-unaware, and you're going to have to treat it as if it was encoded with encoding x, for the correct value of x. This basic idea leads me to the following:\na = '\\xAA\\xBB\\xCC'\na.decode('latin1')\n# result: u'\\xAA\\xBB\\xCC'\nprint(a.decode('latin1'))\n# output: ª»Ì\n\nNote the lack of decoding errors and the proper output (which I expect to be stay proper at any other box). Apparently your string literal can be made sense of by Python, but not without some help.\nDoes this help? (At least in understanding how things work, if not in making the handling of encodings any easier...)\n\nNow for some funny bits with some explanatory value (hopefully)! This works fine for me:\nsys.stdout.write(\"\\xAA\\xBB\\xCC\".decode('latin1').encode(sys.stdout.encoding))\n\nSkipping either the decode or the encode part results in a unicode-related exception. Theoretically speaking, this makes sense, as the first decode is needed to decide what characters there are in the given string (the only thing obvious on first sight is what bytes there are -- the Python 3 idea of having (unicode) strings for characters and bytes for, well, bytes, suddenly seems superbly reasonable), while the encode is needed so that the output respects the encoding of the output stream. Now this\nsys.stdout.write(\"ąöî\\n\".decode(sys.stdin.encoding).encode(sys.stdout.encoding))\n\nalso works as expected, but the characters are actually coming from the keyboard and so are actually encoded with the stdin encoding... Also,\nord('ą'.decode('utf-8').encode('latin2'))\n\nreturns the correct 177 (my input encoding is utf-8), but '\\xc4\\x85'.encode('latin2') makes no sense to Python, as it has no clue as to how to make sense of '\\xc4\\x85' and figures that trying the 'ascii' code is the best it can do.\n\nThe original answer:\nThe relevant bit of Python docs (for version 2.6.4) says that print(obj) is meant to print out the string given by str(obj). 
I suppose you could then wrap it in a call to unicode (as in unicode(str(obj))) to get a unicode string out -- or you could just use Python 3 and exchange this particular nuisance for a couple of different ones. ;-)\nIncidentally, this shows that you can manipulate the result of printing an object just like you can manipulate the result of calling str on an object, that is by messing with the __str__ method. Example:\nclass Foo(object):\n def __str__(self):\n return \"I'm a Foo!\"\n\nprint Foo()\n\nAs for the actual implementation of print, I expect this won't be useful at all, but if you really want to know what's going on... It's in the file Python/bltinmodule.c in the Python sources (I'm looking at version 2.6.4). Search for a line beginning with builtin_print. It's actually entirely straightforward, no magic going on there. :-)\nHopefully this answers your question... But if you do have a more arcane problem which I'm missing entirely, do comment, I'll make a second attempt. Also, I'm assuming we're dealing with Python 2.x; otherwise I guess I wouldn't have a useful comment.\n",
"print() uses sys.stdout.encoding to determine what the output console can understand and then uses this encoding in the call to str.encode().\n[EDIT] If you look at the source, it gets sys.stdout and then calls:\nPyFile_WriteObject(PyTuple_GetItem(args, i), file,\n Py_PRINT_RAW);\n\nI guess the magic is in Py_PRINT_RAW but the source just says:\n if (flags & Py_PRINT_RAW) {\n value = PyObject_Str(v);\n }\n\nSo no magic here. A loop over the arguments with sys.stdout.write(str(item)) should do the trick.\n",
">>> import sys\n>>> a = '\\xAA\\xBB\\xCC'\n>>> print(a)\nª»Ì\n\nAll print is doing here is writing raw bytes to sys.stdout. The string a is a string of bytes, not Unicode characters.\n\nWhy am I asking this? I am sick and tired of encode() errors and realized that since print can do it (at least in the interactive shell). I know that the MUST BE A WAY to magically do the encoding PROPERLY, by digging the info what encoding to use from somewhere...\n\nAlas no, print is doing nothing at all magical here. You hand it some bytes, it dumps the bytes to stdout.\nTo use .encode() and .decode() properly, you need to understand the difference between bytes and characters, and I'm afraid you do have to figure out the correct encoding to use.\n",
"import sys\n\nsource_file_encoding = 'latin-1' # if there is no -*- coding: ... -*- line\n\na = '\\xaa\\xbb\\xcc' # raw bytes that represent string in source_file_encoding\n\n# print bytes, my terminal tries to interpret it as 'utf-8'\nsys.stdout.write(a+'\\n') \n# -> ��\n\nua = a.decode(source_file_encoding)\nsys.stdout.write(ua.encode(sys.stdout.encoding)+'\\n')\n# -> ª»Ì\n\nSee Defining Python Source Code Encodings\n"
] |
[
9,
5,
2,
0
] |
[] |
[] |
[
"printing",
"python",
"python_2.x",
"unicode"
] |
stackoverflow_0001979234_printing_python_python_2.x_unicode.txt
|
Q:
python skipping inner loop in nested for loop
I am using some python to do some variable name generation. For some reason I am only getting part of what I need.
import sys
import csv
params = csv.reader(open('params.csv'), delimiter=',', skipinitialspace=True)
flags_r = []
flags_w = []
numbers_r = []
numbers_w = []
station = ['AC1','DC1','DC1']
drive = ['','Fld','Arm']
for i in range(3):
for p in params:
try:
desc = p[1].split(' ')
desc = [part.capitalize() for part in desc]
desc = "".join(desc)
except IndexError, e:
print 'IndexError: %s' %(e,)
continue
print station[i],drive[i],p[0]
flags_r.append( 'mod%(station)s_%(drive)sP%(param)04dr_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
flags_w.append( 'mod%(station)s_%(drive)sP%(param)04dw_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
numbers_r.append( 'mod%(station)s_%(drive)sP%(param)04drn_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
numbers_w.append( 'mod%(station)s_%(drive)sP%(param)04dwn_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
print i
params.csv:
100, Speed Reference
101, Speed Feedback
for some reason it is outputting:
AC1 100
AC1 101
0
1
2
the reason for the try/except is to catch any blank rows or missing second fields in the csv file.
It appears that the inner loop only get executed on the first pass. The only reason I can see for this to happen would be the try/except as I have done an interactive example to test it.
A:
In the first iteration of the outer loop you read all lines from params. In the second iteration all the lines from params are already read, so there is nothing left to iterate over in the inner loop.
To work around this, you could load all the data sets into a list and then iterate over that list:
reader = csv.reader(open('params.csv'), delimiter=',', skipinitialspace=True)
params = list(reader)
A:
Make sure params is a list and not an iterator.
>>> s = (i for i in range(10))
>>> for ss in s: print(ss)
0
...
9
>>> for ss in s: print(ss)
# Nothing!
A:
on the first pass the reader buffer gets exhausted, so there is nothing else to read since you reached end of file.
You need to read your file in before the loops
|
python skipping inner loop in nested for loop
|
I am using some python to do some variable name generation. For some reason I am only getting part of what I need.
import sys
import csv
params = csv.reader(open('params.csv'), delimiter=',', skipinitialspace=True)
flags_r = []
flags_w = []
numbers_r = []
numbers_w = []
station = ['AC1','DC1','DC1']
drive = ['','Fld','Arm']
for i in range(3):
for p in params:
try:
desc = p[1].split(' ')
desc = [part.capitalize() for part in desc]
desc = "".join(desc)
except IndexError, e:
print 'IndexError: %s' %(e,)
continue
print station[i],drive[i],p[0]
flags_r.append( 'mod%(station)s_%(drive)sP%(param)04dr_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
flags_w.append( 'mod%(station)s_%(drive)sP%(param)04dw_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
numbers_r.append( 'mod%(station)s_%(drive)sP%(param)04drn_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
numbers_w.append( 'mod%(station)s_%(drive)sP%(param)04dwn_%(desc)s' % \
{ 'station' : station[i], 'drive' : drive[i], 'param': int(p[0]), 'desc':desc })
print i
params.csv:
100, Speed Reference
101, Speed Feedback
for some reason it is outputting:
AC1 100
AC1 101
0
1
2
the reason for the try/except is to catch any blank rows or missing second fields in the csv file.
It appears that the inner loop only get executed on the first pass. The only reason I can see for this to happen would be the try/except as I have done an interactive example to test it.
|
[
"In the first iteration of the outer loop you read all lines from params. In the second iteration all the lines from params are already read, so there is nothing left to iterate over in the inner loop.\nTo work around this, you could load all the data sets into a list and then iterate over that list:\nreader = csv.reader(open('params.csv'), delimiter=',', skipinitialspace=True)\nparams = list(reader)\n\n",
"Make sure params is a list and not an iterator.\n>>> s = (i for i in range(10))\n>>> for ss in s: print(ss)\n\n0\n...\n9\n>>> for ss in s: print(ss)\n\n# Nothing!\n\n",
"on the first pass the reader buffer gets exhausted, so there is nothing else to read since you reached end of file.\nYou need to read your file in before the loops\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"nested_loops",
"python"
] |
stackoverflow_0001982506_nested_loops_python.txt
|
Q:
Easy way to implement dynamic views?
View are useful constructions of Python 3. For those who never noticed (like me): for a dictionary d you can write k = d.keys() and even if you update d the variable k will still be giving you the updated keys. You can write then k1 & k2 and it will always give you d1.keys() & d2.keys()
I want to implement this for my personal todo manager, but I would like to make complex views dynamic, or lazily evaluated. That is, I have some views called so, post and priority and I want to be able to write:
now = so | phone & priority
so that later, when the __repr__(now) is called, evaluation is performed only at that point.
My first thought was to overload the logical operators so I changed View.__and__ to create a new view that remembers itself being a composite of two subviews and applies & to them at computation. But there seem to be quite a lot of logical operators, so I'm not sure if I'm doing the right thing.
Is there a standard library class that would help me with that? How can I simplify the process?
A:
Well, there is a collection.UserList class which defines up most of them, perhaps that would mean you don't have to override all of them.
A:
There is no "easy" way to do so, especially if you want the lazy behaviour as stated. But still, there aren't that many logical operators, only three of them: __and__, __or__ and __xor__.
(for additional efficiency you can optionally implement the in-place versions __iand__, __ior__ and __ixor__, but if you don't the normal versions will be invoked as a fallback)
|
Easy way to implement dynamic views?
|
View are useful constructions of Python 3. For those who never noticed (like me): for a dictionary d you can write k = d.keys() and even if you update d the variable k will still be giving you the updated keys. You can write then k1 & k2 and it will always give you d1.keys() & d2.keys()
I want to implement this for my personal todo manager, but I would like to make complex views dynamic, or lazily evaluated. That is, I have some views called so, post and priority and I want to be able to write:
now = so | phone & priority
so that later, when the __repr__(now) is called, evaluation is performed only at that point.
My first thought was to overload the logical operators so I changed View.__and__ to create a new view that remembers itself being a composite of two subviews and applies & to them at computation. But there seem to be quite a lot of logical operators, so I'm not sure if I'm doing the right thing.
Is there a standard library class that would help me with that? How can I simplify the process?
|
[
"Well, there is a collection.UserList class which defines up most of them, perhaps that would mean you don't have to override all of them.\n",
"There is no \"easy\" way to do so, especially if you want the lazy behaviour as stated. But still, there aren't that many logical operators, only three of them: __and__, __or__ and __xor__.\n(for additional efficiency you can optionally implement the in-place versions __iand__, __ior__ and __ixor__, but if you don't the normal versions will be invoked as a fallback)\n"
] |
[
1,
1
] |
[] |
[] |
[
"arrays",
"python",
"python_3.x",
"set"
] |
stackoverflow_0001054428_arrays_python_python_3.x_set.txt
|
Q:
How to get Python code to work with C++ App?
I have the following python 3 file:
import base64
import xxx
str = xxx.GetString()
str2 = base64.b64encode(str.encode())
str3 = str2.decode()
print str3
xxx is a module exported by some C++ code. This script does not work because calling Py_InitModule on this script returns NULL. The weird thing is if I create a stub xxx.py in the same directory
def GetString() :
return "test"
and run the original script under python.exe, it works and outputs the base64 string. My question is why doesn't it like the return value of xxx.GetString? In the C++ code, it returns a string object. I hope I have explained my question well enough... this is a strange error.
A:
I know everybody says this...but:
Boost has an awesome library for exposing classes to python and getting data to and fro. If you're having problems, and looking for alternatives is an option I'd highly recommend the boost python library of the C interface. I've used them both, boost wins hands down imo.
A:
Er... You have to investigate why Py_InitModule returns NULL. Posting the Python code using that module won't help.
A:
Py_InitModule() is for initializing extension modules written in C, which is not what you are looking for here. If you want to import a module from C, there is a wealth of functions available in the C API: http://docs.python.org/c-api/import.html
But if your aim is really to run a script rather than import a module, you could also use one of the PyRun_XXX() functions described here: http://docs.python.org/c-api/veryhigh.html
|
How to get Python code to work with C++ App?
|
I have the following python 3 file:
import base64
import xxx
str = xxx.GetString()
str2 = base64.b64encode(str.encode())
str3 = str2.decode()
print str3
xxx is a module exported by some C++ code. This script does not work because calling Py_InitModule on this script returns NULL. The weird thing is if I create a stub xxx.py in the same directory
def GetString() :
return "test"
and run the original script under python.exe, it works and outputs the base64 string. My question is why doesn't it like the return value of xxx.GetString? In the C++ code, it returns a string object. I hope I have explained my question well enough... this is a strange error.
|
[
"I know everybody says this...but:\nBoost has an awesome library for exposing classes to python and getting data to and fro. If you're having problems, and looking for alternatives is an option I'd highly recommend the boost python library of the C interface. I've used them both, boost wins hands down imo. \n",
"Er... You have to investigate why Py_InitModule returns NULL. Posting the Python code using that module won't help.\n",
"Py_InitModule() is for initializing extension modules written in C, which is not what you are looking for here. If you want to import a module from C, there is a wealth of functions available in the C API: http://docs.python.org/c-api/import.html\nBut if your aim is really to run a script rather than import a module, you could also use one of the PyRun_XXX() functions described here: http://docs.python.org/c-api/veryhigh.html\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"c++",
"python",
"string"
] |
stackoverflow_0001978967_c++_python_string.txt
|
Q:
custom prompt in python
I have a script which when first run creates a new thread which logs certain events. After the thread has been created, I ask for some input on user with the following code:
user_input = raw_input('>> ')
So when the script it run the user receives the '>>' prompt, but when the logger from the created thread starts logging it starts to look like this:
>> Error: Some random error
Error: Another error
As you can see it messes-up and lost the prompt. What I wanna do is have the log messages appear but still maintain my cursor on the prompt ready for some input. like:
>>
Error: Some random error
Error: Another error
>>
A:
The problem is that your raw_input() is running on a completely different thread, and has no idea that the logger just spewed out some log messages. So there is no way for raw_input() to know that it should re-draw the prompt.
I don't have any simple solution for you. All I can think of is for the logger thread to not print the messages, but append them to some shared list that the raw_input() thread can see, and have that thread print the messages after the user enters input.
Actually, I do have one other idea: you could draw the >> prompt up at the top of the window, and have the logger messages appear lower in the window. If they are spatially separated, it doesn't matter if they are interleaved in time.
A:
The obvious (and probably the only correct) solution is to log into a separate file, rather than on standard output.
If you still want to observe the logging output in real time, you can use a command such as tail -f on the logging file.
|
custom prompt in python
|
I have a script which when first run creates a new thread which logs certain events. After the thread has been created, I ask for some input on user with the following code:
user_input = raw_input('>> ')
So when the script it run the user receives the '>>' prompt, but when the logger from the created thread starts logging it starts to look like this:
>> Error: Some random error
Error: Another error
As you can see it messes-up and lost the prompt. What I wanna do is have the log messages appear but still maintain my cursor on the prompt ready for some input. like:
>>
Error: Some random error
Error: Another error
>>
|
[
"The problem is that your raw_input() is running on a completely different thread, and has no idea that the logger just spewed out some log messages. So there is no way for raw_input() to know that it should re-draw the prompt.\nI don't have any simple solution for you. All I can think of is for the logger thread to not print the messages, but append them to some shared list that the raw_input() thread can see, and have that thread print the messages after the user enters input.\nActually, I do have one other idea: you could draw the >> prompt up at the top of the window, and have the logger messages appear lower in the window. If they are spatially separated, it doesn't matter if they are interleaved in time.\n",
"The obvious (and probably the only correct) solution is to log into a separate file, rather than on standard output.\nIf you still want to observe the logging output in real time, you can use a command such as tail -f on the logging file.\n"
] |
[
2,
1
] |
[] |
[] |
[
"command_line",
"python"
] |
stackoverflow_0001982601_command_line_python.txt
|
Q:
What's the best way to set up symbolic links to current installs, e.g python -> python2.6
What's the best way to set up symbolic links to current installs, e.g python -> python2.6?
I've just installed python2.6 through Macports at /opt/local/bin/python2.6, I'd now like to set up a symbolic link called python here /usr/local/bin/. I then want to be able to add this line at the beginning of my pythons scripts so it knows where to look: #!/usr/local/bin/python. But what happens when I upgrade python to python2.7 for example, do I just need to remember to go to my symbolic link and change it? I guess I'll remember because it likely won't work anymore? Is there a better way to do this?
A:
By default, MacPorts deliberately and carefully installs everything into a separate directory space: /opt/local. This ensures it does not conflict with anything installed as part of OS X or third-parties. To ensure that MacPorts-installed executables are found first, the recommended solution is to modify your shell PATH to put /opt/local/bin before /usr/bin.
MacPorts also provides a special port package, python_select, to manage which python version is pointed to by the command python in /opt/local/bin.
sudo port install python_select
sudo python_select
Then, to make your scripts use your current preferred python, the traditional solution is to use the env program in the shebang line of your scripts.
#!/usr/bin/env python
A:
Symlink the version you use most.
When you need another version, run it by specifying the version number, e.g.:
$ python2.5 dev_appserver.py myapp
|
What's the best way to set up symbolic links to current installs, e.g python -> python2.6
|
What's the best way to set up symbolic links to current installs, e.g python -> python2.6?
I've just installed python2.6 through Macports at /opt/local/bin/python2.6, I'd now like to set up a symbolic link called python here /usr/local/bin/. I then want to be able to add this line at the beginning of my pythons scripts so it knows where to look: #!/usr/local/bin/python. But what happens when I upgrade python to python2.7 for example, do I just need to remember to go to my symbolic link and change it? I guess I'll remember because it likely won't work anymore? Is there a better way to do this?
|
[
"By default, MacPorts deliberately and carefully installs everything into a separate directory space: /opt/local. This ensures it does not conflict with anything installed as part of OS X or third-parties. To ensure that MacPorts-installed executables are found first, the recommended solution is to modify your shell PATH to put /opt/local/bin before /usr/bin.\nMacPorts also provides a special port package, python_select, to manage which python version is pointed to by the command python in /opt/local/bin.\nsudo port install python_select\nsudo python_select\n\nThen, to make your scripts use your current preferred python, the traditional solution is to use the env program in the shebang line of your scripts.\n#!/usr/bin/env python\n\n",
"Symlink the version you use most.\nWhen you need another version, run it by specifying the version number, e.g.:\n$ python2.5 dev_appserver.py myapp\n\n"
] |
[
4,
2
] |
[
"Not sure about OSX, here is what I do on Ubuntu 9.04:\n>which python\n#/usr/bin/python\n\nJust replace that file with a sym link to the version of Python you actually want to use:\n>sudo ln -s /usr/bin/python2.6/python /usr/bin/python\n\n"
] |
[
-3
] |
[
"macos",
"macports",
"python"
] |
stackoverflow_0001982176_macos_macports_python.txt
|
Q:
What makes pylint think my class is abstract?
As I understand it, Python (2.5.2) does not have real support for abstract classes. Why is pylint complaining about this class being an "Abstract class not reference?" Will it do this for any class that has NotImplementedError thrown?
I have each class in its own file so if this is the case I guess I have no choice but to suppress this message but I am hoping there is maybe another way around it.
"""Package Repository interface."""
class PackageRepository(object):
"""Package Repository interface."""
def __init__(self):
self.hello = "world"
def get_package(self, package_id):
"""
Get a package by ID.
"""
raise NotImplementedError( \
"get_package() method has not been implemented")
def get_packages(self):
"""
Get all packages.
"""
raise NotImplementedError( \
"get_packages() method has not been implemented")
def commit(self):
"""
Commit all changes.
"""
raise NotImplementedError( \
"commit() method has not been implemented")
def do_something(self):
"""
Doing something.
"""
return self.hello
EDIT
Perhaps I should clarify. I realize this is an abstract class and I would love to use the abstract keyword but as I understand it none of that matters in Python (at least in the version I am currently using) so I didn't bother doing any funny abstract tricks (like those found here) and simply left it out.
I was surprised to see that pylint picks up on the fact that this is an abstract class on its own. What makes pylint determine this is an abstract class? Is it simply looking for NotImplementedError being thrown somewhere?
A:
FWIW, raising NotImplementedError is enough to make pylint think this is an abstract class (which is absolutely correct). from logilab.org/card/pylintfeatures: W0223: Method %r is abstract in class %r but is not overridden Used when an abstract method (ie raise NotImplementedError) is not overridden in concrete class. – Tobiesque 2 hours ago
A:
In my experience, pylint is a bit over-zealous, and isn't useful until you've turned off a number of the warnings.
|
What makes pylint think my class is abstract?
|
As I understand it, Python (2.5.2) does not have real support for abstract classes. Why is pylint complaining about this class being an "Abstract class not reference?" Will it do this for any class that has NotImplementedError thrown?
I have each class in its own file so if this is the case I guess I have no choice but to suppress this message but I am hoping there is maybe another way around it.
"""Package Repository interface."""
class PackageRepository(object):
"""Package Repository interface."""
def __init__(self):
self.hello = "world"
def get_package(self, package_id):
"""
Get a package by ID.
"""
raise NotImplementedError( \
"get_package() method has not been implemented")
def get_packages(self):
"""
Get all packages.
"""
raise NotImplementedError( \
"get_packages() method has not been implemented")
def commit(self):
"""
Commit all changes.
"""
raise NotImplementedError( \
"commit() method has not been implemented")
def do_something(self):
"""
Doing something.
"""
return self.hello
EDIT
Perhaps I should clarify. I realize this is an abstract class and I would love to use the abstract keyword but as I understand it none of that matters in Python (at least in the version I am currently using) so I didn't bother doing any funny abstract tricks (like those found here) and simply left it out.
I was surprised to see that pylint picks up on the fact that this is an abstract class on its own. What makes pylint determine this is an abstract class? Is it simply looking for NotImplementedError being thrown somewhere?
|
[
"FWIW, raising NotImplementedError is enough to make pylint think this is an abstract class (which is absolutely correct). from logilab.org/card/pylintfeatures: W0223: Method %r is abstract in class %r but is not overridden Used when an abstract method (ie raise NotImplementedError) is not overridden in concrete class. – Tobiesque 2 hours ago\n",
"In my experience, pylint is a bit over-zealous, and isn't useful until you've turned off a number of the warnings.\n"
] |
[
12,
1
] |
[] |
[] |
[
"pylint",
"python"
] |
stackoverflow_0001981978_pylint_python.txt
|
Q:
Django, grouping query items
say I have such model:
class Foo(models.Model):
name = models.CharField("name",max_length=25)
type = models.IntegerField("type number")
after doing some query like Foo.objects.filter(), I want to group the query result as such:
[ [{"name":"jb","type:"whiskey"},{"name":"jack daniels","type:"whiskey"}],
[{"name":"absolute","type:"vodka"},{name:"smirnoff ":"vodka"}],
[{name:"tuborg","type":beer}]
]
So as you can see, grouping items as list of dictionaries. List of group query lists intead of dictionary would also be welcome :)
Regards
A:
You can do this with the values method of a queryset:
http://docs.djangoproject.com/en/1.1/ref/models/querysets/#values-fields
values(*fields)
Returns a ValuesQuerySet -- a QuerySet
that evaluates to a list of
dictionaries instead of model-instance
objects.
A:
Check out the regroup template tag. If you want to do the grouping for display in your template then this should be what you need. Otherwise you can read the source to see how they accomplish the grouping.
A:
You can do the grouping in your view by using itertools.groupby().
A:
You can sort of do this by using order_by:
Foo.objects.order_by( "type" );
drinks = Foo.objects.all( )
Now you have an array of drinks ordered by type. You could use this or write a function to create the structure you want without having to sort it with a linear scan.
|
Django, grouping query items
|
say I have such model:
class Foo(models.Model):
name = models.CharField("name",max_length=25)
type = models.IntegerField("type number")
after doing some query like Foo.objects.filter(), I want to group the query result as such:
[ [{"name":"jb","type:"whiskey"},{"name":"jack daniels","type:"whiskey"}],
[{"name":"absolute","type:"vodka"},{name:"smirnoff ":"vodka"}],
[{name:"tuborg","type":beer}]
]
So as you can see, grouping items as list of dictionaries. List of group query lists intead of dictionary would also be welcome :)
Regards
|
[
"You can do this with the values method of a queryset:\nhttp://docs.djangoproject.com/en/1.1/ref/models/querysets/#values-fields\n\nvalues(*fields)\nReturns a ValuesQuerySet -- a QuerySet\n that evaluates to a list of\n dictionaries instead of model-instance\n objects.\n\n",
"Check out the regroup template tag. If you want to do the grouping for display in your template then this should be what you need. Otherwise you can read the source to see how they accomplish the grouping.\n",
"You can do the grouping in your view by using itertools.groupby().\n",
"You can sort of do this by using order_by:\nFoo.objects.order_by( \"type\" );\ndrinks = Foo.objects.all( )\n\nNow you have an array of drinks ordered by type. You could use this or write a function to create the structure you want without having to sort it with a linear scan.\n"
] |
[
3,
2,
2,
1
] |
[] |
[] |
[
"django",
"django_models",
"django_views",
"python"
] |
stackoverflow_0001981718_django_django_models_django_views_python.txt
|
Q:
Clutter does not update screen outside of breakpoints
I have some code:
l1 = clutter.Label()
l1.set_position(100,100)
for i in range(0,10):
l1.set_text(str(i))
time.sleep(1)
That is designed to show a count from 1 to 10 seconds on the screen in clutter, but I'm getting a strange error. When I run the script normally the screen runs as it should do, but there is no text displayed until 10 seconds are up. However, When I run with breakpoints in pdb the text shows up just fine.
I'm also getting a strange error at the start of the program:
do_wait: drmWaitVBlank returned -1, IRQs don't seem to be working correctly.
Try adjusting the vlank_mode configuration parameter.
But I don't see why that would affect the code out of break points but not in breakpoints.
Any help would be greatly appreciated.
A:
Not sure if you've already figured out the answer to this but:
The reason you are having this problem is because you are blocking the main thread (where all the drawing occurs) with your time.sleep() calls, preventing the library from redrawing the screen.
E.g. your code is currently doing this:
Clutter redraws the screen.
You loop over ten seconds and change the text ten times.
Clutter redraws the screen.
If you want to queue something on a timer, you should look into gobject.timeout_add.
A:
Have you tried posting (or searching) on the Clutter mailing list? Here's someone who got the same message about drmWaitVBlank for example.
My guess is most people on SO wouldn't be familiar with solving Clutter problems. I know I'm not :)
|
Clutter does not update screen outside of breakpoints
|
I have some code:
l1 = clutter.Label()
l1.set_position(100,100)
for i in range(0,10):
l1.set_text(str(i))
time.sleep(1)
That is designed to show a count from 1 to 10 seconds on the screen in clutter, but I'm getting a strange error. When I run the script normally the screen runs as it should do, but there is no text displayed until 10 seconds are up. However, When I run with breakpoints in pdb the text shows up just fine.
I'm also getting a strange error at the start of the program:
do_wait: drmWaitVBlank returned -1, IRQs don't seem to be working correctly.
Try adjusting the vlank_mode configuration parameter.
But I don't see why that would affect the code out of break points but not in breakpoints.
Any help would be greatly appreciated.
|
[
"Not sure if you've already figured out the answer to this but:\nThe reason you are having this problem is because you are blocking the main thread (where all the drawing occurs) with your time.sleep() calls, preventing the library from redrawing the screen.\nE.g. your code is currently doing this:\n\nClutter redraws the screen.\nYou loop over ten seconds and change the text ten times.\nClutter redraws the screen.\n\nIf you want to queue something on a timer, you should look into gobject.timeout_add.\n",
"Have you tried posting (or searching) on the Clutter mailing list? Here's someone who got the same message about drmWaitVBlank for example.\nMy guess is most people on SO wouldn't be familiar with solving Clutter problems. I know I'm not :)\n"
] |
[
4,
0
] |
[] |
[] |
[
"clutter_gui",
"graphics",
"linux",
"python"
] |
stackoverflow_0001446554_clutter_gui_graphics_linux_python.txt
|
Q:
Parsing Python Code From Within Python?
We have an older C++ tool that generates some python code automatically. I tried to slog through the C++ source tool, today and pretty much wanted to shoot my self. The thing is what i want to do, is clean up the source created by the tool and link the classes to our internal documentation system via adding sphinx tags.
Now what i am wondering is there some sort of wonderful tool for parsing python code within python?
There is alot of stuff like this:
foo._methods_.append()
Snip 500 lines
foo._methods_.append()
ANy suggestions?
Basically i have a functional but insanely messy code structure, i basically want to extract certain chunks, move them to their own files. And cleanup all the miscellanous things that get created.
I looked quickyl at both parser and AST but i cannot find any real examples of it being used.
A:
You may tokenize python code to parse individual tokens using tokenize module. e.g. Script to remove Python comments/docstrings
or you can use the parser module
or use ast module
A:
Depending on your needs, you may also want to check out the 2to3 library. It was written to automatically facilitate the conversion of Python 2.x apps to Python 3.0, so its main use case is taking one Python source file, performing some transformations on it, and then spitting out the result source file.
One benefit that lib2to3 has over the ast module is that ast does not preserve whitespace and comments, whereas lib2to3 does. If you're already dealing with autogenerated code this might not be a problem for you.
A:
Try one of the Python doc-generating utilities to see if this might help you with your overall problem. I've used epydoc with great success.
Or if you can get your boss to part with $200, buy a copy of Enterprise Architect from SparxSystems. It will reverse-engineer your Python code, generate a class diagram, and allow you to click on the class diagram and see the underlying code. Works for many languages besides Python, and an excellent design and documentation utility. (There is a $99 version, but this does not include the code import capability.)
|
Parsing Python Code From Within Python?
|
We have an older C++ tool that generates some python code automatically. I tried to slog through the C++ source tool, today and pretty much wanted to shoot my self. The thing is what i want to do, is clean up the source created by the tool and link the classes to our internal documentation system via adding sphinx tags.
Now what i am wondering is there some sort of wonderful tool for parsing python code within python?
There is alot of stuff like this:
foo._methods_.append()
Snip 500 lines
foo._methods_.append()
ANy suggestions?
Basically i have a functional but insanely messy code structure, i basically want to extract certain chunks, move them to their own files. And cleanup all the miscellanous things that get created.
I looked quickyl at both parser and AST but i cannot find any real examples of it being used.
|
[
"You may tokenize python code to parse individual tokens using tokenize module. e.g. Script to remove Python comments/docstrings \nor you can use the parser module \nor use ast module\n",
"Depending on your needs, you may also want to check out the 2to3 library. It was written to automatically facilitate the conversion of Python 2.x apps to Python 3.0, so its main use case is taking one Python source file, performing some transformations on it, and then spitting out the result source file.\nOne benefit that lib2to3 has over the ast module is that ast does not preserve whitespace and comments, whereas lib2to3 does. If you're already dealing with autogenerated code this might not be a problem for you.\n",
"Try one of the Python doc-generating utilities to see if this might help you with your overall problem. I've used epydoc with great success.\nOr if you can get your boss to part with $200, buy a copy of Enterprise Architect from SparxSystems. It will reverse-engineer your Python code, generate a class diagram, and allow you to click on the class diagram and see the underlying code. Works for many languages besides Python, and an excellent design and documentation utility. (There is a $99 version, but this does not include the code import capability.)\n"
] |
[
16,
3,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001978515_python.txt
|
Q:
Regex for getting content between $ chars from a text
The problem:
I need to extract strings that are between $ characters from a block of text, but i'm a total n00b when it comes to regular expressions.
For instance from this text:
Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$ myth.
i would like to get an array consisting of:
{'es membres', 'separat existentie es un'}
A little snippet in Python would be great.
A:
Import the re module, and use findall():
>>> import re
>>> p = re.compile('\$(.*?)\$')
>>> s = "apple $banana$ coconut $delicious ethereal$ funkytown"
>>> p.findall(s)
['banana', 'delicious ethereal']
The pattern p represents a dollar sign (\$), then a non-greedy match group ((...?)) which matches characters (.) of which there must be zero or more (*), followed by another dollar sign (\$).
A:
You can use re.findall:
>>> re.findall(r'\$(.*?)\$', s)
['es membres', 'separat existentie es un']
A:
The regex below captures everything between the $ characters non-greedily
\$(.*?)\$
A:
import re;
m = re.findall('\$([^$]*)\$','Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$ myth');
A:
Alternative without regexes which works for this simple case:
>>> s="Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$"
>>> s.split("$")[1::2]
['es membres', 'separat existentie es un']
Just split the string on '$' (this gives you a python list) and then only use every 'second' element of this list.
|
Regex for getting content between $ chars from a text
|
The problem:
I need to extract strings that are between $ characters from a block of text, but i'm a total n00b when it comes to regular expressions.
For instance from this text:
Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$ myth.
i would like to get an array consisting of:
{'es membres', 'separat existentie es un'}
A little snippet in Python would be great.
|
[
"Import the re module, and use findall():\n>>> import re\n>>> p = re.compile('\\$(.*?)\\$')\n>>> s = \"apple $banana$ coconut $delicious ethereal$ funkytown\"\n>>> p.findall(s)\n['banana', 'delicious ethereal']\n\nThe pattern p represents a dollar sign (\\$), then a non-greedy match group ((...?)) which matches characters (.) of which there must be zero or more (*), followed by another dollar sign (\\$).\n",
"You can use re.findall:\n>>> re.findall(r'\\$(.*?)\\$', s)\n['es membres', 'separat existentie es un']\n\n",
"The regex below captures everything between the $ characters non-greedily \n\\$(.*?)\\$\n",
"import re;\nm = re.findall('\\$([^$]*)\\$','Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$ myth');\n\n",
"Alternative without regexes which works for this simple case:\n>>> s=\"Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$\"\n>>> s.split(\"$\")[1::2]\n['es membres', 'separat existentie es un']\n\nJust split the string on '$' (this gives you a python list) and then only use every 'second' element of this list.\n"
] |
[
5,
3,
1,
1,
0
] |
[
"Valid regex demo in Perl:\nmy $a = 'Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$ myth.';\nmy @res;\nwhile ($a =~ /\\$([^\\$]+)\\$/gos)\n{\n push(@res, $1);\n}\n\nforeach my $item (@res)\n{\n print \"item: $item\\n\";\n}\n\nflags: s - treat all input text as single line, g - global\n"
] |
[
-1
] |
[
"python",
"regex"
] |
stackoverflow_0001983126_python_regex.txt
|
Q:
Spam Filtering Forms Without Akismet
I'm curious if anyone out there knows of something perhaps like Akismet, but where content doesn't have to go off to a 3rd party server. In a situation with critically sensitive data (patient records for instance) I wouldn't necessarily want that information sent off to another server I don't have control over. I really like Akismet, it works great for the most part. However, I need something more like a local instance of Akismet that's private, and able to be updated semi-regularly. Even better if it works with Python since I need this to interface with Django applications. Should I just go the route of SpamBayes?
A:
Have you looked at Project Honey Pot? I think they have some public querying services that you can use.
I think Project Honey Pot aims is to stop spam before it evens gets to your content processing routine (IP checking/headers analyzing/bot traps and such). It might fits what you're trying to do.
Another one I've heard of is Spamato. It can runs as a standalone proxy, I've never really tried it out though, but you could route content through your its proxy instance and gets the spam filtered.
A:
I can't think of any critically sensitive data that could be submitted by anonymous users. If the data is really sensitive (like you mentioned patient records), it is probably submitted by known and registered user so you should do manual approval of new users and protect the registration part from spammers.
|
Spam Filtering Forms Without Akismet
|
I'm curious if anyone out there knows of something perhaps like Akismet, but where content doesn't have to go off to a 3rd party server. In a situation with critically sensitive data (patient records for instance) I wouldn't necessarily want that information sent off to another server I don't have control over. I really like Akismet, it works great for the most part. However, I need something more like a local instance of Akismet that's private, and able to be updated semi-regularly. Even better if it works with Python since I need this to interface with Django applications. Should I just go the route of SpamBayes?
|
[
"Have you looked at Project Honey Pot? I think they have some public querying services that you can use.\nI think Project Honey Pot aims is to stop spam before it evens gets to your content processing routine (IP checking/headers analyzing/bot traps and such). It might fits what you're trying to do.\nAnother one I've heard of is Spamato. It can runs as a standalone proxy, I've never really tried it out though, but you could route content through your its proxy instance and gets the spam filtered.\n",
"I can't think of any critically sensitive data that could be submitted by anonymous users. If the data is really sensitive (like you mentioned patient records), it is probably submitted by known and registered user so you should do manual approval of new users and protect the registration part from spammers.\n"
] |
[
4,
1
] |
[] |
[] |
[
"django",
"python",
"spam_prevention"
] |
stackoverflow_0001982277_django_python_spam_prevention.txt
|
Q:
DBus-Cherrypy merge issue
I'm using python-dbus and cherrypy to monitor USB devices and provide a REST service that will maintain status on the inserted USB devices. I have written and debugged these services independently, and they work as expected.
Now, I'm merging the services into a single application. My problem is:
I cannot seem to get both services ( cherrypy and dbus ) to start together. One or the other blocks or goes out of scope, or doesn't get initialized.
I've tried encapsulating each in its own thread, and just call start on them. This has some bizarre issues.
class RESTThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
cherrypy.config.update({ 'server.socket_host': HVR_Common.DBUS_SERVER_ADDR, 'server.socket_port': HVR_Common.DBUS_SERVER_PORT, })
cherrypy.quickstart(USBRest())
class DBUSThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
DBusGMainLoop(set_as_default=True)
loop = gobject.MainLoop()
DeviceAddedListener()
print 'Starting DBus'
loop.run()
print 'DBus Python Started'
if __name__ == '__main__':
# Start up REST
print 'Starting REST'
rs = RESTThread()
rs.start()
db = DBUSThread()
db.start()
#cherrypy.config.update({ 'server.socket_host': HVR_Common.DBUS_SERVER_ADDR, 'server.socket_port': HVR_Common.DBUS_SERVER_PORT, })
#cherrypy.quickstart(USBRest())
while True:
x = 1
When this code is run, the cherrypy code doesn't fully initialize. When a USB device is inserted, cherrypy continues to initialize ( as if the threads are linked somehow ), but doesn't work ( doesn't serve up data or even make connections on the port ) I've looked at cherrypys wiki pages but haven't found a way to startup cherrypy in such a way that it inits, and returns, so I can init the DBus stuff an be able to get this out the door.
My ultimate question is: Is there a way to get cherrypy to start and not block but continue working? I want to get rid of the threads in this example and init both cherrypy and dbus in the main thread.
A:
Yes; don't use cherrypy.quickstart. Instead, unpack it:
cherrypy.config.update(conf)
cherrypy.tree.mount(USBREST())
cherrypy.engine.start()
Quickstart does the above, but finishes by calling engine.block(). If your program has some main loop other than CherryPy's, omit the call to engine.block and you should be fine. However, when your foreign main loop terminates, you'll still want to call cherrypy.engine.stop():
loop = gobject.MainLoop()
try:
loop.run()
finally:
cherrypy.engine.stop()
There are some other gotchas, like whether CherryPy should handle Ctrl-C and other signals, and whether it should autoreload. Those behaviors are up to you, and are all fairly easy to enable/disable. See the cherrypy.quickstart() source code for some of them.
A:
I figured this out. Apparently, there are a bunch of thread contention problems in glib. If you make an app that has DBusGMainLoop in it, then you cannot create another thread in your app. The new thread blocks immediately when start() is called on it. No amount of massaging will get the new thread to run.
I found a site that had an obscure reference to dbus.mainloop.glib.threads_init(), and how this must be called before initializing a new thread. However a new problem is uncovered when this is tried. An exception is thrown that says g_thread_init() must be called before dbus.mainloop.glib.threads_init() is called. More searching uncovered another obscure reference to gobject.threads_init(). It seemed to fit, so after much experimentation, I discovered the correct sequence.
Here's the solution.
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
gobject.threads_init()
dbus.mainloop.glib.threads_init()
DBUSMAINLOOP = gobject.MainLoop()
print 'Creating DBus Thread'
DBUSLOOPTHREAD = threading.Thread(name='glib_mainloop', target=DBUSMAINLOOP.run)
DBUSLOOPTHREAD.start()
print 'Starting REST'
cherrypy.config.update({ 'server.socket_host': Common.DBUS_SERVER_ADDR, 'server.socket_port': Common.DBUS_SERVER_PORT, })
cherrypy.quickstart(USBRest())
Gosh what a nightmare. Now to make it better.
|
DBus-Cherrypy merge issue
|
I'm using python-dbus and cherrypy to monitor USB devices and provide a REST service that will maintain status on the inserted USB devices. I have written and debugged these services independently, and they work as expected.
Now, I'm merging the services into a single application. My problem is:
I cannot seem to get both services ( cherrypy and dbus ) to start together. One or the other blocks or goes out of scope, or doesn't get initialized.
I've tried encapsulating each in its own thread, and just call start on them. This has some bizarre issues.
class RESTThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
cherrypy.config.update({ 'server.socket_host': HVR_Common.DBUS_SERVER_ADDR, 'server.socket_port': HVR_Common.DBUS_SERVER_PORT, })
cherrypy.quickstart(USBRest())
class DBUSThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
DBusGMainLoop(set_as_default=True)
loop = gobject.MainLoop()
DeviceAddedListener()
print 'Starting DBus'
loop.run()
print 'DBus Python Started'
if __name__ == '__main__':
# Start up REST
print 'Starting REST'
rs = RESTThread()
rs.start()
db = DBUSThread()
db.start()
#cherrypy.config.update({ 'server.socket_host': HVR_Common.DBUS_SERVER_ADDR, 'server.socket_port': HVR_Common.DBUS_SERVER_PORT, })
#cherrypy.quickstart(USBRest())
while True:
x = 1
When this code is run, the cherrypy code doesn't fully initialize. When a USB device is inserted, cherrypy continues to initialize ( as if the threads are linked somehow ), but doesn't work ( doesn't serve up data or even make connections on the port ) I've looked at cherrypys wiki pages but haven't found a way to startup cherrypy in such a way that it inits, and returns, so I can init the DBus stuff an be able to get this out the door.
My ultimate question is: Is there a way to get cherrypy to start and not block but continue working? I want to get rid of the threads in this example and init both cherrypy and dbus in the main thread.
|
[
"Yes; don't use cherrypy.quickstart. Instead, unpack it:\ncherrypy.config.update(conf)\ncherrypy.tree.mount(USBREST())\ncherrypy.engine.start()\n\nQuickstart does the above, but finishes by calling engine.block(). If your program has some main loop other than CherryPy's, omit the call to engine.block and you should be fine. However, when your foreign main loop terminates, you'll still want to call cherrypy.engine.stop():\nloop = gobject.MainLoop()\ntry:\n loop.run()\nfinally:\n cherrypy.engine.stop()\n\nThere are some other gotchas, like whether CherryPy should handle Ctrl-C and other signals, and whether it should autoreload. Those behaviors are up to you, and are all fairly easy to enable/disable. See the cherrypy.quickstart() source code for some of them.\n",
"I figured this out. Apparently, there are a bunch of thread contention problems in glib. If you make an app that has DBusGMainLoop in it, then you cannot create another thread in your app. The new thread blocks immediately when start() is called on it. No amount of massaging will get the new thread to run.\nI found a site that had an obscure reference to dbus.mainloop.glib.threads_init(), and how this must be called before initializing a new thread. However a new problem is uncovered when this is tried. An exception is thrown that says g_thread_init() must be called before dbus.mainloop.glib.threads_init() is called. More searching uncovered another obscure reference to gobject.threads_init(). It seemed to fit, so after much experimentation, I discovered the correct sequence.\nHere's the solution.\ndbus.mainloop.glib.DBusGMainLoop(set_as_default=True)\ngobject.threads_init()\ndbus.mainloop.glib.threads_init() \nDBUSMAINLOOP = gobject.MainLoop()\n\nprint 'Creating DBus Thread'\nDBUSLOOPTHREAD = threading.Thread(name='glib_mainloop', target=DBUSMAINLOOP.run)\nDBUSLOOPTHREAD.start()\n\nprint 'Starting REST'\ncherrypy.config.update({ 'server.socket_host': Common.DBUS_SERVER_ADDR, 'server.socket_port': Common.DBUS_SERVER_PORT, })\ncherrypy.quickstart(USBRest())\n\nGosh what a nightmare. Now to make it better.\n"
] |
[
3,
3
] |
[] |
[] |
[
"cherrypy",
"dbus",
"python"
] |
stackoverflow_0001976622_cherrypy_dbus_python.txt
|
Q:
Which operations call the special methods in the following code?
I know some of these; for example:
__mod__ will be called by /
__eq__ will be called by ==, >, and <
but I don't know all of them.
def __nonzero__(self):
# an image is "true" if it contains at least one non-zero pixel
return self.im.getbbox() is not None
def __abs__(self):
return self.apply("abs", self)
def __pos__(self):
return self
def __neg__(self):
return self.apply("neg", self)
# binary operators
def __add__(self, other):
return self.apply("add", self, other)
def __radd__(self, other):
return self.apply("add", other, self)
def __sub__(self, other):
return self.apply("sub", self, other)
def __rsub__(self, other):
return self.apply("sub", other, self)
def __mul__(self, other):
return self.apply("mul", self, other)
def __rmul__(self, other):
return self.apply("mul", other, self)
def __div__(self, other):
return self.apply("div", self, other)
def __rdiv__(self, other):
return self.apply("div", other, self)
def __mod__(self, other):
return self.apply("mod", self, other)
def __rmod__(self, other):
return self.apply("mod", other, self)
def __pow__(self, other):
return self.apply("pow", self, other)
def __rpow__(self, other):
return self.apply("pow", other, self)
# bitwise
def __invert__(self):
return self.apply("invert", self)
def __and__(self, other):
return self.apply("and", self, other)
def __rand__(self, other):
return self.apply("and", other, self)
def __or__(self, other):
return self.apply("or", self, other)
def __ror__(self, other):
return self.apply("or", other, self)
def __xor__(self, other):
return self.apply("xor", self, other)
def __rxor__(self, other):
return self.apply("xor", other, self)
def __lshift__(self, other):
return self.apply("lshift", self, other)
def __rshift__(self, other):
return self.apply("rshift", self, other)
# logical
def __eq__(self, other):
return self.apply("eq", self, other)
def __ne__(self, other):
return self.apply("ne", self, other)
def __lt__(self, other):
return self.apply("lt", self, other)
def __le__(self, other):
return self.apply("le", self, other)
def __gt__(self, other):
return self.apply("gt", self, other)
def __ge__(self, other):
return self.apply("ge", self, other)
A:
Section 3.4 of the Python Language Reference covers the magic methods.
A:
See the Special Method Names section in the reference manual, including Basic Customization and Emulating Numeric Types.
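For instance, here is a toy class (made up purely for illustration) that shows which operator triggers which method:
class Num(object):
    def __init__(self, v):
        self.v = v
    def __add__(self, other):        # triggered by the + operator
        print '__add__ called'
        return Num(self.v + other.v)
    def __mod__(self, other):        # triggered by the % operator
        print '__mod__ called'
        return Num(self.v % other.v)

Num(7) + Num(3)   # prints '__add__ called'
Num(7) % Num(3)   # prints '__mod__ called'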
A:
__mod__ is called for %, not for / as you state:
>>> class x(int):
... def __mod__(self, y):
... print '__mod__(%s, %s)' % (self, y)
... return int.__mod__(self, y)
...
>>> a = x(23)
>>> a / 4
5
>>> a % 4
__mod__(23, 4)
3
>>>
Make and use similar toy classes to clarify any doubt you may have about a special method, if any are left after you read the docs.
|
Which operations call the special methods in the following code?
|
I know some of these; for example:
__mod__ will be called by /
__eq__ will be called by ==, >, and <
but I don't know all of them.
def __nonzero__(self):
# an image is "true" if it contains at least one non-zero pixel
return self.im.getbbox() is not None
def __abs__(self):
return self.apply("abs", self)
def __pos__(self):
return self
def __neg__(self):
return self.apply("neg", self)
# binary operators
def __add__(self, other):
return self.apply("add", self, other)
def __radd__(self, other):
return self.apply("add", other, self)
def __sub__(self, other):
return self.apply("sub", self, other)
def __rsub__(self, other):
return self.apply("sub", other, self)
def __mul__(self, other):
return self.apply("mul", self, other)
def __rmul__(self, other):
return self.apply("mul", other, self)
def __div__(self, other):
return self.apply("div", self, other)
def __rdiv__(self, other):
return self.apply("div", other, self)
def __mod__(self, other):
return self.apply("mod", self, other)
def __rmod__(self, other):
return self.apply("mod", other, self)
def __pow__(self, other):
return self.apply("pow", self, other)
def __rpow__(self, other):
return self.apply("pow", other, self)
# bitwise
def __invert__(self):
return self.apply("invert", self)
def __and__(self, other):
return self.apply("and", self, other)
def __rand__(self, other):
return self.apply("and", other, self)
def __or__(self, other):
return self.apply("or", self, other)
def __ror__(self, other):
return self.apply("or", other, self)
def __xor__(self, other):
return self.apply("xor", self, other)
def __rxor__(self, other):
return self.apply("xor", other, self)
def __lshift__(self, other):
return self.apply("lshift", self, other)
def __rshift__(self, other):
return self.apply("rshift", self, other)
# logical
def __eq__(self, other):
return self.apply("eq", self, other)
def __ne__(self, other):
return self.apply("ne", self, other)
def __lt__(self, other):
return self.apply("lt", self, other)
def __le__(self, other):
return self.apply("le", self, other)
def __gt__(self, other):
return self.apply("gt", self, other)
def __ge__(self, other):
return self.apply("ge", self, other)
|
[
"Section 3.4 of the Python Language Reference covers the magic methods.\n",
"See the Special Method Names section in the reference manual, including Basic Customization and Emulating Numeric Types.\n",
"__mod__ is called for %, not for / as you state:\n>>> class x(int):\n... def __mod__(self, y):\n... print '__mod__(%s, %s)' % (self, y)\n... return int.__mod__(self, y)\n... \n>>> a = x(23)\n>>> a / 4\n5\n>>> a % 4\n__mod__(23, 4)\n3\n>>> \n\nMake and use similar toy classes to clarify any doubt you may have about a special method, if any are left after you read the docs.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001983199_python.txt
|
Q:
Python ASCII Graph Drawing
I'm looking for a library to draw ASCII graphs (for use in a console) with Python. The graph is quite simple: it's only a flow chart for pipelines.
I saw NetworkX and igraph, but didn't see a way to output to ascii.
Do you have experience in this?
Thanks a lot!
Patrick
EDIT 1:
I actually found a library doing what I need, but it's in Perl: Graph::Easy. I could call the code from Python, but I don't like that idea too much... still looking for a Python solution :)
A:
ascii-plotter might do what you want...
A:
When you say 'simple network graph in ascii', do you mean something like this?
.===. .===. .===. .===.
| a |---| b |---| c |---| d |
'===' '===' '---' '==='
I suspect there are probably better ways to display whatever information it is that you have than to try and draw it on the console. If it's just a pipeline, why not just print out:
a-b-c-d
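For instance, if the pipeline stages are in a list (names assumed), a one-liner does it:
stages = ['a', 'b', 'c', 'd']
print '-'.join(stages)   # prints: a-b-c-d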
If you're sure this is the route, one thing you could try would be to generate a decent graph using Matplotlib and then post the contents to one of the many image-to-ascii converters you can find on the web.
A:
It's not directly Python based, but you should take a look into the artist-mode of emacs
artist-mode video
artist-mode site
You can control emacs from python with pymacs, or you can take a look at the lisp code and draw some inspiration.
|
Python ASCII Graph Drawing
|
I'm looking for a library to draw ASCII graphs (for use in a console) with Python. The graph is quite simple: it's only a flow chart for pipelines.
I saw NetworkX and igraph, but didn't see a way to output to ascii.
Do you have experience in this?
Thanks a lot!
Patrick
EDIT 1:
I actually found a library doing what I need, but it's in Perl: Graph::Easy. I could call the code from Python, but I don't like that idea too much... still looking for a Python solution :)
|
[
"ascii-plotter might do what you want...\n",
"When you say 'simple network graph in ascii', do you mean something like this?\n.===. .===. .===. .===.\n| a |---| b |---| c |---| d |\n'===' '===' '---' '==='\n\nI suspect there are probably better ways to display whatever information it is that you have than to try and draw it on the console. If it's just a pipeline, why not just print out:\na-b-c-d\n\nIf you're sure this is the route, one thing you could try would be to generate a decent graph using Matplotlib and then post the contents to one of the many image-to-ascii converters you can find on the web.\n",
"It's not directly Python based, but you should take a look into the artist-mode of emacs\n\nartist-mode video\nartist-mode site\n\nYou can control emacs from python with pymacs, or you can take a look at the lisp code and draw some inspiration.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"ascii",
"graph",
"python"
] |
stackoverflow_0000834395_ascii_graph_python.txt
|
Q:
I can't find '_weakref.py'; where is '_weakref.py'?
from _weakref import (
getweakrefcount,
getweakrefs,
ref,
proxy,
CallableProxyType,
ProxyType,
ReferenceType)
A:
It's not a Python-coded module, it's a C-coded Python-extension module.
You can read the extension module's C source code here.
A:
_weakref is a C module that comes with Python. Having said that, you should never import a module starting with an underscore directly; import the Python module and let it deal with the C module as needed, in this case weakref.
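For example, a quick sketch of using the wrapper module, which pulls in the C module for you:
import weakref

class Thing(object):
    pass

t = Thing()
r = weakref.ref(t)   # the same ref type that _weakref provides in C
print r()            # the original object, while it is still alive
del t
print r()            # None once the referent has been collected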
|
I can't find '_weakref.py'; where is '_weakref.py'?
|
from _weakref import (
getweakrefcount,
getweakrefs,
ref,
proxy,
CallableProxyType,
ProxyType,
ReferenceType)
|
[
"It's not a Python-coded module, it's a C-coded Python-extension module.\nYou can read the extension module's C source code here.\n",
"_weakref is a C module that comes with Python. Having said that, you should never import a module starting with an underscore directly; import the Python module and let it deal with the C module as needed, in this case weakref.\n"
] |
[
3,
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001983332_python.txt
|
Q:
Why does my code loop forever with iter(c, a)?
def c():
yield 222
yield 333
a=[1,2,3,4]
b=iter(c,333)
print a,b
for i in b:
print i
How can I get this to work?
A:
You need to provide the function (which will return the next value) to your iter() call. In your case, that's c().next rather than c.
This snippet below works as expected by producing all the yielded values up to, but exclusive of, the terminating value.
def generator():
yield 1
yield 2
yield 3
yield -1
sequence = iter (generator().next, -1)
print sequence
for value in sequence:
print value
The output of that is:
pax> python prog1.py
<callable-iterator object at 0xb77dd6ac>
1
2
3
pax> _
A:
You didn't call c().
Your question is cryptic. I don't know what you expect.
Please, edit the question and add information about what you thought that would do, and what you've observed instead.
A:
iter takes a callable and a sentinel and calls the callable repeatedly. Calling c repeatedly creates new generators which is not what you want. You want to call c once and then repeatedly call the next function, so try this instead:
def c():
yield 222
yield 333
a=[1,2,3,4]
b=iter(c().next, 333)
print a,b
for i in b:
print i
Output:
222
A:
Because c never returns 333.
When using a sentinel with iter, the first argument must be a callable, and iter will yield the return value of that callable until the value equals the sentinel. Here, calling c returns a brand-new generator object each time, which is never equal to 333, so the loop never ends.
What you'd like to do should be something like:
def c():
yield 222
yield 333
a=[1,2,3,4]
b=iter(c().next,333)
print a,b
for i in b:
print i
|
Why does my code loop forever with iter(c, a)?
|
def c():
yield 222
yield 333
a=[1,2,3,4]
b=iter(c,333)
print a,b
for i in b:
print i
How can I get this to work?
|
[
"You need to provide the function (which will return the next value) to your iter() call. In your case, that's c().next rather than c.\nThis snippet below works as expected by producing all the yielded values up to, but exclusive of, the terminating value.\ndef generator():\n yield 1\n yield 2\n yield 3\n yield -1\n\nsequence = iter (generator().next, -1)\nprint sequence\nfor value in sequence:\n print value\n\nThe output of that is:\npax> python prog1.py\n <callable-iterator object at 0xb77dd6ac>\n 1\n 2\n 3\npax> _\n\n",
"You didn't call c().\nYour question is cryptic. I don't know what you expect.\nPlease, edit the question and add information about what you thought that would do, and what you've observed instead.\n",
"iter takes a callable and a sentinel and calls the callable repeatedly. Calling c repeatedly creates new generators which is not what you want. You want to call c once and then repeatedly call the next function, so try this instead:\ndef c():\n yield 222\n yield 333\n\na=[1,2,3,4]\nb=iter(c().next, 333)\nprint a,b\nfor i in b:\n print i\n\nOutput:\n222\n\n",
"Because c never returns 333. \nWhen using a sentinel in iter the first argument must be a callable, and iter will yield the returning value of the first argument until this value is equal to the sentinel.\nWhat you'd like to do should be something like:\ndef c():\n yield 222\n yield 333\n\n\na=[1,2,3,4]\nb=iter(c().next,333)\nprint a,b\nfor i in b:\n print i\n\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001983543_python.txt
|
Q:
Python Encapsulate a function to Print to a variable
If I have a function that contains a lot of print statements:
ie.
def funA():
print "Hi"
print "There"
print "Friend"
print "!"
What I want to do is something like this
def main():
##funA() does not print to screen here
a = getPrint(funA()) ##where getPrint is some made up function/object
print a ##prints what funA would normally print at this step
So when funA gets called it doesn't do any printing; instead the output goes to an object. I then print the object to get the result. Is there a way of doing this? I also do not want to touch the original function.
I hope it makes sense.
A:
You can do almost exactly what you want, as long as you don't mind a tiny syntax difference:
import cStringIO
import sys
def getPrint(thefun, *a, **k):
savstdout = sys.stdout
sys.stdout = cStringIO.StringIO()
try:
thefun(*a, **k)
finally:
v = sys.stdout.getvalue()
sys.stdout = savstdout
return v
The tiny difference is that you must call getPrint(funA), not getPrint(funA()) -- i.e., you must pass the function object itself, without the trailing parentheses that would call it immediately, before getPrint can do its magic.
If you absolutely insist on those extra parentheses, then getPrint cannot do all the needed preparation, and must be supplemented by other code to prepare things right (I strongly recommend losing the extra parentheses, thus enabling the encapsulation of all the functionality inside getPrint!).
A:
from cStringIO import StringIO
def getPrint(func, *args, **kwds):
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
func(*args, **kwds)
except:
raise
else:
return sys.stdout.getvalue()
finally:
sys.stdout = old_stdout
#...
a = getPrint(funA) # notice no (), it is called by getPrint
print a.rstrip("\n") # avoid extra trailing lines
A:
Best way is to do a context manager
from contextlib import contextmanager
import StringIO
import sys
@contextmanager
def capture():
old_stdout = sys.stdout
sys.stdout = StringIO.StringIO()
try:
yield sys.stdout
finally:
sys.stdout = old_stdout
Now you can run any printing code:
with capture() as c:
funA()
funB()
print 'HELLO!'
then later:
print c.getvalue()
A:
Replace sys.stdout with a file-like object.
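A minimal sketch of that idea (Capture is a made-up name; anything with a write method will do):
import sys

class Capture(object):
    def __init__(self):
        self.chunks = []
    def write(self, text):         # the only method print statements need
        self.chunks.append(text)

cap = Capture()
old, sys.stdout = sys.stdout, cap
try:
    funA()
finally:
    sys.stdout = old
a = ''.join(cap.chunks)
print a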
A:
Use cStringIO ( see doc ).
from cStringIO import StringIO
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()
getPrint( funA() )
# use mystdout to get string
A:
The simplest thing is to change your funA() to not print anything, but simply to return the string values.
Like so:
def funA():
return "Hi\n" + "There\n" + "Friend\n" + "!\n"
# later:
print(funA())
It's always easy to collect strings and print them; it's trickier to collect strings as they are being printed.
If you have a huge body of existing printing functions, then yeah, use one of the tricks provided here to collect the output.
|
Python Encapsulate a function to Print to a variable
|
If I have a function that contains a lot of print statements:
ie.
def funA():
print "Hi"
print "There"
print "Friend"
print "!"
What I want to do is something like this
def main():
##funA() does not print to screen here
a = getPrint(funA()) ##where getPrint is some made up function/object
print a ##prints what funA would normally print at this step
So when funA gets called it doesn't do any printing; instead the output goes to an object. I then print the object to get the result. Is there a way of doing this? I also do not want to touch the original function.
I hope it makes sense.
|
[
"You can do almost exactly what you want, as long as you don't mind a tiny syntax difference:\nimport cStringIO\nimport sys\n\ndef getPrint(thefun, *a, **k):\n savstdout = sys.stdout\n sys.stdout = cStringIO.StringIO()\n try:\n thefun(*a, **k)\n finally:\n v = sys.stdout.getvalue()\n sys.stdout = savstdout\n return v\n\nThe tiny difference is that you must call getPrint(funA), not getPrint(funA()) -- i.e., you must pass the function object itself, without the trailing parentheses that would call it immediately, before getPrint can do its magic.\nIf you absolutely insist on those extra parentheses, then getPrint cannot do all the needed preparation, and must be supplemented by other code to prepare things right (I strongly recommend losing the extra parentheses, thus enabling the encapsulation of all the functionality inside getPrint!).\n",
"from cStringIO import StringIO\n\ndef getPrint(func, *args, **kwds):\n old_stdout = sys.stdout\n sys.stdout = StringIO()\n try:\n func(*args, **kwds)\n except:\n raise\n else:\n return sys.stdout.getvalue()\n finally:\n sys.stdout = old_stdout\n\n#...\na = getPrint(funA) # notice no (), it is called by getPrint\nprint a.rstrip(\"\\n\") # avoid extra trailing lines\n\n",
"Best way is to do a context manager\nfrom contextlib import contextmanager\nimport StringIO\nimport sys\n\n@contextmanager\ndef capture():\n old_stdout = sys.stdout\n sys.stdout = StringIO.StringIO()\n try:\n yield sys.stdout\n finally:\n sys.stdout = old_stdout\n\nNow you can run any printing code:\nwith capture() as c:\n funA()\n funB()\n print 'HELLO!'\n\nthen later: \nprint c.getvalue()\n\n",
"Replace sys.stdout with a file-like object.\n",
"Use cStringIO ( see doc ).\nfrom cStringIO import StringIO\n\nold_stdout = sys.stdout\nsys.stdout = mystdout = StringIO()\n\ngetPrint( funA() )\n# use mystdout to get string\n\n",
"The simplest thing is to change your funA() to not print anything, but simply to return the string values.\nLike so:\ndef funA():\n return \"Hi\\n\" + \"There\\n\" + \"Friend\\n\" + \"!\\n\"\n\n# later:\nprint(funA())\n\nIt's always easy to collect strings and print them; it's tricker to to collect strings as they are being printed.\nIf you have a huge body of existing printing functions, then yeah, use one of the tricks provided here to collect the output.\n"
] |
[
8,
3,
2,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001983401_python.txt
|
Q:
"Unknown column 'user_id' error in django view
I'm having an error where I am not sure what caused it.
Here is the error:
Exception Type: OperationalError
Exception Value:
(1054, "Unknown column 'user_id' in 'field list'")
Does anyone know why I am getting this error? I can't figure it out. Everything seems to be fine.
My view code is below:
if "login" in request.session:
t = request.POST.get('title', '')
d = request.POST.get('description', '')
fid = request.session["login"]
fuser = User.objects.get(id=fid)
i = Idea(user=fuser, title=t, description=d, num_votes=1)
i.save()
return HttpResponse("true", mimetype="text/plain")
else:
return HttpResponse("false", mimetype="text/plain")
I appreciate any help! Thanks!
Edit: Also a side question. Do I use objects.get(id= or objects.get(pk= ? If I use a primary key, do I need to declare an id field or an index in the model?
Edit: Here are the relevant models:
class User (models.Model):
first_name = models.CharField(max_length=200)
last_name = models.CharField(max_length=200)
email = models.CharField(max_length=200)
password = models.CharField(max_length=200)
class Idea (models.Model):
user = models.ForeignKey(User)
title = models.CharField(max_length=200)
description = models.CharField(max_length=255)
num_votes = models.IntegerField()
A:
The user_id field is the FK reference from Idea to User. It looks like you've changed your model, and not updated your database, then you'll have this kind of problem.
Drop the old table, rerun syncdb.
Your model tables get an id field by default. You can call it id in your queries. You can also use the synonym of pk.
If you define your own primary key field, you don't get the automatic id field. But you can still use pk to refer to the primary key.
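For example, with the Idea model above, these two lookups are equivalent:
idea = Idea.objects.get(id=1)
idea = Idea.objects.get(pk=1)   # pk always means "the primary key", whatever it is named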
A:
You'll have to show your models to get real help, but it looks like your Idea table doesn't have a user_id column? Did you modify the SQL table structure?
A:
Yes, I dropped the tables and it all worked great. However, you have to actually go into the database and DROP them. "manage.py flush" or "manage.py reset appname" won't do it by themselves.
-Nick O
|
"Unknown column 'user_id' error in django view
|
I'm having an error where I am not sure what caused it.
Here is the error:
Exception Type: OperationalError
Exception Value:
(1054, "Unknown column 'user_id' in 'field list'")
Does anyone know why I am getting this error? I can't figure it out. Everything seems to be fine.
My view code is below:
if "login" in request.session:
t = request.POST.get('title', '')
d = request.POST.get('description', '')
fid = request.session["login"]
fuser = User.objects.get(id=fid)
i = Idea(user=fuser, title=t, description=d, num_votes=1)
i.save()
return HttpResponse("true", mimetype="text/plain")
else:
return HttpResponse("false", mimetype="text/plain")
I appreciate any help! Thanks!
Edit: Also a side question. Do I use objects.get(id= or objects.get(pk= ? If I use a primary key, do I need to declare an id field or an index in the model?
Edit: Here are the relevant models:
class User (models.Model):
first_name = models.CharField(max_length=200)
last_name = models.CharField(max_length=200)
email = models.CharField(max_length=200)
password = models.CharField(max_length=200)
class Idea (models.Model):
user = models.ForeignKey(User)
title = models.CharField(max_length=200)
description = models.CharField(max_length=255)
num_votes = models.IntegerField()
|
[
"\nThe user_id field is the FK reference from Idea to User. It looks like you've changed your model, and not updated your database, then you'll have this kind of problem.\nDrop the old table, rerun syncdb.\nYour model tables get an id field by default. You can call it id in your queries. You can also use the synonym of pk.\nIf you define your own primary key field you, you don't get the automatic id field. But you can still use pk to refer to the Primary Key.\n\n",
"You'll have to show your models to get real help, but it looks like your Idea table doesn't have a user_id column? Did you modify the SQL table structure?\n",
"Yes, I dropped the tables and it all worked great. However, you have to actually go into the database and DROP them. \"manage.py flush\" or \"manage.py reset appname\" won't do it by themselves.\n-Nick O\n"
] |
[
6,
4,
0
] |
[] |
[] |
[
"django",
"django_models",
"model",
"python",
"view"
] |
stackoverflow_0000293300_django_django_models_model_python_view.txt
|
Q:
wxPython GridSizer not attached to panel?
I'm trying to build a level editor for a game I'm working on. It pulls data from a flat file and then based on a byte-by-byte search it'll assemble a grid from pre-set tiles. This part of the app I should have no issues with. The problem is that my test version of the editor which just loads a 16x16 grid of test tiles from 00 to FF is loading in the wrong place.
Example: The app frame looks like this:
|-T-------|
| | |
| | |
| | |
| | |
|---------|
Excusing my horrible ASCII art, the frame essentially has a horizontal sizer on it, with two vertical sizers, one for the left and one for the right. Each of these holds a panel, and each panel has a second sizer in it. The left panel's sizer has a 64-pixel-wide spacer in it, then a gridsizer which is later filled with images. The right panel's sizer is user-resizable but has a minimum width of 960 pixels via a spacer, then a gridsizer whose dimensions are determined by the level width and height in code.
Essentially, for each side - there's a gridsizer inside a sizer which has a spacer for the width of the section, which are on a panel that's inside a sizer for that half of the sizer that's on the main frame. I hope this makes sense, as it confuses me at times :P
Here's the code that does all this:
#Horizontal sizer
self.h_sizer = wx.BoxSizer(wx.HORIZONTAL)
#Vertical sizer
self.v_sizer_left = wx.BoxSizer(wx.VERTICAL)
self.v_sizer_right = wx.BoxSizer(wx.VERTICAL)
#Create the 2 panels
self.leftPanel = wx.ScrolledWindow(self, style = wx.VSCROLL | wx.ALWAYS_SHOW_SB)
self.leftPanel.EnableScrolling(False, True)
self.rightPanel = wx.ScrolledWindow(self, style = wx.VSCROLL | wx.ALWAYS_SHOW_SB)
self.rightPanel.EnableScrolling(False, True)
#Create a sizer for the contents of the left panel
self.lps = wx.BoxSizer(wx.VERTICAL)
self.lps.Add((64, 0)) #Add a spacer into the sizer to force it to be 64px wide
self.leftPanelSizer = wx.GridSizer(256, 1, 2, 2) # horizontal rows, vertical rows, vgap, hgap
self.lps.Add(self.leftPanelSizer) #Add the tiles grid to the left panel sizer
self.leftPanel.SetSizerAndFit(self.lps) #Set the leftPanel to use LeftPanelSizer (it's not lupus) as the sizer
self.leftPanel.SetScrollbars(0,66,0, 0) #Add the scrollbar, increment in 64 pixel bits plus the 2 spacer pixels
self.leftPanel.SetAutoLayout(True)
self.lps.Fit(self.leftPanel)
#Create a sizer for the contents of the right panel
self.rps = wx.BoxSizer(wx.VERTICAL)
self.rps.Add((960, 0)) #Set it's width and height to be the window's, for now, with a spacer
self.rightPanelSizer = wx.GridSizer(16, 16, 0, 0) # horizontal rows, vertical rows, vgap, hgap
self.rps.Add(self.rightPanelSizer) #Add the level grid to the right panel sizer
self.rightPanel.SetSizerAndFit(self.rps) #Set the rightPanel to use RightPanelSizer as the sizer
self.rightPanel.SetScrollbars(64,64,0, 0) #Add the scrollbar, increment in 64 pixel bits - vertical and horizontal
self.rightPanel.SetAutoLayout(True)
self.rps.Fit(self.rightPanel)
#Debugging purposes. Colours :)
self.leftPanel.SetBackgroundColour((0,255,0))
self.rightPanel.SetBackgroundColour((0,128,128))
#Add the left panel to the left vertical sizer, tell it to resize with the window (0 does not resize, 1 does). Do not expand the sizer on this side.
self.v_sizer_left.Add(self.leftPanel, 1)
#Add the right panel to the right vertical sizer, tell it to resize with the window (0 does not resize, 1 does) Expand the sizer to fit as much as possible on this side.
self.v_sizer_right.Add(self.rightPanel, 1, wx.EXPAND)
#Now add the 2 vertical sizers to the horizontal sizer.
self.h_sizer.Add(self.v_sizer_left, 0, wx.EXPAND) #First the left one...
self.h_sizer.Add((0,704)) #Add in a spacer between the two to get the vertical window size correct...
self.h_sizer.Add(self.v_sizer_right, 1, wx.EXPAND) #And finally the right hand frame.
After getting all the data, the app will then use it to generate the level layout with .png tiles from a specified directory but for testing purposes I'm just generating a 16x16 grid from 00 to FF, as mentioned above - via a menu option I call this method:
def populateLevelGrid(self, id):
#This debug method fills the level grid scrollbar with 256 example image tiles
levelTileset = self.levelTilesetLookup[id]
#levelWidth = self.levelWidthLookup[id]
#levelHeight = self.levelHeightLookup[id]
#levelTileTotal = levelWidth * levelHeight
levelTileTotal = 256 #debug line
self.imgPanelGrid = []
for i in range(levelTileTotal):
self.imgPanelGrid.append(ImgPanel.ImgPanel(self, False))
self.rightPanelSizer.Add(self.imgPanelGrid[i])
self.imgPanelGrid[i].set_image("tiles/"+ levelTileset + "/" + str(i) + ".png")
self.rightPanelSizer.Layout()
self.rightPanelSizer.FitInside(self.rightPanel)
This works, but it pins the grid to the top left of the entire frame, not to the top left of the right half of the frame that it should be on. There's similar code to do a 1x256 grid on the left frame, but I've no way of telling if that's suffering the same issue, for obvious reasons. Both have working scrollbars but have redrawing issues when scrolled, making me wonder if it's drawing the images over the entire application and just ignoring the application layout.
Is there something I've missed here? This is the first time I've done anything with gridsizers, images and GUIs in general in python, having only recently started with the language after wanting to write in something a little more cross-platform than VB6 - any thoughts? =/
A:
That's a lot of code and a lot of sizers! But I think maybe your ImgPanels should have the rightPanel as their parent, and not self.
Also, do you ever call self.SetSizer(self.h_sizer)? Didn't see that anywhere.
I recommend creating portions of the layout in separate functions. Then you don't have to worry about your local namespace being polluted with all these wacky sizer names. So for each part of the sizer hierarchy you could have a function.
create_controls
create_left_panel
create_grid
create_right_panel
Also, I usually use SetMinSize on the child controls instead of adding dummy spacers to the sizers to setup size constraints. Then Fit will do it for you. Speaking of Fit, I didn't even know it could take arguments!
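Putting the first two suggestions into code (a sketch reusing the question's names):
# inside populateLevelGrid: parent the tile panels to the panel they live on, not the frame
self.imgPanelGrid.append(ImgPanel.ImgPanel(self.rightPanel, False))

# and in __init__, attach the top-level sizer to the frame itself
self.SetSizer(self.h_sizer)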
|
wxPython GridSizer not attached to panel?
|
I'm trying to build a level editor for a game I'm working on. It pulls data from a flat file and then based on a byte-by-byte search it'll assemble a grid from pre-set tiles. This part of the app I should have no issues with. The problem is that my test version of the editor which just loads a 16x16 grid of test tiles from 00 to FF is loading in the wrong place.
Example: The app frame looks like this:
|-T-------|
| | |
| | |
| | |
| | |
|---------|
Excusing my horrible ASCII art, the frame essentially has a horizontal sizer on it, with two vertical sizers, one for the left and one for the right. Each of these holds a panel, and each panel has a second sizer in it. The left panel's sizer has a 64-pixel-wide spacer in it, then a gridsizer which is later filled with images. The right panel's sizer is user-resizable but has a minimum width of 960 pixels via a spacer, then a gridsizer whose dimensions are determined by the level width and height in code.
Essentially, for each side - there's a gridsizer inside a sizer which has a spacer for the width of the section, which are on a panel that's inside a sizer for that half of the sizer that's on the main frame. I hope this makes sense, as it confuses me at times :P
Here's the code that does all this:
#Horizontal sizer
self.h_sizer = wx.BoxSizer(wx.HORIZONTAL)
#Vertical sizer
self.v_sizer_left = wx.BoxSizer(wx.VERTICAL)
self.v_sizer_right = wx.BoxSizer(wx.VERTICAL)
#Create the 2 panels
self.leftPanel = wx.ScrolledWindow(self, style = wx.VSCROLL | wx.ALWAYS_SHOW_SB)
self.leftPanel.EnableScrolling(False, True)
self.rightPanel = wx.ScrolledWindow(self, style = wx.VSCROLL | wx.ALWAYS_SHOW_SB)
self.rightPanel.EnableScrolling(False, True)
#Create a sizer for the contents of the left panel
self.lps = wx.BoxSizer(wx.VERTICAL)
self.lps.Add((64, 0)) #Add a spacer into the sizer to force it to be 64px wide
self.leftPanelSizer = wx.GridSizer(256, 1, 2, 2) # horizontal rows, vertical rows, vgap, hgap
self.lps.Add(self.leftPanelSizer) #Add the tiles grid to the left panel sizer
self.leftPanel.SetSizerAndFit(self.lps) #Set the leftPanel to use LeftPanelSizer (it's not lupus) as the sizer
self.leftPanel.SetScrollbars(0,66,0, 0) #Add the scrollbar, increment in 64 pixel bits plus the 2 spacer pixels
self.leftPanel.SetAutoLayout(True)
self.lps.Fit(self.leftPanel)
#Create a sizer for the contents of the right panel
self.rps = wx.BoxSizer(wx.VERTICAL)
self.rps.Add((960, 0)) #Set it's width and height to be the window's, for now, with a spacer
self.rightPanelSizer = wx.GridSizer(16, 16, 0, 0) # horizontal rows, vertical rows, vgap, hgap
self.rps.Add(self.rightPanelSizer) #Add the level grid to the right panel sizer
self.rightPanel.SetSizerAndFit(self.rps) #Set the rightPanel to use RightPanelSizer as the sizer
self.rightPanel.SetScrollbars(64,64,0, 0) #Add the scrollbar, increment in 64 pixel bits - vertical and horizontal
self.rightPanel.SetAutoLayout(True)
self.rps.Fit(self.rightPanel)
#Debugging purposes. Colours :)
self.leftPanel.SetBackgroundColour((0,255,0))
self.rightPanel.SetBackgroundColour((0,128,128))
#Add the left panel to the left vertical sizer, tell it to resize with the window (0 does not resize, 1 does). Do not expand the sizer on this side.
self.v_sizer_left.Add(self.leftPanel, 1)
#Add the right panel to the right vertical sizer, tell it to resize with the window (0 does not resize, 1 does) Expand the sizer to fit as much as possible on this side.
self.v_sizer_right.Add(self.rightPanel, 1, wx.EXPAND)
#Now add the 2 vertical sizers to the horizontal sizer.
self.h_sizer.Add(self.v_sizer_left, 0, wx.EXPAND) #First the left one...
self.h_sizer.Add((0,704)) #Add in a spacer between the two to get the vertical window size correct...
self.h_sizer.Add(self.v_sizer_right, 1, wx.EXPAND) #And finally the right hand frame.
After getting all the data, the app will then use it to generate the level layout with .png tiles from a specified directory but for testing purposes I'm just generating a 16x16 grid from 00 to FF, as mentioned above - via a menu option I call this method:
def populateLevelGrid(self, id):
#This debug method fills the level grid scrollbar with 256 example image tiles
levelTileset = self.levelTilesetLookup[id]
#levelWidth = self.levelWidthLookup[id]
#levelHeight = self.levelHeightLookup[id]
#levelTileTotal = levelWidth * levelHeight
levelTileTotal = 256 #debug line
self.imgPanelGrid = []
for i in range(levelTileTotal):
self.imgPanelGrid.append(ImgPanel.ImgPanel(self, False))
self.rightPanelSizer.Add(self.imgPanelGrid[i])
self.imgPanelGrid[i].set_image("tiles/"+ levelTileset + "/" + str(i) + ".png")
self.rightPanelSizer.Layout()
self.rightPanelSizer.FitInside(self.rightPanel)
This works, but it pins the grid to the top left of the entire frame, not to the top left of the right half of the frame that it should be on. There's similar code to do a 1x256 grid on the left frame, but I've no way of telling if that's suffering the same issue, for obvious reasons. Both have working scrollbars but have redrawing issues when scrolled, making me wonder if it's drawing the images over the entire application and just ignoring the application layout.
Is there something I've missed here? This is the first time I've done anything with gridsizers, images and GUIs in general in python, having only recently started with the language after wanting to write in something a little more cross-platform than VB6 - any thoughts? =/
|
[
"That's a lot of code and a lot of sizers! But I think maybe your ImgPanels should have the rightPanel as their parent, and not self.\nAlso, do you ever call self.SetSizer(self.h_sizer)? Didn't see that anywhere.\nI recommend creating portions of the layout in separate functions. Then you don't have to worry about your local namespace being polluted with all these wacky sizer names. So for each part of the sizer hierarchy you could have a function.\ncreate_controls\n create_left_panel\n create_grid\n create_right_panel\n\nAlso, I usually use SetMinSize on the child controls instead of adding dummy spacers to the sizers to setup size constraints. Then Fit will do it for you. Speaking of Fit, I didn't even know it could take arguments!\n"
] |
[
1
] |
[] |
[] |
[
"image",
"panel",
"python",
"sizer",
"wxpython"
] |
stackoverflow_0001983727_image_panel_python_sizer_wxpython.txt
|
Q:
Having trouble understanding CherryPy
I read through the tutorial on the cherrypy website, and I'm still having some trouble understanding how it can be implemented in a modular, scalable way.
Could someone show me an example of how to have cherrypy receive a simple http post to its root, process the variable in some way, and respond dynamically using that data in the response?
A:
from cherrypy import expose
class Adder:
@expose
def index(self):
return '''<html>
<body>
<form action="add">
<input name="a" /> + <input name="b"> =
<input type="submit" />
</form>
</body>
</html>'''
@expose
def add(self, a, b):
return str(int(a) + int(b))
if __name__ == "__main__":
from cherrypy import quickstart
quickstart(Adder())
Run the script and then open a browser on http://localhost:8080
A:
Are you asking for an example like this?
http://www.cherrypy.org/wiki/CherryPyTutorial#ReceivingdatafromHTMLforms
It receives input from forms.
You can return any text you want from a CherryPy method function, so dynamic text based on the input is really trivial.
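And for the POST-to-root part of the question specifically, a minimal sketch (untested, same pattern as the answer above):
import cherrypy

class Root:
    @cherrypy.expose
    def index(self, name=None):
        if name is not None:                 # a POSTed form field
            return "Hello, %s!" % name
        return ('<form method="post" action="/">'
                '<input name="name" /><input type="submit" />'
                '</form>')

cherrypy.quickstart(Root())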
|
Having trouble understanding CherryPy
|
I read through the tutorial on the cherrypy website, and I'm still having some trouble understanding how it can be implemented in a modular, scalable way.
Could someone show me an example of how to have cherrypy receive a simple http post to its root, process the variable in some way, and respond dynamically using that data in the response?
|
[
"from cherrypy import expose\n\nclass Adder:\n @expose\n def index(self):\n return '''<html>\n <body>\n <form action=\"add\">\n <input name=\"a\" /> + <input name=\"b\"> = \n <input type=\"submit\" />\n </form>\n </body>\n </html>'''\n\n @expose\n def add(self, a, b):\n return str(int(a) + int(b))\n\n\nif __name__ == \"__main__\":\n from cherrypy import quickstart\n quickstart(Adder())\n\nRun the script and then open a browser on http://localhost:8080\n",
"Are you asking for an example like this?\nhttp://www.cherrypy.org/wiki/CherryPyTutorial#ReceivingdatafromHTMLforms\nIt receives input from forms.\nYou can return any text you want from a CherryPy method function, so dynamic text based on the input is really trivial.\n"
] |
[
3,
1
] |
[] |
[] |
[
"cherrypy",
"python"
] |
stackoverflow_0001982721_cherrypy_python.txt
|
Q:
Why is my next() method not called in this example?
class a(object):
w={'a':'aaa','b':'bbb'}
def __iter__(self):
return iter(self.w)
    def next(self):  # this never gets called
print 'sss'
for i in self.w:
return i
b=a()
for i in b:
print i
And what is the relationship between __iter__ and the next function?
Thanks
A:
In __iter__() you return an iterator on the dict stored in self.w, not on the class itself. Returning self instead will fix that.
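A sketch of that fix, keeping the question's class:
class a(object):
    w = {'a': 'aaa', 'b': 'bbb'}
    def __iter__(self):
        self._it = iter(self.w)   # remember the position between next() calls
        return self               # the object is now its own iterator
    def next(self):
        print 'sss'
        return self._it.next()    # raises StopIteration when exhausted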
A:
I'm not entirely sure what you are asking, but the next() function isn't called because you never explicitly call it. You define __iter__, which gets called when you do:
for i in b:
This should implicitly call the .next() method of the iterator, but the iterator isn't a, but rather iter(self.w). As your object is not the iterator, its next() method is never called.
Hope this helps.
A:
next should remember the last position and return the next item (not always return the first item, like in your code), and when there are no further items, raise the StopIteration exception.
Also __iter__ should return the class itself.
See Python documentation for iterators.
In your case, using generator is more suitable:
class a(object):
w={'a':'aaa','b':'bbb'}
def __iter__(self):
print 'sss'
for i in self.w:
yield i
b=a()
for i in b:
print i
|
Why is my next() method not called in this example?
|
class a(object):
w={'a':'aaa','b':'bbb'}
def __iter__(self):
return iter(self.w)
    def next(self):  # this never gets called
print 'sss'
for i in self.w:
return i
b=a()
for i in b:
print i
And what is the relationship between __iter__ and the next function?
Thanks
|
[
"In __iter__() you return an iterator on the dict stored in self.w, not on the class itself. Returning self instead will fix that.\n",
"I'm not entirely sure what you are asking, but the next() function isn't called because you never explicitly call it. You define __iter__, which gets called when you do:\nfor i in b:\n\nThis should implicitly call the .next() method of the iterator, but the iterator isn't a, but rather iter(self.w). As your object is not the iterator, its next() method is never called.\nHope this helps.\n",
"next should remember the last position and return the next item (not always return the first item, like in your code), and when there are no further items, raise the StopIteration exception.\nAlso __iter__ should return the class itself.\nSee Python documentation for iterators.\nIn your case, using generator is more suitable:\nclass a(object):\n w={'a':'aaa','b':'bbb'}\n def __iter__(self):\n print 'sss'\n for i in self.w:\n yield i\n\nb=a()\nfor i in b:\n print i\n\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"iterator",
"python"
] |
stackoverflow_0001984184_iterator_python.txt
|
Q:
SimpleHTTPServer, shutdown and blocked request handlers
I have an instance of SimpleHTTPServer; however, when I try to call
"shutdown" on it while a request handler is blocked, the
whole process blocks.
It does so even if I run the "serve_forever" method in a daemon
thread.
See example code at http://codepad.org/cn8EYdfg
A:
The shutdown method won't take effect until after the GET request is processed,
and that request is blocking because of the queue operation.
Maybe this will help: Stoppable Interruptable Python HTTP Server
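For reference, the usual pattern (Python 2.6+, where BaseServer.shutdown exists) looks like this; note that shutdown still waits for any handler currently blocked inside a request:
import threading
import BaseHTTPServer, SimpleHTTPServer

server = BaseHTTPServer.HTTPServer(
    ('', 8000), SimpleHTTPServer.SimpleHTTPRequestHandler)
t = threading.Thread(target=server.serve_forever)
t.daemon = True
t.start()
# ... later, from another thread:
server.shutdown()   # blocks until the serve_forever loop exits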
|
SimpleHTTPServer, shutdown and blocked request handlers
|
I have an instance of SimpleHTTPServer; however, when I try to call
"shutdown" on it while a request handler is blocked, the
whole process blocks.
It does so even if I run the "serve_forever" method in a daemon
thread.
See example code at http://codepad.org/cn8EYdfg
|
[
"the shutdown method won't take affect until after the GET request is processed.\nAnd that request is blocking because of the queue operation.\nMaybe this will help: Stoppable Interruptable Python HTTP Server\n"
] |
[
0
] |
[] |
[] |
[
"http",
"multithreading",
"python"
] |
stackoverflow_0001984159_http_multithreading_python.txt
|
Q:
iPhone client with a python socket
I am creating a Python server socket that sends data to the client regarding file statuses... what I have is a list containing dictionaries:
[{'Status': '[2,5%]', 'File': 'SlackwareDVD.iso'},
{'Status': '[21,8%]', 'File': 'Ubuntu_x86.iso'}]
The socket, when asked, sends this data; obviously it is sent as a string. I was trying to figure out how I could turn this data into the corresponding NSArray and NSDictionary objects in Objective-C...
Anyone have a clue? Hints? :D
Thanks
PirosB3
A:
Seems perfectly suited for JSON. On the python side use any of the popular json libraries - for example, simplejson - to convert your python data to json, and on the iPhone side use an iPhone json library to convert it to local data representations. Here's an article that shows how to do that.
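On the Python side, a minimal sketch (conn is assumed to be your accepted socket):
import json   # on Python < 2.6: import simplejson as json

data = [{'Status': '[2,5%]', 'File': 'SlackwareDVD.iso'},
        {'Status': '[21,8%]', 'File': 'Ubuntu_x86.iso'}]
conn.sendall(json.dumps(data))   # one plain string over the wire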
A:
Convert it to an XML property list and Cocoa will do the rest.
A:
A binary way to transport common types is using the Hessian protocol which is available for the iPhone here. I'm not sure what the status of the python implementations are, I can find two (1, 2)
Using it makes it very easy to encode and decode messages.
A:
Assuming you have access to the python codebase, generating an XML Property List is trivial from Python.
See Understanding XML Property Lists in the Apple documentation for more information.
I would be surprised if someone hasn't already written something that'll convert Python property lists into NSDictionary compatible XML property lists.
|
iPhone client with a python socket
|
I am creating a Python server socket that sends data to the client regarding file statuses... what I have is a list containing dictionaries:
[{'Status': '[2,5%]', 'File': 'SlackwareDVD.iso'},
{'Status': '[21,8%]', 'File': 'Ubuntu_x86.iso'}]
The socket, when asked, sends this data; obviously it is sent as a string. I was trying to figure out how I could turn this data into the corresponding NSArray and NSDictionary objects in Objective-C...
Anyone have a clue? Hints? :D
Thanks
PirosB3
|
[
"Seems perfectly suited for JSON. On the python side use any of the popular json libraries - for example, simplejson - to convert your python data to json, and on the iPhone side use an iPhone json library to convert it to local data representations. Here's an article that shows how to do that.\n",
"Convert it to an XML property list and Cocoa will do the rest. \n",
"A binary way to transport common types is using the Hessian protocol which is available for the iPhone here. I'm not sure what the status of the python implementations are, I can find two (1, 2)\nUsing it makes it very easy to encode and decode messages.\n",
"Assuming you have access to the python codebase, generating an XML Property List is trivial from Python.\nSee Understanding XML Property Lists in the Apple documentation for more information.\nI would be surprised if someone hasn't already written something that'll convert Python property lists into NSDictionary compatible XML property lists.\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"iphone",
"objective_c",
"python",
"sockets",
"string"
] |
stackoverflow_0001982375_iphone_objective_c_python_sockets_string.txt
|
Q:
Remove row or column from 2D list if all values (in that row or column) are None
I have a grid (6 rows, 5 columns):
grid = [
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
]
I augment the grid and it might turn into something like:
grid = [
[{"some" : "thing"}, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, {"something" : "else"}, None],
[None, {"another" : "thing"}, None, None, None],
[None, None, None, None, None],
]
I want to remove entire rows and columns that have all Nones in them. So in the previous code, grid would be transformed into:
grid = [
[{"some" : "thing"}, None, None],
[None, None, {"something" : "else"}],
[None, {"another" : "thing"}, None],
]
I removed row 1, 2, 5 (zero indexed) and column 2 and 4.
The way I am deleting the rows now:
for row in range(6):
    if grid[row] == [None, None, None, None, None]:
        del grid[row]  # caveat: deleting while indexing a shrinking list skips rows and can raise IndexError
I don't have a decent way of deleting None columns yet. Is there a "pythonic" way of doing this?
A:
It's not the fastest way but I think it's quite easy to understand:
def transpose(grid):
return zip(*grid)
def removeBlankRows(grid):
return [list(row) for row in grid if any(row)]
print removeBlankRows(transpose(removeBlankRows(transpose(grid))))
Output:
[[{'some': 'thing'}, None, None],
[None, None, {'something': 'else'}],
[None, {'another': 'thing'}, None]]
How it works: I use zip to write a function that transposes the rows and columns. A second function removeBlankRows removes rows where all elements are None (or anything that evaluates to false in a boolean context). Then to perform the entire operation I transpose the grid, remove blank rows (which are the columns in the original data), transpose again, then remove blank rows.
If it's important to only strip None and not other things that evaluate to false, change the removeBlankRows function to:
def removeBlankRows(grid):
return [list(row) for row in grid if any(x is not None for x in row)]
A:
grid = ...
# remove empty rows
grid = [x for x in grid if any(x)]
# if any value you put in won't evaluate to False
# e.g. an empty string or empty list wouldn't work here
# in that case, use:
grid = [x for x in grid if any(n is not None for n in x)]
# remove empty columns
if not grid:
raise ValueError("empty grid")
# or whatever, as next line assumes grid[0] exists
empties = range(len(grid[0])) # assume all empty at first
for r in grid:
empties = [c for c in empties if r[c] is None] # strip out non-empty
if empties:
empties.reverse() # apply in reversed order
for r in grid:
for c in empties:
r.pop(c)
A:
Use zip() to transpose the ragged array, run the clearing routine again, then zip() it again.
A:
If only you had a transpose function, you could do: transpose(removeRows(transpose(removeRows(mat))))
Actually ... using a bitmask is a better idea.
Let me think about that ...
First compute gridmask:
grid_mask = [
10000,
00000,
00000,
00010,
00000
]
Now remove zeroes:
grid_mask = [
10000,
00010,
]
Now and all values bitwise:
grid_mask = 10010
Now remove all but 1st and 4th columns.
A:
Here is quick attempt
It will work for any size matrix and rows can be different sizes too , and may be fast :)
from collections import defaultdict
grid = [
[{"some" : "thing"}, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, {"something" : "else"}, None],
[None, {"another" : "thing"}, None, None, None],
[None, None, None, None, None],
]
# go thru the grid remove, rows which have all None
# doing that count None in each columns, remove such columns later
newGrid = []
colSize = len(grid)
colCount = defaultdict(int)
for row in grid:
allNone = True
for c, cell in enumerate(row):
if cell is None:
colCount[c] += 1
else:
allNone = False
if not allNone: # only add rows which are not all none
newGrid.append(row)
# get cols which need to be removed
removeCols = [col for col, count in colCount.iteritems() if count == colSize]
removeCols.sort(reverse=True)
# now go thru each column and remove all None Columns
for row in newGrid:
for col in removeCols:
row.pop(col)
grid = newGrid
import pprint
pprint.pprint(grid)
output:
[[{'some': 'thing'}, None, None],
[None, None, {'something': 'else'}],
[None, {'another': 'thing'}, None]]
A:
You can also use the built-in function any(), which checks if any of the elements of an iterable has a not None value. It's faster than comparing and you don't need to know the size of the iterable.
>>> def remove_rows( matrix ):
... '''Returns a matrix without empty rows'''
... ret_matrix = []
... for row in matrix:
... #Check if the row has any value or all are None
... if any(row):
... ret_matrix.append(row)
...
... return ret_matrix
#You can do it also with a list comprehension, which will be even faster
>>> def remove_rows(matrix):
... '''Returns a matrix without empty rows'''
... ret_matrix = [ row for row in matrix if any(row) ]
... return ret_matrix
>>> grid = [
... [{"some" : "thing"}, None, None, None, None],
... [None, None, None, None, None],
... [None, None, None, None, None],
... [None, None, None, {"something" : "else"}, None],
... [None, {"another" : "thing"}, None, None, None],
... [None, None, None, None, None],
... ]
>>> grid = remove_rows( grid )
>>> grid
[
[{'some': 'thing'}, None, None, None, None],
[None, None, None, {'something': 'else'}, None],
[None, {'another': 'thing'}, None, None, None]
]
#transpose grid using zip (using asterisk)
>>> grid = zip(*grid)
#Note that zip returns tuples
>>> grid
[
({'some': 'thing'}, None, None),
(None, None, {'another': 'thing'}),
(None, None, None),
(None, {'something': 'else'}, None),
(None, None, None)
]
>>> grid = remove_rows(grid)
>>> grid
[
({'some': 'thing'}, None, None),
(None, None, {'another': 'thing'}),
(None, {'something': 'else'}, None)
]
>>> #Transpose again to get the first matrix, without empty rows or columns
>>> final_grid = zip(*grid)
>>> final_grid
[
({'some': 'thing'}, None, None),
(None, None, {'something': 'else'}),
(None, {'another': 'thing'}, None)
]
|
Remove row or column from 2D list if all values (in that row or column) are None
|
I have a grid (6 rows, 5 columns):
grid = [
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
]
I augment the grid and it might turn into something like:
grid = [
[{"some" : "thing"}, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, {"something" : "else"}, None],
[None, {"another" : "thing"}, None, None, None],
[None, None, None, None, None],
]
I want to remove entire rows and columns that have all Nones in them. So in the previous code, grid would be transformed into:
grid = [
[{"some" : "thing"}, None, None],
[None, None, {"something" : "else"}],
[None, {"another" : "thing"}, None],
]
I removed row 1, 2, 5 (zero indexed) and column 2 and 4.
The way I am deleting the rows now:
for row in range(6):
    if grid[row] == [None, None, None, None, None]:
        del grid[row]  # caveat: deleting while indexing a shrinking list skips rows and can raise IndexError
I don't have a decent way of deleting None columns yet. Is there a "pythonic" way of doing this?
|
[
"It's not the fastest way but I think it's quite easy to understand:\ndef transpose(grid):\n return zip(*grid)\n\ndef removeBlankRows(grid):\n return [list(row) for row in grid if any(row)]\n\nprint removeBlankRows(transpose(removeBlankRows(transpose(grid))))\n\nOutput:\n[[{'some': 'thing'}, None, None],\n [None, None, {'something': 'else'}],\n [None, {'another': 'thing'}, None]]\n\nHow it works: I use zip to write a function that transposes the rows and columns. A second function removeBlankRows removes rows where all elements are None (or anything that evaluates to false in a boolean context). Then to perform the entire operation I transpose the grid, remove blank rows (which are the columns in the original data), transpose again, then remove blank rows.\nIf it's important to only strip None and not other things that evaluate to false, change the removeBlankRows function to:\ndef removeBlankRows(grid):\n return [list(row) for row in grid if any(x is not None for x in row)]\n\n",
"grid = ...\n\n# remove empty rows\ngrid = [x for x in grid if any(x)]\n# if any value you put in won't evaluate to False\n# e.g. an empty string or empty list wouldn't work here\n# in that case, use:\ngrid = [x for x in grid if any(n is not None for n in x)]\n\n# remove empty columns\nif not grid:\n raise ValueError(\"empty grid\")\n # or whatever, as next line assumes grid[0] exists\nempties = range(len(grid[0])) # assume all empty at first\nfor r in grid:\n empties = [c for c in empties if r[c] is None] # strip out non-empty\nif empties:\n empties.reverse() # apply in reversed order\n for r in grid:\n for c in empties:\n r.pop(c)\n\n",
"Use zip() to transpose the ragged array, run the clearing routine again, then zip() it again.\n",
"If only you had a transpose function, you could do: transpose(removeRows(transpose(removeRows(mat))))\nActually ... using a bitmask is a better idea.\nLet me think about that ...\nFirst compute gridmask:\ngrid_mask = [\n10000,\n00000,\n00000,\n00010,\n00000\n]\n\nNow remove zeroes:\ngrid_mask = [\n10000,\n00010,\n]\n\nNow and all values bitwise:\ngrid_mask = 10010\n\nNow remove all but 1st and 4th columns.\n",
"Here is quick attempt\nIt will work for any size matrix and rows can be different sizes too , and may be fast :)\nfrom collections import defaultdict\ngrid = [\n [{\"some\" : \"thing\"}, None, None, None, None],\n [None, None, None, None, None],\n [None, None, None, None, None],\n [None, None, None, {\"something\" : \"else\"}, None],\n [None, {\"another\" : \"thing\"}, None, None, None],\n [None, None, None, None, None],\n ]\n\n# go thru the grid remove, rows which have all None\n# doing that count None in each columns, remove such columns later\nnewGrid = []\ncolSize = len(grid)\ncolCount = defaultdict(int)\nfor row in grid:\n allNone = True\n for c, cell in enumerate(row):\n if cell is None:\n colCount[c] += 1\n else:\n allNone = False\n\n if not allNone: # only add rows which are not all none\n newGrid.append(row)\n\n# get cols which need to be removed\nremoveCols = [col for col, count in colCount.iteritems() if count == colSize]\nremoveCols.sort(reverse=True) \n\n# now go thru each column and remove all None Columns\nfor row in newGrid:\n for col in removeCols:\n row.pop(col)\n\ngrid = newGrid\nimport pprint\npprint.pprint(grid)\n\noutput:\n[[{'some': 'thing'}, None, None],\n [None, None, {'something': 'else'}],\n [None, {'another': 'thing'}, None]]\n\n",
"You can also use the built-in function any(), which checks if any of the elements of an iterable has a not None value. It's faster than comparing and you don't need to know the size of the iterable.\n>>> def remove_rows( matrix ):\n... '''Returns a matrix without empty rows'''\n... ret_matrix = []\n... for row in matrix:\n... #Check if the row has any value or all are None\n... if any(row):\n... ret_matrix.append(row)\n...\n... return ret_matrix\n#You can do it also with a list comprehension, which will be even faster\n>>> def remove_rows(matrix):\n... '''Returns a matrix without empty rows'''\n... ret_matrix = [ row for row in matrix if any(row) ]\n... return ret_matrix\n\n>>> grid = [\n... [{\"some\" : \"thing\"}, None, None, None, None],\n... [None, None, None, None, None],\n... [None, None, None, None, None],\n... [None, None, None, {\"something\" : \"else\"}, None],\n... [None, {\"another\" : \"thing\"}, None, None, None],\n... [None, None, None, None, None],\n... ]\n\n\n>>> grid = remove_rows( grid )\n>>> grid\n[\n [{'some': 'thing'}, None, None, None, None], \n [None, None, None, {'something': 'else'}, None], \n [None, {'another': 'thing'}, None, None, None]\n]\n\n#transpose grid using zip (using asterisk)\n>>> grid = zip(*grid)\n#Note that zip returns tuples\n>>> grid\n[\n ({'some': 'thing'}, None, None), \n (None, None, {'another': 'thing'}),\n (None, None, None), \n (None, {'something': 'else'}, None), \n (None, None, None)\n]\n\n>>> grid = remove_rows(grid)\n>>> grid\n[\n ({'some': 'thing'}, None, None), \n (None, None, {'another': 'thing'}),\n (None, {'something': 'else'}, None)\n]\n>>> #Transpose again to get the first matrix, without empty rows or columns\n>>> final_grid = zip(*grid)\n>>> final_grid\n[\n ({'some': 'thing'}, None, None),\n (None, None, {'something': 'else'}), \n (None, {'another': 'thing'}, None)\n]\n\n"
] |
[
7,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0001983902_list_python.txt
|
Q:
How to use a certain Firefox profile in Python Selenium binding?
I'm looking for a solution similar to the one presented in this question, except it has to deal with Python.
A:
The answer that was accepted is language agnostic: you will need to set up a Firefox profile, then tell Selenium RC to use that profile when it starts up.
You just need to run your tests with *firefox against the RC server that has your profile, and when the browser loads it will be the one you are after.
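A rough sketch of that setup from the Python side (the profile path, port, and URL are placeholders): start the RC server with a profile template, then connect with the Python binding.
# Shell, not Python: start the RC server pointing at your profile
#   java -jar selenium-server.jar -firefoxProfileTemplate /path/to/profile
from selenium import selenium  # Selenium RC Python binding

sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()  # launches Firefox using the profile template
sel.open("/")
sel.stop()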
|
How to use a certain Firefox profile in Python Selenium binding?
|
I'm looking for a solution similar to the one presented in this question, except it has to deal with Python.
|
[
"the answer that was accepted is language agnostic. You will need to setup a firefox profile then when starting up selenium rc tell it to use that profile. \nYou just need to run your tests to use *firefox against the rc that has your profile and when the browser loads it will be the one you are after\n"
] |
[
1
] |
[] |
[] |
[
"firefox",
"python",
"selenium"
] |
stackoverflow_0001964406_firefox_python_selenium.txt
|
Q:
Filter with field has relationship in Django?
I have these models in Django:
class Customer(models.Model):
def __unicode__(self):
return self.name
name = models.CharField(max_length=200)
class Sale(models.Model):
def __unicode__(self):
return "Sale %s (%i)" % (self.type, self.id)
customer = models.ForeignKey(Customer)
total = models.DecimalField(max_digits=15, decimal_places=3)
notes = models.TextField(blank=True, null=True)
class Unitary_Sale(models.Model):
book = models.ForeignKey(Book)
quantity = models.IntegerField()
unit_price = models.DecimalField(max_digits=15, decimal_places=3)
sale = models.ForeignKey(Sale)
How can I query to get the total sales per customer?
I tried:
units=Unitary_Sale.objects.all()
>>> units=Unitary_Sale.objects.all()
>>> for unit in units:
... print unit.sale.customer
... print unit.book,unit.sale.total
...
Sok nara
Khmer Empire (H001) 38.4
Sok nara
killing field (H001) 16
San ta
khmer krom (H001) 20
San ta
Khmer Empire (H001) 20
>>>
What I want:
sok nora: 54.4 (38.4 + 16)
san ta: 40 (20 + 20)
Or to dictionary:
{sok nora: 54.4, san ta: 40}
>>> store_resulte = {}
>>> for unit in units:
... store_resulte[unit.sale.customer] = unit.sale.total
...
>>> print store_resulte
{<Customer: Sok nara>: Decimal("16"), <Customer: san ta>: Decimal("20")}
It should be:
{<Customer: Sok nara>: Decimal("54.4"), <Customer: san ta>: Decimal("40")}
A:
Have a look at aggregation documentation.
I believe this should do the trick (only with version 1.1 or dev.):
Customer.objects.annotate(total=Sum('sale__total'))
EDIT: You can also define a custom method for the class:
class Customer(models.Model):
def __unicode__(self):
return self.name
name = models.CharField(max_length=200)
def total_sale(self):
total = 0
        for s in self.sale_set.all():
total += s.total
return total
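For reference, a sketch of consuming the annotated queryset (names as in the models above; output values are illustrative):
from django.db.models import Sum

for c in Customer.objects.annotate(total=Sum('sale__total')):
    print c.name, c.total   # e.g. "Sok nara 54.4"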
|
Filter with field has relationship in Django?
|
I have these models in Django:
class Customer(models.Model):
def __unicode__(self):
return self.name
name = models.CharField(max_length=200)
class Sale(models.Model):
def __unicode__(self):
return "Sale %s (%i)" % (self.type, self.id)
customer = models.ForeignKey(Customer)
total = models.DecimalField(max_digits=15, decimal_places=3)
notes = models.TextField(blank=True, null=True)
class Unitary_Sale(models.Model):
book = models.ForeignKey(Book)
quantity = models.IntegerField()
unit_price = models.DecimalField(max_digits=15, decimal_places=3)
sale = models.ForeignKey(Sale)
How can I query to get the total sales per customer?
I tried:
units=Unitary_Sale.objects.all()
>>> units=Unitary_Sale.objects.all()
>>> for unit in units:
... print unit.sale.customer
... print unit.book,unit.sale.total
...
Sok nara
Khmer Empire (H001) 38.4
Sok nara
killing field (H001) 16
San ta
khmer krom (H001) 20
San ta
Khmer Empire (H001) 20
>>>
What I want:
sok nora: 54.4 (38.4 + 16)
san ta: 40 (20 + 20)
Or to dictionary:
{sok nora: 54.4, san ta: 40}
>>> store_resulte = {}
>>> for unit in units:
... store_resulte[unit.sale.customer] = unit.sale.total
...
>>> print store_resulte
{<Customer: Sok nara>: Decimal("16"), <Customer: san ta>: Decimal("20")}
It should be:
{<Customer: Sok nara>: Decimal("54.4"), <Customer: san ta>: Decimal("40")}
|
[
"Have a look at aggregation documentation.\nI believe this should do the trick (only with version 1.1 or dev.):\nCustomer.objects.annotate(total=Sum('sale__total'))\n\nEDIT: You can also define a custom method for the class:\nclass Customer(models.Model):\n def __unicode__(self):\n return self.name\n name = models.CharField(max_length=200)\n\n def total_sale(self):\n total = 0\n for s in self.sale_set:\n total += s.total\n return total\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"django_models",
"django_orm",
"python"
] |
stackoverflow_0001985026_django_django_models_django_orm_python.txt
|
Q:
Please discuss what are and why use portlets
Why would I want to use java portlets above tomcat and gwt?
Would portlets make it less- or un- necessary for me to use jsp and jsf?
Has Jboss been part of the portlet evolution culture? Does Jboss satisfy the portlet jsrs?
What portlet implementation/brand would run on gae java and gae python?
Are portlet specs due to peer pressure from php cms culture?
What are the equivalent of portlet and portlet jsr in .net?
A:
Portlets were a well-meaning but misguided attempt at a reusable widget API for web applications. Think of the personalised google home page, with portlets like weather, news, mail, etc.
Unfortunately, they made a bit of a mess of it. The Portlet API is a bit of a pig, a real barrel of not-fun, and there are very few implementations of it. The only one I've ever used is JBoss Portal, but it's a bit of a brute, and rather buggy. Liferay may also be a portlet server, but the home page is heavy on fluff and light on information, so I can't tell.
Spring provides an MVC framework for the portlet API which tries to reduce the pain, but frankly I wish they hadn't bothered, it just clutters up the documentation.
Essentially, the whole shebang looks like a solution in search of a problem.
A:
If you happen to have a framework that you need to use, and it supports portlets, you may find then that portlets are going to be useful, since the application is built with the idea, but, as others have mentioned, if you are starting on a project, there are many other technologies that will do what you want with less effort, in a more stable environment.
For example, when I worked at the University of South Florida, the learning management system was (and is) Blackboard, and they now support portlets: http://www.ja-sig.org/wiki/display/JSG/Blackboard+Portlet. So, if the application provides an API, and expects people to use portlets, then it may make sense to look at them.
UPDATE:
After looking at the question there were a couple of things I missed.
Portlets were an attempt, it seems, to try to do as Google did on their homepage, where you could have multiple unrelated blocks of information on the webpage, so you could track your stock portfolio and your favorite hockey team, for example. I don't think it was influenced by PHP CMS culture; it was just an idea that was ready to come about, and if you need server code to help pull the information and tie it into an application, this was one approach.
The closest thing in ASP.NET that I can think of to portlets are controls. I could have a stock portfolio control and when I include it on my page, you can set the options and it will show you your stocks and hockey team scores.
Not everyone uses JSF, for example, so controls would need to be written by hand as JSPs and servlets, with javascript.
A:
Why would I want to use java portlets above tomcat and gwt?
These technologies are not directly comparable. Coming from regular web page development, Portlets seem like a very restrictive technology. But then the value of Portal servers is largely the control they give to administrators and users - the fact that this makes your life more difficult is irrelevant.
Would portlets make it less- or un- necessary for me to use jsp and jsf?
You can write directly to the output, just like you would in a Servlet. You probably still want a view technology (that will have to support portlets).
|
Please discuss what are and why use portlets
|
Why would I want to use java portlets above tomcat and gwt?
Would portlets make it less- or un- necessary for me to use jsp and jsf?
Has Jboss been part of the portlet evolution culture? Does Jboss satisfy the portlet jsrs?
What portlet implementation/brand would run on gae java and gae python?
Are portlet specs due to peer pressure from php cms culture?
What are the equivalent of portlet and portlet jsr in .net?
|
[
"Portlets were a well-meaning but mis-guided attempt at a reusable widget API for web applications. Think of the personalised google home page, with the portlets like weather, news, mail, etc.\nUnfortunately, they made a bit of a mess of it. The Portlet API is a bit of a pig, a real barrel of not-fun, and there are very few implementations of it. The only one I've ever used is JBoss Portal, but it's a bit of a brute, and rather buggy. Liferay may also be a portlet server, but the home page is heavy on fluff and light on information, so I can't tell.\nSpring provides an MVC framework for the portlet API which tries to reduce the pain, but frankly I wish they hadn't bothered, it just clutters up the documentation.\nEssentially, the whole shebang looks like a solution in search of a problem.\n",
"If you happen to have a framework that you need to use, and it supports portlets, you may find then that portlets are going to be useful, since the application is built with the idea, but, as others have mentioned, if you are starting on a project, there are many other technologies that will do what you want with less effort, in a more stable environment.\nFor example, when I worked at the University of South Florida, the learning management system was (and is) Blackboard, and they now support portlets: http://www.ja-sig.org/wiki/display/JSG/Blackboard+Portlet. So, if the application provides an API, and expects people to use portlets, then it may make sense to look at them.\nUPDATE:\nAfter looking at the question there were a couple of things I missed.\nPortlets were an attempt, it seems, to try to do as Google did on their homepage, where you could have multiple unrelated blocks of information on the webpage, so you could track your stack portfolio and your favorite hockey team, for example. I don't think it was influenced by PHP CMS as it was just an idea that was ready to come about, and if you need the server code to help pull the information, and to tie it into an application, this was one approach.\nThe closest thing in ASP.NET that I can think of to portlets are controls. I could have a stock portfolio control and when I include it on my page, you can set the options and it will show you your stocks and hockey team scores.\nNot everyone uses JSF, for example, so controls would need to be written by hand as JSPs and servlets, with javascript.\n",
"\nWhy would I want to use java portlets above tomcat and gwt?\n\nThese technologies are not directly comparable. Coming from regular web page development, Portlets seem like a very restrictive technology. But then the value of Portal servers is largely the control they give to administrators and users - the fact that this makes your life more difficult is irrelevant.\n\nWould portlets make it less- or un- necessary for me to use jsp and jsf?\n\nYou can write directly to the output, just like you would in a Servlet. You probably still want a view technology (that will have to support portlets).\n"
] |
[
6,
2,
1
] |
[] |
[] |
[
".net",
"java",
"portlet",
"python"
] |
stackoverflow_0001983282_.net_java_portlet_python.txt
|
Q:
Python: Colorbands in the PIL-Module
I have this code from the PIL-handbook but I get an error message.
from PIL import Image, ImageEnhance, ImageChops
im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg")
# split the image into individual bands
source = im.split()
R, G, B = 0, 1, 2
# select regions where red is less than 100
mask = source[R].point(lambda i: i < 100 and 255)
# process the green band
out = source[G].point(lambda i: i * 0.7)
# paste the processed band back, but only where red was < 100
source[G].paste(out, None, mask)
# build a new multiband image
im = Image.merge(im.mode, source)
im = im.point(lambda i: expression and 255)
im.save("D:\\Python26\\PYTHON-PROGRAMME\\bild2.jpg")
print('done')
Error:
Traceback (most recent call last):
File "D:\Python26\PYTHON-PROGRAMME\00000000000000000", line 14, in <module>
out = source[G].point(lambda i: i * 0.7)
IndexError: tuple index out of range
A:
Code runs fine for me, on a colour image.
I would expect you to get the quoted error if you tried to operate on an image that only had one colour channel — i.e. a black-and-white JPEG. How about im = im.convert('RGB') before the split, to be sure?
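A minimal sketch of that suggestion (file name as in the question; note the handbook snippet's undefined expression placeholder would also need replacing before the full script runs):
from PIL import Image

im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg").convert('RGB')  # force three bands
source = im.split()
print len(source)  # now guaranteed to be 3, so source[1] is safe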
|
Python: Colorbands in the PIL-Module
|
I have this code from the PIL-handbook but I get an error message.
from PIL import Image, ImageEnhance, ImageChops
im = Image.open("D:\\Python26\\PYTHON-PROGRAMME\\bild.jpg")
# split the image into individual bands
source = im.split()
R, G, B = 0, 1, 2
# select regions where red is less than 100
mask = source[R].point(lambda i: i < 100 and 255)
# process the green band
out = source[G].point(lambda i: i * 0.7)
# paste the processed band back, but only where red was < 100
source[G].paste(out, None, mask)
# build a new multiband image
im = Image.merge(im.mode, source)
im = im.point(lambda i: expression and 255)
im.save("D:\\Python26\\PYTHON-PROGRAMME\\bild2.jpg")
print('done')
Error:
Traceback (most recent call last):
File "D:\Python26\PYTHON-PROGRAMME\00000000000000000", line 14, in <module>
out = source[G].point(lambda i: i * 0.7)
IndexError: tuple index out of range
|
[
"Code runs fine for me, on a colour image.\nI would expect you to get the quoted error if you tried to operate on an image that only had one colour channel — ie. a black-and-white JPEG. How about im= im.convert('RGB') before the split, to be sure?\n"
] |
[
3
] |
[] |
[] |
[
"image",
"python"
] |
stackoverflow_0001985178_image_python.txt
|
Q:
Interaction between Java App and Python App
I have a Python application which I can't edit; it's a black box from my point of view. The Python application knows how to process text and return processed text.
I have another application written in Java which knows how to collect non processed texts.
Current state, the python app works in batch mode every x minutes.
I want to make the Python processing part of the flow: the Java app collects text and asks the Python app to process it and return the processed text.
What do you think is the simplest solution for this?
Thanks,
Rod
A:
I don't know anything about Jython and the like. I guess it's the best solution if you can execute the two programs without spawning a new process each time the Java app needs to transform text. Anyway, a simple proof of concept is to execute a separate process from the Java app to make it work. Later you can enhance the execution with all those tools.
Executing a separate process from Java
String[] envprops = new String[] {"PROP1=VAL1", "PROP2=VAL2" };
Process pythonProc = Runtime.getRuntime().exec(
"the command to execute the python app",
envprops,
new File("/workingdirectory"));
// get an outputstream to write into the standard input of python
OutputStream toPython = pythonProc.getOutputStream();
// get an inputstream to read from the standard output of python
InputStream fromPython = pythonProc.getInputStream();
// send something
toPython.write(.....);
// receive something
fromPython.read(....);
Important: chars are NOT bytes
A lot of people underestimate this.
Be careful with char-to-byte conversions (remember Writers/Readers are for chars, Input/OutputStreams are for bytes, and an encoding is necessary to convert one to the other; you can use OutputStreamWriter to convert strings to bytes and send them, and InputStreamReader to convert bytes to chars and read them).
A:
Look into Jython - you can run Python programs directly from Java code, and interact seamlessly back and forth.
A:
Use ProcessBuilder to execute your Python code as a filter:
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class PBTest {
public static void main(String[] args) {
ProcessBuilder pb = new ProcessBuilder("python", "-c", "print 42");
pb.redirectErrorStream(true);
try {
Process p = pb.start();
String s;
BufferedReader stdout = new BufferedReader (
new InputStreamReader(p.getInputStream()));
while ((s = stdout.readLine()) != null) {
System.out.println(s);
}
System.out.println("Exit value: " + p.waitFor());
p.getInputStream().close();
p.getOutputStream().close();
p.getErrorStream().close();
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
A:
Expose one of the two as a service of some kind, web service maybe. Another option is to port the python code to Jython
A:
One possible solution is jpype. This allows you to launch a JVM from Python and pass data back and forth between them.
Another solution may be to write the Python program as a filter (reading data from stdin and writing result to stdout) then run it as a pipe. However I do not know how well Java supports this - according to the Sun docs their concept of pipes only supports communication between threads on the same JVM.
A:
An option is making the python application work as a server, listens for request via sockets (TCP).
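A minimal sketch of that idea (the port number and the process_text hook are placeholders; the real entry point depends on the black-box app). Each request is one line of text; the reply is the processed line:
import SocketServer  # Python 2 stdlib

def process_text(text):
    return text.upper()  # stand-in for the real processing

class Handler(SocketServer.StreamRequestHandler):
    def handle(self):
        raw = self.rfile.readline().rstrip('\n')  # one request per line
        self.wfile.write(process_text(raw) + '\n')

SocketServer.TCPServer(('localhost', 9999), Handler).serve_forever()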
|
Interaction between Java App and Python App
|
I have a Python application which I can't edit; it's a black box from my point of view. The Python application knows how to process text and return processed text.
I have another application written in Java which knows how to collect non processed texts.
Current state, the python app works in batch mode every x minutes.
I want to make the Python processing part of the flow: the Java app collects text and asks the Python app to process it and return the processed text.
What do you think is the simplest solution for this?
Thanks,
Rod
|
[
"I don't know nothing about Jython and the like. I guess it's the best solution if you can execute two programs without executing a new process each time the Java app needs to transform text. Anyway a simple proof of concept is to execute a separate process from the Java App to make it work. Next you can enhance the execution with all that tools.\nExecuting a separate process from Java\nString[] envprops = new String[] {\"PROP1=VAL1\", \"PROP2=VAL2\" };\nProcess pythonProc = Runtime.getRuntime().exec(\n \"the command to execute the python app\", \n envprops, \n new File(\"/workingdirectory\"));\n\n// get an outputstream to write into the standard input of python\nOutputStream toPython = pythonProc.getOutputStream();\n\n// get an inputstream to read from the standard output of python\nInputStream fromPython = pythonProc.getInputStream();\n\n// send something\ntoPython.write(.....);\n// receive something\nfromPython.read(....);\n\nImportant: chars are NOT bytes\nA lot of people understimate this.\nBe careful with char to byte conversions (remember Writers/Readers are for chars, Input/OutputStreams are for bytes, encoding is necesary for convertir one to another, you can use OuputStreamWriter to convert string to bytes and send, InputStreamReader to convert bytes to chars and read them).\n",
"Look into Jython - you can run Python programs directly from Java code, and interact seamlessly back and forth.\n",
"Use ProcessBuilder to execute your Python code as a filter:\nimport java.io.BufferedReader;\nimport java.io.InputStreamReader;\n\npublic class PBTest {\n\n public static void main(String[] args) {\n ProcessBuilder pb = new ProcessBuilder(\"python\", \"-c\", \"print 42\");\n pb.redirectErrorStream(true);\n try {\n Process p = pb.start();\n String s;\n BufferedReader stdout = new BufferedReader (\n new InputStreamReader(p.getInputStream()));\n while ((s = stdout.readLine()) != null) {\n System.out.println(s);\n }\n System.out.println(\"Exit value: \" + p.waitFor());\n p.getInputStream().close();\n p.getOutputStream().close();\n p.getErrorStream().close();\n } catch (Exception ex) {\n ex.printStackTrace();\n }\n }\n}\n\n",
"Expose one of the two as a service of some kind, web service maybe. Another option is to port the python code to Jython\n",
"One possible solution is jpype. This allows you to launch a JVM from Python and pass data back and forth between them.\nAnother solution may be to write the Python program as a filter (reading data from stdin and writing result to stdout) then run it as a pipe. However I do not know how well Java supports this - according to the Sun docs their concept of pipes only supports communication between threads on the same JVM. \n",
"An option is making the python application work as a server, listens for request via sockets (TCP).\n"
] |
[
7,
6,
5,
0,
0,
0
] |
[] |
[] |
[
"interaction",
"interface",
"java",
"python"
] |
stackoverflow_0001984445_interaction_interface_java_python.txt
|
Q:
Get request object in a db model?
Is there a way I can get access to the request object while saving a db object, without explicitly passing it? E.g.
class RequestData(db.Model):
...
def put(self):
# autopopulate fields from current request
I just want a quick way to access the request, instead of passing it through so many layers of views/forms/etc.
A:
Since Google AppEngine uses CGI protocol, the request information is all there in the environment variables (See CGI Environment Variables).
You can regenerate the request object just like this:
req = google.appengine.ext.webapp.Request(dict(os.environ))
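For illustration, a sketch of poking at the regenerated request (attribute names per the webapp Request API; untested here, and it assumes the usual CGI keys are present in the environment):
import os
from google.appengine.ext import webapp

req = webapp.Request(dict(os.environ))
print req.path, req.get('q')  # the request path and a query parameter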
A:
There isn't any pre-built way of achieving this to the best of my knowledge, however writing such a piece of code should not be very hard.
If its GAE, the self.request.arguments() method can be used to read the properties of the request object and thus should enable iteratively accessing values of all the request properties.
A:
views.py:
import threading
request_storage = threading.local()
def myview(request):
request_storage.data = request
models.py:
class MyClass():
def save():
from views import request_storage
request_storage.data.GET['myarg']
This will use the request_storage instance from the current thread.
|
Get request object in a db model?
|
Is there a way I can get access to the request object while saving a db object, without explicitly passing it? E.g.
class RequestData(db.Model):
...
def put(self):
# autopopulate fields from current request
I just want a quick way to access the request, instead of passing it through so many layers of views/forms/etc.
|
[
"Since Google AppEngine uses CGI protocol, the request information is all there in the environment variables (See CGI Environment Variables).\nYou can regenerate the request object just like this:\nreq = google.appengine.ext.webapp.Request(dict(os.environ))\n\n",
"There isn't any pre-built way of achieving this to the best of my knowledge, however writing such a piece of code should not be very hard.\nIf its GAE, the self.request.arguments() method can be used to read the properties of the request object and thus should enable iteratively accessing values of all the request properties.\n",
"views.py:\nimport threading\nrequest_storage = threading.local()\ndef myview(request):\n request_storage.data = request\n\nmodels.py:\nclass MyClass():\n def save():\n from views import request_storage\n request_storage.data.GET['myarg']\n\nthis will use request_storage instance from current thread\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001952704_google_app_engine_python.txt
|
Q:
Using for...else in Python generators
I'm a big fan of Python's for...else syntax - it's surprising how often it's applicable, and how effectively it can simplify code.
However, I've not figured out a nice way to use it in a generator, for example:
def iterate(i):
for value in i:
yield value
else:
print 'i is empty'
In the above example, I'd like the print statement to be executed only if i is empty. However, as else only respects break and return, it is always executed, regardless of the length of i.
If it's impossible to use for...else in this way, what's the best approach to this so that the print statement is only executed when nothing is yielded?
A:
You're breaking the definition of a generator, which should throw a StopIteration exception when iteration is complete (which is automatically handled by a return statement in a generator function)
So:
def iterate(i):
for value in i:
yield value
return
Best to let the calling code handle the case of an empty iterator:
count = 0
for value in iterate([]):
print value
count += 1
else:
if count == 0:
print "list was empty"
Might be a cleaner way of doing the above, but that ought to work fine, and doesn't fall into any of the common 'treating an iterator like a list' traps below.
A:
There are a couple ways of doing this. You could always use the Iterator directly:
def iterate(i):
try:
i_iter = iter(i)
next = i_iter.next()
except StopIteration:
print 'i is empty'
return
while True:
yield next
next = i_iter.next()
But if you know more about what to expect from the argument i, you can be more concise:
def iterate(i):
    if i: # or: if len(i) > 0
for next in i:
yield next
else:
print 'i is empty'
raise StopIteration()
A:
Summing up some of the earlier answers, it could be solved like this:
def iterate(i):
empty = True
for value in i:
yield value
empty = False
if empty:
print "empty"
so there really is no "else" clause involved.
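A quick interactive check of the sentinel approach (hypothetical session):
>>> list(iterate([1, 2]))
[1, 2]
>>> list(iterate([]))
empty
[]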
A:
As you note, for..else only detects a break. So it's only applicable when you look for something and then stop.
It's not applicable to your purpose not because it's a generator, but because you want to process all elements, without stopping (because you want to yield them all, but that's not the point).
So generator or not, you really need a boolean, as in Ber's solution.
A:
If it's impossible to use for...else in this way, what's the best approach to this so that the print statement is only executed when nothing is yielded?
Maximum i can think of:
>>> empty = True
>>> for i in [1,2]:
... empty = False
... if empty:
... print 'empty'
...
>>>
>>>
>>> empty = True
>>> for i in []:
... empty = False
... if empty:
... print 'empty'
...
empty
>>>
|
Using for...else in Python generators
|
I'm a big fan of Python's for...else syntax - it's surprising how often it's applicable, and how effectively it can simplify code.
However, I've not figured out a nice way to use it in a generator, for example:
def iterate(i):
for value in i:
yield value
else:
print 'i is empty'
In the above example, I'd like the print statement to be executed only if i is empty. However, as else only respects break and return, it is always executed, regardless of the length of i.
If it's impossible to use for...else in this way, what's the best approach to this so that the print statement is only executed when nothing is yielded?
|
[
"You're breaking the definition of a generator, which should throw a StopIteration exception when iteration is complete (which is automatically handled by a return statement in a generator function)\nSo:\ndef iterate(i):\n for value in i:\n yield value\n return\n\nBest to let the calling code handle the case of an empty iterator:\ncount = 0\nfor value in iterate(range([])):\n print value\n count += 1\nelse:\n if count == 0:\n print \"list was empty\"\n\nMight be a cleaner way of doing the above, but that ought to work fine, and doesn't fall into any of the common 'treating an iterator like a list' traps below.\n",
"There are a couple ways of doing this. You could always use the Iterator directly:\ndef iterate(i):\n try:\n i_iter = iter(i)\n next = i_iter.next()\n except StopIteration:\n print 'i is empty'\n return\n\n while True:\n yield next\n next = i_iter.next()\n\nBut if you know more about what to expect from the argument i, you can be more concise:\ndef iterate(i):\n if i: # or if len(i) == 0\n for next in i:\n yield next\n else:\n print 'i is empty'\n raise StopIteration()\n\n",
"Summing up some of the earlier answers, it could be solved like this:\ndef iterate(i):\n empty = True\n for value in i:\n yield value\n empty = False\n\n if empty:\n print \"empty\"\n\nso there really is no \"else\" clause involved.\n",
"As you note, for..else only detects a break. So it's only applicable when you look for something and then stop.\nIt's not applicable to your purpose not because it's a generator, but because you want to process all elements, without stopping (because you want to yield them all, but that's not the point).\nSo generator or not, you really need a boolean, as in Ber's solution.\n",
"\n\nIf it's impossible to use for...else in this way, what's the best approach to this so that the print statement is only executed when nothing is yielded?\n\n\nMaximum i can think of:\n\n\n>>> empty = True\n>>> for i in [1,2]:\n... empty = False\n... if empty:\n... print 'empty'\n...\n>>>\n>>>\n>>> empty = True\n>>> for i in []:\n... empty = False\n... if empty:\n... print 'empty'\n...\nempty\n>>>\n\n\n"
] |
[
12,
5,
5,
4,
0
] |
[
"What about simple if-else?\ndef iterate(i):\n if len(i) == 0: print 'i is empty'\n else:\n for value in i:\n yield value\n\n"
] |
[
-2
] |
[
"for_loop",
"generator",
"python",
"syntax",
"yield"
] |
stackoverflow_0000603641_for_loop_generator_python_syntax_yield.txt
|
Q:
Python asynchronous callbacks and generators
I'm trying to convert a synchronous library to use an internal asynchronous IO framework. I have several methods that look like this:
def foo():
....
sync_call_1() # synchronous blocking call
....
sync_call_2() # synchronous blocking call
....
return bar
For each of the synchronous functions (sync_call_*), I have written a corresponding async function that takes a callback. E.g.
def async_call_1(callback=None):
# do the I/O
callback()
Now for the Python newbie question -- what's the easiest way to translate the existing methods to use these new async methods instead? That is, the method foo() above needs to become:
def async_foo(callback):
# Do the foo() stuff using async_call_*
callback()
One obvious choice is to pass a callback into each async method which effectively "resumes" the calling foo function, and then invoke the final callback at the very end of the method. However, that makes the code brittle and ugly, and I would need to add a new callback for every call to an async_call_* method.
Is there an easy way to do that using a python idiom, such as a generator or coroutine?
A:
UPDATE: take this with a grain of salt, as I'm out of touch with modern python async developments, including gevent and asyncio and don't actually have serious experience with async code.
There are 3 common approaches to thread-less async coding in Python:
Callbacks - ugly but workable, Twisted does this well.
Generators - nice but require all your code to follow the style.
Use Python implementation with real tasklets - Stackless (RIP) and greenlet.
Unfortunately, ideally the whole program should use one style, or things become complicated. If you are OK with your library exposing a fully synchronous interface, you are probably OK, but if you want several calls to your library to work in parallel, especially in parallel with other async code, then you need a common event "reactor" that can work with all the code.
So if you have (or expect the user to have) other async code in the application, adopting the same model is probably smart.
If you don't want to understand the whole mess, consider using bad old threads. They are also ugly, but work with everything else.
If you do want to understand how coroutines might help you - and how they might complicate you, David Beazley's "A Curious Course on Coroutines and Concurrency" is good stuff.
Greenlets might actually be the cleanest way if you can use the extension. I don't have any experience with them, so can't say much.
A:
There are several ways of multiplexing tasks. We can't say which is best for your case without deeper knowledge of what you are doing. Probably the easiest and most universal way is to use threads. Take a look at this question for some ideas.
A:
You need to make function foo also async. How about this approach?
@make_async
def foo(somearg, callback):
# This function is now async. Expect a callback argument.
...
# change
# x = sync_call1(somearg, some_other_arg)
# to the following:
x = yield async_call1, somearg, some_other_arg
...
# same transformation again
y = yield async_call2, x
...
# change
# return bar
# to a callback call
callback(bar)
And make_async can be defined like this:
def make_async(f):
"""Decorator to convert sync function to async
using the above mentioned transformations"""
def g(*a, **kw):
async_call(f(*a, **kw))
return g
def async_call(it, value=None):
# This function is the core of async transformation.
try:
# send the current value to the iterator and
# expect function to call and args to pass to it
x = it.send(value)
except StopIteration:
return
func = x[0]
args = list(x[1:])
# define callback and append it to args
# (assuming that callback is always the last argument)
callback = lambda new_value: async_call(it, new_value)
args.append(callback)
func(*args)
CAUTION: I haven't tested this
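In the same untested spirit, a usage sketch for the decorator above; async_double is a hypothetical async primitive that invokes its callback from a timer thread:
import threading

def async_double(x, callback):
    # pretend-async call: fires the callback later on another thread
    threading.Timer(0.1, callback, args=(x * 2,)).start()

@make_async
def quadruple(n, callback):
    a = yield async_double, n   # a == 2 * n, eventually
    b = yield async_double, a   # b == 4 * n, eventually
    callback(b)

def show(result):
    print result

quadruple(3, show)  # eventually prints 12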
|
Python asynchronous callbacks and generators
|
I'm trying to convert a synchronous library to use an internal asynchronous IO framework. I have several methods that look like this:
def foo():
....
sync_call_1() # synchronous blocking call
....
sync_call_2() # synchronous blocking call
....
return bar
For each of the synchronous functions (sync_call_*), I have written a corresponding async function that takes a callback. E.g.
def async_call_1(callback=None):
# do the I/O
callback()
Now for the Python newbie question -- what's the easiest way to translate the existing methods to use these new async methods instead? That is, the method foo() above needs to become:
def async_foo(callback):
# Do the foo() stuff using async_call_*
callback()
One obvious choice is to pass a callback into each async method which effectively "resumes" the calling foo function, and then invoke the final callback at the very end of the method. However, that makes the code brittle and ugly, and I would need to add a new callback for every call to an async_call_* method.
Is there an easy way to do that using a python idiom, such as a generator or coroutine?
|
[
"UPDATE: take this with a grain of salt, as I'm out of touch with modern python async developments, including gevent and asyncio and don't actually have serious experience with async code.\n\nThere are 3 common approaches to thread-less async coding in Python:\n\nCallbacks - ugly but workable, Twisted does this well.\nGenerators - nice but require all your code to follow the style.\nUse Python implementation with real tasklets - Stackless (RIP) and greenlet.\n\nUnfortunately, ideally the whole program should use one style, or things become complicated. If you are OK with your library exposing a fully synchronous interface, you are probably OK, but if you want several calls to your library to work in parallel, especially in parallel with other async code, then you need a common event \"reactor\" that can work with all the code.\nSo if you have (or expect the user to have) other async code in the application, adopting the same model is probably smart.\nIf you don't want to understand the whole mess, consider using bad old threads. They are also ugly, but work with everything else.\nIf you do want to understand how coroutines might help you - and how they might complicate you, David Beazley's \"A Curious Course on Coroutines and Concurrency\" is good stuff.\nGreenlets might be actualy the cleanest way if you can use the extension. I don't have any experience with them, so can't say much.\n",
"There are several way for multiplexing tasks. We can't say what is the best for your case without deeper knowledge on what you are doing. Probably the most easiest/universal way is to use threads. Take a look at this question for some ideas.\n",
"You need to make function foo also async. How about this approach?\n@make_async\ndef foo(somearg, callback):\n # This function is now async. Expect a callback argument.\n ...\n\n # change \n # x = sync_call1(somearg, some_other_arg)\n # to the following:\n x = yield async_call1, somearg, some_other_arg\n ...\n\n # same transformation again\n y = yield async_call2, x\n ...\n\n # change\n # return bar\n # to a callback call\n callback(bar)\n\nAnd make_async can be defined like this:\ndef make_async(f):\n \"\"\"Decorator to convert sync function to async\n using the above mentioned transformations\"\"\"\n def g(*a, **kw):\n async_call(f(*a, **kw))\n return g\n\ndef async_call(it, value=None):\n # This function is the core of async transformation.\n\n try: \n # send the current value to the iterator and\n # expect function to call and args to pass to it\n x = it.send(value)\n except StopIteration:\n return\n\n func = x[0]\n args = list(x[1:])\n\n # define callback and append it to args\n # (assuming that callback is always the last argument)\n\n callback = lambda new_value: async_call(it, new_value)\n args.append(callback)\n\n func(*args)\n\nCAUTION: I haven't tested this\n"
] |
[
10,
2,
2
] |
[] |
[] |
[
"asynchronous",
"generator",
"python"
] |
stackoverflow_0001805958_asynchronous_generator_python.txt
|
Q:
How can I approximate Python's or operator for set comparison in Scala?
After hearing the latest Stack Overflow podcast, Peter Norvig's compact Python spell-checker intrigued me, so I decided to implement it in Scala if I could express it well in the functional Scala idiom, and also to see how many lines of code it would take.
Here's the whole problem. (Let's not compare lines of code yet.)
(Two notes: You can run this in the Scala interpreter, if you wish. If you need a copy of big.txt, or the whole project, it's on GitHub.)
import scala.io.Source
val alphabet = "abcdefghijklmnopqrstuvwxyz"
def train(text:String) = {
"[a-z]+".r.findAllIn(text).foldLeft(Map[String, Int]() withDefaultValue 1)
{(a, b) => a(b) = a(b) + 1}
}
val NWORDS = train(Source.fromFile("big.txt").getLines.mkString.toLowerCase)
def known(words:Set[String]) =
{Set.empty ++ (for(w <- words if NWORDS contains w) yield w)}
def edits1(word:String) = {
Set.empty ++
(for (i <- 0 until word.length) // Deletes
yield (word take i) + (word drop (i + 1))) ++
(for (i <- 0 until word.length - 1) // Transposes
yield (word take i) + word(i + 1) + word(i) + (word drop (i + 2))) ++
(for (i <- 0 until word.length; j <- alphabet) // Replaces
yield (word take i) + j + (word drop (i+1))) ++
(for (i <- 0 until word.length; j <- alphabet) // Inserts
yield (word take i) + j + (word drop i))
}
def known_edits2(word:String) = {Set.empty ++ (for (e1 <- edits1(word);
e2 <- edits1(e1) if NWORDS contains e2) yield e2)}
def correct(word:String) = {
val options = Seq(() => known(Set(word)), () => known(edits1(word)),
() => known_edits2(word), () => Set(word))
val candidates = options.foldLeft(Set[String]())
{(a, b) => if (a.isEmpty) b() else a}
candidates.foldLeft("") {(a, b) => if (NWORDS(a) > NWORDS(b)) a else b}
}
Specifically, I'm wondering if there's anything cleaner I can do with the correct function. In the original Python, the implementation is a bit cleaner:
def correct(word):
candidates = known([word]) or known(edits1(word)) or
known_edits2(word) or [word]
return max(candidates, key=NWORDS.get)
Apparently in Python, an empty set will evaluate to Boolean False, so only the first of the candidates to return a non-empty set will be evaluated, saving potentially expensive calls to edits1 and known_edits2.
The only solution I would come up with is the version you see here, where the Seq of anonymous functions are called until one returns a non-empty Set, which the last one is guaranteed to do.
So experienced Scala-heads, is there a more syntactically concise or better way to do this? Thanks in advance!
A:
I'm not sure why you're attempting to use lazy evaluation for known rather than simply using a stream as oxbow_lakes illustrated. A better way of doing what he did:
def correct(word: String) = {
import Stream._
val str = cons(known(Set(word)),
cons(known(edits1(word)),
cons(known_edits2(word),
cons(Set(word), empty))))
str find { !_.isEmpty } match {
case Some(candidates) =>
candidates.foldLeft(Set[String]()) { (res, n) =>
if (NWORDS(res) > NWORDS(n)) res else n
}
case None => Set()
}
}
This exploits the fact that Stream.cons is lazy already and so we don't need to wrap everything up in a thunk.
If you're really in the mood for nice syntax though, we can add some syntactic sugar to all of those conses:
implicit def streamSyntax[A](tail: =>Stream[A]) = new {
def #::(hd: A) = Stream.cons(hd, tail)
}
Now our previously-ugly str definition falls into the following:
def correct(word: String) = {
val str = known(Set(word)) #:: known(edits1(word)) #::
known_edits2(word) #:: Set(word) #:: Stream.empty
...
}
A:
Would this work? The _ syntax is a partially applied function and by using a (lazy) Stream, I ensure that the evaluations in the reduceLeft (which I think is more appropriate than foldLeft here) only happen as required!
def correct(word:String) = {
Stream(known(Set(word)) _,
known(edits1(word)) _,
known_edits2(word) _,
Set(word) _
).find( !_().isEmpty ) match {
case Some(candidates) =>
candidates.reduceLeft {(res, n) => if (NWORDS(res) > NWORDS(n)) res else n}
case _ => "" //or some other value
  }
}
I've probably made some syntax errors here, but I think the Stream approach is a valid one
A:
Scala 2.7-ready (including implicit performance work-around):
class Or[A](one: Set[A]) {
def or(other: => Set[A]): Set[A] = if (one.isEmpty) other else one
}
implicit def toOr[A](one: Set[A]) = new Or(one)
def correct(word: String) = {
candidates = known(Set(word)) or known(edits1(word)) or known_edits2(word) or Set(word)
candidates.foldLeft("") {(a, b) => if (NWORDS(a) > NWORDS(b)) a else b}
}
Scala 2.8-goodness:
implicit def toOr[A](one: Set[A]) = new AnyRef {
def or(other: => Set[A]): Set[A] = if (one.isEmpty) other else one
}
def correct(word: String) = {
candidates = known(Set(word)) or known(edits1(word)) or known_edits2(word) or Set(word)
candidates.max(Ordering.fromLessThan[String](NWORDS(_) < NWORDS(_)))
}
That said, I pretty much upvoted everyone else's. I hadn't considered a Stream.
EDIT
It seems Ordering.fromLessThan can lead to twice the necessary comparisons. Here is an alternate version for that line:
candidates.max(new Ordering[String] { def compare(x: String, y: String) = NWORDS(x) - NWORDS(y) })
A:
Iterators are also lazy (although not very functional since you can only iterate over them once.) So, you could do it like this:
def correct(word: String) = {
val sets = List[String => Set[String]](
x => known(Set(x)), x => known(edits1(x)), known_edits2
).elements.map(_(word))
sets find { !_.isEmpty } match {
case Some(candidates: Set[String]) => candidates.reduceLeft { (res, n) => if (NWORDS(res) > NWORDS(n)) res else n }
case None => word
}
}
As a bonus, Iterator's find() method doesn't force evaluation of the next element.
A:
I've tried to implement a short Scala version of the spelling corrector. It's 15 lines without imports. The shortest replacement for Python's or is a simple call-by-name parameter:
def or[T](candidates : Seq[T], other : => Seq[T]) = if(candidates.isEmpty) other else candidates
def candidates(word: String) = or(known(List(word)), or(known(edits1(word)), known(edits2(word))))
In a real world scenario I would use the implicit conversion Daniel proposed.
|
How can I approximate Python's or operator for set comparison in Scala?
|
After hearing the latest Stack Overflow podcast, Peter Norvig's compact Python spell-checker intrigued me, so I decided to implement it in Scala if I could express it well in the functional Scala idiom, and also to see how many lines of code it would take.
Here's the whole problem. (Let's not compare lines of code yet.)
(Two notes: You can run this in the Scala interpreter, if you wish. If you need a copy of big.txt, or the whole project, it's on GitHub.)
import scala.io.Source
val alphabet = "abcdefghijklmnopqrstuvwxyz"
def train(text:String) = {
"[a-z]+".r.findAllIn(text).foldLeft(Map[String, Int]() withDefaultValue 1)
{(a, b) => a(b) = a(b) + 1}
}
val NWORDS = train(Source.fromFile("big.txt").getLines.mkString.toLowerCase)
def known(words:Set[String]) =
{Set.empty ++ (for(w <- words if NWORDS contains w) yield w)}
def edits1(word:String) = {
Set.empty ++
(for (i <- 0 until word.length) // Deletes
yield (word take i) + (word drop (i + 1))) ++
(for (i <- 0 until word.length - 1) // Transposes
yield (word take i) + word(i + 1) + word(i) + (word drop (i + 2))) ++
(for (i <- 0 until word.length; j <- alphabet) // Replaces
yield (word take i) + j + (word drop (i+1))) ++
(for (i <- 0 until word.length; j <- alphabet) // Inserts
yield (word take i) + j + (word drop i))
}
def known_edits2(word:String) = {Set.empty ++ (for (e1 <- edits1(word);
e2 <- edits1(e1) if NWORDS contains e2) yield e2)}
def correct(word:String) = {
val options = Seq(() => known(Set(word)), () => known(edits1(word)),
() => known_edits2(word), () => Set(word))
val candidates = options.foldLeft(Set[String]())
{(a, b) => if (a.isEmpty) b() else a}
candidates.foldLeft("") {(a, b) => if (NWORDS(a) > NWORDS(b)) a else b}
}
Specifically, I'm wondering if there's anything cleaner I can do with the correct function. In the original Python, the implementation is a bit cleaner:
def correct(word):
candidates = known([word]) or known(edits1(word)) or
known_edits2(word) or [word]
return max(candidates, key=NWORDS.get)
Apparently in Python, an empty set will evaluate to Boolean False, so only the first of the candidates to return a non-empty set will be evaluated, saving potentially expensive calls to edits1 and known_edits2.
The only solution I would come up with is the version you see here, where the Seq of anonymous functions are called until one returns a non-empty Set, which the last one is guaranteed to do.
So experienced Scala-heads, is there a more syntactically concise or better way to do this? Thanks in advance!
|
[
"I'm not sure why you're attempting to use lazy evaluation for known rather than simply using a stream as oxbow_lakes illustrated. A better way of doing what he did:\ndef correct(word: String) = {\n import Stream._\n\n val str = cons(known(Set(word)), \n cons(known(edits1(word)),\n cons(known_edits2(word),\n cons(Set(word), empty))))\n\n str find { !_.isEmpty } match {\n case Some(candidates) =>\n candidates.foldLeft(Set[String]()) { (res, n) =>\n if (NWORDS(res) > NWORDS(n)) res else n\n }\n\n case None => Set()\n }\n}\n\nThe exploits the fact that Stream.cons is lazy already and so we don't need to wrap everything up in a thunk.\nIf you're really in the mood for nice syntax though, we can add some syntactic sugar to all of those conses:\nimplicit def streamSyntax[A](tail: =>Stream[A]) = new {\n def #::(hd: A) = Stream.cons(hd, tail)\n}\n\nNow our previously-ugly str definition falls into the following:\ndef correct(word: String) = {\n val str = known(Set(word)) #:: known(edits1(word)) #::\n known_edits2(word) #:: Set(word) #:: Stream.empty\n\n ...\n}\n\n",
"Would this work? The _ syntax is a partially applied function and by using a (lazy) Stream, I ensure that the evaluations in the reduceLeft (which I think is more appropriate than foldLeft here) only happen as required!\ndef correct(word:String) = {\n Stream(known(Set(word)) _, \n known(edits1(word)) _, \n known_edits2(word) _, \n Set(word) _\n ).find( !_().isEmpty ) match {\n case Some(candidates) =>\n candidates.reduceLeft {(res, n) => if (NWORDS(res) > NWORDS(n)) res else n}\n case _ => \"\" //or some other value\n\n}\nI've probably made some syntax errors here, but I think the Stream approach is a valid one\n",
"Scala 2.7-ready (including implicit performance work-around):\nclass Or[A](one: Set[A]) {\n def or(other: => Set[A]): Set[A] = if (one.isEmpty) other else one \n}\n\nimplicit def toOr[A](one: Set[A]) = new Or(one)\n\ndef correct(word: String) = {\n candidates = known(Set(word)) or known(edits1(word)) or known_edits2(word) or Set(word)\n candidates.foldLeft(\"\") {(a, b) => if (NWORDS(a) > NWORDS(b)) a else b}\n}\n\nScala 2.8-goodness:\nimplicit def toOr[A](one: Set[A]) = new AnyRef { \n def or(other: => Set[A]): Set[A] = if (one.isEmpty) other else one \n}\n\ndef correct(word: String) = {\n candidates = known(Set(word)) or known(edits1(word)) or known_edits2(word) or Set(word)\n candidates.max(Ordering.fromLessThan[String](NWORDS(_) < NWORDS(_)))\n}\n\nThat said, I pretty much upvoted everyone else's. I hadn't consider a Stream.\nEDIT\nIt seems Ordering.fromLessThan can lead to twice the necessary comparisons. Here is an alternate version for that line:\n candidates.max(new Ordering[String] { def compare(x: String, y: String) = NWORDS(x) - NWORDS(y) })\n\n",
"Iterators are also lazy (although not very functional since you can only iterate over them once.) So, you could do it like this: \n def correct(word: String) = {\n val sets = List[String => Set[String]](\n x => known(Set(x)), x => known(edits1(x)), known_edits2\n ).elements.map(_(word))\n\n sets find { !_.isEmpty } match {\n case Some(candidates: Set[String]) => candidates.reduceLeft { (res, n) => if (NWORDS(res) > NWORDS(n)) res else n }\n case None => word\n }\n }\n\nAs a bonus, Iterator's find() method doesn't force evaluation of the next element.\n",
"I've tried to implement a short Scala implementation of the spelling corrector. It's 15 lines without imports. The shortest replacement for Python's or is simple call by name parameter:\ndef or[T](candidates : Seq[T], other : => Seq[T]) = if(candidates.isEmpty) other else candidates\ndef candidates(word: String) = or(known(List(word)), or(known(edits1(word)), known(edits2(word))))\n\nIn a real world scenario I would use the implicit conversion Daniel proposed.\n"
] |
[
6,
4,
3,
2,
0
] |
[] |
[] |
[
"anonymous_function",
"python",
"scala"
] |
stackoverflow_0001780459_anonymous_function_python_scala.txt
|
Q:
registering python com server
I have a problem when registering a Python COM server; I get a message box that says:
Invalid command line argument. This
programs provides LocalServer com
support for Python COM objects. It is
typically run automatically by COM,
passing passing as arguments The
ProgID or CLSID of the Python
server(s) to be hosted
The same server was registered successfully on other machines running different Windows versions. I would appreciate any help.
Thanks,
Sarah Abdelrazak
A:
Just an idea, but have you checked the value of the LocalServer32 registry key of your COM server? See http://msdn.microsoft.com/en-us/library/ms683844%28VS.85%29.aspx
Any difference in the ways you've installed it on this machine? Does the path contain blanks?
I hope it helps
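As a quick diagnostic sketch (the CLSID is a placeholder for your server's; _winreg is the Python 2 module name):
import _winreg

clsid = "{00000000-0000-0000-0000-000000000000}"  # your server's CLSID here
key = r"CLSID\%s\LocalServer32" % clsid
print _winreg.QueryValue(_winreg.HKEY_CLASSES_ROOT, key)  # command COM will run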
|
registering python com server
|
I have a problem when registering a Python COM server; I get a message box that says:
Invalid command line argument. This
programs provides LocalServer com
support for Python COM objects. It is
typically run automatically by COM,
passing as arguments the
ProgID or CLSID of the Python
server(s) to be hosted
The same server was registered successfully on other machines running different Windows versions. I would appreciate any help.
Thanks,
Sarah Abdelrazak
|
[
"Just an idea but have you checked the value of the LocalServer32 registry key of your COM server. see http://msdn.microsoft.com/en-us/library/ms683844%28VS.85%29.aspx\nAny difference in the ways you've installed it on this machine? Does the path contain blanks?\nI hope it helps\n"
] |
[
0
] |
[] |
[] |
[
"com",
"python",
"pywin32"
] |
stackoverflow_0001982540_com_python_pywin32.txt
|
Q:
How to generate permutations of a list without "reverse duplicates" in Python using generators
This is related to question How to generate all permutations of a list in Python
How to generate all permutations that match the following criterion: if two permutations are the reverse of each other (i.e. [1,2,3,4] and [4,3,2,1]), they are considered equal and only one of them should be in the final result.
Example:
permutations_without_duplicates ([1,2,3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
I am permuting lists that contain unique integers.
The number of resulting permutations will be high so I'd like to use Python's generators if possible.
Edit: I'd like not to store list of all permutations to memory if possible.
A:
I have a marvelous followup to SilentGhost's proposal - posting a separate answer since the margins of a comment would be too narrow to contain code :-)
itertools.permutations is built in (since 2.6) and fast. We just need a filtering condition that, for every pair (perm, perm[::-1]), accepts exactly one of them. Since the OP says items are always distinct, we can just compare any 2 elements:
for p in itertools.permutations(range(3)):
if p[0] <= p[-1]:
print(p)
which prints:
(0, 1, 2)
(0, 2, 1)
(1, 0, 2)
This works because reversing the permutation would always flip the relation between first and last element!
For 4 or more elements, other element pairs that are symmetric around the middle (e.g. second from each side p[1] <= p[::-1][1]) would work too.
(This answer previously claimed p[0] < p[1] would work but it doesn't — after p is reversed this picks different elements.)
You can also do direct lexicographic comparison on whole permutation vs it's reverse:
for p in itertools.permutations(range(3)):
if p <= p[::-1]:
print(p)
I'm not sure if there is any more efficient way to filter. itertools.permutations guarantees lexicographic order, but the lexicographic positions of p and p[::-1] are related in a quite complex way. In particular, just stopping at the middle doesn't work.
But I suspect (didn't check) that the built-in iterator with 2:1 filtering would outperform any custom implementation. And of course it wins on simplicity!
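If one wanted to check that hunch, a rough timing harness (sizes and repeat counts are arbitrary):
import itertools, timeit

def filtered():
    return [p for p in itertools.permutations(range(8)) if p[0] <= p[-1]]

print timeit.timeit(filtered, number=10)  # compare against any custom generator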
A:
If you generate permutations in lexicographical order, then you don't need to store anything to work out whether the reverse of a given permutation has already been seen. You just have to lexicographically compare it to its reverse - if it's smaller then return it, if it's larger then skip it.
There's probably a more efficient way to do it, but this is simple and has the properties you require (implementable as a generator, uses O(n) working memory).
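A sketch of that rule on top of a lexicographic generator (here itertools.permutations, which yields in sorted order for sorted input):
import itertools

def half_perms(seq):
    for p in itertools.permutations(sorted(seq)):
        if p <= p[::-1]:   # smaller than its reverse: keep it
            yield p
        # larger than its reverse: skip; its mirror is yielded elsewhere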
A:
This is a more concise and faster version of ChristopheD's accepted answer, which I liked a lot. Recursion is great. I made it enforce uniqueness of the incoming list by removing duplicates; however, maybe it should just raise an exception instead.
def fac(x):
return (1 if x==0 else x * fac(x-1))
def permz(plist):
plist = sorted(set(plist))
plen = len(plist)
limit = fac(plen) / 2
counter = 0
if plen==1:
yield plist
else:
for perm in permz(plist[1:]):
for i in xrange(plen):
if counter == limit:
raise StopIteration
counter += 1
yield perm[:i] + plist[0:1] + perm[i:]
# ---- testing ----
plists = [
list('love'),
range(5),
[1,4,2,3,9],
['a',2,2.1],
range(8)]
for plist in plists:
perms = list(permz(plist))
print plist, True in [(list(reversed(i)) in foo) for i in perms]
A:
EDIT: changed completely to keep everything as a generator (never the whole list in memory). Should fulfill the requirements (only calculates half of the possible permutations, not the reverse ones).
EDIT2: added shorter (and simpler) factorial function from here.
EDIT3: (see comments) - a version with improvements can be found in bwopah's version.
def fac(x):
return (1 if x==0 else x * fac(x-1))
def all_permutations(plist):
global counter
if len(plist) <=1:
yield plist
else:
for perm in all_permutations(plist[1:]):
for i in xrange(len(perm)+1):
if len(perm[:i] + plist[0:1] + perm[i:]) == lenplist:
if counter == limit:
raise StopIteration
else:
counter = counter + 1
yield perm[:i] + plist[0:1] + perm[i:]
counter = 0
plist = ['a','b','c']
lenplist = len(plist)
limit = fac(lenplist) / 2
all_permutations_gen = all_permutations(plist)
print all_permutations_gen
print list(all_permutations_gen)
A:
How about this:
from itertools import permutations
def rev_generator(plist):
reversed_elements = set()
for i in permutations(plist):
if i not in reversed_elements:
reversed_i = tuple(reversed(i))
reversed_elements.add(reversed_i)
yield i
>>> list(rev_generator([1,2,3]))
[(1, 2, 3), (1, 3, 2), (2, 1, 3)]
Also, if the return value must be a list, you could just change the yield i to yield list(i), but for iteration purposes, the tuples will work just fine.
A:
Here is code that does the trick. To get rid of the dups I noticed that a permutation is a reverse-duplicate exactly when the original position of its first item is greater than the original position of its last item. I create a map to keep track of where each item was in the list to start with, and then use that map to do the test. The code also does not use recursion, so it keeps its memory footprint small. Also, the list can hold items of any type, not just numbers; see the last two test cases.
Here is the code.
class Permutation:
def __init__(self, justalist):
self._data = justalist[:]
self._len=len(self._data)
self._s=[]
self._nfact=1
self._map ={}
i=0
for elem in self._data:
self._s.append(elem)
self._map[str(elem)]=i
i+=1
self._nfact*=i
if i != 0:
self._nfact2=self._nfact//i
def __iter__(self):
for k in range(self._nfact):
for i in range(self._len):
self._s[i]=self._data[i]
s=self._s
factorial=self._nfact2
for i in range(self._len-1):
tempi = (k // factorial) % (self._len - i)
temp = s[i + tempi]
for j in range(i + tempi,i,-1):
s[j] = s[j-1]
s[i] = temp
factorial //= (self._len - (i + 1))
if self._map[str(s[0])] < self._map[str(s[-1])]:
yield s
s=[1,2]
print("_"*25)
print("input list:",s)
for sx in Permutation(s):
print(sx)
s=[1,2,3]
print("_"*25)
print("input list:",s)
for sx in Permutation(s):
print(sx)
s=[1,2,3,4]
print("_"*25)
print("input list:",s)
for sx in Permutation(s):
print(sx)
s=[3,2,1]
print("_"*25)
print("input list:",s)
for sx in Permutation(s):
print(sx)
s=["Apple","Pear","Orange"]
print("_"*25)
print("input list:",s)
for sx in Permutation(s):
print(sx)
s=[[1,4,5],"Pear",(1,6,9),Permutation([])]
print("_"*25)
print("input list:",s)
for sx in Permutation(s):
print(sx)
and here is the output for my test cases.
_________________________
input list: [1, 2]
[1, 2]
_________________________
input list: [1, 2, 3]
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
_________________________
input list: [1, 2, 3, 4]
[1, 2, 3, 4]
[1, 2, 4, 3]
[1, 3, 2, 4]
[1, 3, 4, 2]
[1, 4, 2, 3]
[1, 4, 3, 2]
[2, 1, 3, 4]
[2, 1, 4, 3]
[2, 3, 1, 4]
[2, 4, 1, 3]
[3, 1, 2, 4]
[3, 2, 1, 4]
_________________________
input list: [3, 2, 1]
[3, 2, 1]
[3, 1, 2]
[2, 3, 1]
_________________________
input list: ['Apple', 'Pear', 'Orange']
['Apple', 'Pear', 'Orange']
['Apple', 'Orange', 'Pear']
['Pear', 'Apple', 'Orange']
_________________________
input list: [[1, 4, 5], 'Pear', (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>]
[[1, 4, 5], 'Pear', (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>]
[[1, 4, 5], 'Pear', <__main__.Permutation object at 0x0142DEF0>, (1, 6, 9)]
[[1, 4, 5], (1, 6, 9), 'Pear', <__main__.Permutation object at 0x0142DEF0>]
[[1, 4, 5], (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>, 'Pear']
[[1, 4, 5], <__main__.Permutation object at 0x0142DEF0>, 'Pear', (1, 6, 9)]
[[1, 4, 5], <__main__.Permutation object at 0x0142DEF0>, (1, 6, 9), 'Pear']
['Pear', [1, 4, 5], (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>]
['Pear', [1, 4, 5], <__main__.Permutation object at 0x0142DEF0>, (1, 6, 9)]
['Pear', (1, 6, 9), [1, 4, 5], <__main__.Permutation object at 0x0142DEF0>]
['Pear', <__main__.Permutation object at 0x0142DEF0>, [1, 4, 5], (1, 6, 9)]
[(1, 6, 9), [1, 4, 5], 'Pear', <__main__.Permutation object at 0x0142DEF0>]
[(1, 6, 9), 'Pear', [1, 4, 5], <__main__.Permutation object at 0x0142DEF0>]
A:
Here is my implementation:
a = [1,2,3,4]
def p(l):
if len(l) <= 1:
yield l
else:
for i in range(len(l)):
for q in p([l[j] for j in range(len(l)) if j != i]):
yield [l[i]] + q
out = (i for i in p(a) if i < i[::-1])
The p function is a regular permutation generator that yields all possibilities. The filtering is done while iterating over the result. Simply put, each permutation falls into one of two halves: the lexicographically smaller half and the bigger half of all permutations. In this example, out contains the smaller half.
A:
this is an implementation of onebyone's suggestion
from http://en.wikipedia.org/wiki/Permutation#Lexicographical_order_generation
The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
Find the highest index i such that s[i] < s[i+1]. If no such index exists, the permutation is the last permutation.
Find the highest index j > i such that s[j] > s[i]. Such a j must exist, since i+1 is such an index.
Swap s[i] with s[j].
Reverse the order of all of the elements after index i
the function:
def perms(items):
items.sort()
yield items[:]
m = [len(items)-2] # step 1
while m:
i = m[-1]
j = [ j for j in range(i+1,len(items)) if items[j]>items[i] ][-1] # step 2
items[i], items[j] = items[j], items[i] # step 3
items[i+1:] = list(reversed(items[i+1:])) # step 4
if items<list(reversed(items)):
yield items[:]
m = [ i for i in range(len(items)-1) if items[i]<items[i+1] ] # step 1
checking our work:
>>> foo=list(perms([1,3,2,4,5]))
>>> True in [(list(reversed(i)) in foo) for i in foo]
False
A:
Some setup code first:
try:
from itertools import permutations
except ImportError:
# straight from http://docs.python.org/library/itertools.html#itertools.permutations
def permutations(iterable, r=None):
# permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC
# permutations(range(3)) --> 012 021 102 120 201 210
pool = tuple(iterable)
n = len(pool)
r = n if r is None else r
if r > n:
return
indices = range(n)
cycles = range(n, n-r, -1)
yield tuple(pool[i] for i in indices[:r])
while n:
for i in reversed(range(r)):
cycles[i] -= 1
if cycles[i] == 0:
indices[i:] = indices[i+1:] + indices[i:i+1]
cycles[i] = n - i
else:
j = cycles[i]
indices[i], indices[-j] = indices[-j], indices[i]
yield tuple(pool[i] for i in indices[:r])
break
else:
return
def non_reversed_permutations(iterable):
"Return non-reversed permutations for an iterable with unique items"
for permutation in permutations(iterable):
if permutation[0] < permutation[-1]:
yield permutation
|
How to generate permutations of a list without "reverse duplicates" in Python using generators
|
This is related to question How to generate all permutations of a list in Python
How to generate all permutations that match following criteria: if two permutations are reverse of each other (i.e. [1,2,3,4] and [4,3,2,1]), they are considered equal and only one of them should be in final result.
Example:
permutations_without_duplicates ([1,2,3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
I am permuting lists that contain unique integers.
The number of resulting permutations will be high so I'd like to use Python's generators if possible.
Edit: I'd like not to store list of all permutations to memory if possible.
|
[
"I have a marvelous followup to SilentGhost's proposal - posting a separate answer since the margins of a comment would be too narrow to contain code :-)\nitertools.permutations is built in (since 2.6) and fast. We just need a filtering condition that for every (perm, perm[::-1]) would accept exactly one of them. Since the OP says items are always distinct, we can just compare any 2 elements:\nfor p in itertools.permutations(range(3)):\n if p[0] <= p[-1]:\n print(p)\n\nwhich prints:\n(0, 1, 2)\n(0, 2, 1)\n(1, 0, 2)\n\nThis works because reversing the permutation would always flip the relation between first and last element!\nFor 4 or more elements, other element pairs that are symmetric around the middle (e.g. second from each side p[1] <= p[::-1][1]) would work too.\n(This answer previously claimed p[0] < p[1] would work but it doesn't — after p is reversed this picks different elements.)\nYou can also do direct lexicographic comparison on whole permutation vs it's reverse:\nfor p in itertools.permutations(range(3)):\n if p <= p[::-1]:\n print(p)\n\nI'm not sure if there is any more effecient way to filter. itertools.permutations guarantees lexicographic order, but the lexicographic position p and p[::-1] are related in a quite complex way. In particular, just stopping at the middle doesn't work.\nBut I suspect (didn't check) that the built-in iterator with 2:1 filtering would outperform any custom implementation. And of course it wins on simplicity!\n",
"If you generate permutations in lexicographical order, then you don't need to store anything to work out whether the reverse of a given permutation has already been seen. You just have to lexicographically compare it to its reverse - if it's smaller then return it, if it's larger then skip it.\nThere's probably a more efficient way to do it, but this is simple and has the properties you require (implementable as a generator, uses O(n) working memory).\n",
"This is a more concise and faster version of ChristopheD's accepted answer, which I liked a lot. Recursion is great. I made it enforce uniquenss of the incoming list by removing duplicates, however maybe it should just raise an exception instead.\ndef fac(x): \n return (1 if x==0 else x * fac(x-1))\n\ndef permz(plist):\n plist = sorted(set(plist))\n plen = len(plist)\n limit = fac(plen) / 2\n counter = 0\n if plen==1:\n yield plist\n else:\n for perm in permz(plist[1:]):\n for i in xrange(plen):\n if counter == limit:\n raise StopIteration\n counter += 1\n yield perm[:i] + plist[0:1] + perm[i:]\n\n# ---- testing ----\nplists = [\n list('love'),\n range(5),\n [1,4,2,3,9],\n ['a',2,2.1],\n range(8)] \n\nfor plist in plists:\n perms = list(permz(plist))\n print plist, True in [(list(reversed(i)) in foo) for i in perms]\n\n",
"EDIT: changed completely to keep everything as a generator (never the whole list in memory). Should fulfill the requirements (only calculates half of the possible permutations (not the reverse ones).\nEDIT2: added shorter (and simpler) factorial function from here.\nEDIT3:: (see comments) - a version with improvements can be found in bwopah's version.\ndef fac(x): \n return (1 if x==0 else x * fac(x-1))\n\ndef all_permutations(plist):\n global counter\n\n if len(plist) <=1:\n yield plist\n else:\n for perm in all_permutations(plist[1:]):\n for i in xrange(len(perm)+1):\n if len(perm[:i] + plist[0:1] + perm[i:]) == lenplist:\n if counter == limit:\n raise StopIteration\n else:\n counter = counter + 1\n yield perm[:i] + plist[0:1] + perm[i:]\n\ncounter = 0\nplist = ['a','b','c']\nlenplist = len(plist)\nlimit = fac(lenplist) / 2\n\nall_permutations_gen = all_permutations(plist)\nprint all_permutations_gen\nprint list(all_permutations_gen)\n\n",
"How about this:\nfrom itertools import permutations\n\ndef rev_generator(plist):\n reversed_elements = set()\n for i in permutations(plist):\n if i not in reversed_elements:\n reversed_i = tuple(reversed(i))\n reversed_elements.add(reversed_i)\n yield i\n\n>>> list(rev_generator([1,2,3]))\n[(1, 2, 3), (1, 3, 2), (2, 1, 3)]\n\nAlso, if the return value must be a list, you could just change the yield i to yield list(i), but for iteration purposes, the tuples will work just fine.\n",
"Here is code that does the trick. To get rid of the dups I noticed that for your list if the value of the first location is greater than the value of the last location then it will be a dup. I create a map to keep track of where each item was in the list to start with and then use that map to do the test. The code also does not use recursion so it keeps its memory footprint small. Also the list can be of any type items not just numbers see the last two test cases.\nHere is the code.\nclass Permutation:\n def __init__(self, justalist):\n self._data = justalist[:]\n self._len=len(self._data)\n self._s=[]\n self._nfact=1\n self._map ={}\n i=0\n for elem in self._data:\n self._s.append(elem)\n self._map[str(elem)]=i\n i+=1\n self._nfact*=i\n if i != 0:\n self._nfact2=self._nfact//i\n\n def __iter__(self):\n for k in range(self._nfact):\n for i in range(self._len):\n self._s[i]=self._data[i]\n s=self._s\n factorial=self._nfact2\n for i in range(self._len-1):\n tempi = (k // factorial) % (self._len - i)\n temp = s[i + tempi]\n for j in range(i + tempi,i,-1):\n s[j] = s[j-1]\n s[i] = temp\n factorial //= (self._len - (i + 1))\n\n if self._map[str(s[0])] < self._map[str(s[-1])]:\n yield s\n\n\n\n\ns=[1,2]\nprint(\"_\"*25)\nprint(\"input list:\",s)\nfor sx in Permutation(s):\n print(sx)\n\ns=[1,2,3]\nprint(\"_\"*25)\nprint(\"input list:\",s)\nfor sx in Permutation(s):\n print(sx)\n\ns=[1,2,3,4]\nprint(\"_\"*25)\nprint(\"input list:\",s)\nfor sx in Permutation(s):\n print(sx)\n\ns=[3,2,1]\nprint(\"_\"*25)\nprint(\"input list:\",s)\nfor sx in Permutation(s):\n print(sx)\n\ns=[\"Apple\",\"Pear\",\"Orange\"]\nprint(\"_\"*25)\nprint(\"input list:\",s)\nfor sx in Permutation(s):\n print(sx)\n\ns=[[1,4,5],\"Pear\",(1,6,9),Permutation([])]\nprint(\"_\"*25)\nprint(\"input list:\",s)\nfor sx in Permutation(s):\n print(sx)\n\nand here is the output for my test cases.\n_________________________\ninput list: [1, 2]\n[1, 2]\n_________________________\ninput list: [1, 2, 3]\n[1, 2, 3]\n[1, 3, 2]\n[2, 1, 3]\n_________________________\ninput list: [1, 2, 3, 4]\n[1, 2, 3, 4]\n[1, 2, 4, 3]\n[1, 3, 2, 4]\n[1, 3, 4, 2]\n[1, 4, 2, 3]\n[1, 4, 3, 2]\n[2, 1, 3, 4]\n[2, 1, 4, 3]\n[2, 3, 1, 4]\n[2, 4, 1, 3]\n[3, 1, 2, 4]\n[3, 2, 1, 4]\n_________________________\ninput list: [3, 2, 1]\n[3, 2, 1]\n[3, 1, 2]\n[2, 3, 1]\n_________________________\ninput list: ['Apple', 'Pear', 'Orange']\n['Apple', 'Pear', 'Orange']\n['Apple', 'Orange', 'Pear']\n['Pear', 'Apple', 'Orange']\n_________________________\ninput list: [[1, 4, 5], 'Pear', (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>]\n[[1, 4, 5], 'Pear', (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>]\n[[1, 4, 5], 'Pear', <__main__.Permutation object at 0x0142DEF0>, (1, 6, 9)]\n[[1, 4, 5], (1, 6, 9), 'Pear', <__main__.Permutation object at 0x0142DEF0>]\n[[1, 4, 5], (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>, 'Pear']\n[[1, 4, 5], <__main__.Permutation object at 0x0142DEF0>, 'Pear', (1, 6, 9)]\n[[1, 4, 5], <__main__.Permutation object at 0x0142DEF0>, (1, 6, 9), 'Pear']\n['Pear', [1, 4, 5], (1, 6, 9), <__main__.Permutation object at 0x0142DEF0>]\n['Pear', [1, 4, 5], <__main__.Permutation object at 0x0142DEF0>, (1, 6, 9)]\n['Pear', (1, 6, 9), [1, 4, 5], <__main__.Permutation object at 0x0142DEF0>]\n['Pear', <__main__.Permutation object at 0x0142DEF0>, [1, 4, 5], (1, 6, 9)]\n[(1, 6, 9), [1, 4, 5], 'Pear', <__main__.Permutation object at 0x0142DEF0>]\n[(1, 6, 9), 'Pear', [1, 4, 5], <__main__.Permutation object at 0x0142DEF0>]\n\n",
"Here is my implementation:\na = [1,2,3,4]\n\ndef p(l):\n if len(l) <= 1:\n yield l\n else:\n for i in range(len(l)):\n for q in p([l[j] for j in range(len(l)) if j != i]):\n yield [l[i]] + q\n\nout = (i for i in p(a) if i < i[::-1])\n\nP function is a regular permu function, yields all possibilities. The filter is done when iterates the result. Simply, it has two possible results, the smaller half of the all permus and the bigger half of the permus. In this example, the out contains the smaller half of the list.\n",
"this is an implementation of onebyone's suggestion\nfrom http://en.wikipedia.org/wiki/Permutation#Lexicographical_order_generation\nThe following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.\n\nFind the highest index i such that s[i] < s[i+1]. If no such index exists, the permutation is the last permutation.\nFind the highest index j > i such that s[j] > s[i]. Such a j must exist, since i+1 is such an index.\nSwap s[i] with s[j].\nReverse all the order of all of the elements after index i\n\nthe function:\ndef perms(items):\n items.sort()\n yield items[:]\n m = [len(items)-2] # step 1\n while m:\n i = m[-1]\n j = [ j for j in range(i+1,len(items)) if items[j]>items[i] ][-1] # step 2\n items[i], items[j] = items[j], items[i] # step 3\n items[i+1:] = list(reversed(items[i+1:])) # step 4\n if items<list(reversed(items)):\n yield items[:]\n m = [ i for i in range(len(items)-1) if items[i]<items[i+1] ] # step 1\n\nchecking our work:\n>>> foo=list(perms([1,3,2,4,5]))\n>>> True in [(list(reversed(i)) in foo) for i in foo]\nFalse\n\n",
"Some setup code first:\ntry:\n from itertools import permutations\nexcept ImportError:\n # straight from http://docs.python.org/library/itertools.html#itertools.permutations\n def permutations(iterable, r=None):\n # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC\n # permutations(range(3)) --> 012 021 102 120 201 210\n pool = tuple(iterable)\n n = len(pool)\n r = n if r is None else r\n if r > n:\n return\n indices = range(n)\n cycles = range(n, n-r, -1)\n yield tuple(pool[i] for i in indices[:r])\n while n:\n for i in reversed(range(r)):\n cycles[i] -= 1\n if cycles[i] == 0:\n indices[i:] = indices[i+1:] + indices[i:i+1]\n cycles[i] = n - i\n else:\n j = cycles[i]\n indices[i], indices[-j] = indices[-j], indices[i]\n yield tuple(pool[i] for i in indices[:r])\n break\n else:\n return\n\ndef non_reversed_permutations(iterable):\n \"Return non-reversed permutations for an iterable with unique items\"\n for permutation in permutations(iterable):\n if permutation[0] < permutation[-1]:\n yield permutation\n\n"
] |
[
14,
12,
4,
3,
2,
2,
2,
1,
1
] |
[
"itertools.permutations does exactly what you want. you might make of use of reversed built-in as well\n"
] |
[
-2
] |
[
"algorithm",
"combinatorics",
"generator",
"python"
] |
stackoverflow_0000960557_algorithm_combinatorics_generator_python.txt
|
Q:
What does the “|” sign mean in Python?
This question originally asked (wrongly) what does "|" mean in Python, when the actual question was about Django. That question had a wonderful answer by Triptych I want to preserve.
A:
In Python, the '|' operator is defined by default on integer types and set types.
If the two operands are integers, then it will perform a bitwise or, which is a mathematical operation.
If the two operands are set types, the '|' operator will return the union of two sets.
a = set([1,2,3])
b = set([2,3,4])
c = a|b # = set([1,2,3,4])
Additionally, authors may define operator behavior for custom types, so if something.property is a user-defined object, you should check that class definition for an __or__() method, which will then define the behavior in your code sample.
So, it's impossible to give you a precise answer without knowing the data types for the two operands, but usually it will be a bitwise or.
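For illustration, here is a minimal sketch of a user-defined __or__ (the Tag class is made up for this example):
class Tag:
    """Toy class where | joins tag names."""
    def __init__(self, name):
        self.name = name
    def __or__(self, other):
        # Invoked for: self | other
        return Tag(self.name + "|" + other.name)

print((Tag("a") | Tag("b")).name)  # prints: a|b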
A:
Bitwise OR
A:
It could also be "tricked" into a pipe like in unix shells, see here http://code.google.com/p/python-pipeline/
|
What does the “|” sign mean in Python?
|
This question originally asked (wrongly) what does "|" mean in Python, when the actual question was about Django. That question had a wonderful answer by Triptych I want to preserve.
|
[
"In Python, the '|' operator is defined by default on integer types and set types. \nIf the two operands are integers, then it will perform a bitwise or, which is a mathematical operation.\nIf the two operands are set types, the '|' operator will return the union of two sets.\na = set([1,2,3])\nb = set([2,3,4])\nc = a|b # = set([1,2,3,4])\n\nAdditionally, authors may define operator behavior for custom types, so if something.property is a user-defined object, you should check that class definition for an __or__() method, which will then define the behavior in your code sample.\nSo, it's impossible to give you a precise answer without knowing the data types for the two operands, but usually it will be a bitwise or.\n",
"Bitwise OR\n",
"It could also be \"tricked\" into a pipe like in unix shells, see here http://code.google.com/p/python-pipeline/\n"
] |
[
27,
1,
0
] |
[] |
[] |
[
"python",
"syntax_rules"
] |
stackoverflow_0000417396_python_syntax_rules.txt
|
Q:
What are the memory requirements for large python list?
I was doing a foolish thing like:
from itertools import *
rows = combinations(range(0, 1140), 17)
all_rows = []
for row in rows:
all_rows.append(row)
No surprise; I run out of memory address space (32 bit python 3.1)
My question is: how do I calculate how much memory address space I will need for a large list? In this case the list is on the order of 2.3X10^37.
Is there a function in Python that returns the information I am looking for, or actually the size of a smaller but similar list? What are those tools?
A:
There's a handy function sys.getsizeof() (since Python 2.6) that helps with this:
>>> import sys
>>> sys.getsizeof(1) # integer
12
>>> sys.getsizeof([]) # empty list
36
>>> sys.getsizeof(()) # empty tuple
28
>>> sys.getsizeof((1,)) # tuple with one element
32
From that you can see that each integer takes up 12 bytes, and the memory for each reference in a list or tuple is 4 bytes (on a 32-bit machine) plus the overhead (36 or 28 bytes respectively).
If your result has tuples of length 17 with integers, then you'd have 17*(12+4)+28 or 300 bytes per tuple. The result itself is a list, so 36 bytes plus 4 per reference. Find out how long the list will be (call it N) and you have 36+N*(4+300) as the total bytes required.
Edit: There's one other thing that could significantly affect that result. Python creates new integer objects as required for most integer values, but for small ones (empirically determined to be the range [-5, 256] on Python 2.6.4 on Windows) it pre-creates them all and re-uses them. If a large portion of your values are less than 257 this would significantly reduce the memory consumption. (On Python 257 is not 257+0 ;-)).
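A quick way to observe that caching at runtime on CPython (int() is used here to dodge compile-time constant folding):
a = int("256")
b = int("256")
print(a is b)  # True on CPython: 256 falls in the cached small-int range

c = int("257")
d = int("257")
print(c is d)  # False on CPython: each 257 is a fresh object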
A:
Well first rather than writing:
all_rows = []
for row in rows:
all_rows.append(row)
You can simply write:
all_rows = list(rows)
which will be quite a bit more efficient.
Then, there are two things to consider for memory consumption of a list:
memory consumption of the objects comprising the list; this obviously depends on these objects, their type, and whether there is a lot of sharing or not
memory consumption of the list itself; each object in the list is referenced by a pointer, which will take 4 bytes in 32-bit mode and 8 bytes in 64-bit mode; so, roughly, the size of the list itself is (4 or 8 bytes) times the number of objects in the list (that's ignoring the fixed list header overhead and the moderate amount of over-allocation that Python lists do)
By the way, in recent Python versions you can use sys.getsizeof() to get the size of an object:
>>> import sys
>>> sys.getsizeof([None] * 100)
872
A:
Addendum: Since you're dealing with lists of integers and worry about memory usage --- there is also the array-module:
[array] defines an object type which can compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. The type is specified at object creation time [...].
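A rough, illustrative comparison of the two (exact byte counts vary by platform and Python version):
import sys
from array import array

values = list(range(1000))
as_array = array('i', values)   # 'i' = signed int, typically 4 bytes each

print(sys.getsizeof(values))    # list header + one pointer per item
                                # (the int objects themselves are counted separately)
print(sys.getsizeof(as_array))  # array header + ~4 bytes per value, no boxed objects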
A:
You are asking for
http://en.wikipedia.org/wiki/Binomial_coefficient
http://www.brpreiss.com/books/opus7/programs/pgm14_10.txt
Anyhow, sounds like you are trying to solve an NP-complete problem by brute force ;)
|
What are the memory requirements for large python list?
|
I was doing a foolish thing like:
from itertools import *
rows = combinations(range(0, 1140), 17)
all_rows = []
for row in rows:
all_rows.append(row)
No surprise; I run out of memory address space (32 bit python 3.1)
My question is: how do I calculate how much memory address space I will need for a large list? In this case the list is on the order of 2.3X10^37.
Is there a function in Python that returns the information I am looking for, or actually the size of a smaller but similar list? What are those tools?
|
[
"There's a handy function sys.getsizeof() (since Python 2.6) that helps with this:\n>>> import sys\n>>> sys.getsizeof(1) # integer\n12\n>>> sys.getsizeof([]) # empty list\n36\n>>> sys.getsizeof(()) # empty tuple\n28\n>>> sys.getsizeof((1,)) # tuple with one element\n32\n\nFrom that you can see that each integer takes up 12 bytes, and the memory for each reference in a list or tuple is 4 bytes (on a 32-bit machine) plus the overhead (36 or 28 bytes respectively). \nIf your result has tuples of length 17 with integers, then you'd have 17*(12+4)+28 or 300 bytes per tuple. The result itself is a list, so 36 bytes plus 4 per reference. Find out how long the list will be (call it N) and you have 36+N*(4+300) as the total bytes required.\nEdit: There's one other thing that could significantly affect that result. Python creates new integer objects as required for most integer values, but for small ones (empirically determined to be the range [-5, 256] on Python 2.6.4 on Windows) it pre-creates them all and re-uses them. If a large portion of your values are less than 257 this would significantly reduce the memory consumption. (On Python 257 is not 257+0 ;-)).\n",
"Well first rather than writing:\nall_rows = []\nfor row in rows:\n all_rows.append(row)\n\nYou can simply write:\nall_rows = list(rows)\n\nwhich will be quite a bit more efficient.\nThen, there are two things to consider for memory consumption of a list:\n\nmemory consumption of the objects comprising the list; this obviously depends on these objects, their type, and whether there is a lot of sharing or not\nmemory consumption of the list itself; each object in the list is referenced by a pointer, which will take 4 bytes in 32-bit mode and 8 bytes in 64-bit mode; so, roughly, the size of the list itself is (4 or 8 bytes) times the number of objects in the list (that's ignoring the fixed list header overhead and the moderate amount of over-allocation that Python lists do)\n\nBy the way, in recent Python versions you can use sys.getsizeof() to get the size of an object:\n>>> import sys\n>>> sys.getsizeof([None] * 100)\n872\n\n",
"Addendum: Since you're dealing with lists of integers and worry about memory usage --- there is also the array-module:\n\n[array] defines an object type which can compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. The type is specified at object creation time [...].\n\n",
"You are asking for\nhttp://en.wikipedia.org/wiki/Binomial_coefficient\nhttp://www.brpreiss.com/books/opus7/programs/pgm14_10.txt\nAnyhow, sounds like you are trying to solve an NP-complete problem by brute force ;)\n"
] |
[
11,
4,
3,
1
] |
[] |
[] |
[
"32bit_64bit",
"memory_management",
"python"
] |
stackoverflow_0001985975_32bit_64bit_memory_management_python.txt
|
Q:
Gruber’s URL Regular Expression in Python
How do I rewrite this new way to recognise addresses to work in Python?
\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))
A:
The original source for that states "This pattern should work in most modern regex implementations" and specifically Perl. Python's regex implementation is modern and similar to Perl's but is missing the [:punct:] character class. You can easily build that using this:
>>> import string, re
>>> pat = r'\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^%s\s]|/)))'
>>> pat = pat % re.sub(r'([-\\\]])', r'\\\1', string.punctuation)
The re.sub() call escapes certain characters inside the character set as required.
Edit: Using re.escape() works just as well, since it just sticks a backslash in front of everything. That felt crude to me at first, but certainly works fine for this case.
>>> pat = pat % re.escape(string.punctuation)
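Once built, the pattern works like any other regex (the sample text below is made up):
>>> url_re = re.compile(pat)
>>> text = "See http://example.com/foo_(bar) and www.python.org for details."
>>> [m.group(0) for m in url_re.finditer(text)]
['http://example.com/foo_(bar)', 'www.python.org']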
A:
I don't think Python has this expression
[:punct:]
Wikipedia says [:punct:] is the same as
[-!\"#$%&\'()*+,./:;<=>?@\\[\\\\]^_`{|}~]
A:
Python doesn't have the POSIX bracket expressions.
The [:punct:] bracket expression is equivalent in ASCII to
[!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~]
|
Gruber’s URL Regular Expression in Python
|
How do I rewrite this new way to recognise addresses to work in Python?
\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))
|
[
"The original source for that states \"This pattern should work in most modern regex implementations\" and specifically Perl. Python's regex implementation is modern and similar to Perl's but is missing the [:punct:] character class. You can easily build that using this:\n>>> import string, re\n>>> pat = r'\\b(([\\w-]+://?|www[.])[^\\s()<>]+(?:\\([\\w\\d]+\\)|([^%s\\s]|/)))'\n>>> pat = pat % re.sub(r'([-\\\\\\]])', r'\\\\\\1', string.punctuation)\n\nThe re.sub() call escapes certain characters inside the character set as required. \nEdit: Using re.escape() works just as well, since it just sticks a backslash in front of everything. That felt crude to me at first, but certainly works fine for this case.\n>>> pat = pat % re.escape(string.punctuation)\n\n",
"I don't think python have this expression\n[:punct:]\n\nWikipedia says [:punct:] is same to\n[-!\\\"#$%&\\'()*+,./:;<=>?@\\\\[\\\\\\\\]^_`{|}~]\n\n",
"Python doesn't have the POSIX bracket expressions.\nThe [:punct:] bracket expression is equivalent in ASCII to \n[!\"#$%&'()*+,\\-./:;<=>?@[\\\\\\]^_`{|}~] \n\n"
] |
[
12,
5,
2
] |
[] |
[] |
[
"gruber",
"python",
"regex"
] |
stackoverflow_0001986059_gruber_python_regex.txt
|
Q:
Which is more fundamental: Python functions or Python object-methods?
I am trying to get a conceptual understanding of the nature of Python functions and methods. I get that functions are actually objects, with a method that is called when the function is executed. But is that function-object method actually another function?
For example:
def fred():
pass
If I look at dir(fred), I see it has an attribute named __call__. But dir(fred.__call__) also has an attribute named __call__. So does fred.__call__.__call__ and so on. The ids of this chain of __call__ objects suggest they are all distinct. Are they really objects or is this some low-level trick of the interpreter?
Which is more fundamental: functions or object-methods?
A:
Short answer: both are fundamental, .__call__() on functions is just a virtual trick.
The rest of this answer is a bit complicated. You don't have to understand it, but I find the subject interesting. Be warned that I'm going to present a series of lies, progressively fixing them.
Long answer
At the most fundamental level, Python can be said to have just 2 operations:
attribute access: obj.attr
function call: callable(args)
Method calls - obj.method(args) - are not fundamental. They consist of 2 steps: fetching the attribute obj.method (which gives a callable "bound method" object) and calling that with args.
Other operators are defined in terms of them. E.g. x + y tries x.__add__(y), falling back to other similar combinations if that doesn't work.
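The two-step nature of a method call can be made explicit (a small illustration):
class Greeter(object):
    def hello(self, name):
        return "hello " + name

g = Greeter()
method = g.hello        # step 1: attribute access yields a bound method object
print(method("world"))  # step 2: a plain call -> hello world

# Operators reduce to the same two operations:
print((1).__add__(2))   # 3, the machinery behind 1 + 2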
Infinitely Long Answer?
So far so good. But calling and attribute access themselves are also defined in terms of obj.__call__(args) and obj.__getattribute__(name)?!?
Is it turtles all the way down?!?
The trick is that operations on an object are defined by calling methods of its type: type(obj).__call__(obj, args) and type(obj).__getattribute__(obj, name). Which BTW means that I lied to you, and there is a third fundamental operation:
getting the type of an object: type(obj)
OK, this is still not helpful. type(obj).__call__(...) still involves an attribute access and a call, so this should continue ad infinitum? The rub is that eventually you hit a builtin type - usually a function, object or type - and for them attribute access and function calls are fundamental.
So when you call an instance of a custom class, that is indeed implemented through its __call__ method. But its __call__ method is probably a normal function - which can be called directly. End of mystery.
Similarly for __getattribute__ - you can provide it to define attribute access for your class, but the class itself implements attribute access fundamentally (unless it has a custom metaclass).
The Curtain in Front of the Man
So why does even a function have a fred.__call__ method? Well, that's just smoke and mirrors that Python pulls to blur the difference between builtin types and custom classes. This method exists on all callable objects, but calling a normal function doesn't have to go through it - functions are fundamentally callable.
Similarly, all objects have obj.__getattribute__ and obj.__class__, but for built-in types it just exposes the fundamental operations instead of defining it.
Small Print
The first claim that Python had 2 fundamental operations was actually a complete lie. Technically, all Python operators have a "fundamental" operation at the C level, exposed for consistency through a method, and custom classes can redefine these operations through similar methods.
But the story I told you could have been true, and it reduces the question to its center: why .__call__() and .__getattribute__() are not an infinite recursion.
A:
Not specifically a Python answer, but at the lowest level the processor understands only actions and variables. From that we extrapolate functions, and from variables and functions we extrapolate objects. So from a low-level programming perspective I'd say that the more fundamental thing is the function.
That's not necessarily true of Python in the Pythonic sense, and is probably a good example of why it's not always beneficial to look deeply into the implementation of the language as a user of it. :) Thinking of a function as an object is certainly the better answer in Python itself.
At first I thought your calls were tracking into the Python library, but the __call__ method has the same properties as any other method. Thus it's recursively exploring itself, I think, having played with the Python CLI for a few minutes; that is a painful way of exploring the architecture and, while not necessarily a bug, it is a property of how Python handles objects under the covers. :)
A:
Which is more fundamental: functions
or object-methods?
I think the best answer might be "neither". See the Execution model part of the Python reference, where it refers to "blocks". This is what actually gets executed. The __call__ thing you were getting hung up on in the infinite search for an end is just a wrapper which knows how to execute the code block (see the various func_xxx attributes of your function instead, with the actual bytecode being stored as func_code).
Also relevant, the Function definitions section, which refers to "a function object [being] (a wrapper around the executable code for the function)". Lastly, there's the term callable, which might also be an answer to "which is more fundamental?"
|
Which is more fundamental: Python functions or Python object-methods?
|
I am trying to get a conceptual understanding of the nature of Python functions and methods. I get that functions are actually objects, with a method that is called when the function is executed. But is that function-object method actually another function?
For example:
def fred():
pass
If I look at dir(fred), I see it has an attribute named __call__. But dir(fred.__call__) also has an attribute named __call__. So does fred.__call__.__call__ and so on. The ids of this chain of __call__ objects suggest they are all distinct. Are they really objects or is this some low-level trick of the interpreter?
Which is more fundamental: functions or object-methods?
|
[
"Short answer: both are fundamental, .__call__() on functions is just a virtual trick.\n\nThe rest of this answer is a bit complicated. You don't have to understand it, but I find the subject interesting. Be warned that I'm going to present a series of lies, progressively fixing them.\nLong answer\nAt the most fundamental level, Python can be said to have just 2 operations:\n\nattribute access: obj.attr\nfunction call: callable(args)\n\nMethod calls - obj.method(args) - are not fundamental. They consist of 2 steps: fetching the attribute obj.method (which gives a callable \"bound method\" object) and calling that with args.\nOther operators are defined in terms of them. E.g. x + y tries x.__add__(y), falling back to other similar combinations if that doesn't work.\nInfinitely Long Answer?\nSo far so good. But calling and attribute access themselves are also defined in terms of obj.__call__(args) and obj.__getattribute__(name)?!?\nIs it turtles all the way down?!?\nThe trick is that operations on an object are defined by calling methods of its type: type(obj).__call__(obj, args) and type(obj).__getattribute__(obj, name). Which BTW means that I lied to you, and there is a third fundamental operation:\n\ngetting the type of an object: type(obj)\n\nOK, this is still not helpful. type(obj).__call__(...) still involves an attribute access and a call, so this should continue ad infinitum? The rub is that eventually you hit a builtin type - usually a function, object or type - and for them attribute access and function calls are fundamental.\nSo when you call a instance of a custom class, that's implemented through its __call__ method indeed. But its __call__ method is probably a normal function - which can be called directly. End of mystery.\nSimilarly about __getattribute__ - you can provide it to define attribute access for your class, but the class itself implement attribute access fundamentally (unless it has a custom metaclass).\nThe Curtain in Front of the Man\nSo why does even a function has a fred.__call__ method? Well that's just smoke and mirrors that Python pulls to blur the difference between builtin types and custom classes. This method exists on all callable objects, but calling a normal function doesn't have to go through it - functions are fundamentally callable.\nSimilarly, all objects have obj.__getattribute__ and obj.__class__, but for built-in types it just exposes the fundamental operations instead of defining it.\nSmall Print\nThe first claim that Python had 2 fundamental operations was actually a complete lie. Technically, all Python operators have a \"fundamental\" operation at the C level, exposed for consistency through a method, and custom classes can redefine these operations through similar methods.\nBut the story I told you could have been true, and it reduces the question its center: why .__call__() and .__getattribute__() are not an infinite recursion.\n",
"Not specifically a Python answer, but at the lowest level the processor understands only actions and variables. From that we extrapolate functions, and from variables and functions we extrapolate objects. So from a low-level programming perspective I'd say that the more fundamental thing is the function.\nThat's not necessarily true of Python in the Pythonic sense, and is probably a good example of why it's not always beneficial to look deeply into the implementation of the language as a user of it. :) Thinking of a function as an object is certainly the better answer in Python itself.\nAt first I thought your calls were tracking into the Python library, but the .call method has the same properties as any other method. Thus it's recursively exploring itself, I think, having played with the python CLI for a few minutes; I think that is a painful way of exploring the architecture and while not necessarily a bug a property of how Python handles objects under the covers. :)\n",
"\nWhich is more fundamental: functions\n or object-methods?\n\nI think the best answer might be \"neither\". See the Execution model part of the Python reference, where it refers to \"blocks\". This is what actually gets executed. The __call__ thing you were getting hung up on in the infinite search for an end is just a wrapper which knows how to execute the code block (see the various func_xxx attributes of your function instead, with the actual bytecode being stored as func_code).\nAlso relevant, the Function definitions section, which refers to \"a function object [being] (a wrapper around the executable code for the function)\". Lastly, there's the term callable, which might also be an answer to \"which is more fundamental?\"\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"function",
"object",
"python"
] |
stackoverflow_0001985635_function_object_python.txt
|
Q:
Finding all classes that derive from a given base class in python
I'm looking for a way to get a list of all classes that derive from a particular base class in Python.
More specifically I am using Django and I have an abstract base Model and then several Models that derive from that base class...
class Asset(models.Model):
name = models.CharField(max_length=500)
last_update = models.DateTimeField(default=datetime.datetime.now())
category = models.CharField(max_length=200, default='None')
class Meta:
abstract = True
class AssetTypeA(Asset):
junk = models.CharField(max_length=200)
hasJunk = models.BooleanField()
def __unicode__(self):
return self.junk
class AssetTypeB(Asset):
stuff= models.CharField(max_length=200)
def __unicode__(self):
return self.stuff
I'd like to be able to detect if anyone adds a new AssetTypeX model and generate the appropriate pages but currently I am maintaining a list manually, is there a way to determine a list of class names for anything that derives from "Asset"?
A:
Asset.__subclasses__() gives the immediate subclasses of Asset, but whether that's sufficient depends on whether that immediate part is a problem for you -- if you want all descendants to whatever number of levels, you'll need recursive expansion, e.g.:
def descendants(aclass):
directones = aclass.__subclasses__()
if not directones: return
for c in directones:
yield c
for x in descendants(c): yield x
Your examples suggest you only care about classes directly subclassing Asset, in which case you might not need this extra level of expansion.
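A quick illustration with plain classes, assuming the descendants generator above (Django model classes behave the same way):
class Asset(object): pass
class AssetTypeA(Asset): pass
class AssetTypeB(Asset): pass
class AssetTypeB1(AssetTypeB): pass  # a second-level subclass

print([c.__name__ for c in Asset.__subclasses__()])
# ['AssetTypeA', 'AssetTypeB'] -- immediate children only
print([c.__name__ for c in descendants(Asset)])
# ['AssetTypeA', 'AssetTypeB', 'AssetTypeB1'] -- all levels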
|
Finding all classes that derive from a given base class in python
|
I'm looking for a way to get a list of all classes that derive from a particular base class in Python.
More specifically I am using Django and I have an abstract base Model and then several Models that derive from that base class...
class Asset(models.Model):
name = models.CharField(max_length=500)
last_update = models.DateTimeField(default=datetime.datetime.now())
category = models.CharField(max_length=200, default='None')
class Meta:
abstract = True
class AssetTypeA(Asset):
junk = models.CharField(max_length=200)
hasJunk = models.BooleanField()
def __unicode__(self):
return self.junk
class AssetTypeB(Asset):
stuff= models.CharField(max_length=200)
def __unicode__(self):
return self.stuff
I'd like to be able to detect if anyone adds a new AssetTypeX model and generate the appropriate pages but currently I am maintaining a list manually, is there a way to determine a list of class names for anything that derives from "Asset"?
|
[
"Asset.__subclasses__() gives the immediate subclasses of Asset, but whether that's sufficient depends on whether that immediate part is a problem for you -- if you want all descendants to whatever number of levels, you'll need recursive expansion, e.g.:\ndef descendants(aclass):\n directones = aclass.__subclasses__()\n if not directones: return\n for c in directones:\n yield c\n for x in descendants(c): yield x\n\nYour examples suggest you only care about classes directly subclassing Asset, in which case you might not need this extra level of expansion.\n"
] |
[
9
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001986335_django_python.txt
|
Q:
Why Unexpected Indent?
Why does this happen?
def LoadPackageList():
try:
        #Attempts to load package list... Adds each necessary attribute into array
print("Loading Package List... please wait")
packages = []
packagelisturl = os.getcwd() + "packages.list"
dom = minidom.parse(urllib.urlopen(packagelisturl))
try:
for eachattributeofpkglist in dom.GetElementsByTagNameNS(packagelist, 'packages'):
packages.append({
'title': node.getAttribute('title'),
'shortname': node.getAttribute('shortname'),
'dlurl': node.getAttributes('dlurl'),
'description': node.getAttributes('description'),
'tags': node.getAttributes('tags'),
'infopage': node.getAttributes('infopage'),
'quality': node.getAttributes('quality'),
'id': node.getAttributes('id')
})
except LoadPackageListFailed:
print("Loading Package List failed... try again soon or manually update this release!")
Write2ErrorLog(LoadPackageListFailed)
#Indent Here Fails
def Usage():
#prints usage and closes
print ("Invalid Argument Specified, please retry using the format stated below\n")
print ("*** Simtho Usage Parameters ***\n")
print ("-i Installs Local App, include full path")
print ("-u Uninstalls Installed App,include ID or Name")
print ("-l Lists all installed Apps and their ID")
print ("-a Lists All Apps in Repository")
print ("-s Downloads and Installs App from repository, enter the title or id number")
print ("-w Downloads and Installs Single App from a full link")
print ("-r Removes All Packages installed\n")
print ("*** End of Simtho Usage ***")
os._exit(1)
return;
A:
Try running it through python -t and see if there is a mixture of tabs and spaces somewhere in there.
As a side note: use optparse to process command line parameters. It will make your life much easier and produce a nice consistent interface.
A:
The code you posted works just fine. Therefore, check the lines immediately above the part you posted. If that does not help, please post the exact error message and a little more of the code.
Edit: Also check that you do not have a mixture of tabs and spaces.
Edit 2: (In response to more code posted by the OP).
Aha. Every try requires an except. In the definition of LoadPackageList, there are two try's. The inner one has an except block, but the outer one is just a bare try.
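Schematically, the fix looks like this (the exception names here are placeholders):
def load_package_list():
    try:
        # ... outer work ...
        try:
            pass                # ... inner work ...
        except ValueError:      # inner handler (LoadPackageListFailed in the original)
            pass
    except Exception:           # <-- the outer try needs this, or a finally: block
        pass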
A:
Shouldn't the try on line 2 have an except, finally or else associated with it? Or is this some new Python idiom I haven't seen before?
|
Why Unexpected Indent?
|
Why does this happen?
def LoadPackageList():
try:
        #Attempts to load package list... Adds each necessary attribute into array
print("Loading Package List... please wait")
packages = []
packagelisturl = os.getcwd() + "packages.list"
dom = minidom.parse(urllib.urlopen(packagelisturl))
try:
for eachattributeofpkglist in dom.GetElementsByTagNameNS(packagelist, 'packages'):
packages.append({
'title': node.getAttribute('title'),
'shortname': node.getAttribute('shortname'),
'dlurl': node.getAttributes('dlurl'),
'description': node.getAttributes('description'),
'tags': node.getAttributes('tags'),
'infopage': node.getAttributes('infopage'),
'quality': node.getAttributes('quality'),
'id': node.getAttributes('id')
})
except LoadPackageListFailed:
print("Loading Package List failed... try again soon or manually update this release!")
Write2ErrorLog(LoadPackageListFailed)
#Indent Here Fails
def Usage():
#prints usage and closes
print ("Invalid Argument Specified, please retry using the format stated below\n")
print ("*** Simtho Usage Parameters ***\n")
print ("-i Installs Local App, include full path")
print ("-u Uninstalls Installed App,include ID or Name")
print ("-l Lists all installed Apps and their ID")
print ("-a Lists All Apps in Repository")
print ("-s Downloads and Installs App from repository, enter the title or id number")
print ("-w Downloads and Installs Single App from a full link")
print ("-r Removes All Packages installed\n")
print ("*** End of Simtho Usage ***")
os._exit(1)
return;
|
[
"Try running it through python -t and see if there is a mixture of tabs and spaces somewhere in there.\nAs a side note: use optparse to process command line parameters. It will make your life much easier and produce a nice consistent interface.\n",
"The code you posted works just fine. Therefore, check the lines immediately above the part you posted. If that does not help, please post the exact error message and a little more of the code.\nEdit: Also check that you do not have a mixture of tabs and spaces.\nEdit 2: (In response to more code posted by the OP).\nAha. Every try requires an except. In the definition of LoadPackageList, there are two try's. The inner one has an except block, but the outer one is just a bare try.\n",
"Shouldn't the try on line 2 have an except, finally or else associated with it? Or is this some new Python idiom I haven't seen before?\n"
] |
[
7,
4,
3
] |
[] |
[] |
[
"indentation",
"python"
] |
stackoverflow_0001983405_indentation_python.txt
|
Q:
Where should I place the one-time operation operation in the Django framework?
I want to perform some one-time operations, such as starting a background thread that populates a cache every 30 minutes, as an initialization action when the Django server is started, so it will not block users from visiting the website. Where should I place all this code in Django?
Putting them into the settings.py file does not work. It seems it will cause a circular dependency.
Putting them into the __init__.py file does not work; the Django server calls it many times. (What is the reason?)
A:
I just create standalone scripts and schedule them with cron. Admittedly it's a bit low-tech, but It Just Works. Just place this at the top of a script in your projects top-level directory and call as needed.
#!/usr/bin/env python
from django.core.management import setup_environ
import settings
setup_environ(settings)
from django.db import transaction
# random interesting things
# If you change the database, make sure you use this next line
transaction.commit_unless_managed()
A:
We put one-time startup scripts in the top-level urls.py. This is often where your admin bindings go -- they're one-time startup, also.
Some folks like to put these things in settings.py but that seems to conflate settings (which don't do much) with the rest of the site's code (which does stuff).
A:
For a one-time operation at server start you can use custom management commands, and if you want a periodic task or a queue of tasks you can use Celery.
A:
__init__.py will be called every time the app is imported. So if you're using mod_wsgi with Apache for instance with the prefork method, then every new process created is effectively 'starting' the project thus importing __init__.py. It sounds like your best method would be to create a new management command, and then cron that up to run every so often if that's an option. Either that, or run that management command before starting the server. You could write up a quick script that runs that management command and then starts the server for instance.
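A minimal management-command sketch (the app name, command name, and file path are all illustrative): save it as yourapp/management/commands/warm_cache.py, make sure both management/ and commands/ contain an __init__.py, and run it with ./manage.py warm_cache.
# yourapp/management/commands/warm_cache.py
from django.core.management.base import NoArgsCommand

class Command(NoArgsCommand):
    help = "Populate the cache; schedule via cron every 30 minutes."

    def handle_noargs(self, **options):
        # ... populate the cache here ...
        print("cache warmed")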
|
Where should I place the one-time operation operation in the Django framework?
|
I want to perform some one-time operations, such as starting a background thread that populates a cache every 30 minutes, as an initialization action when the Django server is started, so it will not block users from visiting the website. Where should I place all this code in Django?
Putting them into the settings.py file does not work. It seems it will cause a circular dependency.
Putting them into the __init__.py file does not work; the Django server calls it many times. (What is the reason?)
|
[
"I just create standalone scripts and schedule them with cron. Admittedly it's a bit low-tech, but It Just Works. Just place this at the top of a script in your projects top-level directory and call as needed.\n#!/usr/bin/env python\nfrom django.core.management import setup_environ\nimport settings\nsetup_environ(settings)\nfrom django.db import transaction\n\n# random interesting things\n# If you change the database, make sure you use this next line\ntransaction.commit_unless_managed()\n\n",
"We put one-time startup scripts in the top-level urls.py. This is often where your admin bindings go -- they're one-time startup, also.\nSome folks like to put these things in settings.py but that seems to conflate settings (which don't do much) with the rest of the site's code (which does stuff).\n",
"For one operation in startserver, you can use customs commands or if you want a periodic task or a queue of taske you can use celery\n",
"__init__.py will be called every time the app is imported. So if you're using mod_wsgi with Apache for instance with the prefork method, then every new process created is effectively 'starting' the project thus importing __init__.py. It sounds like your best method would be to create a new management command, and then cron that up to run every so often if that's an option. Either that, or run that management command before starting the server. You could write up a quick script that runs that management command and then starts the server for instance.\n"
] |
[
6,
4,
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001986060_django_python.txt
|
Q:
Python, PyTables - taking advantage of in-kernel searching
I have HDF5 files with multiple groups, where each group contains a data set with >= 25 million rows. At each time step of simulation, each agent outputs the other agents he/she sensed at that time step. There are ~2000 agents in the scenario and thousands of time steps; the O(n^2) nature of the output explains the huge number of rows.
What I'm interested in calculating is the number of unique sightings by category. For instance, agents belong to a side, red, blue, or green. I want to make a two-dimensional table where row i, column j is the number of agents in category j that were sensed by at least one agent in category i. (I'm using the Sides in this code example, but we could classify the agents in other ways as well, such as by the weapon they have, or the sensors they carry.)
Here's a sample output table; note that the simulation does not output blue/blue sensations because it takes a ton of room and we aren't interested in them. Same for green green)
       blue  green  red
blue      0    492  186
green  1075      0  186
red     451    498   26
The columns are
tick - time step
sensingAgentId - id of agent doing sensing
sensedAgentId - id of agent being sensed
detRange - range in meters between two agents
senseType - an enumerated type for what type of sensing was done
Here's the code I am currently using to accomplish this:
def createHeatmap():
h5file = openFile("someFile.h5")
run0 = h5file.root.run0.detections
# A dictionary of dictionaries, {'blue': {'blue':0, 'red':0, ...}
classHeat = emptyDict(sides)
# Interested in Per Category Unique Detections
seenClass = {}
# Initially each side has seen no one
for theSide in sides:
seenClass[theSide] = []
# In-kernel search filtering out many rows in file; in this instance 25,789,825 rows
# are filtered to 4,409,176
classifications = run0.where('senseType == 3')
# Iterate and filter
for row in classifications:
sensedId = row['sensedAgentId']
# side is a function that returns the string representation of the side of agent
# with that id.
sensedSide = side(sensedId)
sensingSide = side(row['sensingAgentId'])
# The side has already seen this agent before; ignore it
if sensedId in seenClass[sensingSide]:
continue
else:
classHeat[sensingSide][sensedSide] += 1
seenClass[sensingSide].append(sensedId)
return classHeat
Note: I have a Java background, so I apologize if this is not Pythonic. Please point this out and suggest ways to improve this code, I would love to become more proficient with Python.
Now, this is very slow: it takes approximately 50 seconds to do this iteration and membership checking, and this is with the most restrictive set of membership criteria (other detection types have many more rows to iterate over).
My question is, is it possible to move the work out of python and into the in-kernel search query? If so, how? Is there some glaringly obvious speedup I am missing? I need to be able to run this function for each run in a set of runs (~30), and for multiple set of criteria (~5), so it would be great if this could be sped up.
Final note: I tried using psyco but that barely made a difference.
A:
If you have N=~2k agents, I suggest putting all sightings into a numpy array of size NxN. This easily fits in memory (around 16 meg for integers). Just store a 1 wherever a sighting occurred.
Assume that you have an array sightings. The first coordinate is Sensing, the second is Sensed. Assume you also have 1-d index arrays listing which agents are on which side. You can get the number of sightings of side B by side A this way:
sideAseesB = sightings[sideAindices, sideBindices]
sideAseesBcount = numpy.logical_or.reduce(sideAseesB, axis=0).sum()
It's possible you'd need to use sightings.take(sideAindices, axis=0).take(sideBindices, axis=1) in the first step, but I doubt it.
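A tiny self-contained version of that idea (agent counts and side assignments are made up; np.ix_ plays the role of the two take() calls above):
import numpy as np

N = 6
sightings = np.zeros((N, N), dtype=int)
sideAindices = np.array([0, 1, 2])   # agents 0-2 are side A (arbitrary)
sideBindices = np.array([3, 4, 5])   # agents 3-5 are side B

sightings[0, 3] = 1   # agent 0 saw agent 3
sightings[1, 4] = 1   # agents 1 and 2 both saw agent 4
sightings[2, 4] = 1

sideAseesB = sightings[np.ix_(sideAindices, sideBindices)]
print(np.logical_or.reduce(sideAseesB, axis=0).sum())  # 2: agents 3 and 4 were seen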
|
Python, PyTables - taking advantage of in-kernel searching
|
I have HDF5 files with multiple groups, where each group contains a data set with >= 25 million rows. At each time step of simulation, each agent outputs the other agents he/she sensed at that time step. There are ~2000 agents in the scenario and thousands of time steps; the O(n^2) nature of the output explains the huge number of rows.
What I'm interested in calculating is the number of unique sightings by category. For instance, agents belong to a side, red, blue, or green. I want to make a two-dimensional table where row i, column j is the number of agents in category j that were sensed by at least one agent in category i. (I'm using the Sides in this code example, but we could classify the agents in other ways as well, such as by the weapon they have, or the sensors they carry.)
Here's a sample output table; note that the simulation does not output blue/blue sensations because it takes a ton of room and we aren't interested in them. Same for green green)
       blue  green  red
blue      0    492  186
green  1075      0  186
red     451    498   26
The columns are
tick - time step
sensingAgentId - id of agent doing sensing
sensedAgentId - id of agent being sensed
detRange - range in meters between two agents
senseType - an enumerated type for what type of sensing was done
Here's the code I am currently using to accomplish this:
def createHeatmap():
    h5file = openFile("someFile.h5")
    run0 = h5file.root.run0.detections
    # A dictionary of dictionaries, e.g. {'blue': {'blue': 0, 'red': 0, ...}}
    classHeat = emptyDict(sides)
    # Interested in Per Category Unique Detections
    seenClass = {}
    # Initially each side has seen no one
    for theSide in sides:
        seenClass[theSide] = []
    # In-kernel search filtering out many rows in file; in this instance 25,789,825 rows
    # are filtered to 4,409,176
    classifications = run0.where('senseType == 3')
    # Iterate and filter
    for row in classifications:
        sensedId = row['sensedAgentId']
        # side is a function that returns the string representation of the side of agent
        # with that id.
        sensedSide = side(sensedId)
        sensingSide = side(row['sensingAgentId'])
        # The side has already seen this agent before; ignore it
        if sensedId in seenClass[sensingSide]:
            continue
        else:
            classHeat[sensingSide][sensedSide] += 1
            seenClass[sensingSide].append(sensedId)
    return classHeat
Note: I have a Java background, so I apologize if this is not Pythonic. Please point this out and suggest ways to improve this code; I would love to become more proficient with Python.
Now, this is very slow: it takes approximately 50 seconds to do this iteration and membership checking, and this is with the most restrictive set of membership criteria (other detection types have many more rows to iterate over).
My question is, is it possible to move the work out of Python and into the in-kernel search query? If so, how? Is there some glaringly obvious speedup I am missing? I need to be able to run this function for each run in a set of runs (~30), and for multiple sets of criteria (~5), so it would be great if this could be sped up.
Final note: I tried using psyco but that barely made a difference.
|
[
"If you have N=~2k agents, I suggest putting all sightings into a numpy array of size NxN. This easily fits in memory (around 16 meg for integers). Just store a 1 wherever a sighting occurred.\nAssume that you have an array sightings. The first coordinate is Sensing, the second is Sensed. Assume you also have 1-d index arrays listing which agents are on which side. You can get the number of sightings of side B by side A this way:\nsideAseesB = sightings[sideAindices, sideBindices]\nsideAseesBcount = numpy.logical_or.reduce(sideAseesB, axis=0).sum()\n\nIt's possible you'd need to use sightings.take(sideAindices, axis=0).take(sideBindices, axis=1) in the first step, but I doubt it.\n"
] |
[
2
] |
[] |
[] |
[
"optimization",
"pytables",
"python",
"query_optimization"
] |
stackoverflow_0001971870_optimization_pytables_python_query_optimization.txt
|
Q:
Create a text file in python using values from a sqlite database
I am trying to build a text file which is a combination of predefined strings and variable values which I would like to take from a pre-existing sqlite database. The general format of each line of the text file is as follows:
constraint n: value i < value j
Where n is an integer which will increase by one every line. Value i is the value found at row x, column y of table i and value j is that found at row x, column y of table j. I will need to iterate over each value of these tables presently in my database. My goal is to use nested while loops in order to iterate through each value in the tables, along the lines of:
>>> while x < 30:
        y = 1
        while y < 30:
            #code to populate text file goes here
            n = n + 1
            y = y + 1
        x = x + 1
I have some limited experience in python but am wide open to any suggestions. Any input that anyone has would be very much appreciated.
Thank you very much and Happy New Year!
Paul
A:
The simplest, unoptimized approach would be something like:
import sqlite3
conn = sqlite3.connect('whatever.file')
c = conn.cursor()

out = open('results.txt', 'w')
n = 1

for x in range(1, 30):
    for y in range(1, 30):
        c.execute('select value from i where row=? and column=?', (x, y))
        i = c.fetchone()[0]
        c.execute('select value from j where row=? and column=?', (x, y))
        j = c.fetchone()[0]
        out.write('constraint %d: %s < %s\n' % (n, i, j))
        n += 1

out.close()
conn.close()
The main issue with this approach is that it uses 1682 separate queries to sqlite and thereby might be a bit slow. The main optimization would be to reduce the number of queries by fetching (e.g.) all i and j values for each given x with just two queries (by having no where condition on y and select value, column as the select subclause) -- whether you need it or not depends on whether the performance of this dirt-simple approach is already satisfactory for your purposes.
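As a hedged sketch of that batched optimization, reusing the c, out, and n from the snippet above and still assuming the hypothetical i and j tables, each x needs only two queries that fetch every column at once:
for x in range(1, 30):
    c.execute('select column, value from i where row=?', (x,))
    ivals = dict(c.fetchall())   # column -> value
    c.execute('select column, value from j where row=?', (x,))
    jvals = dict(c.fetchall())
    for y in range(1, 30):
        out.write('constraint %d: %s < %s\n' % (n, ivals[y], jvals[y]))
        n += 1

That is 58 queries instead of 1682, at the cost of holding one row's worth of values in memory at a time.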
A:
I believe you will be much happier if you use an ORM to access the data, rather than direct SQL as shown in Alex Martelli's answer. It seems like your database is very simple and an ORM will work well.
I suggest you try SQLAlchemy or Autumn. SQLAlchemy is powerful and flexible, and has been around for a long time. Autumn is very simple. Either one should help you.
http://www.sqlalchemy.org/
http://autumn-orm.org/
On the other hand, if Alex Martelli's example code has already solved your problem and this is a simple one-off problem, then hey, grab the solution and go.
|
Create a text file in python using values from a sqlite database
|
I am trying to build a text file which is a combination of predefined strings and variable values which I would like to take from a pre-existing sqlite database. The general format of each line of the text file is as follows:
constraint n: value i < value j
Where n is an integer which will increase by one every line. Value i is the value found at row x, column y of table i and value j is that found at row x, column y of table j. I will need to iterate over each value of these tables presently in my database. My goal is to use nested while loops in order to iterate through each value in the tables, along the lines of:
>>> while x < 30:
        y = 1
        while y < 30:
            #code to populate text file goes here
            n = n + 1
            y = y + 1
        x = x + 1
I have some limited experience in python but am wide open to any suggestions. Any input that anyone has would be very much appreciated.
Thank you very much and Happy New Year!
Paul
|
[
"The simplest, unoptimized approach would be something like:\nimport sqlite3\nconn = sqlite3.connect('whatever.file')\nc = conn.cursor\n\nout = open('results.txt', 'w')\nn = 1\n\nfor x in range(1, 30):\n for y in range(1, 30):\n c.execute('select value from i where row=? and column=?', (x, y))\n i = c.fetchone()[0]\n c.execute('select value from j where row=? and column=?', (x, y))\n j = c.fetchone()[0]\n out.write('constraint %d: %s < %s\\n' % (n, i, j))\n n += 1\n\nout.close()\nconn.close()\n\nThe main issue with this approach is that it uses 1682 separate queries to sqlite and thereby might be a bit slow. The main optimization would be to reduce the number of queries by fetching (e.g.) all i and j values for each given x with just two queries (by having no where condition on y and select value, column as the select subclause) -- whether you need it or not depends on whether the performance of this dirt-simple approach is already satisfactory for your purposes.\n",
"I believe you will be much happier if you use an ORM to access the data, rather than direct SQL as shown in Alex Martelli's answer. It seems like your database is very simple and an ORM will work well.\nI suggest you try SQLAlchemy or Autumn. SQLAlchemy is powerful and flexible, and has been around for a long time. Autumn is very simple. Either one should help you.\nhttp://www.sqlalchemy.org/\nhttp://autumn-orm.org/\nOn the other hand, if Alex Martelli's example code has already solved your problem and this is a simple one-off problem, then hey, grab the solution and go.\n"
] |
[
3,
0
] |
[] |
[] |
[
"python",
"sqlite"
] |
stackoverflow_0001986978_python_sqlite.txt
|
Q:
Data Structures in Python
All the books I've read on data structures so far seem to use C/C++, and make heavy use of the "manual" pointer control that they offer. Since Python hides that sort of memory management and garbage collection from the user is it even possible to implement efficient data structures in this language, and is there any reason to do so instead of using the built-ins?
A:
Python gives you some powerful, highly optimized data structures, both as built-ins and as part of a few modules in the standard library (lists and dicts, of course, but also tuples, sets, arrays in module array, and some other containers in module collections).
Combinations of these data structures (and maybe some of the functions from helper modules such as heapq and bisect) are generally sufficient to implement most richer structures that may be needed in real-life programming; however, that's not invariably the case.
When you need something more than the rich library provides, consider the fact that an object's attributes (and items in collections) are essentially "pointers" to other objects (without pointer arithmetic), i.e., "reseatable references", in Python just like in Java. In Python, you normally use a None value in an attribute or item to represent what NULL would mean in C++ or null would mean in Java.
So, for example, you could implement binary trees via, e.g.:
class Node(object):
    __slots__ = 'payload', 'left', 'right'
    def __init__(self, payload=None, left=None, right=None):
        self.payload = payload
        self.left = left
        self.right = right
plus methods or functions for traversal and similar operations (the __slots__ class attribute is optional -- mostly a memory optimization, to avoid each Node instance carrying its own __dict__, which would be substantially larger than the three needed attributes/references).
Other examples of data structures that may best be represented by dedicated Python classes, rather than by direct composition of other existing Python structures, include tries (see e.g. here) and graphs (see e.g. here).
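To make that concrete, here is a minimal in-order traversal for the Node class above (the inorder name is my own, not from any library):
def inorder(node):
    # yield payloads from the left subtree, then the node, then the right subtree
    if node is None:
        return
    for payload in inorder(node.left):
        yield payload
    yield node.payload
    for payload in inorder(node.right):
        yield payload

root = Node(2, Node(1), Node(3))
print list(inorder(root))  # prints [1, 2, 3]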
A:
For some simple data structures (e.g. a stack), you can just use the builtin list to get your job done. With more complex structures (e.g. a bloom filter), you'll have to implement them yourself using the primitives the language supports.
You should use the builtins if they serve your purpose, since they have been debugged and optimised by a horde of people over a long time. Doing it from scratch yourself will probably produce an inferior data structure.
If however, you need something that's not available as a primitive or if the primitive doesn't perform well enough, you'll have to implement your own type.
The details like pointer management etc. are just implementation talk and don't really limit the capabilities of the language itself.
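For instance, the stack case mentioned above really is just the builtin list; a tiny sketch:
stack = []
stack.append('a')    # push
stack.append('b')
top = stack[-1]      # peek -> 'b'
item = stack.pop()   # pop  -> 'b'

append() and pop() at the end of a list are amortized O(1), which is what makes this a perfectly good stack.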
A:
C/C++ data structure books are only attempting to teach you the underlying principles behind the various structures - they are generally not advising you to actually go out and re-invent the wheel by building your own library of stacks and lists.
Whether you're using Python, C++, C#, Java, whatever, you should always look to the built in data structures first. They will generally be implemented using the same system primitives you would have to use doing it yourself, but with the advantage of having been tried and tested.
Only when the provided data structures do not allow you to accomplish what you need, and there isn't an alternative and reliable library available to you, should you be looking at building something from scratch (or extending what's provided).
A:
How Python handles objects at a low level isn't too strange anyway. This document should disambiguate it a tad; it's basically all the pointer logic you're already familiar with.
A:
With Python you have access to a vast assortment of library modules written and debugged by other people. Odds are very good that somewhere out there, there is a module that does at least part of what you want, and odds are even good that it might be implemented in C for performance.
For example, if you need to do matrix math, you can use NumPy, which was written in C and Fortran.
Python is slow enough that you won't be happy if you try to write some sort of really compute-intensive code (example, a Fast Fourier Transform) in native Python. On the other hand, you can get a C-coded Fourier Transform as part of SciPy, and just use it.
I have never had a situation where I wanted to solve a problem in Python and said "darn, I just can't express the data structure I need."
If you are a pioneer, and you are doing something in Python for which there just isn't any library module out there, then you can try writing it in pure Python. If it is fast enough, you are done. If it is too slow, you can profile it, figure out where the slow parts are, and rewrite them in C using the Python C API. I have never needed to do this yet.
A:
It's not possible to implement something like a C++ vector in Python, since you don't have array primitives the way C/C++ do. However, anything more complicated can be implemented (efficiently) on top of it, including, but not limited to: linked lists, hash tables, multisets, bloom filters, etc.
|
Data Structures in Python
|
All the books I've read on data structures so far seem to use C/C++, and make heavy use of the "manual" pointer control that they offer. Since Python hides that sort of memory management and garbage collection from the user is it even possible to implement efficient data structures in this language, and is there any reason to do so instead of using the built-ins?
|
[
"Python gives you some powerful, highly optimized data structures, both as built-ins and as part of a few modules in the standard library (lists and dicts, of course, but also tuples, sets, arrays in module array, and some other containers in module collections).\nCombinations of these data structures (and maybe some of the functions from helper modules such as heapq and bisect) are generally sufficient to implement most richer structures that may be needed in real-life programming; however, that's not invariably the case.\nWhen you need something more than the rich library provides, consider the fact that an object's attributes (and items in collections) are essentially \"pointers\" to other objects (without pointer arithmetic), i.e., \"reseatable references\", in Python just like in Java. In Python, you normally use a None value in an attribute or item to represent what NULL would mean in C++ or null would mean in Java.\nSo, for example, you could implement binary trees via, e.g.:\nclass Node(object):\n\n __slots__ = 'payload', 'left', 'right'\n\n def __init__(self, payload=None, left=None, right=None):\n self.payload = payload\n self.left = left\n self.right = right\n\nplus methods or functions for traversal and similar operations (the __slots__ class attribute is optional -- mostly a memory optimization, to avoid each Node instance carrying its own __dict__, which would be substantially larger than the three needed attributes/references).\nOther examples of data structures that may best be represented by dedicated Python classes, rather than by direct composition of other existing Python structures, include tries (see e.g. here) and graphs (see e.g. here).\n",
"For some simple data structures (eg. a stack), you can just use the builtin list to get your job done. With more complex structures (eg. a bloom filter), you'll have to implement them yourself using the primitives the language supports. \nYou should use the builtins if they serve your purpose really since they're debugged and optimised by a horde of people for a long time. Doing it from scratch by yourself will probably produce an inferior data structure. \nIf however, you need something that's not available as a primitive or if the primitive doesn't perform well enough, you'll have to implement your own type.\nThe details like pointer management etc. are just implementation talk and don't really limit the capabilities of the language itself. \n",
"C/C++ data structure books are only attempting to teach you the underlying principles behind the various structures - they are generally not advising you to actually go out and re-invent the wheel by building your own library of stacks and lists.\nWhether you're using Python, C++, C#, Java, whatever, you should always look to the built in data structures first. They will generally be implemented using the same system primitives you would have to use doing it yourself, but with the advantage of having been tried and tested.\nOnly when the provided data structures do not allow you to accomplish what you need, and there isn't an alternative and reliable library available to you, should you be looking at building something from scratch (or extending what's provided).\n",
"How Python handles objects at a low level isn't too strange anyway. This document should disambiguate it a tad; it's basically all the pointer logic you're already familiar with.\n",
"With Python you have access to a vast assortment of library modules written and debugged by other people. Odds are very good that somewhere out there, there is a module that does at least part of what you want, and odds are even good that it might be implemented in C for performance.\nFor example, if you need to do matrix math, you can use NumPy, which was written in C and Fortran.\nPython is slow enough that you won't be happy if you try to write some sort of really compute-intensive code (example, a Fast Fourier Transform) in native Python. On the other hand, you can get a C-coded Fourier Transform as part of SciPy, and just use it.\nI have never had a situation where I wanted to solve a problem in Python and said \"darn, I just can't express the data structure I need.\"\nIf you are a pioneer, and you are doing something in Python for which there just isn't any library module out there, then you can try writing it in pure Python. If it is fast enough, you are done. If it is too slow, you can profile it, figure out where the slow parts are, and rewrite them in C using the Python C API. I have never needed to do this yet.\n",
"It's not possible to implement something like a C++ vector in Python, since you don't have array primitives the way C/C++ do. However, anything more complicated can be implemented (efficiently) on top of it, including, but not limited to: linked lists, hash tables, multisets, bloom filters, etc.\n"
] |
[
25,
14,
10,
3,
2,
0
] |
[] |
[] |
[
"data_structures",
"python"
] |
stackoverflow_0001986712_data_structures_python.txt
|
Q:
cherrypy: accessing the URI/route parameters inside a tool hook?
I have a tool hook setup for 'before_finalize' like so:
def before_finalize():
    d = cherrypy.response.body
    location = '%s/views' % cherrypy.request.app.config['/']['application_path']
    cherrypy.response.body = Template(file='%s/index.tmpl' % location).respond()
What I need to do is find out inside that hook what route (I'm using RoutesDispatcher) got us to that hook, or what the URI is, so I can appropriately find my template based on that. How can I find this information?
A:
cherrypy.url will get you the complete URI, but I suspect that's not quite what you're looking for...why do you need it? If you're trying to form your 'location' variable based on the URI, you probably want path_info instead of the complete URI:
location = '%s/views' % request.app.config['/']['application_path']
if request.path_info.endswith('/'):
    fname = '%s%sindex.html' % (location, request.path_info)
else:
    fname = '%s%s.html' % (location, request.path_info)
|
cherrypy: accessing the URI/route parameters inside a tool hook?
|
I have a tool hook setup for 'before_finalize' like so:
def before_finalize():
    d = cherrypy.response.body
    location = '%s/views' % cherrypy.request.app.config['/']['application_path']
    cherrypy.response.body = Template(file='%s/index.tmpl' % location).respond()
What I need to do is find out inside that hook what route (I'm using RoutesDispatcher) got us to that hook, or what the URI is, so I can appropriately find my template based on that. How can I find this information?
|
[
"cherrypy.url will get you the complete URI, but I suspect that's not quite what you're looking for...why do you need it? If you're trying to form your 'location' variable based on the URI, you probably want path_info instead of the complete URI:\nlocation = '%s/views' % request.app.config['/']['application_path']\nif request.path_info.endswith('/'):\n fname = '%s%sindex.html' % (location, request.path_info)\nelse:\n fname = '%s%s.html' % (location, request.path_info)\n\n"
] |
[
0
] |
[] |
[] |
[
"cherrypy",
"python"
] |
stackoverflow_0001987042_cherrypy_python.txt
|
Q:
How to get Permissions to inherit from User's Groups?
I'm trying to figure out Django Groups and the documentation is pretty bare on the site.
For example, you can use the decorator permission_required() to check the permissions, however, this only checks if you have assigned permissions directly. I have assigned Users to Groups which have Permissions setup. When using Django's permissions system, it ignores the Groups the User belongs to.
Is there any way to get Django to inherit permissions from the User's Groups?
A:
Django does automatically inherit permissions from groups. There may be some problem in your installation or database (such as using a custom permission without having done a syncdb), or you might be passing the wrong argument to the decorator.
If you have a model named post in an app named blog for example, the decorator would be used like this:
@permission_required('blog.delete_post')
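You can check the inheritance from a shell session; user.has_perm() goes through the auth backend, which looks at group permissions too. A rough sketch (the user and group names are made up, and note that permissions are cached on the user object, so re-fetch it after changing groups):
from django.contrib.auth.models import User, Group, Permission

user = User.objects.get(username='someuser')
group = Group.objects.get(name='editors')
group.permissions.add(Permission.objects.get(codename='delete_post'))
user.groups.add(group)

user = User.objects.get(pk=user.pk)  # re-fetch to clear the permission cache
print user.has_perm('blog.delete_post')  # True, inherited via the group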
|
How to get Permissions to inherit from User's Groups?
|
I'm trying to figure out Django Groups and the documentation is pretty bare on the site.
For example, you can use the decorator permission_required() to check the permissions, however, this only checks if you have assigned permissions directly. I have assigned Users to Groups which have Permissions setup. When using Django's permissions system, it ignores the Groups the User belongs to.
Is there any way to get Django to inherit permissions from the User's Groups?
|
[
"Django does automatically inherit permissions from groups. There may be some problem in your installation or database (such as using a custom permission without having done a syncdb), or you might be passing the wrong argument to the decorator.\nIf you have a model named post in an app named blog for example, the decorator would be used like this:\n@permission_required('blog.delete_post')\n\n"
] |
[
5
] |
[] |
[] |
[
"django",
"django_authentication",
"django_permissions",
"python"
] |
stackoverflow_0001987496_django_django_authentication_django_permissions_python.txt
|
Q:
What does this code mean: "print >> sys.stderr"
print >> sys.stderr, "Error in atexit._run_exitfuncs:"
Why print '>>' in front of sys.stderr?
Thanks.
A:
This syntax makes print write to a file object (sys.stderr in this case) instead of standard output. [Link]
In Python 3.0, print becomes a function instead of a statement: [Link]
print("Error in atexit._run_exitfuncs:", file=sys.stderr)
A:
From the Python documentation:
print also has an extended form,
defined by the second portion of the
syntax described above. This form is
sometimes referred to as “print
chevron.” In this form, the first
expression after the >> must evaluate
to a “file-like” object, specifically
an object that has a write() method as
described above. With this extended
form, the subsequent expressions are
printed to this file object. If the
first expression evaluates to None,
then sys.stdout is used as the file
for output.
|
What does this code mean: "print >> sys.stderr"
|
print >> sys.stderr, "Error in atexit._run_exitfuncs:"
Why print '>>' in front of sys.stderr?
Thanks.
|
[
"This syntax means writes to a file object (sys.stderr in this case) instead of standard output. [Link]\nIn Python 3.0, print becomes a function instead of a statement: [Link]\nprint(\"Error in atexit._run_exitfuncs:\", file=sys.stderr)\n\n",
"From the Python documentation:\n\nprint also has an extended form,\n defined by the second portion of the\n syntax described above. This form is\n sometimes referred to as “print\n chevron.” In this form, the first\n expression after the >> must evaluate\n to a “file-like” object, specifically\n an object that has a write() method as\n described above. With this extended\n form, the subsequent expressions are\n printed to this file object. If the\n first expression evaluates to None,\n then sys.stdout is used as the file\n for output.\n\n"
] |
[
39,
6
] |
[] |
[] |
[
"python",
"syntax"
] |
stackoverflow_0001987626_python_syntax.txt
|
Q:
Any difference between 'b' and 'c'?
class a(object):
    b = 'bbbb'
    def __init__(self):
        self.c = 'cccc'
I think they are the same; is there any difference?
A:
Yes, there is a difference.
b is a class variable... one that is shared by all instances of a, while c is an instance variable which will exist independently for each instance.
A:
b is a class variable, while c is an instance variable.
>>> a.b
'bbbb'
>>> a.c
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'a' has no attribute 'c'
>>> a().b
'bbbb'
>>> a().c
'cccc'
Instances of the class may have different value for their instance variables, but they share the same class variables.
A:
'b' is a class attribute, set directly on the class object 'a'. 'c' is an instance attribute, set directly on self. While self.b will find a.b due to how lookup works, you cannot use a.c (as it doesn't exist).
A:
>>> class a(object):
...     b = 'bbbb'
...     def __init__(self):
...         self.c = 'cccc'
...
>>> a1=a()
>>> a2=a()
>>> a1.b
'bbbb'
>>> a2.b
'bbbb'
>>> a1.c='dddd'
>>> a1.c
'dddd'
>>> a2.c
'cccc'
>>> a.b= 'common'
>>> a1.b
'common'
>>> a2.b
'common'
|
Any difference between 'b' and 'c'?
|
class a(object):
    b = 'bbbb'
    def __init__(self):
        self.c = 'cccc'
I think they are the same; is there any difference?
|
[
"Yes, there is a difference.\nb is a class variable... one that is shared by all instances of a, while c is an instance variable which will exist independantly for each instance.\n",
"b is a class variable, while c is a instance variable.\n>>> a.b\n'bbbb'\n>>> a.c\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: type object 'a' has no attribute 'c'\n>>> a().b\n'bbbb'\n>>> a().c\n'cccc'\n\nInstances of the class may have different value for their instance variables, but they share the same class variables.\n",
"'b' is a class attribute, set directly on the class object 'a'. 'c' is an instance attribute, set directly on self. While self.b will find a.b due to how lookup works, you cannot use a.c (as it doesn't exist).\n",
">>> class a(object):\n... b = 'bbbb'\n... def __init__(self):\n... self.c = 'cccc'\n...\n>>> a1=a()\n>>> a2=a()\n>>> a1.b\n'bbbb'\n>>> a2.b\n'bbbb'\n>>> a1.c='dddd'\n>>> a1.c\n'dddd'\n>>> a2.c\n'cccc'\n>>> a.b= 'common'\n>>> a1.b\n'common'\n>>> a2.b\n'common'\n\n"
] |
[
9,
5,
2,
2
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0001987703_class_python.txt
|
Q:
Is there any module that allows Django/Python to work with gnupg?
I was wondering if there's any Django module, or failing that any Python module, that will allow me to create my own application to manage the creation, administration, etc. of GnuPG keys, as well as the ability to sign and encrypt documents through this application?
If there's no such module, how can I do that?
Thank you.
A:
I wrote a Django app django-email-extras that does exactly what you're looking for. It lets users manage GPG keys via the Django admin and encrypts all mail to recipients with valid keys and includes support for multi-part templated emails with attachments.
A:
GnuPGInterface can do all of that -- it's essentially a Python wrapper around the GnuPG program.
PyMe might be easier to use as it is designed to wrap around GPGME (ME = Made Easy).
From the PyME features page:
Ability to sign, encrypt, decrypt, and verify data.
Ability to list keys, export and import keys, and manage the keyring.
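If you only need the basics and don't mind shelling out, a minimal hedged alternative is to drive the gpg binary directly with subprocess (the file name and recipient here are made up):
import subprocess

# encrypt document.txt for a recipient; writes document.txt.gpg
subprocess.check_call(['gpg', '--batch', '--yes', '--encrypt',
                       '--recipient', '[email protected]', 'document.txt'])

# create a detached signature
subprocess.check_call(['gpg', '--batch', '--yes', '--detach-sign', 'document.txt'])

The libraries above wrap the same binary (or GPGME) with a friendlier interface.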
|
Is there any module that allows Django/Python to work with gnupg?
|
I was wondering if there's any Django module, or failing that any Python module, that will allow me to create my own application to manage the creation, administration, etc. of GnuPG keys, as well as the ability to sign and encrypt documents through this application?
If there's no such module, how can I do that?
Thank you.
|
[
"I wrote a Django app django-email-extras that does exactly what you're looking for. It lets users manage GPG keys via the Django admin and encrypts all mail to recipients with valid keys and includes support for multi-part templated emails with attachments.\n",
"GnuPGInterface can do all of that -- it's essentially a Python wrapper around the GnuPG program.\nPyMe might be easier to use as it is designed to wrap around GPGME (ME = Made Easy).\nFrom the PyME features page:\n\n\nAbility to sign, encrypt, decrypt, and verify data.\nAbility to list keys, export and import keys, and manage the keyring.\n\n\n"
] |
[
4,
2
] |
[] |
[] |
[
"django",
"gnupg",
"python"
] |
stackoverflow_0001450036_django_gnupg_python.txt
|
Q:
Gracefully handling "MySQL has gone away"
I'm writing a small database adapter in Python, mostly for fun. I'm trying to get the code to gracefully recover from a situation where the MySQL connection "goes away," aka wait_timeout is exceeded. I've set wait_timeout at 10 so I can try this.
Here's my code:
def select(self, query, params=[]):
    try:
        self.cursor = self.cxn.cursor()
        self.cursor.execute(query, params)
    except MySQLdb.OperationalError, e:
        if e[0] == 2006:
            print "We caught the exception properly!"
            print self.cxn
            self.cxn.close()
            self.cxn = self.db._get_cxn()
            self.cursor = self.cxn.cursor()
            self.cursor.execute(query, params)
            print self.cxn
    return self.cursor.fetchall()
Next I wait ten seconds and try to make a request. Here's what CherryPy looks like:
[31/Dec/2009:20:47:29] ENGINE Bus STARTING
[31/Dec/2009:20:47:29] ENGINE Starting database pool...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE Started monitor thread '_TimeoutMonitor'.
[31/Dec/2009:20:47:29] ENGINE Started monitor thread 'Autoreloader'.
[31/Dec/2009:20:47:30] ENGINE Serving on 0.0.0.0:8888
[31/Dec/2009:20:47:30] ENGINE Bus STARTED
We caught the exception properly! <====================================== Aaarg!
<_mysql.connection open to 'localhost' at 1ee22b0>
[31/Dec/2009:20:48:25] HTTP Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/CherryPy-3.1.2-py2.6.egg/cherrypy/_cprequest.py", line 606, in respond
cherrypy.response.body = self.handler()
File "/usr/local/lib/python2.6/dist-packages/CherryPy-3.1.2-py2.6.egg/cherrypy/_cpdispatch.py", line 25, in __call__
return self.callable(*self.args, **self.kwargs)
File "adp.py", line 69, in reports
page.sources = sql.GetSources()
File "/home/swoods/dev/adp/sql.py", line 45, in __call__
return getattr(self.formatter.cxn, parsefn)(sql, sql_vars)
File "/home/swoods/dev/adp/database.py", line 96, in select
self.cursor.execute(query, params)
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
[31/Dec/2009:20:48:25] HTTP
Request Headers:
COOKIE: session_id=e14f63acc306b26f14d966e606612642af2dd423
HOST: localhost:8888
CACHE-CONTROL: max-age=0
ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
ACCEPT-CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
USER-AGENT: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5
CONNECTION: keep-alive
Remote-Addr: 127.0.0.1
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip,deflate
127.0.0.1 - - [31/Dec/2009:20:48:25] "GET /reports/1 HTTP/1.1" 500 1770 "" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5"
Why doesn't this work?? I clearly catch the exception, regenerate both the connection and the cursor, but it still doesn't work. Is it related to how MySQLdb gets the connections?
A:
Can't see from the code, but my guess would be that the db._get_cxn() method is doing some kind of connection pooling and returning the existing connection object instead of making a new one. Is there not a call you can make on db to flush the existing useless connection? (And should you really be calling an internal _-prefixed method?)
For preventing MySQL has gone away I generally prefer to keep a timestamp with the connection of the last time I used it. Then before trying to use it again I look at the timestamp and close/discard the connection if it was last used more than a few hours ago. This saves wrapping every possible query with a try...except OperationalError...try again.
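A rough sketch of that timestamp approach (the class and attribute names are mine, not part of MySQLdb):
import time
import MySQLdb

MAX_IDLE = 4 * 3600  # discard connections idle for more than 4 hours

class RefreshingConnection(object):
    def __init__(self, **connect_kwargs):
        self._kwargs = connect_kwargs
        self._cxn = None
        self._last_used = 0

    def cursor(self):
        # reconnect if we have no connection or it has sat idle too long
        if self._cxn is None or time.time() - self._last_used > MAX_IDLE:
            if self._cxn is not None:
                try:
                    self._cxn.close()
                except MySQLdb.Error:
                    pass
            self._cxn = MySQLdb.connect(**self._kwargs)
        self._last_used = time.time()
        return self._cxn.cursor()

db = RefreshingConnection(host='localhost', user='u', passwd='p', db='test')
cur = db.cursor()  # always hands back a cursor on a fresh-enough connection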
|
Gracefully handling "MySQL has gone away"
|
I'm writing a small database adapter in Python, mostly for fun. I'm trying to get the code to gracefully recover from a situation where the MySQL connection "goes away," aka wait_timeout is exceeded. I've set wait_timeout at 10 so I can try this.
Here's my code:
def select(self, query, params=[]):
    try:
        self.cursor = self.cxn.cursor()
        self.cursor.execute(query, params)
    except MySQLdb.OperationalError, e:
        if e[0] == 2006:
            print "We caught the exception properly!"
            print self.cxn
            self.cxn.close()
            self.cxn = self.db._get_cxn()
            self.cursor = self.cxn.cursor()
            self.cursor.execute(query, params)
            print self.cxn
    return self.cursor.fetchall()
Next I wait ten seconds and try to make a request. Here's what CherryPy looks like:
[31/Dec/2009:20:47:29] ENGINE Bus STARTING
[31/Dec/2009:20:47:29] ENGINE Starting database pool...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE POOL Connecting to MySQL...
[31/Dec/2009:20:47:29] ENGINE Started monitor thread '_TimeoutMonitor'.
[31/Dec/2009:20:47:29] ENGINE Started monitor thread 'Autoreloader'.
[31/Dec/2009:20:47:30] ENGINE Serving on 0.0.0.0:8888
[31/Dec/2009:20:47:30] ENGINE Bus STARTED
We caught the exception properly! <====================================== Aaarg!
<_mysql.connection open to 'localhost' at 1ee22b0>
[31/Dec/2009:20:48:25] HTTP Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/CherryPy-3.1.2-py2.6.egg/cherrypy/_cprequest.py", line 606, in respond
cherrypy.response.body = self.handler()
File "/usr/local/lib/python2.6/dist-packages/CherryPy-3.1.2-py2.6.egg/cherrypy/_cpdispatch.py", line 25, in __call__
return self.callable(*self.args, **self.kwargs)
File "adp.py", line 69, in reports
page.sources = sql.GetSources()
File "/home/swoods/dev/adp/sql.py", line 45, in __call__
return getattr(self.formatter.cxn, parsefn)(sql, sql_vars)
File "/home/swoods/dev/adp/database.py", line 96, in select
self.cursor.execute(query, params)
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
[31/Dec/2009:20:48:25] HTTP
Request Headers:
COOKIE: session_id=e14f63acc306b26f14d966e606612642af2dd423
HOST: localhost:8888
CACHE-CONTROL: max-age=0
ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
ACCEPT-CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
USER-AGENT: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5
CONNECTION: keep-alive
Remote-Addr: 127.0.0.1
ACCEPT-LANGUAGE: en-US,en;q=0.8
ACCEPT-ENCODING: gzip,deflate
127.0.0.1 - - [31/Dec/2009:20:48:25] "GET /reports/1 HTTP/1.1" 500 1770 "" "Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.43 Safari/532.5"
Why doesn't this work?? I clearly catch the exception, regenerate both the connection and the cursor, but it still doesn't work. Is it related to how MySQLdb gets the connections?
|
[
"Can't see from the code, but my guess would be that the db._get_cxn() method is doing some kind of connection pooling and returning the existing connection object instead of making a new one. Is there not a call you can make on db to flush the existing useless connection? (And should you really be calling an internal _-prefixed method?)\nFor preventing MySQL has gone away I generally prefer to keep a timestamp with the connection of the last time I used it. Then before trying to use it again I look at the timestamp and close/discard the connection if it was last used more than a few hours ago. This saves wrapping every possible query with a try...except OperationalError...try again.\n"
] |
[
12
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001987701_mysql_python.txt
|
Q:
Convert an image to RGBA mode with python
I tried the code listed at Using PIL to make all white pixels transparent to convert some .ico files to .png images with transparent background.
It doesn't work on all .ico images; sometimes it just outputs an image with a black background.
Sample .ico files can be located here
A:
I tried that script on all the icons you put there, and it worked fine. Sometimes a viewer represents transparency with a black background; to make sure the background is fully transparent, open it in Photoshop.
Edit: make sure that your icon has an alpha channel.
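For reference, the conversion itself is a one-liner in PIL; note that if the source .ico has no alpha channel, convert() just fills the alpha with 255 and you keep the opaque background (file names here are made up):
import Image  # PIL

img = Image.open('someicon.ico').convert('RGBA')
img.save('someicon.png')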
|
Convert an image to RGBA mode with python
|
I tried the code listed at Using PIL to make all white pixels transparent to convert some .ico files to .png images with transparent background.
It doesn't work on all .ico images; sometimes it just outputs an image with a black background.
Sample .ico files can be located here
|
[
"i tried that script on all icons you put there . and it worked fine. some times it represents transparency with a black background . to make sure that the background is fully transparent . open it in photoshop. \nEdit : make sure that your icon has an alpha channel.\n"
] |
[
0
] |
[] |
[] |
[
"image",
"python",
"python_imaging_library",
"transparent"
] |
stackoverflow_0001987848_image_python_python_imaging_library_transparent.txt
|
Q:
Why is my Python version slower than my Perl version?
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
i = 0
j = 0
while (i < 100000000):
    i = i + 1
    j = j + 1
print j
$ time python python.py
100000000
real 0m48.100s
user 0m45.633s
sys 0m0.043s
my $i = 0;
my $j = 0;
while ($i < 100000000) {
    ++$i; # also tested $i = $i + 1 to be fair, same result
    ++$j;
}
print $j;
$ time perl perl.pl
100000000
real 0m24.757s
user 0m22.341s
sys 0m0.029s
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or is Python really that much slower than Perl?
A:
The nit-picking answer is that you should compare it to idiomatic Python:
The original code takes 34 seconds on my machine.
A for loop (FlorianH's answer) with += and xrange() takes 21.
Putting the whole thing in a function reduces it to 9 seconds!
That's much faster than Perl (15 seconds on my machine)!
Explanation: Python local vars are much faster than globals.
(For fairness, I also tried a function in Perl - no change)
Getting rid of the j variable reduced it to 8 seconds:
print sum(1 for i in xrange(100000000))
Python has the strange property that higher-level shorter code tends to be fastest :-)
But the real answer is that your "micro-benchmark" is meaningless.
The real question of language speed is: what's the performance of an average real application? To know that, you should take into account:
Typical mix of operations in complex code.
Your code doesn't contain any data structures, function calls, or OOP operations.
A large enough codebase to feel cache effects — many interpreter optimizations trade memory for speed, which is not measured fairly by any tiny benchmark.
Optimization opportunities: after you write your code, IF it's not fast enough,
how much faster can you easily make it?
E.g. how hard is it to offload the heavy lifting to efficient C libraries?
PyPy's benchmarks and Octane are good examples of what realistic language speed benchmarks look like.
If you want to talk number crunching, Python IS surprisingly popular with scientists.
They love it for the simple pseudo-math syntax and short learning curve, but also for the excellent numpy library for array crunching and the ease of wrapping other existing C code.
And then there is the Psyco JIT which would probably run your toy example well under 1 second, but I can't check it now because it only works on 32-bit x86.
EDIT: Nowadays, skip Psyco and use PyPy, a cross-platform, actively improving JIT.
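For reference, the function version referred to in point 3 is just the original loop moved inside a def, so i and j become fast local variables:
def loop():
    i = 0
    j = 0
    while i < 100000000:
        i += 1
        j += 1
    return j

print loop()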
A:
All this micro benchmarking can get a bit silly!
For example, just switching to for in both Python & Perl provides a hefty speed bump. The original Perl example would be twice as quick if for were used:
my $j = 0;
for my $i (1..100000000) {
    ++$j;
}
print $j;
And I can shave off a bit more with this:
++$j for 1..100000000;
print $j;
And getting even sillier we can get it down to 1 second here ;-)
print {STDOUT} (1..10000000)[-1];
/I3az/
ref: Perl 5.10.1 used.
A:
Python is not particularly fast at numeric computations and I'm sure it's slower than perl when it comes to text processing.
Since you're an experienced Perl hand, I don't know if this applies to you but Python programs in the long run tend to be more maintainable and are quicker to develop. The speed is 'enough' for most situations and you have the flexibility to drop down into C when you really need a performance boost.
Update
Okay. I just created a large file (1GB) with random data in it (mostly ascii) and broke it into lines of equal lengths. This was supposed to simulate a log file.
I then ran simple perl and python programs that search the file line by line for an existing pattern.
With Python 2.6.2, the results were
real 0m18.364s
user 0m9.209s
sys 0m0.956s
and with Perl 5.10.0
real 0m17.639s
user 0m5.692s
sys 0m0.844s
The programs are as follows (please let me know if I'm doing something stupid)
import re
regexp = re.compile("p06c")
def search():
    with open("/home/arif/f") as f:
        for i in f:
            if regexp.search(i):
                print "Found : %s"%i

search()
and
sub search() {
    open FOO,"/home/arif/f" or die $!;
    while (<FOO>) {
        print "Found : $_\n" if /p06c/o;
    }
}

search();
The results are pretty close and tweaking it this way or other don't seem to alter the results much. I don't know if this is a true benchmark but I think it'd be the way I'd search log files in the two languages so I stand corrected about the relative performances.
Thanks Chris.
A:
Python runs very fast if you use the correct syntax of the Python language. It is roughly described as "pythonic".
If you restructure your code like this, it will run at least twice as fast (well, it does on my machine):
j = 0
for i in range(10000000):
    j = j + 1
print j
Whenever you use a while in python, you should check if you could also use a "for X in range()".
A:
To OP, in Python this piece of code:
j = 0
for i in range(10000000):
    j = j + 1
print j
is the same as
print range(10000001)[-1]
which, on my machine,
$ time python test.py
10000000
real 0m1.138s
user 0m0.761s
sys 0m0.357s
runs for approximately 1s. range() (or xrange) is internal to Python and, internally, it can already generate a sequence of numbers for you. Therefore, you don't have to create your own iterations using your own loop. Now, go and find a Perl equivalent that can run in 1s to produce the same result.
A:
Python maintains global variables in a dictionary. Therefore, each time there is an assignment, the interpreter performs a lookup in the module dictionary, which is somewhat expensive, and this is the reason why you found your example so slow.
In order to improve performance, you should use local allocation, like creating a function. Python interpreter stores local variables in an array, with a much faster access.
However, it should be noted that this is an implementation detail of CPython; I suspect IronPython, for instance, would lead to a completely different result.
Finally, for more information on this topic, I suggest you an interesting essay from GvR, about optimization in Python: Python Patterns - An Optimization Anecdote.
A:
Python is slower than Perl. It may be faster to develop with, but it doesn't execute faster. Here is one benchmark: http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html -edit- a terrible benchmark, but it is at least a real benchmark with numbers and not some guess. Too bad there's no source or test other than a loop.
A:
I'm not up-to-date on everything with Python, but my first idea about this benchmark was the difference between Perl and Python numbers. In Perl, we have numbers. They aren't objects, and their precision is limited to the sizes imposed by the architecture. In Python, we have objects with arbitrary precision. For small numbers (those that fit in 32-bit), I'd expect Perl to be faster. If we go over the integer size of the architecture, the Perl script won't even work without some modification.
I see similar results for the original benchmark on my MacBook Air (32-bit) using a Perl 5.10.1 that I compiled myself and the Python 2.5.1 that came with Leopard.
However, I added arbitrary precision to the Perl program with the bignum pragma:
use bignum;
Now I wonder if the Perl version is ever going to finish. :) I'll post some results when it finishes, but it looks like it's going to be an order of magnitude difference.
Some of you may have seen my question about What are five things you hate about your favorite language?. Perl's default numbers is one of the things that I hate. I should never have to think about it and it shouldn't be slow. In Perl, I lose on both. Note, however, that if I needed numeric processing in Perl, I could use PDL.
A:
is Python really that much slower than
Perl?
Look at the Computer Language Benchmarks Game - "Compare the performance of ≈30 programming languages using ≈12 flawed benchmarks and ≈1100 programs".
They are only tiny benchmark programs but they still do a lot more than the code snippet you have timed -
http://shootout.alioth.debian.org/u32/python.php
|
Why is my Python version slower than my Perl version?
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
i = 0
j = 0
while (i < 100000000):
    i = i + 1
    j = j + 1
print j
$ time python python.py
100000000
real 0m48.100s
user 0m45.633s
sys 0m0.043s
my $i = 0;
my $j = 0;
while ($i < 100000000) {
    ++$i; # also tested $i = $i + 1 to be fair, same result
    ++$j;
}
print $j;
$ time perl perl.pl
100000000
real 0m24.757s
user 0m22.341s
sys 0m0.029s
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or is Python really that much slower than Perl?
|
[
"The nit-picking answer is that you should compare it to idiomatic Python:\n\nThe original code takes 34 seconds on my machine.\nA for loop (FlorianH's answer) with += and xrange() takes 21. \nPutting the whole thing in a function reduces it to 9 seconds!\nThat's much faster than Perl (15 seconds on my machine)!\nExplanation: Python local vars are much faster than globals.\n(For fairness, I also tried a function in Perl - no change)\nGetting rid of the j variable reduced it to 8 seconds: \nprint sum(1 for i in xrange(100000000))\n\nPython has the strange property that higher-level shorter code tends to be fastest :-)\nBut the real answer is that your \"micro-benchmark\" is meaningless.\nThe real question of language speed is: what's the performance of an average real application? To know that, you should take into account:\n\nTypical mix of operations in complex code.\nYour code doesn't contain any data structures, function calls, or OOP operations.\nA large enough codebase to feel cache effects — many interpreter optimizations trade memory for speed, which is not measured fairly by any tiny benchmark.\nOptimization opportunities: after you write your code, IF it's not fast enough, \nhow much faster can you easily make it?\nE.g. how hard is it to offload the heavy lifting to effecient C libriries?\n\nPyPy's benchmarks and Octane are good examples of what realistic language speed benchmarks look like.\nIf you want to talk number crunching, Python IS surprisingly popular with scientists.\nThey love it for the simple pseudo-math syntax and short learning curve, but also for the excellent numpy library for array crunching and the ease of wrapping other existing C code.\nAnd then there is the Psyco JIT which would probably run your toy example well under 1 second, but I can't check it now because it only works on 32-bit x86.\nEDIT: Nowdays, skip Psyco and use PyPy which a cross-platform actively improving JIT. \n",
"All this micro benchmarking can get a bit silly!\nFor eg. just switching to for in both Python & Perl provides an hefty speed bump. The original Perl example would be twice as quick if for was used:\nmy $j = 0;\n\nfor my $i (1..100000000) {\n ++$j;\n}\n\nprint $j;\n\n\nAnd I can shave off a bit more with this:\n++$j for 1..100000000;\nprint $j;\n\n\nAnd getting even sillier we can get it down to 1 second here ;-)\nprint {STDOUT} (1..10000000)[-1];\n\n/I3az/\nref: Perl 5.10.1 used.\n",
"Python is not particularly fast at numeric computations and I'm sure it's slower than perl when it comes to text processing. \nSince you're an experienced Perl hand, I don't know if this applies to you but Python programs in the long run tend to be more maintainable and are quicker to develop. The speed is 'enough' for most situations and you have the flexibility to drop down into C when you really need a performance boost.\nUpdate\nOkay. I just created a large file (1GB) with random data in it (mostly ascii) and broke it into lines of equal lengths. This was supposed to simulate a log file. \nI then ran simple perl and python programs that search the file line by line for an existing pattern.\nWith Python 2.6.2, the results were\nreal 0m18.364s\nuser 0m9.209s\nsys 0m0.956s\n\nand with Perl 5.10.0\nreal 0m17.639s\nuser 0m5.692s\nsys 0m0.844s\n\nThe programs are as follows (please let me know if I'm doing something stupid)\nimport re\nregexp = re.compile(\"p06c\")\n\ndef search():\n with open(\"/home/arif/f\") as f:\n for i in f:\n if regexp.search(i):\n print \"Found : %s\"%i\n\nsearch()\n\nand\nsub search() {\n open FOO,\"/home/arif/f\" or die $!;\n while (<FOO>) {\n print \"Found : $_\\n\" if /p06c/o;\n }\n}\n\nsearch();\n\nThe results are pretty close and tweaking it this way or other don't seem to alter the results much. I don't know if this is a true benchmark but I think it'd be the way I'd search log files in the two languages so I stand corrected about the relative performances. \nThanks Chris.\n",
"Python runs very fast, if you use the correct syntax of the python language. It is roughly described as \"pythonic\".\nIf you restructure your code like this, it will run at least twice as fast (well, it does on my machine):\nj = 0\nfor i in range(10000000):\n j = j + 1\nprint j\n\nWhenever you use a while in python, you should check if you could also use a \"for X in range()\".\n",
"To OP, in Python this piece of code:\nj = 0\nfor i in range(10000000):\n j = j + 1\nprint j\n\nis the same as \nprint range(10000001)[-1]\n\nwhich, on my machine, \n$ time python test.py\n10000000\n\nreal 0m1.138s\nuser 0m0.761s\nsys 0m0.357s\n\nruns for approximately 1s. range() (or xrange) is internal to Python and \"internally\" , it already can generates a sequence of numbers for you. Therefore, you don't have to create your own iterations using your own loop. Now, you go and find a Perl equivalent that can run for 1s to produce the same result\n",
"Python maintains global variables in a dictionary. Therefore, each time there is an assignment, the interpreter performs a lookup on the module dictionary, that is somewhat expensive, and this is the reason why you found your example so slower.\nIn order to improve performance, you should use local allocation, like creating a function. Python interpreter stores local variables in an array, with a much faster access.\nHowever, it should be noted that this is an implementation detail of CPython; I suspect IronPython, for instance, would lead to a completely different result.\nFinally, for more information on this topic, I suggest you an interesting essay from GvR, about optimization in Python: Python Patterns - An Optimization Anecdote.\n",
"python is slower then perl. It may be faster to develop but it doesnt execute faster here is one benchmark http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html -edit- a terrible benchmark but it is at least a real benchmark with numbers and not some guess. To bad theres no source or test other then a loop.\n",
"I'm not up-to-date on everything with Python, but my first idea about this benchmark was the difference between Perl and Python numbers. In Perl, we have numbers. They aren't objects, and their precision is limited to the sizes imposed by the architecture. In Python, we have objects with arbitrary precision. For small numbers (those that fit in 32-bit), I'd expect Perl to be faster. If we go over the integer size of the architecture, the Perl script won't even work without some modification.\nI see similar results for the original benchmark on my MacBook Air (32-bit) using a Perl 5.10.1 that I compiled myself and the Python 2.5.1 that came with Leopard:\nHowever, I added arbitrary precision to the Perl program with the bignum\n use bignum;\n\nNow I wonder if the Perl version is ever going to finish. :) I'll post some results when it finishes, but it looks like it's going to be an order of magnitude difference.\nSome of you may have seen my question about What are five things you hate about your favorite language?. Perl's default numbers is one of the things that I hate. I should never have to think about it and it shouldn't be slow. In Perl, I lose on both. Note, however, that if I needed numeric processing in Perl, I could use PDL.\n",
"\nis Python really that much slower than\n Perl?\n\nLook at the Computer Language Benchmarks Game - \"Compare the performance of ≈30 programming languages using ≈12 flawed benchmarks and ≈1100 programs\".\nThey are only tiny benchmark programs but they still do a lot more than the code snippet you have timed -\nhttp://shootout.alioth.debian.org/u32/python.php\n"
] |
[
52,
9,
8,
8,
8,
5,
2,
1,
0
] |
[] |
[] |
[
"performance",
"perl",
"python"
] |
stackoverflow_0001984871_performance_perl_python.txt
|
Q:
Variables and function
Hey all, is it possible to call a function's value instead of calling the whole function? If I call the whole function, it will run unnecessarily, which I do not want.
For example:
def main():
    # Inputting the x-value for the first start point of the line
    start_point_x_1()
    # Inputting the x-value for the 2nd end point of the line
    end_point_x_2()
    # The first output point calculated and printed
    first_calculated_point()

def start_point_x_1():
    return raw_input("Enter the x- value for the 1st " +
                     "start point for the line.\n")

def end_point_x_2():
    return raw_input("Enter the x- value for the 2nd " +
                     "end point for the line.\n")

def first_calculated_point():
    x0 = int(start_point_x_1())
    a = int(end_point_x_2()) - int(start_point_x_1())
    lamda_0 = 0
    x = x0 + (lamda_0)*a

main()
The code above works, but when I reach the function first_calculated_point and calculate x0, the function start_point_x_1() runs again. I tried storing the result, for example x1 = raw_input("Enter the x- value for the 1st " + "start point for the line.\n"), inside the function start_point_x_1(), but when I use the variable x1 at x0 = x1, I am told x1 is not defined. Is there any way to store the value of the function and call it instead of calling the whole function?
A:
Change
start_point_x_1()
to
x0 = start_point_x_1()
Similarly, do
x2 = end_point_x_2()
and finally:
first_calculated_point()
becomes
first_calculated_point(x0, x2)
The definition of the function changes to:
def first_calculated_point(x0, x2):
a = int(x2) - int(x0)
lamda_0 = 0
x = x0 + (lamda_0)*a
main()
Is this what you want? The idea is that you need to save the values taken from the user and then pass them to the function doing the calculation.
If this is not what you want, you need to explain yourself more, (and good indentation will help, particularly because indentation is significant in Python!).
A:
Why are you calling start_point_x_1 and end_point_x_2 from both main and first_calculated_point?
You could change the definitions of main
def main():
first_calculated_point()
and first_calculated_point:
def first_calculated_point():
x0 = int(start_point_x_1())
a = int(end_point_x_2()) - x0
lamda_0 = 0
x = x0 + (lamda_0)*a
# did you mean to return x?
Note that in the assignment to a, I replaced int(start_point_x_1()) with the variable to which the same expression was assigned on the previous line, but you can do this safely only when the expression doesn't have side effects, such as printing to the screen or reading input from the user.
A:
You can use 'memoization' that is cache the result of functions based on function arguments, for that you can write a decorator so that you can decorate whatever functions you think need that behaviour but if you problem is as simple as your code is why not assign it a variable an d used that assigned values?
e.g
x0 = int(start_point_x_1())
a = int(end_point_x_2()) - x0
|
Variables and function
|
Hey all, is it possible to call a function value instead of calling the whole function?As, if i call the whole function, it will run unnecessarily which i do not want.
For example:
def main():
# Inputing the x-value for the first start point of the line
start_point_x_1()
# Inputing the x-value for the 2nd end point of the line
end_point_x_2()
# The first output point calculated and printed
first_calculated_point()
def start_point_x_1():
return raw_input("Enter the x- value for the 1st " +
"start point for the line.\n")
def end_point_x_2():
return raw_input("Enter the x- value for the 2nd " +
"end point for the line.\n")
def first_calculated_point():
x0 = int(start_point_x_1())
a = int(end_point_x_2()) - int(start_point_x_1())
lamda_0 = 0
x = x0 + (lamda_0)*a
main()
The code above works but when i reach the function first_calculated_point and when i calculate x0, the function start_point_x_1() runs again.I tried storing the function like ' for example x1 = raw_input("Enter the x- value for the 1st " + "start point for the line.\n") under the function start_point_x_1() but when i call the variable x1 at x0 = x1, they said x1 is not defined. Is there any way to store the value of the function and call it instead of calling the whole function?
|
[
"Change\nstart_point_x_1()\n\nto\nx0 = start_point_x_1()\n\nSimilarly, do\nx2 = end_point_x_2()\n\nand finally:\nfirst_calculated_point()\n\nbecomes\nfirst_calculated_point(x0, x2)\n\nThe definition of the function changes to:\ndef first_calculated_point(x0, x2):\n a = int(x2) - int(x0)\n lamda_0 = 0\n x = x0 + (lamda_0)*a\n\nmain()\n\nIs this what you want? The idea is that you need to save the values taken from the user and then pass them to the function doing the calculation.\nIf this is not what you want, you need to explain yourself more, (and good indentation will help, particularly because indentation is significant in Python!).\n",
"Why are you calling start_point_x_1 and end_point_x_2 from both main and first_calculated_point?\nYou could change the definitions of main\ndef main():\n first_calculated_point()\n\nand first_calculated_point:\ndef first_calculated_point():\n x0 = int(start_point_x_1())\n a = int(end_point_x_2()) - x0\n lamda_0 = 0\n x = x0 + (lamda_0)*a\n\n # did you mean to return x?\n\nNote that in the assignment to a, I replaced int(start_point_x_1()) with the variable to which the same expression was assigned on the previous line, but you can do this safely only when the expression doesn't have side effects, such as printing to the screen or reading input from the user.\n",
"You can use 'memoization' that is cache the result of functions based on function arguments, for that you can write a decorator so that you can decorate whatever functions you think need that behaviour but if you problem is as simple as your code is why not assign it a variable an d used that assigned values?\ne.g\nx0 = int(start_point_x_1())\na = int(end_point_x_2()) - x0 \n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001988533_python.txt
|
Q:
How do I get datetime from date object python?
How do I get datetime from date object python?
I think of
import datetime as dt
today = dt.date.today()
date_time = dt.datetime(today.year, today.month, today.day)
Any easier solution?
A:
There are a few ways to do this:
mydatetime = datetime.datetime(d.year, d.month, d.day)
or
mydatetime = datetime.combine(d, datetime.time())
or
mydatetime = datetime.datetime.fromordinal(d.toordinal())
I think the first is the most commonly used.
A:
Try this:
import datetime
print 'Now :', datetime.datetime.now()
print 'Today :', datetime.datetime.today()
print 'UTC Now:', datetime.datetime.utcnow()
d = datetime.datetime.now()
for attr in [ 'year', 'month', 'day', 'hour', 'minute', 'second', 'microsecond']:
print attr, ':', getattr(d, attr)
or
mdt = datetime.datetime(d.year, d.month, d.day) #generalized
|
How do I get datetime from date object python?
|
How do I get datetime from date object python?
I think of
import datetime as dt
today = dt.date.today()
date_time = dt.datetime(today.year, today.month, today.day)
Any easier solution?
|
[
"There are a few ways to do this:\nmydatetime = datetime.datetime(d.year, d.month, d.day)\n\nor\nmydatetime = datetime.combine(d, datetime.time())\n\nor \nmydatetime = datetime.datetime.fromordinal(d.toordinal())\n\nI think the first is the most commonly used.\n",
"Try this:\nimport datetime\n\nprint 'Now :', datetime.datetime.now()\nprint 'Today :', datetime.datetime.today()\nprint 'UTC Now:', datetime.datetime.utcnow()\n\nd = datetime.datetime.now()\nfor attr in [ 'year', 'month', 'day', 'hour', 'minute', 'second', 'microsecond']:\n print attr, ':', getattr(d, attr)\n\nor\nmdt = datetime.datetime(d.year, d.month, d.day) #generalized\n\n"
] |
[
10,
1
] |
[] |
[] |
[
"datetime",
"python"
] |
stackoverflow_0001988599_datetime_python.txt
|
Q:
exception not getting caught when not in the right package in python
EDIT:
OK, I managed to isolate the bug and the exact, complete code to to reproduce it. But it appears either something that's by design, or a bug in python.
Create two sibling packages: admin & General, each with it's own __init__.py, of course.
In the package admin put the file 'test.py' with the following code:
from General.test02 import run
import RunStoppedException
try:
run()
except RunStoppedException.RunStoppedException,e:
print 'right'
except Exception,e:
print 'this is what i got: %s'%type(e)
and also in admin put the file 'RunStoppedException.py' with the following code:
class RunStoppedException(Exception):
def __init__(self):
Exception.__init__(self)
In the package General put the file test02.py with the code:
import admin.RunStoppedException
def run():
raise admin.RunStoppedException.RunStoppedException()
the printout:
this is what i got: <class 'admin.RunStoppedException.RunStoppedException'>
When it should've been right. This only happens when one file sits in the same dir as the exception, so they import it differently.
Is this by design, or a bug of python?
I am using python2.6, running it under eclipse+pydev
A:
import admin.RunStoppedException
This is an ambiguous relative import. Do you mean RunStoppedException from the admin top-level module? Or from mypackage.admin when you're in a package? If your current working directory (which is added to the module search path) happens to be inside the package, it could be either, depending on whether Python knows it's inside a package, which depends on how you're running the script.
If you've got both import admin.RunStoppedException and import RunStoppedException in different modules, that could very well import two copies of the same module: a top-level RunStoppedException and a submodule admin.RunStoppedException of the package admin, resulting in two instances of the exception, and the subsequent mismatch in except.
So don't use implicit relative imports. They are in any case going away (see PEP328). Always spell out the full module name, eg. import mypackage.admin.RunStoppedException. However avoid using the same identifier for your module name and your class name as this is terribly confusing. Note that Python will allow you to say:
except RunStoppedException:
where that identifier is referring to a module and not a subclass of Exception. This is for historical reasons and may also go away, but for the meantime it can hide bugs. A common pattern would be to use mypackage.exceptions to hold many exceptions. One-class-per-file is a Java habit that is frowned on in Python.
It's also a good idea generally try to keep the importing of module contents (like classes) down as much as possible. If something changes the copy of RunStoppedException inside the module, you'll now have different copies in different scripts. Though classes mostly don't change, module-level variables may, and monkey-patching and reloading become much harder when you're taking stuff outside of its owner module.
A:
I can only see two reasons
You have two different Exception classes with same name
Edit: I think culprit is this part because you import Exception class two ways
from RunStoppedException import RunStoppedException
from admin.RunStoppedException import RunStoppedException
make them consistent and your problem will be gone.
You are using some IDE, which is interfering with your code, this sounds bizarre but try to run your code on command line if you aren't
Even 1 and 2 doesn't fix your problem, write a small piece of code demonstrating the problem, which we can run here, which we can fix, but I am sure we will not need to because once you have written such a small standalone script where you can replicate the problem you will find the solution too.
A:
Works fine for me:
[/tmp] ls admin/
RunStoppedException.py __init__.py test.py
RunStoppedException.pyc __init__.pyc
[/tmp] ls General/
__init__.py __init__.pyc test02.py test02.pyc
[/tmp] python -m admin.test
right
[/tmp]
Running on:
Python 2.6.4 Stackless 3.1b3 060516 (release26-maint, Dec 14 2009, 23:28:06)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
My guess is that you have another "General" on your path somewhere, perhaps from earlier tests, and thats why the exceptions don't match.
Did you try the id/inspect.getabsfile debugging? If so, what was the output?
|
exception not getting caught when not in the right package in python
|
EDIT:
OK, I managed to isolate the bug and the exact, complete code to to reproduce it. But it appears either something that's by design, or a bug in python.
Create two sibling packages: admin & General, each with it's own __init__.py, of course.
In the package admin put the file 'test.py' with the following code:
from General.test02 import run
import RunStoppedException
try:
run()
except RunStoppedException.RunStoppedException,e:
print 'right'
except Exception,e:
print 'this is what i got: %s'%type(e)
and also in admin put the file 'RunStoppedException.py' with the following code:
class RunStoppedException(Exception):
def __init__(self):
Exception.__init__(self)
In the package General put the file test02.py with the code:
import admin.RunStoppedException
def run():
raise admin.RunStoppedException.RunStoppedException()
the printout:
this is what i got: <class 'admin.RunStoppedException.RunStoppedException'>
When it should've been right. This only happens when one file sits in the same dir as the exception, so they import it differently.
Is this by design, or a bug of python?
I am using python2.6, running it under eclipse+pydev
|
[
"import admin.RunStoppedException\n\nThis is an ambiguous relative import. Do you mean RunStoppedException from the admin top-level module? Or from mypackage.admin when you're in a package? If your current working directory (which is added to the module search path) happens to be inside the package, it could be either, depending on whether Python knows it's inside a package, which depends on how you're running the script.\nIf you've got both import admin.RunStoppedException and import RunStoppedException in different modules, that could very well import two copies of the same module: a top-level RunStoppedException and a submodule admin.RunStoppedException of the package admin, resulting in two instances of the exception, and the subsequent mismatch in except.\nSo don't use implicit relative imports. They are in any case going away (see PEP328). Always spell out the full module name, eg. import mypackage.admin.RunStoppedException. However avoid using the same identifier for your module name and your class name as this is terribly confusing. Note that Python will allow you to say:\nexcept RunStoppedException:\n\nwhere that identifier is referring to a module and not a subclass of Exception. This is for historical reasons and may also go away, but for the meantime it can hide bugs. A common pattern would be to use mypackage.exceptions to hold many exceptions. One-class-per-file is a Java habit that is frowned on in Python.\nIt's also a good idea generally try to keep the importing of module contents (like classes) down as much as possible. If something changes the copy of RunStoppedException inside the module, you'll now have different copies in different scripts. Though classes mostly don't change, module-level variables may, and monkey-patching and reloading become much harder when you're taking stuff outside of its owner module.\n",
"I can only see two reasons\n\nYou have two different Exception classes with same name\nEdit: I think culprit is this part because you import Exception class two ways\n\nfrom RunStoppedException import RunStoppedException \nfrom admin.RunStoppedException import RunStoppedException\n\nmake them consistent and your problem will be gone.\nYou are using some IDE, which is interfering with your code, this sounds bizarre but try to run your code on command line if you aren't\n\nEven 1 and 2 doesn't fix your problem, write a small piece of code demonstrating the problem, which we can run here, which we can fix, but I am sure we will not need to because once you have written such a small standalone script where you can replicate the problem you will find the solution too.\n",
"Works fine for me:\n[/tmp] ls admin/\nRunStoppedException.py __init__.py test.py\nRunStoppedException.pyc __init__.pyc\n[/tmp] ls General/\n__init__.py __init__.pyc test02.py test02.pyc\n[/tmp] python -m admin.test \nright\n[/tmp] \n\nRunning on:\nPython 2.6.4 Stackless 3.1b3 060516 (release26-maint, Dec 14 2009, 23:28:06) \n[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin\n\nMy guess is that you have another \"General\" on your path somewhere, perhaps from earlier tests, and thats why the exceptions don't match.\nDid you try the id/inspect.getabsfile debugging? If so, what was the output?\n"
] |
[
7,
0,
0
] |
[] |
[] |
[
"exception",
"package",
"python"
] |
stackoverflow_0001988475_exception_package_python.txt
|
Q:
Sphinx 0.6.3: The languages module cannot be found
I receive the following error message when I try to use the sphinx-quickstart generated make.bat command:
make html
Error: The languages module cannot be found. Did you install Sphinx and its dependencies correctly?
I tried running the sphinx-build command and received the same error.
I am using Python 2.6.4 on Windows Vista. I have installed setuptools-0.6c11.win32-py2.6, and installed Sphinx 0.6.3 using easy_install.
It appears that init.py is failing when it tries to import cmdline (I grep'd part of the error message, and init.py was the only file that turned up) since the error shows up in the try block that imports cmdline.
try:
from sphinx import cmdline
except ImportError, err:
errstr = str(err)
if errstr.lower().startswith('no module named'):
whichmod = errstr[16:]
hint = ''
if whichmod.startswith('docutils'):
whichmod = 'Docutils library'
elif whichmod.startswith('jinja'):
whichmod = 'Jinja library'
elif whichmod == 'roman':
whichmod = 'roman module (which is distributed with Docutils)'
hint = ('This can happen if you upgraded docutils using\n'
'easy_install without uninstalling the old version'
'first.')
else:
whichmod += ' module'
print >>sys.stderr, ('Error: The %s cannot be found. '
'Did you install Sphinx and its dependencies '
'correctly?' % whichmod)
if hint:
print >> sys.stderr, hint
return 1
raise
I do not see where "languages" would get passed as an argument, so I am confused at the error message. I have searched for a solution, but have turned up nothing.
A:
Grepping through the sphinx package for "languages", the only relevant import is:
/usr/lib/pymodules/python2.5/sphinx/environment.py:from docutils.parsers.rst.languages import en as english
So most likely there's something wrong with your docutils installation. Admittedly, the error message would be more helpful if the full package path was reported.
|
Sphinx 0.6.3: The languages module cannot be found
|
I receive the following error message when I try to use the sphinx-quickstart generated make.bat command:
make html
Error: The languages module cannot be found. Did you install Sphinx and its dependencies correctly?
I tried running the sphinx-build command and received the same error.
I am using Python 2.6.4 on Windows Vista. I have installed setuptools-0.6c11.win32-py2.6, and installed Sphinx 0.6.3 using easy_install.
It appears that init.py is failing when it tries to import cmdline (I grep'd part of the error message, and init.py was the only file that turned up) since the error shows up in the try block that imports cmdline.
try:
from sphinx import cmdline
except ImportError, err:
errstr = str(err)
if errstr.lower().startswith('no module named'):
whichmod = errstr[16:]
hint = ''
if whichmod.startswith('docutils'):
whichmod = 'Docutils library'
elif whichmod.startswith('jinja'):
whichmod = 'Jinja library'
elif whichmod == 'roman':
whichmod = 'roman module (which is distributed with Docutils)'
hint = ('This can happen if you upgraded docutils using\n'
'easy_install without uninstalling the old version'
'first.')
else:
whichmod += ' module'
print >>sys.stderr, ('Error: The %s cannot be found. '
'Did you install Sphinx and its dependencies '
'correctly?' % whichmod)
if hint:
print >> sys.stderr, hint
return 1
raise
I do not see where "languages" would get passed as an argument, so I am confused at the error message. I have searched for a solution, but have turned up nothing.
|
[
"Grepping through the sphinx package for \"languages\", the only relevant import is:\n/usr/lib/pymodules/python2.5/sphinx/environment.py:from docutils.parsers.rst.languages import en as english\n\nSo most likely there's something wrong with your docutils installation. Admittedly, the error message would be more helpful if the full package path was reported.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_sphinx"
] |
stackoverflow_0001987658_python_python_sphinx.txt
|
Q:
Reading values with raw_input in Python
I am trying to do integration in Python but whenever I key in a value my outputs always results in 0. What the reason?
E.g.:
def main():
eq_of_form()
value_of_a()
value_of_b()
value_of_c()
value_of_m()
value_of_n()
value_of_x()
area_under_graph()
def eq_of_form():
print "Eq of the form y = ax^m + bx^n + c " + ":"
def value_of_a():
return raw_input("Enter value for a \n")
def value_of_b():
return raw_input("Enter value for b \n")
def value_of_c():
return raw_input("Enter value for c \n")
def value_of_m():
return raw_input("Enter value for m \n")
def value_of_n():
return raw_input("Enter value for n \n")
def value_of_x():
return raw_input("Enter a value for x to find " +
"value of y and the gradient at that point \n " + "x = ")
def area_under_graph():
y = (int(value_of_a())*int(value_of_x())**(int(value_of_m())+1))/((int(value_of_m())+1))
// * 2nd part.This works for me(:
// + (int(value_of_b())) * (int(value_of_x())**
// (int(value_of_n())+1))/(int(value_of_n())+1) + ((int(value_of_c())*int(value_of_x())))
print y
main()
(* Note: the eq under the area_under_graph() function is only half of it because the other half kind of work so I did not post it:))
For the top code, I tried inputting the values here: (maybe you can try using the same(: )
a = 1
b = 2
c = 1
m = 2
n = 1
x = 1
I am supposed to get 7/3 which is 2.333, but I end up getting 2. The problem appears to lie in the first part of the eq.
Sorry for the newbie question.
A:
Your code at the start is wrong. You need to assign your variables after you read the user input:
value_of_a()
should be:
a = value_of_a()
It is also unnecessary to write a separate function for inputting each variable. You could instead have a function that takes a parameter:
def get_user_value(name):
return raw_input("Enter value for %s\n" % name)
a = get_user_value("a")
b = get_user_value("b")
# etc..
But then you ignore all these values and read them again inside the area_under_curve() method. This is probably not what you intend to do. Furthermore inside this method you assume that all parameters are integers. If you are using Python 2.5 the division here is integer division:
m1/m2
This could return 0 when the result was actually supposed to be a non-integer like 0.125. You need to use floats instead of integers to do the calculation. You can do this in Python 2.5 using float(m). In Python 3.0 the division operator does what you want by default.
A:
/ does Integer division in Python2, this means a/b is the biggest integer c with c*b <=a, so 7/3 is indeed 2. You want floats, so you need to use them .. replace all the int with float in your code.
You should probably take another look at functions too ... you code can be much much shorter :-)
A:
In Python 2.x, dividing an integer by another integer results in an integer. Either use from __future__ import division, or turn one of the integers into a float by passing it to float().
A:
The issue is that you're using integer arithmetic - see all those int calls you've got everywhere. Integer arithmetic (in Python 2.x) will only ever return integers, so you'll never get 2.33, only 2.
Use float() instead of int() and things should work.
|
Reading values with raw_input in Python
|
I am trying to do integration in Python but whenever I key in a value my outputs always results in 0. What the reason?
E.g.:
def main():
eq_of_form()
value_of_a()
value_of_b()
value_of_c()
value_of_m()
value_of_n()
value_of_x()
area_under_graph()
def eq_of_form():
print "Eq of the form y = ax^m + bx^n + c " + ":"
def value_of_a():
return raw_input("Enter value for a \n")
def value_of_b():
return raw_input("Enter value for b \n")
def value_of_c():
return raw_input("Enter value for c \n")
def value_of_m():
return raw_input("Enter value for m \n")
def value_of_n():
return raw_input("Enter value for n \n")
def value_of_x():
return raw_input("Enter a value for x to find " +
"value of y and the gradient at that point \n " + "x = ")
def area_under_graph():
y = (int(value_of_a())*int(value_of_x())**(int(value_of_m())+1))/((int(value_of_m())+1))
// * 2nd part.This works for me(:
// + (int(value_of_b())) * (int(value_of_x())**
// (int(value_of_n())+1))/(int(value_of_n())+1) + ((int(value_of_c())*int(value_of_x())))
print y
main()
(* Note: the eq under the area_under_graph() function is only half of it because the other half kind of work so I did not post it:))
For the top code, I tried inputting the values here: (maybe you can try using the same(: )
a = 1
b = 2
c = 1
m = 2
n = 1
x = 1
I am supposed to get 7/3 which is 2.333, but I end up getting 2. The problem appears to lie in the first part of the eq.
Sorry for the newbie question.
|
[
"Your code at the start is wrong. You need to assign your variables after you read the user input:\nvalue_of_a()\n\nshould be:\na = value_of_a()\n\nIt is also unnecessary to write a separate function for inputting each variable. You could instead have a function that takes a parameter:\ndef get_user_value(name):\n return raw_input(\"Enter value for %s\\n\" % name)\n\na = get_user_value(\"a\")\nb = get_user_value(\"b\")\n# etc..\n\nBut then you ignore all these values and read them again inside the area_under_curve() method. This is probably not what you intend to do. Furthermore inside this method you assume that all parameters are integers. If you are using Python 2.5 the division here is integer division:\nm1/m2\n\nThis could return 0 when the result was actually supposed to be a non-integer like 0.125. You need to use floats instead of integers to do the calculation. You can do this in Python 2.5 using float(m). In Python 3.0 the division operator does what you want by default.\n",
"/ does Integer division in Python2, this means a/b is the biggest integer c with c*b <=a, so 7/3 is indeed 2. You want floats, so you need to use them .. replace all the int with float in your code.\nYou should probably take another look at functions too ... you code can be much much shorter :-)\n",
"In Python 2.x, dividing an integer by another integer results in an integer. Either use from __future__ import division, or turn one of the integers into a float by passing it to float().\n",
"The issue is that you're using integer arithmetic - see all those int calls you've got everywhere. Integer arithmetic (in Python 2.x) will only ever return integers, so you'll never get 2.33, only 2. \nUse float() instead of int() and things should work.\n"
] |
[
5,
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001988733_python.txt
|
Q:
Qt Python - report in toolbox: QTextDocument and QPainter
I want to build multiple documents report using toolbox. Two pages is an option get a start. Formatting is ok, and can be worked latter.
I tried using QTextDocument in Html, and alternatively QPainter.
Of course, to make a test and keep things simple, I just ask in Qt to show the report title displayed on top of the document.
Here is the function for the toolbox main frame:
def toolbox_frame(self,MainWindow):
self.toolBox = QtGui.QToolBox(self.centralwidget)
self.toolBox.setGeometry(QtCore.QRect(10, 20, 471, 201))
self.toolbox_page1()
self.toolBox.addItem(self.page1, "")
self.toolBox.setItemText(self.toolBox.indexOf(self.page1), QtGui.QApplication.translate("MainWindow", "Page 1", None, QtGui.QApplication.UnicodeUTF8))
self.toolbox_page2()
self.toolBox.addItem(self.page2, "")
self.toolBox.setItemText(self.toolBox.indexOf(self.page2), QtGui.QApplication.translate("MainWindow", "Page 2", None, QtGui.QApplication.UnicodeUTF8))
... the function that holds the first page using QTextDocument with Html:
def toolbox_page1(self):
self.page1 = QtGui.QWidget()
self.page1.setGeometry(QtCore.QRect(0, 0, 471, 145))
html = u""
html += (" <p><font color=red><b>Title - Build "
"a Report : page 1.</b></font>")
document = QtGui.QTextDocument(self.page1)
document.setHtml(html)
and here the function using QPainter:
def toolbox_page2(self):
self.page2 = QtGui.QWidget()
self.page2.setGeometry(QtCore.QRect(0, 0, 471, 145))
sansFont = QtGui.QFont("Helvetica", 10)
painter = QtGui.QPainter(self.page2)
painter.setFont(sansFont)
painter.setPen(QtGui.QColor(168, 34, 3))
x=50
y=50
painter.drawText(x, y, "Title - Build a Report : page 2")
The problem is, that it just displays the toolbox with the page 1 and page 2, but not the title for both report inside the page 1 and page 2.
What is missing here?
All comments and suggestions are highly appreciated.
A:
For page1, the document needs to be displayed by a widget. Add the following to that function
textEdit = QtGui.QTextEdit(self.page1)
textEdit.setDocument(document)
layout = QtGui.QVBoxLayout(self.page1)
layout.addWidget(textEdit)
For page2, painting on a widget must be in response to a paint event which requires creating a subclass or event filter. A simpler way to draw some text is using a QLabel. Change the function to the following
def toolbox_page2(self):
self.page2 = QtGui.QWidget()
self.page2.setGeometry(QtCore.QRect(0, 0, 471, 145))
label = QtGui.QLabel(self.page2)
label.setText("Title - Build a Report : page 2")
label.setStyleSheet("font: 10pt 'Helvetica'; color: rgb(168, 34, 3)")
label.setGeometry(QtCore.QRect(QtCore.QPoint(50, 50), label.sizeHint()))
|
Qt Python - report in toolbox: QTextDocument and QPainter
|
I want to build multiple documents report using toolbox. Two pages is an option get a start. Formatting is ok, and can be worked latter.
I tried using QTextDocument in Html, and alternatively QPainter.
Of course, to make a test and keep things simple, I just ask in Qt to show the report title displayed on top of the document.
Here is the function for the toolbox main frame:
def toolbox_frame(self,MainWindow):
self.toolBox = QtGui.QToolBox(self.centralwidget)
self.toolBox.setGeometry(QtCore.QRect(10, 20, 471, 201))
self.toolbox_page1()
self.toolBox.addItem(self.page1, "")
self.toolBox.setItemText(self.toolBox.indexOf(self.page1), QtGui.QApplication.translate("MainWindow", "Page 1", None, QtGui.QApplication.UnicodeUTF8))
self.toolbox_page2()
self.toolBox.addItem(self.page2, "")
self.toolBox.setItemText(self.toolBox.indexOf(self.page2), QtGui.QApplication.translate("MainWindow", "Page 2", None, QtGui.QApplication.UnicodeUTF8))
... the function that holds the first page using QTextDocument with Html:
def toolbox_page1(self):
self.page1 = QtGui.QWidget()
self.page1.setGeometry(QtCore.QRect(0, 0, 471, 145))
html = u""
html += (" <p><font color=red><b>Title - Build "
"a Report : page 1.</b></font>")
document = QtGui.QTextDocument(self.page1)
document.setHtml(html)
and here the function using QPainter:
def toolbox_page2(self):
self.page2 = QtGui.QWidget()
self.page2.setGeometry(QtCore.QRect(0, 0, 471, 145))
sansFont = QtGui.QFont("Helvetica", 10)
painter = QtGui.QPainter(self.page2)
painter.setFont(sansFont)
painter.setPen(QtGui.QColor(168, 34, 3))
x=50
y=50
painter.drawText(x, y, "Title - Build a Report : page 2")
The problem is, that it just displays the toolbox with the page 1 and page 2, but not the title for both report inside the page 1 and page 2.
What is missing here?
All comments and suggestions are highly appreciated.
|
[
"For page1, the document needs to be displayed by a widget. Add the following to that function\n textEdit = QtGui.QTextEdit(self.page1)\n textEdit.setDocument(document)\n layout = QtGui.QVBoxLayout(self.page1)\n layout.addWidget(textEdit)\n\nFor page2, painting on a widget must be in response to a paint event which requires creating a subclass or event filter. A simpler way to draw some text is using a QLabel. Change the function to the following\ndef toolbox_page2(self):\n self.page2 = QtGui.QWidget()\n self.page2.setGeometry(QtCore.QRect(0, 0, 471, 145))\n\n label = QtGui.QLabel(self.page2)\n label.setText(\"Title - Build a Report : page 2\")\n label.setStyleSheet(\"font: 10pt 'Helvetica'; color: rgb(168, 34, 3)\")\n label.setGeometry(QtCore.QRect(QtCore.QPoint(50, 50), label.sizeHint()))\n\n"
] |
[
1
] |
[] |
[] |
[
"html",
"python",
"qpainter",
"qt",
"reportviewer"
] |
stackoverflow_0001988150_html_python_qpainter_qt_reportviewer.txt
|
Q:
Managing Setup code with TimeIt
As part of a pet project of mine, I need to test the performance of various different implementations of my code in Python. I anticipate this to be something I do alot of, and I want to try to make the code I write to serve this aim as easy to update and modify as possible.
It's still in its infancy at the moment, but I've taken to using strings to manage common setup or testing code, eg:
naiveSetup = 'from PerformanceTests.Vectors import NaiveVector\n' \
+ 'left = NaiveVector([1,0,0])\n' \
+ 'right = NaiveVector([0,1,0])'
This allows me to only write the code once, at the expense of making it harder to read and clunky to update.
Is there a better way?
A:
Use triple quotes """
setup_code = """
from PerformanceTests.Vectors import NaiveVector
left = NaiveVector([1,0,0])
right = NaiveVector([0,1,0])
"""
Another interesting method is provided in the docs of timeit:
def test():
"Stupid test function"
L = []
for i in range(100):
L.append(i)
if __name__=='__main__':
from timeit import Timer
t = Timer("test()", "from __main__ import test")
print t.timeit()
Though this isn't suitable for all needs.
A:
Timing code is fine, but it will still leave you guessing what's going on.
To find out what's actually going on, manually pause it a few random times in the debugger, and examine the call stack.
For example, in the code that is 30x slower in one implementation than in another, each sample of the stack has a 96.7% chance of falling in the extra time that it is spending, so you can see why.
No guesswork required.
|
Managing Setup code with TimeIt
|
As part of a pet project of mine, I need to test the performance of various different implementations of my code in Python. I anticipate this to be something I do alot of, and I want to try to make the code I write to serve this aim as easy to update and modify as possible.
It's still in its infancy at the moment, but I've taken to using strings to manage common setup or testing code, eg:
naiveSetup = 'from PerformanceTests.Vectors import NaiveVector\n' \
+ 'left = NaiveVector([1,0,0])\n' \
+ 'right = NaiveVector([0,1,0])'
This allows me to only write the code once, at the expense of making it harder to read and clunky to update.
Is there a better way?
|
[
"Use triple quotes \"\"\"\nsetup_code = \"\"\"\n from PerformanceTests.Vectors import NaiveVector\n left = NaiveVector([1,0,0])\n right = NaiveVector([0,1,0])\n\"\"\"\n\nAnother interesting method is provided in the docs of timeit:\ndef test():\n \"Stupid test function\"\n L = []\n for i in range(100):\n L.append(i)\n\nif __name__=='__main__':\n from timeit import Timer\n t = Timer(\"test()\", \"from __main__ import test\")\n print t.timeit()\n\nThough this isn't suitable for all needs.\n",
"Timing code is fine, but it will still leave you guessing what's going on.\nTo find out what's actually going on, manually pause it a few random times in the debugger, and examine the call stack.\nFor example, in the code that is 30x slower in one implementation than in another, each sample of the stack has a 96.7% chance of falling in the extra time that it is spending, so you can see why.\nNo guesswork required.\n"
] |
[
3,
0
] |
[] |
[] |
[
"performance",
"python",
"timeit"
] |
stackoverflow_0001988127_performance_python_timeit.txt
|
Q:
Why django contains a lot of '__init__.py'?
Many directories in a django project contain a __init__.py and I think it will be used as initialization for something. Where is this __init__.py used?
A:
Python doesn't take every subdirectory of every directory in sys.path to necessarily be a package: only those with a file called __init__.py. Consider the following shell session:
$ mkdir adir
$ echo 'print "hello world"' > adir/helo.py
$ python -c 'import adir.helo'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named adir.helo
$ touch adir/__init__.py
$ python -c 'import adir.helo'
hello world
See? With just directory adir and module helo.py in it, the attempt to import adir.helo fails. If __init__.py also exists in adir, then Python knows that adir is a package, and therefore the import succeeds.
A:
Your question is not clear. What exactly are you asking?
The file __init__.py is there so your folder can be defined as a package, which lets you do things like:
from myapp.models import Something
|
Why django contains a lot of '__init__.py'?
|
Many directories in a django project contain a __init__.py and I think it will be used as initialization for something. Where is this __init__.py used?
|
[
"Python doesn't take every subdirectory of every directory in sys.path to necessarily be a package: only those with a file called __init__.py. Consider the following shell session:\n$ mkdir adir\n$ echo 'print \"hello world\"' > adir/helo.py\n$ python -c 'import adir.helo'\nTraceback (most recent call last):\n File \"<string>\", line 1, in <module>\nImportError: No module named adir.helo\n$ touch adir/__init__.py\n$ python -c 'import adir.helo'\nhello world\n\nSee? With just directory adir and module helo.py in it, the attempt to import adir.helo fails. If __init__.py also exists in adir, then Python knows that adir is a package, and therefore the import succeeds.\n",
"Your question is not clear. What exactly are you asking? \nThe file __init__.py is there so your folder can be defined as a package, which lets you do things like:\nfrom myapp.models import Something\n\n"
] |
[
11,
7
] |
[] |
[] |
[
"django",
"module",
"python"
] |
stackoverflow_0001988149_django_module_python.txt
|
Q:
why my 'time' class has not 'tzset' attribute
my code:
import time
print hasattr(time.tzset)#error
and why someone do this like next:
if hasattr(time, 'tzset'):
# Move the time zone info into os.environ. See ticket #2315 for why
# we don't do this unconditionally (breaks Windows).
os.environ['TZ'] = self.TIME_ZONE
time.tzset()
i can't understand.
thanks
A:
See the docs for tzset: they clearly say
Availability: Unix.
so you would have it in, say, MacOSX, Solaris, or Linux, but not on Windows.
Also: there is no such thing as "your time class", despite your Q's title: the time you're trying to use is a module, not a class.
And finally, as @Daniel says, your first use of hasattr is totally wrong (the second one, which you don't understand, is correct).
A:
Your use of hasattr is wrong. The correct syntax is shown in your second snippet.
hasattr takes two arguments - an object, and a string represent the attribute you want to check for. The way you've done it, Python will try to evaluate time.tzset first, before passing it to hasattr - thus causing the very error you're trying to avoid.
A:
Either you have a local module that is shadowing the stock time module, you're using a version of Python older than 2.3, or you're running Python on Windows.
|
why my 'time' class has not 'tzset' attribute
|
my code:
import time
print hasattr(time.tzset)#error
and why someone do this like next:
if hasattr(time, 'tzset'):
# Move the time zone info into os.environ. See ticket #2315 for why
# we don't do this unconditionally (breaks Windows).
os.environ['TZ'] = self.TIME_ZONE
time.tzset()
i can't understand.
thanks
|
[
"See the docs for tzset: they clearly say\n\nAvailability: Unix.\n\nso you would have it in, say, MacOSX, Solaris, or Linux, but not on Windows.\nAlso: there is no such thing as \"your time class\", despite your Q's title: the time you're trying to use is a module, not a class.\nAnd finally, as @Daniel says, your first use of hasattr is totally wrong (the second one, which you don't understand, is correct).\n",
"Your use of hasattr is wrong. The correct syntax is shown in your second snippet.\nhasattr takes two arguments - an object, and a string represent the attribute you want to check for. The way you've done it, Python will try to evaluate time.tzset first, before passing it to hasattr - thus causing the very error you're trying to avoid.\n",
"Either you have a local module that is shadowing the stock time module, you're using a version of Python older than 2.3, or you're running Python on Windows.\n"
] |
[
11,
3,
2
] |
[] |
[] |
[
"python",
"syntax"
] |
stackoverflow_0001988182_python_syntax.txt
|
Q:
Asyncore not working properly with Tkinter GUI
At this point I'm still a noob when it comes to GUI and network programming so I'm hoping this will be a very simple fix. I've got a very basic understanding of the tkinter and asyncore modules having built a handful of programs in each of them, however I'm having trouble using both of them together in a program. I put together an entire UI only to find out that I could not achieve any significant asynchronous networking functionality. For the sake of simplicity, I deconstructed the program into its simplest form to illustrate the basic problem I'm having. Heres the code:
from Tkinter import *
import asyncore, socket
class Application(object):
def __init__(self, root):
mainFrame = Frame(root)
mainFrame.grid(column=1, row=1, columnspan=3, rowspan=1)
mainButton = Button(mainFrame, text='Click', command=self.makeSocket)
mainButton.grid(column=2, row=1, columnspan=1, rowspan=1, pady=7, padx=40)
def makeSocket(self):
clientSocket()
class clientSocket(asyncore.dispatcher):
def __init__(self):
asyncore.dispatcher.__init__(self)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.connect(("XXX.XXX.XXX.XXX", XXXX))
print 'init works'
def handle_connect(self):
print 'connect works'
root = Tk()
myApp = Application(root)
root.after_idle(asyncore.loop)
root.mainloop()
So when I run the program and click the button, I get the string 'init works', indicating that the clientSocket object is initialized and the connection is made successfully. However, the handle_connect method doesn't run. And if I implement the handle_read method and execute a command on the server(to send data back to the client) this method isn't called either. I'm thinking that there is some general problem that is preventing the asyncore loop from running on its own. I realize that tkinters event loop could be the culprit, but I was under the impression that the after_idle method would allow non-Tkinter events to be processed while the GUI is idle. Is it the tkinter event loop that is still causing problems or could it be something else?
A:
There are several problems here, and I'm not sure which one it is.
asyncore.loop is a function that never returns, when things are working properly. root.mainloop is probably a function that never returns until you close the window. So things are likely to go wrong because at some point one loop will be starved by the other for some period of time.
(Incidentally, this is why I dislike frameworks that attempt to make their usage easier by replacing the main loop and replacing it with an event-driven system - it works great until you need to use two or more of these systems together, at which point things can get messy.)
However, you can limit the number of times that asyncore.loop will iterate. Try this instead:
def poll_asyncore_once():
asyncore.loop(count=1)
root.after_idle(poll_asyncore_once)
You might want to add a timeout value to the loop call too, something less than a second.
However, I would have still thought that the connection would have gone through eventually even if the GUI did end up starved of events as a result of you entering the asyncore loop. This implies something else has gone wrong, and it's possible that asyncore is raising an exception in the connect() method and TK is swallowing it. Try putting an exception handler in clientSocket.init and see how you go.
A:
See this recipe by Jacob Hallén showing how to use asyncore and Tkinter together (basically by means of a threading trick). (It's also expanded as recipe 9.6 in the Python Cookbook's first printed edition, and as 11.4 in the second edition).
|
Asyncore not working properly with Tkinter GUI
|
At this point I'm still a noob when it comes to GUI and network programming so I'm hoping this will be a very simple fix. I've got a very basic understanding of the tkinter and asyncore modules having built a handful of programs in each of them, however I'm having trouble using both of them together in a program. I put together an entire UI only to find out that I could not achieve any significant asynchronous networking functionality. For the sake of simplicity, I deconstructed the program into its simplest form to illustrate the basic problem I'm having. Heres the code:
from Tkinter import *
import asyncore, socket
class Application(object):
def __init__(self, root):
mainFrame = Frame(root)
mainFrame.grid(column=1, row=1, columnspan=3, rowspan=1)
mainButton = Button(mainFrame, text='Click', command=self.makeSocket)
mainButton.grid(column=2, row=1, columnspan=1, rowspan=1, pady=7, padx=40)
def makeSocket(self):
clientSocket()
class clientSocket(asyncore.dispatcher):
def __init__(self):
asyncore.dispatcher.__init__(self)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.connect(("XXX.XXX.XXX.XXX", XXXX))
print 'init works'
def handle_connect(self):
print 'connect works'
root = Tk()
myApp = Application(root)
root.after_idle(asyncore.loop)
root.mainloop()
So when I run the program and click the button, I get the string 'init works', indicating that the clientSocket object is initialized and the connection is made successfully. However, the handle_connect method doesn't run. And if I implement the handle_read method and execute a command on the server(to send data back to the client) this method isn't called either. I'm thinking that there is some general problem that is preventing the asyncore loop from running on its own. I realize that tkinters event loop could be the culprit, but I was under the impression that the after_idle method would allow non-Tkinter events to be processed while the GUI is idle. Is it the tkinter event loop that is still causing problems or could it be something else?
|
[
"There are several problems here, and I'm not sure which one it is.\nasyncore.loop is a function that never returns, when things are working properly. root.mainloop is probably a function that never returns until you close the window. So things are likely to go wrong because at some point one loop will be starved by the other for some period of time.\n(Incidentally, this is why I dislike frameworks that attempt to make their usage easier by replacing the main loop and replacing it with an event-driven system - it works great until you need to use two or more of these systems together, at which point things can get messy.)\nHowever, you can limit the number of times that asyncore.loop will iterate. Try this instead:\ndef poll_asyncore_once():\n asyncore.loop(count=1)\n\nroot.after_idle(poll_asyncore_once) \n\nYou might want to add a timeout value to the loop call too, something less than a second.\nHowever, I would have still thought that the connection would have gone through eventually even if the GUI did end up starved of events as a result of you entering the asyncore loop. This implies something else has gone wrong, and it's possible that asyncore is raising an exception in the connect() method and TK is swallowing it. Try putting an exception handler in clientSocket.init and see how you go.\n",
"See this recipe by Jacob Hallén showing how to use asyncore and Tkinter together (basically by means of a threading trick). (It's also expanded as recipe 9.6 in the Python Cookbook's first printed edition, and as 11.4 in the second edition).\n"
] |
[
2,
0
] |
[] |
[] |
[
"asyncore",
"python",
"tkinter"
] |
stackoverflow_0001988286_asyncore_python_tkinter.txt
|
Q:
pkg_resources not found after installing setuptools
I am trying to compile and install python2.6.4 on Debian 5.0.3 (64bit). I installed using 'make altinstall' as I want to keep python 2.5.2 that comes with Deb5.0 as my default python.
Following this, I installed setuptools 0.6c11 using the command 'sudo sh setuptools-0.6c11-py2.6.egg --prefix=/usr/local'. However, after installing when I try to 'import pkg_resources' from python2.6, it doesnt work saying 'ImportError: No module named pkg_resources'. Without pkg_resources, I can hardly do much.
Can someone share here what may be going wrong or what's missing?
A:
Packaging and package integration is tricky. Debian has Python 2.6, but for some internal reason it is only in the experimental branch:
$ rmadison python2.6
python2.6 | 2.6.2-2 | experimental | source, ia64
python2.6 | 2.6.4-1 | experimental | source, alpha, amd64, armel, hppa, \
i386, powerpc, s390, sparc
$
I would use that package as it is likely to be more fully integrated with the rest of python packaging. Plus, as it is a .deb, you can easily uninstall it.
And the debian-python list may be able to assist you further.
|
pkg_resources not found after installing setuptools
|
I am trying to compile and install python2.6.4 on Debian 5.0.3 (64bit). I installed using 'make altinstall' as I want to keep python 2.5.2 that comes with Deb5.0 as my default python.
Following this, I installed setuptools 0.6c11 using the command 'sudo sh setuptools-0.6c11-py2.6.egg --prefix=/usr/local'. However, after installing when I try to 'import pkg_resources' from python2.6, it doesnt work saying 'ImportError: No module named pkg_resources'. Without pkg_resources, I can hardly do much.
Can someone share here what may be going wrong or what's missing?
|
[
"Packaging and package integration is tricky. Debian has Python 2.6, but for some internal reason it is only in the experimental branch:\n$ rmadison python2.6\n python2.6 | 2.6.2-2 | experimental | source, ia64\n python2.6 | 2.6.4-1 | experimental | source, alpha, amd64, armel, hppa, \\\n i386, powerpc, s390, sparc\n\n$\n\nI would use that package as it is likely to be more fully integrated with the rest of python packaging. Plus, as it is a .deb, you can easily uninstall it. \nAnd the debian-python list may be able to assist you further.\n"
] |
[
0
] |
[] |
[] |
[
"debian",
"pkg_resources",
"python",
"setuptools"
] |
stackoverflow_0001989355_debian_pkg_resources_python_setuptools.txt
|
Q:
Repeating regex groups
I'm trying to get some information from a web site. The information I want is in a table so I made a regex but I don't know the right way to simplify it.
The following are two parts of my regex that I would like to simplify:
<br>(.*)<br>(.*)<br>(.*)
<tr><td>(.+)r>(.+)r>(.+)r>(.+).+</td></tr> # This part should be repeated n times(n = 1 to 10)
I looked through the python documentation and I can't realize how to do it. Perhaps you can give me a hint.
Thank you,
mF.
A:
RegEx match open tags except XHTML self-contained tags
"Have you tried using an XML parser instead?"
EDIT: This is the way to go: Beautiful Soup
A:
This is the wrong way to go unless you're trying to scrape some data out of a tiny fragment.
It would be much better if you used a tolerant HTML. BeautifulSoup mentioned earlier is a good one but it's stagnating and I don't believe it's being maintained actively anymore.
A highly recommended parser for Python is lxml.
There was a long thread discussing parsing XHTML on one of our local mailing lists here which you might find useful too.
A:
You just need to put the block in parens and then use the {...} operators, e.g.:
(foo...){1,10}
Matches 1 to 10 instances of the thing inside of there. Given your example above, you can nest those:
((f..)(b..)){1,10}
|
Repeating regex groups
|
I'm trying to get some information from a web site. The information I want is in a table so I made a regex but I don't know the right way to simplify it.
The following are two parts of my regex that I would like to simplify:
<br>(.*)<br>(.*)<br>(.*)
<tr><td>(.+)r>(.+)r>(.+)r>(.+).+</td></tr> # This part should be repeated n times(n = 1 to 10)
I looked through the python documentation and I can't realize how to do it. Perhaps you can give me a hint.
Thank you,
mF.
|
[
"RegEx match open tags except XHTML self-contained tags\n\"Have you tried using an XML parser instead?\"\nEDIT: This is the way to go: Beautiful Soup \n",
"This is the wrong way to go unless you're trying to scrape some data out of a tiny fragment. \nIt would be much better if you used a tolerant HTML. BeautifulSoup mentioned earlier is a good one but it's stagnating and I don't believe it's being maintained actively anymore. \nA highly recommended parser for Python is lxml.\nThere was a long thread discussing parsing XHTML on one of our local mailing lists here which you might find useful too.\n",
"You just need to put the block in parens and then use the {...} operators, e.g.:\n(foo...){1,10}\n\nMatches 1 to 10 instances of the thing inside of there. Given your example above, you can nest those:\n((f..)(b..)){1,10}\n\n"
] |
[
3,
3,
1
] |
[] |
[] |
[
"html",
"python",
"regex"
] |
stackoverflow_0001989463_html_python_regex.txt
|
Q:
Problems assigning dict values to variables
I am struggling with something that must be one of those 'it is so obvious I am an idiot' problems. I have a csv file that I want to read in and use to create individual 'tables'. I have a variable (RID) that marks the beginning of a new 'table'.
I can't get my indicator variable (currentRow) to advance as I finish manipulating each line. You can see the print statements, currentRow remains equal to 0.
But if I use an assignment statement outside of the loop I can change the value of currentRow at will. The test assignment is to just understand where I am getting in the loop.
currentRow=0
test=0
theTables=defaultdict(list)
for line in csv.DictReader(open(r'c:\temp\testread.csv')):
newTableKey=line['CIK']+'-'+line['RDATE']+'-'+line['CDATE']+'-'+line['FNAME']+' '+line['TID']
if line['RID']=='1':
test+=1 # I can get here
if currentRow>int(line['RID']):
print 'got here'
theTables[oldTableKey]=theList
test+=1 # I cannot get here
theList=[]
theList.append(line)
currrentRow=int(line['RID'])
print currentRow #this value always prints at 0
print int(line['RID']) #this prints at the correct value
oldTableKey=newTableKey
A:
In the line:
currrentRow=int(line['RID'])
you have three rs in currrentRow. Reduce them to just two, and things should improve.
A:
One solution is to add the following somewhere inside the loop:
currentRow += 1
However, a better solution may be to use enumerable:
for currentRow, line in csv.DictReader..:
stuff
A:
It looks to me like it's possible that theTables[oldTableKey]=theList could be executed with an uninitialized value of oldTableKey.
|
Problems assigning dict values to variables
|
I am struggling with something that must be one of those 'it is so obvious I am an idiot' problems. I have a csv file that I want to read in and use to create individual 'tables'. I have a variable (RID) that marks the beginning of a new 'table'.
I can't get my indicator variable (currentRow) to advance as I finish manipulating each line. You can see the print statements, currentRow remains equal to 0.
But if I use an assignment statement outside of the loop I can change the value of currentRow at will. The test assignment is to just understand where I am getting in the loop.
currentRow=0
test=0
theTables=defaultdict(list)
for line in csv.DictReader(open(r'c:\temp\testread.csv')):
newTableKey=line['CIK']+'-'+line['RDATE']+'-'+line['CDATE']+'-'+line['FNAME']+' '+line['TID']
if line['RID']=='1':
test+=1 # I can get here
if currentRow>int(line['RID']):
print 'got here'
theTables[oldTableKey]=theList
test+=1 # I cannot get here
theList=[]
theList.append(line)
currrentRow=int(line['RID'])
print currentRow #this value always prints at 0
print int(line['RID']) #this prints at the correct value
oldTableKey=newTableKey
|
[
"In the line:\ncurrrentRow=int(line['RID'])\n\nyou have three rs in currrentRow. Reduce them to just two, and things should improve.\n",
"One solution is to add the following somewhere inside the loop:\ncurrentRow += 1\n\nHowever, a better solution may be to use enumerable:\nfor currentRow, line in csv.DictReader..:\n stuff\n\n",
"It looks to me like it's possible that theTables[oldTableKey]=theList could be executed with an uninitialized value of oldTableKey.\n"
] |
[
7,
0,
0
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0001989511_csv_python.txt
|
Q:
Python - reading checkboxes
I have a few checkboxes with common name and individual variables (ID).
How can I in python read them as list?
Now I'm using
checkbox= request.POST["common_name"]
It isn't work properly, checkbox variable store only the last checked box instead of any list or something.
A:
If you were using WebOB, request.POST.getall('common_name') would give you a list of all the POST variables with the name 'common_name'. See the WebOB docs for more.
But you aren't - you're using Django. See the QueryDict docs for several ways to do this - request.POST.getlist('common_name') is one way to do it.
A:
checkbox = request.POST.getlist("common_name")
A:
And if you want to select objects (say Contact objects) based upon the getlist list, you can do this:
selected_ids = request.POST.getlist('_selected_for_action')
object_list = Contact.objects.filter(pk__in=selected_ids)
|
Python - reading checkboxes
|
I have a few checkboxes with common name and individual variables (ID).
How can I in python read them as list?
Now I'm using
checkbox= request.POST["common_name"]
It isn't work properly, checkbox variable store only the last checked box instead of any list or something.
|
[
"If you were using WebOB, request.POST.getall('common_name') would give you a list of all the POST variables with the name 'common_name'. See the WebOB docs for more.\nBut you aren't - you're using Django. See the QueryDict docs for several ways to do this - request.POST.getlist('common_name') is one way to do it.\n",
"checkbox = request.POST.getlist(\"common_name\")\n\n",
"And if you want to select objects (say Contact objects) based upon the getlist list, you can do this:\nselected_ids = request.POST.getlist('_selected_for_action')\nobject_list = Contact.objects.filter(pk__in=selected_ids)\n\n"
] |
[
6,
2,
0
] |
[] |
[] |
[
"django",
"html",
"python"
] |
stackoverflow_0001979986_django_html_python.txt
|
Q:
Why does my code error? I would like to print the parameters of two functions
def c(*x,**y):
print x,y
def a(*x,**y):
print x
def b(*x1,**y1):
c(*(x+x1),**dict(y,**y1))
b()
a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')#error
A:
function a() is not returning a function; actually, it returns None. Therefore, the second set of parentheses is a call on a None object - and that's an error.
Did you intend to return a function, like doing something like return b?
A:
Replace b() with return b
Running this in Python 3.1:
def c(*x,**y):
print(x,y)
def a(*x,**y):
print(x)
def b(*x1,**y1):
c(*(x+x1),**dict(y,**y1))
return b
a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')
produces:
>>> ================================ RESTART ================================
>>>
(1, 2, 3)
(1, 2, 3, 4, 5, 6) {'a': 1, 'c': '222', 'b': 2, 'd': 'aaa'}
>>>
A:
a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')#error
I assume you got an "object is not callable" exception.
Maybe you need to return the function object 'b' (not the result of calling 'b')?
So instead of
b()
try
return b # without braces and with 'return'
A:
The error I get is:
Traceback (most recent call last):
File "./tmp.py", line 11, in
a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')
TypeError: 'NoneType' object is not
callable
I can fix this by changing your code to:
#!/usr/bin/python
def c(*x,**y):
print x,y
def a(*x,**y):
print x
def b(*x1,**y1):
c(*(x+x1),**dict(y,**y1))
return b
a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')
This produces the output:
(1, 2, 3)
(1, 2, 3, 4, 5, 6) {'a': 1,
'c': '222', 'b': 2, 'd': 'aaa'}
You haven't stated what you're trying to achieve though, so I don't know whether or not this is what you want.
|
Why does my code error? I would like to print the parameters of two functions
|
def c(*x,**y):
print x,y
def a(*x,**y):
print x
def b(*x1,**y1):
c(*(x+x1),**dict(y,**y1))
b()
a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')#error
|
[
"function a() is not returning a function; actually, it returns None. Therefore, the second set of parenthesis is a call on None object - and that's an error.\nDid you intend to return a function, like doing something like return b?\n",
"Replace b() with return b \nRunning this in Python 3.1:\ndef c(*x,**y):\n print(x,y)\ndef a(*x,**y):\n print(x)\n def b(*x1,**y1):\n c(*(x+x1),**dict(y,**y1))\n return b\n\na(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')\n\nproduces:\n>>> ================================ RESTART ================================\n>>> \n(1, 2, 3)\n(1, 2, 3, 4, 5, 6) {'a': 1, 'c': '222', 'b': 2, 'd': 'aaa'}\n>>> \n\n",
"a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')#error\n\nI assume you got \"object is not callable\" exception..\nMay be you need to return a function object 'b' (not result of the 'b') ?\nSo instead of\nb()\n\ntry\nreturn b # without braces and with 'return'\n\n",
"The error I get is:\n\nTraceback (most recent call last):\n File \"./tmp.py\", line 11, in \n a(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')\n TypeError: 'NoneType' object is not\n callable\n\nI can fix this by changing your code to:\n#!/usr/bin/python\n\ndef c(*x,**y):\n print x,y\ndef a(*x,**y):\n print x\n def b(*x1,**y1):\n c(*(x+x1),**dict(y,**y1))\n return b\n\na(1,2,3,a=1,b=2)(4,5,6,c='222',d='aaa')\n\nThis produces the output:\n\n(1, 2, 3)\n(1, 2, 3, 4, 5, 6) {'a': 1,\n 'c': '222', 'b': 2, 'd': 'aaa'}\n\nYou haven't stated what you're trying to achieve though, so I don't know whether or not this is what you want.\n"
] |
[
2,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001989843_python.txt
|
Q:
Call Method on Every Ancestor in Python
I have an object of class 'D' in Python, and I want to sequentially execute the 'run' method as defined by 'D' and each of its ancestors ('A', 'B' and 'C').
I'm able to accomplish this like this
class A(object):
def run_all(self):
        # I prefer to execute in reverse MRO order
for cls in reversed(self.__class__.__mro__):
if hasattr(cls, 'run'):
# This works
cls.run(self)
# This doesn't
#cls.__getattribute__(self, 'run')()
def run(self):
print "Running A"
class B(A):
def run(self):
print "Running B"
class C(A):
def run(self):
print "Running C"
class D(C, B):
def run(self):
print "Running D"
if __name__ == "__main__":
D().run_all()
Which results in
$ python test.py
Running A
Running B
Running C
Running D
However in practice I won't know the name of the method to be executed. But if I try this using __getattribute__() (see the commented line) it doesn't work:
$ python test.py
Running D
Running D
Running D
Running D
So my questions are:
Why isn't it working?
Is this even the best way to go about this?
A:
If you're OK with changing all the run implementations (and calling run instead of run_all in D), this works:
class A(object):
def run(self):
print "Running A"
class B(A):
def run(self):
super(B, self).run()
print "Running B"
class C(A):
def run(self):
super(C, self).run()
print "Running C"
class D(C, B):
def run(self):
super(D, self).run()
print "Running D"
if __name__ == "__main__":
D().run()
Note that I don't use super in the root class -- it "knows" there's no further superclass to go up to (object does not define a run method). Unfortunately, in Python 2, this is inevitably verbose (and not well suited to implementing via a decorator, either).
Your check on hasattr is quite fragile, if I understand your purposes correctly -- it will find that a class "has" the attribute if it defines or inherits it. So if you have an intermediate class that doesn't override run but does occur on the __mro__, the version of run it inherits gets called twice in your approach. E.g., consider:
class A(object):
def run_all(self):
for cls in reversed(self.__class__.__mro__):
if hasattr(cls, 'run'):
getattr(cls, 'run')(self)
def run(self):
print "Running A"
class B(A): pass
class C(A):
def run(self):
print "Running C"
class D(C, B): pass
if __name__ == "__main__":
D().run_all()
this prints
Running A
Running A
Running C
Running C
with two "stutters" for versions of run that B and D inherit without overriding (from A and C respectively). Assuming I'm right that this is not the effect you want, if you're keen to avoid super you could try changing run_all to:
def run_all(self):
for cls in reversed(self.__class__.__mro__):
meth = cls.__dict__.get('run')
if meth is not None: meth(self)
which, substituted into my latest example with just two distinct defs for run in A and C, makes the example print:
Running A
Running C
which I suspect may be closer to what you want.
One more side point: don't repeat the work -- hasattr guarding getattr, or an in test guarding dict access -- both the check in the guard, and the guarded accessor, must repeat exactly the same work internally, to no good purpose. Rather, use a third argument of None to a single getattr call (or the get method of the dict): this means that if the method is absent you'll retrieve a None value, and then you can guard the call against that occurrence. This is exactly the reason dicts have a get method and getattr has a third optional "default" argument: to make it easy to apply DRY, "don't repeat yourself", a very important maxim of good programming!-)
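As a self-contained sketch of that idiom (the attribute and key names here are made up purely for illustration):
class Conn(object):
    def on_close(self):
        print 'closing'

obj = Conn()
handler = getattr(obj, 'on_close', None)  # one lookup; None if absent
if handler is not None:
    handler()                             # prints 'closing'

mapping = {'retries': 3}
print mapping.get('timeout', 30)          # 30 -- the default, no KeyError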
A:
You should not use __getattribute__ method..
just do the following:
getattr(cls, 'run')(self)
A:
Why don't you simply use super? Although some consider it harmful, it was designed with exactly this kind of scenario in mind, and I would use it without any hesitation.
From Python documentation:
This is useful for accessing inherited
methods that have been overridden in a
class. The search order is same as
that used by getattr() except that the
type itself is skipped. [...] This
makes it possible to implement
“diamond diagrams” where multiple base
classes implement the same method.
Update: In your case, it would become something like this:
class A(object):
def run(self):
print "Running A"
class B(A):
def run(self):
super(B, self).run()
print "Running B"
class C(A):
def run(self):
super(C, self).run()
print "Running C"
class D(C, B):
def run(self):
super(D, self).run()
print "Running D"
if __name__ == "__main__":
D().run()
|
Call Method on Every Ancestor in Python
|
I have an object of class 'D' in Python, and I want to sequentially execute the 'run' method as defined by 'D' and each of its ancestors ('A', 'B' and 'C').
I'm able to accomplish this like this
class A(object):
def run_all(self):
        # I prefer to execute in reverse MRO order
for cls in reversed(self.__class__.__mro__):
if hasattr(cls, 'run'):
# This works
cls.run(self)
# This doesn't
#cls.__getattribute__(self, 'run')()
def run(self):
print "Running A"
class B(A):
def run(self):
print "Running B"
class C(A):
def run(self):
print "Running C"
class D(C, B):
def run(self):
print "Running D"
if __name__ == "__main__":
D().run_all()
Which results in
$ python test.py
Running A
Running B
Running C
Running D
However in practice I won't know the name of the method to be executed. But if I try this using __getattribute__() (see the commented line) it doesn't work:
$ python test.py
Running D
Running D
Running D
Running D
So my questions are:
Why isn't it working?
Is this even the best way to go about this?
|
[
"If you're OK with changing all the run implementations (and calling run instead of run_all in D), this works:\nclass A(object):\n def run(self):\n print \"Running A\"\n\nclass B(A):\n def run(self):\n super(B, self).run()\n print \"Running B\"\n\nclass C(A):\n def run(self):\n super(C, self).run()\n print \"Running C\"\n\nclass D(C, B):\n def run(self):\n super(D, self).run()\n print \"Running D\"\n\nif __name__ == \"__main__\":\n D().run()\n\nNote that I don't use super in the root class -- it \"knows\" there's no further superclass to go up to (object does not define a run method). Unfortunately, in Python 2, this is inevitably verbose (and not well suited to implementing via a decorator, either).\nYour check on hasattr is quite fragile, if I understand your purposes correctly -- it will find that a class \"has\" the attribute if it defines or inherits it. So if you have an intermediate class that doesn't override run but does occur on the __mro__, the version of run it inherits gets called twice in your approach. E.g., consider:\nclass A(object):\n def run_all(self):\n for cls in reversed(self.__class__.__mro__):\n if hasattr(cls, 'run'):\n getattr(cls, 'run')(self)\n def run(self):\n print \"Running A\"\nclass B(A): pass\nclass C(A):\n def run(self):\n print \"Running C\"\nclass D(C, B): pass\n\nif __name__ == \"__main__\":\n D().run_all()\n\nthis prints\nRunning A\nRunning A\nRunning C\nRunning C\n\nwith two \"stutters\" for versions of run that B and D inherit without overriding (from A and C respectively). Assuming I'm right that this is not the effect you want, if you're keen to avoid super you could try changing run_all to:\ndef run_all(self):\n for cls in reversed(self.__class__.__mro__):\n meth = cls.__dict__.get('run')\n if meth is not None: meth(self)\n\nwhich, substituted into my latest example with just two distinct defs for run in A and C, makes the example print:\nRunning A\nRunning C\n\nwhich I suspect may be closer to what you want.\nOne more side point: don't repeat the work -- hasattr guarding getattr, or an in test guarding dict access -- both the check in the guard, and the guarded accessor, must repeat exactly the same work internally, to no good purpose. Rather, use a third argument of None to a single getattr call (or the get method of the dict): this means that if the method is absent you'll retrieve a None value, and then you can guard the call against that occurrence. This is exactly the reason dicts have a get method and getattr has a third optional \"default\" argument: to make it easy to apply DRY, \"don't repeat yourself\", a very important maxim of good programming!-)\n",
"You should not use __getattribute__ method..\njust do the following:\ngetattr(cls, 'run')(self)\n\n",
"Why don't you simply use super? Although some consider it harmful, it was designed with exactly this kind of scenario in mind, and I would use it without any hesitation.\nFrom Python documentation:\n\nThis is useful for accessing inherited\n methods that have been overridden in a\n class. The search order is same as\n that used by getattr() except that the\n type itself is skipped. [...] This\n makes it possible to implement\n “diamond diagrams” where multiple base\n classes implement the same method.\n\nUpdate: In your case, it would become something like this:\nclass A(object):\n\n def run(self):\n print \"Running A\"\n\nclass B(A):\n def run(self):\n super(B, self).run()\n print \"Running B\"\n\nclass C(A):\n def run(self):\n super(C, self).run()\n print \"Running C\"\n\nclass D(C, B):\n def run(self):\n super(D, self).run()\n print \"Running D\"\n\nif __name__ == \"__main__\":\n D().run()\n\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"datamodel",
"python"
] |
stackoverflow_0001989863_datamodel_python.txt
|
Q:
How to access fields in a struct imported from a .mat file using loadmat in Python?
Following this question which asks (and answers) how to read .mat files that were created in Matlab using Scipy, I want to know how to access the fields in the imported structs.
I have a file in Matlab from which I can import a struct:
>> load bla % imports a struct called G
>> G
G =
Inp: [40x40x2016 uint8]
Tgt: [8x2016 double]
Ltr: [1x2016 double]
Relevant: [1 2 3 4 5 6 7 8]
Now I want to do the same in Python:
x = scipy.io.loadmat('bla.mat')
>>> x
{'__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, Platform: PCWIN, Created on: Wed Jun 07 21:17:24 2006', 'G': array([[<scipy.io.matlab.mio5.mat_struct object at 0x0191F230>]], dtype=object), '__globals__': []}
>>> x['G']
array([[<scipy.io.matlab.mio5.mat_struct object at 0x0191F230>]], dtype=object)
>>> G = x['G']
>>> G
array([[<scipy.io.matlab.mio5.mat_struct object at 0x0191F230>]], dtype=object)
The question is, how can I access the members of the struct G: Inp, Tgt, Ltr and Relevant, the way I can in Matlab?
A:
First, I'd recommend to upgrade to Scipy svn if possible - there has been active development of the matlab io with some really dramatic speed ups recently.
Also as mentioned it might be worth trying with struct_as_record=True. But otherwise you should be able to get it out by playing around interactively.
Your G is an array of mio struct objects - you can check G.shape for example. In this case I think G = x['G'][0,0] should give the object you want. Then you should be able to access G.Inp etc.
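A short sketch of the whole round trip, assuming the same bla.mat file (struct_as_record=False keeps the attribute-style mat_struct objects, and squeeze_me=True drops the 1x1 object-array wrapping so no [0,0] indexing is needed):
import scipy.io

x = scipy.io.loadmat('bla.mat', struct_as_record=False, squeeze_me=True)
G = x['G']
print G.Inp.shape  # (40, 40, 2016)
print G.Relevant   # [1 2 3 4 5 6 7 8]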
|
How to access fields in a struct imported from a .mat file using loadmat in Python?
|
Following this question which asks (and answers) how to read .mat files that were created in Matlab using Scipy, I want to know how to access the fields in the imported structs.
I have a file in Matlab from which I can import a struct:
>> load bla % imports a struct called G
>> G
G =
Inp: [40x40x2016 uint8]
Tgt: [8x2016 double]
Ltr: [1x2016 double]
Relevant: [1 2 3 4 5 6 7 8]
Now I want to do the same in Python:
x = scipy.io.loadmat('bla.mat')
>>> x
{'__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, Platform: PCWIN, Created on: Wed Jun 07 21:17:24 2006', 'G': array([[<scipy.io.matlab.mio5.mat_struct object at 0x0191F230>]], dtype=object), '__globals__': []}
>>> x['G']
array([[<scipy.io.matlab.mio5.mat_struct object at 0x0191F230>]], dtype=object)
>>> G = x['G']
>>> G
array([[<scipy.io.matlab.mio5.mat_struct object at 0x0191F230>]], dtype=object)
The question is, how can I access the members of the struct G: Inp, Tgt, Ltr and Relevant, the way I can in Matlab?
|
[
"First, I'd recommend to upgrade to Scipy svn if possible - there has been active development of the matlab io with some really dramatic speed ups recently.\nAlso as mentioned it might be worth trying with struct_as_record=True. But otherwise you should be able to get it out by playing around interactively.\nYour G is an array of mio struct objects - you can check G.shape for example. In this case I think G = x['G'][0,0] should give the object you want. Then you should be able to access G.Inp etc. \n"
] |
[
6
] |
[] |
[] |
[
"file_io",
"mat_file",
"matlab",
"python",
"scipy"
] |
stackoverflow_0001984714_file_io_mat_file_matlab_python_scipy.txt
|
Q:
Python lists with STL like interface
I have to port a C++ STL application to Python. I am a Python newbie, but have been programming for over a decade. I have a great deal of experience with the STL, and find that it keeps me hooked to using C++. I have been searching for the following items on Google these past several days:
Python STL (in hope of leveraging my years of STL experience)
Python linked lists
Python advanced list usage
Python list optimization
Python ordered sets
And have found posts about the above topics, tutorials on Python lists that are decidedly NOT advanced, or dead ends. I am really surprised at my lack of success; I think I am just burned out from overworking and entering bad search terms!
(MY QUESTION) Can I get a Python STL wrapper, or an interface to Python lists that works like the STL? If not can someone point me to a truly advanced tutorial or paper on managing very large sorted collections of non trivial objects?
P.S. I can easily implement workarounds for one or two uses, but if management wants to port more code, I want to be ready to replace any STL code I find with equivalent Python code immediately. And YES I HAVE MEASURED AND DO NEED TO HAVE TOTALLY OPTIMAL CODE! I CANT JUST DO REDUNDANT SORTS AND SEARCHES!
(ADDENDUM) Thanks for the replies, I have checked out some of the references and am pleased. In response to some of the comments here:
1 - It is being ported to Python because management says so; I would just as soon leave it alone - if it ain't broke, why fix it?
2 - Advanced list usage with non trivial objects, what I mean by that is: Many different ways to order and compare objects, not by one cmp method. I want to splice, sort, merge, search, insert, erase, and combine the lists extensively. I want lists of list iterators, I want to avoid copying.
3 - I now know that built-in lists are actually arrays, and I should be looking for a different Python class. I think this was the root of my confusion.
4 - Of course I am learning to do things in the Python way, but I also have deadlines. The STL code I am porting is working right, I would like to change it as little as possible, because that would introduce bugs.
Thanks to everyone for their input, I really appreciate it.
A:
If I were you I would take the time to learn how to properly use the various data structures available in Python instead of looking for things that are similar to what you know from C++.
It's not like you're looking for something fancy, just working with some data structures. In that case I would refer you to Python's documentation on the subject.
Doing this the 'Python' way would help you and more importantly future maintainers who will wonder why you try to program C++ in Python.
Just to whet your appetite, there's also no reason to prefer STL's style to Python (and for the record, I'm also a C++ programmer who knows STL thoroughly); consider the most trivial example of constructing a list and traversing it:
The Pythonic way:
mylist = [1, 2, 3, 4]
for value in mylist:
# playaround with value
The STL way (I made this up, to resemble STL) in Python:
mylist = [1, 2, 3, 4]
mylistiter = mylist.begin()
while mylistiter != mylist.end():
value = mylistiter.item()
mylistiter.next()
A:
Python's "lists" are not linked lists -- they're like Java ArrayLists or C++'s std::vectors, i.e., in lower-level terms, a resizable compact array of pointers.
A good "advanced tutorial" on such subjects is Hettinger's Core Python containers: under the hood presentation (the video at the URL is of the presentation at an Italian conference, but it's in English; another, shorter presentation of essentially the same talk is here).
So the performance characteristics of Python lists are essentially those of C++'s std::vector: Python's .append, like C++'s push_back, is O(1), but insertion or removal "in the middle" is O(N). Consequently, keeping a list sorted (as can be easily done with the help of functions in Python's standard library module bisect) is costly (if items arrive and/or depart randomly, each insertion and removal is O(N), just like similarly maintaining order in an std::vector would be). For some purposes, such as priority queues, you may get away with a "heap queue", also easy to maintain with the help of functions in Python's standard library module heapq -- but of course that doesn't afford the same range of uses as a completely sorted list (or vector) would.
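As a concrete sketch of the bisect approach just mentioned:
import bisect

items = []
for value in (5, 1, 4, 2, 3):
    bisect.insort(items, value)    # O(N) insert, keeps the list sorted
print items                        # [1, 2, 3, 4, 5]
print bisect.bisect_left(items, 3) # 2 -- O(log N) search for the position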
So for purposes for which in C++ you'd use a std::set (and rely on its being ordered, i.e., a hashset wouldn't do -- Python's sets are hash-based, not ordered) you may be better off avoiding Python builtin containers in favor of something like this module (if you need to keep things pure-Python), or this one (which offers AVL trees, not RB ones, but is coded as a C-implemented Python extension and so may offer better performance) if C-coded extensions are OK.
If you do end up using your own module (be it pure Python, or C-coded), you may, if you wish, give it an STL-like veneer/interface (with .begin, .end, iterator objects that are advanced by incrementing rather than, as per normal Python behavior, by calling their next methods, ...), although it will never perform as well as "going with the grain" of the language would (the for statement is optimized to use normal Python iterators, i.e., one with next methods, and it will be faster than wrapping a somewhat awkward while around non-Python-standard, STL-like iterators).
To give an STL-like veneer to any Python built-in container, you'll incur substantial wrapping overhead, so the performance hit may be considerable. If you, as you say, "DO NEED TO HAVE TOTALLY OPTIMAL CODE", using such a veneer just for "syntax convenience" purposes would therefore seem to be a very bad choice.
Boost Python, the Python extension package that wraps the powerful C++ Boost library, might perhaps serve your purposes best.
A:
For linked-list-like operations people usually use collections.deque.
What operations do you need to perform fast? Bisection? Insertion?
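In the meantime, a quick sketch of what deque is good at -- O(1) operations at both ends, where list.insert(0, ...) would be O(N):
from collections import deque

d = deque([1, 2, 3])
d.appendleft(0)
d.append(4)
print d            # deque([0, 1, 2, 3, 4])
print d.popleft()  # 0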
A:
I would say that your issues go beyond just STL porting. Since the list, dict, and set data structures, which are bolted on to C++ via the STL, are native to core Python, then their usage is incorporated into common Python code idioms. If you want to give Google another shot, try looking for references for "Python for C++ Programmers". One of your hits will be this presentation by Alex Martelli. It's a little dated, from way back in ought-three, but there is a side-by-side comparison of some basic Python code that reads through a text file, and how it would look using STL.
From there, I would recommend that you read up on these Python features:
iterators
generators
list and generator comprehensions
And these builtin functions:
zip
map
Once you are familiar with these, then you will be able to construct your own translation/mapping between STL usage and Python builtin data structures.
As others have said, if you are looking for a "plug-and-chug" formula to convert STL C++ code to Python, you will just end up with bad Python. Such a brute force approach will never result in the power, elegance, and brevity of a single-line list comprehension. I had this very experience when introducing Python to one of our managers, who was familiar with Java and C++ iterators. When I showed him this code:
numParams = 1000
paramRequests = [ ("EqptEmulator/ProcChamberI/Sensors",
"ChamberIData%d"%(i%250)) for i in range(numParams) ]
record.internalArray = [ParameterRequest(*pr) for pr in paramRequests]
and I explained that these replaced this code (or something like it, this might be a mishmash of C++ and Java APIs, sorry):
std::vector<ParameterRequest> prs = new std::vector<ParameterRequest>();
for (int i = 0; i<1000; ++i) {
string idstr;
strstream sstr(idstr);
sstr << "ChamberIData" << (i%250);
prs.add(new ParameterRequest("EqptEmulator/ProcChamberI/Sensors", idstr));
}
record.internalArray = new ParameterRequest[prs.size];
prs.toArray(record.internalArray);
One of your instincts from working with C++ will be a reluctance to create new lists from old, but rather to update or filter a list in place. We even see this on many forums from Python developers asking about how to modify a list while iterating over it. In Python, you are much better off building a new list from the old with a list comprehension.
allItems = [... some list of items, perhaps from a database query ...]
validItems = [it for it in allItems if it.isValid()]
As opposed to:
validItems = []
for it in allItems:
if it.isValid():
validItems.add(it)
or worse:
# get list of indexes of items to be removed
removeIndexes = []
for i in range(len(allItems)):
if not allItems[i].isValid():
removeIndexes.add(i)
# don't forget to remove items in descending order, or later indexes
# will be invalidated by earlier removals
sort(removeIndexes,reverse=True)
# copy list
validItems = allItems[:]
# now remove the items from allItems
for idx in removeIndexes:
    del validItems[idx]
A:
Python STL (in hope of leveraging my years of STL experience) - Start with the collections ABC's to learn what Python has. http://docs.python.org/library/collections.html
Python linked lists. Python lists have all the features you would want from a linked list.
Python advanced list usage. What does this mean?
Python list optimization. What does this mean?
Python ordered sets. You have several choices here; you could invent your own "ordered set" as a list that discards duplicates. You can subclass the heapq and add methods that discard duplicates: http://docs.python.org/library/heapq.html.
In many cases, however, the cost of maintaining an ordered set is actually excessive because it must only be ordered once at the end of the algorithm. In other cases, the "ordered set" really is a heapq -- you never needed the set-like features and only needed the ordering.
Non-Trivial.
(I'm guessing at what you meant by "non-trivial"). All Python objects are equivalent. There's no "trivial" vs. "non-trivial" objects. They're all first-class objects and can all have "non-trivial" complexity without any real work. This is not C++ where there are primitive (non-object) values floating around. Everything's an object in Python.
Management Expectations.
For the most part the C++ brain-cramping doesn't exist in Python. Use the obvious Python classes the obvious way and you'll have much less code. The reduction in code volume is the big win. Often, the management reason for converting C++ to Python is to get rid of the C++ complexity.
Python code will be much simpler, making it much more reliable and much easier to maintain.
While it's generally true that Python is slower than C++, it's also true that picking the right algorithm and data structure can have dramatic improvements on performance. In one benchmark, someone found that Python was actually faster than C because the C program had such a poorly chosen data structure.
It's possible that your C++ has a really poor algorithm and you will see comparable performance from Python.
It's also possible that your C++ program is I/O bound, or has other limitations that will leave the Python running at a comparable speed.
A:
The design of Python is quite intentionally "you can use just a few data structures (arrays and hash tables) for whatever you want to do, and if that isn't fast enough there's always C".
Python's standard library doesn't have a sorted-list data structure like std::set. You can download a red/black tree implementation or roll your own. (For small data sets, just using a list and periodically sorting it is a totally normal thing to do in Python.)
Rolling your own linked list is very easy.
|
Python lists with STL like interface
|
I have to port a C++ STL application to Python. I am a Python newbie, but have been programming for over a decade. I have a great deal of experience with the STL, and find that it keeps me hooked to using C++. I have been searching for the following items on Google these past several days:
Python STL (in hope of leveraging my years of STL experience)
Python linked lists
Python advanced list usage
Python list optimization
Python ordered sets
And have found posts about the above topics, tutorials on Python lists that are decidedly NOT advanced, or dead ends. I am really surprised at my lack of success; I think I am just burned out from overworking and entering bad search terms!
(MY QUESTION) Can I get a Python STL wrapper, or an interface to Python lists that works like the STL? If not can someone point me to a truly advanced tutorial or paper on managing very large sorted collections of non trivial objects?
P.S. I can easily implement workarounds for one or two uses, but if management wants to port more code, I want to be ready to replace any STL code I find with equivalent Python code immediately. And YES I HAVE MEASURED AND DO NEED TO HAVE TOTALLY OPTIMAL CODE! I CANT JUST DO REDUNDANT SORTS AND SEARCHES!
(ADDENDUM) Thanks for the replies, I have checked out some of the references and am pleased. In response to some of the comments here:
1 - It is being ported to Python because management says so; I would just as soon leave it alone - if it ain't broke, why fix it?
2 - Advanced list usage with non trivial objects, what I mean by that is: Many different ways to order and compare objects, not by one cmp method. I want to splice, sort, merge, search, insert, erase, and combine the lists extensively. I want lists of list iterators, I want to avoid copying.
3 - I now know that built-in lists are actually arrays, and I should be looking for a different Python class. I think this was the root of my confusion.
4 - Of course I am learning to do things in the Python way, but I also have deadlines. The STL code I am porting is working right, I would like to change it as little as possible, because that would introduce bugs.
Thanks to everyone for their input, I really appreciate it.
|
[
"If I were you I would take the time to learn how to properly use the various data structures available in Python instead of looking for things that are similar to what you know from C++. \nIt's not like you're looking for something fancy, just working with some data structures. In that case I would refer you to Python's documentation on the subject. \nDoing this the 'Python' way would help you and more importantly future maintainers who will wonder why you try to program C++ in Python.\nJust to whet your appetite, there's also no reason to prefer STL's style to Python (and for the record, I'm also a C++ programmer who knows STL throughly), consider the most trivial example of constructing a list and traversing it:\nThe Pythonic way:\nmylist = [1, 2, 3, 4]\n\nfor value in mylist:\n # playaround with value\n\nThe STL way (I made this up, to resemble STL) in Python:\nmylist = [1, 2, 3, 4]\nmylistiter = mylist.begin()\n\nwhile mylistiter != mylist.end():\n value = mylistiter.item()\n mylistiter.next()\n\n",
"Python's \"lists\" are not linked lists -- they're like Java ArrayLists or C++'s std::vectors, i.e., in lower-level terms, a resizable compact array of pointers.\nA good \"advanced tutorial\" on such subjects is Hettinger's Core Python containers: under the hood presentation (the video at the URL is of the presentation at an Italian conference, but it's in English; another, shorter presentation of essentially the same talk is here).\nSo the performance characteristics of Python lists are essentially those of C++'s std::vector: Python's .append, like C++'s push_back, is O(1), but insertion or removal \"in the middle\" is O(N). Consequently, keeping a list sorted (as can be easily done with the help of functions in Python's standard library module bisect) is costly (if items arrive and/or depart randomly, each insertion and removal is O(N), just like similarly maintaining order in an std::vector would be. For some purposes, such as priority queues, you may get away with a \"heap queue\", also easy to maintain with the help of functions in Python's standard library module heapq -- but of course that doesn't afford the same range of uses as a completely sorted list (or vector) would.\nSo for purposes for which in C++ you'd use a std::set (and rely on its being ordered, i.e., a hashset wouldn't do -- Python's sets are hash-based, not ordered) you may be better off avoiding Python builtin containers in favor of something like this module (if you need to keep things pure-Python), or this one (which offers AVL trees, not RB ones, but is coded as a C-implemented Python extension and so may offer better performance) if C-coded extensions are OK.\nIf you do end up using your own module (be it pure Python, or C-coded), you may, if you wish, give it an STL-like veneer/interface (with .begin, .end, iterator objects that are advanced by incrementing rather than, as per normal Python behavior, by calling their next methods, ...), although it will never perform as well as \"going with the grain\" of the language would (the for statement is optimized to use normal Python iterators, i.e., one with next methods, and it will be faster than wrapping a somewhat awkward while around non-Python-standard, STL-like iterators).\nTo give an STL-like veneer to any Python built-in container, you'll incur substantial wrapping overhead, so the performance hit may be considerable. If you, as you say, \"DO NEED TO HAVE TOTALLY OPTIMAL CODE\", using such a veneer just for \"syntax convenience\" purposes would therefore seem to be a very bad choice.\nBoost Python, the Python extension package that wraps the powerful C++ Boost library, might perhaps serve your purposes best.\n",
"For linked-list-like operations people usually use collections.deque.\nWhat operations do you need to perform fast? Bisection? Insertion?\n",
"I would say that your issues go beyond just STL porting. Since the list, dict, and set data structures, which are bolted on to C++ via the STL, are native to core Python, then their usage is incorporated into common Python code idioms. If you want to give Google another shot, try looking for references for \"Python for C++ Programmers\". One of your hits will be this presentation by Alex Martelli. It's a little dated, from way back in ought-three, but there is a side-by-side comparison of some basic Python code that reads through a text file, and how it would look using STL.\nFrom there, I would recommend that you read up on these Python features:\n\niterators\ngenerators\nlist and generator comprehensions\n\nAnd these builtin functions:\n\nzip\nmap\n\nOnce you are familiar with these, then you will be able to construct your own translation/mapping between STL usage and Python builtin data structures.\nAs others have said, if you are looking for a \"plug-and-chug\" formula to convert STL C++ code to Python, you will just end up with bad Python. Such a brute force approach will never result in the power, elegance, and brevity of a single-line list comprehension. (I had this very experience when introducing Python to one of our managers, who was familiar with Java and C++ iterators. When I showed him this code:\nnumParams = 1000\nparamRequests = [ (\"EqptEmulator/ProcChamberI/Sensors\", \n \"ChamberIData%d\"%(i%250)) for i in range(numParams) ]\nrecord.internalArray = [ParameterRequest(*pr) for pr in paramRequests]\n\nand I explained that these replaced this code (or something like it, this might be a mishmash of C++ and Java APIs, sorry):\nstd::vector<ParameterRequest> prs = new std::vector<ParameterRequest>();\nfor (int i = 0; i<1000; ++i) {\n string idstr;\n strstream sstr(idstr);\n sstr << \"ChamberIData\" << (i%250);\n prs.add(new ParameterRequest(\"EqptEmulator/ProcChamberI/Sensors\", idstr));\n}\nrecord.internalArray = new ParameterRequest[prs.size];\nprs.toArray(record.internalArray);\n\nOne of your instincts from working with C++ will be a reluctance to create new lists from old, but rather to update or filter a list in place. We even see this on many forums from Python developers asking about how to modify a list while iterating over it. In Python, you are much better off building a new list from the old with a list comprehension.\nallItems = [... some list of items, perhaps from a database query ...]\nvalidItems = [it for it in allItems if it.isValid()]\n\nAs opposed to:\nvalidItems = []\nfor it in allItems:\n if it.isValid():\n validItems.add(it)\n\nor worse:\n# get list of indexes of items to be removed\nremoveIndexes = []\nfor i in range(len(allItems)):\n if not allItems[i].isValid():\n removeIndexes.add(i)\n\n# don't forget to remove items in descending order, or later indexes\n# will be invalidated by earlier removals\nsort(removeIndexes,reverse=True)\n\n# copy list\nvalidItems = allItems[:]\n\n# now remove the items from allItems\nfor idx in removeIndexes:\n del validItems[i]\n\n",
"Python STL (in hope of leveraging my years of STL experience) - Start with the collections ABC's to learn what Python has. http://docs.python.org/library/collections.html\nPython linked lists. Python lists have all the features you would want from a linked list. \nPython advanced list usage. What does this mean?\nPython list optimization. What does this mean?\nPython ordered sets. You have several choices here; you could invent your own \"ordered set\" as a list that discards duplicates. You can subclass the heapq and add methods that discard duplicates: http://docs.python.org/library/heapq.html.\nIn many cases, however, the cost of maintaing an ordered set is actually excessive because it must only be ordered once at the end of the algorithm. In other cases, the \"ordered set\" really is a heapq -- you never needed the set-like features and only needed the ordering.\nNon-Trivial.\n(I'm guessing at what you meant by \"non-trivial\"). All Python objects are equivalent. There's no \"trivial\" vs. \"non-trivial\" objects. They're all first-class objects and can all have \"non-trivial\" complexity without any real work. This is not C++ where there are primitive (non-object) values floating around. Everything's an object in Python.\nManagement Expectations.\nFor the most part the C++ brain-cramping doesn't exist in Python. Use the obvious Python classes the obvious way and you'll have much less code. The reduction in code volume is the big win. Often, the management reason for converting C++ to Python is to get rid of the C++ complexity.\nPython code will be much simpler, making it much more reliable and much easier to maintain. \nWhile it's generally true that Python is slower than C++, it's also true that picking the right algorithm and data structure can have dramatic improvements on performance. In one benchmark, someone found that Python was actually faster than C because the C program had such a poorly chosen data structure. \nIt's possible that your C++ has a really poor algorithm and you will see comparable performance from Python.\nIt's also possible that your C++ program is I/O bound, or has other limitations that will leave the Python running at a comparable speed.\n",
"The design of Python is quite intentionally \"you can use just a few data structures (arrays and hash tables) for whatever you want to do, and if that isn't fast enough there's always C\".\nPython's standard library doesn't have a sorted-list data structure like std::set. You can download a red/black tree implementation or roll your own. (For small data sets, just using a list and periodically sorting it is a totally normal thing to do in Python.)\nRolling your own linked list is very easy.\n"
] |
[
13,
13,
2,
2,
1,
1
] |
[] |
[] |
[
"c++",
"python",
"stl"
] |
stackoverflow_0001988484_c++_python_stl.txt
|
Q:
is '__all__' only for 'from some import *'
a.py
__all__=['b','c']
a='aaa'
b='bbb'
def c():
print 'ccc'
def d():
print 'dddd'
b.py
from a import a
print a
from a import *
print a
print d#error
Are there any other uses?
Thanks.
A:
Yes, it also changes what help(a) documents.
A:
No, the purpose of __all__ is just to describe exactly what should be imported when you do from foo import *.
A:
No other uses, except limiting the damage caused by the horrible from ... import * usage.
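A sketch of the effect, using the a.py module shown above:
import a
print a.a      # 'aaa' -- a plain import is not affected by __all__

from a import *
print b        # 'bbb' -- listed in __all__
c()            # prints 'ccc' -- also listed
print d        # NameError: d was not exported by the star import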
|
is '__all__' only for 'from some import *'
|
a.py
__all__=['b','c']
a='aaa'
b='bbb'
def c():
print 'ccc'
def d():
print 'dddd'
b.py
from a import a
print a
from a import *
print a
print d#error
Are there any other uses?
Thanks.
|
[
"Yes, it also changes what help(a) documents. \n",
"No, the purpose of __all__ is just to describe exactly what should be imported when you do from foo import *.\n",
"No other uses, except limiting the damage caused by the horrible from ... import * usage.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001990232_python.txt
|
Q:
What does this code from types.py do?
try:
raise TypeError
except TypeError:
try:
tb = sys.exc_info()[2]
TracebackType = type(tb)
FrameType = type(tb.tb_frame)
except AttributeError:
# In the restricted environment, exc_info returns (None, None,
# None) Then, tb.tb_frame gives an attribute error
pass
tb = None; del tb
I can't understand this code at all. What is its purpose?
A:
It's a trick to get a traceback object and a frame object so that TracebackType and FrameType can be assigned their types. It simply raises an exception so it can catch the exception, then get the traceback and frame from sys.exc_info.
A:
The code tries to find out the types used for the tracebacks returned by sys.exc_info() and assigns these types to the variables TracebackType and FrameType.
To do so it first needs to raise an exception and catch it (the TypeError), so that sys.exc_info() can return a traceback for this exception. Then this traceback gets inspected to determine the types. In the end the local tb variable is deleted to not keep unnecessary circular references around (see the warning in the documentation of sys.exc_info()).
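A minimal sketch of the same trick in isolation:
import sys

try:
    raise TypeError
except TypeError:
    tb = sys.exc_info()[2]
    print type(tb)           # <type 'traceback'>
    print type(tb.tb_frame)  # <type 'frame'>
    tb = None; del tb        # drop the circular reference mentioned above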
A:
It appears as though this code is used to get the call stack. If you research the exc_info function from http://pyref.infogami.com/sys.exc_info you'll find that the function returns a tuple of 3 values where the third is a Traceback object. This object contains the call stack information which is then displayed.
|
What does this code from types.py do?
|
try:
raise TypeError
except TypeError:
try:
tb = sys.exc_info()[2]
TracebackType = type(tb)
FrameType = type(tb.tb_frame)
except AttributeError:
# In the restricted environment, exc_info returns (None, None,
# None) Then, tb.tb_frame gives an attribute error
pass
tb = None; del tb
I can't understand this code at all. What is its purpose?
|
[
"It's a trick to get a traceback object and a frame object so that TracebackType and FrameType can be assigned their types. It simply raises an exception so it can catch the exception, then get the traceback and frame from sys.exc_info.\n",
"The code tries to find out the types used for the tracebacks returned by sys.exc_info() and assigned these types to the variables TracebackType and FrameType.\nTo do so it first needs to raise an exception and catch it (the TypeError), so that sys.exc_info() can return a traceback for this exception. Then this traceback gets inspected to determine the types. In the end the local tb variable is deleted to not keep unnecessary circular references around (see the warning in the documentation of sys.exc_info()).\n",
"It appears as though this code is used to get the call stack. If you research the exc_info function from http://pyref.infogami.com/sys.exc_info you'll find that the function returns a tuple of 3 values where the third is a Traceback object. This object contains the call stack information which is then displayed.\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001990339_python.txt
|
Q:
How to get Python error as C string?
I have a C++ application that embeds the Python interpreter. It calls PyImport_Import to load scripts. I need a way of getting any syntax errors as C strings. For example, if the script uses an undefined function, I would like an error saying something like 'Function xxx is undefined.' How would I do this?
A:
PyErr_Occurred lets your C code check whether an exception has been raised, and, if so, what type; then, PyErr_Fetch lets you fetch all details (as Python objects), and you can get the string representation of the error instance with the usual high-level call PyObject_Str (just the same as except Exception, e: ...str(e)... in Python code).
|
How to get Python error as C string?
|
I have a C++ application that embeds the Python interpreter. It calls PyImport_Import to load scripts. I need a way of getting any syntax errors as C strings. For example, if the script uses an undefined function, I would like an error saying something like 'Function xxx is undefined.' How would I do this?
|
[
"PyErr_Occurred lets your C code check whether an exception has been raised, and, if so, what type; then, PyErr_Fetch lets you fetch all details (as Python objects), and you can get the string representation of the error instance with the usual high-level call PyObject_Str (just the same as except Exception, e: ...str(e)... in Python code).\n"
] |
[
1
] |
[] |
[] |
[
"python",
"string"
] |
stackoverflow_0001990635_python_string.txt
|
Q:
How to develop a Python module/package without having to restart the interpreter after every change?
I am developing a Python package using a text editor and IPython. Each time I change any of the module code I have to restart the interpreter to test this. This is a pain since the classes I am developing rely on a context that needs to be re-established on each reload.
I am aware of the reload() function, but this appears to be frowned upon (also since it has been relegated from a built-in in Python 3.0) and moreover it rarely works since the modules almost always have multiple references.
My question is - what is the best/accepted way to develop a Python module/package so that I don't have to go through the pain of constantly re-establishing my interpreter context?
One idea I did think of was using the if __name__ == '__main__': trick to run a module directly so the code is not imported. However this leaves a bunch of contextual cruft (specific to my setup) at the bottom of my module files.
Ideas?
A:
A different approach may be to formalise your test driven development, and instead of using the interpreter to test your module, save your tests and run them directly.
You probably know of the various ways to do this with python, I imagine the simplest way to start in this direction is to copy and paste what you do in the interpreter into the docstring as a doctest and add the following to the bottom of your module:
if __name__ == "__main__":
import doctest
doctest.testmod()
Your informal test will then be repeated every time the module is called directly. This has a number of other benefits. See the doctest docs for more info on writing doctests.
A:
IPython does allow reloads; see the magic function %run in the IPython docs,
or, if modules beneath the one being tested have changed, the recursive dreload() function.
If you have a complex context, is it possible to create it in another module, or to assign it to a global variable, which will stay around since the interpreter is not restarted?
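For example, a sketch of that split (the module names here are hypothetical):
import context    # builds the expensive state once per session
import mymodule

# ... edit mymodule.py in the editor, then back in the interpreter:
reload(mymodule)               # re-import only the edited module
mymodule.process(context.ctx)  # the context object survived the reload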
A:
How about using nose with nosey to run your tests automatically in a separate terminal every time you save your edits to disk? Set up all the state you need in your unit tests.
A:
you could create a python script that sets up your context and run it with
python -i context-setup.py
-i When a script is passed as first argument or the -c option is
used, enter interactive mode after executing the script or the
command. It does not read the $PYTHONSTARTUP file. This can be
useful to inspect global variables or a stack trace when a
script raises an exception.
|
How to develop a Python module/package without having to restart the interpreter after every change?
|
I am developing a Python package using a text editor and IPython. Each time I change any of the module code I have to restart the interpreter to test this. This is a pain since the classes I am developing rely on a context that needs to be re-established on each reload.
I am aware of the reload() function, but this appears to be frowned upon (also since it has been relegated from a built-in in Python 3.0) and moreover it rarely works since the modules almost always have multiple references.
My question is - what is the best/accepted way to develop a Python module/package so that I don't have to go through the pain of constantly re-establishing my interpreter context?
One idea I did think of was using the if __name__ == '__main__': trick to run a module directly so the code is not imported. However this leaves a bunch of contextual cruft (specific to my setup) at the bottom of my module files.
Ideas?
|
[
"A different approach may be to formalise your test driven development, and instead of using the interpreter to test your module, save your tests and run them directly.\nYou probably know of the various ways to do this with python, I imagine the simplest way to start in this direction is to copy and paste what you do in the interpreter into the docstring as a doctest and add the following to the bottom of your module:\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n\nYour informal test will then be repeated every time the module is called directly. This has a number of other benefits. See the doctest docs for more info on writing doctests.\n",
"Ipython does allow reloads see the magic function %run iPython doc\nor if modules under the one have changed the recursive dreloadd() function\nIf you have a complex context is it possible to create it in another module? or assign it to a global variable which will stay around as the interpreter is not restarted\n",
"How about using nose with nosey to run your tests automatically in a separate terminal every time you save your edits to disk? Set up all the state you need in your unit tests.\n",
"you could create a python script that sets up your context and run it with \npython -i context-setup.py\n\n\n -i When a script is passed as first argument or the -c option is\n used, enter interactive mode after executing the script or the\n command. It does not read the $PYTHONSTARTUP file. This can be\n useful to inspect global variables or a stack trace when a\n script raises an exception.\n\n"
] |
[
3,
2,
1,
0
] |
[] |
[] |
[
"module",
"package",
"python"
] |
stackoverflow_0001989644_module_package_python.txt
|
Q:
Why does this code return 'complex'?
# Example: provide pickling support for complex numbers.
try:
complex
except NameError:
pass
else:
def pickle_complex(c):
return complex, (c.real, c.imag) # why return complex here?
pickle(complex, pickle_complex, complex)
Why?
The following code is the pickle function being called:
dispatch_table = {}
def pickle(ob_type, pickle_function, constructor_ob=None):
if type(ob_type) is _ClassType:
raise TypeError("copy_reg is not intended for use with classes")
if not callable(pickle_function):
raise TypeError("reduction functions must be callable")
dispatch_table[ob_type] = pickle_function
# The constructor_ob function is a vestige of safe for unpickling.
# There is no reason for the caller to pass it anymore.
if constructor_ob is not None:
constructor(constructor_ob)
def constructor(object):
if not callable(object):
raise TypeError("constructors must be callable")
A:
complex is the class to use to reconstitute the pickled object. It is returned so that it can be pickled along with the real and imag values. Then when the unpickler comes along, it sees the class and some values to use as arguments to its constructor. The unpickler uses the given class and arguments to create a new complex object that is equivalent to the original one that was pickled.
This is explained in more detail in the copy_reg and pickle documentation.
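A small sketch of what the unpickler effectively does with that (callable, args) pair -- it simply calls callable(*args):
func, args = complex, ((3+4j).real, (3+4j).imag)  # what pickle_complex returns
rebuilt = func(*args)
print rebuilt            # (3+4j)
print rebuilt == (3+4j)  # True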
|
Why does this code return 'complex'?
|
# Example: provide pickling support for complex numbers.
try:
complex
except NameError:
pass
else:
def pickle_complex(c):
return complex, (c.real, c.imag) # why return complex here?
pickle(complex, pickle_complex, complex)
Why?
The following code is the pickle function being called:
dispatch_table = {}
def pickle(ob_type, pickle_function, constructor_ob=None):
if type(ob_type) is _ClassType:
raise TypeError("copy_reg is not intended for use with classes")
if not callable(pickle_function):
raise TypeError("reduction functions must be callable")
dispatch_table[ob_type] = pickle_function
# The constructor_ob function is a vestige of safe for unpickling.
# There is no reason for the caller to pass it anymore.
if constructor_ob is not None:
constructor(constructor_ob)
def constructor(object):
if not callable(object):
raise TypeError("constructors must be callable")
|
[
"complex is the class to use to reconstitute the pickled object. It is returned so that it can be pickled along with the real and imag values. Then when the unpickler comes along, it sees the class and some values to use as arguments to its constructor. The unpickler uses the given class and arguments to create a new complex object that is equivalent to the original one that was pickled.\nThis is explained in more detail in the copy_reg and pickle documentation.\n"
] |
[
3
] |
[] |
[] |
[
"complex_numbers",
"pickle",
"python"
] |
stackoverflow_0001990759_complex_numbers_pickle_python.txt
|
Q:
Why does my class not have a 'keys' function?
class a(object):
w='www'
def __init__(self):
for i in self.keys():
print i
def __iter__(self):
for k in self.keys():
yield k
a() # why is there an error here?
Thanks.
Edit: The following class also doesn't extend any class;
why can it use keys?
class DictMixin:
# Mixin defining all dictionary methods for classes that already have
# a minimum dictionary interface including getitem, setitem, delitem,
# and keys. Without knowledge of the subclass constructor, the mixin
# does not define __init__() or copy(). In addition to the four base
# methods, progressively more efficiency comes with defining
# __contains__(), __iter__(), and iteritems().
# second level definitions support higher levels
def __iter__(self):
for k in self.keys():
yield k
def has_key(self, key):
try:
value = self[key]
except KeyError:
return False
return True
def __contains__(self, key):
return self.has_key(key)
# third level takes advantage of second level definitions
def iteritems(self):
for k in self:
yield (k, self[k])
def iterkeys(self):
return self.__iter__()
# fourth level uses definitions from lower levels
def itervalues(self):
for _, v in self.iteritems():
yield v
def values(self):
return [v for _, v in self.iteritems()]
def items(self):
return list(self.iteritems())
def clear(self):
for key in self.keys():
del self[key]
def setdefault(self, key, default=None):
try:
return self[key]
except KeyError:
self[key] = default
return default
def pop(self, key, *args):
if len(args) > 1:
raise TypeError, "pop expected at most 2 arguments, got "\
+ repr(1 + len(args))
try:
value = self[key]
except KeyError:
if args:
return args[0]
raise
del self[key]
return value
def popitem(self):
try:
k, v = self.iteritems().next()
except StopIteration:
raise KeyError, 'container is empty'
del self[k]
return (k, v)
def update(self, other=None, **kwargs):
# Make progressively weaker assumptions about "other"
if other is None:
pass
elif hasattr(other, 'iteritems'): # iteritems saves memory and lookups
for k, v in other.iteritems():
self[k] = v
elif hasattr(other, 'keys'):
for k in other.keys():
self[k] = other[k]
else:
for k, v in other:
self[k] = v
if kwargs:
self.update(kwargs)
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
def __repr__(self):
return repr(dict(self.iteritems()))
def __cmp__(self, other):
if other is None:
return 1
if isinstance(other, DictMixin):
other = dict(other.iteritems())
return cmp(dict(self.iteritems()), other)
def __len__(self):
return len(self.keys())
A:
Why would you expect it to have keys? You didn't define such a method in your class. Did you intend to inherit from a dictionary?
To do that declare class a(dict)
Or maybe you meant a.__dict__.keys()?
As for the large snippet you've posted in the update, read the comment above the class again:
# Mixin defining all dictionary methods for classes that already have
# a minimum dictionary interface including getitem, setitem, delitem,
# and keys
Note that "already have ... keys" part.
The DictMixin class comes from the UserDict module, which says:
class UserDict.DictMixin
Mixin defining all dictionary methods for classes that already have a
minimum dictionary interface including __getitem__(), __setitem__(),
__delitem__(), and keys().
This mixin should be used as a superclass. Adding each of the above
methods adds progressively more functionality. For instance, defining
all but __delitem__() will preclude only pop() and popitem() from the
full interface.
In addition to the four base methods, progressively more efficiency
comes with defining __contains__(), __iter__(), and iteritems().
Since the mixin has no knowledge of the subclass constructor, it does
not define __init__() or copy().
Starting with Python version 2.6, it is recommended to use
collections.MutableMapping instead of DictMixin.
Note the recommendation in the last part - use collections.MutableMapping instead.
To iterate over attributes of an object:
class A(object):
    def __init__(self):
        self.myinstatt1 = 'one'
        self.myinstatt2 = 'two'
    def mymethod(self):
        pass

a = A()
for attr, value in a.__dict__.iteritems():
    print attr, value
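Following that recommendation, a minimal sketch of the same idea built on collections.MutableMapping (assumes Python 2.6+; only the five abstract methods are supplied, the rest comes from the ABC):

import collections

class MyMap(collections.MutableMapping):
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value
    def __delitem__(self, key):
        del self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

m = MyMap()
m['w'] = 'www'
print m.keys()  # ['w'] -- keys() comes from the ABC for free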
|
Why does my class not have a 'keys' function?
|
class a(object):
    w='www'
    def __init__(self):
        for i in self.keys():
            print i
    def __iter__(self):
        for k in self.keys():
            yield k

a() # why is there an error here?
Thanks.
Edit: The following class also doesn't extend any class; why can it use keys?
class DictMixin:
    # Mixin defining all dictionary methods for classes that already have
    # a minimum dictionary interface including getitem, setitem, delitem,
    # and keys. Without knowledge of the subclass constructor, the mixin
    # does not define __init__() or copy(). In addition to the four base
    # methods, progressively more efficiency comes with defining
    # __contains__(), __iter__(), and iteritems().

    # second level definitions support higher levels
    def __iter__(self):
        for k in self.keys():
            yield k
    def has_key(self, key):
        try:
            value = self[key]
        except KeyError:
            return False
        return True
    def __contains__(self, key):
        return self.has_key(key)

    # third level takes advantage of second level definitions
    def iteritems(self):
        for k in self:
            yield (k, self[k])
    def iterkeys(self):
        return self.__iter__()

    # fourth level uses definitions from lower levels
    def itervalues(self):
        for _, v in self.iteritems():
            yield v
    def values(self):
        return [v for _, v in self.iteritems()]
    def items(self):
        return list(self.iteritems())
    def clear(self):
        for key in self.keys():
            del self[key]
    def setdefault(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            self[key] = default
        return default
    def pop(self, key, *args):
        if len(args) > 1:
            raise TypeError, "pop expected at most 2 arguments, got "\
                             + repr(1 + len(args))
        try:
            value = self[key]
        except KeyError:
            if args:
                return args[0]
            raise
        del self[key]
        return value
    def popitem(self):
        try:
            k, v = self.iteritems().next()
        except StopIteration:
            raise KeyError, 'container is empty'
        del self[k]
        return (k, v)
    def update(self, other=None, **kwargs):
        # Make progressively weaker assumptions about "other"
        if other is None:
            pass
        elif hasattr(other, 'iteritems'):  # iteritems saves memory and lookups
            for k, v in other.iteritems():
                self[k] = v
        elif hasattr(other, 'keys'):
            for k in other.keys():
                self[k] = other[k]
        else:
            for k, v in other:
                self[k] = v
        if kwargs:
            self.update(kwargs)
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default
    def __repr__(self):
        return repr(dict(self.iteritems()))
    def __cmp__(self, other):
        if other is None:
            return 1
        if isinstance(other, DictMixin):
            other = dict(other.iteritems())
        return cmp(dict(self.iteritems()), other)
    def __len__(self):
        return len(self.keys())
|
[
"Why would you expect it to have keys? You didn't define such a method in your class. Did you intend to inherit from a dictionary?\nTo do that declare class a(dict)\nOr maybe you meant a.__dict__.keys()?\nAs for the large snippet you've posted in the update, read the comment above the class again:\n # Mixin defining all dictionary methods for classes that already have\n # a minimum dictionary interface including getitem, setitem, delitem,\n # and keys\n\nNote that \"already have ... keys\" part.\nThe DictMixin class comes from the UserDict module, which says:\n\nclass UserDict.DictMixin Mixin\n defining all dictionary methods for\n classes that already have a minimum\n dictionary interface including\n getitem(), setitem(), delitem(), and keys().\nThis mixin should be used as a\n superclass. Adding each of the above\n methods adds progressively more\n functionality. For instance, defining\n all but delitem() will preclude\n only pop() and popitem() from the full\n interface.\nIn addition to the four base methods,\n progressively more efficiency comes\n with defining contains(),\n iter(), and iteritems().\nSince the mixin has no knowledge of\n the subclass constructor, it does not\n define init() or copy().\nStarting with Python version 2.6, it\n is recommended to use\n collections.MutableMapping instead of\n DictMixin.\n\nNote the recommendation in the last part - use collections.MutableMapping instead.\nTo iterate over attributes of an object:\nclass A(object):\n def __init__(self):\n self.myinstatt1 = 'one'\n self.myinstatt2 = 'two'\n def mymethod(self):\n pass\n\na = A()\nfor attr, value in a.__dict__.iteritems():\n print attr, value\n\n"
] |
[
3
] |
[] |
[] |
[
"inheritance",
"python"
] |
stackoverflow_0001990850_inheritance_python.txt
|
Q:
Port knocking and RSA encryption
I am working on a project that involves port knocking. I have 3 files, split between the server side and the client.
The server side contains: the port-knocking server, run as a daemon, and a configuration file [containing the port sequence that must be matched and many other configuration details].
The client side contains: the port-knocking client.
Is it possible to encrypt the port number sequence in the configuration file using RSA? If so, how do I do that?
Thank you.
PS: I run the daemon on the server [it reads the configuration file], then run the client program and specify the port number sequence [if the sequence matches the configuration file, the client is allowed to connect].
A:
Is it possible to encrypt the port number sequence in the configuration file using RSA?
Yes.
How do I do that?
You might try a bit of searching for a python rsa library.
However, if you're planning on doing this on the client side, then realize that in order for the client program to decrypt the data, it will have to have a decryption key. If the client program has the decryption key, then anyone with a text editor also has the decryption key (and the code to decrypt it).
If you really meant RSA encryption, take a look at PyCrypto.
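For the encryption step itself, a minimal sketch with PyCrypto (assumes PyCrypto 2.5+ for the PKCS1_OAEP module; the knock sequence shown is hypothetical):

from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP

key = RSA.generate(2048)                 # private key stays on the server
cipher = PKCS1_OAEP.new(key.publickey())

ports = "7000,8000,9000"                 # hypothetical knock sequence
encrypted = cipher.encrypt(ports)        # store this in the config file

# Only the private-key holder can recover the sequence:
assert PKCS1_OAEP.new(key).decrypt(encrypted) == ports

Note that this only helps if the decryption key is kept away from anything an attacker can read, per the caveat above.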
|
Port knocking and RSA encryption
|
I am working on a project that involves port knocking. I have 3 files, split between the server side and the client.
The server side contains: the port-knocking server, run as a daemon, and a configuration file [containing the port sequence that must be matched and many other configuration details].
The client side contains: the port-knocking client.
Is it possible to encrypt the port number sequence in the configuration file using RSA? If so, how do I do that?
Thank you.
PS: I run the daemon on the server [it reads the configuration file], then run the client program and specify the port number sequence [if the sequence matches the configuration file, the client is allowed to connect].
|
[
"\nIs there possible to encrypt sequence port number in file configuration using RSA.\n\nYes.\n\nhow to do that?? \n\nYou might try a bit of searching for a python rsa library.\nHowever, if you're planning on doing this on the client side, then realize in order for the client program to decrypt the data, it will have to have a decryption key. If the client program has the decryption key, then anyone with a text editor also has the decryption key (and the code to decrypt it). \nIf you really meant RSA encryption, take a look at PyCrypto. \n"
] |
[
1
] |
[] |
[] |
[
"public_key",
"python",
"rsa"
] |
stackoverflow_0001990366_public_key_python_rsa.txt
|