| content (string, 85 to 101k chars) | title (string, 0 to 150) | question (string, 15 to 48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35 to 137) |
|---|---|---|---|---|---|---|---|---|
Q:
Accessing the Atlassian Crowd SOAP API with Suds (python SOAP library)
Has anybody had any recent success with accessing the Crowd SOAP API via the Suds Python library?
I've found a few people successfully doing it in the past but Atlassian seems to have changed their WSDL since then to make the existing advice not entirely helpful.
Below is the simplest example I've been trying:
from suds.client import Client
url = 'https://crowd.hugeinc.com/services/SecurityServer?wsdl'
client = Client(url)
Unfortunately that generates the following error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/client.py", line 116, in __init__
sd = ServiceDefinition(self.wsdl, s)
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/servicedefinition.py", line 58, in __init__
self.paramtypes()
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/servicedefinition.py", line 137, in paramtypes
item = (pd[1], pd[1].resolve())
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/xsd/sxbasic.py", line 63, in resolve
raise TypeNotFound(qref)
TypeNotFound: Type not found: '(AuthenticatedToken, http://authentication.integration.crowd.atlassian.com, )'
I've tried using both bindings and doctors to fix this problem, to no avail. Neither approach resulted in any change. Any further recommendations or suggestions would be incredibly helpful.
A:
There is a patch for the Crowd WSDL here:
http://jira.atlassian.com/browse/CWD-159
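For reference, the usual suds workaround for a "Type not found" error like the one above is an ImportDoctor, which injects the missing schema import before the WSDL is parsed. The sketch below is my own assumption (the exact namespaces may need adjusting for your Crowd version), not part of this answer:
from suds.client import Client
from suds.xsd.doctor import Import, ImportDoctor

url = 'https://crowd.hugeinc.com/services/SecurityServer?wsdl'
# Tell suds that the Crowd authentication namespace (where AuthenticatedToken
# lives) should be imported into the schema that references it.
imp = Import('http://authentication.integration.crowd.atlassian.com')
imp.filter.add('urn:SecurityServer')  # assumed target namespace; adjust as needed
client = Client(url, doctor=ImportDoctor(imp))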
|
Accessing the Atlassian Crowd SOAP API with Suds (python SOAP library)
|
Has anybody had any recent success with accessing the Crowd SOAP API via the Suds Python library?
I've found a few people successfully doing it in the past but Atlassian seems to have changed their WSDL since then to make the existing advice not entirely helpful.
Below is the simplest example I've been trying:
from suds.client import Client
url = 'https://crowd.hugeinc.com/services/SecurityServer?wsdl'
client = Client(url)
Unfortunately that generates the following error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/client.py", line 116, in __init__
sd = ServiceDefinition(self.wsdl, s)
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/servicedefinition.py", line 58, in __init__
self.paramtypes()
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/servicedefinition.py", line 137, in paramtypes
item = (pd[1], pd[1].resolve())
File "/Users/soconnor/.virtualenvs/hugeface/lib/python2.6/site-packages/suds/xsd/sxbasic.py", line 63, in resolve
raise TypeNotFound(qref)
TypeNotFound: Type not found: '(AuthenticatedToken, http://authentication.integration.crowd.atlassian.com, )'
I've tried using both bindings and doctors to fix this problem, to no avail. Neither approach resulted in any change. Any further recommendations or suggestions would be incredibly helpful.
|
[
"There is a patch for the Crowd WSDL here:\nhttp://jira.atlassian.com/browse/CWD-159\n"
] |
[
4
] |
[] |
[] |
[
"atlassian_crowd",
"python",
"soap",
"suds"
] |
stackoverflow_0002710086_atlassian_crowd_python_soap_suds.txt
|
Q:
How to share memory buffer across sessions in Django?
I want to have one party (or more) send a stream of data via HTTP request(s). Other parties will be able to receive the same stream of data in almost real-time.
The data stream should be accessible across sessions (according to access control list).
How can I do this in Django? If possible I would like to avoid database access and use an in-memory buffer (along with some synchronization mechanism).
A:
Use posix_ipc or sysv_ipc to use shared memory.
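To make that concrete, here is a minimal sketch of the posix_ipc approach (the segment name, size and locking scheme are my own assumptions; access control and cleanup are still up to you):
import mmap
import posix_ipc

# Create (or attach to) a named shared-memory segment plus a semaphore for locking.
shm = posix_ipc.SharedMemory("/stream_buffer", posix_ipc.O_CREAT, size=4096)
sem = posix_ipc.Semaphore("/stream_buffer_lock", posix_ipc.O_CREAT, initial_value=1)
buf = mmap.mmap(shm.fd, shm.size)
shm.close_fd()  # the mmap keeps the segment alive

# Writer side (e.g. in the view handling the incoming HTTP request):
sem.acquire()
try:
    buf.seek(0)
    buf.write(b"latest chunk of stream data")
finally:
    sem.release()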
|
How to share memory buffer across sessions in Django?
|
I want to have one party (or more) send a stream of data via HTTP request(s). Other parties will be able to receive the same stream of data in almost real-time.
The data stream should be accessible across sessions (according to access control list).
How can I do this in Django? If possible I would like to avoid database access and use an in-memory buffer (along with some synchronization mechanism).
|
[
"Use posix_ipc or sysv_ipc to use shared memory.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python",
"web_applications"
] |
stackoverflow_0002727355_django_python_web_applications.txt
|
Q:
What does "str indices must be integers" mean?
I'm working with dicts in jython which are created from importing/parsing JSON. Working with certain sections I see the following message:
TypeError: str indices must be integers
This occurs when I do something like:
if jsondata['foo']['bar'].lower() == 'baz':
...
Where jsondata looks like:
{'foo': {'bar':'baz'} }
What does this mean, and how do I fix it?
A:
As Marcelo and Ivo say, it sounds like you're trying to access the raw JSON string, without first parsing it into Python via json.loads(my_json_string).
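A tiny illustration of that failure mode (hypothetical data, just to show the difference):
import json

raw = '{"foo": {"bar": "baz"}}'       # still a plain string
# raw['foo'] would raise: TypeError: string indices must be integers
jsondata = json.loads(raw)            # parse the JSON text into a dict first
print jsondata['foo']['bar'].lower() == 'baz'   # True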
A:
You need to check that the value is a dict and that 'z' exists in it before getting data from the dict.
>>> jsondata = {'a': '', 'b': {'z': True} }
>>> for key in jsondata:
... if type(jsondata[key]) is dict and 'z' in jsondata[key].keys() and jsondata[key]['z'] is True:
... print 'yes'
...
yes
>>>
or shorter one with dict.get
>>> jsondata = {'a': '', 'b': {'z': True}, 'c' :{'zz':True}}
>>> for key in jsondata:
... if type(jsondata[key]) is dict and jsondata[key].get('z',False):
... print 'yes'
...
yes
>>>
A:
Actually your statement should raise SyntaxError: can't assign to function call due to the fact that you're missing a = and thus making an assignment instead of a check for equality.
Since I don't get the TypeError when running the code you've shown, I suppose that you first fix the missing = and after that check back on what the Stacktrace says.
But it might also be possible that your jsondata hasn't been decoded and therefore is still plain text, which would of course then raise the indexing error.
|
What does "str indices must be integers" mean?
|
I'm working with dicts in jython which are created from importing/parsing JSON. Working with certain sections I see the following message:
TypeError: str indices must be integers
This occurs when I do something like:
if jsondata['foo']['bar'].lower() == 'baz':
...
Where jsondata looks like:
{'foo': {'bar':'baz'} }
What does this mean, and how do I fix it?
|
[
"As Marcelo and Ivo say, it sounds like you're trying to access the raw JSON string, without first parsing it into Python via json.loads(my_json_string).\n",
"You need to check the type for dict and existance of 'z' in the dict before getting data from dict.\n>>> jsondata = {'a': '', 'b': {'z': True} }\n>>> for key in jsondata:\n... if type(jsondata[key]) is dict and 'z' in jsondata[key].keys() and jsondata[key]['z'] is True:\n... print 'yes'\n...\nyes\n>>>\n\nor shorter one with dict.get\n>>> jsondata = {'a': '', 'b': {'z': True}, 'c' :{'zz':True}}\n>>> for key in jsondata:\n... if type(jsondata[key]) is dict and jsondata[key].get('z',False):\n... print 'yes'\n...\nyes\n>>>\n\n",
"Actually your statement should raise SyntaxError: can't assign to function call due to the fact that you're missing a = and thus making an assignment instead of a check for equality. \nSince I don't get the TypeError when running the code you've shown, I suppose that you first fix the missing = and after that check back on what the Stacktrace says.\nBut it might also be possible that your jsondata hasn't been decoded and therefore is still plain text, which would of course then raise the indexing error.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"jython",
"python"
] |
stackoverflow_0002720326_jython_python.txt
|
Q:
How do I stop Python install on Mac OS X from putting things in my home directory?
I'm trying to install Python from source on my Mac. (OS X 10.6.2, Python-2.6.5.tar.bz2) I've done this before and it was easy, but for some reason, this time after ./configure and make, the sudo make install puts some things in my home directory instead of in /usr/local/... where I expect. The .py files are okay, but not the .so files...
RobsMac Python-2.6.5 $ sudo make install
[...]
/usr/bin/install -c -m 644 ./Lib/anydbm.py /usr/local/lib/python2.6
/usr/bin/install -c -m 644 ./Lib/ast.py /usr/local/lib/python2.6
/usr/bin/install -c -m 644 ./Lib/asynchat.py /usr/local/lib/python2.6
[...]
running build_scripts
running install_lib
creating /Users/rob/Library/Python
creating /Users/rob/Library/Python/2.6
creating /Users/rob/Library/Python/2.6/site-packages
copying build/lib.macosx-10.4-x86_64-2.6/_AE.so -> /Users/rob/Library/
Python/2.6/site-packages
copying build/lib.macosx-10.4-x86_64-2.6/_AH.so -> /Users/rob/Library/
Python/2.6/site-packages
copying build/lib.macosx-10.4-x86_64-2.6/_App.so -> /Users/rob/Library/
Python/2.6/site-packages
[...]
Later, this causes imports that require those .so files to fail. For
example...
RobsMac Python-2.6.5 $ python
Python 2.6.5 (r265:79063, Apr 28 2010, 13:40:18)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import zlib
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named zlib
Any ideas what is wrong?
thanks,
Rob
A:
Doh. I've answered my own question. Recently I created a ~/.pydistutils.cfg file, for some stupid reason. I forgot to delete that file. Its contents were:
[install]
install_lib = ~/Library/Python/$py_version_short/site-packages
install_scripts = ~/bin
make install calls setup.py, and this file was overriding the normal setup.py behavior.
Rob
A:
In general, installing Python (or anything directly from the source) when it is already available on your system or when there are package managers that will install it for you, is not a very good idea. I strongly advise you against installing Python manually... Mac OS X 10.6 Snow Leopard comes with Python 2.6 out of the box; if you want a newer version of Python 2.6, then you should install MacPorts, and use:
sudo port install python26 python_select
You can then use the python_select to toggle between the system's version and MacPort's version.
If you are determined to install manually from the source, though, the way to do it would be to run "make distclean" (or untar the code separately again), then run "./configure --help" for a full list of configuration options. It is possible that on Mac OS X, it defaults to something other than /usr/local, in which case you could force it to install in that location by invoking configure with "./configure --prefix=/usr/local".
A:
Have you checked the parameters or variables make expects? There probably is a make variable you can use to override that behavior. In any case, have you tried MacPorts? It may be a better solution to what you are trying to accomplish.
|
How do I stop Python install on Mac OS X from putting things in my home directory?
|
I'm trying to install Python from source on my Mac. (OS X 10.6.2, Python-2.6.5.tar.bz2) I've done this before and it was easy, but for some reason, this time after ./configure and make, the sudo make install puts some things in my home directory instead of in /usr/local/... where I expect. The .py files are okay, but not the .so files...
RobsMac Python-2.6.5 $ sudo make install
[...]
/usr/bin/install -c -m 644 ./Lib/anydbm.py /usr/local/lib/python2.6
/usr/bin/install -c -m 644 ./Lib/ast.py /usr/local/lib/python2.6
/usr/bin/install -c -m 644 ./Lib/asynchat.py /usr/local/lib/python2.6
[...]
running build_scripts
running install_lib
creating /Users/rob/Library/Python
creating /Users/rob/Library/Python/2.6
creating /Users/rob/Library/Python/2.6/site-packages
copying build/lib.macosx-10.4-x86_64-2.6/_AE.so -> /Users/rob/Library/
Python/2.6/site-packages
copying build/lib.macosx-10.4-x86_64-2.6/_AH.so -> /Users/rob/Library/
Python/2.6/site-packages
copying build/lib.macosx-10.4-x86_64-2.6/_App.so -> /Users/rob/Library/
Python/2.6/site-packages
[...]
Later, this causes imports that require those .so files to fail. For
example...
RobsMac Python-2.6.5 $ python
Python 2.6.5 (r265:79063, Apr 28 2010, 13:40:18)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import zlib
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named zlib
Any ideas what is wrong?
thanks,
Rob
|
[
"Doh. I've answered my own question. Recently I created a ~/.pydistutils.cfg file, for some stupid reason. I forgot to delete that file. It's contents were:\n[install]\ninstall_lib = ~/Library/Python/$py_version_short/site-packages\ninstall_scripts = ~/bin\nmake install calls setup.py, and this file was overriding the normal setup.py behavior. \nRob \n",
"In general, installing Python (or anything directly from the source) when it is already available on your system or when there are package managers that will install it for you, is not a very good idea. I strongly advise you against installing Python manually... Mac OS X 10.6 Snow Leopard comes with Python 2.6 out of the box; if you want a newer version of Python 2.6, then you should install MacPorts, and use:\n\nsudo port install python26 python_select\n\nYou can then use the python_select to toggle between the system's version and MacPort's version.\nIf you are determined to install manually from the source, though, the way to do it would be to run \"make distclean\" (or untar the code separately again), then run \"./configure --help\" for a full list of configuration options. It is possible that on Mac OS X, it defaults to something other than /usr/local, in which case you could force it to install in that location by invoking configure with \"./configure --prefix=/usr/local\".\n",
"Have you checked the parameters or variables make expects? There probably is a make variable you can use to override that behavior. In any case, have you tried MacPorts? It may be a better solution to what you are trying to accomplish.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"configure",
"installation",
"macos",
"python"
] |
stackoverflow_0002727438_configure_installation_macos_python.txt
|
Q:
add methods in subclasses within the super class constructor
I want to add methods (more specifically: method aliases) automatically to Python subclasses. If the subclass defines a method named 'get' I want to add a method alias 'GET' to the dictionary of the subclass.
To not repeat myself I'd like to define this modification routine in the base class. But if I check in the base class __init__ method, there is no such method, since it is defined in the subclass. It will become more clear with some source code:
class Base:
def __init__(self):
if hasattr(self, "get"):
setattr(self, "GET", self.get)
class Sub(Base):
def get():
pass
print(dir(Sub))
Output:
['__doc__', '__init__', '__module__', 'get']
It should also contain 'GET'.
Is there a way to do it within the base class?
A:
Your class's __init__ method adds a bound method as an attribute to instances of your class. This isn't exactly the same as adding the attribute to the class. Normally, methods work by storing functions in the class, as attributes, and then creating method objects as these functions are retrieved as attributes from either the class (creating unbound methods which only know the class they belong to) or the instance (creating bound methods, which know their instance.)
How does that differ from what you're doing? Well, you assign to the GET instance attribute of a specific instance, not the class. The bound method becomes part of the instance's data:
>>> s.__dict__
{'GET': <bound method Sub.get of <__main__.Sub object at 0xb70896cc>>}
Notice how the method is there under the key GET, but not under get. GET is an instance attribute, but get is not. This is subtly different in a number of ways: the method doesn't exist in the class object, so you can't do Sub.GET(instance) to call Sub's GET method, even though you can do Sub.get(instance). Secondly, if you have a subclass of Sub that defines its own GET method but not its own get method, the instance attribute would hide the subclass GET method with the bound get method from the baseclass. Thirdly it creates a circular reference between the bound method and the instance: the bound method has a reference to the instance, and the instance now stores a reference to the bound method. Normally bound methods are not stored on the instance partly to avoid that. Circular references are usually not a big issue, because we nowadays have the cyclic-gc module (gc) that takes care of them, but it can't always clean up reference cycles (for instance, when your class also has a __del__ method.) And lastly, storing bound method objects generally makes your instances unserializable: most serializers (such as pickle) can't handle bound methods.
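A small demonstration of the difference described above, using the Base and Sub classes from the question (illustrative only):
s = Sub()
print 'GET' in Sub.__dict__    # False -- the alias never reaches the class
print 'GET' in s.__dict__      # True  -- it only exists on this one instance
try:
    Sub.GET(s)                 # so calling it through the class fails
except AttributeError, e:
    print e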
You may not care about any of these issues, but if you do, there's a better approach to what you're trying to do: metaclasses. Instead of assigning bound methods to instance attributes as you create instances, you can assign normal functions to class attributes as you create the class:
class MethodAliasingType(type):
def __init__(self, name, bases, attrs):
# attrs is the dict of attributes that was used to create the
# class 'self', modifying it has no effect on the class.
# So use setattr() to set the attribute.
for k, v in attrs.iteritems():
if not hasattr(self, k.upper()):
setattr(self, k.upper(), v)
super(MethodAliasingType, self).__init__(name, bases, attrs)
class Base(object):
__metaclass__ = MethodAliasingType
class Sub(Base):
def get(self):
pass
Now, Sub.get and Sub.GET really are aliases, and overriding the one and not the other in a subclass works as expected.
>>> Sub.get
<unbound method Sub.get>
>>> Sub.GET
<unbound method Sub.get>
>>> Sub().get
<bound method Sub.get of <__main__.Sub object at 0xb708978c>>
>>> Sub().GET
<bound method Sub.get of <__main__.Sub object at 0xb7089a6c>>
>>> Sub().__dict__
{}
(Of course, if you don't want overriding the one and not the other to work, you can simply make this an error in your metaclass.) You can do the same thing as the metaclass using class decorators (in Python 2.6 and later), but it would mean requiring the class decorator on every subclass of Base -- class decorators aren't inherited.
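For completeness, a rough sketch of the class-decorator variant mentioned above (my own illustration, not the answerer's code; as noted, it must be applied to every class you want aliased):
def alias_methods(cls):
    # Add an UPPERCASE alias for every public callable defined on the class.
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith('_') and not hasattr(cls, name.upper()):
            setattr(cls, name.upper(), attr)
    return cls

@alias_methods
class Handler(object):
    def get(self):
        pass

print Handler.GET      # <unbound method Handler.get>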
A:
It's because class Sub hasn't been instantiated yet; do it on an instance, like:
>>> s=Sub()
>>> dir(s)
['GET', '__doc__', '__init__', '__module__', 'get']
>>>
|
add methods in subclasses within the super class constructor
|
I want to add methods (more specifically: method aliases) automatically to Python subclasses. If the subclass defines a method named 'get' I want to add a method alias 'GET' to the dictionary of the subclass.
To not repeat myself I'd like to define this modification routine in the base class. But if I check in the base class __init__ method, there is no such method, since it is defined in the subclass. It will become more clear with some source code:
class Base:
def __init__(self):
if hasattr(self, "get"):
setattr(self, "GET", self.get)
class Sub(Base):
def get():
pass
print(dir(Sub))
Output:
['__doc__', '__init__', '__module__', 'get']
It should also contain 'GET'.
Is there a way to do it within the base class?
|
[
"Your class's __init__ method adds a bound method as an attribute to instances of your class. This isn't exactly the same as adding the attribute to the class. Normally, methods work by storing functions in the class, as attributes, and then creating method objects as these functions are retrieved as attributes from either the class (creating unbound methods which only know the class they belong to) or the instance (creating bound methods, which know their instance.)\nHow does that differ from what you're doing? Well, you assign to the GET instance attribute of a specific instance, not the class. The bound method becomes part of the instance's data:\n>>> s.__dict__\n{'GET': <bound method Sub.get of <__main__.Sub object at 0xb70896cc>>}\n\nNotice how the method is there under the key GET, but not under get. GET is an instance attribute, but get is not. This is subtly different in a number of ways: the method doesn't exist in the class object, so you can't do Sub.GET(instance) to call Sub's GET method, even though you can do Sub.get(instance). Secondly, if you have a subclass of Sub that defines its own GET method but not its own get method, the instance attribute would hide the subclass GET method with the bound get method from the baseclass. Thirdly it creates a circular reference between the bound method and the instance: the bound method has a reference to the instance, and the instance now stores a reference to the bound method. Normally bound methods are not stored on the instance partly to avoid that. Circular references are usually not a big issue, because we nowadays have the cyclic-gc module (gc) that takes care of them, but it can't always clean up reference cycles (for instance, when your class also has a __del__ method.) And lastly, storing bound method objects generally makes your instances unserializable: most serializers (such as pickle) can't handle bound methods.\nYou may not care about any of these issues, but if you do, there's a better approach to what you're trying to do: metaclasses. Instead of assigning bound methods to instance attributes as you create instances, you can assign normal functions to class attributes as you create the class:\nclass MethodAliasingType(type):\n def __init__(self, name, bases, attrs):\n # attrs is the dict of attributes that was used to create the\n # class 'self', modifying it has no effect on the class.\n # So use setattr() to set the attribute.\n for k, v in attrs.iteritems():\n if not hasattr(self, k.upper()):\n setattr(self, k.upper(), v)\n super(MethodAliasingType, self).__init__(name, bases, attrs)\n\nclass Base(object):\n __metaclass__ = MethodAliasingType\n\nclass Sub(Base):\n def get(self):\n pass\n\nNow, Sub.get and Sub.GET really are aliases, and overriding the one and not the other in a subclass works as expected.\n>>> Sub.get\n<unbound method Sub.get>\n>>> Sub.GET\n<unbound method Sub.get>\n>>> Sub().get\n<bound method Sub.get of <__main__.Sub object at 0xb708978c>>\n>>> Sub().GET\n<bound method Sub.get of <__main__.Sub object at 0xb7089a6c>>\n>>> Sub().__dict__\n{}\n\n(Of course, if you don't want overriding the one and not the other to work, you can simply make this an error in your metaclass.) You can do the same thing as the metaclass using class decorators (in Python 2.6 and later), but it would mean requiring the class decorator on every subclass of Base -- class decorators aren't inherited.\n",
"Its because class Sub hasn't been initiated yet, do it in its instance like\n>>> s=Sub()\n>>> dir(s)\n['GET', '__doc__', '__init__', '__module__', 'get']\n>>>\n\n"
] |
[
3,
2
] |
[
"Create a derived constructor in your derived class which sets the attribute.\n"
] |
[
-1
] |
[
"constructor",
"inheritance",
"introspection",
"oop",
"python"
] |
stackoverflow_0002727762_constructor_inheritance_introspection_oop_python.txt
|
Q:
Passing parameter to base class constructor or using instance variable?
All classes derived from a certain base class have to define an attribute called "path". In the sense of duck typing I could rely upon definition in the subclasses:
class Base:
pass # no "path" variable here
class Sub(Base):
def __init__(self):
self.path = "something/"
Another possibility would be to use the base class constructor:
class Base:
def __init__(self, path):
self.path = path
class Sub(Base):
def __init__(self):
super().__init__("something/")
I use Python 3.1.
What would you prefer and why? Is there a better way?
A:
In Python 3.0+:
I would go with a parameter to the base class's constructor like you have in the second example. As this forces classes which derive from Base to provide the necessary path property, which documents the fact that the class has such a property and that derived classes are required to provide it. Without it, you would be relying on this being stated (and read) somewhere in your class's docstrings, although it certainly does help to also state in the docstring what the particular property means.
In Python 2.6+:
I would use neither of the above; instead I would use:
class Base(object):
def __init__(self,path):
self.path=path;
class Sub(Base):
def __init__(self):
Base.__init__(self,"something/")
In other words, I would require such a parameter in the base class's constructor, because it documents the fact that all such types will have/use/need that particular parameter and that the parameter needs to be provided. However, I would not use super() as super is somewhat fragile and dangerous in Python, and I would also make Base a new-style class by inheriting from object (or from some other new-style class).
|
Passing parameter to base class constructor or using instance variable?
|
All classes derived from a certain base class have to define an attribute called "path". In the sense of duck typing I could rely upon definition in the subclasses:
class Base:
pass # no "path" variable here
class Sub(Base):
def __init__(self):
self.path = "something/"
Another possibility would be to use the base class constructor:
class Base:
def __init__(self, path):
self.path = path
class Sub(Base):
def __init__(self):
super().__init__("something/")
I use Python 3.1.
What would you prefer and why? Is there a better way?
|
[
"In Python 3.0+:\nI would go with a parameter to the base class's constructor like you have in the second example. As this forces classes which derive from Base to provide the necessary path property, which documents the fact that the class has such a property and that derived classes are required to provide it. Without it, you would be relying on this being stated (and read) somewhere in your class's docstrings, although it certainly does help to also state in the docstring what the particular property means.\nIn Python 2.6+:\nI would use neither of the above; instead I would use:\nclass Base(object):\n def __init__(self,path):\n self.path=path;\n\nclass Sub(Base):\n def __init__(self):\n Base.__init__(self,\"something/\")\n\nIn other words, I would require such a parameter in the base class's constructor, because it documents the fact that all such types will have/use/need that particular parameter and that the parameter needs to be provieded. However, I would not use super() as super is somewhat fragile and dangerous in Python, and I would also make Base a new-style class by inheriting from object (or from some other new-style) class.\n"
] |
[
14
] |
[] |
[] |
[
"constructor",
"oop",
"parameters",
"python",
"python_3.x"
] |
stackoverflow_0002728346_constructor_oop_parameters_python_python_3.x.txt
|
Q:
Parse a CSV file using python (to make a decision tree later)
First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :). I'm more looking for approaches; I'm very new to python, having read a book but not yet written any code.
The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard code the column names, mostly to eliminate it as a possibility, and the answer was no).
The CSV files are in a fairly standard format; the header row is marked with a # then the column names are displayed, and every row after that is a simple series of values. Example:
# Column1, Column2, Column3, Column4
Value01, Value02, Value03, Value04
Value11, Value12, Value13, Value14
At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical; so I was thinking of doing something along these lines:
Read in each line, character by character
If the character is not a comma or a space
Append character to temporary string
If the character is a comma
Append the temporary string to a list
Empty string
Once a line has been read
Create a dictionary using the header row as the key (somehow!)
Append that dictionary to a list
However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it.
Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
A:
Python has some pretty powerful language constructs builtin. You can read lines from a file like:
with open(name_of_file,"r") as file:
for line in file:
# process the line
You can use the string.split function to separate the line along commas, and you can use string.strip to eliminate intervening whitespace. Python has very powerful lists and dictionaries.
To create a list, you simply use empty brackets like [], while to create an empty dictionary you use {}:
mylist = []; # Creates an empty list
mydict = {}; # Creates an empty dictionary
You can insert into the list using the .append() function, while you can use indexing subscripts to insert into the dictionary. For example, you can use mylist.append(5) to add 5 to the list, while you can use mydict[key]=value to associate the key key with the value value. To test whether a key is present in the dictionary, you can use the in keyword. For example:
if key in mydict:
print "Present"
else:
print "Absent"
To iterate over the contents of a list or dictionary, you can simply use a for-loop as in:
for val in mylist:
# do something with val
for key in mydict:
# do something with key or with mydict[key]
Since, in many cases, it is necessary to have both the value and index when iterating over a list, there is also a builtin function called enumerate that saves you the trouble of counting indices yourself:
for idx, val in enumerate(mylist):
# do something with val or with idx. Note that val=mylist[idx]
The code above is identical in function to:
idx=0
for val in mylist:
# process val, idx
idx += 1
You could also iterate over the indices if you so chose:
for idx in xrange(len(mylist)):
# Do something with idx and possibly mylist[idx]
Also, you can get the number of elements in a list or the number of keys in a dictionary using len.
It is possible to perform an operation on each element of a dictionary or list via the use of list comprehension; however, I would recommend that you simply use for-loops to accomplish that task. But, as an example:
>>> list1 = range(10)
>>> list1
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> list2 = [2*x for x in list1]
>>> list2
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
When you have the time, I suggest you read the Python tutorial to get some more in-depth knowledge.
A:
Example using the csv module from docs.python.org:
import csv
reader = csv.reader(open("some.csv", "rb"))
for row in reader:
print row
Instead of printing the rows, you could just save each row into a list, and then process it in the ID3 later.
database.append(row)
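And since the question also asks how to map header names to values and then tally a couple of columns, here is a rough sketch of that step (the '#'-prefixed header and the column names come from the question's example; everything else is my own illustration):
import csv
from collections import defaultdict

reader = csv.reader(open("some.csv", "rb"))
header = [h.strip().lstrip('#').strip() for h in reader.next()]   # '# Column1, ...' -> ['Column1', ...]
rows = [dict(zip(header, [v.strip() for v in row])) for row in reader]

# "Everyone return their values for Column1 and Column4, so I can count up who has what!"
counts = defaultdict(int)
for r in rows:
    counts[(r['Column1'], r['Column4'])] += 1
print dict(counts)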
A:
Short answer: don't waste time and mental energy (1) reimplementing the built-in csv module (2) reading the csv module's source (it's written in C) -- just USE it!
A:
Look at csv.DictReader.
Example:
import csv
reader = csv.DictReader(open('my_file.csv', 'rb'))  # 'rb' = read binary
for d in reader:
print d # this will print out a dictionary with keys equal to the first row of the file.
A:
Take a look at the built-in CSV module. Though you probably can't just use it, you can take a sneak peek at the code...
If that's a no-no, your (pseudo)code looks perfectly fine, though you should make use of the str.split() function and use that, reading the file line-by-line.
A:
Parse the CSV correctly
I'd avoid using str.split() to parse the fields because str.split() will not recognize quoted values. And many real-world CSV files use quotes.
http://en.wikipedia.org/wiki/Comma-separated_values
Example record using quoted values:
1997,Ford,E350,"Super, luxurious truck"
If you use str.split(), you'll get a record like this with 5 fields:
('1997', 'Ford', 'E350', '"Super', ' luxurious truck"')
But what you really want are records like this with 4 fields:
('1997', 'Ford', 'E350', 'Super, luxurious truck')
Also, besides commas being in the data, you may have to deal with newlines "\r\n" or just "\n" in the data. For example:
1997,Ford,E350,"Super
luxurious truck"
1997,Ford,E250,"Ok? Truck"
So be careful using:
file = open('filename.csv', 'r')
for line in file:
# problem here, "line" may contain partial data
Also, as John mentioned, the CSV standard says that inside quotes, a doubled double-quote turns into a single quote character.
1997,Ford,E350,"Super ""luxurious"" truck"
('1997', 'Ford', 'E350', 'Super "luxurious" truck')
So I'd suggest to modify your finite state machine like this:
Parse each character at a time.
Check to see if it's a quote, then set the state to "in quote"
If "in quote", store all the characters in the current field until there's another quote.
If "in quote", and there's another quote, store the quote character in the field data. (not the end, because a blank field shouldn't be `data,"",data` but instead `data,,data`)
If not "in quote", store the characters until you find a comma or newline.
If comma, save field and start a new field.
If newline, save field, save record, start a new record and a new field.
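A compact sketch of those state-machine steps (an illustration written for this answer, not production code):
def parse_csv_text(text):
    records, record, field = [], [], ''
    in_quote = False
    i = 0
    while i < len(text):
        c = text[i]
        if in_quote:
            if c == '"':
                if i + 1 < len(text) and text[i + 1] == '"':
                    field += '"'        # doubled quote inside quotes -> literal quote
                    i += 1
                else:
                    in_quote = False    # closing quote
            else:
                field += c              # commas/newlines are data while in quotes
        elif c == '"':
            in_quote = True
        elif c == ',':
            record.append(field); field = ''                  # comma: save field
        elif c == '\n':
            record.append(field); records.append(record)      # newline: save field and record
            record, field = [], ''
        elif c != '\r':
            field += c
        i += 1
    if field or record:
        record.append(field); records.append(record)
    return records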
On a side note, interestingly, I've never seen a header commented out using # in a CSV. So to me, that would imply that you may have to look for commented lines in the data too. Using # to comment out a line in a CSV file is not standard.
Adding found fields into a record dictionary using header keys
Depending on memory requirements, if the CSV is small enough (maybe 10k to 100k records), using a dictionary is fine. Just store a list of all the column names so you can access the column name by index (or number). Then in the finite state machine, increment the column index when you find a comma, and reset to 0 when you find a newline.
So if your header is header = ['Column1', 'Column2'], then when you find a data character, add it like this:
record[header[column_index]] += character
A:
I don't know too much about the builtin csv module that @Kaloyan Todorov talks about, but, if you're reading comma separated lines, then you can easily do this:
for line in file:
columns = line.split(',')
for column in columns:
print column.strip()
This will print all the entries of each line without the leading and trailing whitespace.
|
Parse a CSV file using python (to make a decision tree later)
|
First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :). I'm more looking for approaches; I'm very new to python, having read a book but not yet written any code.
The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard code the column names, mostly to eliminate it as a possibility, and the answer was no).
The CSV files are in a fairly standard format; the header row is marked with a # then the column names are displayed, and every row after that is a simple series of values. Example:
# Column1, Column2, Column3, Column4
Value01, Value02, Value03, Value04
Value11, Value12, Value13, Value14
At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical; so I was thinking of doing something along these lines:
Read in each line, character by character
If the character is not a comma or a space
Append character to temporary string
If the character is a comma
Append the temporary string to a list
Empty string
Once a line has been read
Create a dictionary using the header row as the key (somehow!)
Append that dictionary to a list
However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it.
Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
|
[
"Python has some pretty powerful language constructs builtin. You can read lines from a file like:\n\nwith open(name_of_file,\"r\") as file:\n for line in file:\n # process the line\n\nYou can use the string.split function to separate the line along commas, and you can use string.strip to eliminate intervening whitespace. Python has very powerful lists and dictionaries.\nTo create a list, you simply use empty brackets like [], while to create an empty dictionary you use {}:\n\nmylist = []; # Creates an empty list\nmydict = {}; # Creates an empty dictionary\n\nYou can insert into the list using the .append() function, while you can use indexing subscripts to insert into the dictionary. For example, you can use mylist.append(5) to add 5 to the list, while you can use mydict[key]=value to associate the key key with the value value. To test whether a key is present in the dictionary, you can use the in keyword. For example:\n\nif key in mydict:\n print \"Present\"\nelse:\n print \"Absent\"\n\nTo iterate over the contents of a list or dictionary, you can simply use a for-loop as in:\n\nfor val in mylist:\n # do something with val\n\nfor key in mydict:\n # do something with key or with mydict[key]\n\nSince, in many cases, it is necessary to have both the value and index when iterating over a list, there is also a builtin function called enumerate that saves you the trouble of counting indices yourself:\n\nfor idx, val in enumerate(mylist):\n # do something with val or with idx. Note that val=mylist[idx]\n\nThe code above is identical in function to:\n\nidx=0\nfor val in mylist:\n # process val, idx\n idx += 1\n\nYou could also iterate over the indices if you so chose:\n\nfor idx in xrange(len(mylist)):\n # Do something with idx and possibly mylist[idx]\n\nAlso, you can get the number of elements in a list or the number of keys in a dictionary using len.\nIt is possible to perform an operation on each element of a dictionary or list via the use of list comprehension; however, I would recommend that you simply use for-loops to accomplish that task. But, as an example:\n\n>>> list1 = range(10)\n>>> list1\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n>>> list2 = [2*x for x in list1]\n>>> list2\n[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]\n\nWhen you have the time, I suggest you read the Python tutorial to get some more in-depth knowledge.\n",
"Example using the csv module from docs.python.org:\nimport csv\nreader = csv.reader(open(\"some.csv\", \"rb\"))\nfor row in reader:\n print row\n\nInstead of printing the rows, you could just save each row into a list, and then process it in the ID3 later.\ndatabase.append(row)\n\n",
"Short answer: don't waste time and mental energy (1) reimplementing the built-in csv module (2) reading the csv module's source (it's written in C) -- just USE it!\n",
"Look at csv.DictReader.\nExample:\nimport csv\nreader = csvDictReader(open('my_file.csv','rb') # 'rb' = read binary\nfor d in reader:\n print d # this will print out a dictionary with keys equal to the first row of the file.\n\n",
"Take a look at the built-in CSV module. Though you probably can't just use it, you can take a sneak peek at the code...\nIf that's a no-no, your (pseudo)code looks perfectly fine, though you should make use of the str.split() function and use that, reading the file line-by-line.\n",
"Parse the CSV correctly\nI'd avoid using str.split() to parse the fields because str.split() will not recognize quoted values. And many real-world CSV files use quotes.\nhttp://en.wikipedia.org/wiki/Comma-separated_values\nExample record using quoted values:\n1997,Ford,E350,\"Super, luxurious truck\"\n\nIf you use str.split(), you'll get a record like this with 5 fields:\n('1997', 'Ford', 'E350', '\"Super', ' luxurious truck\"')\n\nBut what you really want are records like this with 4 fields:\n('1997', 'Ford', 'E350', 'Super, luxurious truck')\n\nAlso, besides commas being in the data, you may have to deal with newlines \"\\r\\n\" or just \"\\n\" in the data. For example:\n1997,Ford,E350,\"Super\nluxurious truck\"\n1997,Ford,E250,\"Ok? Truck\"\n\nSo be careful using:\nfile = open('filename.csv', 'r')\nfor line in file:\n # problem here, \"line\" may contain partial data\n\nAlso, like John mentioned, the CSV standard is, that in quotes, if you get a double-double quote, then it turns into one quote.\n1997,Ford,E350,\"Super \"\"luxurious\"\" truck\"\n\n('1997', 'Ford', 'E350', 'Super \"luxurious\" truck')\n\nSo I'd suggest to modify your finite state machine like this:\n\nParse each character at a time.\nCheck to see if it's a quote, then set the state to \"in quote\"\nIf \"in quote\", store all the characters in the current field until there's another quote.\nIf \"in quote\", and there's another quote, store the quote character in the field data. (not the end, because a blank field shouldn't be `data,\"\",data` but instead `data,,data`)\nIf not \"in quote\", store the characters until you find a comma or newline.\nIf comma, save field and start a new field.\nIf newline, save field, save record, start a new record and a new field.\n\nOn a side note, interestingly, I've never seen a header commented out using # in a CSV. So to me, that would imply that you may have to look for commented lines in the data too. Using # to comment out a line in a CSV file is not standard.\nAdding found fields into a record dictionary using header keys\nDepending on memory requirements, if the CSV is small enough (maybe 10k to 100k records), using a dictionary is fine. Just store a list of all the column names so you can access the column name by index (or number). Then in the finite state machine, increment the column index when you find a comma, and reset to 0 when you find a newline.\nSo if your header is header = ['Column1', 'Column2'] Then when you find a data character, add it like this:\nrecord[header[column_index]] += character\n\n",
"I don't know too much about the builtin csv module that @Kaloyan Todorov talks about, but, if you're reading comma separated lines, then you can easily do this:\nfor line in file:\n columns = line.split(',')\n for column in columns:\n print column.strip()\n\nThis will print all the entries of each line without the leading a tailing whitespaces.\n"
] |
[
4,
4,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"csv",
"decision_tree",
"python"
] |
stackoverflow_0002726167_csv_decision_tree_python.txt
|
Q:
What does this code mean in Python?
def a():
print 'sss'
print getattr(a, "_decorated_function", a).__name__
it prints:
a
thanks
updated
my code:
def a():
w='www'
print getattr(a,'w')
but it prints:
Traceback (most recent call last):
File "D:\zjm_code\a.py", line 8, in <module>
print getattr(a,'w')
AttributeError: 'function' object has no attribute 'w'
A:
See the documentation for getattr in Python. The reason it is printing "a", is because "a" has no attribute named "_decorated_function", and the third parameter to getattr() is a default value to return in the event that the first parameter has no attribute with the name of the second parameter. So, your code is the same as:
print a.__name__
Not surprisingly, a's name is "a", hence you get that as the output. By the way, I strongly suggest that you search the Python documentation prior to posting questions here on StackOverflow, as you are more likely to get answers there sooner. You might also find my Development and Coding Search custom search engine useful in finding the relevant Python reference documentation for future queries.
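A tiny demonstration of the three-argument getattr described above (illustrative only):
def a():
    print 'sss'

print getattr(a, "_decorated_function", a) is a   # True: attribute missing, so the default (a itself) is returned
a._decorated_function = "something else"
print getattr(a, "_decorated_function", a)        # now the attribute's value is returned instead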
A:
If a has an attribute called _decorated_function then it returns what that attribute contains, otherwise it returns a. Seriously, this is all in the docs.
A:
In response to the Updated question
Functions must be given their attributes after declaration.
def a():
pass
a.w = 'www'
print a.w
Another method, more similar to other OO languages, is to use a class. This example will make a static attribute:
class a:
w = 'www'
print a.w
This will make w shared between all instances of class a, which is most useful as a constant in the program. If, on the other hand, you will be working with the variable and changing its value, it is better to do the following:
class b:
def __init__(self):
self.w = 'www'
c = b()
print c.w
A:
I'll take a guess:
The code can be broken into 2 steps:
func = getattr(a, "_decorated_function", a)
print func.__name__
Ignore the first line.
The second line prints the name of func, which in your case happens to be 'a'. No surprise.
The first line is there for the case of decorators:
class My_decorator:
def __init__(self,func):
self._decorated_function = func
def __call__(self,arg):
self._decorated_function(arg+1)
@My_decorator
def a(i):
print i
print a(0)
>>> 1
print a._decorated_function.__name__
>>> a
So the objects that you will be calling getattr(a, "_decorated_function", a) are expected to be either functions, or classes that have "decorated" a function.
|
What does this code mean in Python?
|
def a():
print 'sss'
print getattr(a, "_decorated_function", a).__name__
it prints:
a
thanks
updated
my code:
def a():
w='www'
print getattr(a,'w')
but it prints:
Traceback (most recent call last):
File "D:\zjm_code\a.py", line 8, in <module>
print getattr(a,'w')
AttributeError: 'function' object has no attribute 'w'
|
[
"See the documentation for getattr in Python. The reason it is printing \"a\", is because \"a\" has no attribute named \"_decorated_function\", and the third parameter to getattr() is a default value to return in the event that the first parameter has no attribute with the name of the second parameter. So, your code is the same as:\n\nprint a.__name__\n\nNot suprisingly, a's name is \"a\", hence you get that as the output. By the way, I strongly suggest that you search the Python documentation prior to posting questions here on StackOverflow, as you are more likely to get answers there sooner. You might also find my Development and Coding Search custom search engine useful in finding the relevant Python reference documentation for future queries.\n",
"If a has an attribute called _decorated_function then it returns what that attribute contains, otherwise it returns a. Seriously, this is all in the docs.\n",
"In response to the Updated question\nFunctions must be given their attributes after declaration.\ndef a():\n pass\n\na.w = 'www'\nprint a.w\n\nAnother method that is more similar with other OO languages is to use a class. This example will make a static attribute:\nclass a:\n w = 'www'\n\nprint a.w\n\nThis will make w shared between all instances of class a, which is most useful as a constant in the program. If you on the other will be working with the variable and changing its value, it it better to do the following:\nclass b:\n def __init__(self):\n self.w = 'www'\n\nc = b()\nprint c.w\n\n",
"I'll take a guess:\nThe code can be broken into 2 steps:\nfunc = getattr(a, \"_decorated_function\", a)\nprint func.__name__\n\nIgnore the first line.\nThe second line prints the name of func, which in your case happens to be 'a'. No surprise.\nThe first line is there for the case of decorators:\nclass My_decorator:\n def __init__(self,func):\n self._decorated_function = func\n def __call__(self,arg):\n self._decorated_function(arg+1)\n\n@My_decorator\ndef a(i):\n print i\n\nprint a(0)\n>>> 1\n\nprint a._decorated_function.__name__\n>>> a\n\nSo the objects that you will be calling getattr(a, \"_decorated_function\", a) are expected to be either functions, or classes that have \"decorated\" a function.\n"
] |
[
4,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002727449_python.txt
|
Q:
How to create an application (exe) from a Python script for Linux
I'm a newbie at Python programming and I have a .py file. What should I do to create an application from the .py file so that it can be installed and run on any Linux PC? I tried packaging it, but that just creates a .tar file that still needs Python to run it. Is there any way to do this?
thanks
A:
Make sure that the main python file has #! /usr/bin/env python as the first line, then make sure it has execute permission set (should be as easy as chmod +x file_name.py).
A:
From link:
"PyInstaller is a program that converts (packages) Python programs into stand-alone executables, under Windows, Linux, and Mac OS X. Its main advantages over similar tools are that PyInstaller works with any version of Python since 1.5, it builds smaller executables thanks to transparent compression, it is fully multi-platform, and use the OS support to load the dynamic libraries, thus ensuring full compatibility."
I have never used it on Linux, so I can't give you more info.
A:
I think any modern Linux OS has Python installed, so you can just distribute the .py file if it has no weird dependencies.
|
How to create an application (exe) from a Python script for Linux
|
I'm a newbie at Python programming and I have a .py file. What should I do to create an application from the .py file so that it can be installed and run on any Linux PC? I tried packaging it, but that just creates a .tar file that still needs Python to run it. Is there any way to do this?
thanks
|
[
"Make sure that the main python file has #! /usr/bin/env python as the first line, then make sure it has execute permission set (should be as easy as chmod +x file_name.py).\n",
"From link:\n\"PyInstaller is a program that converts (packages) Python programs into stand-alone executables, under Windows, Linux, and Mac OS X. Its main advantages over similar tools are that PyInstaller works with any version of Python since 1.5, it builds smaller executables thanks to transparent compression, it is fully multi-platform, and use the OS support to load the dynamic libraries, thus ensuring full compatibility.\"\nI have not used it never in linux so I can not give you more info\n",
"I think any modern Linux OS has Python installed, so you can just ditribute the .py file if it has no weird dependencies.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002728289_python.txt
|
Q:
How to get path of stroke using opencv?
I want to get the stroke path from an image using OpenCV. I know how to get contours, but I need the path of the stroke (the path that runs through the center of the stroke). Is it possible to get this with OpenCV?
A:
OpenCV does not have a skeletonization algorithm that allows one to get a stroke path. You would need to implement an existing algorithm; a good place to start is here:
http://en.wikipedia.org/wiki/Topological_skeleton
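For what it's worth, a common DIY approach is a morphological skeleton. The sketch below is my own illustration assuming the cv2 Python bindings; it is only an approximation of a topological skeleton, so treat it as a starting point:
import cv2
import numpy as np

img = cv2.imread('strokes.png', 0)                       # load as grayscale
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

skeleton = np.zeros_like(binary)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
while cv2.countNonZero(binary) > 0:
    eroded = cv2.erode(binary, kernel)
    opened = cv2.dilate(eroded, kernel)
    skeleton = cv2.bitwise_or(skeleton, cv2.subtract(binary, opened))
    binary = eroded
# 'skeleton' now holds a 1-pixel-wide approximation of the stroke centerlines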
|
How to get path of stroke using opencv?
|
I want to get the stroke path from an image using OpenCV. I know how to get contours, but I need the path of the stroke (the path that runs through the center of the stroke). Is it possible to get this with OpenCV?
|
[
"OpenCV does not have a skeletonization algorithm that allows one to get a stroke path. You would need to implement an existing algorithm, good place to start is here:\nhttp://en.wikipedia.org/wiki/Topological_skeleton\n"
] |
[
0
] |
[] |
[] |
[
"2d",
"c++",
"image_processing",
"opencv",
"python"
] |
stackoverflow_0002715766_2d_c++_image_processing_opencv_python.txt
|
Q:
How to organize Python modules for PyPI to support 2.x and 3.x
I have a Python module that I would like to upload to PyPI. So far, it is working for Python 2.x. It shouldn't be too hard to write a version for 3.x now.
But, after following guidelines for making modules in these places:
Distributing Python Modules
The Hitchhiker’s Guide to Packaging
it's not clear to me how to support multiple source distributions for different versions of Python, and it's not clear if/how PyPI could support it. I envisage I would have separate code for:
2.x
2.6 (maybe, as a special case to use the new buffer API)
3.x
How is it possible to set up a Python module in PyPI so that someone can do:
easy_install modulename
and it will install the right thing whether the user is using 2.x or 3.x?
A:
I found that setup.py for httplib2 seems to have an elegant way to support Python 2.x and 3.x. So I decided to copy that method.
The task is to craft a single setup.py for the package distribution that works with all the supported Python distributions. Then with the same setup.py, you can do:
python2 setup.py install
as well as
python3 setup.py install
It should be possible to keep setup.py simple enough to be parsed with all the supported Python distributions. I've successfully done so with a package cobs that supports 2.4 through 2.6 as well as 3.1. That package includes pure Python code (separate code for Python 2.x and 3.x) and C extensions, written separately for 2.x and 3.x.
To do it:
1) I put the Python 2.x code into a python2 subdirectory, and Python 3.x code in a python3 subdirectory.
2) I put the C extension code for 2.x and 3.x in a src directory under python2 and python3.
So, the directory structure is:
root
|
+--python2
| |
| +--src
|
+--python3
| |
| +--src
|
+--setup.py
+--MANIFEST.in
3) In the setup.py, I had these lines near the top:
if sys.version_info[0] == 2:
base_dir = 'python2'
elif sys.version_info[0] == 3:
base_dir = 'python3'
4) In the call to setup, I specified the packages as normal:
setup(
...
packages=[ 'cobs', 'cobs.cobs', 'cobs.cobsr', ],
5) I specified the base directory for the Python code using a package_dir option (refer to step 3 for base_dir):
package_dir={
'cobs' : base_dir + '/cobs',
},
6) For the C extensions, I gave the path:
ext_modules=[
Extension('cobs.cobs._cobs_ext', [ base_dir + '/src/_cobs_ext.c', ]),
Extension('cobs.cobsr._cobsr_ext', [ base_dir + '/src/_cobsr_ext.c', ]),
],
That was about it for setup.py. The setup.py file is parsable by both Python 2.x and 3.x.
7) Finally, if you build a source distribution using:
python2 setup.py sdist
then it will by default pull in only the files that are specifically needed to build for that Python. E.g. in the above case, you would only get the files under python2 in the source distribution, but not those under python3. But for a complete source distribution, you want to include the files for both 2.x and 3.x. To do that, create a MANIFEST.in file that contains something like this:
include *.txt
recursive-include python2 *
recursive-include python3 *
To see what I did, see the cobs source code on PyPI or BitBucket.
A:
The simplest solution is to use a single source distribution.
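One concrete way to do that at the time (my own sketch using the distribute/setuptools use_2to3 option, not something stated in this answer) is to keep a single 2.x code base and convert it automatically at install time:
from setuptools import setup

setup(
    name='modulename',
    version='1.0',
    packages=['modulename'],
    use_2to3=True,   # distribute runs 2to3 on the sources when installing under Python 3
)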
|
How to organize Python modules for PyPI to support 2.x and 3.x
|
I have a Python module that I would like to upload to PyPI. So far, it is working for Python 2.x. It shouldn't be too hard to write a version for 3.x now.
But, after following guidelines for making modules in these places:
Distributing Python Modules
The Hitchhiker’s Guide to Packaging
it's not clear to me how to support multiple source distributions for different versions of Python, and it's not clear if/how PyPI could support it. I envisage I would have separate code for:
2.x
2.6 (maybe, as a special case to use the new buffer API)
3.x
How is it possible to set up a Python module in PyPI so that someone can do:
easy_install modulename
and it will install the right thing whether the user is using 2.x or 3.x?
|
[
"I found that setup.py for httplib2 seems to have an elegant way to support Python 2.x and 3.x. So I decided to copy that method.\nThe task is to craft a single setup.py for the package distribution that works with all the supported Python distributions. Then with the same setup.py, you can do:\npython2 setup.py install\n\nas well as\npython3 setup.py install\n\nIt should be possible to keep setup.py simple enough to be parsed with all the supported Python distributions. I've successfully done so with a package cobs that supports 2.4 through 2.6 as well as 3.1. That package includes pure Python code (separate code for Python 2.x and 3.x) and C extensions, written separately for 2.x and 3.x.\nTo do it:\n1) I put the Python 2.x code into a python2 subdirectory, and Python 3.x code in a python3 subdirectory.\n2) I put the C extension code for 2.x and 3.x in a src directory under python2 and python3.\nSo, the directory structure is:\nroot\n |\n +--python2\n | |\n | +--src\n |\n +--python3\n | |\n | +--src\n |\n +--setup.py\n +--MANIFEST.in\n\n3) In the setup.py, I had these lines near the top:\nif sys.version_info[0] == 2:\n base_dir = 'python2'\nelif sys.version_info[0] == 3:\n base_dir = 'python3'\n\n4) In the call to setup, I specified the packages as normal:\nsetup(\n ...\n packages=[ 'cobs', 'cobs.cobs', 'cobs.cobsr', ],\n\n5) I specified the base directory for the Python code using a package_dir option (refer to step 3 for base_dir):\n package_dir={\n 'cobs' : base_dir + '/cobs',\n },\n\n6) For the C extensions, I gave the path:\n ext_modules=[\n Extension('cobs.cobs._cobs_ext', [ base_dir + '/src/_cobs_ext.c', ]),\n Extension('cobs.cobsr._cobsr_ext', [ base_dir + '/src/_cobsr_ext.c', ]),\n ],\n\nThat was about it for setup.py. The setup.py file is parsable by both Python 2.x and 3.x.\n7) Finally, if you build a source distribution using:\npython2 setup.py sdist\n\nthen it will by default pull in only the files that are specifically needed to build for that Python. E.g. in the above case, you would only get the files under python2 in the source distribution, but not those under python3. But for a complete source distribution, you want to include the files for both 2.x and 3.x. To do that, create a MANIFEST.in file that contains something like this:\ninclude *.txt\nrecursive-include python2 *\nrecursive-include python3 *\n\nTo see what I did, see the cobs source code on PyPI or BitBucket.\n",
"The simplest solution is to use a single source distribution.\n"
] |
[
18,
1
] |
[] |
[] |
[
"python",
"python_2.x",
"python_3.x",
"software_distribution"
] |
stackoverflow_0002398626_python_python_2.x_python_3.x_software_distribution.txt
|
Q:
Django Forms Help needed
I'm new to Django and am trying to make a user registration form with a few validations.
Apart from this, I also want username suggestion code that will tell the user whether the username they are trying to register is available or already in use. It should then give a few suggestions that might be available to choose from. Can anyone who has worked on a similar project help me with this?
Thanks
A:
Check out the django-registration application, and have a look at the class registration.forms.RegistrationForm and its clean_username method.
It should be easy to extend the form to suggest some usernames.
Here is some sample code to generate a unique username with numbered postfixes:
username  # filled with user input or first/last name etc.
# check for other users whose names match (including those with a numeric postfix)
others = [int(p.username.replace(username, "0"))
          for p in User.objects.filter(username__startswith=username)
          if p.username.replace(username, "0").isdigit()]
# do we need a postfix?
if len(others) > 0 and 0 in others:
    username = "%s%d" % (username, max(others) + 1)
You could fill the generated names into a form ChoiceField:
http://docs.djangoproject.com/en/dev/ref/forms/fields/#choicefield
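For illustration, a minimal sketch of that idea (the form and variable names here are hypothetical, not part of django-registration): offer the generated alternatives as ChoiceField choices so the user can pick one.
from django import forms

class UsernameSuggestionForm(forms.Form):
    suggestion = forms.ChoiceField(choices=[])
    def __init__(self, suggestions, *args, **kwargs):
        super(UsernameSuggestionForm, self).__init__(*args, **kwargs)
        # suggestions is the list of available usernames generated above
        self.fields['suggestion'].choices = [(name, name) for name in suggestions]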
|
Django Forms Help needed
|
I'm new to Django and am trying to make a user registration form with a few validations.
Apart from this, I also want username suggestion code that will tell the user whether the username they are trying to register is available or already in use. It should then give a few suggestions that might be available to choose from. Can anyone who has worked on a similar project help me with this?
Thanks
|
[
"Check out the django-registration application. And have a look at the Class registration.forms.RegistrationForm and their method clean_username.\nIt should be easy to extend the form to suggest some usernames.\nhere is some sample code to generate unique username with numbered postfixes:\n username # filled with user input or first/lastname etc.\n\n #check for other profile with equal names (and those with a postfix)\n others = [int(username.replace(name, \"0\")) \n for p in User.objects.filter(username__startswith=username).exclude(user=self.user)\n if username.replace(name, \"0\").isdigit()]\n\n #do we need a postfix\n if len(others) > 0 and 0 in others:\n username = \"%s%d\" % (username, max(others) + 1)\n\nyou could fill the generated names in a Form Choice Field:\nhttp://docs.djangoproject.com/en/dev/ref/forms/fields/#choicefield\n"
] |
[
1
] |
[] |
[] |
[
"django_forms",
"python"
] |
stackoverflow_0002719292_django_forms_python.txt
|
Q:
Python BOM error in Ascii file
I have a weird, annoying problem with Python 2.6. I'm trying to run this file (and the other), on my Embedded Linux ARM board.
http://svn.tuxisalive.com/software_suite_v3/smart-core/smart-server/trunk/TDSService.py
I get this error:
File "tuxhttpserver.py", line 1
SyntaxError: encoding problem: with
BOM
I know that error is about BOM bytes and so on, BUT there are NO BOM bytes; it's plain ASCII. I checked with a hex editor, and the Linux file command says it's ASCII.
I'm freaking out here... The code worked fine on my SheevaPlug (also an ARM-based system).
A:
Don't get too hung up on the "with BOM" remark. It's probably not relevant. What this error usually means is that the Python you are trying to run in does not support the encoding you declare. Observe:
% head -1 tmp.py
# -*- coding: asdfasdfasdf -*-
% python tmp.py
File "tmp.py", line 1
SyntaxError: encoding problem: with BOM
The Python installation you are running on this Embedded Linux ARM board probably lacks the 'latin-1' encoding. Since you don't have any non-ASCII characters in your source file, just declare the encoding as 'ascii', or leave out the encoding altogether.
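For illustration, the fix is just a change to the coding declaration at the top of the file (assuming the file really is pure ASCII):
# -*- coding: ascii -*-
# ...or drop the coding line entirely; ASCII is the Python 2.x default.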
|
Python BOM error in Ascii file
|
I have a weird, annoying problem with Python 2.6. I'm trying to run this file (and the other), on my Embedded Linux ARM board.
http://svn.tuxisalive.com/software_suite_v3/smart-core/smart-server/trunk/TDSService.py
I get this error:
File "tuxhttpserver.py", line 1
SyntaxError: encoding problem: with
BOM
I know that error is about BOM bytes and so on, BUT there are NO BOM bytes; it's plain ASCII. I checked with a hex editor, and the Linux file command says it's ASCII.
I'm freaking out here... The code worked fine on my SheevaPlug (also an ARM-based system).
|
[
"Don't get too hung up on the \"with BOM\" remark. It's probably not relevant. What this error usually means is that the Python you are trying to run in does not support the encoding you declare. Observe:\n% head -1 tmp.py\n# -*- coding: asdfasdfasdf -*-\n% python tmp.py\n File \"tmp.py\", line 1\nSyntaxError: encoding problem: with BOM\n\nThe Python installation you are running on this Embedded Linux ARM board probably lacks the 'latin-1' encoding. Since you don't have any non-ASCII characters in your source file, just declare the encoding as 'ascii', or leave out the encoding altogether.\n"
] |
[
10
] |
[] |
[] |
[
"ascii",
"byte_order_mark",
"encoding",
"python"
] |
stackoverflow_0002729260_ascii_byte_order_mark_encoding_python.txt
|
Q:
How to create a CFuncType in Python
I need to pass a callback function that is CFuncType (ctypes.CFUNCTYPE or ctypes.PYFUNCTYPE...).
How can I cast a Python function to a CFuncType, or how can I create a CFuncType function in Python?
A:
I forgot how awesome ctypes is:
Below is Copied from http://docs.python.org/library/ctypes.html
So our callback function receives pointers to integers, and must return an integer. First we create the type for the callback function:
CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))
For the first implementation of the callback function, we simply print the arguments we get, and return 0 (incremental development ;-):
def py_cmp_func(a, b):
print "py_cmp_func", a, b
return 0
Create the C callable callback:
cmp_func = CMPFUNC(py_cmp_func)
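To round the docs example off, here is a minimal end-to-end sketch that hands the callback to C code. It assumes a Unix-like system where find_library("c") locates the C runtime; on Windows you would load msvcrt instead.
from ctypes import CDLL, CFUNCTYPE, POINTER, c_int, sizeof
from ctypes.util import find_library

libc = CDLL(find_library("c"))          # load the C runtime
CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))

def py_cmp_func(a, b):
    return a[0] - b[0]                  # dereference the int pointers and compare

cmp_func = CMPFUNC(py_cmp_func)         # C-callable wrapper around the Python function
values = (c_int * 5)(5, 1, 7, 33, 99)
libc.qsort(values, len(values), sizeof(c_int), cmp_func)
print list(values)                      # [1, 5, 7, 33, 99]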
|
How to create a CFuncType in Python
|
I need to pass a callback function that is CFuncType (ctypes.CFUNCTYPE or ctypes.PYFUNCTYPE...).
How can I cast a Python function to a CFuncType, or how can I create a CFuncType function in Python?
|
[
"I forgot how awesome ctypes is: \nBelow is Copied from http://docs.python.org/library/ctypes.html\nSo our callback function receives pointers to integers, and must return an integer. First we create the type for the callback function:\nCMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))\n\nFor the first implementation of the callback function, we simply print the arguments we get, and return 0 (incremental development ;-):\n def py_cmp_func(a, b):\n print \"py_cmp_func\", a, b\n return 0\n\nCreate the C callable callback:\ncmp_func = CMPFUNC(py_cmp_func)\n\n"
] |
[
12
] |
[] |
[] |
[
"callback",
"ctypes",
"python"
] |
stackoverflow_0002729223_callback_ctypes_python.txt
|
Q:
Common elements between two lists not using sets in Python
I want to count the common elements of two lists. The lists can have duplicate elements, so I can't convert them to sets and use the & operator.
a=[2,2,1,1]
b=[1,1,3,3]
set(a) & set(b) works
a & b doesn't work
Is it possible to do it without sets and dictionaries?
A:
In Python 3.x (and Python 2.7, when it's released), you can use collections.Counter for this:
>>> from collections import Counter
>>> list((Counter([2,2,1,1]) & Counter([1,3,3,1])).elements())
[1, 1]
Here's an alternative using collections.defaultdict (available in Python 2.5 and later). It has the nice property that the order of the result is deterministic (it essentially corresponds to the order of the second list).
from collections import defaultdict
def list_intersection(list1, list2):
bag = defaultdict(int)
for elt in list1:
bag[elt] += 1
result = []
for elt in list2:
if elt in bag:
# remove elt from bag, making sure
# that bag counts are kept positive
if bag[elt] == 1:
del bag[elt]
else:
bag[elt] -= 1
result.append(elt)
return result
For both these solutions, the number of occurrences of any given element x in the output list is the minimum of the numbers of occurrences of x in the two input lists. It's not clear from your question whether this is the behavior that you want.
A:
Using sets is the most efficient, but you could always do r = [i for i in l1 if i in l2].
A:
SilentGhost, Mark Dickinson and Lo'oris are right. Thanks very much for reporting this problem - I need the common part of the lists, so for:
a=[1,1,1,2]
b=[1,1,3,3]
the result should be [1,1].
Sorry for commenting in an unsuitable place - I only registered today.
I modified your solutions:
def count_common(l1,l2):
l2_copy=list(l2)
counter=0
for i in l1:
if i in l2_copy:
counter+=1
l2_copy.remove(i)
return counter
l1=[1,1,1]
l2=[1,2]
print count_common(l1,l2)
1
|
Common elements between two lists not using sets in Python
|
I want to count the common elements of two lists. The lists can have duplicate elements, so I can't convert them to sets and use the & operator.
a=[2,2,1,1]
b=[1,1,3,3]
set(a) & set(b) works
a & b doesn't work
Is it possible to do it without sets and dictionaries?
|
[
"In Python 3.x (and Python 2.7, when it's released), you can use collections.Counter for this:\n>>> from collections import Counter\n>>> list((Counter([2,2,1,1]) & Counter([1,3,3,1])).elements())\n[1, 1]\n\nHere's an alternative using collections.defaultdict (available in Python 2.5 and later). It has the nice property that the order of the result is deterministic (it essentially corresponds to the order of the second list).\nfrom collections import defaultdict\n\ndef list_intersection(list1, list2):\n bag = defaultdict(int)\n for elt in list1:\n bag[elt] += 1\n\n result = []\n for elt in list2:\n if elt in bag:\n # remove elt from bag, making sure\n # that bag counts are kept positive\n if bag[elt] == 1:\n del bag[elt]\n else:\n bag[elt] -= 1\n result.append(elt)\n\n return result\n\nFor both these solutions, the number of occurrences of any given element x in the output list is the minimum of the numbers of occurrences of x in the two input lists. It's not clear from your question whether this is the behavior that you want.\n",
"Using sets is the most efficient, but you could always do r = [i for i in l1 if i in l2].\n",
"SilentGhost, Mark Dickinson and Lo'oris are right, Thanks very much for report this problem - I need common part of lists, so for:\na=[1,1,1,2]\nb=[1,1,3,3]\nresult should be [1,1]\nSorry for comment in not suitable place - I have registered today.\nI modified yours solutions:\ndef count_common(l1,l2):\n l2_copy=list(l2)\n counter=0\n for i in l1:\n if i in l2_copy:\n counter+=1\n l2_copy.remove(i)\n return counter\n\nl1=[1,1,1]\nl2=[1,2]\nprint count_common(l1,l2)\n\n\n1\n"
] |
[
12,
8,
0
] |
[] |
[] |
[
"list",
"python",
"set"
] |
stackoverflow_0002727650_list_python_set.txt
|
Q:
efficient list mapping in python
I have the following input:
input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]
and trying to have the following output:
outputlist = [[0, 0, 1, 2], [1, 3, 4, 2]]
outputmapping = {0:'dog', 1:'cat', 2:'mouse', 3:'ruby', 4:'python'}
Any tips on how to handle this with scalability in mind (the input can get really large)?
A:
You probably want something like:
import collections
import itertools
def build_catalog(L):
counter = itertools.count().next
names = collections.defaultdict(counter)
result = []
for t in L:
new_t = [ names[item] for item in t ]
result.append(new_t)
catalog = dict((name, idx) for idx, name in names.iteritems())
return result, catalog
Using it:
>>> input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]
>>> outputlist, outputmapping = build_catalog(input)
>>> outputlist
[[0, 0, 1, 2], [1, 3, 4, 2]]
>>> outputmapping
{0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}
A:
This class will automatically map objects to increasing integer values:
class AutoMapping(object):
def __init__(self):
self.map = {}
self.objects = []
def __getitem__(self, val):
if val not in self.map:
self.map[val] = len(self.objects)
self.objects.append(val)
return self.map[val]
Example usage, for your input:
>>> input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]
>>> map = AutoMapping()
>>> [[map[x] for x in y] for y in input]
[[0, 0, 1, 2], [1, 3, 4, 2]]
>>> map.objects
['dog', 'cat', 'mouse', 'ruby', 'python']
>>> dict(enumerate(map.objects))
{0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}
A:
Here is one possible solution, although it isn't the greatest. It could be made slightly more efficient if you know how many elements each entry in the list will have before-hand, by pre-allocating them.
labels = []
label2index = {}
outputlist = []
for group in input:
    current = []
    for label in group:
        if label not in label2index:
            label2index[label] = len(labels)
            labels.append(label)
        current.append(label2index[label])
    outputlist.append(current)
outputmapping = {}
for idx, val in enumerate(labels):
    outputmapping[idx] = val
A:
I had the same problem quite often in my projects, so I wrapped up a class some time ago that does exactly this:
class UniqueIdGenerator(object):
"""A dictionary-like class that can be used to assign unique integer IDs to
names.
Usage:
>>> gen = UniqueIdGenerator()
>>> gen["A"]
0
>>> gen["B"]
1
>>> gen["C"]
2
>>> gen["A"] # Retrieving already existing ID
0
>>> len(gen) # Number of already used IDs
3
"""
def __init__(self, id_generator=None):
"""Creates a new unique ID generator. `id_generator` specifies how do we
assign new IDs to elements that do not have an ID yet. If it is `None`,
elements will be assigned integer identifiers starting from 0. If it is
an integer, elements will be assigned identifiers starting from the given
integer. If it is an iterator or generator, its `next` method will be
called every time a new ID is needed."""
if id_generator is None:
id_generator = 0
if isinstance(id_generator, int):
import itertools
self._generator = itertools.count(id_generator)
else:
self._generator = id_generator
self._ids = {}
def __getitem__(self, item):
"""Retrieves the ID corresponding to `item`. Generates a new ID for `item`
if it is the first time we request an ID for it."""
try:
return self._ids[item]
except KeyError:
self._ids[item] = self._generator.next()
return self._ids[item]
def __len__(self):
"""Retrieves the number of added elements in this UniqueIDGenerator"""
return len(self._ids)
def reverse_dict(self):
"""Returns the reversed mapping, i.e., the one that maps generated IDs to their
corresponding items"""
return dict((v, k) for k, v in self._ids.iteritems())
def values(self):
"""Returns the list of items added so far. Items are ordered according to
the standard sorting order of their keys, so the values will be exactly
in the same order they were added if the ID generator generates IDs in
ascending order. This hold, for instance, to numeric ID generators that
assign integers starting from a given number."""
return sorted(self._ids.keys(), key = self._ids.__getitem__)
Usage example:
>>> input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]
>>> gen = UniqueIdGenerator()
>>> outputlist = [[gen[x] for x in y] for y in input]
>>> print outputlist
[[0, 0, 1, 2], [1, 3, 4, 2]]
>>> outputmapping = gen.reverse_dict()
>>> print outputmapping
{0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}
|
efficient list mapping in python
|
I have the following input:
input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]
and trying to have the following output:
outputlist = [[0, 0, 1, 2], [1, 3, 4, 2]]
outputmapping = {0:'dog', 1:'cat', 2:'mouse', 3:'ruby', 4:'python'}
Any tips on how to handle this with scalability in mind (the input can get really large)?
|
[
"You probably want something like:\nimport collections\nimport itertools\n\ndef build_catalog(L):\n counter = itertools.count().next\n names = collections.defaultdict(counter)\n result = []\n for t in L:\n new_t = [ names[item] for item in t ]\n result.append(new_t)\n catalog = dict((name, idx) for idx, name in names.iteritems())\n return result, catalog\n\nUsing it:\n>>> input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]\n>>> outputlist, outputmapping = build_catalog(input)\n>>> outputlist\n[[0, 0, 1, 2], [1, 3, 4, 2]]\n>>> outputmapping\n{0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}\n\n",
"This class will automatically map objects to increasing integer values:\nclass AutoMapping(object):\n def __init__(self):\n self.map = {}\n self.objects = []\n\n def __getitem__(self, val):\n if val not in self.map:\n self.map[val] = len(self.objects)\n self.objects.append(val)\n return self.map[val]\n\nExample usage, for your input:\n>>> input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]\n>>> map = AutoMapping()\n>>> [[map[x] for x in y] for y in input]\n[[0, 0, 1, 2], [1, 3, 4, 2]]\n>>> map.objects\n['dog', 'cat', 'mouse', 'ruby', 'python']\n>>> dict(enumerate(map.objects))\n{0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}\n\n",
"Here is one possible solution, although it isn't the greatest. It could be made slightly more efficient if you know how many elements each entry in the list will have before-hand, by pre-allocating them.\nlabels=[];\nlabel2index={};\noutputlist=[];\nfor group in input:\n current=[];\n for label in group:\n if label not in label2index:\n label2index[label]=len(labels);\n labels.append(label);\n current.append(label2index[label]);\n outputlist.append(current);\n\noutputmapping={};\nfor idx, val in enumerate(labels):\n outputmapping[idx]=val;\n\n",
"I had the same problem quite often in my projects, so I wrapped up a class some time ago that does exactly this:\nclass UniqueIdGenerator(object):\n \"\"\"A dictionary-like class that can be used to assign unique integer IDs to\n names.\n\n Usage:\n\n >>> gen = UniqueIdGenerator()\n >>> gen[\"A\"]\n 0\n >>> gen[\"B\"]\n 1\n >>> gen[\"C\"]\n 2\n >>> gen[\"A\"] # Retrieving already existing ID\n 0\n >>> len(gen) # Number of already used IDs\n 3\n \"\"\"\n\n def __init__(self, id_generator=None):\n \"\"\"Creates a new unique ID generator. `id_generator` specifies how do we\n assign new IDs to elements that do not have an ID yet. If it is `None`,\n elements will be assigned integer identifiers starting from 0. If it is\n an integer, elements will be assigned identifiers starting from the given\n integer. If it is an iterator or generator, its `next` method will be\n called every time a new ID is needed.\"\"\"\n if id_generator is None:\n id_generator = 0\n if isinstance(id_generator, int):\n import itertools\n self._generator = itertools.count(id_generator)\n else:\n self._generator = id_generator\n self._ids = {}\n\n def __getitem__(self, item):\n \"\"\"Retrieves the ID corresponding to `item`. Generates a new ID for `item`\n if it is the first time we request an ID for it.\"\"\"\n try:\n return self._ids[item]\n except KeyError:\n self._ids[item] = self._generator.next()\n return self._ids[item]\n\n def __len__(self):\n \"\"\"Retrieves the number of added elements in this UniqueIDGenerator\"\"\"\n return len(self._ids)\n\n def reverse_dict(self):\n \"\"\"Returns the reversed mapping, i.e., the one that maps generated IDs to their\n corresponding items\"\"\"\n return dict((v, k) for k, v in self._ids.iteritems())\n\n def values(self):\n \"\"\"Returns the list of items added so far. Items are ordered according to\n the standard sorting order of their keys, so the values will be exactly\n in the same order they were added if the ID generator generates IDs in\n ascending order. This hold, for instance, to numeric ID generators that\n assign integers starting from a given number.\"\"\"\n return sorted(self._ids.keys(), key = self._ids.__getitem__)\n\nUsage example:\n>>> input = [(dog, dog, cat, mouse), (cat, ruby, python, mouse)]\n>>> gen = UniqueIdGenerator()\n>>> outputlist = [[gen[x] for x in y] for y in input]\n[[0, 0, 1, 2], [1, 3, 4, 2]]\n>>> print outputlist\n>>> outputmapping = gen.reverse_dict()\n>>> print outputmapping\n{0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}\n\n"
] |
[
6,
2,
0,
0
] |
[] |
[] |
[
"dictionary",
"list",
"mapping",
"python",
"python_itertools"
] |
stackoverflow_0002729135_dictionary_list_mapping_python_python_itertools.txt
|
Q:
Does dict.update affect a function's argspec?
import inspect
class Test:
def test(self, p, d={}):
d.update(p)
return d
print inspect.getargspec(getattr(Test, 'test'))[3]
print Test().test({'1':True})
print inspect.getargspec(getattr(Test, 'test'))[3]
I would expect the argspec for Test.test not to change but because of dict.update it does. Why?
A:
Because dicts are mutable objects. When you call d.update(p), you are actually mutating the default instance of the dict. This is a common catch; in particular, you should never use a mutable object as a default value in the list of arguments.
A better way to do this is as follows:
class Test:
def test(self, p, d = None):
if d is None:
d = {}
d.update(p)
return d
A:
A default argument in Python is whatever object was set when the function was defined, even if you set a mutable object. This SO question should explain what that means and why Python behaves this way: "Least astonishment" in Python: the mutable default argument.
Basically, the same default object is used every time the function is called, rather than a new copy being made each time. For example:
>>> def f(xs=[]):
... xs.append(5)
... print xs
...
>>> f()
[5]
>>> f()
[5, 5]
The easiest way around this is to make your actual default argument None, and then simply check for None and provide a default in the function, for example:
>>> def f(xs=None):
... if xs is None:
... xs = []
... xs.append(5)
... print xs
...
>>> f()
[5]
>>> f()
[5]
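To tie this back to the argspec observation in the question, here is a minimal sketch (using the same inspect call as the original) showing that with a None default the reported default no longer mutates:
import inspect

class Test:
    def test(self, p, d=None):
        if d is None:
            d = {}
        d.update(p)
        return d

print inspect.getargspec(getattr(Test, 'test'))[3]   # (None,)
Test().test({'1': True})
print inspect.getargspec(getattr(Test, 'test'))[3]   # still (None,)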
|
Does dict.update affect a function's argspec?
|
import inspect
class Test:
def test(self, p, d={}):
d.update(p)
return d
print inspect.getargspec(getattr(Test, 'test'))[3]
print Test().test({'1':True})
print inspect.getargspec(getattr(Test, 'test'))[3]
I would expect the argspec for Test.test not to change but because of dict.update it does. Why?
|
[
"Because dicts are mutable objects. When you call d.update(p), you are actually mutating the default instance of the dict. This is a common catch; in particular, you should never use a mutable object as a default value in the list of arguments.\nA better way to do this is as follows:\nclass Test:\n def test(self, p, d = None):\n if d is None:\n d = {}\n d.update(p)\n return d\n\n",
"A default argument in Python is whatever object was set when the function was defined, even if you set a mutable object. This question should explain what that means and why Python is the SO question least astonishment in python: the mutable default argument.\nBasically, the same default object is used every time the function is called, rather than a new copy being made each time. For example:\n>>> def f(xs=[]):\n... xs.append(5)\n... print xs\n... \n>>> f()\n[5]\n>>> f()\n[5, 5]\n\nThe easiest way around this is to make your actual default argument None, and then simply check for None and provide a default in the function, for example:\n>>> def f(xs=None):\n... if xs is None:\n... xs = []\n... xs.append(5)\n... print xs\n... \n>>> f()\n[5]\n>>> f()\n[5]\n\n"
] |
[
5,
2
] |
[] |
[] |
[
"inspect",
"python"
] |
stackoverflow_0002730107_inspect_python.txt
|
Q:
Django ForeignModels
How do I get/set foreign key fields on a model object without touching the database and loading the related object?
A:
Django actually appends an '_id' to ForeignKey field names and with 'field_name_id' you can get or set the integer id value directly:
class MyModel(models.Model):
field = models.ForeignKey(MyOtherModel)
mymodel_instance = MyModel.objects.get(pk=1)
# queries database for related object and the result is a MyOtherModel instance
print mymodel_instance.field
# result is simply the integer id value, does not do any query
print mymodel_instance.field_id
A:
You can use .select_related() to load up related models as part of the initial query. You can then get/set properties without a database hit.
Remember that to save stuff, you will need to hit the database.
For details on how to use it, try the documentation.
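For illustration, a minimal sketch reusing the MyModel/field names from the answer above (they are just placeholders):
# fetch the related object in the same query, so later attribute access is free
mymodel_instance = MyModel.objects.select_related('field').get(pk=1)
# no additional query here; the related row was joined in up front
print mymodel_instance.field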
|
Django ForeignModels
|
How do I get/set foreign key fields on a model object without touching the database and loading the related object?
|
[
"Django actually appends an '_id' to ForeignKey field names and with 'field_name_id' you can get or set the integer id value directly:\nclass MyModel(models.Model):\n field = models.ForeignKey(MyOtherModel)\n\nmymodel_instance = MyModel.objects.get(pk=1)\n# queries database for related object and the result is a MyOtherModel instance\nprint mymodel_instance.field\n# result is simply the integer id value, does not do any query\nprint mymodel_instance.field_id\n\n",
"You can use .select_related() to load up related models as part of the initial query. You can then get/set properties without a database hit.\nRemember that to save stuff, you will need to hit the database.\nFor details on how to use it, try the documentation.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002726417_django_python.txt
|
Q:
Python Lambdas and Variable Bindings
I've been working on a basic testing framework for an automated build. The piece of code below represents a simple test of communication between two machines using different programs. Before I actually do any tests, I want to completely define them - so this test below is not actually run until after all the tests have been declared. This piece of code is simply a declaration of a test.
remoteTests = []
for client in clients:
t = Test(
name = 'Test ' + str(host) + ' => ' + str(client),
cmds = [
host.start(CMD1),
client.start(CMD2),
host.wait(5),
host.stop(CMD1),
client.stop(CMD2),
],
passIf = lambda : client.returncode(CMD2) == 0
)
remoteTests.append(t)
Anyhow, after the test is run, it runs the function defined by 'passIf'. Since I want to run this test for multiple clients, I'm iterating them and defining a test for each - no big deal. However, after running the test on the first client, the 'passIf' evaluates on the last one in the clients list, not the 'client' at the time of the lambda declaration.
My question, then: when does Python bind variable references in lambdas? I figured if using a variable from outside the lambda was not legal, the interpreter would have no idea what I was talking about. Instead, it silently bound to the instance of the last 'client'.
Also, is there a way to force the resolution the way I intended it?
A:
The client variable is defined in the outer scope, so by the time the lambda is run it will always be set to the last client in the list.
To get the intended result, you can give the lambda an argument with a default value:
passIf = lambda client=client: client.returncode(CMD2) == 0
Since the default value is evaluated at the time the lambda is defined, its value will remain correct.
Another way is to create the lambda inside a function:
def createLambda(client):
return lambda: client.returncode(CMD2) == 0
#...
passIf = createLambda(client)
Here the lambda refers to the client variable in the createLambda function, which has the correct value.
A:
What happens is that your passIf argument, the lambda, refers to the variable client from the enclosing scope. It doesn't refer to the object the variable client refers to when it is created, but the variable itself. If you call these passIf after the loop has ended, that means they all refer to the last value in the loop. (In closure terminology, Python's closures are late-binding, not early-binding.)
Fortunately it's fairly easy to make a late-binding closure into an early-binding closure. You can do it by simply giving the lambda an argument with as default the value you want to bind:
passIf = lambda client=client: client.returncode(CMD2) == 0
That does mean the function gets that extra argument, and might mess things up if it gets called with an argument by accident -- or when you want the function to take arbitrary arguments. So another technique is to do it like this:
# Before your loop:
def make_passIf(client):
return lambda: client.returncode(CMD2) == 0
# In the loop
t = Test(
...
passIf = make_passIf(client)
)
|
Python Lambdas and Variable Bindings
|
I've been working on a basic testing framework for an automated build. The piece of code below represents a simple test of communication between two machines using different programs. Before I actually do any tests, I want to completely define them - so this test below is not actually run until after all the tests have been declared. This piece of code is simply a declaration of a test.
remoteTests = []
for client in clients:
t = Test(
name = 'Test ' + str(host) + ' => ' + str(client),
cmds = [
host.start(CMD1),
client.start(CMD2),
host.wait(5),
host.stop(CMD1),
client.stop(CMD2),
],
passIf = lambda : client.returncode(CMD2) == 0
)
remoteTests.append(t)
Anyhow, after the test is run, it runs the function defined by 'passIf'. Since I want to run this test for multiple clients, I'm iterating them and defining a test for each - no big deal. However, after running the test on the first client, the 'passIf' evaluates on the last one in the clients list, not the 'client' at the time of the lambda declaration.
My question, then: when does Python bind variable references in lambdas? I figured if using a variable from outside the lambda was not legal, the interpreter would have no idea what I was talking about. Instead, it silently bound to the instance of the last 'client'.
Also, is there a way to force the resolution the way I intended it?
|
[
"The client variable is defined in the outer scope, so by the time the lambda is run it will always be set to the last client in the list.\nTo get the intended result, you can give the lambda an argument with a default value:\npassIf = lambda client=client: client.returncode(CMD2) == 0\n\nSince the default value is evaluated at the time the lambda is defined, its value will remain correct.\nAnother way is to create the lambda inside a function:\ndef createLambda(client):\n return lambda: client.returncode(CMD2) == 0\n#...\npassIf = createLambda(client)\n\nHere the lambda refers to the client variable in the createLambda function, which has the correct value.\n",
"What happens is that your passIf argument, the lambda, refers to the variable client from the enclosing scope. It doesn't refer to the object the variable client refers to when it is created, but the variable itself. If you call these passIf after the loop has ended, that means they all refer to the last value in the loop. (In closure terminology, Python's closures are late-binding, not early-binding.) \nFortunately it's fairly easy to make a late-binding closure into an early-binding closure. You can do it by simply giving the lambda an argument with as default the value you want to bind:\npassIf = lambda client=client: client.returncode(CMD2) == 0\n\nThat does mean the function gets that extra argument, and might mess things up if it gets called with an argument by accident -- or when you want the function to take arbitrary arguments. So another technique is to do it like this:\n# Before your loop:\ndef make_passIf(client):\n return lambda: client.returncode(CMD2) == 0\n\n# In the loop\nt = Test(\n ...\n passIf = make_passIf(client)\n)\n\n"
] |
[
10,
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002731111_python.txt
|
Q:
Distutils - Where Am I going wrong?
I wanted to learn how to create python packages, so I visited http://docs.python.org/distutils/index.html.
For this exercise I'm using Python 2.6.2 on Windows XP.
I followed along with the simple example and created a small test project:
person/
setup.py
person/
__init__.py
person.py
My person.py file is simple:
class Person(object):
def __init__(self, name="", age=0):
self.name = name
self.age = age
def sound_off(self):
print "%s %d" % (self.name, self.age)
And my setup.py file is:
from distutils.core import setup
setup(name='person',
version='0.1',
packages=['person'],
)
I ran python setup.py sdist and it created MANIFEST, dist/ and build/. Next I ran python setup.py install and it installed it to my site packages directory.
I run the python console and can import the person module, but I cannot import the Person class.
>>>import person
>>>from person import Person
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name Person
I checked the files added to site-packages and checked sys.path in the console; they seem OK. Why can't I import the Person class? Where did I go wrong?
A:
person/
__init__.py
person.py
You've got a package called person, and a module inside it called person.person. You defined the class in that module, so to access it you'd have to say:
import person.person
p= person.person.Person('Tim', 42)
If you want to put members directly inside the package person, you'd put them in the __init__.py file.
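For illustration, a minimal sketch of what that __init__.py could contain so that from person import Person works as you expected:
# person/__init__.py
from person.person import Person   # re-export the class at package level
After reinstalling, both import person.person and from person import Person will work.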
A:
Your question isn't really about distutils packages, but about Python packages -- a related but different thing with the same name. Packages in Python are a separate kind of module, that are directories with an __init__.py file. You created a person package with a person module with a Person class. import person gives you the package. If you want the person module inside the person package, you need import person.person. And if you want the Person class inside the person module inside the person package, you need from person.person import Person.
These things get a lot more obvious when you don't give different things the same name, and also when you don't put classes in separate modules for their own sake. Also see Should I create each class in its own .py file?
|
Distutils - Where Am I going wrong?
|
I wanted to learn how to create python packages, so I visited http://docs.python.org/distutils/index.html.
For this exercise I'm using Python 2.6.2 on Windows XP.
I followed along with the simple example and created a small test project:
person/
setup.py
person/
__init__.py
person.py
My person.py file is simple:
class Person(object):
def __init__(self, name="", age=0):
self.name = name
self.age = age
def sound_off(self):
print "%s %d" % (self.name, self.age)
And my setup.py file is:
from distutils.core import setup
setup(name='person',
version='0.1',
packages=['person'],
)
I ran python setup.py sdist and it created MANIFEST, dist/ and build/. Next I ran python setup.py install and it installed it to my site packages directory.
I run the python console and can import the person module, but I cannot import the Person class.
>>>import person
>>>from person import Person
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name Person
I checked the files added to site-packages and checked sys.path in the console; they seem OK. Why can't I import the Person class? Where did I go wrong?
|
[
"person/\n __init__.py\n person.py\n\nYou've got a package called person, and a module inside it called person.person. You defined the class in that module, so to access it you'd have to say:\nimport person.person\np= person.person.Person('Tim', 42)\n\nIf you want to put members directly inside the package person, you'd put them in the __init__.py file.\n",
"Your question isn't really about distutils packages, but about Python packages -- a related but different thing with the same name. Packages in Python are a separate kind of module, that are directories with an __init__.py file. You created a person package with a person module with a Person class. import person gives you the package. If you want the person module inside the person package, you need import person.person. And if you want the Person class inside the person module inside the person package, you need from person.person import Person.\nThese things get a lot more obvious when you don't give different things the same name, and also when you don't put classes in separate modules for their own sake. Also see Should I create each class in its own .py file?\n"
] |
[
4,
2
] |
[] |
[] |
[
"distutils",
"python"
] |
stackoverflow_0002731452_distutils_python.txt
|
Q:
SQL wrappers (ActiveRecord) to recommend for Python?
Is there an ActiveRecord (or any similar SQL wrapper) for Python which is good for:
used in a server-side python script
light-weight
supports MySQL
what I need to do:
insert (filename, file size, file md5, the file itself) into (string, int, string, BLOB) columns
if the same file (checksum + filename) does not exist in db
Thanks
A:
You might consider SQLAlchemy along with Elixir:
Elixir is a declarative layer on top of the SQLAlchemy library. It is a fairly thin wrapper, which provides the ability to create simple Python classes that map directly to relational database tables (this pattern is often referred to as the Active Record design pattern), providing many of the benefits of traditional databases without losing the convenience of Python objects.
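For illustration, a rough sketch of what the described use case could look like with Elixir on top of SQLAlchemy. The connection string, entity name and helper function are assumptions, and the exact imports may vary with the Elixir/SQLAlchemy versions you install.
from elixir import Entity, Field, metadata, session, setup_all, create_all
from sqlalchemy import Unicode, Integer, Binary

metadata.bind = "mysql://user:password@localhost/mydb"   # placeholder connection string

class StoredFile(Entity):
    filename = Field(Unicode(255))
    size = Field(Integer)
    md5 = Field(Unicode(32))
    data = Field(Binary)

setup_all()
create_all()

def store_if_new(filename, size, md5, data):
    # insert only if no row with the same filename + checksum already exists
    if StoredFile.query.filter_by(filename=filename, md5=md5).first() is None:
        StoredFile(filename=filename, size=size, md5=md5, data=data)
        session.commit()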
|
SQL wrappers (ActiveRecord) to recommend for Python?
|
Is there an ActiveRecord (or any similar SQL wrapper) for Python which is good for:
used in a server-side python script
light-weight
supports MySQL
what I need to do:
insert (filename, file size, file md5, the file itself) into (string, int, string, BLOB) columns
if the same file (checksum + filename) does not exist in db
Thanks
|
[
"You might consider SQLAlchemy along with Elixir:\n\nElixir is a declarative layer on top of the SQLAlchemy library. It is a fairly thin wrapper, which provides the ability to create simple Python classes that map directly to relational database tables (this pattern is often referred to as the Active Record design pattern), providing many of the benefits of traditional databases without losing the convenience of Python objects.\n\n"
] |
[
5
] |
[] |
[] |
[
"activerecord",
"orm",
"python"
] |
stackoverflow_0002727249_activerecord_orm_python.txt
|
Q:
Automatic logout in python web app
I have a web application in Python wherein the user submits their email and password. These values are compared to values stored in a MySQL database. If successful, the script generates a session id, stores it next to the email in the database and sets a cookie with the session id, which allows the user to interact with other parts of the site.
When the user clicks logout, the script erases the session id from the database and deletes the cookie. The cookie expires after 5 hours. My concern is that if the user doesn't log out and the cookie expires, the script will force him to log in, but if he has copied the session id from before, it can still be validated.
How do I automatically delete the session id from the MySQL database after 5 hours?
A:
You can encode the expiration time as part of your session id.
Then when you validate the session id, you can also check if it has expired, and if so force the user to log-in again.
You can also clean your database periodically, removing expired sessions.
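For illustration, a minimal sketch of that periodic cleanup (it assumes MySQLdb and a sessions table with a created_at TIMESTAMP column; both names are placeholders):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="username", passwd="password", db="mydb")
cur = conn.cursor()
# drop every session row older than five hours
cur.execute("DELETE FROM sessions WHERE created_at < NOW() - INTERVAL 5 HOUR")
conn.commit()
cur.close()
conn.close()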
A:
You'd have to add a timestamp to the session ID in the database to know when it ran out.
Better, make the timestamp part of the session ID itself, eg.:
Set-Cookie: session=1272672000-(random_number);expires=Sat, 01-May-2010 00:00:00 GMT
so that your script can see just by looking at the number at the front whether it's still a valid session ID.
Better still, make both the expiry timestamp and the user ID part of the session ID, and make the bit at the end a cryptographic hash of the user ID, expiry time and a secret key. Then you can validate the ID and know which user it belongs to without having to store anything in the database. (Related PHP example.)
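For illustration, a minimal sketch of such a self-validating session id using the standard hmac/hashlib modules (SECRET_KEY is a placeholder, and the user id is assumed not to contain a '-'):
import hashlib
import hmac
import time

SECRET_KEY = "replace-with-a-real-secret"

def make_session_id(user_id, lifetime=5 * 3600):
    expires = int(time.time()) + lifetime
    payload = "%d-%s" % (expires, user_id)
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha1).hexdigest()
    return "%s-%s" % (payload, sig)

def validate_session_id(session_id):
    try:
        payload, sig = session_id.rsplit("-", 1)
        expires, user_id = payload.split("-", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha1).hexdigest()
    if sig != expected or int(expires) < time.time():
        return None          # forged or expired
    return user_id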
|
Automatic logout in python web app
|
I have a web application in Python wherein the user submits their email and password. These values are compared to values stored in a MySQL database. If successful, the script generates a session id, stores it next to the email in the database and sets a cookie with the session id, which allows the user to interact with other parts of the site.
When the user clicks logout, the script erases the session id from the database and deletes the cookie. The cookie expires after 5 hours. My concern is that if the user doesn't log out and the cookie expires, the script will force him to log in, but if he has copied the session id from before, it can still be validated.
How do I automatically delete the session id from the MySQL database after 5 hours?
|
[
"You can encode the expiration time as part of your session id.\nThen when you validate the session id, you can also check if it has expired, and if so force the user to log-in again.\nYou can also clean your database periodically, removing expired sessions.\n",
"You'd have to add a timestamp to the session ID in the database to know when it ran out.\nBetter, make the timestamp part of the session ID itself, eg.:\nSet-Cookie: session=1272672000-(random_number);expires=Sat, 01-May-2010 00:00:00 GMT\n\nso that your script can see just by looking at the number at the front whether it's still a valid session ID.\nBetter still, make both the expiry timestamp and the user ID part of the session ID, and make the bit at the end a cryptographic hash of the user ID, expiry time and a secret key. Then you can validate the ID and know which user it belongs to without having to store anything in the database. (Related PHP example.)\n"
] |
[
1,
0
] |
[] |
[] |
[
"logging",
"mysql",
"python",
"web_applications"
] |
stackoverflow_0002731871_logging_mysql_python_web_applications.txt
|
Q:
Python: Control timeout length
I have code similar to the following running in a script:
try:
s = ftplib.FTP('xxx.xxx.xxx.xxx','username','password')
except:
print ('Could not contact FTP serer')
sys.exit()
If the FTP site is inaccessible, the script almost seems to 'hang'... It takes about 75 seconds on average before sys.exit() appears to be called... I know the 75 seconds is probably very subjective and dependent on the system this runs on... but is there a way to have Python just try this once and, if unsuccessful, exit immediately? The platform I am using for this is Mac OS X 10.5/Python 2.5.1.
A:
Starting with 2.6, the FTP constructor has an optional timeout parameter:
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])
Return a new instance of the FTP class. When host is given, the method call connect(host) is made. When user is given, additionally the method call login(user, passwd, acct) is made (where passwd and acct default to the empty string when not given). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if is not specified, the global default timeout setting will be used).
Changed in version 2.6: timeout was added.
Starting with version 2.3 and up, the global default timeout can be utilized:
socket.setdefaulttimeout(timeout)
Set the default timeout in floating seconds for new socket objects. A value of None indicates that new socket objects have no timeout. When the socket module is first imported, the default is None.
New in version 2.3.
A:
since you are on python 2.5, you can set a global timeout for all socket operations (including FTP requests) by using:
socket.setdefaulttimeout()
(this was added in Python 2.3)
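For illustration, a minimal sketch of that approach applied to the snippet from the question (the 10-second value is just an example):
import socket
import ftplib
import sys

socket.setdefaulttimeout(10)   # global default timeout, in seconds

try:
    s = ftplib.FTP('xxx.xxx.xxx.xxx', 'username', 'password')
except:
    print ('Could not contact FTP server')
    sys.exit()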
A:
http://docs.python.org/library/socket.html#socket.socket.settimeout
A:
if you look at the doc, there's a timeout parameter.
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])¶
maybe you can use that.
A:
A comment to those suggesting using 'socket.setdefaulttimeout()'. Internally, ftplib makes use of sock.makefile(). According to the python docs, you shouldn't mix makefile() with timeouts. Specifically:
http://docs.python.org/library/socket.html#socket.socket.makefile
Of course, I can't say that I've seen any problems, it's just that it worries me.
|
Python: Control timeout length
|
I have code similar to the following running in a script:
try:
s = ftplib.FTP('xxx.xxx.xxx.xxx','username','password')
except:
print ('Could not contact FTP server')
sys.exit()
If the FTP site is inaccessible, the script almost seems to 'hang'... It takes about 75 seconds on average before sys.exit() appears to be called... I know the 75 seconds is probably very subjective and dependent on the system this runs on... but is there a way to have Python just try this once and, if unsuccessful, exit immediately? The platform I am using for this is Mac OS X 10.5/Python 2.5.1.
|
[
"Starting with 2.6, the FTP constructor has an optional timeout parameter:\n\nclass ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])\nReturn a new instance of the FTP class. When host is given, the method call connect(host) is made. When user is given, additionally the method call login(user, passwd, acct) is made (where passwd and acct default to the empty string when not given). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if is not specified, the global default timeout setting will be used).\nChanged in version 2.6: timeout was added.\n\nStarting with version 2.3 and up, the global default timeout can be utilized:\n\nsocket.setdefaulttimeout(timeout)\nSet the default timeout in floating seconds for new socket objects. A value of None indicates that new socket objects have no timeout. When the socket module is first imported, the default is None.\nNew in version 2.3.\n\n",
"since you are on python 2.5, you can set a global timeout for all socket operations (including FTP requests) by using:\nsocket.setdefaulttimeout()\n\n(this was added in Python 2.3)\n",
"http://docs.python.org/library/socket.html#socket.socket.settimeout\n",
"if you look at the doc, there's a timeout parameter. \nclass ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])¶\nmaybe you can use that.\n",
"A comment to those suggesting using 'socket.setdefaulttimeout()'. Internally, ftplib makes use of sock.makefile(). According to the python docs, you shouldn't mix makefile() with timeouts. Specifically:\n http://docs.python.org/library/socket.html#socket.socket.makefile\nOf course, I can't say that I've seen any problems, it's just that it worries me.\n"
] |
[
7,
2,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002355743_python.txt
|
Q:
Javascript JQUERY AJAX: When Are These Implemented
I'm learning javascript. Poked around this excellent site to gather intel. Keep coming across questions / answers about javascript, JQUERY, JQUERY with AJAX, javascript with JQUERY, AJAX alone. My conclusion: these are all individually powerful and useful. My confusion: how does one determine which/which combination to use ?
I've concluded that javascript is readily available on most browsers. For example, I can extend a simple HTML page with
<html>
<body>
<script type="text/javascript">
document.write("Hello World!");
</script>
</body>
</html>
However, within the scope of Python/DJANGO, many of these questions are JQUERY and AJAX related. At which point or under what development circumstances would I conclude that javascript alone isn't going to "cut it", and I need to implement JQUERY and/or AJAX and/or some other permutation ?
A:
Javascript is code that runs client-side in the browser.
AJAX is a term used to refer to the process of Javascript contacting the webserver directly and getting a response as opposed to the user navigating to a different page
jQuery is a javascript library that provides an easy-to-use abstraction over top of AJAX and the browser DOM
Django is Python code that runs server-side
In some situations you can do the same operation on either the client or on the server. Typically however, you determine if it should be done client/server by asking yourself, "where is the resource that needs to be used located?" For example, querying a database would be done on the webserver, because that's where that resource is. Conversely re-arranging the webpage's UI is done client-side, because that's where the the UI is.
Javascript alone can always "cut it", but the advantage jQuery brings is that it makes things easier and faster, and cuts out a lot of the browser issues in doing AJAX and DOM manipulation.
A:
Since you are new to Javascript development, I'll try with relatable examples.
You can vote questions up or down on StackOverflow. Your vote action is sent to the server, and it gets recorded there. Had it not been for AJAX (and some other techniques), the entire page would need to be refreshed for that one action. AJAX solves the problem of asynchronously communicating with a server without requiring full page reloads.
jQuery is a library that provides convenient access to common Javascript tasks such as DOM manipulation, AJAX handling, etc. jQuery also hides away browser differences and provides a consistent interface for the end user. To illustrate these two points, see these examples:
finding all div elements on the page
// Javascript
var divs = document.getElementsByTagName("div")
// jQuery
$("div")
adding a click event handler to a button (illustrates browser differences)
With pure Javascript, it's best to create a cross-browser method to add events, as you surely wouldn't want to write this code every single time. Source - http://www.scottandrew.com/weblog/articles/cbs-events
function addEvent(obj, evType, fn, useCapture){
if (obj.addEventListener) { // standards-based browsers
obj.addEventListener(evType, fn, useCapture);
return true;
} else if (obj.attachEvent) { // IE
var r = obj.attachEvent("on"+evType, fn);
return r;
} else { // some unknown browser
alert("Handler could not be attached");
}
}
Once this is setup (one-time only), you can add events to any elements using this function.
// Javascript
var button = document.getElementById("buttonID");
addEvent(button, "click", function() { alert("clicked"); }, false);
// jQuery (contains code similar to above function to handle browser differences)
$("#buttonID").click(function() { alert("clicked"); });
AJAX is part of Javascript and not a separate technology in itself. You would use AJAX to avoid doing full page refreshes when you need to send/receive data from the server.
jQuery, MooTools, Dojo, Ext.JS, Prototype.JS, and many other libraries provide a wrapper around Javascript to abstract away browser differences, and provide an easier interface to work with. The question is would you want to do all of this re-work yourselves. If you're not exactly sure what re-work you may need to do, researching pure Javascript examples of common tasks such as AJAX calls, DOM manipulation, event handling, along with abstracting away browser quirks and comparing those to examples to equivalents in libraries such as jQuery might be a good start.
A:
The reasons are exactly the same as why you chose to use Django instead of Python alone.
jQuery is a JavaScript library which will make your life easier and extends JavaScript.
Beyond the fact that jQuery is useful, I advise you to learn JavaScript first, just as you should have learnt Python before using Django.
To summarize:
pure JavaScript => simple code, when a native function exists for what you need
jQuery => complex code, rich applications, functions that don't exist in pure JavaScript (the $.each() method, for example).
|
Javascript JQUERY AJAX: When Are These Implemented
|
I'm learning javascript. Poked around this excellent site to gather intel. Keep coming across questions / answers about javascript, JQUERY, JQUERY with AJAX, javascript with JQUERY, AJAX alone. My conclusion: these are all individually powerful and useful. My confusion: how does one determine which/which combination to use ?
I've concluded that javascript is readily available on most browsers. For example, I can extend a simple HTML page with
<html>
<body>
<script type="text/javascript">
document.write("Hello World!");
</script>
</body>
</html>
However, within the scope of Python/DJANGO, many of these questions are JQUERY and AJAX related. At which point or under what development circumstances would I conclude that javascript alone isn't going to "cut it", and I need to implement JQUERY and/or AJAX and/or some other permutation ?
|
[
"\nJavascript is code that runs client-side in the browser.\nAJAX is a term used to refer to the process of Javascript contacting the webserver directly and getting a response as opposed to the user navigating to a different page\njQuery is a javascript library that provides an easy-to-use abstraction over top of AJAX and the browser DOM\nDjango is Python code that runs server-side\n\nIn some situations you can do the same operation on either the client or on the server. Typically however, you determine if it should be done client/server by asking yourself, \"where is the resource that needs to be used located?\" For example, querying a database would be done on the webserver, because that's where that resource is. Conversely re-arranging the webpage's UI is done client-side, because that's where the the UI is. \nJavascript alone can always \"cut it\", but the advantage jQuery brings is that it makes things easier and faster, and cuts out a lot of the browser issues in doing AJAX and DOM manipulation.\n",
"Since you are new to Javascript development, I'll try with relatable examples.\nYou can vote questions up or down on StackOverflow. Your vote action is sent to the server, and it gets recorded there. Had it not been for AJAX (and some other techniques), the entire page would need to be refreshed for that one action. AJAX solves the problem of asynchronously communicating with a server without requiring full page reloads.\njQuery is a library that provides convenient access to common Javascript tasks such as DOM manipulation, AJAX handling, etc. jQuery also hides away browser differences and provides a consistent interface for the end user. To illustrate these two points, see these examples:\nfinding all div elements on the page\n// Javascript\nvar divs = document.getElementsByTagName(\"div\")\n\n// jQuery\n$(\"div\")\n\nadding a click event handler to a button (illustrates browser differences)\nWith pure Javascript, it's best to create a cross-browser method to add events, as you surely wouldn't want to write this code every single time. Source - http://www.scottandrew.com/weblog/articles/cbs-events\nfunction addEvent(obj, evType, fn, useCapture){\n if (obj.addEventListener) { // standards-based browsers\n obj.addEventListener(evType, fn, useCapture);\n return true;\n } else if (obj.attachEvent) { // IE\n var r = obj.attachEvent(\"on\"+evType, fn);\n return r;\n } else { // some unknown browser\n alert(\"Handler could not be attached\");\n }\n}\n\nOnce this is setup (one-time only), you can add events to any elements using this function.\n// Javascript\nvar button = document.getElementById(\"buttonID\");\naddEvent(button, \"click\", function() { alert(\"clicked\"); }, false);\n\n// jQuery (contains code similar to above function to handle browser differences)\n$(\"#buttonID\").click(function() { alert(\"clicked\"); });\n\nAJAX is part of Javascript and not a separate technology in itself. You would use AJAX to avoid doing full page refreshes when you need to send/receive data from the server.\njQuery, MooTools, Dojo, Ext.JS, Prototype.JS, and many other libraries provide a wrapper around Javascript to abstract away browser differences, and provide an easier interface to work with. The question is would you want to do all of this re-work yourselves. If you're not exactly sure what re-work you may need to do, researching pure Javascript examples of common tasks such as AJAX calls, DOM manipulation, event handling, along with abstracting away browser quirks and comparing those to examples to equivalents in libraries such as jQuery might be a good start.\n",
"The reason are exactly the same as you choose to use Django instead of Python alone.\njQuery is javascript library which will make your life easier and extends javascript.\nBeyond the fact that jQuery is useful, I advise you to learn javascript first, as you should have learnt python before to use Django.\nTo resume :\npure javascript => simple code, native function exists for what you need\njQuery => complex code, rich application, functions doesn't exists in pure javascript ($.each() method for example).\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"ajax",
"django",
"javascript",
"jquery",
"python"
] |
stackoverflow_0002731825_ajax_django_javascript_jquery_python.txt
|
Q:
How to add a constructor to a subclassed numeric type?
I want to subclass a numeric type (say, int) in python and give it a shiny complex constructor. Something like this:
class NamedInteger(int):
def __init__(self, value):
super(NamedInteger, self).__init__(value)
self.name = 'pony'
def __str__(self):
return self.name
x = NamedInteger(5)
print x + 3
print str(x)
This works fine under Python 2.4, but Python 2.6 gives a deprecation warning. What is the best way to subclass a numeric type and to redefine constructors for builtin types in newer Python versions?
Edit:
Spotted in comments that this works without a super() line, so it could be like this:
class NamedInteger(int):
def __init__(self, value):
self.name = 'pony'
def __str__(self):
return self.name
x = NamedInteger(5)
print x + 3
print str(x)
I believe that this works because int is immutable type and has only __new__ method. However I would be glad to know a correct way of subclassing, so I could build a class with behaviour like this:
x = NamedInteger(5, 'kitty')
Second edit:
The final version now looks like this:
class NamedInteger(int):
def __new__(cls, value, name='pony'):
self = super(NamedInteger, cls).__new__(cls, value)
self.name = name
return self
def __str__(self):
return self.name
x = NamedInteger(5)
y = NamedInteger(3, 'kitty')
print "%d %d" % (x, y)
print "%s %s" % (x, y)
Answers below also gave very interesting links to Abstract Base Classes and numbers modules.
A:
As of Python 2.6, the preferred way to extend numeric types is not to directly inherit from them, but rather to register your class as a subclass of the Number abstract base class. Check out the abc module for documentation of the Abstract Base Class concept.
That module's documentation links to the numbers module, which contains the abstract base classes you can choose to declare yourself part of. So basically you'd say
import numbers
numbers.Number.register(NamedInteger)
to indicate that your class is a type of number.
Of course, the problem with this is that it requires you to implement all the various handler methods such as __add__, __mul__, etc. However, you'd really have to do this anyway, since you can't rely on the int class' implementation of those operations to do the correct thing for your class. For example, what's supposed to happen when you add an integer to a named integer?
My understanding is that the ABC approach is intended to force you to confront those questions. In this case the simplest thing to do is probably to keep an int as an instance variable of your class; in other words while you will register your class to give it the is-a relationship with Number, your implementation gives it a has-a relationship with int.
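For illustration, a minimal sketch of that has-a approach might look like the following (the names and the choice of operations are my own, not part of the original answer):
import numbers

class NamedInteger(object):
    def __init__(self, value, name='pony'):
        self._value = int(value)      # has-a: the real int lives inside the instance
        self.name = name
    def __int__(self):
        return self._value
    def __add__(self, other):
        # decide explicitly what "named integer + integer" should mean
        return NamedInteger(self._value + int(other), self.name)
    def __str__(self):
        return self.name

# is-a: declare the class a Number via the ABC machinery
numbers.Number.register(NamedInteger)
Whether the result of __add__ keeps the name, as it does here, is exactly the kind of question this approach forces you to confront.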
A:
You have to use __new__ instead of __init__ when you subclass immutable built-in types, e.g. :
class NamedInteger(int):
def __new__(cls, value, name='pony'):
inst = super(NamedInteger, cls).__new__(cls, value)
inst.name = name
return inst
def __str__(self):
return self.name
x = NamedInteger(5)
print x + 3 # => 8
print str(x) # => pony
x = NamedInteger(3, "foo")
print x + 3 # => 6
print str(x) # => foo
A:
It will work fine if you don't pass value to super(NamedInteger, self).__init__()
I wonder why though, I'm learning from your post :-)
|
How to add a constructor to a subclassed numeric type?
|
I want to subclass a numeric type (say, int) in python and give it a shiny complex constructor. Something like this:
class NamedInteger(int):
def __init__(self, value):
super(NamedInteger, self).__init__(value)
self.name = 'pony'
def __str__(self):
return self.name
x = NamedInteger(5)
print x + 3
print str(x)
This works fine under Python 2.4, but Python 2.6 gives a deprecation warning. What is the best way to subclass a numeric type and to redefine constructors for builtin types in newer Python versions?
Edit:
Spotted in comments that this works without a super() line, so it could be like this:
class NamedInteger(int):
def __init__(self, value):
self.name = 'pony'
def __str__(self):
return self.name
x = NamedInteger(5)
print x + 3
print str(x)
I believe that this works because int is immutable type and has only __new__ method. However I would be glad to know a correct way of subclassing, so I could build a class with behaviour like this:
x = NamedInteger(5, 'kitty')
Second edit:
The final version now looks like this:
class NamedInteger(int):
def __new__(cls, value, name='pony'):
self = super(NamedInteger, cls).__new__(cls, value)
self.name = name
return self
def __str__(self):
return self.name
x = NamedInteger(5)
y = NamedInteger(3, 'kitty')
print "%d %d" % (x, y)
print "%s %s" % (x, y)
Answers below also gave very interesting links to Abstract Base Classes and numbers modules.
|
[
"As of Python 2.6, the preferred way to extend numeric types is not to directly inherit from them, but rather to register your class as a subclass of the Number abstract base class. Check out the abc module for documentation of the Abstract Base Class concept.\nThat module's documentation links to the numbers module, which contains the abstract base classes you can choose to declare yourself part of. So basically you'd say\nimport numbers\nnumbers.Number.register(NamedInteger)\n\nto indicate that your class is a type of number.\nOf course, the problem with this is that it requires you to implement all the various handler methods such as __add__, __mul__, etc. However, you'd really have to do this anyway, since you can't rely on the int class' implementation of those operations to do the correct thing for your class. For example, what's supposed to happen when you add an integer to a named integer?\nMy understanding is that the ABC approach is intended to force you to confront those questions. In this case the simplest thing to do is probably to keep an int as an instance variable of your class; in other words while you will register your class to give it the is-a relationship with Number, your implementation gives it a has-a relationship with int.\n",
"You have to use __new__ instead of __init__ when you subclass immutable built-in types, e.g. :\nclass NamedInteger(int):\n\n def __new__(cls, value, name='pony'):\n inst = super(NamedInteger, cls).__new__(cls, value)\n inst.name = name\n return inst\n\n def __str__(self):\n return self.name\n\nx = NamedInteger(5)\nprint x + 3 # => 8 \nprint str(x) # => pony\nx = NamedInteger(3, \"foo\") \nprint x + 3 # => 6\nprint str(x) # => foo\n\n",
"It will work fine if you don't pass value to super(NamedInteger, self).__init__()\nI wonder why though, I'm learning from your post :-)\n"
] |
[
5,
5,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002732256_python.txt
|
Q:
Problems serving static files in CherryPy 3.1
I'm having some trouble serving a static XML stylesheet to accompany some dynamically generated output from a CherryPy web app. Even my test case serving a static text file fails.
Static file blah.txt is in the /static directory in my application root directory.
In my main site file (conesearch.py contains the CherryPy ConeSearch page-handler class):
import conesearch
cherrypy.config.update('site.config')
cherrypy.tree.mount(conesearch.ConeSearch(), "/ucac3", 'ucac3.config')
...
And in site.config I have the following options:
[/]
tools.staticdir.root: conesearch.current_dir
[/static]
tools.staticdir.on: True
tools.staticdir.dir: 'static'
where current_dir = os.path.dirname(os.path.abspath(__file__)) in conesearch.py
However, my simple test page (taken straight from http://www.cherrypy.org/wiki/StaticContent) fails with a 404:
def test(self):
return """
<html>
<head>
<title>CherryPy static tutorial</title>
</head>
<body>
<a href="/static/blah.txt">Link</a>
</body>
</html>"""
test.exposed = True
It is trying to access 127.0.0.1:8080/static/blah.txt, which by my reckoning should be AOK. Any thoughts or suggestions?
Cheers,
Simon
A:
cherrypy.config.update should only receive a single-level dictionary (mostly server.* entries), but you're passing it a multi-level dictionary of settings that should really be per-app (and therefore passed to tree.mount).
Move those [/] and [/static] sections from your site.config file to your ucac3.config file, and it should work fine.
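For example, the split might end up looking roughly like this (the server values are illustrative):
# site.config - global/server-wide settings only
[global]
server.socket_host: '127.0.0.1'
server.socket_port: 8080

# ucac3.config - per-application settings, picked up by tree.mount
[/]
tools.staticdir.root: conesearch.current_dir
[/static]
tools.staticdir.on: True
tools.staticdir.dir: 'static'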
A:
I serve static files like this:
config = {'/static':
{'tools.staticdir.on': True,
'tools.staticdir.dir': PATH_TO_STATIC_FILES,
}
}
cherrypy.tree.mount(MyApp(), '/', config=config)
A:
I have a similar setup. Let's say that I want the root of my site to be at http://mysite.com/site and that the root of my site/app is at /path/to/www.
I have the following config settings in my server.cfg and am finding my static files without a problem:
[global]
...
app.mount_point = '/site'
tools.staticdir.root = '/path/to/www/'
[/static]
tools.staticdir.on = True
tools.staticdir.dir = 'static'
I'm serving up dojo files, etc, from within the static directory without a problem, as well as css. I'm also using genshi for templating, and using the cherrypy.url() call to ensure that my other URLs are properly set. That allows me to change app.mount_point and have my links update as well.
|
Problems serving static files in CherryPy 3.1
|
I'm having some trouble serving a static XML stylesheet to accompany some dynamically generated output from a CherryPy web app. Even my test case serving a static text file fails.
Static file blah.txt is in the /static directory in my application root directory.
In my main site file (conesearch.py contains the CherryPy ConeSearch page-handler class):
import conesearch
cherrypy.config.update('site.config')
cherrypy.tree.mount(conesearch.ConeSearch(), "/ucac3", 'ucac3.config')
...
And in site.config I have the following options:
[/]
tools.staticdir.root: conesearch.current_dir
[/static]
tools.staticdir.on: True
tools.staticdir.dir: 'static'
where current_dir = os.path.dirname(os.path.abspath(__file__)) in conesearch.py
However, my simple test page (taken straight from http://www.cherrypy.org/wiki/StaticContent) fails with a 404:
def test(self):
return """
<html>
<head>
<title>CherryPy static tutorial</title>
</head>
<body>
<a href="/static/blah.txt">Link</a>
</body>
</html>"""
test.exposed = True
It is trying to access 127.0.0.1:8080/static/blah.txt, which by my reckoning should be AOK. Any thoughts or suggestions?
Cheers,
Simon
|
[
"cherrypy.config.update should only receive a single-level dictionary (mostly server.* entries), but you're passing it a multi-level dictionary of settings that should really be per-app (and therefore passed to tree.mount).\nMove those [/] and [/static] sections from your site.config file to your ucac3.config file, and it should work fine.\n",
"I serve static files like this:\nconfig = {'/static':\n {'tools.staticdir.on': True,\n 'tools.staticdir.dir': PATH_TO_STATIC_FILES,\n }\n }\n\ncherrypy.tree.mount(MyApp(), '/', config=config)\n\n",
"I have a similar setup. Let's say that I want the root of my site to be at http://mysite.com/site and that the root of my site/app is at /path/to/www.\nI have the following config settings in my server.cfg and am finding my static files without a problem:\n[global]\n...\napp.mount_point = '/site'\ntools.staticdir.root = '/path/to/www/'\n[/static]\ntools.staticdir.on = True\ntools.staticdir.dir = 'static'\n\nI'm serving up dojo files, etc, from within the static directory without a problem, as well as css. I'm also using genshi for templating, and using the cherrypy.url() call to ensure that my other URLs are properly set. That allows me to change app.mount_point and have my links update as well.\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"cherrypy",
"python"
] |
stackoverflow_0002496458_cherrypy_python.txt
|
Q:
Words that don't start with numbers
I have a string "one two 9three 52eight four", so I only want to get "one two four", because "three" starts with "9" and "eight" starts with "52".
I tried:
"(?!\d)\w+"
but it still matches "three" and "eight", which I don't want.
A:
Try
\b[a-zA-Z]\w*
A:
that's because \w includes number. what you need to do is:
>>> s = "one two 9three 52eight four"
>>> import re
>>> re.findall(r'\b[a-z]+\b', s, re.I)
['one', 'two', 'four']
Also, what you're using (?!...) is called negative look-ahead, while you probably meant negative look-behind (?<!...), which would of course still fail because of above-mentioned issue.
eta: then you just need a single word border:
>>> re.findall(r'\b(?!\d)\w+', s)
['one', 'two', 'four']
A:
Works fine for me:
import re
l = "one two 9three 52eight four".split()
c = re.compile("(?!\d)\w+")
m = [w for w in l if re.match(c, w)]
print m
Prints:
['one', 'two', 'four']
A:
regexp might be overkill.
In [3]: [word for word in eg.split(' ') if not word[0].isdigit()]
Out[3]: ['one', 'two', 'four']
|
Words that don't start with numbers
|
I have a string "one two 9three 52eight four", so I only want to get "one two four", because "three" starts with "9" and "eight" starts with "52".
I tried:
"(?!\d)\w+"
but it still matches "three" and "eight", which I don't want.
|
[
"Try\n\\b[a-zA-Z]\\w*\n\n",
"that's because \\w includes number. what you need to do is:\n>>> s = \"one two 9three 52eight four\"\n>>> import re\n>>> re.findall(r'\\b[a-z]+\\b', s, re.I)\n['one', 'two', 'four']\n\nAlso, what you're using (?!...) is called negative look-ahead, while you probably meant negative look-behind (?<!...), which would of course still fail because of above-mentioned issue.\neta: then you just need a single word border:\n>>> re.findall(r'\\b(?!\\d)\\w+', s)\n['one', 'two', 'four']\n\n",
"Works fine for me:\nimport re\n\nl = \"one two 9three 52eight four\".split()\nc = re.compile(\"(?!\\d)\\w+\")\n\nm = [w for w in l if re.match(c, w)]\nprint m\n\nPrints:\n['one', 'two', 'four']\n\n",
"regexp might be overkill.\nIn [3]: [word for word in eg.split(' ') if not word[0].isdigit()]\nOut[3]: ['one', 'two', 'four']\n\n"
] |
[
4,
2,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0002730094_python_regex.txt
|
Q:
wx Python is not properly drawing customtree items
I am currently using wx.CustomTreeCtrl to display a series of configuration settings. I generally fill the items with a wx.TextCtrl / wx.ComboBox to allow the user to edit / enter values. Here is my code:
class ConfigTree(CT.CustomTreeCtrl):
"""
Holds all non gui drawing panel stuff
"""
def __init__(self, parent):
CT.CustomTreeCtrl.__init__(self, parent,
id = common.ID_CONTROL_SETTINGS,
style = wx.TR_DEFAULT_STYLE | wx.TR_HAS_BUTTONS
| wx.TR_HAS_VARIABLE_ROW_HEIGHT | wx.TR_SINGLE)
#self.HideWindows()
#self.RefreshSubtree(self.root)
self.population_size_ctrl = None
self.SetSizeHints(350, common.FRAME_SIZE[1])
self.root = self.AddRoot("Configuration Settings")
child = self.AppendItem(self.root, "Foo", wnd=wx.TextCtrl(self, wx.ID_ANY, "Lots Of Muffins"))
The problem is that for any child nodes, the data is not filled in. When I expand the configuration settings tree node, I see the "Foo" node, but the textbox is empty. This is the same for both text nodes, until I actually click on the child node. I've tried every form of update, etc. Does anyone have any ideas?
To: Anurag Uniyal
Firstly, sorry for not giving the rest of the code. I've gotten around this problem by simply resizing the window every time I demo the application.
So I tried the code on my MacBook Pro running Mac OS X, with the newest wx and Python 2.6. I still have the same problem; however, I noticed that resizing the window, or even touching the scrollbar, fixes the issue.
I also noticed, however, that there are absolutely NO problems running on Windows Vista / Windows 7.
Trying this on another MacBook running an older version of wx + Python results in the same problem :(
Is there any way to force the panel to redraw itself? I am pretty sure that is what happens when I resize the window.
If you don't have any ideas, I'll strip it down and make a demo example; I'm home and won't be at work until later tomorrow.
A:
I tested it on Windows with wx version 2.8.10.1 and it works; which OS and wx version are you using?
here is self contained code, which can be copy-pasted and run
import wx
import wx.lib.customtreectrl as CT
class ConfigTree(CT.CustomTreeCtrl):
"""
Holds all non gui drawing panel stuff
"""
def __init__(self, parent):
CT.CustomTreeCtrl.__init__(self, parent,
id = -1,
style = wx.TR_DEFAULT_STYLE | wx.TR_HAS_BUTTONS
| wx.TR_HAS_VARIABLE_ROW_HEIGHT | wx.TR_SINGLE)
self.population_size_ctrl = None
self.SetSizeHints(350, 350)
self.root = self.AddRoot("Configuration Settings")
child = self.AppendItem(self.root, "Foo", wnd=wx.TextCtrl(self, wx.ID_ANY, "Lots Of Muffins"))
def main():
app = wx.App()
frame = wx.Frame(None, title="Test tree", size=(500,500))
p = wx.Panel(frame, size=(500,500))
tree = ConfigTree(p)
tree.SetSize((500,500))
frame.Show()
app.MainLoop()
if __name__ == '__main__':
main()
A:
You can use RefreshItems if you are using virtual controls, or you could refresh the panel, which would update the contents of all the children windows (widgets).
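For example, a forced redraw from the code in the question might look something like this (a guess at a workaround, not a tested fix):
# after building the tree, repaint it and flush the paint event immediately
tree = ConfigTree(p)
tree.Refresh()
tree.Update()
# or refresh the containing panel so all child widgets get redrawn
p.Refresh()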
|
wx Python is not properly drawing customtree items
|
I am currently using wx.CustomTreeCtrl to display a series of configuration settings. I generally fill the items with a wx.TextCtrl / wx.ComboBox to allow the user to edit / enter values. Here is my code:
class ConfigTree(CT.CustomTreeCtrl):
"""
Holds all non gui drawing panel stuff
"""
def __init__(self, parent):
CT.CustomTreeCtrl.__init__(self, parent,
id = common.ID_CONTROL_SETTINGS,
style = wx.TR_DEFAULT_STYLE | wx.TR_HAS_BUTTONS
| wx.TR_HAS_VARIABLE_ROW_HEIGHT | wx.TR_SINGLE)
#self.HideWindows()
#self.RefreshSubtree(self.root)
self.population_size_ctrl = None
self.SetSizeHints(350, common.FRAME_SIZE[1])
self.root = self.AddRoot("Configuration Settings")
child = self.AppendItem(self.root, "Foo", wnd=wx.TextCtrl(self, wx.ID_ANY, "Lots Of Muffins"))
The problem is that for any child nodes, the data is not filled in. When I expand the configuration settings tree node, I see the "Foo" node, but the textbox is empty. This is the same for both text nodes, until I actually click on the child node. I've tried every form of update, etc. Does anyone have any ideas?
To: Anurag Uniyal
Firstly, sorry for not giving the rest of the code. I've gotten around this problem by simply resizing the window every time I demo the application.
So I tried the code on my MacBook Pro running Mac OS X, with the newest wx and Python 2.6. I still have the same problem; however, I noticed that resizing the window, or even touching the scrollbar, fixes the issue.
I also noticed, however, that there are absolutely NO problems running on Windows Vista / Windows 7.
Trying this on another MacBook running an older version of wx + Python results in the same problem :(
Is there any way to force the panel to redraw itself? I am pretty sure that is what happens when I resize the window.
If you don't have any ideas, I'll strip it down and make a demo example; I'm home and won't be at work until later tomorrow.
|
[
"I tested it on window with wx version 2.8.10.1 and it works, which OS and wx version you are using?\nhere is self contained code, which can be copy-pasted and run\nimport wx\nimport wx.lib.customtreectrl as CT\n\nclass ConfigTree(CT.CustomTreeCtrl):\n \"\"\"\n Holds all non gui drawing panel stuff\n \"\"\"\n def __init__(self, parent):\n CT.CustomTreeCtrl.__init__(self, parent,\n id = -1,\n style = wx.TR_DEFAULT_STYLE | wx.TR_HAS_BUTTONS\n | wx.TR_HAS_VARIABLE_ROW_HEIGHT | wx.TR_SINGLE)\n self.population_size_ctrl = None\n self.SetSizeHints(350, 350)\n self.root = self.AddRoot(\"Configuration Settings\")\n child = self.AppendItem(self.root, \"Foo\", wnd=wx.TextCtrl(self, wx.ID_ANY, \"Lots Of Muffins\"))\n\ndef main():\n app = wx.App()\n frame = wx.Frame(None, title=\"Test tree\", size=(500,500))\n p = wx.Panel(frame, size=(500,500))\n tree = ConfigTree(p)\n tree.SetSize((500,500))\n frame.Show()\n app.MainLoop()\n\nif __name__ == '__main__':\n main()\n\n",
"You can use RefreshItems if you are using virtual controls, or you could refresh the panel, which would update the contents of all the children windows (widgets).\n"
] |
[
1,
1
] |
[] |
[] |
[
"python",
"wxpython"
] |
stackoverflow_0002622843_python_wxpython.txt
|
Q:
post_save signal on m2m field
I have a pretty generic Article model with an m2m relation to a Tag model. I want to keep a count of each tag's usage; I think the best way would be to denormalise a count field on the Tag model and update it each time an Article is saved. How can I accomplish this, or is there maybe a better way?
A:
This is a new feature in Django 1.2:
http://docs.djangoproject.com/en/dev/ref/signals/#m2m-changed
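A rough sketch of how it could be used for the tag-count case in the question (the Article.tags field, the Tag.count field and the article_set reverse accessor are assumptions about the models):
from django.db.models.signals import m2m_changed

def update_tag_counts(sender, instance, action, **kwargs):
    # recount only once the relation has actually changed
    if action not in ('post_add', 'post_remove', 'post_clear'):
        return
    for tag in Tag.objects.all():
        tag.count = tag.article_set.count()
        tag.save()

m2m_changed.connect(update_tag_counts, sender=Article.tags.through)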
A:
You can do this by creating an intermediate model for the M2M relationship and use it as your hook for the post_save and post_delete signals to update the denormalised column in the Article table.
For example, I do this for favourited Question counts in soclone, where Users have a M2M relationship with Questions:
from django.contrib.auth.models import User
from django.db import connection, models, transaction
from django.db.models.signals import post_delete, post_save
class Question(models.Model):
# ...
favourite_count = models.PositiveIntegerField(default=0)
class FavouriteQuestion(models.Model):
question = models.ForeignKey(Question)
user = models.ForeignKey(User)
def update_question_favourite_count(instance, **kwargs):
"""
Updates the favourite count for the Question related to the given
FavouriteQuestion.
"""
if kwargs.get('raw', False):
return
cursor = connection.cursor()
cursor.execute(
'UPDATE soclone_question SET favourite_count = ('
'SELECT COUNT(*) from soclone_favouritequestion '
'WHERE soclone_favouritequestion.question_id = soclone_question.id'
') '
'WHERE id = %s', [instance.question_id])
transaction.commit_unless_managed()
post_save.connect(update_question_favourite_count, sender=FavouriteQuestion)
post_delete.connect(update_question_favourite_count, sender=FavouriteQuestion)
# Very, very naughty
User.add_to_class('favourite_questions',
models.ManyToManyField(Question, through=FavouriteQuestion,
related_name='favourited_by'))
There's been a bit of discussion on the django-developers mailing list about implementing a means of declaratively declaring denormalisations to avoid having to write code like the above:
Denormalisation, magic, and is it really that useful?
Denormalisation Magic, Round Two
|
post_save signal on m2m field
|
I have a pretty generic Article model with an m2m relation to a Tag model. I want to keep a count of each tag's usage; I think the best way would be to denormalise a count field on the Tag model and update it each time an Article is saved. How can I accomplish this, or is there maybe a better way?
|
[
"This is a new feature in Django 1.2:\nhttp://docs.djangoproject.com/en/dev/ref/signals/#m2m-changed\n",
"You can do this by creating an intermediate model for the M2M relationship and use it as your hook for the post_save and post_delete signals to update the denormalised column in the Article table.\nFor example, I do this for favourited Question counts in soclone, where Users have a M2M relationship with Questions:\nfrom django.contrib.auth.models import User\nfrom django.db import connection, models, transaction\nfrom django.db.models.signals import post_delete, post_save\n\nclass Question(models.Model):\n # ...\n favourite_count = models.PositiveIntegerField(default=0)\n\nclass FavouriteQuestion(models.Model):\n question = models.ForeignKey(Question)\n user = models.ForeignKey(User)\n\ndef update_question_favourite_count(instance, **kwargs):\n \"\"\"\n Updates the favourite count for the Question related to the given\n FavouriteQuestion.\n \"\"\"\n if kwargs.get('raw', False):\n return\n cursor = connection.cursor()\n cursor.execute(\n 'UPDATE soclone_question SET favourite_count = ('\n 'SELECT COUNT(*) from soclone_favouritequestion '\n 'WHERE soclone_favouritequestion.question_id = soclone_question.id'\n ') '\n 'WHERE id = %s', [instance.question_id])\n transaction.commit_unless_managed()\n\npost_save.connect(update_question_favourite_count, sender=FavouriteQuestion)\npost_delete.connect(update_question_favourite_count, sender=FavouriteQuestion)\n\n# Very, very naughty\nUser.add_to_class('favourite_questions',\n models.ManyToManyField(Question, through=FavouriteQuestion,\n related_name='favourited_by'))\n\nThere's been a bit of discussion on the django-developers mailing list about implementing a means of declaratively declaring denormalisations to avoid having to write code like the above:\n\nDenormalisation, magic, and is it really that useful?\nDenormalisation Magic, Round Two \n\n"
] |
[
3,
2
] |
[] |
[] |
[
"django",
"django_signals",
"python"
] |
stackoverflow_0000240659_django_django_signals_python.txt
|
Q:
Can I use an opened gzip file with Popen in Python?
I have a little command line tool that reads from stdin.
On the command line I would run either...
./foo < bar
or ...
cat bar | ./foo
With a gziped file I can run
zcat bar.gz | ./foo
in Python I can do ...
Popen(["./foo", ], stdin=open('bar'), stdout=PIPE, stderr=PIPE)
but I can't do
import gzip
Popen(["./foo", ], stdin=gzip.open('bar'), stdout=PIPE, stderr=PIPE)
I wind up having to run
p0 = Popen(["zcat", "bar"], stdout=PIPE, stderr=PIPE)
Popen(["./foo", ], stdin=p0.stdout, stdout=PIPE, stderr=PIPE)
Am I doing something wrong?
Why can't I use gzip.open('bar') as an stdin arg to Popen?
A:
Because the 'stdin' and 'stdout' of the subprocess take a file descriptor (which is a number), which is an operating system resource. This is masked by the fact that if you pass an object, the subprocess module checks whether the object has a 'fileno' attribute and, if it has one, uses it.
The 'gzip' object is not something the operating system provides. An open file is, a socket is, a pipe is. A gzip object provides read() and write() methods but no fileno attribute.
You can look at the communicate() method of subprocess, though - you might want to use it.
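A minimal sketch of that communicate()-based route, decompressing in Python and feeding the bytes to the child process:
import gzip
from subprocess import Popen, PIPE

p = Popen(["./foo"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
# gzip.open() gives us read(), which is all we need here
out, err = p.communicate(gzip.open('bar.gz').read())
Note that this reads the whole decompressed file into memory, so for very large files the zcat pipeline from the question may still be the better option.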
|
Can I use an opened gzip file with Popen in Python?
|
I have a little command line tool that reads from stdin.
On the command line I would run either...
./foo < bar
or ...
cat bar | ./foo
With a gziped file I can run
zcat bar.gz | ./foo
in Python I can do ...
Popen(["./foo", ], stdin=open('bar'), stdout=PIPE, stderr=PIPE)
but I can't do
import gzip
Popen(["./foo", ], stdin=gzip.open('bar'), stdout=PIPE, stderr=PIPE)
I wind up having to run
p0 = Popen(["zcat", "bar"], stdout=PIPE, stderr=PIPE)
Popen(["./foo", ], stdin=p0.stdout, stdout=PIPE, stderr=PIPE)
Am I doing something wrong?
Why can't I use gzip.open('bar') as an stdin arg to Popen?
|
[
"Because the 'stdin' and 'stdout' of the subprocess takes file descriptor (which is a number), which is an operating system resource. This is masked by the fact that if you pass an object, the subprocess module checks whether the object has a 'fileno' attribute and if it has, it will use it.\nThe 'gzip' object is not something an operating system provides. An open file is, a socket is, a pipe is. Gzip object is an object that provides read() and write() methods but no fileno attribute.\nYou can look at the communicate() method of subprocess though, you might want to use it.\n"
] |
[
4
] |
[] |
[] |
[
"python",
"scripting",
"subprocess"
] |
stackoverflow_0002732811_python_scripting_subprocess.txt
|
Q:
Eclipse Python Integration
I found this python plugin list but thought I'd ask if anyone has any experience with anything listed there?
I'm totally new to both python and dynamic programming languages if that makes any difference.
A:
PyDev is the most widely used IDE I think. I'm using it not very often, but if I do, it suits me quite well.
A:
PyDev is the best I've used. I use it every day. When they had a pay version I paid for it. I use it on my Mac and Linux box and love it.
A:
I'm using PyDev. It's come a long way since I started using it. That might be it's greatest strength, it's very actively developed. It's got good support for Django and a whole list of other worthwhile features. If you're an Eclipse user you should definitely try it out.
A:
If you're used to Eclipse, it's probably best to go with the DLTK. As a bonus, you get support for a number of other languages (Tcl, Ruby, Javascript) too.
A:
PyDev supports Python (including 3.1), Jython and IronPython. It is said that the IronPython support in PyDev is one of the best available.
So PyDev is probably the best plugin for Eclipse. A few months ago PyDev and PyDev Extensions, the formerly non-free part of PyDev, were merged and released for free. This is not a case of abandonware; on the contrary, PyDev is being actively developed.
The last release added support for Django (and there is also support for Google App Engine).
|
Eclipse Python Integration
|
I found this python plugin list but thought I'd ask if anyone has any experience with anything listed there?
I'm totally new to both python and dynamic programming languages if that makes any difference.
|
[
"PyDev is the most widely used IDE I think. I'm using it not very often, but if I do, it suits me quite well.\n",
"PyDev is the best I've used. I use it every day. When they had a pay version I paid for it. I use it on my Mac and Linux box and love it. \n",
"I'm using PyDev. It's come a long way since I started using it. That might be it's greatest strength, it's very actively developed. It's got good support for Django and a whole list of other worthwhile features. If you're an Eclipse user you should definitely try it out. \n",
"If you're used to Eclipse, it's probably best to go with the DLTK. As a bonus, you get support for a number of other languages (Tcl, Ruby, Javascript) too.\n",
"Pydev supports python including vs 3.1, jython and ironpython. It is said that ironpython support in pydev is one of the best availables.\nSo pydev is probably the best plugin for eclipse. Some few months ago pydev and pydev extensions, the non-free part of pydev, were merged and released free. This is not related with abandonware, contrarily pydev is being actively developped.\nLast release added support for Django (and there was also support for google application engine).\n"
] |
[
5,
2,
2,
2,
1
] |
[] |
[] |
[
"eclipse",
"eclipse_plugin",
"python"
] |
stackoverflow_0002732805_eclipse_eclipse_plugin_python.txt
|
Q:
Timed email reminder in python
I have written a Python script that allows a user to input a message, their email address, and the time and date they would like the email sent. This is all stored in a MySQL database.
However, how do I get the script to execute at the specified time and date? Will it require a cron job? I mean, say at 2:15 on April 20th the script searches the database for all entries scheduled for 2:15 and sends out those emails - but what about the emails due at 2:16?
I am using a shared hosting provider, so I can't have a continuously running script.
Thanks
A:
If you cannot have a continuously running script, something must trigger it, so that would have to rely on your OS internals. In a Unix environment a cron job, as you yourself state, would do the trick.
Set cron to run the script, and make the script wait for a given time and then continue running and sending until the next email is more than this given time away. Then make your script add a new cron job for a new wakeup time.
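For example, a crontab entry that wakes the script up once a minute might look like this (the interpreter and script paths are placeholders):
* * * * * /usr/bin/python /home/user/send_reminders.py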
A:
Looks like this django application was made just for people in your situation...
http://code.google.com/p/django-cron/
Also, your design seems a little flawed. If it's running at 2:15 you wouldn't want to send out just the emails scheduled for 2:15, but all the ones that should have been sent in the past and have not been sent yet.
Your database should either:
A. Delete the entries once they send
or
B. Have a column defined on your database table to store whether it was sent or not. Then your logic should make use of that column.
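A sketch of option B with a "sent" column, picking up everything that is overdue rather than only the current minute (the table, the column names and the send_email helper are made up for illustration):
import MySQLdb

db = MySQLdb.connect(user='user', passwd='secret', db='reminders')
cur = db.cursor()
# pick up everything that is due and not yet sent, not just this exact minute
cur.execute("SELECT id, email, message FROM reminder "
            "WHERE send_at <= NOW() AND sent = 0")
for row_id, email, message in cur.fetchall():
    send_email(email, message)   # hypothetical helper that does the SMTP work
    cur.execute("UPDATE reminder SET sent = 1 WHERE id = %s", (row_id,))
db.commit()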
A:
A cron job every minute or so would do it. If you're considering this, you might want to keep two things in mind:
1 - How many e-mails are expected to be sent per minute? If it takes you 1 second to send an e-mail and you have 100 e-mails per minute, you won't finish your queue.
2 - What will happen if one job starts before the last one finishes? Be careful not to send e-mails twice. You need either to make sure the first process ends (risk: you may eventually drop an e-mail), prevent the next process from starting (risk: the first process hangs the whole queue), or make them work in parallel (risk: synchronization problems).
If you take daramarak's suggestion - making your script add a new cron job at the end - you run the risk of the whole system collapsing if one error occurs.
|
Timed email reminder in python
|
I have written a Python script that allows a user to input a message, their email address, and the time and date they would like the email sent. This is all stored in a MySQL database.
However, how do I get the script to execute at the specified time and date? Will it require a cron job? I mean, say at 2:15 on April 20th the script searches the database for all entries scheduled for 2:15 and sends out those emails - but what about the emails due at 2:16?
I am using a shared hosting provider, so I can't have a continuously running script.
Thanks
|
[
"If you cannot have a continuously running script, something must trigger it, so that would have to rely on your OS internals. In a unix environment a cron job, as you self state, would do the trick.\nSet cron to run the script, and make the script wait for a given time and then continue running and sending until the next email is more than this given time away. Then make your script add a new cron job for a new wakeup time.\n",
"Looks like this django application was made just for people in your situation...\nhttp://code.google.com/p/django-cron/\nAlso, your design seems a little flawed. If its running at 2:15 you wouldn't want to send out just emails that should be sent at 2:15, but all ones that should have been sent in the past that have not been sent.\nYour database should either:\nA. Delete the entries once they send\nor\nB. Have a column defined on your database table to store whether it was sent or not. Then your logic should make use of that column.\n",
"A cronjob every minute or so would do it. If you're considering this, you might like to mind two things:\n1 - How many e-mails are expected to be sent per minute? If it takes you 1 second to send an e-mail and you have 100 e-mails per minute, you won't finish your queue. \n2 - What will happen if one job starts before the last one finishes? Be careful not to send e-mails twice. You need either to make sure first process ends (risk: you can drop an e-mail eventually), avoid next process to start (risk: first process hangs whole queue) or make them work in parallel (risk: synchronization problems).\nIf you take daramarak's suggestion - make you script add a new cron job at end - you have the risk of whole system colapsing if one error occurs.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"email",
"mysql",
"python",
"reminders"
] |
stackoverflow_0002732407_email_mysql_python_reminders.txt
|
Q:
Access class instance "name" dynamically in Python
In plain english: I am creating class instances dynamically in a for loop, the class then defines a few attributes for the instance. I need to later be able to look up those values in another for loop.
Sample code:
class A:
def __init__(self, name, attr):
self.name=name
self.attr=attr
names=("a1", "a2", "a3")
x=10
for name in names:
name=A(name, x)
x += 1
...
...
...
for name in names:
print name.attr
How can I create an identifier for these instances so they can be accessed later on by "name"?
I've figured a way to get this by associating "name" with the memory location:
class A:
instances=[]
names=[]
def __init__(self, name, attr):
self.name=name
self.attr=attr
A.instances.append(self)
A.names.append(name)
names=("a1", "a2", "a3")
x=10
for name in names:
name=A(name, x)
x += 1
...
...
...
for name in names:
index=A.names.index(name)
print "name: " + name
print "att: " + str(A.instances[index].att)
This has had me scouring the web for 2 days now, and I have not been able to find an answer. Maybe I don't know how to ask the question properly, or maybe it can't be done (as many other posts seemed to be suggesting).
Now this 2nd example works, and for now I will use it. I'm just thinking there has to be an easier way than creating your own makeshift dictionary of index numbers and I'm hoping I didn't waste 2 days looking for an answer that doesn't exist. Anyone have anything?
Thanks in advance,
Andy
Update: A coworker just showed me what he thinks is the simplest way and that is to make an actual dictionary of class instances using the instance "name" as the key.
A:
Sometimes keeping it simple is best. Having a dict that stores your instances with their names as the keys would be both straightforward and fairly simple to implement.
class A:
instances={}
def __init__(self, name, attr):
self.name=name
self.attr=attr
A.instances[name] = self
and then to get the proper instance, just...
instance = A.instances[name]
A:
No need to put the instance dict inside the class. Just create a dict, inst in the local scope:
class A:
def __init__(self, name, attr):
self.name=name
self.attr=attr
inst={}
names=("a1", "a2", "a3")
x=10
for name in names:
inst[name]=A(name, x)
x += 1
Then, whenever you want to access a certain instance by name, just use inst[name]:
for name in names:
print inst[name].attr
A:
Yes, the dictionary approach should work well, and can be dovetailed into the class itself.
class A:
_instances = {}
@classmethod
def get(cls, name):
return A._instances[name]
def __init__(self, name, attr):
self.name=name
self.attr=attr
A._instances[name] = self
a = A('foo', 10)
aa = A.get('foo')
If you want to play around with __new__, you can even make this transparent:
a = A('foo', 10)
aa = A('foo') # 'a' and 'aa' refer to the same instance.
This is a bit complicated, so I'll leave it to you to research (and of course ask another question on SO if you get stuck).
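A rough sketch of that __new__-based variant, just to show the idea (this is my own illustration and cuts corners, e.g. it never expires old instances):
class A(object):
    _instances = {}

    def __new__(cls, name, attr=None):
        # hand back the existing instance when one with this name already exists
        if name in cls._instances:
            return cls._instances[name]
        inst = super(A, cls).__new__(cls)
        inst.name = name
        inst.attr = attr
        cls._instances[name] = inst
        return inst

a = A('foo', 10)
aa = A('foo')
print aa is a        # True - both names refer to the same instance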
|
Access class instance "name" dynamically in Python
|
In plain english: I am creating class instances dynamically in a for loop, the class then defines a few attributes for the instance. I need to later be able to look up those values in another for loop.
Sample code:
class A:
def __init__(self, name, attr):
self.name=name
self.attr=attr
names=("a1", "a2", "a3")
x=10
for name in names:
name=A(name, x)
x += 1
...
...
...
for name in names:
print name.attr
How can I create an identifier for these instances so they can be accessed later on by "name"?
I've figured a way to get this by associating "name" with the memory location:
class A:
instances=[]
names=[]
def __init__(self, name, attr):
self.name=name
self.attr=attr
A.instances.append(self)
A.names.append(name)
names=("a1", "a2", "a3")
x=10
for name in names:
name=A(name, x)
x += 1
...
...
...
for name in names:
index=A.names.index(name)
print "name: " + name
print "att: " + str(A.instances[index].att)
This has had me scouring the web for 2 days now, and I have not been able to find an answer. Maybe I don't know how to ask the question properly, or maybe it can't be done (as many other posts seemed to be suggesting).
Now this 2nd example works, and for now I will use it. I'm just thinking there has to be an easier way than creating your own makeshift dictionary of index numbers and I'm hoping I didn't waste 2 days looking for an answer that doesn't exist. Anyone have anything?
Thanks in advance,
Andy
Update: A coworker just showed me what he thinks is the simplest way and that is to make an actual dictionary of class instances using the instance "name" as the key.
|
[
"Sometimes keeping it simple is best. Having a dict that stores your instances with their names as the keys would be both straightforward and fairly simple to implement.\nclass A:\n instances={}\n def __init__(self, name, attr):\n self.name=name\n self.attr=attr\n A.instances[name] = self\n\nand then to get the proper instance, just...\ninstance = A.instances[name]\n\n",
"No need to put the instance dict inside the class. Just create a dict, inst in the local scope:\nclass A:\n def __init__(self, name, attr):\n self.name=name\n self.attr=attr\n\ninst={}\nnames=(\"a1\", \"a2\", \"a3\")\nx=10\nfor name in names:\n inst[name]=A(name, x)\n x += 1\n\nThen, whenever you want to access a certain instance by name, just use inst[name]:\nfor name in names:\n print inst[name].attr\n\n",
"Yes, the dictionary approach should work well, and can be dovetailed into the class itself.\nclass A:\n _instances = {}\n\n @classmethod\n def get(cls, name):\n return A._instances[name]\n\n def __init__(self, name, attr):\n self.name=name\n self.attr=attr\n A._instances[name] = self\n\na = A('foo', 10)\naa = A.get('foo')\n\nIf you want to play around with __new__, you can even make this transparent:\na = A('foo', 10)\naa = A('foo') # 'a' and 'aa' refer to the same instance.\n\nThis is a bit complicated, so I'll leave it to you to research (and of course ask another question on SO if you get stuck).\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002733580_python.txt
|
Q:
Python: does it make sense to refactor this check into its own method?
I'm still learning python. I just wrote this method to determine if a player has won a game of tic-tac-toe yet, given a board state like: '[['o','x','x'],['x','o','-'],['x','o','o']]'
def hasWon(board):
players = ['x', 'o']
for player in players:
for row in board:
if row.count(player) == 3:
return player
top, mid, low = board
for i in range(3):
if [ top[i],mid[i],low[i] ].count(player) == 3:
return player
if [top[0],mid[1],low[2]].count(player) == 3:
return player
if [top[2],mid[1],low[0]].count(player) == 3:
return player
return None
It occurred to me that I check lists of 3 chars several times and could refactor the checking to its own method like so:
def check(list, player):
if list.count(player) == 3:
return player
...but then realized that all that really does is change lines like:
if [ top[i],mid[i],low[i] ].count(player) == 3:
return player
to:
if check( [top[i],mid[i],low[i]], player ):
return player
...which frankly doesn't seem like much of an improvement. Do you see a better way to refactor this? Or in general a more Pythonic option? I'd love to hear it!
A:
I might use
def check(somelist, player):
return somelist.count(player) == 3
Edit: as @Andrew suggested in a comment (tx @Andrew!), you can do even better, e.g.:
def check(somelist, player):
return somelist.count(player) == len(somelist)
without hardcoding the 3 -- which also suggests another nice alternative:
def check(somelist, player):
return all(x==player for x in somelist)
which very directly reads "all items in the list equal player". The general point is that by refactoring to a separate method you can then play with that method's implementation -- now of course here the code is very simple so the advantage is similarly modest, but it's an excellent point to keep in mind as you move to more complicated code.
As you've noticed you only need a bool anyway, so this allows a much simpler approach -- just return the bool expression rather than doing an if on it. It's important to never use a built-in name like list for your own identifiers -- an "attractive nuisance" of the language...;-).
By which I mean, Python uses for its built-ins lots of nice, attractive names like list, bool, sum, and so on, so it's easy to find yourself accidentally using one of those names for a variable of your own, and nothing bad seems to happen... until the time you need to turn, say, a tuple into a list, use the obviously best solution, x = list(thetuple)... and end up spending hours trying to understand and solve the errors that come because you've used list to mean something other than the built-in type of that name.
So, just get into the habit of not using those nice built-in names for purposes other than indicating the respective builtins, and you'll save yourself much future aggravation!-)
Back to your code, you might consider the conciseness afforded by not unpacking board (a hard decision, since your code is quite readable... but may look a bit verbose):
for i in range(3):
if check([row[i] for row in board], player):
return player
if check([row[i] for i, row in enumerate(board)], player):
return player
if check([row[2-i] for i, row in enumerate(board)], player):
return player
In the end I think I'd stick with your choice -- more readable and just marginally more verbose, if at all -- but it's nice to be aware of the alternatives, I think -- here, list comprehensions and enumerate to generate the lists to be checked as an alternative to "manually coding out" the three possibilities.
A:
Simply make a custom iterator over board.
def get_lines(board):
nums = range(3)
for i in nums:
yield (board[i][j] for j in nums) #rows
for j in nums:
yield (board[i][j] for i in nums) #cols
yield (board[i][i] for i in nums) #diag \
yield (board[i][2-i] for i in nums) #diag /
def get_winner(board): #a bit too indented
for line in get_lines(board): #more expensive, so go through it only once
line = tuple(line) #materialise the generator once so it can be compared for both players
for player in 'x', 'o':
if line == (player, player, player): #other way to check victory condition
return player
return None
Obviously these really should be methods of a board class :)
A:
You could use a better name instead of check, which doesn't tell you much. The rule of thumb is: if you can think of a good name for a piece of code, then it might be beneficial to move it into a separate function even if it is just one line of code. allsame might be one alternative here.
def winner(board):
main_diag = [row[i] for i, row in enumerate(board)]
aux_diag = [row[len(board) - i - 1] for i, row in enumerate(board)]
for triple in board + zip(*board) + [main_diag, aux_diag]:
if allsame(triple):
return triple[0]
def allsame(lst):
return all(x == lst[0] for x in lst)
A:
Personally I think your best bet for readability is to bubble out functions to give you the rows(), columns(), and diags() of the board, as lists of lists. Then you can iterate through these and check uniformly. You can even then define allTriples(), which appends the output of rows(), columns(), and diags(), so that you can do your checking in a single concise loop. I'd probably also make the board an object, so that these functions could become object methods.
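A minimal sketch of what those helper functions might look like (my own illustration of the suggestion above, not the question author's code):
def rows(board):
    return [list(row) for row in board]

def columns(board):
    return [list(col) for col in zip(*board)]

def diags(board):
    n = len(board)
    return [[board[i][i] for i in range(n)],
            [board[i][n - 1 - i] for i in range(n)]]

def allTriples(board):
    return rows(board) + columns(board) + diags(board)

def hasWon(board):
    # a single uniform check over every line of the board
    for triple in allTriples(board):
        for player in ('x', 'o'):
            if triple.count(player) == len(triple):
                return player
    return None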
A:
And now for something completely different:
Represent the board by a list of nine elements. Each element can be -1 (X), 1 (O), or 0 (empty):
WIN_LINES = (
(0, 1, 2),
(3, 4, 5),
(6, 7, 8),
(0, 3, 6),
(1, 4, 7),
(2, 5, 8),
(2, 4, 6),
(0, 4, 8),
)
def test_for_win(board):
for line in WIN_LINES:
total = sum(board[point] for point in line)
if abs(total) == 3:
return total // 3
return None
Refinement:
WIN_LINES = (
0, 1, 2,
3, 4, 5,
6, 7, 8,
0, 3, 6,
1, 4, 7,
2, 5, 8,
2, 4, 6,
0, 4, 8,
)
def test_for_win(board):
wpos = 0
for _unused in xrange(8):
total = board[WIN_LINES[wpos]]; wpos += 1
total += board[WIN_LINES[wpos]]; wpos += 1
total += board[WIN_LINES[wpos]]; wpos += 1
if total == 3: return 1
if total == -3: return -1
return None
A:
Just an idea
def hasWon(board):
players = ['x', 'o']
for player in players:
top, mid, low = board
game = board + [[top[i], mid[i], low[i]] for i in range(3)] + [[top[0], mid[1], low[2]]] + [[top[2], mid[1], low[0]]]
if 3 in [l.count(player) for l in game] :
return player
return None
A:
Your solution is fine - correct, readable and understandable.
Still, if you'd want to optimize for speed, I'd use a 1-dimensional array of digits, not strings, and try to look up each number as few times as possible. There will certainly be an extremely awkward-looking solution in which you check every field only once. I don't want to construct this now. :) Things like that might matter if you wanted to implement an AI playing against you while exploring the entire search tree of possible moves. An efficient win/loss check would be necessary there.
|
Python: does it make sense to refactor this check into its own method?
|
I'm still learning python. I just wrote this method to determine if a player has won a game of tic-tac-toe yet, given a board state like: '[['o','x','x'],['x','o','-'],['x','o','o']]'
def hasWon(board):
players = ['x', 'o']
for player in players:
for row in board:
if row.count(player) == 3:
return player
top, mid, low = board
for i in range(3):
if [ top[i],mid[i],low[i] ].count(player) == 3:
return player
if [top[0],mid[1],low[2]].count(player) == 3:
return player
if [top[2],mid[1],low[0]].count(player) == 3:
return player
return None
It occurred to me that I check lists of 3 chars several times and could refactor the checking to its own method like so:
def check(list, player):
if list.count(player) == 3:
return player
...but then realized that all that really does is change lines like:
if [ top[i],mid[i],low[i] ].count(player) == 3:
return player
to:
if check( [top[i],mid[i],low[i]], player ):
return player
...which frankly doesn't seem like much of an improvement. Do you see a better way to refactor this? Or in general a more Pythonic option? I'd love to hear it!
|
[
"I might use\ndef check(somelist, player):\n return somelist.count(player) == 3\n\nEdit: as @Andrew suggested in a comment (tx @Andrew!), you can do even better, e.g.:\ndef check(somelist, player):\n return somelist.count(player) == len(somelist)\n\nwithout hardcoding the 3 -- which also suggests another nice alternative:\ndef check(somelist, player):\n return all(x==player for x in somelist)\n\nwhich very directly reads \"all items in the list equal player\". The general point is that by refactoring to a separate method you can then play with that method's implementation -- now of course here the code is very simple so the advantage is similarly modest, but it's an excellent point to keep in mind as you move to more complicated code.\nAs you've noticed you only need a bool anyway, so this allows a much simpler approach -- just return the bool expression rather than doing an if on it. It's important to never use a built-in name like list for your own identifiers -- an \"attractive nuisance\" of the language...;-).\nBy which I mean, Python uses for its built-ins lots of nice, attractive names like list, bool, sum, and so on, so it's easy to find yourself accidentally using one of those names for a variable of your own, and nothing bad seems to happen... until the time you need to turn, say, a tuple into a list, use the obviously best solution, x = list(thetuple)... and end up spending our trying to understand and solve the errors that come because you've used list to mean anything else than the built-in type of that name.\nSo, just get into the habit of not using those nice built-in names for purposes other than indicating the respective builtins, and you'll save yourself much future aggravation!-)\nBack to your code, you might consider the conciseness afforded by not unpacking board (a hard decision, since your code is quite readable... but may look a bit verbose):\nfor i in range(3):\n if check([row[i] for row in board], player):\n return player\nif check([row[i] for i, row in enumerate(board)], player):\n return player\nif check([row[2-i] for i, row in enumerate(board)], player):\n return player\n\nIn the end I think I'd stick with your choice -- more readable and just marginally more verbose, if at all -- but it's nice to be aware of the alternatives, I think -- here, list comprehensions and enumerate to generate the lists to be checked as an alternative to \"manually coding out\" the three possibilities.\n",
"Simply make a custom iterator over board.\ndef get_lines(board):\n nums = range(3)\n for i in nums: \n yield (board[i][j] for j in nums) #cols\n for j in nums: \n yield (board[i][j] for i in nums) #rows\n yield (board[i][i] for i in nums) #diag \\\n yield (board[i][2-i] for i in nums) #diag /\n\ndef get_winner(board): #a bit too indented\n for line in get_lines(board): #more expensive, so go through it only once\n for player in 'x', 'o':\n if line == player, player, player: #other way to check victory condition\n return player\n return None\n\nObviously these really should be methods of a board class :)\n",
"You could use a better name instead of check that doesn't tell much. The rule of thumb is: if you can think of a good name for a peace of code then it might be beneficial to move it into separate function even if it is just one line of code. allsame might be one of alternatives here.\ndef winner(board):\n main_diag = [row[i] for i, row in enumerate(board)]\n aux_diag = [row[len(board) - i - 1] for i, row in enumerate(board)] \n for triple in board + zip(*board) + [main_diag, aux_diag]: \n if allsame(triple): \n return triple[0]\n\ndef allsame(lst): \n return all(x == lst[0] for x in lst)\n\n",
"Personally I think your best bet for readability is to bubble out functions to give you the rows(), columns(), and diags() of the board, as lists of lists. Then you can iterate through these and check uniformly. You can even then define allTriples(), which appends the output of rows(), columns(), and diags(), so that you can do your checking in a single concise loop. I'd probably also make the board an object, so that these functions could become object methods.\n",
"And now for something completely different:\nRepresent the board by a list of nine elements. Each element can be -1 (X), 1 (O), or 0 (empty):\nWIN_LINES = (\n (0, 1, 2),\n (3, 4, 5),\n (6, 7, 8),\n (0, 3, 6),\n (1, 4, 7),\n (2, 5, 8),\n (2, 4, 6),\n (0, 4, 8),\n )\n\ndef test_for_win(board):\n for line in WIN_LINES:\n total = sum(board[point] for point in line)\n if abs(total) == 3:\n return total // 3\n return None\n\nRefinement:\nWIN_LINES = (\n 0, 1, 2,\n 3, 4, 5,\n 6, 7, 8,\n 0, 3, 6,\n 1, 4, 7,\n 2, 5, 8,\n 2, 4, 6,\n 0, 4, 8,\n )\n\ndef test_for_win(board):\n wpos = 0\n for _unused in xrange(8):\n total = board[WIN_LINES[wpos]]; wpos += 1\n total += board[WIN_LINES[wpos]]; wpos += 1\n total += board[WIN_LINES[wpos]]; wpos += 1\n if total == 3: return 1\n if total == -3: return -1\n return None\n\n",
"Just an idea\ndef hasWon(board):\n players = ['x', 'o']\n for player in players:\n top, mid, low = board\n game = board + [[ top[i],mid[i],low[i]] for i in range(3)] + [top[0],mid[1],low[2]] +[top[2],mid[1],low[0]]\n if 3 in [l.count(player) for l in game] :\n return player\n return None\n\n",
"Your solution is fine - correct, readable and understandable. \nStill, if you'd want to optimize for speed, I'd use a 1-dimensional array of digits, not strings, and try to look up each number as few times as possible. There will certainly be an extremely awkward-looking solution in which you check every field only once. I don't want to construct this now. :) Things like that might matter if you wanted to implement an AI playing against you while exploring the entire search tree of possible moves. An efficient win/loss check would be necessary there.\n"
] |
[
5,
2,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"python",
"refactoring",
"tic_tac_toe"
] |
stackoverflow_0002730228_python_refactoring_tic_tac_toe.txt
|
Q:
In Python BeautifulSoup How to move tags
I have a partially converted XML document in soup coming from HTML. After some replacement and editing in the soup, the body is essentially -
<Text...></Text> # This replaces <a href..> tags but automatically creates the </Text>
<p class=norm ...</p>
<p class=norm ...</p>
<Text...></Text>
<p class=norm ...</p> and so forth.
I need to "move" the <p> tags to be children to <Text> or know how to suppress the </Text>. I want -
<Text...>
<p class=norm ...</p>
<p class=norm ...</p>
</Text>
<Text...>
<p class=norm ...</p>
</Text>
I've tried using item.insert and item.append but I'm thinking there must be a more elegant solution.
for item in soup.findAll(['p','span']):
if item.name == 'span' and item.has_key('class') and item['class'] == 'section':
xBCV = short_2_long(item._getAttrMap().get('value',''))
if currentnode:
pass
currentnode = Tag(soup,'Text', attrs=[('TypeOf', 'Section'),... ])
item.replaceWith(currentnode) # works but creates end tag
elif item.name == 'p' and item.has_key('class') and item['class'] == 'norm':
childcdatanode = None
for ahref in item.findAll('a'):
if childcdatanode:
pass
newlink = filter_hrefs(str(ahref))
childcdatanode = Tag(soup, newlink)
ahref.replaceWith(childcdatanode)
Thanks
A:
You can use insert to move tags. The docs say: "An element can occur in only one place in one parse tree. If you give insert an element that's already connected to a soup object, it gets disconnected (with extract) before it gets connected elsewhere."
If your HTML looks like this:
<text></text>
<p class="norm">1</p>
<p class="norm">2</p>
<text></text>
<p class="norm">3</p>
... this:
for item in soup.findAll(['text', 'p']):
if item.name == 'text':
text = item
if item.name == 'p':
text.insert(len(text.contents), item)
... would produce the following:
<text><p class="norm">1</p><p class="norm">2</p></text>
<text><p class="norm">3</p></text>
|
In Python BeautifulSoup How to move tags
|
I have a partially converted XML document in soup coming from HTML. After some replacement and editing in the soup, the body is essentially -
<Text...></Text> # This replaces <a href..> tags but automatically creates the </Text>
<p class=norm ...</p>
<p class=norm ...</p>
<Text...></Text>
<p class=norm ...</p> and so forth.
I need to "move" the <p> tags to be children to <Text> or know how to suppress the </Text>. I want -
<Text...>
<p class=norm ...</p>
<p class=norm ...</p>
</Text>
<Text...>
<p class=norm ...</p>
</Text>
I've tried using item.insert and item.append but I'm thinking there must be a more elegant solution.
for item in soup.findAll(['p','span']):
if item.name == 'span' and item.has_key('class') and item['class'] == 'section':
xBCV = short_2_long(item._getAttrMap().get('value',''))
if currentnode:
pass
currentnode = Tag(soup,'Text', attrs=[('TypeOf', 'Section'),... ])
item.replaceWith(currentnode) # works but creates end tag
elif item.name == 'p' and item.has_key('class') and item['class'] == 'norm':
childcdatanode = None
for ahref in item.findAll('a'):
if childcdatanode:
pass
newlink = filter_hrefs(str(ahref))
childcdatanode = Tag(soup, newlink)
ahref.replaceWith(childcdatanode)
Thanks
|
[
"You can use insert to move tags. The docs say: \"An element can occur in only one place in one parse tree. If you give insert an element that's already connected to a soup object, it gets disconnected (with extract) before it gets connected elsewhere.\"\nIf your HTML looks like this:\n<text></text>\n<p class=\"norm\">1</p>\n<p class=\"norm\">2</p>\n<text></text>\n<p class=\"norm\">3</p>\n\n... this:\nfor item in soup.findAll(['text', 'p']):\n if item.name == 'text':\n text = item\n if item.name == 'p':\n text.insert(len(text.contents), item)\n\n... would produce the following:\n<text><p class=\"norm\">1</p><p class=\"norm\">2</p></text>\n<text><p class=\"norm\">3</p></text>\n\n"
] |
[
4
] |
[] |
[] |
[
"beautifulsoup",
"children",
"python",
"regex",
"xml"
] |
stackoverflow_0002732391_beautifulsoup_children_python_regex_xml.txt
|
Q:
Python + PyQt program freezes
I wrote a PyQt application. After it starts I close it (the GUI), but the timer doesn't stop and Python sometimes freezes. The only thing that unfreezes it is Ctrl-C, after which the following message appears:
Traceback (most recent call last):
File "", line 262, in timerEvent
KeyboardInterrupt
The timer still doesn't stop, and CPython runs very slowly. How can I avoid this problem?
EDIT:
I added killTimer() to the source but things didn't change much. CPython is slow and sometimes hangs. How do I fully destroy all PyQt objects?
Mw = TMainWindow()
TimerId = Mw.startTimer(25)
QApp.exec_()
Mw.killTimer(TimerId)
A:
Without further information this is a complete guess. One of the more frequent reasons that an application doesn't exit when the GUI is closed is because of QApplication::quitOnLastWindowClosed property being set to false.
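A minimal sketch of one way to make sure the timer dies with the window (this assumes PyQt4 and that TMainWindow subclasses QMainWindow; the closeEvent override is an assumption, not the poster's code):
from PyQt4 import QtGui

class TMainWindow(QtGui.QMainWindow):
    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        self.timer_id = self.startTimer(25)    # 25 ms periodic timer

    def timerEvent(self, event):
        pass                                   # periodic work goes here

    def closeEvent(self, event):
        self.killTimer(self.timer_id)          # stop the timer before the window goes away
        event.accept()

app = QtGui.QApplication([])
mw = TMainWindow()
mw.show()
app.exec_()
With the timer owned and killed by the window itself, nothing is left firing timerEvent after the GUI closes.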
|
Python + PyQt program freezes
|
I wrote a PyQt application. After it starts I close it (the GUI), but the timer doesn't stop and Python sometimes freezes. The only thing that unfreezes it is Ctrl-C, after which the following message appears:
Traceback (most recent call last):
File "", line 262, in timerEvent
KeyboardInterrupt
The timer still doesn't stop, and CPython runs very slowly. How can I avoid this problem?
EDIT:
I added killTimer() to the source but things didn't change much. CPython is slow and sometimes hangs. How do I fully destroy all PyQt objects?
Mw = TMainWindow()
TimerId = Mw.startTimer(25)
QApp.exec_()
Mw.killTimer(TimerId)
|
[
"Without further information this is a complete guess. One of the more frequent reasons that an application doesn't exit when the GUI is closed is because of QApplication::quitOnLastWindowClosed property being set to false.\n"
] |
[
0
] |
[] |
[] |
[
"cpython",
"freeze",
"pyqt",
"python",
"qt"
] |
stackoverflow_0002733456_cpython_freeze_pyqt_python_qt.txt
|
Q:
dictionary interface for large data sets
I have a set of key/values (all text) that is too large to load in memory at once. I would like to interact with this data via a Python dictionary-like interface.
Does such a module already exist?
Reading key values should be efficient and values compressed on disk to save space.
Edit:
Ideally cross platform, but only using Linux for now
Needs to be thread safe
A:
Take a look at the shelve module. It provides dictionaries with various backing database systems include BerkleyDB and dbm.
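A minimal shelve sketch (the path and key are illustrative; note that shelve does not compress values and is not thread-safe on its own, so the poster's compression and locking requirements would still need extra work):
import shelve

db = shelve.open('/tmp/bigdata.db')      # persistent, dict-like, keyed by str
db['some key'] = 'some large value'
print db.get('some key')
db.close()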
A:
Try bsddb(Python bindings for Berkley db)
A:
ended up creating my own solution based on this existing module.
A:
Assuming you are on a *nix platform, you are just BEGGING for tokyo-cabinet... It has a native set of ruby bindings... You can more information at 1978th.net...
|
dictionary interface for large data sets
|
I have a set of key/values (all text) that is too large to load in memory at once. I would like to interact with this data via a Python dictionary-like interface.
Does such a module already exist?
Reading key values should be efficient and values compressed on disk to save space.
Edit:
Ideally cross platform, but only using Linux for now
Needs to be thread safe
|
[
"Take a look at the shelve module. It provides dictionaries with various backing database systems include BerkleyDB and dbm.\n",
"Try bsddb(Python bindings for Berkley db)\n",
"ended up creating my own solution based on this existing module. \n",
"Assuming you are on a *nix platform, you are just BEGGING for tokyo-cabinet... It has a native set of ruby bindings... You can more information at 1978th.net...\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"dataset",
"dictionary",
"large_files",
"python"
] |
stackoverflow_0002550980_dataset_dictionary_large_files_python.txt
|
Q:
Apache + Mod_wsgi returning 502 Bad Gateway!
I'm serving Django with mod_wsgi and Apache... unfortunately requests are returning 502 Bad Gateway error messages...
Received a invalid response
HttpResponse('OK') is affected by this
render_to_response('...') is not!
any ideas?!?
A:
Really strange...
Because render_to_response is implemented on top of HttpResponse.
Maybe there is a problem with the string inside your HttpResponse().
A Unicode error?
The wrong mimetype?
A problem around your posted code?
A:
Are you using a proxy front end such as nginx? The mod_wsgi module doesn't generate such an error. The only scenario where I can think this might occur, given that I can't see why Django would generate a 502, is that you are using mod_wsgi embedded mode with an nginx front-end proxy, and the Apache server child process is crashing.
Where are you seeing this error message, in the browser or in the web server log files? Have you looked closely at Apache error log files for any other messages? Specifically look for segmentation fault message in main Apache error log (not virtual host error log).
|
Apache + Mod_wsgi returning 502 Bad Gateway!
|
I'm serving Django with mod_wsgi and Apache... unfortunately requests are returning 502 Bad Gateway error messages...
Received a invalid response
HttpResponse('OK') is affected by this
render_to_response('...') is not!
any ideas?!?
|
[
"realy strange...\nBecause the render_to_response is implemented with HttpResponse.\nMaybe there is a problem with your string inside HttpResponse(). \n\nUnicode Error? \nWrong Mimetype?\nproblem around your posted code..\n\n",
"Are you using a proxy front end such as nginx? The mod_wsgi module doesn't generate such an error. The only scenario where can think this might occur, given that cant see why Django would generate a 502, is that you are using mod_wsgi embedded mode with nginx front end proxy, and the Apache server child process is crashing.\nWhere are you seeing this error message, in the browser or in the web server log files? Have you looked closely at Apache error log files for any other messages? Specifically look for segmentation fault message in main Apache error log (not virtual host error log).\n"
] |
[
1,
1
] |
[] |
[] |
[
"apache",
"django",
"mod_wsgi",
"python",
"wsgi"
] |
stackoverflow_0002728396_apache_django_mod_wsgi_python_wsgi.txt
|
Q:
Webfaction apache + mod_wsgi + django configuration issue
A problem that I stumbled upon recently, and, even though I solved it, I would like to hear your opinion of what correct/simple/adopted solution would be.
I'm developing website using Django + python. When I run it on local machine with "python manage.py runserver", local address is http://127.0.0.1:8000/ by default.
However, on production server my app has other url, with path - like "http://server.name/myproj/"
I need to generate and use permanent urls. If I'm using {% url view params %}, I'm getting paths that are relative to / , since my urls.py contains this
urlpatterns = patterns('',
(r'^(\d+)?$', 'myproj.myapp.views.index'),
(r'^img/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/img' }),
(r'^css/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/css' }),
)
So far, I see 2 solutions:
modify urls.py, include '/myproj/' in case of production run
use request.build_absolute_uri() for creating link in views.py or pass some variable with 'hostname:port/path' in templates
Are there prettier ways to deal with this problem? Thank you.
Update: Well, the problem seems to be not in django, but in webfaction way to configure wsgi. Apache configuration for application with URL "hostname.com/myapp" contains the following line
WSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi
So, SCRIPT_NAME is empty, and the only solution I see is to get to mod_python or serve my application from root. Any ideas?
A:
You shouldn't need to do anything special. Django honours the SCRIPT_NAME environment variable that is set by mod_wsgi when you serve a Django site other than from the root, and prepends it to the url reversing code automatically.
If you're using mod_python (you shouldn't be), you may need to set django.root in your Apache configuration.
Updated I suspect this is due to the way that Webfaction serves Django sites via a proxy instance of Apache - this instance has no knowledge of the actual mount point as determined by Webfaction's control panel.
In this case, you'll probably need to set SCRIPT_NAME manually in your .wsgi script. I think this should work:
_application = django.core.handlers.wsgi.WSGIHandler()
def application(environ, start_response):
os.environ['SCRIPT_NAME'] = '/myproj/'
return _application(environ, start_response)
A:
Change:
WSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi
to:
WSGIScriptAlias /myproj /home/dreamiurg/webapps/pinfont/myproject.wsgi
Then change the nginx front end configuration of WebFaction to proxy to '/myproj' on back end instead of '/'.
That should be all that is required. You should not use '/myproj' prefix in urls.py.
In other words, just ensure the mount point for back end is same as where it appears mounted at the front end.
Modify WSGI script file to fudge SCRIPT_NAME, although it may work, is not generally recommended as not allowing Apache/mod_wsgi to do the proper thing, which may have other implications.
|
Webfaction apache + mod_wsgi + django configuration issue
|
A problem that I stumbled upon recently, and, even though I solved it, I would like to hear your opinion of what correct/simple/adopted solution would be.
I'm developing website using Django + python. When I run it on local machine with "python manage.py runserver", local address is http://127.0.0.1:8000/ by default.
However, on production server my app has other url, with path - like "http://server.name/myproj/"
I need to generate and use permanent urls. If I'm using {% url view params %}, I'm getting paths that are relative to / , since my urls.py contains this
urlpatterns = patterns('',
(r'^(\d+)?$', 'myproj.myapp.views.index'),
(r'^img/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/img' }),
(r'^css/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/css' }),
)
So far, I see 2 solutions:
modify urls.py, include '/myproj/' in case of production run
use request.build_absolute_uri() for creating link in views.py or pass some variable with 'hostname:port/path' in templates
Are there prettier ways to deal with this problem? Thank you.
Update: Well, the problem seems to be not in django, but in webfaction way to configure wsgi. Apache configuration for application with URL "hostname.com/myapp" contains the following line
WSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi
So, SCRIPT_NAME is empty, and the only solution I see is to get to mod_python or serve my application from root. Any ideas?
|
[
"You shouldn't need to do anything special. Django honours the SCRIPT_NAME environment variable that is set by mod_wsgi when you serve a Django site other than from the root, and prepends it to the url reversing code automatically.\nIf you're using mod_python (you shouldn't be), you may need to set django.root in your Apache configuration.\nUpdated I suspect this is due to the way that Webfaction serves Django sites via a proxy instance of Apache - this instance has no knowledge of the actual mount point as determined by Webfaction's control panel. \nIn this case, you'll probably need to set SCRIPT_NAME manually in your .wsgi script. I think this should work:\n_application = django.core.handlers.wsgi.WSGIHandler()\n\ndef application(environ, start_response):\n os.environ['SCRIPT_NAME'] = '/myproj/'\n return _application(environ, start_response)\n\n",
"Change:\nWSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi\n\nto:\nWSGIScriptAlias /myproj /home/dreamiurg/webapps/pinfont/myproject.wsgi\n\nThen change the nginx front end configuration of WebFaction to proxy to '/myproj' on back end instead of '/'.\nThat should be all that is required. You should not use '/myproj' prefix in urls.py.\nIn other words, just ensure the mount point for back end is same as where it appears mounted at the front end.\nModify WSGI script file to fudge SCRIPT_NAME, although it may work, is not generally recommended as not allowing Apache/mod_wsgi to do the proper thing, which may have other implications.\n"
] |
[
3,
2
] |
[] |
[] |
[
"django",
"production_environment",
"python",
"url"
] |
stackoverflow_0002729368_django_production_environment_python_url.txt
|
Q:
Any good python open source projects exemplifying coding standards and best practices?
In the question
A:
Check out Flask's code, the comments on the release announcement noted that the code was very well written:
http://lucumr.pocoo.org/2010/4/16/flask-0-1-released
Armin, the author of Flask, also wrote Werkzeug, which I use a lot, and find very well written. Here is the source:
http://github.com/mitsuhiko/flask/blob/master/flask.py
A:
You can't read too much source. I think a good idea would be to take some Pythonistas (Raymond Hettinger and Ian Bicking come to mind) and fish out their code from their projects or from other sources like ActiveState and go through them.
A:
I vote for Django, maybe too much specific on web developing field.
A:
the Python STDLIB
A:
I think that the Python interface for Redis written by Andy McCurdy is a superb example of how Python code should be written, packaged, and organized. Have a look at the source code for yourself!
A:
The Twisted code is pretty well-written, although it is a somewhat complex codebase.
A:
I know http://code.google.com/p/jaikuengine/ was ported to Google App engine by googlers, so there must be some good practices in there.
A:
Consider the stdlib, some modules more so than others. One project that I have found exemplifies good python coding practices is rietveld. Of course, good in one person's eyes is awful in another's.
|
Any good python open source projects exemplifying coding standards and best practices?
|
In the question
|
[
"Check out Flask's code, the comments on the release announcement noted that the code was very well written:\nhttp://lucumr.pocoo.org/2010/4/16/flask-0-1-released\nArmin, the author of Flask, also wrote Werkzeug, which I use a lot, and find very well written. Here is the source:\nhttp://github.com/mitsuhiko/flask/blob/master/flask.py\n",
"You can't read too much source. I think a good idea would be to take some Pythonistas (Raymond Hettinger and Ian Bicking come to mind) and fish out their code from their projects or from other sources like ActiveState and go through them.\n",
"I vote for Django, maybe too much specific on web developing field.\n",
"the Python STDLIB\n",
"I think that the Python interface for Redis written by Andy McCurdy is a superb example of how Python code should be written, packaged, and organized. Have a look at the source code for yourself!\n",
"The Twisted code is pretty well-written, although it is a somewhat complex codebase.\n",
"I know http://code.google.com/p/jaikuengine/ was ported to Google App engine by googlers, so there must be some good practices in there. \n",
"Consider the stdlib, some modules more so than others. One project that I have found exemplifies good python coding practices is rietveld. Of course, good in one person's eyes is awful in another's.\n"
] |
[
3,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"coding_style",
"python"
] |
stackoverflow_0002722758_coding_style_python.txt
|
Q:
app-engine-patch and "object_detail" view didn't work
Hi(Sorry for my ugly english)
I want to use the app-engine-patch and google app engine to create a simple blog, and use the django generic views handle the blog entry page.
But when I use Django's generic views "django.views.generic.list_detail.object_detail", I encountered an error in the following:
GenericViewError at /blog/entry/
Generic view must be called with either an object_id or a slug/slug_field.
Request Method: GET
Request URL: http://192.168.62.90:8000/blog/entry/
Exception Type: GenericViewError
Exception Value:
Generic view must be called with either an object_id or a slug/slug_field.
Exception Location: <unknown> in ?, line ?
Python Executable: /usr/bin/python
Python Version: 2.5.2
Python Path: ['/home/hugh/Desktop/app-engine-patch-sample', '/home/hugh/Desktop/app-engine-patch-sample/common', '/home/hugh/Desktop/app-engine-patch-sample/common/appenginepatch/appenginepatcher/lib', '/home/hugh/Desktop/app-engine-patch-sample/common/zip-packages/django-1.0.2.zip', '/home/hugh/Desktop/app-engine-patch-sample/common/appenginepatch', '/home/hugh/Desktop/google_appengine', '/home/hugh/Desktop/google_appengine/lib/django', '/home/hugh/Desktop/google_appengine/lib/antlr3', '/home/hugh/Desktop/google_appengine/lib/webob', '/home/hugh/Desktop/google_appengine/lib/ipaddr', '/home/hugh/Desktop/google_appengine/lib/yaml/lib', '/home/hugh/Desktop/app-engine-patch-sample', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages/Numeric', '/usr/lib/python2.5/site-packages/PIL', '/usr/lib/python2.5/site-packages/gst-0.10', '/var/lib/python-support/python2.5', '/usr/lib/python2.5/site-packages/gtk-2.0', '/var/lib/python-support/python2.5/gtk-2.0']
Server time: Thu, 29 Apr 2010 01:54:57 +0000
This link causing the problem above:
http://192.168.62.90:8000/blog/entry/?agphdXR1bW4xOTEychALEgpibG9nX2VudHJ5GCYM
My urls.py:
from django.conf import settings
from django.conf.urls.defaults import *
from django.views.generic.simple import direct_to_template
from django.views.generic import list_detail
from blog.models import Author, Entry
entry_info = {
'queryset': Entry.all().order('-pub_date'),
'template_name': 'index.html',
'template_object_name': 'entry',
}
urlpatterns = patterns('',
(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
(r'^$', list_detail.object_list, entry_info),
(r'^entry/(?P<object_id>.*)$',
list_detail.object_detail,
{'queryset': Entry.all(), 'template_name': 'sample_test_page.html'}),
)
I do not know the reason of this error.
A:
I believe your URL is missing 'object_id=' after the '?'.
|
app-engine-patch and "object_detail" view didn't work
|
Hi(Sorry for my ugly english)
I want to use the app-engine-patch and google app engine to create a simple blog, and use the django generic views handle the blog entry page.
But when I use Django's generic views "django.views.generic.list_detail.object_detail", I encountered an error in the following:
GenericViewError at /blog/entry/
Generic view must be called with either an object_id or a slug/slug_field.
Request Method: GET
Request URL: http://192.168.62.90:8000/blog/entry/
Exception Type: GenericViewError
Exception Value:
Generic view must be called with either an object_id or a slug/slug_field.
Exception Location: <unknown> in ?, line ?
Python Executable: /usr/bin/python
Python Version: 2.5.2
Python Path: ['/home/hugh/Desktop/app-engine-patch-sample', '/home/hugh/Desktop/app-engine-patch-sample/common', '/home/hugh/Desktop/app-engine-patch-sample/common/appenginepatch/appenginepatcher/lib', '/home/hugh/Desktop/app-engine-patch-sample/common/zip-packages/django-1.0.2.zip', '/home/hugh/Desktop/app-engine-patch-sample/common/appenginepatch', '/home/hugh/Desktop/google_appengine', '/home/hugh/Desktop/google_appengine/lib/django', '/home/hugh/Desktop/google_appengine/lib/antlr3', '/home/hugh/Desktop/google_appengine/lib/webob', '/home/hugh/Desktop/google_appengine/lib/ipaddr', '/home/hugh/Desktop/google_appengine/lib/yaml/lib', '/home/hugh/Desktop/app-engine-patch-sample', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages/Numeric', '/usr/lib/python2.5/site-packages/PIL', '/usr/lib/python2.5/site-packages/gst-0.10', '/var/lib/python-support/python2.5', '/usr/lib/python2.5/site-packages/gtk-2.0', '/var/lib/python-support/python2.5/gtk-2.0']
Server time: Thu, 29 Apr 2010 01:54:57 +0000
This link causing the problem above:
http://192.168.62.90:8000/blog/entry/?agphdXR1bW4xOTEychALEgpibG9nX2VudHJ5GCYM
My urls.py:
from django.conf import settings
from django.conf.urls.defaults import *
from django.views.generic.simple import direct_to_template
from django.views.generic import list_detail
from blog.models import Author, Entry
entry_info = {
'queryset': Entry.all().order('-pub_date'),
'template_name': 'index.html',
'template_object_name': 'entry',
}
urlpatterns = patterns('',
(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
(r'^$', list_detail.object_list, entry_info),
(r'^entry/(?P<object_id>.*)$',
list_detail.object_detail,
{'queryset': Entry.all(), 'template_name': 'sample_test_page.html'}),
)
I do not know the reason of this error.
|
[
"I believe your URL is missing 'object_id=' after the '?'.\n"
] |
[
0
] |
[] |
[] |
[
"app_engine_patch",
"django_views",
"python"
] |
stackoverflow_0002734114_app_engine_patch_django_views_python.txt
|
Q:
Match e-mail addresses not contained in HTML tag
I need to highlight an email addresses in text but not highlight them if contained in HTML tags, content, or attributes.
For example, the string example@example.com must be converted to <a href="mailto:example@example.com">example@example.com</a>
But email addresses in the string <a href="mailto:example@example.com">example@example.com</a> must not be processed.
I've tried something like this regexp:
(?<![":])[a-zA-Z0-9._%-+]+@[a-zA-Z0-9._%-]+.[a-zA-Z]{2,6}(?!")
but it doesn't work properly.
A:
I'll guessing that your source text is an HTML file which is missing anchor tags for only some of the contained e-mail addresses. If this is true, then you will not be able to use a regexp to reliably match only untagged e-mail addresses. For example, given the input:
...
<P>You'll find a lot more written by <A
href="mailto:SvartaIF@example.com"
title="some text including an@sign.org">
SvartaIF@example.com
</A>.
</P>
...
it becomes impossible to lexically associate the href with the address and also exclude an@sign.org. You need to use an HTML parser; BeautifulSoup is pretty popular.
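A rough sketch of the parser approach (written against BeautifulSoup 4's API rather than the BeautifulSoup of this era, and using a deliberately simplified address pattern):
import re
from bs4 import BeautifulSoup

EMAIL_RE = re.compile(r'[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,6}')

def linkify_emails(html):
    soup = BeautifulSoup(html, 'html.parser')
    for node in soup.find_all(string=EMAIL_RE):   # text nodes only, so attributes are never touched
        if node.find_parent('a'):                 # already inside an anchor - leave it alone
            continue
        linked = EMAIL_RE.sub(
            lambda m: '<a href="mailto:%s">%s</a>' % (m.group(0), m.group(0)),
            str(node))
        node.replace_with(BeautifulSoup(linked, 'html.parser'))
    return str(soup)
Because only NavigableString text nodes are visited, the an@sign.org inside the title attribute in the example above is skipped automatically.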
|
Match e-mail addresses not contained in HTML tag
|
I need to highlight an email addresses in text but not highlight them if contained in HTML tags, content, or attributes.
For example, the string example@example.com must be converted to <a href="mailto:example@example.com">example@example.com</a>
But email addresses in the string <a href="mailto:example@example.com">example@example.com</a> must not be processed.
I've tried something like this regexp:
(?<![":])[a-zA-Z0-9._%-+]+@[a-zA-Z0-9._%-]+.[a-zA-Z]{2,6}(?!")
but it doesn't work properly.
|
[
"I'll guessing that your source text is an HTML file which is missing anchor tags for only some of the contained e-mail addresses. If this is true, then you will not be able to use a regexp to reliably match only untagged e-mail addresses. For example, given the input:\n...\n<P>You'll find a lot more written by <A \nhref=\"mailto:SvartaIF@example.com\"\ntitle=\"some text including an@sign.org\">\n\nSvartaIF@example.com\n</A>.\n</P>\n...\n\nit becomes impossible to lexically associate the href with the address and also exclude an@sign.org. You need to use an HTML parser; BeautifulSoup is pretty popular.\n"
] |
[
1
] |
[] |
[] |
[
"email",
"python",
"regex"
] |
stackoverflow_0002734329_email_python_regex.txt
|
Q:
differences between "d.clear()" and "d={}"
On my machine, the difference in execution time between d.clear() and d={} is over 100ns, so I am curious why one would use one over the other.
import timeit
def timing():
d = dict()
if __name__=='__main__':
t = timeit.Timer('timing()', 'from __main__ import timing')
print t.repeat()
A:
The difference is that d = {} creates a new dictionary and d.clear() just empties the dictionary you already have. This subtle difference matters if you have other places in your code holding references to your dictionary. In the first case those other objects won't see any change because you haven't modified the original dictionary. The following code shows this difference in action.
Creating a new dictionary:
>>> d = {'foo': 'bar'}
>>> d2 = d
>>> d = {}
>>> d2
{'foo': 'bar'}
Clearing the existing dictionary:
>>> d = {'foo': 'bar'}
>>> d2 = d
>>> d.clear()
>>> d2
{}
A:
d={} creates a new dictionary.
d.clear() clears the dictionary.
If you use d={}, then anything pointing to d will be pointing to the old d. This may introduce a bug.
If you use d.clear(), then anything pointing at d will now point at the cleared dictionary, this may also introduce a bug, if that was not what you intended.
Also, I don't think d.clear() will (in CPython) free up memory taken up by d. For performance, CPython doesn't take the memory away from dictionaries when you delete elements, as the usual use for dictionaries is building a big dictionary, and maybe pruning out a few elements. Reassigning memory (and making sure the hash table stays consistent) would take too long in most use cases. Instead, it fills the dictionary with turds (that's the technical term on the mailing list), which indicate that an element used to be there but since got deleted. I'm not entirely sure if d.clear() does this though, but deleting all the keys one by one certainly does.
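For the timing side of the question, a quick micro-benchmark sketch (numbers vary by machine; this only shows how to measure each statement in isolation rather than an empty function call):
import timeit

print timeit.Timer('d = {}', 'd = dict.fromkeys(range(100))').timeit()
print timeit.Timer('d.clear()', 'd = dict.fromkeys(range(100))').timeit()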
|
differences between "d.clear()" and "d={}"
|
On my machine, the difference in execution time between d.clear() and d={} is over 100ns, so I am curious why one would use one over the other.
import timeit
def timing():
d = dict()
if __name__=='__main__':
t = timeit.Timer('timing()', 'from __main__ import timing')
print t.repeat()
|
[
"The difference is that d = {} creates a new dictionary and d.clear() just empties the dictionary you already have. This subtle difference matters if you have other places in your code holding references to your dictionary. In the first case those other objects won't see any change because you haven't modified the original dictionary. The following code shows this difference in action.\nCreating a new dictionary:\n>>> d = {'foo': 'bar'}\n>>> d2 = d\n>>> d = {}\n>>> d2\n{'foo': 'bar'}\n\nClearing the existing dictionary:\n>>> d = {'foo': 'bar'}\n>>> d2 = d\n>>> d.clear()\n>>> d2\n{}\n\n",
"d={} creates a new dictionary.\nd.clear() clears the dictionary.\nIf you use d={}, then anything pointing to d will be pointing to the old d. This may introduce a bug.\nIf you use d.clear(), then anything pointing at d will now point at the cleared dictionary, this may also introduce a bug, if that was not what you intended.\nAlso, I don't think d.clear() will (in CPython) free up memory taken up by d. For performance, CPython doesn't take the memory away from dictionaries when you delete elements, as the usual use for dictionaries is building a big dictionary, and maybe pruning out a few elements. Reassigning memory (and making sure the hash table stays consistent) would take too long in most use cases. Instead, it fills the dictionary with turds (that's the technical term on the mailing list), which indicate that an element used to be there but since got deleted. I'm not entirely sure if d.clear() does this though, but deleting all the keys one by one certainly does.\n"
] |
[
19,
6
] |
[] |
[] |
[
"python",
"timing"
] |
stackoverflow_0002732550_python_timing.txt
|
Q:
Could Python's logging SMTP Handler be freezing my thread for 2 minutes?
A rather confusing sequence of events happened, according to my log-file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true.
I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application when it is missing deadlines.
I am using Python's logging module on a remote server, and have set-up, with a configuration file, for all logs of severity of ERROR or higher to be emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems, I might get a dozen in a minute - annoying, but nothing that should stress SMTP.
I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.)
This seems like a rather awkward architecture (for both a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms.
Has anyone had any experience with this? Can anyone describe how to stop it from blocking?
** EDIT # 2 : I actually counted. 170 emails in two hours. Forget the previous edit. I counted wrong. It's late here...
A:
Stress-testing was revealing:
My logging configuration sent critical messages to SMTPHandler, and debug messages to a local log file.
For testing I created a moderately large number of threads (e.g. 50) that waited for a trigger, and then simultaneosly tried to log either a critical message or a debug message, depending on the test.
Test #1: All threads send critical messages: It revealed that the first critical message took about .9 seconds to send. The second critical message took around 1.9 seconds to send. The third longer still, quickly adding up. It seems that the messages that go to email block waiting for each other to complete the send.
Test #2: All threads send debug messages: These ran fairly quickly, from hundreds to thousands of microseconds.
Test #3: A mix of both. It was clear from the results that debug messages were also being blocked waiting for critical message's emails to go out.
So, it wasn't that 2 minutes meant there was a timeout. It was the two minutes represented a large number of threads blocked waiting in the queue.
Why were there so many critical messages being sent at once? That's the irony. There was a logging.debug() call inside a method that included a network call. I had some code monitoring the speed of the of the method (to see if the network call was taking too long). If so, it (of course) logged a critical error that sent an email. The next thread then blocked on the logging.debug() call, meaning it missed the deadline, triggering another email, triggering another thread to run slowly.
The 2 minute delay in one thread wasn't a network timeout. It was one thread waiting for another thread, that was blocked for 1 minute 57 - because it was waiting for another thread blocked for 1 minute 55, etc. etc. etc.
This isn't very pretty behaviour from SMTPHandler.
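One way out of the blocking behaviour is to hand records to a queue and let a single background thread do the slow SMTP work. A sketch, assuming Python 3.2+'s logging.handlers.QueueHandler/QueueListener (the standalone logutils package provides the same classes for the Python 2 series; the mailhost and addresses are placeholders):
import logging, queue
from logging.handlers import QueueHandler, QueueListener, SMTPHandler

log_queue = queue.Queue(-1)                      # unbounded queue
smtp = SMTPHandler(mailhost='localhost',
                   fromaddr='app@example.com',
                   toaddrs=['me@example.com'],
                   subject='Application error')
smtp.setLevel(logging.ERROR)

listener = QueueListener(log_queue, smtp)        # one thread drains the queue and sends the emails
listener.start()

logger = logging.getLogger('myapp')
logger.addHandler(QueueHandler(log_queue))       # the calling thread only ever pays for queue.put()
This way the worker threads never block on each other's SMTP sends.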
A:
A two minute pause sounds like a timeout - mostly probably in the networking stack.
Try adding:
* - nofile 64000
to your /etc/security/limits.conf file on all of the machines involved and then rebooting all of the machines to ensure it is applied to all running services.
|
Could Python's logging SMTP Handler be freezing my thread for 2 minutes?
|
A rather confusing sequence of events happened, according to my log-file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true.
I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application when it is missing deadlines.
I am using Python's logging module on a remote server, and have set-up, with a configuration file, for all logs of severity of ERROR or higher to be emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems, I might get a dozen in a minute - annoying, but nothing that should stress SMTP.
I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.)
This seems like a rather awkward architecture (for both a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms.
Has anyone had any experience with this? Can anyone describe how to stop it from blocking?
** EDIT # 2 : I actually counted. 170 emails in two hours. Forget the previous edit. I counted wrong. It's late here...
|
[
"Stress-testing was revealing:\nMy logging configuration sent critical messages to SMTPHandler, and debug messages to a local log file.\nFor testing I created a moderately large number of threads (e.g. 50) that waited for a trigger, and then simultaneosly tried to log either a critical message or a debug message, depending on the test.\nTest #1: All threads send critical messages: It revealed that the first critical message took about .9 seconds to send. The second critical message took around 1.9 seconds to send. The third longer still, quickly adding up. It seems that the messages that go to email block waiting for each other to complete the send.\nTest #2: All threads send debug messages: These ran fairly quickly, from hundreds to thousands of microseconds.\nTest #3: A mix of both. It was clear from the results that debug messages were also being blocked waiting for critical message's emails to go out.\nSo, it wasn't that 2 minutes meant there was a timeout. It was the two minutes represented a large number of threads blocked waiting in the queue.\nWhy were there so many critical messages being sent at once? That's the irony. There was a logging.debug() call inside a method that included a network call. I had some code monitoring the speed of the of the method (to see if the network call was taking too long). If so, it (of course) logged a critical error that sent an email. The next thread then blocked on the logging.debug() call, meaning it missed the deadline, triggering another email, triggering another thread to run slowly.\nThe 2 minute delay in one thread wasn't a network timeout. It was one thread waiting for another thread, that was blocked for 1 minute 57 - because it was waiting for another thread blocked for 1 minute 55, etc. etc. etc.\nThis isn't very pretty behaviour from SMTPHandler.\n",
"A two minute pause sounds like a timeout - mostly probably in the networking stack.\nTry adding:\n* - nofile 64000\n\nto your /etc/security/limits.conf file on all of the machines involved and then rebooting all of the machines to ensure it is applied to all running services.\n"
] |
[
2,
1
] |
[] |
[] |
[
"logging",
"python",
"smtp"
] |
stackoverflow_0002722036_logging_python_smtp.txt
|
Q:
Building an SNMP Request-Response service with Python Asyncore
I have a 3rd-party protocol module (SNMP) that is built on top of asyncore. The asyncore interface is used to process response messages. What is the proper technique to design a client that generates the request side of the protocol while the asyncore main loop is running? I can think of two options right now:
Use the loop,timeout parameters of asyncore.loop() to allow my client program time to send the appropriate request.
Create a client asyncore dispatcher that will be executed in the same asyncore processing loop as the receiver.
What is the best option? I'm working on the 2nd solution, cause the protocol API does not give me direct access to the asyncore parameters. Please correct me if I've misunderstood the proper technique for utilizing asyncore.
A:
I solved this by adding a callback function into the asyncore loop for the receiver process.
The solution was somewhat specific to the module I was experiment with (pySNMP), but here is the general idea:
define a function closure that returns a callable method with a stored reference to a dict and window variable. The dict tracks the expected responses, and the window is the size of the sender buffer.
pass a reference to the closure function into a customized asyncore.dispatcher instance. The callback function can be executed in the writeable method invocation.
set the timeout of the dispatcher to a small value. This prevents asyncore from blocking for too long, while waiting for received packets. I used .05 seconds. The lower you go, the more response your app is, but don't go too low.
update the asyncore read_handle method to remove the received responses from your global dict structure. This will allow new messages to be transmitted.
now kick-off the dispatcher and every loop of the asyncore, the system will call the callback function, and send any messages, up to the defined window size.
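A rough sketch of that shape (all names here - make_sender, send_one, parse_response_id - are illustrative stand-ins, not the pySNMP API):
import asyncore

def make_sender(pending, window, send_one):
    # closure holding the outstanding-request dict and the window size
    def maybe_send():
        while len(pending) < window:
            request_id = send_one()           # transmit the next request, return its id
            if request_id is None:
                break                         # nothing left to send
            pending[request_id] = True
    return maybe_send

class RequestDispatcher(asyncore.dispatcher):
    def __init__(self, sock, callback, pending):
        asyncore.dispatcher.__init__(self, sock)
        self.callback = callback
        self.pending = pending

    def writable(self):
        self.callback()                       # asyncore polls writable() on every loop pass
        return False

    def handle_read(self):
        data = self.recv(8192)
        request_id = parse_response_id(data)  # hypothetical response parser
        self.pending.pop(request_id, None)    # frees a slot in the send window

# asyncore.loop(timeout=0.05) keeps each poll short so the callback runs often.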
|
Building an SNMP Request-Response service with Python Asyncore
|
I have a 3rd-party protocol module (SNMP) that is built on top of asyncore. The asyncore interface is used to process response messages. What is the proper technique to design a client that generates the request side of the protocol while the asyncore main loop is running? I can think of two options right now:
Use the loop,timeout parameters of asyncore.loop() to allow my client program time to send the appropriate request.
Create a client asyncore dispatcher that will be executed in the same asyncore processing loop as the receiver.
What is the best option? I'm working on the 2nd solution, cause the protocol API does not give me direct access to the asyncore parameters. Please correct me if I've misunderstood the proper technique for utilizing asyncore.
|
[
"I solved this by adding a callback function into the asyncore loop for the receiver process.\nThe solution was somewhat specific to the module I was experiment with (pySNMP), but here is the general idea:\n\ndefine a function closure that returns a callable method with a stored reference to a dict and window variable. The dict tracks the expected responses, and the window is the size of the sender buffer.\npass a reference to the closure function into a customized asyncore.dispatcher instance. The callback function can be executed in the writeable method invocation.\nset the timeout of the dispatcher to a small value. This prevents asyncore from blocking for too long, while waiting for received packets. I used .05 seconds. The lower you go, the more response your app is, but don't go too low.\nupdate the asyncore read_handle method to remove the received responses from your global dict structure. This will allow new messages to be transmitted.\nnow kick-off the dispatcher and every loop of the asyncore, the system will call the callback function, and send any messages, up to the defined window size.\n\n"
] |
[
1
] |
[] |
[] |
[
"asyncore",
"client_server",
"pysnmp",
"python"
] |
stackoverflow_0002626512_asyncore_client_server_pysnmp_python.txt
|
Q:
Modifying Bresenham's line algorithm
I'm trying to use Bresenham's line algorithm to compute Field of View on a grid. The code I'm using calculates the lines without a problem but I'm having problems getting it to always return the line running from start point to endpoint. What do I need to do so that all lines returned run from (x0,y0) to (x1,y1)
def bresenham_line(self, x0, y0, x1, y1):
steep = abs(y1 - y0) > abs(x1 - x0)
if steep:
x0, y0 = y0, x0
x1, y1 = y1, x1
if x0 > x1:
x0, x1 = x1, x0
y0, y1 = y1, y0
if y0 < y1:
ystep = 1
else:
ystep = -1
deltax = x1 - x0
deltay = abs(y1 - y0)
error = -deltax / 2
y = y0
line = []
for x in range(x0, x1 + 1):
if steep:
line.append((y,x))
else:
line.append((x,y))
error = error + deltay
if error > 0:
y = y + ystep
error = error - deltax
return line
A:
Remember whether you switched x0 and x1 and then reverse the list if you did.
if x0 > x1:
x0, x1 = x1, x0
y0, y1 = y1, y0
becomes
switched = False
if x0 > x1:
switched = True
x0, x1 = x1, x0
y0, y1 = y1, y0
and at the end, just add:
if switched:
line.reverse()
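With the method pulled out as a plain function (the unused self parameter dropped), the fix can be sanity-checked like this:
line = bresenham_line(5, 2, 1, 1)
assert line[0] == (5, 2) and line[-1] == (1, 1)   # runs start -> end even when x0 > x1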
|
Modifying Bresenham's line algorithm
|
I'm trying to use Bresenham's line algorithm to compute Field of View on a grid. The code I'm using calculates the lines without a problem but I'm having problems getting it to always return the line running from start point to endpoint. What do I need to do so that all lines returned run from (x0,y0) to (x1,y1)
def bresenham_line(self, x0, y0, x1, y1):
steep = abs(y1 - y0) > abs(x1 - x0)
if steep:
x0, y0 = y0, x0
x1, y1 = y1, x1
if x0 > x1:
x0, x1 = x1, x0
y0, y1 = y1, y0
if y0 < y1:
ystep = 1
else:
ystep = -1
deltax = x1 - x0
deltay = abs(y1 - y0)
error = -deltax / 2
y = y0
line = []
for x in range(x0, x1 + 1):
if steep:
line.append((y,x))
else:
line.append((x,y))
error = error + deltay
if error > 0:
y = y + ystep
error = error - deltax
return line
|
[
"Remember whether you switched x0 and x1 and then reverse the list if you did.\nif x0 > x1:\n x0, x1 = x1, x0\n y0, y1 = y1, y0\n\nbecomes\nswitched = False\nif x0 > x1:\n switched = True\n x0, x1 = x1, x0\n y0, y1 = y1, y0\n\nand at the end, just add:\nif switched:\n line.reverse()\n\n"
] |
[
4
] |
[] |
[] |
[
"algorithm",
"python"
] |
stackoverflow_0002734714_algorithm_python.txt
|
Q:
Disadvantage of Python eggs?
Are there any disadvantages about using eggs through easy-install compared to the "traditional" packages/modules/libs?
A:
One (potential) disadvantage is that eggs are zipped by default unless zip_safe=False is set in their setup() function in setup.py. If an egg is zipped, you can't get at the files in it (without unzipping it, obviously). If the module itself uses non-source files (such as templates) it will probably specify zip_safe=False, but another consequence is that you cannot effectively step into zipped modules using pdb, the Python debugger. That is, you can, but you won't be able to see the source or navigate properly.
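For package authors, the flag lives in setup.py; a minimal sketch (the metadata values are placeholders):
from setuptools import setup, find_packages

setup(
    name='mypackage',
    version='0.1',
    packages=find_packages(),
    zip_safe=False,          # install unzipped so the source stays visible to pdb
)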
A:
Using eggs does cause a long sys.path, which has to be searched and when it's really long that search can take a while. Only when you get a hundred entries or so is this going to be a problem (but installing a hundred eggs via easy_install is certainly possible).
|
Disadvantage of Python eggs?
|
Are there any disadvantages about using eggs through easy-install compared to the "traditional" packages/modules/libs?
|
[
"One (potential) disadvantage is that eggs are zipped by default unless zip_safe=False is set in their setup() function in setup.py. If an egg is zipped, you can't get at the files in it (without unzipping it, obviously). If the module itself uses non-source files (such as templates) it will probably specify zip_safe=False, but another consequence is that you cannot effectively step into zipped modules using pdb, the Python debugger. That is, you can, but you won't be able to see the source or navigate properly.\n",
"Using eggs does cause a long sys.path, which has to be searched and when it's really long that search can take a while. Only when you get a hundred entries or so is this going to be a problem (but installing a hundred eggs via easy_install is certainly possible).\n"
] |
[
8,
8
] |
[] |
[] |
[
"comparison",
"egg",
"python"
] |
stackoverflow_0002733629_comparison_egg_python.txt
|
Q:
this is my Receiving Email code,but can't Receiving Email .. (google-app-engine)
import logging, email
from google.appengine.ext import webapp
from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
from google.appengine.ext.webapp.util import run_wsgi_app
class LogSenderHandler(InboundMailHandler):
def receive(self, message):
_subject = message.subject
_sender=message.sender
bodies = message.bodies('text/plain')
allBodies = ""
#for body in bodies:
# allBodies = allBodies + "\n---------------------------\n" + body[1].decode()
#m= mail.EmailMessage(sender="zjm1126@gmail.com ",subject="reply to "+_subject)
#m.to = _sender
#m.body =allBodies
#m.send()
message = mail.EmailMessage(sender="zjm1126@gmail.com",
subject="Your account has been approved")
message.to = _sender
message.body = """
Dear Albert:
Your example.com account has been approved. You can now visit
http://www.example.com/ and sign in using your Google Account to
access new features.
Please let us know if you have any questions.
The example.com Team
"""
message.send()
application = webapp.WSGIApplication([LogSenderHandler.mapping()], debug=True)
app.yaml:
application: zjm1126
version: 1-2
runtime: python
api_version: 1
inbound_services:
- mail
handlers:
- url: /media
static_dir: media
- url: /_ah/mail/.+
script: handle_incoming_email.py
login: admin
- url: /
script: a.py
- url: /sign
script: a.py
- url: .*
script: django_bootstrap.py
I use my email zjm1126@gmail.com to send some words to ss@zjm1126.appspotmail.com.
I can't receive the email. Why?
A:
I had the same problem after following the google tutorial as well. Thanks to this tute I discovered a rather important bit of code that slipped my mind and isn't in the google tutorial.
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
Hope that helps.
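A related hedged observation: the posted handler never imports the mail API it calls, so before mail.EmailMessage can be constructed the script would also need
from google.appengine.api import mail
at the top, alongside the existing imports.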
A:
It looks like you're trying to make code from mail send\receive tutorial to work. I used that tutorial also to check how mail service works and didn't have problems with it. What I could suggest doing is:
decouple the mail sending and receiving scripts, as it seems like you're going to cycle it;
I guess you already have the sending code somewhere else, but just in case, something has to send an email to ss@zjm1126.appspotmail.com to trigger the LogSenderHandler handler;
You can check and debug your code locally by using the zjm1126 development console. Try sending an email from here: http://localhost:8080/_ah/admin/inboundmail and put a breakpoint into the LogSenderHandler.receive method to see if it gets hit and what's going on after that;
In your yaml I see other handlers but webapp.WSGIApplication has only LogSenderHandler mappings. It might be the reason why those other scripts are not getting executed;
other than that your code and yaml look fine and should work
hope this helps, regards
A:
Everything looks fine - your handler is returning a 200 OK. If you're not receiving the email it's sending, try logging the values you're using so you can check that everything's valid and what you expect it to be.
|
this is my Receiving Email code,but can't Receiving Email .. (google-app-engine)
|
import logging, email
from google.appengine.ext import webapp
from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
from google.appengine.ext.webapp.util import run_wsgi_app
class LogSenderHandler(InboundMailHandler):
def receive(self, message):
_subject = message.subject
_sender=message.sender
bodies = message.bodies('text/plain')
allBodies = ""
#for body in bodies:
# allBodies = allBodies + "\n---------------------------\n" + body[1].decode()
#m= mail.EmailMessage(sender="zjm1126@gmail.com ",subject="reply to "+_subject)
#m.to = _sender
#m.body =allBodies
#m.send()
message = mail.EmailMessage(sender="zjm1126@gmail.com",
subject="Your account has been approved")
message.to = _sender
message.body = """
Dear Albert:
Your example.com account has been approved. You can now visit
http://www.example.com/ and sign in using your Google Account to
access new features.
Please let us know if you have any questions.
The example.com Team
"""
message.send()
application = webapp.WSGIApplication([LogSenderHandler.mapping()], debug=True)
app.yaml:
application: zjm1126
version: 1-2
runtime: python
api_version: 1
inbound_services:
- mail
handlers:
- url: /media
static_dir: media
- url: /_ah/mail/.+
script: handle_incoming_email.py
login: admin
- url: /
script: a.py
- url: /sign
script: a.py
- url: .*
script: django_bootstrap.py
I use my email zjm1126@gmail.com to send some words to ss@zjm1126.appspotmail.com.
I can't receive the email. Why?
|
[
"I had the same problem after following the google tutorial as well. Thanks to this tute I discovered a rather important bit of code that slipped my mind and isn't in the google tutorial.\ndef main():\n run_wsgi_app(application)\nif __name__ == \"__main__\":\n main()\n\nHope that helps.\n",
"It looks like you're trying to make code from mail send\\receive tutorial to work. I used that tutorial also to check how mail service works and didn't have problems with it. What I could suggest doing is: \n\ndecouple mail sending and receiving\nscripts as it seem like you're going\nto cycle it;\nI guess you already have\nthe sending code somewhere else, but\njust in case, something has to send\nan email to\nss@zjm1126.appspotmail.com to\ntrigger the LogSenderHandler handler;\nYou can check and debug your code\nlocally by using zjm1126 development\nconsole. Try sending an email from\nhere:\nhttp://localhost:8080/_ah/admin/inboundmail\nand put a breakpoint into the\nLogSenderHandler.receive method to\nsee if it gets hit and what's going\non after that;\nIn your yaml I see other handlers but webapp.WSGIApplication has only LogSenderHandler mappings. It might be the reason why those other scripts are not getting executed; \n\nother than that your code and yaml look fine and should work\nhope this helps, regards\n",
"Everything looks fine - your handler is returning a 200 OK. If you're not receiving the email it's sending, try logging the values you're using so you can check that everything's valid and what you expect it to be.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0002706208_google_app_engine_python.txt
|
Q:
Replacement for htmllib module in Python 3.0
I want to use the htmllib module but it's been removed from Python 3.0. Does anyone know what's the replacement for this module?
A:
It is Superseded by HTMLParser see Python library reorganization
A:
I haven't used it, but it looks like what you want is the html.parser library, and possibly also html.entity.
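A minimal html.parser sketch (Python 3; the class and the sample markup are just illustrations):
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(value for name, value in attrs if name == 'href')

parser = LinkCollector()
parser.feed('<p><a href="http://example.com">example</a></p>')
print(parser.links)          # ['http://example.com']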
A:
I heard Beautiful soup is getting a port to 3.0.
A:
I believe lxml has been ported to Python 3
|
Replacement for htmllib module in Python 3.0
|
I want to use the htmllib module but it's been removed from Python 3.0. Does anyone know what's the replacement for this module?
|
[
"It is Superseded by HTMLParser see Python library reorganization\n",
"I haven't used it, but it looks like what you want is the html.parser library, and possibly also html.entity.\n",
"I heard Beautiful soup is getting a port to 3.0.\n",
"I believe lxml has been ported to Python 3\n"
] |
[
10,
8,
1,
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0002730752_python_python_3.x.txt
|
Q:
Documenting module/class/function bodies in Python Sphinx docs
Is there a way with Sphinx documentation to output a function or class body (the code itself) with the autodoc feature? I'm using autodoc to much success. In addition to the docstrings getting pulled in to the documentation I want like a link to click for each function where it will show you the source... is that possible?
This is about what most of my documentation looks like now:
.. module:`foo.mymodule`
Title
===================
.. automodule:: foo.mymodule
.. autoclass:: MyModulesClass
:members:
:undoc-members:
A:
If this still matters: The viewcode extension can do this, but needs the development version (1.0) of Sphinx
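Once on Sphinx 1.0, enabling it is a one-line change in conf.py (shown here next to autodoc, which the question already uses):
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.viewcode',   # adds "[source]" links from the docs to highlighted source pages
]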
A:
I don't believe so. Autodoc is only for pulling the documentation out of the source code.
|
Documenting module/class/function bodies in Python Sphinx docs
|
Is there a way with Sphinx documentation to output a function or class body (the code itself) with the autodoc feature? I'm using autodoc to much success. In addition to the docstrings getting pulled in to the documentation I want like a link to click for each function where it will show you the source... is that possible?
This is about what most of my documentation looks like now:
.. module:`foo.mymodule`
Title
===================
.. automodule:: foo.mymodule
.. autoclass:: MyModulesClass
:members:
:undoc-members:
|
[
"If this still matters: The viewcode extension can do this, but needs the development version (1.0) of Sphinx\n",
"I don't believe so. Autodoc is only for pulling the documentation out of the source code.\n"
] |
[
2,
1
] |
[] |
[] |
[
"python",
"python_sphinx"
] |
stackoverflow_0001836633_python_python_sphinx.txt
|
Q:
python script problem once build and package it
I've written a Python script to scan WiFi and send data to a server. I set an interval value so it keeps scanning and sending data; it reads from a config.txt file where I set the interval value for scanning. I also added a yes/no flag to the config file: if 'no' it scans only once, and if 'yes' it scans according to the interval.
My code is below:
import time,.....
from threading import Event, Thread
class RepeatTimer(Thread):
def __init__(self, interval, function, iterations=0, args=[], kwargs={}):
Thread.__init__(self)
self.interval = interval
self.function = function
self.iterations = iterations
self.args = args
self.kwargs = kwargs
self.finished = Event()
def run(self):
count = 0
while not self.finished.is_set() and (self.iterations <= 0 or count < self.iterations):
self.finished.wait(self.interval)
if not self.finished.is_set():
self.function(*self.args, **self.kwargs)
count += 1
def cancel(self):
self.finished.set()
def scanWifi(self):
#scanning process and sending data done here
obj = JW()
if status == "yes":
t = RepeatTimer(int(intervalTime),obj.scanWifi)
t.start()
else:
obj.scanWifi()
Once I package my code, it only works when my config file is set to 'no', where it scans only once; when I set my config file to 'yes', there is no progress at all. So I found that there is a problem with my RepeatTimer class once built, but I don't know how to solve it.
can anyone help me
thanks
A:
I think the problem is in the loop condition. Supposing that is_set() returns False, the second part is always False. While intervalTime is not known, I think it is positive (does a negative interval time make sense?) and count is never less than self.iterations: they are both 0.
But the code you posted is too little to tell exactly how it works.
while not self.finished.is_set() and (self.iterations <= 0 or count < self.iterations):
|
python script problem once build and package it
|
I've written a Python script to scan WiFi and send data to a server. I set an interval value so it keeps scanning and sending data; it reads from a config.txt file where I set the interval value for scanning. I also added a yes/no flag to the config file: if 'no' it scans only once, and if 'yes' it scans according to the interval.
My code is below:
import time,.....
from threading import Event, Thread
class RepeatTimer(Thread):
def __init__(self, interval, function, iterations=0, args=[], kwargs={}):
Thread.__init__(self)
self.interval = interval
self.function = function
self.iterations = iterations
self.args = args
self.kwargs = kwargs
self.finished = Event()
def run(self):
count = 0
while not self.finished.is_set() and (self.iterations <= 0 or count < self.iterations):
self.finished.wait(self.interval)
if not self.finished.is_set():
self.function(*self.args, **self.kwargs)
count += 1
def cancel(self):
self.finished.set()
def scanWifi(self):
#scanning process and sending data done here
obj = JW()
if status == "yes":
t = RepeatTimer(int(intervalTime),obj.scanWifi)
t.start()
else:
obj.scanWifi()
once I package my code, it only runs when my config file is set to 'no', where it scans only once, but when I set my config file to 'yes' there is no progress at all. So I found that there is a problem with my RepeatTimer class once built, but I don't know how to solve it
can anyone help me
thanks
|
[
"I think the problem is in the loop condition. Supposing that is_set() returns False, the second part is always False. While is intervalTime is not known, i think that it is positive (does has sense a negative interval time?) and count is never lesser than self.iterations: they are both 0.\nBut the code you posted is too few, it is not given to know how exactly works.\n while not self.finished.is_set() and (self.iterations <= 0 or count < self.iterations):\n\n"
] |
[
0
] |
[] |
[] |
[
"build",
"package",
"python"
] |
stackoverflow_0002735410_build_package_python.txt
|
Q:
create_or_update in ModelForm
I want to have a ModelForm that can create_or_update a model instance based on the request parameters.
I've been trying to cobble something together, but am realizing that my python fu is not strong enough, and the ModelForm implementation code is quite hairy.
I found this update_or_create snippet for working with a Model, but I think it would be incredibly useful if it were integrated with a ModelForm.
I would expect it to behave similarly to ModelForm.save():
class BetterModelForm(forms.ModelForm):
def create_or_update(self):
#magic
return (instance, created, updated)
Conversely I'd also be interested in hearing compelling reasons why this is not a good idea.
A:
This isn't really something that belongs in the ModelForm itself. Models already do this automatically - if the model instance has a value for pk, it is updated, otherwise it is inserted. So all you need to do when you instantiate your form is to pass in either an existing model instance, which will be updated on form save, or an empty instance (or even just None), in which case a new instance will be created.
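A sketch of the pattern described above (the model and form names are hypothetical, not from the question; error handling and redirects omitted):
from django.shortcuts import get_object_or_404

def edit_article(request, pk=None):
    # Pass an existing instance to update it, or None to create a new one.
    instance = get_object_or_404(Article, pk=pk) if pk else None
    form = ArticleForm(request.POST or None, instance=instance)
    if request.method == 'POST' and form.is_valid():
        obj = form.save()   # UPDATE when the instance has a pk, INSERT otherwise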
|
create_or_update in ModelForm
|
I want to have a ModelForm that can create_or_update a model instance based on the request parameters.
I've been trying to cobble something together, but am realizing that my python fu is not strong enough, and the ModelForm implementation code is quite hairy.
I found this update_or_create snippet for working with a Model, but I think it would be incredibly useful if it were integrated with a ModelForm.
I would expect it to behave similarly to ModelForm.save():
class BetterModelForm(forms.ModelForm):
def create_or_update(self):
#magic
return (instance, created, updated)
Conversely I'd also be interested in hearing compelling reasons why this is not a good idea.
|
[
"This isn't really something that belongs in the ModelForm itself. Models already do this automatically - if the model instance has a value for pk, it is updated, otherwise it is inserted. So all you need to do when you instantiate your form is to pass in either an existing model instance, which will be updated on form save, or an empty instance (or even just None), in which case a new instance will be created.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"python"
] |
stackoverflow_0002733089_django_django_forms_django_models_python.txt
|
Q:
How to get debugging of an App Engine application working?
I've got 10+ years in C/C++, and it appears Visual Studio has spoilt me during that time. In Visual Studio, debugging is simple: I just add a breakpoint to a line of code, and as soon as that code is executed, my breakpoint triggers, at which point I can view a callstack, local/member variables, etc.
I'm trying to achieve this functionality under App Engine. I assume that is possible?
All the searching I've done to this point has led me to using Pydev in Eclipse. As best I can tell, I am successfully launching my simple 'hello world' program in Debug mode.
But the IDE doesn't even seem to have an option to set a breakpoint? I must be missing something.
I've googled long and hard about this, but am having no luck. Most results trace back to the same old threads that don't deal directly with my issue.
Can anyone shed some light on how you get basic debugging setup using Pydev/Eclipse with App Engine?
Alternatively, if there's an easier way to debug App Engine than using Pydev/Eclipse, I'd love to hear about it.
Thanks in advance.
A:
In fact setting a breakpoint in eclipse is very easy. You have two options:
In the grey area next to your line numbers, doubleclick or right mouseclick -> toggle breakpoint.
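As a lighter-weight alternative when the development server is started from a terminal, the standard pdb module pauses execution at any line (a sketch using the classic webapp framework; it only works while dev_appserver.py runs in a console you can type into):
import pdb
from google.appengine.ext import webapp

class MainPage(webapp.RequestHandler):
    def get(self):
        pdb.set_trace()   # execution stops here; inspect locals, step, continue
        self.response.out.write('Hello world')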
|
How to get debugging of an App Engine application working?
|
I've got 10+ years in C/C++, and it appears Visual Studio has spoilt me during that time. In Visual Studio, debugging is simple: I just add a breakpoint to a line of code, and as soon as that code is executed, my breakpoint triggers, at which point I can view a callstack, local/member variables, etc.
I'm trying to achieve this functionality under App Engine. I assume that is possible?
All the searching I've done to this point has led me to using Pydev in Eclipse. As best I can tell, I am successfully launching my simple 'hello world' program in Debug mode.
But the IDE doesn't even seem to have an option to set a breakpoint? I must be missing something.
I've googled long and hard about this, but am having no luck. Most results trace back to the same old threads that don't deal directly with my issue.
Can anyone shed some light on how you get basic debugging setup using Pydev/Eclipse with App Engine?
Alternatively, if there's an easier way to debug App Engine than using Pydev/Eclipse, I'd love to hear about it.
Thanks in advance.
|
[
"In fact setting a breakpoint in eclipse is very easy. You have two options:\nIn the grey area next to your line numbers, doubleclick or right mouseclick -> toggle breakpoint.\n"
] |
[
1
] |
[] |
[] |
[
"debugging",
"eclipse",
"google_app_engine",
"pydev",
"python"
] |
stackoverflow_0002735968_debugging_eclipse_google_app_engine_pydev_python.txt
|
Q:
Parse URL from plain text
How can I parse URLs from any given plain text (not limited to href attributes in tags)?
Any code examples in Python will be appreciated.
A:
You could use a Regular Expression to parse the string.
Look in this previously asked question:
What's the cleanest way to extract URLs from a string using Python?
A:
See Jan Goyvaerts' blog.
So a Python code example could look like
result = re.findall(r"\b(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]", subject)
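A runnable version of the pattern above (re.IGNORECASE stands in for spelling out the lowercase letters in the character classes; the sample text is illustrative):
import re

text = "See http://example.com/page and also www.example.org/info for details."
pattern = r"\b(?:(?:https?|ftp|file)://|www\.|ftp\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]"
urls = re.findall(pattern, text, re.IGNORECASE)
print(urls)   # ['http://example.com/page', 'www.example.org/info']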
|
Parse URL from plain text
|
How can I parse URLs from any given plain text (not limited to href attributes in tags)?
Any code examples in Python will be appreciated.
|
[
"You could use a Regular Expression to parse the string.\nLook in this previously asked question:\nWhat’s the cleanest way to extract URLs from a string using Python?\n",
"See Jan Goyvaerts' blog.\nSo a Python code example could look like\nresult = re.findall(r\"\\b(?:(?:https?|ftp|file)://|www\\.|ftp\\.)[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]\", subject)\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"parsing",
"python",
"url"
] |
stackoverflow_0002735181_parsing_python_url.txt
|
Q:
How to render custom columns with a GenericTreeModel
I have to display some data in a treeview. The "real" data model is huge and I cannot copy all the stuff in a TreeStore, so I guess I should use a GenericTreeModel to act like a virtual treeview. Btw, the first column is the classic icon+text style and I think I should declare a column with a CellRendererPixbuf (faq sample), but I'm not sure what the model methods on_get_n_columns() and on_get_value() should return. It's both a Pixbuf and a string value for the same column.
A:
Look at the tutorial; there is an example that packs two cell renderers into one column. The difference is that you are using a custom tree model, and the behavior depends on how you modeled your model. If you have one model column with the text and one with the pixbuf you can use set_attributes:
column = gtk.TreeViewColumn('Pixbuf and text')
cell1 = gtk.CellRendererText()
cell2 = gtk.CellRendererPixbuf()
column.pack_start(cell1, True)
column.pack_start(cell2, False)
column.set_attributes(cell1, text=0) # the first model column contains the text
column.set_attributes(cell2, pixbuf=1) # the second model column contains the pixbuf
You can have otherwise a tree model with just one column with the objects that contains all you need, so just set a callback:
class MyObject:
def __init__(self, text, pixbuf):
self.text = text
self.pixbuf = pixbuf
def cell1_cb(col, cell, model, iter):
obj = model.get_value(iter, 0)
cell.set_property('text', obj.text)
def cell2_cb(col, cell, model, iter):
obj = model.get_value(iter, 0)
cell.set_property('pixbuf', obj.pixbuf)
column = gtk.TreeViewColumn('Pixbuf and text')
cell1 = gtk.CellRendererText()
cell2 = gtk.CellRendererPixbuf()
column.pack_start(cell1, True)
column.pack_start(cell2, False)
column.set_cell_data_func(cell1, cell1_cb)
column.set_cell_data_func(cell2, cell2_cb)
I hope this gives you an idea of what you can do and a starting point. Disclaimer: I did not test the code.
|
How to render custom columns with a GenericTreeModel
|
I have to display some data in a treeview. The "real" data model is huge and I cannot copy all the stuff in a TreeStore, so I guess I should use a GenericTreeModel to act like a virtual treeview. Btw, the first column is the classic icon+text style and I think I should declare a column with a CellRendererPixbuf (faq sample), but I'm not sure what the model methods on_get_n_columns() and on_get_value() should return. It's both a Pixbuf and a string value for the same column.
|
[
"Look at the tutorial, there is an example that packs two cell renderer to one column. The difference is that you are using a custom tree model and the behavior depends on how you modeled your model. If you have one column with the text and one column with the pixbuf you can use set_attributes:\ncolumn = gtk.TreeViewColumn('Pixbuf and text')\ncell1 = gtk.CellRenderText()\ncell2 = gtk.CellRenderPixbuf()\ncolumn.pack_start(cell1, True)\ncolumn.pack_start(cell2, False)\ncolumn.set_attribute(cell1, 'text', 0) # the first column contains the text\ncolumn.set_attribute(cell2, 'pixbuf', 1) # the second column contains the pixbuf\n\nYou can have otherwise a tree model with just one column with the objects that contains all you need, so just set a callback:\nclass MyObject:\n def __init__(self, text, pixbuf):\n self.text = text\n self.pixbuf = pixbuf\n\ndef cell1_cb(col, cell, model, iter):\n obj = model.get_value(iter)\n cell.set_property('text', obj.text)\n\ndef cell2_cb(col, cell, model, iter):\n obj = model.get_value(iter)\n cell.set_property('pixbuf', obj.pixbuf)\n\ncolumn = gtk.TreeViewColumn('Pixbuf and text')\ncell1 = gtk.CellRenderText()\ncell2 = gtk.CellRenderPixbuf()\ncolumn.pack_start(cell1, True)\ncolumn.pack_start(cell2, False)\ncolumn.set_cell_data_func(cell1, cell1_cb)\ncolumn.set_cell_data_func(cell2, cell2_cb)\n\nI hope I give you an idea of what you can do and a start point. Disclaimer: I did not test the code.\n"
] |
[
0
] |
[] |
[] |
[
"gtktreeview",
"pygtk",
"python"
] |
stackoverflow_0002735803_gtktreeview_pygtk_python.txt
|
Q:
python eval weirdness
I have the following code in one of my classes along with checks when the code does not eval:
filterParam="self.recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and self.recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]"
if eval(filterParam):
print "Evalled"
else:
print "Not Evalled\nfilterParam\n'%s'\ntmpBPSS\n'%s'\nself.recipientMSISDN\n'%s'\nself.recipientIMSI\n'%s'" % (filterParam, tmpBPSS, self.recipientMSISDN, self.recipientIMSI)
I am not getting anything to 'eval'. Here are the results:
Not Evalled
filterParam
'self.recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and self.recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]'
tmpBPSS
'bprm_DAILY_MO_919844000039#892000000'
self.recipientMSISDN
'919844000039'
self.recipientIMSI
'892000000'
So I used the outputs from the above to check the code in a python shell and as you can see the code evalled correctly:
>>> filterParam="recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]"
>>> tmpBPSS='bprm_DAILY_MO_919844000039#892000000'
>>> recipientMSISDN='919844000039'
>>> recipientIMSI='892000000'
>>> if eval(filterParam):
... print "Evalled"
... else:
... print "Not Evalled"
...
Evalled
Am I off my rocker or what am I missing?
A
A:
Most likely, the type of self.recipientIMSI or self.recipientMSISDN is int, and comparing them with strings returns False. Add this line to see if this is the case:
print type(self.recipientIMSI), type(self.recipientMSISDN)
If not, try checking what the same expression evaluates to without eval.
That said, are you sure you need to use eval? Usually there's a way of doing things without eval or exec, which will lead to safer, more maintainable code.
A:
The return value from eval is not whether or the code was evaluated, but the actual value returned by doing so. Since you have an and statement in your code string, presumably one or both of the expressions evaluate to False.
A:
Why are you even doing the eval at all? Why not just make the comparison directly in the if statement?
It's possible there is a type mismatch. One of those values you specify could be unicode or some other type of string-like object. When you print it, you're casting it to a string and so they look equal, but they may be different types, and so evaluate to False.
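To illustrate the type-mismatch theory from the first answer, here is a standalone check using the question's values (the integer IMSI is an assumption made only for the demonstration):
tmpBPSS = 'bprm_DAILY_MO_919844000039#892000000'
recipientMSISDN = '919844000039'
recipientIMSI = 892000000            # an int, while the parsed value is a str

parts = tmpBPSS.split('_')[3].split('#')
print(recipientMSISDN == parts[0] and recipientIMSI == parts[1])        # False
print(recipientMSISDN == parts[0] and str(recipientIMSI) == parts[1])   # True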
|
python eval weirdness
|
I have the following code in one of my classes along with checks when the code does not eval:
filterParam="self.recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and self.recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]"
if eval(filterParam):
print "Evalled"
else:
print "Not Evalled\nfilterParam\n'%s'\ntmpBPSS\n'%s'\nself.recipientMSISDN\n'%s'\nself.recipientIMSI\n'%s'" % (filterParam, tmpBPSS, self.recipientMSISDN, self.recipientIMSI)
I am not getting anything to 'eval'. Here are the results:
Not Evalled
filterParam
'self.recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and self.recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]'
tmpBPSS
'bprm_DAILY_MO_919844000039#892000000'
self.recipientMSISDN
'919844000039'
self.recipientIMSI
'892000000'
So I used the outputs from the above to check the code in a python shell and as you can see the code evalled correctly:
>>> filterParam="recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]"
>>> tmpBPSS='bprm_DAILY_MO_919844000039#892000000'
>>> recipientMSISDN='919844000039'
>>> recipientIMSI='892000000'
>>> if eval(filterParam):
... print "Evalled"
... else:
... print "Not Evalled"
...
Evalled
Am I off my rocker or what am I missing?
A
|
[
"Most likely, the type of self.recipientIMSI or self.recipientMSISDN is int, and comparing them with strings returns False. Add this line to see if this is the case:\nprint type(self.recipientIMSI), type(self.recipientMSISDN)\n\nIf not, try checking what the same expression evaluates to without eval.\nThat said, Are you sure you need to use eval? Usually there's a way of doing things without eval or exec, which will lead to safer, more maintainable code.\n",
"The return value from eval is not whether or the code was evaluated, but the actual value returned by doing so. Since you have an and statement in your code string, presumably one or both of the expressions evaluate to False.\n",
"Why are you even doing the eval at all? Why not just make the comparison directly in the if statement?\nIt's possible there is a type mismatch. One of those values you specify could be unicode or some other type of string-like object. Whey you print it, you're casting it to a string and so they look equal, but they may be different types, and so evaluate to False.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"eval",
"python"
] |
stackoverflow_0002736460_eval_python.txt
|
Q:
Django : presenting a form very different from the model and with multiple field values in a Django-ish way?
I'm currently doing a firewall management application for Django, here's the (simplified) model :
class Port(models.Model):
number = models.PositiveIntegerField(primary_key=True)
application = models.CharField(max_length=16, blank=True)
class Rule(models.Model):
port = models.ForeignKey(Port)
ip_source = models.IPAddressField()
ip_mask = models.IntegerField(validators=[MaxValueValidator(32)])
machine = models.ForeignKey("vmm.machine")
What I would like to do, however, is to display to the user a form for entering rules, but with a very different organization than the model :
Port 80
O Not open
O Everywhere
O Specific addresses :
--------- delete field
--------- delete field
+ add address field
Port 443
... etc
Where Not open means that there is no rule for the given port, Everywhere means that there is only ONE rule (0.0.0.0/0) for the given port, and with specific addresses, you can add as many addresses as you want (I did this with JQuery), which will make as many rules.
Now I did a version completely "handmade", meaning that I create the forms entirely in my templates, set input names with a prefix, and parse all the POSTed stuff in my view (which is quite painful, and means that there's no point in using a web framework).
I also have a class which aggregates the rules together to easily pre-fill the forms with the information "not open, everywhere, ...". I'm passing a list of those to the template, so it acts as an interface between my model and my "handmade" form:
class MachinePort(object):
def __init__(self, machine, port):
self.machine = machine
self.port = port
@property
def fully_open(self):
for rule in self.port.rule_set.filter(machine=self.machine):
if ipaddr.IPv4Network("%s/%s" % (rule.ip_source, rule.ip_mask)) == ipaddr.IPv4Network("0.0.0.0/0"):
return True
else :
return False
@property
def partly_open(self):
return bool(self.port.rule_set.filter(machine=self.machine)) and not self.fully_open
@property
def not_open(self):
return not self.partly_open and not self.fully_open
But all this is rather ugly! Does anyone know of a cleaner way to do this? In particular with the form... I don't know how to have a form with an undefined number of fields, nor how to transform these fields into Rule objects (because all the rule fields would have to be gathered from the form), nor how to save multiple objects... Well, I could try to hack into the Form class, but that seems like too much work for such a special case. Is there any nice feature I'm missing?
A:
You can create usual Form objects by subclassing Form and adding fields in the constructor, as in:
self.base_fields[field_name] = field_instance
As for the Rule, you can create a custom Field that will validate() itself according to your rules and add it to your custom form as above.
So yes, it must be handmade (AFAIK), but it's not that much code.
A:
Ok, finally I got it running by making the models closer to what I wanted to present to the user. But related to the topic of the question :
1) Nested forms/formsets are not a built-in Django feature, are a pain to implement by yourself, and are actually not needed... Rather, one should use forms' and formsets' prefixes.
2) Trying to work with forms not based on the models, processing the data, then reinjecting it into the models, is much more code than modifying the models a little bit to have nice model-based forms.
So what I did is I modified the models like that :
class PortConfig(Serializable):
port = models.ForeignKey(Port, editable=False)
machine = models.ForeignKey("vmm.machine", editable=False)
is_open = models.CharField(max_length=16, default="not_open", choices=is_open_choices)
class Rule(Serializable):
ip_source = models.CharField(max_length=24)
port_config = models.ForeignKey(PortConfig)
Then I simply used a "model formset" for PortConfig, and "model inline formset" for Rule, with a PortConfig as foreign key, and it went perfectly
3) I used this great JS library http://code.google.com/p/django-dynamic-formset/ to put the "add field" and "delete field" links ... you almost have nothing to do.
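A rough sketch of the formset wiring described in point 2) (exact factory keyword arguments vary between Django versions, and machine / port_config are placeholder objects, not code from the question):
from django.forms.models import modelformset_factory, inlineformset_factory

PortConfigFormSet = modelformset_factory(PortConfig, extra=0)
RuleFormSet = inlineformset_factory(PortConfig, Rule, extra=1, can_delete=True)

config_formset = PortConfigFormSet(queryset=PortConfig.objects.filter(machine=machine))
rule_formset = RuleFormSet(instance=port_config, prefix='rules')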
|
Django : presenting a form very different from the model and with multiple field values in a Django-ish way?
|
I'm currently doing a firewall management application for Django, here's the (simplified) model :
class Port(models.Model):
number = models.PositiveIntegerField(primary_key=True)
application = models.CharField(max_length=16, blank=True)
class Rule(models.Model):
port = models.ForeignKey(Port)
ip_source = models.IPAddressField()
ip_mask = models.IntegerField(validators=[MaxValueValidator(32)])
machine = models.ForeignKey("vmm.machine")
What I would like to do, however, is to display to the user a form for entering rules, but with a very different organization than the model :
Port 80
O Not open
O Everywhere
O Specific addresses :
--------- delete field
--------- delete field
+ add address field
Port 443
... etc
Where Not open means that there is no rule for the given port, Everywhere means that there is only ONE rule (0.0.0.0/0) for the given port, and with specific addresses, you can add as many addresses as you want (I did this with JQuery), which will make as many rules.
Now I did a version completely "handmade", meaning that I create the forms entirely in my templates, set input names with a prefix, and parse all the POSTed stuff in my view (which is quite painful, and means that there's no point in using a web framework).
I also have a class which aggregates the rules together to easily pre-fill the forms with the information "not open, everywhere, ...". I'm passing a list of those to the template, so it acts as an interface between my model and my "handmade" form:
class MachinePort(object):
def __init__(self, machine, port):
self.machine = machine
self.port = port
@property
def fully_open(self):
for rule in self.port.rule_set.filter(machine=self.machine):
if ipaddr.IPv4Network("%s/%s" % (rule.ip_source, rule.ip_mask)) == ipaddr.IPv4Network("0.0.0.0/0"):
return True
else :
return False
@property
def partly_open(self):
return bool(self.port.rule_set.filter(machine=self.machine)) and not self.fully_open
@property
def not_open(self):
return not self.partly_open and not self.fully_open
But all this is rather ugly! Does anyone know of a cleaner way to do this? In particular with the form... I don't know how to have a form with an undefined number of fields, nor how to transform these fields into Rule objects (because all the rule fields would have to be gathered from the form), nor how to save multiple objects... Well, I could try to hack into the Form class, but that seems like too much work for such a special case. Is there any nice feature I'm missing?
|
[
"You can create usual Forms objects by subclassing Form and adding fields in constructor, as in:\nself.base_fields[field_name] = field_instance\n\nAs for the Rule, You can create a custom Field that will validate() itself according to Your rules and add it to Your custom form as above.\nSo Yes, it must be handmande (AFAIK), but it's not so much code. \n",
"Ok, finally I got it running by making the models closer to what I wanted to present to the user. But related to the topic of the question :\n1) Nested forms/formsets are not a built-in Django feature, are a pain to implement by yourself, and are actually not needed... Rather, one should use forms' and formsets' prefixes.\n2) Trying to work with forms not based on the models, process the data, then reinject it in the models, is much much more code than modifying the models a little bit to have nice model-based forms.\nSo what I did is I modified the models like that :\nclass PortConfig(Serializable):\n port = models.ForeignKey(Port, editable=False)\n machine = models.ForeignKey(\"vmm.machine\", editable=False)\n is_open = models.CharField(max_length=16, default=\"not_open\", choices=is_open_choices)\n\nclass Rule(Serializable):\n ip_source = models.CharField(max_length=24)\n port_config = models.ForeignKey(PortConfig)\n\nThen I simply used a \"model formset\" for PortConfig, and \"model inline formset\" for Rule, with a PortConfig as foreign key, and it went perfectly\n3) I used this great JS library http://code.google.com/p/django-dynamic-formset/ to put the \"add field\" and \"delete field\" links ... you almost have nothing to do.\n"
] |
[
1,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002605886_django_python.txt
|
Q:
how to generate py files without default windows crlf?
How can I write a .py file from Python such that its type is not 'ASCII file with Windows CRLF'?
When I run file.write(data) on Windows it writes the file, but when I try to
eval(open(file.py).read()) it fails and gives a syntax error because of the Windows CRLF on each line.
see the error log - traceback
ERROR:web-services:[25]: info = eval(tools.file_open(terp_file).read(-1))
ERROR:web-services:[26]: File "<string>", line 1
ERROR:web-services:[27]: {
ERROR:web-services:[28]:
A:
The problem is not with the CRLF, but that eval is for evaluating a single expression, not an entire program.
You can use exec to execute a program from a string, or execfile to execute it directly from a file.
To answer your original question anyway, you can avoid writing CRLF by opening the file in binary mode: f = open(filename, 'wb')
A:
I think you're looking for execfile function.
A:
You can:
Write the file in binary mode by opening it with open(file, 'wb') or...
Strip the CR-LFs from the string before writing it with something like data.replace('\r','')
I would steer clear of exec and use execfile as mentioned by SilentGhost.
A:
Just open the file in binary mode: open("myfile.py", "rwb")
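Putting the suggestions together — writing in binary mode so no CR-LF translation happens, then running the file with execfile — a minimal sketch (Python 2; the file name is illustrative, and on Python 3 exec(open(path).read()) replaces execfile):
with open('generated.py', 'wb') as f:
    f.write('print("hello from generated code")\n')

execfile('generated.py')   # prints: hello from generated code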
|
how to generate py files without default windows crlf?
|
How can I write a .py file from Python such that its type is not 'ASCII file with Windows CRLF'?
When I run file.write(data) on Windows it writes the file, but when I try to
eval(open(file.py).read()) it fails and gives a syntax error because of the Windows CRLF on each line.
see the error log - traceback
ERROR:web-services:[25]: info = eval(tools.file_open(terp_file).read(-1))
ERROR:web-services:[26]: File "<string>", line 1
ERROR:web-services:[27]: {
ERROR:web-services:[28]:
|
[
"The problem is not with the CRLF, but that eval is for evaluating a single expression, not an entire program.\nYou can use exec to execute a program from a string, or execfile to execute it directly from a file.\nTo answer your original question anyway, you can avoid writing CRLF by opening the file in binary mode: f = open(filename, 'wb')\n",
"I think you're looking for execfile function.\n",
"You can:\n\nWrite the file in binary mode by opening it with open(file, 'wb') or...\nStrip the CR-LFs from the string before writing it with something like data.replace('\\r','')\n\nI would steer clear of exec and use execfile as mentioned by SilentGhost.\n",
"Just open the file in binary mode: open(\"myfile.py\", \"rwb\")\n"
] |
[
1,
1,
1,
0
] |
[] |
[] |
[
"code_generation",
"eval",
"python"
] |
stackoverflow_0002737208_code_generation_eval_python.txt
|
Q:
Python: sort a list and change another one consequently
I have two lists: one contains a set of x points, the other contains y points. Python somehow manages to mix the x points up, or the user could. I need to sort them from lowest to highest, and move the y points to follow their corresponding x values. They are in two separate lists... how do I do it?
A:
You could zip the lists and sort the result. Sorting tuples should, by default, sort on the first member.
>>> xs = [3,2,1]
>>> ys = [1,2,3]
>>> points = zip(xs,ys)
>>> points
[(3, 1), (2, 2), (1, 3)]
>>> sorted(points)
[(1, 3), (2, 2), (3, 1)]
And then to unpack them again:
>>> sorted_points = sorted(points)
>>> new_xs = [point[0] for point in sorted_points]
>>> new_ys = [point[1] for point in sorted_points]
>>> new_xs
[1, 2, 3]
>>> new_ys
[3, 2, 1]
A:
>>> xs = [5, 2, 1, 4, 6, 3]
>>> ys = [1, 2, 3, 4, 5, 6]
>>> xs, ys = zip(*sorted(zip(xs, ys)))
>>> xs
(1, 2, 3, 4, 5, 6)
>>> ys
(3, 2, 6, 4, 1, 5)
A:
>>> import numpy
>>> sorted_index = numpy.argsort(xs)
>>> xs = [xs[i] for i in sorted_index]
>>> ys = [ys[i] for i in sorted_index]
if you can work with numpy.array
>>> xs = numpy.array([3,2,1])
>>> ys = numpy.array([1,2,3])
>>> sorted_index = numpy.argsort(xs)
>>> xs = xs[sorted_index]
>>> ys = ys[sorted_index]
A:
If the x and the y are meant to be a single unit (such as a point), it would make more sense to store them as tuples rather than two separate lists.
Regardless, here's what you should do:
x = [4, 2, 5, 4, 5,…]
y = [4, 5, 2, 3, 1,…]
zipped_list = zip(x,y)
sorted_list = sorted(zipped_list)
|
Python: sort a list and change another one consequently
|
I have two lists: one contains a set of x points, the other contains y points. Python somehow manages to mix the x points up, or the user could. I need to sort them from lowest to highest, and move the y points to follow their corresponding x values. They are in two separate lists... how do I do it?
|
[
"You could zip the lists and sort the result. Sorting tuples should, by default, sort on the first member.\n>>> xs = [3,2,1]\n>>> ys = [1,2,3]\n>>> points = zip(xs,ys)\n>>> points\n[(3, 1), (2, 2), (1, 3)]\n>>> sorted(points)\n[(1, 3), (2, 2), (3, 1)]\n\nAnd then to unpack them again:\n>>> sorted_points = sorted(points)\n>>> new_xs = [point[0] for point in sorted_points]\n>>> new_ys = [point[1] for point in sorted_points]\n>>> new_xs\n[1, 2, 3]\n>>> new_ys\n[3, 2, 1]\n\n",
">>> xs = [5, 2, 1, 4, 6, 3]\n>>> ys = [1, 2, 3, 4, 5, 6]\n>>> xs, ys = zip(*sorted(zip(xs, ys)))\n>>> xs\n(1, 2, 3, 4, 5, 6)\n>>> ys\n(3, 2, 6, 4, 1, 5)\n\n",
">>> import numpy\n\n>>> sorted_index = numpy.argsort(xs)\n>>> xs = [xs[i] for i in sorted_index]\n>>> ys = [ys[i] for i in sorted_index]\n\nif you can work with numpy.array\n>>> xs = numpy.array([3,2,1])\n>>> xs = numpy.array([1,2,3])\n>>> sorted_index = numpy.argsort(xs)\n>>> xs = xs[sorted_index]\n>>> ys = ys[sorted_index]\n\n",
"If the x and the y are meant to be a single unit (such as a point), it would make more sense to store them as tuples rather than two separate lists.\nRegardless, here's what you should do:\nx = [4, 2, 5, 4, 5,…]\ny = [4, 5, 2, 3, 1,…]\n\nzipped_list = zip(x,y)\nsorted_list = sorted(zipped_list)\n\n"
] |
[
19,
16,
10,
4
] |
[] |
[] |
[
"list",
"python",
"sorting"
] |
stackoverflow_0002732994_list_python_sorting.txt
|
Q:
Does Django cache url regex patterns somehow?
I'm a Django newbie who needs help: Even though I change some urls in my urls.py I keep on getting the same error message from Django. Here is the relevant line from my settings.py:
ROOT_URLCONF = 'mydjango.urls'
Here is my urls.py:
from django.conf.urls.defaults import *
# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
# Example:
# (r'^mydjango/', include('mydjango.foo.urls')),
# Uncomment the admin/doc line below and add 'django.contrib.admindocs'
# to INSTALLED_APPS to enable admin documentation:
#(r'^admin/doc/', include(django.contrib.admindocs.urls)),
# (r'^polls/', include('mydjango.polls.urls')),
(r'^$', 'mydjango.polls.views.homepage'),
(r'^polls/$', 'mydjango.polls.views.index'),
(r'^polls/(?P<poll_id>\d+)/$', 'mydjango.polls.views.detail'),
(r'^polls/(?P<poll_id>\d+)/results/$', 'mydjango.polls.views.results'),
(r'^polls/(?P<poll_id>\d+)/vote/$', 'mydjango.polls.views.vote'),
(r'^polls/randomTest1/', 'mydjango.polls.views.randomTest1'),
(r'^admin/', include(admin.site.urls)),
)
So I expect that whenever I visit http://mydjango.yafz.org/polls/randomTest1/ the mydjango.polls.views.randomTest1 function should run because in my polls/views.py I have the relevant function:
def randomTest1(request):
# mainText = request.POST['mainText']
return HttpResponse("Default random test")
However I keep on getting the following error message:
Page not found (404)
Request Method: GET
Request URL: http://mydjango.yafz.org/polls/randomTest1
Using the URLconf defined in mydjango.urls, Django tried these URL patterns, in this order:
1. ^$
2. ^polls/$
3. ^polls/(?P<poll_id>\d+)/$
4. ^polls/(?P<poll_id>\d+)/results/$
5. ^polls/(?P<poll_id>\d+)/vote/$
6. ^admin/
7. ^polls/randomTest/$
The current URL, polls/randomTest1, didn't match any of these.
I'm surprised because again and again I check urls.py and there is no
^polls/randomTest/$
in it, but there is
^polls/randomTest1/'
It seems like Django is somehow storing the previous contents of urls.py and I just don't know how to make my latest changes effective.
Any ideas? Why do I keep on seeing some old version of regexes when I try to load that page even though I changed my urls.py?
A:
Django compiles the URL regexes when it starts up for performance reasons - restart your server and you should see the new URL working correctly.
|
Does Django cache url regex patterns somehow?
|
I'm a Django newbie who needs help: Even though I change some urls in my urls.py I keep on getting the same error message from Django. Here is the relevant line from my settings.py:
ROOT_URLCONF = 'mydjango.urls'
Here is my urls.py:
from django.conf.urls.defaults import *
# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
# Example:
# (r'^mydjango/', include('mydjango.foo.urls')),
# Uncomment the admin/doc line below and add 'django.contrib.admindocs'
# to INSTALLED_APPS to enable admin documentation:
#(r'^admin/doc/', include(django.contrib.admindocs.urls)),
# (r'^polls/', include('mydjango.polls.urls')),
(r'^$', 'mydjango.polls.views.homepage'),
(r'^polls/$', 'mydjango.polls.views.index'),
(r'^polls/(?P<poll_id>\d+)/$', 'mydjango.polls.views.detail'),
(r'^polls/(?P<poll_id>\d+)/results/$', 'mydjango.polls.views.results'),
(r'^polls/(?P<poll_id>\d+)/vote/$', 'mydjango.polls.views.vote'),
(r'^polls/randomTest1/', 'mydjango.polls.views.randomTest1'),
(r'^admin/', include(admin.site.urls)),
)
So I expect that whenever I visit http://mydjango.yafz.org/polls/randomTest1/ the mydjango.polls.views.randomTest1 function should run because in my polls/views.py I have the relevant function:
def randomTest1(request):
# mainText = request.POST['mainText']
return HttpResponse("Default random test")
However I keep on getting the following error message:
Page not found (404)
Request Method: GET
Request URL: http://mydjango.yafz.org/polls/randomTest1
Using the URLconf defined in mydjango.urls, Django tried these URL patterns, in this order:
1. ^$
2. ^polls/$
3. ^polls/(?P<poll_id>\d+)/$
4. ^polls/(?P<poll_id>\d+)/results/$
5. ^polls/(?P<poll_id>\d+)/vote/$
6. ^admin/
7. ^polls/randomTest/$
The current URL, polls/randomTest1, didn't match any of these.
I'm surprised because again and again I check urls.py and there is no
^polls/randomTest/$
in it, but there is
^polls/randomTest1/'
It seems like Django is somehow storing the previous contents of urls.py and I just don't know how to make my latest changes effective.
Any ideas? Why do I keep on seeing some old version of regexes when I try to load that page even though I changed my urls.py?
|
[
"Django compiles the URL regexes when it starts up for performance reasons - restart your server and you should see the new URL working correctly.\n"
] |
[
7
] |
[] |
[] |
[
"django",
"django_urls",
"python"
] |
stackoverflow_0002737400_django_django_urls_python.txt
|
Q:
working on lists in python
I'm trying to make a small modification to the django-lfs project that will allow me to deactivate products with no stock. Unfortunately I'm just beginning to learn Python, so I have big trouble with its syntax. This is what I'm trying to do: I'm using the method 'is_variant', which returns true if my product is a sub type. If it is a variant I turn to the parent product, get its active variants and check their stocks. If the stock is more than 0 the variable active is 0, else it is 1. If after looping through the variants 'active' is still 1, I set the parent product's active to false.
I somehow cannot make it work the proper way. When using :
def deactivate(self):
if self.is_variant():
prod = self.parent
prod.active = all(var.get_stock_amount() != 0 for var in prod.variants.filter(active=True))
else:
prod.active = self.get_stock_amount() != 0
self.parent.save()
It deactivates my product no matter if it's variants have stocks or not. And when using :
inactive = 0
if self.is_variant():
prod = self.parent
for s in prod.variants.filter(active=True):
if s.get_stock_amount() == 0:
inactive = 1
else:
inactive = 0
if inactive == 1:
prod.active = 0
prod.save()
else:
if self.get_stock_amount() == 0:
self.active = 0
self.save()
The same happens, so my product is deactivated each time.
I've checked return types in shell and self is a variant and it is active.
A:
First, I wouldn't call the list set, because this is a Python built-in method (see set). Use append on the list (your syntax is just incorrect and the error you get explicitly tells you so ;) ) and you have to initialize the list before:
def deactivate(self):
"""If there are no stocks, deactivate the product. Used in last step of checkout.
"""
if self.has_variants():
sets = []
for s in self.variants.filter(active=True):
sets.append(s)
for var in sets:
...
But why creating a list beforehand if the only purpose is to iterate over it again? You can just do:
def deactivate(self):
"""If there are no stocks, deactivate the product. Used in last step of checkout.
"""
if self.has_variants():
for s in self.variants.filter(active=True):
if s.get_stock_amount() == 0:
inactive = True
else:
inactive = False
else:
...
Read more about lists.
A:
This code is wrong in so many ways.
Creation of a list named set (as noted above)
Setting a variable multiple times in a loop without reading
Unneeded return value (if there is only one exit point, how can this be useful?)
I think this would work just as well:
def deactivate(self):
"""If there are no stocks, deactivate the product. Used in last step of checkout.
"""
if self.has_variants():
inactive = any(var.get_stock_amount() == 0 for var in self.variants.filter(active=True))
else:
inactive = self.get_stock_amount() == 0
self.active = not inactive
or maybe:
def deactivate(self):
"""If there are no stocks, deactivate the product. Used in last step of checkout.
"""
if self.has_variants():
self.active = all(var.get_stock_amount() != 0 for var in self.variants.filter(active=True))
else:
self.active = self.get_stock_amount() != 0
A:
Proper solution. I guess there's still plenty of room to optimize it, but first I need to learn how :) :
def deactivate(self):
"""If there are no stocks, deactivate the product. Used in last step of checkout.
"""
inactive = False
if self.is_variant():
prod = self.parent
inactive = all(var.get_stock_amount() == 0 for var in prod.variants.filter(active=True))
if inactive:
prod.active = 0
prod.save()
else:
if self.get_stock_amount() == 0:
self.active = 0
self.save()
|
working on lists in python
|
I'm trying to make a small modification to the django-lfs project that will allow me to deactivate products with no stock. Unfortunately I'm just beginning to learn Python, so I have big trouble with its syntax. This is what I'm trying to do: I'm using the method 'is_variant', which returns true if my product is a sub type. If it is a variant I turn to the parent product, get its active variants and check their stocks. If the stock is more than 0 the variable active is 0, else it is 1. If after looping through the variants 'active' is still 1, I set the parent product's active to false.
I somehow cannot make it work the proper way. When using :
def deactivate(self):
if self.is_variant():
prod = self.parent
prod.active = all(var.get_stock_amount() != 0 for var in prod.variants.filter(active=True))
else:
prod.active = self.get_stock_amount() != 0
self.parent.save()
It deactivates my product no matter if it's variants have stocks or not. And when using :
inactive = 0
if self.is_variant():
prod = self.parent
for s in prod.variants.filter(active=True):
if s.get_stock_amount() == 0:
inactive = 1
else:
inactive = 0
if inactive == 1:
prod.active = 0
prod.save()
else:
if self.get_stock_amount() == 0:
self.active = 0
self.save()
The same happens, so my product is deactivated each time.
I've checked return types in shell and self is a variant and it is active.
|
[
"First, I wouldn't call the list set, because this is a Python built-in method (see set). Use append on the list (your syntax is just incorrect and the error you get explicitly tells you so ;) ) and you have to initialize the list before:\ndef deactivate(self):\n\"\"\"If there are no stocks, deactivate the product. Used in last step of checkout.\n\"\"\"\nif self.has_variants():\n sets = []\n for s in self.variants.filter(active=True):\n sets.append(s) \n for var in sets:\n ...\n\nBut why creating a list beforehand if the only purpose is to iterate over it again? You can just do:\ndef deactivate(self):\n\"\"\"If there are no stocks, deactivate the product. Used in last step of checkout.\n\"\"\"\nif self.has_variants():\n for s in self.variants.filter(active=True): \n if s.get_stock_amount() == 0:\n inactive = True\n else:\n inactive = False\nelse:\n ...\n\nRead more about lists.\n",
"This code is wrong in so many ways.\n\nCreation of a list named set (as noted above)\nSetting a variable multiple times in a loop without reading\nUnneeded return value (if there is only one exit point, how can this be useful?)\n\nI think this would work just as well:\ndef deactivate(self):\n \"\"\"If there are no stocks, deactivate the product. Used in last step of checkout.\n \"\"\"\n if self.has_variants():\n inactive = any(var.get_stock_amount() == 0 for var in self.variants.filter(active=True))\n else:\n inactive = self.get_stock_amount() == 0\n self.active = not inactive\n\nor maybe:\ndef deactivate(self):\n \"\"\"If there are no stocks, deactivate the product. Used in last step of checkout.\n \"\"\"\n if self.has_variants():\n self.active = all(var.get_stock_amount() != 0 for var in self.variants.filter(active=True))\n else:\n self.active = self.get_stock_amount() != 0\n\n",
"Proper solution. Guess there's still plenty of place to optimize it but first I need to learn how :) :\ndef deactivate(self):\n \"\"\"If there are no stocks, deactivate the product. Used in last step of checkout.\n \"\"\"\n\n inactive = False\n\n if self.is_variant():\n prod = self.parent\n inactive = all(var.get_stock_amount() == 0 for var in prod.variants.filter(active=True))\n if inactive:\n prod.active = 0\n prod.save()\n else:\n if self.get_stock_amount() == 0:\n self.active = 0\n\n self.save()\n\n"
] |
[
8,
2,
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0002611307_django_django_models_python.txt
|
Q:
How to install pyobjc on SnowLeopard's non-default python installation
I'm having problems installing pyobjc on SnowLeopard.
It came with python 2.6 but I need 2.5 so I have installed 2.5 successfully. After that I have installed xcode. After that I have installed pyobjc with "easy_install-2.5 pyobjc"
But when I start my python 2.5 and from cmd line try to import Foundation, it says "no module named Foundation"
I tried to do
export PYTHONPATH="/Library/Python/2.5/site-packages/pyobjc_core-2.2-py2.5-macosx-10.6-i386.egg/objc"
before starting the python interpreter, but still no luck (this .egg directory is the only directory the pyobjc installation made, and there are several more egg files there in site-packages... in the objc subdir there is an __init__.py file)
Of course, from 2.6 everything works fine. How do I find out what's wrong and what should i do?
When I print sys.modules from python 2.6 I find that objc that gets imported is basically from the same install location "/Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/", so why it won't work for 2.5?
A:
Ok, found what's wrong.
My SnowLeopard came with BOTH python 2.6 (default) and 2.5 installed
XCode installed objc for both.
So basically I have broken my pythonpath etc with additional python 2.5 and objc manual installations, somehow libraries weren't compatible (mine and original python are both 2.5.4 but slightly different release and what's more important probably built with different build options)
What I did is: making sure I start everything with original python2.5 (on my system it's in /usr/bin/python2.5), removing wrong entries from easy_install.pth in site-packages, and adding the path to PyObjc to easy_install.pth.
Sorry for not finding out sooner, but I hope this will be helpful to someone in the future!
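When untangling several interpreters like this, a quick sanity check of which Python and which objc build are actually in use can save time (purely diagnostic; it assumes PyObjC is importable at all):
import sys
print(sys.executable)    # the interpreter binary actually running
print(sys.path)          # where it will look for site-packages

import objc
print(objc.__file__)     # which PyObjC egg was picked up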
|
How to install pyobjc on SnowLeopard's non-default python installation
|
I'm having problems installing pyobjc on SnowLeopard.
It came with python 2.6 but I need 2.5 so I have installed 2.5 successfully. After that I have installed xcode. After that I have installed pyobjc with "easy_install-2.5 pyobjc"
But when I start my python 2.5 and from cmd line try to import Foundation, it says "no module named Foundation"
I tried to do
export PYTHONPATH="/Library/Python/2.5/site-packages/pyobjc_core-2.2-py2.5-macosx-10.6-i386.egg/objc"
before starting the python interpreter, but still no luck (this .egg directory is the only directory the pyobjc installation made, and there are several more egg files there in site-packages... in the objc subdir there is an __init__.py file)
Of course, from 2.6 everything works fine. How do I find out what's wrong and what should i do?
When I print sys.modules from python 2.6 I find that objc that gets imported is basically from the same install location "/Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/", so why it won't work for 2.5?
|
[
"Ok, found what's wrong.\nMy SnowLeopard came with BOTH python 2.6 (default) and 2.5 installed\nXCode installed objc for both.\nSo basically I have broken my pythonpath etc with additional python 2.5 and objc manual installations, somehow libraries weren't compatible (mine and original python are both 2.5.4 but slightly different release and what's more important probably built with different build options)\nWhat I did is: making sure I start everything with original python2.5 (on my system it's in /usr/bin/python2.5), removing wrong entries from easy_install.pth in site-packages, and adding the path to PyObjc to easy_install.pth.\nSorry for not finding out sooner, but I hope this will be helpful to someone in the future!\n"
] |
[
4
] |
[] |
[] |
[
"cocoa",
"import",
"macos",
"pyobjc",
"python"
] |
stackoverflow_0002722730_cocoa_import_macos_pyobjc_python.txt
|
Q:
Adding CSRF protection to simple comment forms in Django
I have blog comment forms in Django and I would like to know the following:
Should I add CSRF to the forms?
If I want to use the simple "render_comment_form" method, how do I add it?
If I can't add it like that, what is the best practice for doing it?
Each tutorial or discussion on the subject seems to have a different approach, and I am not certain I understand how it all works.
A:
My answer assumes that you are using Django 1.2:
Yes! You should protect all your data that is sent by POST requests to the server against CSRF attacks.
You don't need to add the token yourself. This is already done by django. Have a look at the default template that is used by the render_comment_form tag and you will see, that the csrf_token is already included. You can overwrite this template in your project and including the CSRF token into it is as easy as writing {% csrf_token %} into the form.
There is a way to protect your forms even if you don't set the tokens in the templates. Have a look at django's documentation about that topic. But this method is marked as a legacy method so it's not recommended to use that - it's only provided for backwards compatibility with versions of Django earlier than 1.2.
|
Adding CSRF protection to simple comment forms in Django
|
I have blog comment forms in Django and I would like to know the following:
Should I add CSRF to the forms?
If I want to use the simple "render_comment_form" method, how do I add it?
If I can't add it like that, what is the best practice for doing it?
Each tutorial or discussion on the subject seems to have a different approach, and I am not certain I understand how it all works.
|
[
"My answer assumes that you are using Django 1.2:\n\nYes! You should protect all your data that is sent by POST requests to the server against CSRF attacks.\nYou don't need to add the token yourself. This is already done by django. Have a look at the default template that is used by the render_comment_form tag and you will see, that the csrf_token is already included. You can overwrite this template in your project and including the CSRF token into it is as easy as writing {% csrf_token %} into the form.\nThere is a way to protect your forms even if you don't set the tokens in the templates. Have a look at django's documentation about that topic. But this method is marked as a legacy method so it's not recommended to use that - it's only provided for backwards compatibility with versions of Django earlier than 1.2.\n\n"
] |
[
1
] |
[] |
[] |
[
"blogs",
"django",
"python"
] |
stackoverflow_0002738120_blogs_django_python.txt
|
Q:
trying to run werkzeug on apache (wsgi error)
My data_site.wsgi file:
import main
application = application()
Error i get at apache:
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] Traceback (most recent call last):
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] File "/var/www/vhosts/data.oddprojects.net/htdocs/data_site.wsgi", line 1, in <module>
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] import main
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] ImportError: No module named main
Paths:
htdocs
data_site.wsgi
main.py
A:
The PYTHONPATH under mod_wsgi doesn't include the directory the .wsgi is in. I often use something like the below in my .wsgi files.
import os, sys; sys.path.append(os.path.dirname(__file__))
(You might opt for .insert(0, ...) instead of .append(...) if that works better for you.)
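Combined with that path fix, the whole data_site.wsgi could look roughly like this (it assumes main.py defines a WSGI callable named application — adjust the attribute name to whatever main actually exposes):
import os, sys
sys.path.insert(0, os.path.dirname(__file__))

import main
application = main.application   # the Werkzeug/WSGI callable exported by main.py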
|
trying to run werkzeug on apache (wsgi error)
|
My data_site.wsgi file:
import main
application = application()
Error i get at apache:
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] Traceback (most recent call last):
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] File "/var/www/vhosts/data.oddprojects.net/htdocs/data_site.wsgi", line 1, in <module>
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] import main
[Thu Apr 29 07:07:41 2010] [error] [client 81.167.201.136] ImportError: No module named main
Paths:
htdocs
data_site.wsgi
main.py
|
[
"The PYTHONPATH under mod_wsgi doesn't include the directory the .wsgi is in. I often use something like the below in my .wsgi files.\nimport os, sys; sys.path.append(os.path.dirname(__file__))\n\n(You might opt for .insert(0, ...) instead of .append(...) if that works better for you.)\n"
] |
[
3
] |
[] |
[] |
[
"apache",
"mod_wsgi",
"python",
"werkzeug"
] |
stackoverflow_0002738214_apache_mod_wsgi_python_werkzeug.txt
|
Q:
Conditional Regular Expressions
I'm using Python and I want to use regular expressions to check if something "is part of an include list" but "is not part of an exclude list".
My include list is represented by a regex, for example:
And.*
Everything which starts with And.
Also the exclude list is represented by a regex, for example:
(?!Andrea)
Everything, but not the string Andrea. The exclude list is obviously a negation.
Using the two examples above, for example, I want to match everything which starts with And except for Andrea.
In the general case I have an includeRegEx and an excludeRegEx. I want to match everything which matches includeRegEx but does not match excludeRegEx. Attention: excludeRegEx is still in the negative form (as you can see in the example above), so it would be better to say: if something matches includeRegEx, I check whether it also matches excludeRegEx, and if it does, the match is satisfied. Is it possible to represent this in a single regular expression?
I think Conditional Regular Expressions could be the solution but I'm not really sure of that.
I'd like to see a working example in Python.
Thank you very much.
A:
Why not put both in one regex?
And(?!rea$).*
Since the lookahead only "looks ahead" without consuming any characters, this works just fine (well, this is the whole point of lookaround, actually).
So, in Python:
if re.match(r"And(?!rea$).*", subject):
# Successful match
# Note that re.match always anchor the match
# to the start of the string.
else:
# Match attempt failed
From the wording of your question, I'm not sure if you're starting with two already finished lists of "match/don't match" pairs. In that case, you could simply combine them automatically by concatenating the regexes. This works just as well but is uglier:
(?!Andrea$)And.*
In general, then:
(?!excludeRegex$)includeRegex
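A quick standalone check of the combined pattern (the sample names are illustrative):
import re

include_regex = r"And.*"
exclude_regex = r"Andrea"
combined = r"(?!%s$)%s" % (exclude_regex, include_regex)

for name in ["Anders", "Andrea", "Andreas", "Bob"]:
    print("%s -> %s" % (name, bool(re.match(combined, name))))
# Anders -> True, Andrea -> False, Andreas -> True, Bob -> False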
|
Conditional Regular Expressions
|
I'm using Python and I want to use regular expressions to check if something "is part of an include list" but "is not part of an exclude list".
My include list is represented by a regex, for example:
And.*
Everything which starts with And.
Also the exclude list is represented by a regex, for example:
(?!Andrea)
Everything, but not the string Andrea. The exclude list is obviously a negation.
Using the two examples above, for example, I want to match everything which starts with And except for Andrea.
In the general case I have an includeRegEx and an excludeRegEx. I want to match everything which matches includeRegEx but does not match excludeRegEx. Attention: excludeRegEx is still in the negative form (as you can see in the example above), so it would be better to say: if something matches includeRegEx, I check whether it also matches excludeRegEx, and if it does, the match is satisfied. Is it possible to represent this in a single regular expression?
I think Conditional Regular Expressions could be the solution but I'm not really sure of that.
I'd like to see a working example in Python.
Thank you very much.
|
[
"Why not put both in one regex?\nAnd(?!rea$).*\n\nSince the lookahead only \"looks ahead\" without consuming any characters, this works just fine (well, this is the whole point of lookaround, actually).\nSo, in Python:\nif re.match(r\"And(?!rea$).*\", subject):\n # Successful match \n # Note that re.match always anchor the match\n # to the start of the string.\nelse:\n # Match attempt failed\n\nFrom the wording of your question, I'm not sure if you're starting with two already finished lists of \"match/don't match\" pairs. In that case, you could simply combine them automatically by concatenating the regexes. This works just as well but is uglier:\n(?!Andrea$)And.*\n\nIn general, then:\n(?!excludeRegex$)includeRegex\n\n"
] |
[
2
] |
[] |
[] |
[
"conditional",
"python",
"regex"
] |
stackoverflow_0002738349_conditional_python_regex.txt
|
Q:
What's the advantage of using 'with .. as' statement in Python?
with open("hello.txt", "wb") as f:
f.write("Hello Python!\n")
seems to be the same as
f = open("hello.txt", "wb")
f.write("Hello Python!\n")
f.close()
What's the advantage of using open .. as instead of f = ?
Is it just syntactic sugar? Just saving one line of code?
A:
In order to be equivalent to the with statement version, the code you wrote should look instead like this:
f = open("hello.txt", "wb")
try:
f.write("Hello Python!\n")
finally:
f.close()
While this might seem like syntactic sugar, it ensures that you release resources. Generally the world is more complex than these contrived examples and if you forget a try.. except... or fail to handle an extreme case, you have resource leaks on your hands.
The with statement saves you from those leaks, making it easier to write clean code. For a complete explanation, look at PEP 343, it has plenty of examples.
A:
If f.write throws an exception, f.close() is called when you use with and not called in the second case. Also f has a smaller scope and the code is cleaner when using with.
A:
The former still closes f if an exception occurs during the f.write().
|
What's the advantage of using 'with .. as' statement in Python?
|
with open("hello.txt", "wb") as f:
f.write("Hello Python!\n")
seems to be the same as
f = open("hello.txt", "wb")
f.write("Hello Python!\n")
f.close()
What's the advantage of using open .. as instead of f = ?
Is it just syntactic sugar? Just saving one line of code?
|
[
"In order to be equivalent to the with statement version, the code you wrote should look instead like this:\nf = open(\"hello.txt\", \"wb\")\ntry:\n f.write(\"Hello Python!\\n\")\nfinally:\n f.close()\n\nWhile this might seem like syntactic sugar, it ensures that you release resources. Generally the world is more complex than these contrived examples and if you forget a try.. except... or fail to handle an extreme case, you have resource leaks on your hands. \nThe with statement saves you from those leaks, making it easier to write clean code. For a complete explanation, look at PEP 343, it has plenty of examples.\n",
"If f.write throws an exception, f.close() is called when you use with and not called in the second case. Also f has a smaller scope and the code is cleaner when using with.\n",
"The former still closes f if an exception occurs during the f.write().\n"
] |
[
32,
13,
3
] |
[] |
[] |
[
"python",
"with_statement"
] |
stackoverflow_0002738365_python_with_statement.txt
|
Q:
Error when running mmap related function in python 2.6
I tried to run the following code from http://docs.python.org/library/mmap.html
import mmap
# write a simple example file
with open("hello.txt", "wb") as f:
f.write("Hello Python!\n")
with open("hello.txt", "r+b") as f:
# memory-map the file, size 0 means whole file
map = mmap.mmap(f.fileno(), 0)
# read content via standard file methods
print map.readline() # prints "Hello Python!"
# read content via slice notation
print map[:5] # prints "Hello"
# update content using slice notation;
# note that new content must have same size
map[6:] = " world!\n"
# ... and read again using standard file methods
map.seek(0)
print map.readline() # prints "Hello world!"
# close the map
map.close()
But, I got an error.
TypeError: 'module' object is not callable
module body in mmap.py at line 9
map = mmap.mmap(f.fileno(), 0)
What's wrong with this?
I use python 2.6 on Snow Leopard/Mac.
A:
I think you're doing something weird calling your module mmap.py, and the import is getting confused and importing the same file instead... Try changing the name to something else (preferably not a standard library module name :p)
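A quick sanity check you can run (just a sketch) to confirm which file is actually being imported:
import mmap
# If this prints the path of your own script, rename it -- it is shadowing the stdlib module.
print(getattr(mmap, '__file__', '(built-in module)'))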
|
Error when running mmap related function in python 2.6
|
I tried to run the following code from http://docs.python.org/library/mmap.html
import mmap
# write a simple example file
with open("hello.txt", "wb") as f:
f.write("Hello Python!\n")
with open("hello.txt", "r+b") as f:
# memory-map the file, size 0 means whole file
map = mmap.mmap(f.fileno(), 0)
# read content via standard file methods
print map.readline() # prints "Hello Python!"
# read content via slice notation
print map[:5] # prints "Hello"
# update content using slice notation;
# note that new content must have same size
map[6:] = " world!\n"
# ... and read again using standard file methods
map.seek(0)
print map.readline() # prints "Hello world!"
# close the map
map.close()
But, I got an error.
TypeError: 'module' object is not callable
module body in mmap.py at line 9
map = mmap.mmap(f.fileno(), 0)
What's wrong with this?
I use python 2.6 on Snow Leopard/Mac.
|
[
"I think you're doing something weird calling your module mmap.py, and the import is getting confused and importing the same file instead... Try changing the name to something else (preferably not a standard library module name :p)\n"
] |
[
7
] |
[] |
[] |
[
"mmap",
"python"
] |
stackoverflow_0002738344_mmap_python.txt
|
Q:
running python from an android app
I am trying to run a python script through an application I've written. I found some pages which say that this piece of code is doing it, but I can't figure it out.
http://code.google.com/p/android-scripting/source/browse/android/AndroidScriptingEnvironment/src/com/google/ase/locale/LocaleReceiver.java
Can someone explain what is going on and how I can edit that to run an arbitrary script file in my project directory?
A:
That's not exactly supported yet via ASE. You can launch a script from ASE's script directory (/sdcard/ase/scripts) via an intent though. See http://code.google.com/p/android-scripting/source/browse/android/Common/src/com/google/ase/IntentBuilders.java for the code that ASE uses to launch scripts itself.
|
running python from an android app
|
I am trying to run a python script through an application I've written. I found some pages which say that this piece of code is doing it, but I can't figure it out.
http://code.google.com/p/android-scripting/source/browse/android/AndroidScriptingEnvironment/src/com/google/ase/locale/LocaleReceiver.java
Can someone explain what is going on and how I can edit that to run an arbitrary script file in my project directory?
|
[
"That's not exactly supported yet via ASE. You can launch a script from ASE's script directory (/sdcard/ase/scripts) via an intent though. See http://code.google.com/p/android-scripting/source/browse/android/Common/src/com/google/ase/IntentBuilders.java for the code that ASE uses to launch scripts itself.\n"
] |
[
1
] |
[] |
[] |
[
"android",
"android_scripting",
"java",
"python"
] |
stackoverflow_0002733881_android_android_scripting_java_python.txt
|
Q:
Exposing members or make them private in Python?
Is there a general convention about exposing members in Python classes? I know that this is a case of "it depends", but maybe there is a rule of thumb.
Private member:
class Node:
def __init__(self):
self.__children = []
def add_children(self, *args):
self.__children += args
node = Node()
node.add_children("one", "two")
Public member:
class Node2:
def __init__(self):
self.children = []
node2 = Node2()
node2.children += "one", "two"
If there is no good reason to make children private, would you stay with the method add_children?
A:
The prefix for "private" is a single underscore.
__ is used to mangle the name and avoid some problems e.g. when using multiple inheritance.
Personally I've never used it.
In any case, the members will still be publicly accessible; this is just a convention.
A:
I believe it is more Pythonic to leave members exposed unless you have specific reason not to, on the basis that Python code generally does not intend to restrict the user any more than is necessary. This is why, for example, there is no such thing as a truly private member.
A:
Think of the choice of making something public or private as documentation. It shows your intent for the scope of a member, and serves as a warning to other developers that it may change in the future.
A:
The answer is, of course, "it depends". If I had to guess, I would use an add_children method. The point of a class is usually to abstract away what you are doing from the representation of the state you are storing. my_node.add_children(a, b) is an operation I would read as "add children a and b to my node". my_node.children.extend([a, b]) makes the user think of children as a list and reach down inside the instance to mutate it. This is sometimes, but less often, what you want. It also makes it hard to avoid an API change if adding children eventually needs to do something additional.
__foo attributes are not private. Python name-mangles them in a systematic, easily reproducible way. Python does not support real private attributes. Using __foo does not add privacy, but it does make your class harder to test, inherit, and use.
When I have an attribute I don't want as part of my public API, I prefix it with a single underscore, e.g. _foo. This convention is widely used.
I never use += with lists; I use the extend method instead. I don't like += because most people use extend and because its operation is a little confusing. a += b is different from a = a + b in two ways: the former mutates the original list and the latter creates a new list and rebinds it to the original name, and in the latter b must be a list and in the former b may be an arbitrary iterable. I find extend cleaner.
I always inherit object rather than nothing, i.e. class Node(object):, so that I am using new-style classes. New-style classes work in a little more native a way and have a few features old-style classes don't.
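Pulling those recommendations together, a short illustrative sketch (not the only reasonable design) might look like:
class Node(object):                  # new-style class
    def __init__(self):
        self._children = []          # single underscore: not part of the public API

    def add_children(self, *args):
        self._children.extend(args)  # extend instead of +=

node = Node()
node.add_children("one", "two")
print(node._children)                # ['one', 'two'] -- still reachable; the underscore is only a convention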
A:
"It depends."
Seriously. If the Node needs to do something when children are added, you'd want the first to catch the change and do what needs to be done. If it's just a list that you don't need to worry about changes, go with the latter.
A:
As you said, it depends. Generally I prefer to have a method for interacting with a class, to keep a separation between the programming logic and its implementation. Following the example that you posted, I would have written:
class Node:
def __init__(self):
self.childs = []
def add_childs(self, *args):
self.childs += args
because then a subclass can access childs without any strange variable name.
Usually I use private members only with properties, to cache a computed value.
A:
The convention is to prefix with a single underscore; however, it is purely advisory and it is still possible to access the attribute just like any other. This is sometimes handy when you are writing unit tests.
The double underscore prefix invokes name mangling. It may appear to you that the attribute is private, but you can access it like this.
>>> node=Node()
>>> node._Node__children
[]
|
Exposing members or make them private in Python?
|
Is there a general convention about exposing members in Python classes? I know that this is a case of "it depends", but maybe there is a rule of thumb.
Private member:
class Node:
def __init__(self):
self.__children = []
def add_children(self, *args):
self.__children += args
node = Node()
node.add_children("one", "two")
Public member:
class Node2:
def __init__(self):
self.children = []
node2 = Node2()
node2.children += "one", "two"
If there is no good reason to make children private, would you stay with the method add_children?
|
[
"The prefix for \"private\" is a single underscore.\n__ is used to mangle the name and avoid some problems e.g. when using multiple inheritance.\nPersonnally I've never used it.\nIn any case, the members will still be publicly accessible; this is just a convention.\n",
"I believe it is more Pythonic to leave members exposed unless you have specific reason not to, on the basis that Python code generally does not intend to restrict the user any more than is necessary. This is why, for example, there is no such thing as a truly private member.\n",
"Think of the choice of making something public or private as documentation. It shows your intent for the scope of a member, and serves as a warning to other developers that it may change in the future.\n",
"\nThe answer is, of course, \"it depends\". If I was to guess, I would use an add_children method. The point of a class is usually to abstract away what you are doing from the representation of the state you are storing. my_node.add_children(a, b) is an operation I would read as \"add children a and b to my mode\". my_node.children.extend([a, b]) makes the user think of children as a list and reach down inside of the instance to mutate it. This is sometimes, but less often, what you want. It also makes it hard not to have an API change if you eventually need adding children to do something additional.\n__foo attributes are not private. Python name-mangles them in a systematic, easily-reproducable way. Python does not support real private attributes. Using __foo does not add privacy, but it does make your class harder to test, inherit, and use.\nWhen I have an attribute I don't want as part of my public API, I prefix it with a single underscore, e.g. _foo. This convention is widely used.\nI never use += with lists; I use the extend method instead. I don't like += because most people use extend and because its operation is a little confusing. a += b is different from a = a + b in two ways: the former mutates the original list and the latter creates a new list and rebinds it to the original name, and in the latter b must be a list and in the former b may be an arbitrary iterable. I find extend cleaner.\nI always inherit object rather than nothing, i.e. class Node(object):, so that I am using new-style classes. New-style classes work in a little more native a way and have a few features old-style classes don't.\n\n",
"\"It depends.\"\nSeriously. If the Node needs to do something when children are added, you'd want the first to catch the change and do what needs to be done. If it's just a list that you don't need to worry about changes, go with the latter.\n",
"As you said, it depends. Generally i prefer have a method to interact with a class to get a separation between programming logic and it's implementation. Following the example that you posted, i will have wrote:\nclass Node:\n\n def __init__(self):\n self.childs = []\n\n def add_childs(self, *args):\n self.childs += args\n\nbecause if a subclass would access to childs it could without any strange varname.\nUsually i use private members only with properties to make a cache of a computed value.\n",
"The convention is to prefix with a single underscore, however it is purely advisory and it is still possible to access to the attribute just like any other. Handy sometimes when you are writing unit tests\nThe double underscore prefix invokes name mangling. It may appear to you that the attribute is private, but you can access it like this.\n>>> node=Node()\n>>> node._Node__children\n[]\n\n"
] |
[
5,
3,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"oop",
"python",
"visibility"
] |
stackoverflow_0002738153_oop_python_visibility.txt
|
Q:
Extending a form field to add new validations
I've written an app that uses forms to collect information that is then sent in an email. Many of these forms have a FileField used to attach files to the email. I'd like to validate two things: the size of the file (to ensure the emails are accepted by our mail server), and the file extension, to discourage attaching file types not usable by our users.
(This is the python class I'm trying to extend)
class FileField(Field):
widget = FileInput
default_error_messages = {
'invalid': _(u"No file was submitted. Check the encoding type on the form."),
'missing': _(u"No file was submitted."),
'empty': _(u"The submitted file is empty."),
'max_length': _(u'Ensure this filename has at most %(max)d characters (it has %(length)d).'),
}
def __init__(self, *args, **kwargs):
self.max_length = kwargs.pop('max_length', None)
super(FileField, self).__init__(*args, **kwargs)
def clean(self, data, initial=None):
super(FileField, self).clean(initial or data)
if not self.required and data in EMPTY_VALUES:
return None
elif not data and initial:
return initial
# UploadedFile objects should have name and size attributes.
try:
file_name = data.name
file_size = data.size
except AttributeError:
raise ValidationError(self.error_messages['invalid'])
if self.max_length is not None and len(file_name) > self.max_length:
error_values = {'max': self.max_length, 'length': len(file_name)}
raise ValidationError(self.error_messages['max_length'] % error_values)
if not file_name:
raise ValidationError(self.error_messages['invalid'])
if not file_size:
raise ValidationError(self.error_messages['empty'])
return data
A:
Just overload the "clean" method:
def clean(self, data, initial=None):
try:
if data.size > somesize:
raise ValidationError('File is too big')
(junk, ext) = os.path.splitext(data.name)
if not ext in ('.jpg', '.gif', '.png'):
raise ValidationError('Invalid file type')
except AttributeError:
raise ValidationError(self.error_messages['invalid'])
return FileField.clean(self, data, initial)
A:
In my opinion, subclassing the actual field class is way too much effort. It should be easier to simply extend your form class. You could add a method which "cleans" the file field.
For example:
class MyForm(forms.Form):
    attachment = forms.FileField(...)

    def clean_attachment(self):
        data = self.cleaned_data['attachment']  # UploadedFile object
        exts = ['jpg', 'png']  # allowed extensions

        # 1. check file size
        if data.size > x:
            raise forms.ValidationError("file too big")

        # 2. check file extension
        file_extension = data.name.split('.')[1]  # simple method

        if file_extension not in exts:
            raise forms.ValidationError("Wrong file type")

        return data
This is only a basic example and there are some rough edges. But you can use this example and improve it until you have a version that works for you.
Recommended readings:
Django Doc - Cleaning a specific field
Django Doc - UploadedFile class
Django Doc - File class
A:
This is what I ended up doing:
In my app's setting file:
exts = ['doc', 'docx', 'pdf', 'jpg', 'png', 'xls', 'xlsx', '.xlsm', '.xlsb']
max_email_attach_size = 10485760 #10MB written in bytes
In a new file I called formfunctions:
from django import forms
from django.forms.util import ErrorList, ValidationError
from app.settings import exts, max_email_attach_size
class SizedFileField(forms.FileField):
def clean(self, data, initial=None):
if not data in (None, ''):
try:
if data.size > max_email_attach_size:
raise ValidationError("The file is too big")
file_extension = data.name.split('.')[1]
if file_extension not in exts:
raise ValidationError("Invalid File Type")
except AttributeError:
raise ValidationError(self.error_messages['invalid'])
return forms.FileField.clean(self, data, initial)
and in my forms file:
from formfunctions import SizedFileField
An example class from the forms file:
class ExampleClass(forms.Form):
Email_Body = forms.CharField(widget=forms.Textarea, required=False)
Todays_Date = forms.CharField()
Attachment = SizedFileField(required=False)
|
Extending a form field to add new validations
|
I've written an app that uses forms to collect information that is then sent in an email. Many of these forms have a filefield used to attach files to the email. I'd like to validate two things, the size of the file (to ensure the emails are accepted by our mail server. I'd also like to check the file extension, to discourage attaching file types not useable for our users.
(This is the python class I'm trying to extend)
class FileField(Field):
widget = FileInput
default_error_messages = {
'invalid': _(u"No file was submitted. Check the encoding type on the form."),
'missing': _(u"No file was submitted."),
'empty': _(u"The submitted file is empty."),
'max_length': _(u'Ensure this filename has at most %(max)d characters (it has %(length)d).'),
}
def __init__(self, *args, **kwargs):
self.max_length = kwargs.pop('max_length', None)
super(FileField, self).__init__(*args, **kwargs)
def clean(self, data, initial=None):
super(FileField, self).clean(initial or data)
if not self.required and data in EMPTY_VALUES:
return None
elif not data and initial:
return initial
# UploadedFile objects should have name and size attributes.
try:
file_name = data.name
file_size = data.size
except AttributeError:
raise ValidationError(self.error_messages['invalid'])
if self.max_length is not None and len(file_name) > self.max_length:
error_values = {'max': self.max_length, 'length': len(file_name)}
raise ValidationError(self.error_messages['max_length'] % error_values)
if not file_name:
raise ValidationError(self.error_messages['invalid'])
if not file_size:
raise ValidationError(self.error_messages['empty'])
return data
|
[
"Just overload the \"clean\" method:\ndef clean(self, data, initial=None):\n try:\n if data.size > somesize:\n raise ValidationError('File is too big')\n\n (junk, ext) = os.path.splitext(data.name)\n if not ext in ('.jpg', '.gif', '.png'):\n raise ValidationError('Invalid file type')\n\n except AttributeError:\n raise ValidationError(self.error_messages['invalid'])\n\n return FileField.clean(self, data, initial)\n\n",
"in my opinion subclassing the actual field class is way to much effort. It should be easier to simply extend your form class. You could add a method which \"cleans\" the file field.\nFor example:\nclass MyForm(forms.Form):\n attachment = forms.FileField(...)\n\n def clean_attachment(self):\n data = self.cleaned_data['attachment'] // UploadedFile object\n exts = ['jpg', 'png'] // allowed extensions\n\n // 1. check file size\n if data.size > x:\n raise forms.ValidationError(\"file to big\")\n\n // 2. check file extension\n file_extension = data.name.split('.')[1] // simple method\n\n if file_extension not in exts:\n raise forms.ValidationError(\"Wrong file type\")\n\n return data\n\nThis is only a basic example and there are some rough edges. But you can use this example and improve it until you have a version that works for you.\nRecommended readings:\nDjango Doc - Cleaning a specific field\nDjango Doc - UploadedFile class\nDjango Doc - File class\n",
"This is what I ended up doing:\nIn my app's setting file:\nexts = ['doc', 'docx', 'pdf', 'jpg', 'png', 'xls', 'xlsx', '.xlsm', '.xlsb']\nmax_email_attach_size = 10485760 #10MB written in bytes\n\nIn a new file I called formfunctions:\nfrom django import forms\nfrom django.forms.util import ErrorList, ValidationError\nfrom app.settings import exts, max_email_attach_size\n\n\nclass SizedFileField(forms.FileField):\n\n def clean(self, data, initial=None):\n\n if not data in (None, ''):\n\n try:\n if data.size > max_email_attach_size:\n raise ValidationError(\"The file is too big\")\n\n file_extension = data.name.split('.')[1]\n if file_extension not in exts:\n raise ValidationError(\"Invalid File Type\")\n\n except AttributeError:\n raise ValidationError(self.error_messages['invalid'])\n\n return forms.FileField.clean(self, data, initial)\n\nand in my forms file:\nfrom formfunctions import SizedFileField\n\nAn example class from the forms file:\nclass ExampleClass(forms.Form):\n Email_Body = forms.CharField(widget=forms.Textarea, required=False)\n Todays_Date = forms.CharField()\n Attachment = SizedFileField(required=False)\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"django",
"django_forms",
"oop",
"python"
] |
stackoverflow_0002733740_django_django_forms_oop_python.txt
|
Q:
itertools.islice compared to list slice
I've been trying to apply an algorithm to reduce a Python list into a smaller one based on a certain criterion. Because of the large size of the original list, on the order of 100k elements, I tried to use itertools to avoid multiple memory allocations, and I came up with this:
reducedVec = [ 'F' if sum( 1 for x in islice(vec, i, i+ratio) if x == 'F' )
> ratio / 3.0 else 'T'
for i in xrange(0, len(vec), ratio) ]
Execution takes a worryingly long time, on the order of a few minutes, when vec has around 100k elements. When I tried instead:
reducedVec = [ 'F' if sum( 1 for x in vec[i:i+ratio] if x == 'F' )
> ratio / 3.0 else 'T'
for i in xrange(0, len(vec), ratio) ]
in essence replacing islice with a plain slice, the execution is instantaneous.
Can you think of a plausible explanation for this? I would have thought that avoiding repeatedly allocating a new list with a substantial number of elements would actually save me a few computational cycles, instead of crippling the whole execution.
Cheers,
Themis
A:
islice works with arbitrary iterables. To do this, rather than jumping straight to the nth element, it has to iterate over the first n-1, throwing them away, then yield the ones you want.
Check out the pure Python implementation from the itertools documentation:
def islice(iterable, *args):
# islice('ABCDEFG', 2) --> A B
# islice('ABCDEFG', 2, 4) --> C D
# islice('ABCDEFG', 2, None) --> C D E F G
# islice('ABCDEFG', 0, None, 2) --> A C E G
s = slice(*args)
it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1))
nexti = next(it)
for i, element in enumerate(iterable):
if i == nexti:
yield element
nexti = next(it)
Speaking of the itertools documentation, if I was trying to do this operation, I'd probably use the grouper recipe. It won't actually save you any memory, but it could if you rewrote it to be lazier, which wouldn't be tough.
from __future__ import division
from itertools import izip_longest
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
reducedVec = []
for chunk in grouper(ratio, vec):
if sum(1 for x in chunk if x == 'F') > ratio / 3:
reducedVec.append('F')
else:
reducedVec.append('T')
I like using grouper to abstract away the consecutive slices and find this code a lot easier to read than the original
A:
My guess would be that using islice() involves a Python function call for each element of vec, while the extended slice notation is understood by the parser and translates directly to CPython calls.
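If you want to check this for yourself, a rough measurement sketch (the vector size and ratio are made-up values) could look like:
import timeit

setup = "from itertools import islice; vec = ['T', 'F'] * 5000; ratio = 100"
slice_stmt = "[sum(1 for x in vec[i:i+ratio] if x == 'F') for i in xrange(0, len(vec), ratio)]"
islice_stmt = "[sum(1 for x in islice(vec, i, i+ratio) if x == 'F') for i in xrange(0, len(vec), ratio)]"

print(timeit.timeit(slice_stmt, setup, number=10))    # plain slicing
print(timeit.timeit(islice_stmt, setup, number=10))   # islice: much slower, grows with the offset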
|
itertools.islice compared to list slice
|
I've been trying to apply an algorithm to reduce a python list into a smaller one based on a certain criteria. Due to the large volume of the original list, in the order of 100k elements, I tried to itertools for avoiding multiple memory allocations so I came up with this:
reducedVec = [ 'F' if sum( 1 for x in islice(vec, i, i+ratio) if x == 'F' )
> ratio / 3.0 else 'T'
for i in xrange(0, len(vec), ratio) ]
Execution time for this takes a worryingly long time in the order of a few minutes, when vec has around 100k elements. When I tried instead:
reducedVec = [ 'F' if sum( 1 for x in vec[i:i+ratio] if x == 'F' )
> ratio / 3.0 else 'T'
for i in xrange(0, len(vec), ratio) ]
in essence replace islice with a slice the execution is instantaneous.
Can you think of a plausible explanation for this? I would have thought that avoiding to repeatedly allocate a new list with a substantial number of elements, would actually save me a few computational cycles instead of crippling the whole execution.
Cheers,
Themis
|
[
"islice works with arbitrary iterables. To do this, rather than jumping straight to the nth element, it has to iterate over the first n-1, throwing them away, then yield the ones you want.\nCheck out the pure Python implementation from the itertools documentation:\ndef islice(iterable, *args):\n # islice('ABCDEFG', 2) --> A B\n # islice('ABCDEFG', 2, 4) --> C D\n # islice('ABCDEFG', 2, None) --> C D E F G\n # islice('ABCDEFG', 0, None, 2) --> A C E G\n s = slice(*args)\n it = iter(xrange(s.start or 0, s.stop or sys.maxint, s.step or 1))\n nexti = next(it)\n for i, element in enumerate(iterable):\n if i == nexti:\n yield element\n nexti = next(it)\n\n\nSpeaking of the itertools documentation, if I was trying to do this operation, I'd probably use the grouper recipe. It won't actually save you any memory, but it could if you rewrote it to be lazier, which wouldn't be tough.\nfrom __future__ import division\n\nfrom itertools import izip_longest\ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\nreducedVec = []\nfor chunk in grouper(ratio, vec):\n if sum(1 for x in chunk if x == 'F') > ratio / 3:\n reducedVec.append('F')\n else:\n reducedVec.append('T')\n\nI like using grouper to abstract away the consecutive slices and find this code a lot easier to read than the original\n",
"My guess would be that using islice() involves a Python function call for each element of vec, while the extended slice notation is understood by the parser and translates directly to CPython calls.\n"
] |
[
13,
1
] |
[] |
[] |
[
"iteration",
"performance",
"python"
] |
stackoverflow_0002738096_iteration_performance_python.txt
|
Q:
In Windows shell scripting (cmd.exe) how do you assign the stdout of a program to an environment variable?
In UNIX you can assign the output of a script to an environment variable using the technique explained here - but what is the Windows equivalent?
I have a python utility which is intended to correct an environment variable. This script simply writes a sequence of chars to stdout. For the purposes of this question, the fact that my utility is written in python is irrelevant, it's just a program that I can call from the command-prompt which outputs a single line of text.
I'd like to do something like this (that works):
set WORKSPACE=[ the output of my_util.py ]
After running this command the value of the WORKSPACE environment variable should contain the exact same text that my utility would normally print out.
Can it be done? How?
UPDATE1: Somebody at work suggested:
python util.py | set /P WORKSPACE=
In theory, that would assign the stdout of the Python script to the env-var WORKSPACE, and yet it does not work. What is going wrong here?
A:
Use:
for /f "delims=" %A in ('<insert command here>') do @set <variable name>=%A
For example:
for /f "delims=" %A in ('time /t') do @set my_env_var=%A
...will run the command "time /t" and set the env variable "my_env_var" to the result.
Remember to use %%A instead of %A if you're running this inside a .BAT file.
|
In Windows shell scripting (cmd.exe) how do you assign the stdout of a program to an environment variable?
|
In UNIX you can assign the output of a script to an environment variable using the technique explained here - but what is the Windows equivalent?
I have a python utility which is intended to correct an environment variable. This script simply writes a sequence of chars to stdout. For the purposes of this question, the fact that my utility is written in python is irrelevant, it's just a program that I can call from the command-prompt which outputs a single line of text.
I'd like to do something like this (that works):
set WORKSPACE=[ the output of my_util.py ]
After running this command the value of the WORKSPACE environment variable should contain the exact same text that my utility would normally print out.
Can it be done? How?
UPDATE1: Somebody at work suggested:
python util.py | set /P WORKSPACE=
In theory, that would assign the stdout of the python-script top the env-var WORKSPACE, and yet it does not work, what is going wrong here?
|
[
"Use:\nfor /f \"delims=\" %A in ('<insert command here>') do @set <variable name>=%A\n\nFor example: \nfor /f \"delims=\" %A in ('time /t') do @set my_env_var=%A\n\n...will run the command \"time /t\" and set the env variable \"my_env_var\" to the result.\nRemember to use %%A instead of %A if you're running this inside a .BAT file.\n"
] |
[
12
] |
[] |
[] |
[
"cmd",
"python",
"windows"
] |
stackoverflow_0002738673_cmd_python_windows.txt
|
Q:
Lisp's "some" in Python?
I have a list of strings and a list of filters (which are also strings, to be interpreted as regular expressions). I want a list of all the elements in my string list that are accepted by at least one of the filters. Ideally, I'd write
[s for s in strings if some (lambda f: re.match (f, s), filters)]
where some is defined as
def some (pred, list):
for x in list:
res = pred (x)
if res:
return res
return False
Is something like that already available in Python, or is there a more idiomatic way to do this?
A:
There is a function called any which does roughly want you want. I think you are looking for this:
[s for s in strings if any(re.match(f, s) for f in filters)]
A:
[s for s in strings if any(re.match (f, s) for f in filters)]
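One refinement worth sketching (assuming the filter list doesn't change between calls): compile the patterns once, so each string is only matched against ready-made pattern objects.
import re

filters = [r"And.*", r"Bob"]                      # hypothetical filter list
strings = ["Andrea", "Anders", "Bob", "Carol"]

compiled = [re.compile(f) for f in filters]
print([s for s in strings if any(p.match(s) for p in compiled)])   # ['Andrea', 'Anders', 'Bob']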
A:
Python lambdas are only a fraction as powerful as their Lisp counterparts.
In Python, lambdas cannot include blocks, so the for loop is not possible inside a lambda.
I would use a closure so that you don't have to pass the list every time:
def makesome(list):
    def some(s):
        for x in list:          # assumes a list of compiled regex objects
            if x.match(s):
                return True
        return False
    return some

some = makesome(list)

[s for s in strings if some(s)]
|
Lisp's "some" in Python?
|
I have a list of strings and a list of filters (which are also strings, to be interpreted as regular expressions). I want a list of all the elements in my string list that are accepted by at least one of the filters. Ideally, I'd write
[s for s in strings if some (lambda f: re.match (f, s), filters)]
where some is defined as
def some (pred, list):
for x in list:
res = pred (x)
if res:
return res
return False
Is something like that already available in Python, or is there a more idiomatic way to do this?
|
[
"There is a function called any which does roughly want you want. I think you are looking for this:\n[s for s in strings if any(re.match(f, s) for f in filters)]\n\n",
"[s for s in strings if any(re.match (f, s) for f in filters)]\n\n",
"Python lambda's are only a fraction as powerful as their LISP counterparts.\nIn python lambdas cannot include blocks, so the for loop is not possible for a lambda\nI would use a closure so that you dont have to send the list every time\ndef makesome(list):\n def some(s)\n for x in list:\n if x.match(s): \n return True\n return False\n return some\n\nsome = makesome(list)\n\n[s for s in strings if some(s)]\n\n"
] |
[
22,
7,
1
] |
[] |
[] |
[
"lisp",
"python"
] |
stackoverflow_0002738777_lisp_python.txt
|
Q:
Python - do big doc strings waste memory?
I understand that in Python a string is simply an expression and a string by itself would be garbage collected immediately upon return of control to a code's caller, but...
Large class/method doc strings in
your code: do they waste memory
by building the string objects up?
Module level doc strings: are they
stored infinitely by the interpreter?
Does this even matter? My only concern came from the idea that if I'm using a large framework like Django, or multiple large open source libraries, they tend to be very well documented with potentially multiple megabytes of text. In these cases are the doc strings loaded into memory for code that's used along the way, and then kept there, or is it collected immediately like normal strings?
A:
"I understand that in Python a string is simply an expression and a string by itself would be garbage collected immediately upon return of control to a code's caller" indicates a misunderstanding, I think. A docstring is evaluated once (not on every function call) and stays alive at least as long as the function does.
"Does this even matter?" when it comes to optimization is not answered by thinking about it abstractly but by measuring. "Multiple megabytes" of text isn't probably isn't a lot in a memory-intensive application. The solution for saving memory likely lives elsewhere and you can determine whether that is the case by measurement.
Python's -OO command line switch removes docstrings.
A:
Python docstrings by default are kept around indefinitely, since they're accessible via the __doc__ attribute of a function or a module. For example, with the following in test.py:
"""This is a test module."""
def f():
"""This is a test function."""
pass
Then:
$ python
Python 2.5.1 (r251:54863, Oct 30 2007, 13:54:11)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> test.__doc__
'This is a test module.'
>>> test.f.__doc__
'This is a test function.'
>>>
The -OO option to the interpreter apparently causes it to remove docstrings from the generated .pyo files, but it doesn't have the effect I would expect:
$ python -OO
Python 2.5.1 (r251:54863, Oct 30 2007, 13:54:11)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> test.__file__
'/tmp/test.py'
>>>
$ grep "This is a test" /tmp/test.pyo
Binary file /tmp/test.pyo matches
$ python -OO
Python 2.5.1 (r251:54863, Oct 30 2007, 13:54:11)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> test.__file__
'/tmp/test.pyo'
>>> test.__doc__
'This is a test module.'
>>>
And in fact, the test.pyo file generated with -OO is identical to the test.pyc file generated with no command-line arguments. Can anyone explain this behavior?
|
Python - do big doc strings waste memory?
|
I understand that in Python a string is simply an expression and a string by itself would be garbage collected immediately upon return of control to a code's caller, but...
Large class/method doc strings in
your code: do they waste memory
by building the string objects up?
Module level doc strings: are they
stored infinitely by the interpreter?
Does this even matter? My only concern came from the idea that if I'm using a large framework like Django, or multiple large open source libraries, they tend to be very well documented with potentially multiple megabytes of text. In these cases are the doc strings loaded into memory for code that's used along the way, and then kept there, or is it collected immediately like normal strings?
|
[
"\n\"I understand that in Python a string is simply an expression and a string by itself would be garbage collected immediately upon return of control to a code's caller\" indicates a misunderstanding, I think. A docstring is evaluated once (not on every function call) and stays alive at least as long as the function does.\n\"Does this even matter?\" when it comes to optimization is not answered by thinking about it abstractly but by measuring. \"Multiple megabytes\" of text isn't probably isn't a lot in a memory-intensive application. The solution for saving memory likely lives elsewhere and you can determine whether that is the case by measurement.\nPython's -OO command line switch removes docstrings.\n\n",
"Python docstrings by default are kept around indefinitely, since they're accessible via the __doc__ attribute of a function or a module. For example, with the following in test.py:\n\"\"\"This is a test module.\"\"\"\n\ndef f():\n \"\"\"This is a test function.\"\"\"\n pass\n\nThen:\n$ python\nPython 2.5.1 (r251:54863, Oct 30 2007, 13:54:11) \n[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import test\n>>> test.__doc__\n'This is a test module.'\n>>> test.f.__doc__\n'This is a test function.'\n>>> \n\nThe -OO option to the interpreter apparently causes it to remove docstrings from the generated .pyo files, but it doesn't have the effect I would expect:\n$ python -OO\nPython 2.5.1 (r251:54863, Oct 30 2007, 13:54:11) \n[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import test\n>>> test.__file__\n'/tmp/test.py'\n>>> \n$ grep \"This is a test\" /tmp/test.pyo\nBinary file /tmp/test.pyo matches\n$ python -OO\nPython 2.5.1 (r251:54863, Oct 30 2007, 13:54:11) \n[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import test\n>>> test.__file__\n'/tmp/test.pyo'\n>>> test.__doc__\n'This is a test module.'\n>>> \n\nAnd in fact, the test.pyo file generated with -OO is identical to the test.pyc file generated with no command-line arguments. Can anyone explain this behavior?\n"
] |
[
10,
2
] |
[] |
[] |
[
"docstring",
"memory_management",
"python"
] |
stackoverflow_0002738904_docstring_memory_management_python.txt
|
Q:
Python debugging in Eclipse+PyDev
I'm trying the Eclipse+PyDev pair for some of my work (Eclipse v3.5.0 + PyDev v1.5.6). I couldn't find a way to expose all of my variables to the PyDev console (through the PyDev console -> Console for current active editor option). I use a simple piece of code to describe the issue. When I step through the code I can't access my "x" variable from the console. It is shown on the Variables tab, but that's not really what I want.
Any help is appreciated.
See my screenshot for better description:
EDIT:
Assume adding a simple func like:
def myfunc(x):
return x**x
When I debug with the function added in the code I can access myfunc from the console easily. (Type myfunc and it will be available after this automatic execution:
>>> from part2.test import myfunc
>>> myfunc
Then when I do myfunc(5) it acts just like in the Python interpreter. It would be so useful to access variables in a similar fashion while debugging my code. I have big arrays and I do various tests and operations during the debug process, like:
get my x and do x.sum(), later do x[::10], or transpose it, operate with other arrays, observe results, experiment, etc.
Hope there will be a better solution.
A:
Update:
In the latest PyDev versions, it's possible to right-click a frame in the stack and select PyDev > Debug console to have the interactive console with more functions associated to a context during a debug session.
Unfortunately, the actual interactive console, which would be the preferred way of playing with code (with code-completion, etc -- http://pydev.org/manual_adv_interactive_console.html) has no connection to a debug session right now (this is planned but still not implemented).
Still, with the 'simpler' console available, you are still able to interactively inspect and play with the variables available in a breakpoint scope: http://pydev.org/manual_adv_debug_console.html (which is the same as you'd have with pdb -- it's just a matter of typing code in the available console after a breakpoint is hit).
Cheers,
Fabio
A:
For this sort of exploratory debugging I like to use pdb, the batteries-included debugger. I haven't used it inside PyDev so I don't know how it would all fit together. My guess is it will do what you expect it to. An example of its usage:
import pdb
def myfunc(x):
pdb.set_trace()
return x**x
This will break right before executing the return statement, and it allows you to use full Pythonic statements to figure out what's going on. I use it like an interactive print statement: setting the place where I want to dive in, examining values and figuring results, and stepping through to watch it happen. Perhaps this is a lazy way of debugging, but sometimes you need more information before you can make less-lazy decisions :-)
The page I usually reference is at Python Conquers The Universe which also links a few other sources of information.
|
Python debugging in Eclipse+PyDev
|
I try Eclipse+PyDev pair for some of my work. (Eclipse v3.5.0 + PyDev v1.5.6) I couldn't find a way to expose all of my variables to the PyDev console (Through PyDev console -> Console for current active editor option) I use a simple code to describe the issue. When I step-by-step go through the code I can't access my "x" variable from the console. It is viewed on Variables tab, but that's not really what I want.
Any help is appreciate.
See my screenshot for better description:
EDIT:
Assume adding a simple func like:
def myfunc(x):
return x**x
When I debug with the function added in the code I can access myfunc from the console easily. (Type myfunc and it will be available after this automatic execution:
>>> from part2.test import myfunc
>>> myfunc
Then when I do myfunc(5) it acts just like in the Python interpreter. It would be so useful to access variables in the similar fashion for debugging my code. I have big arrays and I do various tests and operations during debug process. Like:
Get my x and do x.sum(), later do x[::10], or transpose operate with other arrays observe results, experiment etc...
Hope there will be a better solution.
|
[
"Update:\nIn the latest PyDev versions, it's possible to right-click a frame in the stack and select PyDev > Debug console to have the interactive console with more functions associated to a context during a debug session.\n\nUnfortunately, the actual interactive console, which would be the preferred way of playing with code (with code-completion, etc -- http://pydev.org/manual_adv_interactive_console.html) has no connection to a debug session right now (this is planned but still not implemented). \nStill, with the 'simpler' console available, you are still able to interactively inspect and play with the variables available in a breakpoint scope: http://pydev.org/manual_adv_debug_console.html (which is the same as you'd have with pdb -- it's just a matter of typing code in the available console after a breakpoint is hit).\nCheers,\nFabio\n",
"For this sort of exploratory debugging I like to use pdb, the batteries-included debugger. I haven't used it inside PyDev so I don't know how it would all fit together. My guess is it will do what you expect it to. An example of its usage:\nimport pdb\n\ndef myfunc(x):\n pdb.set_trace()\n return x**x\n\nThis will break right before executing the return statement, and it allows you to use full Pythonic statements to figure out what's going on. I use it like an interactive print statement: setting the place where I want to dive in, examining values and figuring results, and stepping through to watch it happen. Perhaps this is a lazy way of debugging, but sometimes you need more information before you can make less-lazy decisions :-)\nThe page I usually reference is at Python Conquers The Universe which also links a few other sources of information.\n"
] |
[
8,
2
] |
[] |
[] |
[
"eclipse",
"pydev",
"python"
] |
stackoverflow_0002704932_eclipse_pydev_python.txt
|
Q:
Python - question regarding the concurrent use of `multiprocess`
I want to use Python's multiprocessing to do concurrent processing without using locks (locks to me are the opposite of multiprocessing) because I want to build up multiple reports from different resources at the exact same time during a web request (normally takes about 3 seconds but with multiprocessing I can do it in .5 seconds).
My problem is that, if I expose such a feature to the web and get 10 users pulling the same report at the same time, I suddenly have 60 interpreters open at the same time (which would crash the system). Is this just the common sense result of using multiprocessing, or is there a trick to get around this potential nightmare?
Thanks
A:
You are barking up the wrong tree if you are trying to use multiprocess to add concurrency to a network app. You are barking up a completely wrong tree if you're creating processes for each request. multiprocess is not what you want (at least as a concurrency model).
There's a good chance you want an asynchronous networking framework like Twisted.
A:
If you're really worried about having too many instances you could think about protecting the call with a Semaphore object. If I understand what you're doing then you can use the threaded semaphore object:
from threading import Semaphore
sem = Semaphore(10)
with sem:
make_multiprocessing_call()
I'm assuming that make_multiprocessing_call() will cleanup after itself.
This way only 10 "extra" instances of python will ever be opened, if another request comes along it will just have to wait until the previous have completed. Unfortunately this won't be in "Queue" order ... or any order in particular.
Hope that helps
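Another option worth a sketch (assuming the per-report work can be expressed as a plain function): a multiprocessing.Pool with a fixed worker count caps the number of extra interpreters no matter how many requests arrive.
from multiprocessing import Pool

def build_report(source):
    # placeholder for the real per-source report building
    return "report for %s" % source

if __name__ == "__main__":
    pool = Pool(processes=4)     # at most 4 extra interpreters, ever
    results = pool.map(build_report, ["sales", "inventory", "traffic"])
    pool.close()
    pool.join()
    print(results)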
A:
Locks are only ever necessary if you have multiple agents writing to a resource. If they are just reading, locks are not needed (and, as you said, defeat the purpose of multiprocessing).
Are you sure that would crash the system? On a web server using CGI, each request spawns a new process, so it's not unusual to see thousands of simultaneous processes (granted in python one should use wsgi and avoid this), which do not crash the system.
I suggest you test your theory -- it shouldn't be difficult to manufacture 10 simultaneous accesses -- and see if your server really does crash.
|
Python - question regarding the concurrent use of `multiprocess`
|
I want to use Python's multiprocessing to do concurrent processing without using locks (locks to me are the opposite of multiprocessing) because I want to build up multiple reports from different resources at the exact same time during a web request (normally takes about 3 seconds but with multiprocessing I can do it in .5 seconds).
My problem is that, if I expose such a feature to the web and get 10 users pulling the same report at the same time, I suddenly have 60 interpreters open at the same time (which would crash the system). Is this just the common sense result of using multiprocessing, or is there a trick to get around this potential nightmare?
Thanks
|
[
"You are barking up the wrong tree if you are trying to use multiprocess to add concurrency to a network app. You are barking up a completely wrong tree if you're creating processes for each request. multiprocess is not what you want (at least as a concurrency model).\nThere's a good chance you want an asynchronous networking framework like Twisted. \n",
"If you're really worried about having too many instances you could think about protecting the call with a Semaphore object. If I understand what you're doing then you can use the threaded semaphore object:\nfrom threading import Semaphore\nsem = Semaphore(10)\nwith sem:\n make_multiprocessing_call()\n\nI'm assuming that make_multiprocessing_call() will cleanup after itself.\nThis way only 10 \"extra\" instances of python will ever be opened, if another request comes along it will just have to wait until the previous have completed. Unfortunately this won't be in \"Queue\" order ... or any order in particular. \nHope that helps\n",
"locks are only ever nessecary if you have multiple agents writing to a source. If they are just accessing, locks are not needed (and as you said defeat the purpose of multiprocessing).\nAre you sure that would crash the system? On a web server using CGI, each request spawns a new process, so it's not unusual to see thousands of simultaneous processes (granted in python one should use wsgi and avoid this), which do not crash the system.\nI suggest you test your theory -- it shouldn't be difficult to manufacture 10 simultaneous accesses -- and see if your server really does crash.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"gil",
"multiprocessing",
"multithreading",
"python"
] |
stackoverflow_0002738959_gil_multiprocessing_multithreading_python.txt
|
Q:
have you got a py-poppler-qt example?
I'm developing an application in PyQt4 that eventually has to open and show PDF files. For this task there is a python library: python-poppler (in various spelling flavours).
The problem is that it is terribly under documented and the only simple working example I found so far uses Python+Gtk+Cairo, while the example with Python+Qt I found uses an older version of the library, and many major changes have occurred ever since, hence it doesn't work anymore.
For a week I've been trying to use the code from the PyGtk example to adapt the code of the PyQt one, but with no success so far.
Has anybody got a simple example of a Python-Qt program that opens and shows a PDF file, which might be useful to the community to see how to work with that library?
Thanks a lot.
Archive with broken pyqt example
Archive with working PyGtk example
A:
There is an example buried deep within an experimental (unused) branch of an app, here is a link to the specific file containing the code. Don't know if it'll help? All the relevant poppler code is self contained within the PdfViewer class at the bottom of that file.
http://bazaar.launchpad.net/~j-corwin/openlp/pdf/annotate/head:/openlp/plugins/presentations/lib/pdfcontroller.py
|
have you got a py-poppler-qt example?
|
I'm developing an application in PyQt4 that eventually has to open and show PDF files. For this task there is a python library: python-poppler (in various spelling flavours).
The problem is that it is terribly under documented and the only simple working example I found so far uses Python+Gtk+Cairo, while the example with Python+Qt I found uses an older version of the library, and many major changes have occurred ever since, hence it doesn't work anymore.
It's a week I'm trying to use the code in the PyGtk example to hack the code of the PyQt one, but no success so far.
Has anybody got a simple example of a Python-Qt program that opens and shows a PDF file, which might be useful to the community to see how to work with that library?
Thanks a lot.
Archive with broken pyqt example
Archive with working PyGtk example
|
[
"There is an example buried deep within an experimental (unused) branch of an app, here is a link to the specific file containing the code. Don't know if it'll help? All the relevant poppler code is self contained within the PdfViewer class at the bottom of that file.\nhttp://bazaar.launchpad.net/~j-corwin/openlp/pdf/annotate/head:/openlp/plugins/presentations/lib/pdfcontroller.py\n"
] |
[
5
] |
[] |
[] |
[
"poppler",
"pyqt4",
"python",
"qt"
] |
stackoverflow_0002507498_poppler_pyqt4_python_qt.txt
|
Q:
2D list has weird behavior when trying to modify a single value
Possible Duplicate:
Unexpected feature in a Python list of lists
So I am relatively new to Python and I am having trouble working with 2D Lists.
Here's my code:
data = [[None]*5]*5
data[0][0] = 'Cell A1'
print data
and here is the output (formatted for readability):
[['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None]]
Why does every row get assigned the value?
A:
This makes a list with five references to the same list:
data = [[None]*5]*5
Use something like this instead which creates five separate lists:
>>> data = [[None]*5 for _ in range(5)]
Now it does what you expect:
>>> data[0][0] = 'Cell A1'
>>> print data
[['Cell A1', None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None],
[None, None, None, None, None]]
A:
As the python library reference for sequence types, which includes lists, says
Note also that the copies are shallow; nested structures are not copied. This often haunts new Python programmers; consider:
>>> lists = [[]] * 3
>>> lists
[[], [], []]
>>> lists[0].append(3)
>>> lists
[[3], [3], [3]]
What has happened is that [[]] is a one-element list containing an empty list, so all three elements of [[]] * 3 are (pointers to) this single empty list. Modifying any of the elements of lists modifies this single list.
You can create a list of different lists this way:
>>> lists = [[] for i in range(3)]
>>> lists[0].append(3)
>>> lists[1].append(5)
>>> lists[2].append(7)
>>> lists
[[3], [5], [7]]
A:
In Python, every variable is a reference to an object. You first created a list of 5 Nones, and then built a list containing that same list 5 times over.
|
2D list has weird behavior when trying to modify a single value
|
Possible Duplicate:
Unexpected feature in a Python list of lists
So I am relatively new to Python and I am having trouble working with 2D Lists.
Here's my code:
data = [[None]*5]*5
data[0][0] = 'Cell A1'
print data
and here is the output (formatted for readability):
[['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None],
['Cell A1', None, None, None, None]]
Why does every row get assigned the value?
|
[
"This makes a list with five references to the same list:\ndata = [[None]*5]*5\n\nUse something like this instead which creates five separate lists:\n>>> data = [[None]*5 for _ in range(5)]\n\nNow it does what you expect:\n>>> data[0][0] = 'Cell A1'\n>>> print data\n[['Cell A1', None, None, None, None],\n [None, None, None, None, None],\n [None, None, None, None, None],\n [None, None, None, None, None],\n [None, None, None, None, None]]\n\n",
"As the python library reference for sequence types, which includes lists, says\n\nNote also that the copies are shallow; nested structures are not copied. This often haunts new Python programmers; consider:\n\n>>> lists = [[]] * 3\n>>> lists\n [[], [], []]\n>>> lists[0].append(3)\n>>> lists\n [[3], [3], [3]]\n\n\nWhat has happened is that [[]] is a one-element list containing an empty list, so all three elements of [[]] * 3 are (pointers to) this single empty list. Modifying any of the elements of lists modifies this single list. \n\nYou can create a list of different lists this way:\n>>> lists = [[] for i in range(3)] \n>>> lists[0].append(3)\n>>> lists[1].append(5)\n>>> lists[2].append(7)\n>>> lists\n [[3], [5], [7]]\n\n",
"In python every variable is an object, and so a reference. You first created an array of 5 Nones, and then you build an array with 5 times the same object.\n"
] |
[
116,
18,
2
] |
[] |
[] |
[
"2d",
"list",
"python",
"python_2.7"
] |
stackoverflow_0002739552_2d_list_python_python_2.7.txt
|
Q:
Extract list of attributes from list of objects in python
I have a uniform list of objects in Python:
class myClass(object):
def __init__(self, attr):
self.attr = attr
self.other = None
objs = [myClass (i) for i in range(10)]
Now I want to extract a list with some attribute of that class (let's say attr), in order to pass it to some function (for plotting that data, for example).
What is the pythonic way of doing it,
attr=[o.attr for o in objs]
?
Maybe derive list and add a method to it, so I can use some idiom like
objs.getattribute("attr")
?
A:
attrs = [o.attr for o in objs] was the right code for making a list like the one you describe. Don't try to subclass list for this. Is there something you did not like about that snippet?
A:
You can also write:
attr=(o.attr for o in objs)
This way you get a generator that conserves memory. For more benefits look at Generator Expressions.
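Another standard-library option (just a sketch, reusing the objs list from the question): operator.attrgetter, which is convenient when the attribute name is only known as a string.
from operator import attrgetter

get_attr = attrgetter("attr")
attrs = [get_attr(o) for o in objs]
# or: attrs = map(attrgetter("attr"), objs)   # returns a list in Python 2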
|
Extract list of attributes from list of objects in python
|
I have a uniform list of objects in Python:
class myClass(object):
def __init__(self, attr):
self.attr = attr
self.other = None
objs = [myClass (i) for i in range(10)]
Now I want to extract a list with some attribute of that class (let's say attr), in order to pass it to some function (for plotting that data, for example)
What is the pythonic way of doing it,
attr=[o.attr for o in objs]
?
Maybe derive list and add a method to it, so I can use some idiom like
objs.getattribute("attr")
?
|
[
"attrs = [o.attr for o in objs] was the right code for making a list like the one you describe. Don't try to subclass list for this. Is there something you did not like about that snippet?\n",
"You can also write:\nattr=(o.attr for o in objsm)\n\nThis way you get a generator that conserves memory. For more benefits look at Generator Expressions.\n"
] |
[
115,
104
] |
[] |
[] |
[
"list",
"loops",
"python"
] |
stackoverflow_0002739800_list_loops_python.txt
|
Q:
Python: Created nested dictionary from list of paths
I have a list of tuples that looks similar to this (simplified here; there are over 14,000 of these tuples with more complicated paths than Obj.part)
[ (Obj1.part1, {<SPEC>}), (Obj1.partN, {<SPEC>}), (ObjK.partN, {<SPEC>}) ]
Where Obj goes from 1 - 1000, part from 0 - 2000. These "keys" all have a dictionary of specs associated with them which act as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN.
For example: Obj4.part500 might have this spec, {'size':32, 'offset':128, 'type':'int'} which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128.
So, now I want to take my list of strings and create a nested dictionary which in the simplified case will look like this
data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} },
'ObjK' : {'part1':{spec}, 'partN':{spec} }
}
To do this I am currently doing two things, 1. I am using a dotdict class to be able to use dot notation for dictionary get / set. That class looks like this:
class dotdict(dict):
def __getattr__(self, attr):
return self.get(attr, None)
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
The method for creating the nested "dotdict"s looks like this:
def addPath(self, spec, parts, base):
if len(parts) > 1:
item = base.setdefault(parts[0], dotdict())
self.addPath(spec, parts[1:], item)
else:
item = base.setdefault(parts[0], spec)
return base
Then I just do something like:
for path, spec in paths:
self.lookup = dotdict()
self.addPath(spec, path.split("."), self.lookup)
So, in the end
self.lookup.Obj4.part500 points to the spec.
Is there a better (more pythonic) way to do this?
A:
Unless you prefer to access the specs with dot notation, try putting them into the dictionary directly. In the below code, the name d tracks the innermost dictionary visited on the path:
specs = {}
for path, spec in paths:
parts = path.split('.')
d = specs
for p in parts[:-1]:
d = d.setdefault(p, {})
d[parts[-1]] = spec
If you have only two parts per path (ObjN and partN say), you could just do this:
specs = {}
for path, spec in paths:
[obj, part] = path.split('.')
specs.setdefault(obj, {})[part] = spec
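To make the behaviour concrete, a quick run of that first loop on a couple of made-up paths with placeholder specs:
paths = [('Obj1.part1', {'size': 32, 'offset': 0,  'type': 'int'}),
         ('Obj1.part2', {'size': 16, 'offset': 32, 'type': 'int'}),
         ('Obj2.part1', {'size': 32, 'offset': 48, 'type': 'int'})]

specs = {}
for path, spec in paths:
    parts = path.split('.')
    d = specs
    for p in parts[:-1]:
        d = d.setdefault(p, {})
    d[parts[-1]] = spec

print specs['Obj1']['part2']    # the spec dict stored under Obj1 -> part2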
|
Python: Created nested dictionary from list of paths
|
I have a list of tuples that looks similar to this (simplified here; there are over 14,000 of these tuples with more complicated paths than Obj.part)
[ (Obj1.part1, {<SPEC>}), (Obj1.partN, {<SPEC>}), (ObjK.partN, {<SPEC>}) ]
Where Obj goes from 1 - 1000, part from 0 - 2000. These "keys" all have a dictionary of specs associated with them which act as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN.
For example: Obj4.part500 might have this spec, {'size':32, 'offset':128, 'type':'int'} which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128.
So, now I want to take my list of strings and create a nested dictionary which in the simplified case will look like this
data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} },
'ObjK' : {'part1':{spec}, 'partN':{spec} }
}
To do this I am currently doing two things, 1. I am using a dotdict class to be able to use dot notation for dictionary get / set. That class looks like this:
class dotdict(dict):
def __getattr__(self, attr):
return self.get(attr, None)
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
The method for creating the nested "dotdict"s looks like this:
def addPath(self, spec, parts, base):
if len(parts) > 1:
item = base.setdefault(parts[0], dotdict())
self.addPath(spec, parts[1:], item)
else:
item = base.setdefault(parts[0], spec)
return base
Then I just do something like:
for path, spec in paths:
self.lookup = dotdict()
self.addPath(spec, path.split("."), self.lookup)
So, in the end
self.lookup.Obj4.part500 points to the spec.
Is there a better (more pythonic) way to do this?
|
[
"Unless you prefer to access the specs with dot notation, try putting them into the dictionary directly. In the below code, the name d tracks the innermost dictionary visited on the path:\nspecs = {}\nfor path, spec in paths:\n parts = path.split('.')\n d = specs\n for p in parts[:-1]:\n d = d.setdefault(p, {})\n d[parts[-1]] = spec\n\nIf you have only two parts per path (ObjN and partN say), you could just do this:\nspecs = {}\nfor path, spec in paths:\n [obj, part] = path.split('.')\n specs.setdefault(obj, {})[part] = spec\n\n"
] |
[
8
] |
[] |
[] |
[
"nested",
"path",
"python"
] |
stackoverflow_0002738141_nested_path_python.txt
|
Q:
Python raises a KeyError (for an out of dictionary key) even though the key IS in the dictionary
I'm getting a KeyError for an out of dictionary key, even though I know the key IS in fact in the dictionary. Any ideas as to what might be causing this?
print G.keys()
returns the following:
['24', '25', '20', '21', '22', '23', '1', '3', '2', '5', '4', '7', '6', '9', '8', '11', '10', '13', '12', '15', '14', '17', '16', '19', '18']
but when I try to access a value in the dictionary on the next line of code...
for w in G[v]: #note that in this example, v = 17
I get the following error message:
KeyError: 17
Any help, tips, or advice are all appreciated. Thanks.
A:
That's simple, 17 != '17'
A:
The keys are strings, you are trying to access them as ints.
A:
try with v = '17'. You must convert your int to string
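Either convert v when you look it up or convert the keys once when G is built; a small sketch, assuming G is the dictionary from the question:
v = 17

# Option 1: look the value up with the string form of v
for w in G[str(v)]:
    print w

# Option 2: convert the keys to ints once, then integer lookups work
G = dict((int(k), val) for k, val in G.items())
for w in G[v]:
    print w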
|
Python raises a KeyError (for an out of dictionary key) even though the key IS in the dictionary
|
I'm getting a KeyError for an out of dictionary key, even though I know the key IS in fact in the dictionary. Any ideas as to what might be causing this?
print G.keys()
returns the following:
['24', '25', '20', '21', '22', '23', '1', '3', '2', '5', '4', '7', '6', '9', '8', '11', '10', '13', '12', '15', '14', '17', '16', '19', '18']
but when I try to access a value in the dictionary on the next line of code...
for w in G[v]: #note that in this example, v = 17
I get the following error message:
KeyError: 17
Any help, tips, or advice are all appreciated. Thanks.
|
[
"That's simple, 17 != '17'\n",
"The keys are strings, you are trying to access them as ints.\n",
"try with v = '17'. You must convert your int to string\n"
] |
[
28,
5,
3
] |
[] |
[] |
[
"dictionary",
"exception",
"key",
"python"
] |
stackoverflow_0002740036_dictionary_exception_key_python.txt
|
Q:
Why are underscores better than hyphens for file names?
From Building Skills in Python:
A file name like exercise_1.py is better than the name exercise-1.py. We can run both programs equally well from the command line, but the name with the hyphen limits our ability to write larger and more sophisticated programs.
Why is this?
A:
The issue here is that importing files with the hyphen-minus (the default keyboard key -; U+002D) in their name doesn't work since it represents minus signs in Python. So, if you had your own module you wanted to import, it shouldn't have a hyphen in its name:
>>> import test-1
File "<stdin>", line 1
import test-1
^
SyntaxError: invalid syntax
>>> import test_1
>>>
Larger programs tend to be logically separated into many different modules, hence the quote
the name with the hyphen limits our ability to write larger and more sophisticated programs.
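If you are stuck with an existing file whose name contains a hyphen, it can usually still be loaded by passing the name as a string, since only the import statement itself is limited to valid identifiers; a rough sketch, assuming a file test-1.py sits on sys.path:
# the import *statement* rejects the hyphen, but __import__ accepts any string
test_1 = __import__('test-1')
print test_1.__name__    # 'test-1'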
A:
From that very document (p.368, Section 30.2 'Module Definition'):
Note that a module name must be a valid Python name... A module's name is limited to letters, digits and "_"s.
|
Why are underscores better than hyphens for file names?
|
From Building Skills in Python:
A file name like exercise_1.py is better than the name exercise-1.py. We can run both programs equally well from the command line, but the name with the hyphen limits our ability to write larger and more sophisticated programs.
Why is this?
|
[
"The issue here is that importing files with the hyphen-minus (the default keyboard key -; U+002D) in their name doesn't work since it represents minus signs in Python. So, if you had your own module you wanted to import, it shouldn't have a hyphen in its name:\n>>> import test-1\n File \"<stdin>\", line 1\n import test-1\n ^\nSyntaxError: invalid syntax\n>>> import test_1\n>>>\n\nLarger programs tend to be logically separated into many different modules, hence the quote\n\nthe name with the hyphen limits our ability to write larger and more sophisticated programs.\n\n",
"From that very document (p.368, Section 30.2 'Module Definition'):\n\nNote that a module name must be a valid Python name... A module's name is limited to letters, digits and \"_\"s.\n\n"
] |
[
82,
23
] |
[] |
[] |
[
"naming",
"python"
] |
stackoverflow_0002740026_naming_python.txt
|
Q:
Are there some cases where Python threads can safely manipulate shared state?
Some discussion in another question has encouraged me to to better understand cases where locking is required in multithreaded Python programs.
Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race here is very obvious, and fortunately is eminently testable.
However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race:
from threading import Thread, Lock
import operator
def contains_all_ints(l, n):
l.sort()
for i in xrange(0, n):
if l[i] != i:
return False
return True
def test(ntests):
results = []
threads = []
def lockless_append(i):
results.append(i)
for i in xrange(0, ntests):
threads.append(Thread(target=lockless_append, args=(i,)))
threads[i].start()
for i in xrange(0, ntests):
threads[i].join()
if len(results) != ntests or not contains_all_ints(results, ntests):
return False
else:
return True
for i in range(0,100):
if test(100000):
print "OK", i
else:
print "appending to a list without locks *is* unsafe"
exit()
I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental, modification by threads?
Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?
A:
Appending to a list is thread-safe, yes. You can only append to a list while holding the GIL, and the list takes care not to release the GIL during the append operation (which is, after all, a fairly simple operation.) The order in which different thread's append operations go through is of course up for grabs, but they will all be strictly serialized operations because the GIL is never released during an append.
The same is not necessarily true for other operations. Lots of operations in Python can cause arbitrary Python code to be executed, which in turn can cause the GIL to be released. For example, i += 1 is three distinct operations, "get i', "add 1 to it" and "store it in i". "add 1 to it" would translate (in this case) into it.__iadd__(1), which can go off and do whatever it likes.
Python objects themselves guard their own internal state -- dicts won't get corrupted by two different threads trying to set items in them. But if the data in the dict is supposed to be internally consistent, neither the dict nor the GIL does anything to protect that, except (in usual thread fashion) by making it less likely but still possible things end up different than you thought.
A:
In CPython, thread switching is done when sys.getcheckinterval() bytecodes have been executed. So a context switch can never occur during the execution of a single bytecode, and operations that are encoded as a single bytecode are inherently atomic and threadsafe, unless that bytecode executes other Python code or calls C code that releases the GIL. Most operations on the built-in collection types (dict, list etc) fall into the 'inherently threadsafe' category.
However this is an implementation detail that is specific to the C implementation of Python, and should not be relied upon. Other versions of Python (Jython, IronPython, PyPy etc) may not behave in the same way. There is also no guarantee that future versions of CPython will keep this behaviour.
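To contrast with the append test in the question, an increment is an easy way to provoke lost updates, because += on a shared value is not a single bytecode; a rough sketch (whether and how badly it loses counts on a given run is not guaranteed):
from threading import Thread

counter = [0]    # one-element list so the workers can rebind the value

def work(n):
    for _ in xrange(n):
        counter[0] += 1    # read-modify-write: several bytecodes, not atomic

threads = [Thread(target=work, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print counter[0], 'expected', 4 * 100000    # usually comes up short on CPython 2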
|
Are there some cases where Python threads can safely manipulate shared state?
|
Some discussion in another question has encouraged me to to better understand cases where locking is required in multithreaded Python programs.
Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race here is very obvious, and fortunately is eminently testable.
However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race:
from threading import Thread, Lock
import operator
def contains_all_ints(l, n):
l.sort()
for i in xrange(0, n):
if l[i] != i:
return False
return True
def test(ntests):
results = []
threads = []
def lockless_append(i):
results.append(i)
for i in xrange(0, ntests):
threads.append(Thread(target=lockless_append, args=(i,)))
threads[i].start()
for i in xrange(0, ntests):
threads[i].join()
if len(results) != ntests or not contains_all_ints(results, ntests):
return False
else:
return True
for i in range(0,100):
if test(100000):
print "OK", i
else:
print "appending to a list without locks *is* unsafe"
exit()
I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental, modification by threads?
Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?
|
[
"Appending to a list is thread-safe, yes. You can only append to a list while holding the GIL, and the list takes care not to release the GIL during the append operation (which is, after all, a fairly simple operation.) The order in which different thread's append operations go through is of course up for grabs, but they will all be strictly serialized operations because the GIL is never released during an append.\nThe same is not necessarily true for other operations. Lots of operations in Python can cause arbitrary Python code to be executed, which in turn can cause the GIL to be released. For example, i += 1 is three distinct operations, \"get i', \"add 1 to it\" and \"store it in i\". \"add 1 to it\" would translate (in this case) into it.__iadd__(1), which can go off and do whatever it likes.\nPython objects themselves guard their own internal state -- dicts won't get corrupted by two different threads trying to set items in them. But if the data in the dict is supposed to be internally consistent, neither the dict nor the GIL does anything to protect that, except (in usual thread fashion) by making it less likely but still possible things end up different than you thought.\n",
"In CPython, thread switching is done when sys.getcheckinteval() bycodes have been executed. So a context switch can never occur during the execution of a single bytecode, and operations that are encoded as a single bytecode are inherently atomic and threadsafe, unless that bytecode executes other Python code or calls C code that releases the GIL. Most operations on the built-in collection types (dict, list etc) fall into the 'inherently threadsafe' category.\nHowever this is an implementation detail that is specific to the C implementation of Python, and should not be relied upon. Other versions of Python (Jython, IronPython, PyPy etc) may not behave in the same way. There is also no guarantee that future versions of CPython will keep this behaviour.\n"
] |
[
7,
1
] |
[] |
[] |
[
"gil",
"multithreading",
"python"
] |
stackoverflow_0002740435_gil_multithreading_python.txt
|
Q:
Email attachment problem
I want to send an email with an attachment using the following code (Python 3.1)
(greatly simplified to show the example)
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
msg = MIMEMultipart()
msg['From'] = from_addr
msg['To'] = to_addr
msg['Subject'] = subject
msg.attach(MIMEText(body))
fp = open(att_file)
msg1 = MIMEText(fp.read())
attachment = msg1.add_header('Content-Disposition', 'attachment', filename=att_file)
msg.attach(attachment)
# set string to be sent as 3rd parameter to smptlib.SMTP.sendmail()
send_string = msg.as_string()
Inspecting the attachment object msg1 shows an email.mime.text.MIMEText instance, but when the msg1.add_header(...) line runs the result is None, hence the program falls over in msg.as_string() because no part of the attachment can have a None value. (Traceback shows "'NoneType' object has no attribute 'get_content_maintype'" in line 118 of _dispatch in generator.py, many levels down from msg.as_string())
Has anyone any idea what the cause of the problem might be? Any help would be appreciated.
Alan Harris-Reid
A:
Use:
msg.attach(msg1)
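add_header() modifies msg1 in place and returns None, so attach the part itself; a minimal rework of the snippet from the question:
fp = open(att_file)
msg1 = MIMEText(fp.read())
fp.close()
msg1.add_header('Content-Disposition', 'attachment', filename=att_file)
msg.attach(msg1)    # attach the MIMEText part, not add_header's return value

send_string = msg.as_string()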
|
Email attachment problem
|
I want to send an email with an attachment using the following code (Python 3.1)
(greatly simplified to show the example)
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
msg = MIMEMultipart()
msg['From'] = from_addr
msg['To'] = to_addr
msg['Subject'] = subject
msg.attach(MIMEText(body))
fp = open(att_file)
msg1 = MIMEText(fp.read())
attachment = msg1.add_header('Content-Disposition', 'attachment', filename=att_file)
msg.attach(attachment)
# set string to be sent as 3rd parameter to smptlib.SMTP.sendmail()
send_string = msg.as_string()
Inspecting the attachment object msg1 shows an email.mime.text.MIMEText instance, but when the msg1.add_header(...) line runs the result is None, hence the program falls over in msg.as_string() because no part of the attachment can have a None value. (Traceback shows "'NoneType' object has no attribute 'get_content_maintype'" in line 118 of _dispatch in generator.py, many levels down from msg.as_string())
Has anyone any idea what the cause of the problem might be? Any help would be appreciated.
Alan Harris-Reid
|
[
"Use:\nmsg.attach(msg1)\n\n"
] |
[
3
] |
[] |
[] |
[
"attachment",
"email",
"python"
] |
stackoverflow_0002740605_attachment_email_python.txt
|
Q:
Issue with python string join
I have some code in which I apply a join to a list.
The list before the join looks like this:
["'DealerwebAgcy_NYK_GW_UAT'", "'DealerwebAgcy'", "'UAT'", '@ECNPhysicalMarketCo
nfigId', "'GATEWAY'", "'DEALERWEB_MD_AGCY'", "'NU1MKVETC'", "'mkvetcu'", "'C:\te
mp'", '0', "'NYK'", '0', '1', "'isqlw.exe'", 'GetDate()', '12345', "'NYK'", '350
', '7']
After the join this is the resulting string
'DealerwebAgcy_NYK_GW_UAT','DealerwebAgcy','UAT',@ECNPhysicalMarketConfigId,'GAT
EWAY','DEALERWEB_MD_AGCY','NU1MKVETC','mkvetcu','C: emp',0,'NYK',0,1,'isqlw.
exe',GetDate(),12345,'NYK',350,7
Note the element 'C:\temp' which ends up as ,'C: emp',
I tried something similar at the Python command prompt, but I wasn't able to repeat this.
The relevant code responsible for this magic is as follows.
values_dict["ECNMarketInstance"] = [strVal(self.EcnInstance_),strVal (self.DisplayName_) ,strVal(self.environment_), '@ECNPhysicalMarketConfigId',strVal(self.EcnGatewaTypeId_),strVal(self.ConnectionComponent_),strVal(self.UserName_),strVal(self.Password_),strVal(self.WorkingDir_),"0",strVal(self.region_),"0","1", strVal(self.LUVersion_), "GetDate()" , self.LUUserId_,strVal(self.LUOwningSite_),self.QuoteColumnId_ , self.Capabilities_]
delim = ","
joined = delim.join(values)
print values
print joined
A:
\t is a tab character.
You have two options: 1) make the string be "c:\\temp", or 2) use r"c:\temp"
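A quick interactive check of the three forms (the exact width of the tab depends on your terminal):
>>> print "C:\temp"     # \t is interpreted as a tab
C:      emp
>>> print "C:\\temp"    # escaped backslash
C:\temp
>>> print r"C:\temp"    # raw string: backslashes left alone
C:\temp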
|
Issue with python string join
|
I have some code in which I apply a join to a list.
The list before the join looks like this:
["'DealerwebAgcy_NYK_GW_UAT'", "'DealerwebAgcy'", "'UAT'", '@ECNPhysicalMarketCo
nfigId', "'GATEWAY'", "'DEALERWEB_MD_AGCY'", "'NU1MKVETC'", "'mkvetcu'", "'C:\te
mp'", '0', "'NYK'", '0', '1', "'isqlw.exe'", 'GetDate()', '12345', "'NYK'", '350
', '7']
After the join this is the resulting string
'DealerwebAgcy_NYK_GW_UAT','DealerwebAgcy','UAT',@ECNPhysicalMarketConfigId,'GAT
EWAY','DEALERWEB_MD_AGCY','NU1MKVETC','mkvetcu','C: emp',0,'NYK',0,1,'isqlw.
exe',GetDate(),12345,'NYK',350,7
Note the element 'C:\temp' which ends up as ,'C: emp',
I tried something similar at the Python command prompt, but I wasn't able to repeat this.
The relevant code responsible for this magic is as follows.
values_dict["ECNMarketInstance"] = [strVal(self.EcnInstance_),strVal (self.DisplayName_) ,strVal(self.environment_), '@ECNPhysicalMarketConfigId',strVal(self.EcnGatewaTypeId_),strVal(self.ConnectionComponent_),strVal(self.UserName_),strVal(self.Password_),strVal(self.WorkingDir_),"0",strVal(self.region_),"0","1", strVal(self.LUVersion_), "GetDate()" , self.LUUserId_,strVal(self.LUOwningSite_),self.QuoteColumnId_ , self.Capabilities_]
delim = ","
joined = delim.join(values)
print values
print joined
|
[
"\\t is a tab character.\nYou have two options: 1) make the string be \"c:\\\\temp\", or 2) use r\"c:\\temp\"\n"
] |
[
5
] |
[] |
[] |
[
"list",
"python",
"string"
] |
stackoverflow_0002740772_list_python_string.txt
|
Q:
Could somebody give me a high-level technical overview of WSGI details behind the scenes vs other web interface approaches with Python?
Firstly:
I understand what WSGI is and how to use it
I understand what "other" methods (Apache mod-python, fcgi, et al) are, and how to use them
I understand their practical differences
What I don't understand is how each of the various "other" methods works compared to something like UWSGI, behind the scenes. Does your server (Nginx, etc.) route the request to your WSGI application, and does UWSGI create a new Python interpreter for each request routed to it? How different is WSGI from the other, more traditional / monkey-patched methods (aside from the different, easier Python interface that WSGI offers)? What light bulb moment am I missing?
A:
Except for CGI, a new Python interpreter is nearly never created per request. Read:
http://blog.dscpl.com.au/2009/03/python-interpreter-is-not-created-for.html
This was written in respect of mod_python but also applies to mod_wsgi and any WSGI hosting mechanism that uses persistent processes.
Also read:
http://www.python.org/dev/peps/pep-0333/#environ-variables
There you will find described the 'wsgi.run_once' variable described. This is used to indicate to a WSGI application when a hosting mechanism is used which would see a process only handling one request and then being exited, ie., CGI. Thus, write a test hello world application which dumps out the WSGI environment and see what it is set to for what you are using.
Also pay attention to the 'wsgi.multiprocess' and 'wsgi.multithread' variables. They tell you if a multi process server is being used such that there are multiple instances of your application handling requests at the same time. The 'wsgi.multithread' variable tells you if the process itself is handling multiple requests in concurrent threads in same process.
For more on multiprocess and multithread models in relation to Apache embedded systems, such as mod_python and mod_wsgi, and mod_wsgi daemon mode, see:
http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
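A throwaway "hello world" along those lines, which simply echoes the hosting-related environ flags back to the browser (deploy it under each mechanism you want to compare):
def application(environ, start_response):
    keys = ['wsgi.run_once', 'wsgi.multiprocess', 'wsgi.multithread']
    body = '\n'.join('%s = %r' % (k, environ.get(k)) for k in keys)
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]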
|
Could somebody give me a high-level technical overview of WSGI details behind the scenes vs other web interface approaches with Python?
|
Firstly:
I understand what WSGI is and how to use it
I understand what "other" methods (Apache mod-python, fcgi, et al) are, and how to use them
I understand their practical differences
What I don't understand is how each of the various "other" methods works compared to something like UWSGI, behind the scenes. Does your server (Nginx, etc.) route the request to your WSGI application, and does UWSGI create a new Python interpreter for each request routed to it? How different is WSGI from the other, more traditional / monkey-patched methods (aside from the different, easier Python interface that WSGI offers)? What light bulb moment am I missing?
|
[
"Except for CGI, a new Python interpreter is nearly never created per request. Read:\nhttp://blog.dscpl.com.au/2009/03/python-interpreter-is-not-created-for.html\nThis was written in respect of mod_python but also applies to mod_wsgi and any WSGI hosting mechanism that uses persistent processes.\nAlso read:\nhttp://www.python.org/dev/peps/pep-0333/#environ-variables\nThere you will find described the 'wsgi.run_once' variable described. This is used to indicate to a WSGI application when a hosting mechanism is used which would see a process only handling one request and then being exited, ie., CGI. Thus, write a test hello world application which dumps out the WSGI environment and see what it is set to for what you are using.\nAlso pay attention to the 'wsgi.multiprocess' and 'wsgi.multithread' variables. They tell you if a multi process server is being used such that there are multiple instances of your application handling requests at the same time. The 'wsgi.multithread' variable tells you if the process itself is handling multiple requests in concurrent threads in same process.\nFor more on multiprocess and multithread models in relation to Apache embedded systems, such as mod_python and mod_wsgi, and mod_wsgi daemon mode, see:\nhttp://code.google.com/p/modwsgi/wiki/ProcessesAndThreading\n"
] |
[
8
] |
[] |
[] |
[
"mod_wsgi",
"python",
"wsgi"
] |
stackoverflow_0002739892_mod_wsgi_python_wsgi.txt
|
Q:
Scraping *.aspx content using Python
I'm having difficulties scraping dynamically generated table in ASPX. Trying to scrape the gas prices from a site like this GasPrices. I can extract all the information in the gas price table (address, time submitted etc.), except for the actual gas price.
Is there a way I could scrape the gas prices? i.e. somehow get a text representation of it. I'm not very familiar with ASP/ASPX - but what's being generated now is not showing up in the final HTML. I'm using Python to do the scraping, but that's irrelevant unless there's a specific library...
Thanks in advance.
A:
The origin of the page (aspx) is not an issue here.
It looks like they're actively trying to thwart scraping attempts. The numbers are not fonts; rather, they are several div elements next to one another with background images that are numbers. They really don't want to be scraped.
(of course, if you were really determined you could probably map the class name of the div to... They're not very well 'encrypted')
Take note of the copyright notice at the bottom of the linked page
|
Scraping *.aspx content using Python
|
I'm having difficulties scraping dynamically generated table in ASPX. Trying to scrape the gas prices from a site like this GasPrices. I can extract all the information in the gas price table (address, time submitted etc.), except for the actual gas price.
Is there a way I could scrape the gas prices? i.e. somehow get a text representation of it. I'm not very familiar with ASP/ASPX - but what's being generated now is not showing up in the final HTML. I'm using Python to do the scraping, but that's irrelevant unless there's a specific library...
Thanks in advance.
|
[
"The origin of the page (aspx) is not an issue here.\nIt looks like they're actively trying to thwart scraping attempts. The numbers are not fonts, rather they several div elements next to one another with background images that are numbers. They really don't want to be scraped. \n(of course, if you were really determined you could probably map the class name of the div to... They're not very well 'encrypted')\nTake note of the copyright notice at the bottom of the linked page\n"
] |
[
4
] |
[] |
[] |
[
"asp.net",
"python",
"web_scraping"
] |
stackoverflow_0002741425_asp.net_python_web_scraping.txt
|
Q:
How to use a Python REPL in a script
I am learning Python to use QT with Python, not only C++, and am curious if I can embed a Python interpreter in my application as a REPL?
I want to allow users to script either loading a file and that file act as a plugin, or by evaluating code entered in a text box or something similar to embedding the interpreter in C or C++ and script the application using Python.
Can this be done if the application is itself written in Python and made into a standalone binary using py2exe or something similar, something like Anders did with the C# REPL or Miguel with Mono.
A:
Well, this is all certainly possible, but it is not beginner stuff.
Python offers a read-eval loop as a module, but you'd still have to create a console in QT where you can type in input and display results.
The same goes for a plugin system. It's very easy to import a script as a plugin and the plugin just has to import your application to access its state. But that's hardly a real plugin system, you'd want to create a proper API so the plugins don't break whenever something in the app changes.
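For the read-eval loop itself, the standard library's code module is the usual starting point; a minimal sketch that shares part of your app's state with the console (hooking its input/output up to a QT text widget is the extra work mentioned above, and the app_namespace contents are made up):
import code

app_namespace = {'app_version': '1.0'}    # objects you want scripts to see
console = code.InteractiveConsole(locals=app_namespace)
console.interact('My app scripting console (Ctrl-D or Ctrl-Z to exit)')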
|
How to use a Python REPL in a script
|
I am learning Python to use QT with Python, not only C++, and am curious if I can embed a Python interpreter in my application as a REPL?
I want to allow users to script either loading a file and that file act as a plugin, or by evaluating code entered in a text box or something similar to embedding the interpreter in C or C++ and script the application using Python.
Can this be done if the application is itself written in Python and made into a standalone binary using py2exe or something similar, something like Anders did with the C# REPL or Miguel with Mono.
|
[
"Well, this is all certainly possible, but it is not beginner stuff.\nPython offers a read-eval loop as a module, but you'd still have so create a console in QT where you can type in input and display results.\nThe same goes for a plugin system. It's very easy to import a script as a plugin and the plugin just has to import your application to access its state. But that's hardly a real plugin system, you'd want to create a proper API so the plugins don't break whenever something in the app changes.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"scripting"
] |
stackoverflow_0002741368_python_scripting.txt
|
Q:
a more pythonic way to express conditionally bounded loop?
I've got a loop that wants to execute to exhaustion or until some user specified limit is reached. I've got a construct that looks bad yet I can't seem to find a more elegant way to express it; is there one?
def ello_bruce(limit=None):
for i in xrange(10**5):
if predicate(i):
if not limit is None:
limit -= 1
if limit <= 0:
break
def predicate(i):
# lengthy computation
return True
Holy nesting! There has to be a better way. For purposes of a working example, xrange is used where I normally have an iterator of finite but unknown length (and predicate sometimes returns False).
A:
Maybe something like this would be a little better:
from itertools import ifilter, islice
def ello_bruce(limit=None):
for i in islice(ifilter(predicate, xrange(10**5)), limit):
# do whatever you want with i here
A:
I'd take a good look at the itertools library. Using that, I think you'd have something like...
# From the itertools examples
def tabulate(function, start=0):
return imap(function, count(start))
def take(n, iterable):
return list(islice(iterable, n))
# Then something like:
def ello_bruce(limit=None):
take(filter(tabulate(predicate)), limit)
A:
I'd start with
if limit is None: return
since nothing can ever happen to limit when it starts as None (if there are no desirable side effects in the iteration and in the computation of predicate -- if there are, then, in this case you can just do for i in xrange(10**5): predicate(i)).
If limit is not None, then you just want to perform max(limit, 1) computations of predicate that are true, so an itertools.islice of an itertools.ifilter would do:
import itertools as it
def ello_bruce(limit=None):
if limit is None:
for i in xrange(10**5): predicate(i)
else:
for _ in it.islice(
               it.ifilter(predicate, xrange(10**5)),
               max(limit, 1)): pass
A:
You should remove the nested ifs:
if predicate(i) and not limit is None:
...
A:
What you want to do seems perfectly suited for a while loop:
def ello_bruce(limit=None):
max = 10**5
# if you consider 0 to be an invalid value for limit you can also do
# if limit:
if limit is None:
limit = max
    i = 0
    while max and limit:
        if predicate(i):
            limit -= 1
        max -= 1
        i += 1
The loop stops if either max or limit reaches zero.
A:
Um. As far as I understand it, predicate just computes in segments, and you totally ignore its return value, right?
This is another take:
import itertools
def ello_bruce(limit=None):
if limit is None:
limiter= itertools.repeat(None)
else:
limiter= xrange(limit)
# since predicate is a Python function
# itertools looping won't be faster, so use plain for.
# remember to replace the xrange(100000) with your own iterator
for dummy in itertools.izip(xrange(100000), limiter):
pass
Also, remove the unneeded return True from the end of predicate.
|
a more pythonic way to express conditionally bounded loop?
|
I've got a loop that wants to execute to exhaustion or until some user specified limit is reached. I've got a construct that looks bad yet I can't seem to find a more elegant way to express it; is there one?
def ello_bruce(limit=None):
for i in xrange(10**5):
if predicate(i):
if not limit is None:
limit -= 1
if limit <= 0:
break
def predicate(i):
# lengthy computation
return True
Holy nesting! There has to be a better way. For purposes of a working example, xrange is used where I normally have an iterator of finite but unknown length (and predicate sometimes returns False).
|
[
"Maybe something like this would be a little better:\nfrom itertools import ifilter, islice\n\ndef ello_bruce(limit=None):\n for i in islice(ifilter(predicate, xrange(10**5)), limit):\n # do whatever you want with i here\n\n",
"I'd take a good look at the itertools library. Using that, I think you'd have something like...\n# From the itertools examples\ndef tabulate(function, start=0):\n return imap(function, count(start))\ndef take(n, iterable):\n return list(islice(iterable, n))\n\n# Then something like:\ndef ello_bruce(limit=None):\n take(filter(tabulate(predicate)), limit)\n\n",
"I'd start with\nif limit is None: return\n\nsince nothing can ever happen to limit when it starts as None (if there are no desirable side effects in the iteration and in the computation of predicate -- if there are, then, in this case you can just do for i in xrange(10**5): predicate(i)).\nIf limit is not None, then you just want to perform max(limit, 1) computations of predicate that are true, so an itertools.islice of an itertools.ifilter would do:\nimport itertools as it\n\ndef ello_bruce(limit=None):\n if limit is None:\n for i in xrange(10**5): predicate(i)\n else:\n for _ in it.islice(\n it.ifilter(predicate, xrange(10**5),\n max(limit, 1)): pass\n\n",
"You should remove the nested ifs:\nif predicate(i) and not limit is None:\n ...\n\n",
"What you want to do seems perfectly suited for a while loop:\ndef ello_bruce(limit=None):\n max = 10**5\n # if you consider 0 to be an invalid value for limit you can also do\n # if limit:\n if limit is None: \n limit = max\n\n while max and limit:\n if predicate(i):\n limit -= 1\n max -=1\n\nThe loop stops if either max or limit reaches zero.\n",
"Um. As far as I understand it, predicate just computes in segments, and you totally ignore its return value, right?\nThis is another take:\nimport itertools\n\ndef ello_bruce(limit=None):\n if limit is None:\n limiter= itertools.repeat(None)\n else:\n limiter= xrange(limit)\n\n # since predicate is a Python function\n # itertools looping won't be faster, so use plain for.\n # remember to replace the xrange(100000) with your own iterator\n for dummy in itertools.izip(xrange(100000), limiter):\n pass\n\nAlso, remove the unneeded return True from the end of predicate.\n"
] |
[
11,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"coding_style",
"python"
] |
stackoverflow_0002711289_coding_style_python.txt
|
Q:
How do I do an "OR" for my python regex?
re.compile("abc")
I would like to do "abc" OR "xyz".
A:
Use |:
re.compile("abc|xyz")
It's worth perusing regular-expression.info for detailed information as well as Regular Expression HOWTO and re — Regular expression operations from the Python documentation.
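A quick interactive check of the alternation:
>>> import re
>>> pattern = re.compile("abc|xyz")
>>> bool(pattern.search("xyz123"))
True
>>> bool(pattern.search("def"))
False
>>> re.findall("abc|xyz", "first abc then xyz")
['abc', 'xyz']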
A:
I'll take this opportunity to point you to an excellent reference for many of life's problems: Wikipedia.
Regular Expressions on Wikipedia
You might also find answers here.
A:
I'm more a perl regex guy, but the current popular answer seems to match either abcyz or abxyz. I normally would have the regex look like "(abc)|(xyz)". You want to use the parenthesis to group the 2 strings your looking for.
A:
Remember the doc property inside of python. In particular, you can access it for regex by typing
import re
re.__doc__
at the python shell or by using a python doc browser such as the one built into Spyder.
This particular question is addressed there.
|
How do I do an "OR" for my python regex?
|
re.compile("abc")
I would like to do "abc" OR "xyz".
|
[
"Use |:\nre.compile(\"abc|xyz\")\n\nIt's worth perusing regular-expression.info for detailed information as well as Regular Expression HOWTO and re — Regular expression operations from the Python documentation.\n",
"I'll take this opportunity to point you to an excellent reference for many of life's problems: Wikipedia.\nRegular Expressions on Wikipedia\nYou might also find answers here.\n",
"I'm more a perl regex guy, but the current popular answer seems to match either abcyz or abxyz. I normally would have the regex look like \"(abc)|(xyz)\". You want to use the parenthesis to group the 2 strings your looking for.\n",
"Remember the doc property inside of python. In particular, you can access it for regex by typing \nimport re \nre.__doc__\n\nat the python shell or by using a python doc browser such as the one built into Spyder. \nThis particular question is addressed there.\n"
] |
[
11,
1,
0,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0002734010_python_regex.txt
|
Q:
about python scripting
I have this code
class HNCS (ThreadingTCPServer):
def verify_request(self, request, client_address):
for key in connections:
if connections[key].client_address[0] == client_address[0]:
if client_address[0] != '127.0.0.1':
return False
return True
def welcome(self):
return '''______________________________________________________
------------------------------------------------------
%s
______________________________________________________
------------------------------------------------------
* Server started %s
* Waiting for connections on port %i
''' % (gpl, ctime(), PORT)
I only can't figure out the line where it says
if connections[key].client_address[0] == client_address[0]
how come we used client_address as an attribute after dictionary???
A:
Perhaps the dictionary is storing values which are objects that happen to have a client_address member property?
In other words, the .client_address there isn't the same thing as the client_address passed in as an argument. Instead, it's the name of a field within a class that happens to be stored in connections[key].
A:
Because the connections dictionary may have an object with a client_address attribute?
Like:
class SomeClass(object):
def __init__( self, address ) :
self.client_address = address
connections = {"oscar":SomeClass(["127.0.0.1","192.60.0.1"])}
for key in connections:
print connections[key].client_address[0]
edit
A dict is a dictionary where a value may be stored using a key. When you request that key again, you get the value back; it's that simple.
So in my previous example, the line:
connections = {"oscar":SomeClass(["127.0.0.1","192.60.0.1"])}
Could have been written as:
connections = {}
connections["oscar"] = SomeClass(["1","2"])
s = connections["oscar"]
In your comment test[self.name] = self your storing the object represented by self into the test dictionary using name as the key.
A:
for key in connections:
if connections[key].client_address[0] == client_address[0]:
This is simply looking at all the values stored in the connections dictionary, to see if any of their properties named client_address have the same first item (IP address) as the local variable client_address. It's not necessary for the variable to have the same name as the property of the value in the dictionary.
What it's saying is: abort the connection if another connection from the same IP address is already being served. (Except for localhost, which is allowed to have as many connections as it likes.)
It could be re-spelled as:
def verify_request(self, request, new_client_addr):
ip= new_client_addr[0]
active_ips= [value.client_address[0] for value in connections.values()]
return ip not in active_ips or ip=='127.0.0.1'
|
about python scripting
|
I have this code
class HNCS (ThreadingTCPServer):
def verify_request(self, request, client_address):
for key in connections:
if connections[key].client_address[0] == client_address[0]:
if client_address[0] != '127.0.0.1':
return False
return True
def welcome(self):
return '''______________________________________________________
------------------------------------------------------
%s
______________________________________________________
------------------------------------------------------
* Server started %s
* Waiting for connections on port %i
''' % (gpl, ctime(), PORT)
I only can't figure out the line where it says
if connections[key].client_address[0] == client_address[0]
how come we used client_address as an attribute after dictionary???
|
[
"Perhaps the dictionary is storing values which are objects that happen to have a client_address member property?\nIn other words, the .client_address there isn't the same thing as the client_address passed in as an argument. Instead, it's the name of a field within a class that happens to be stored in connections[key].\n",
"Because the connections dictionary may have an object with a client_address attribute?\nLike:\nclass SomeClass(object):\n def __init__( self, address ) :\n self.client_address = address\n\nconnections = {\"oscar\":SomeClass([\"127.0.0.1\",\"192.60.0.1\"])}\n\nfor key in connections:\n print connections[key].client_address[0]\n\nedit\nA dict is a dictionary where a value may be stored using a key. When you request that key again, you get the value back, is that simple.\nSo in my previous example, the line:\nconnections = {\"oscar\":SomeClass([\"127.0.0.1\",\"192.60.0.1\"])}\n\nCould have been written as:\nconnection = []\nconnections[\"oscar\"] = SomeClass([\"1\",\"2\"])\ns = connections[\"oscar\"]\n\nIn your comment test[self.name] = self your storing the object represented by self into the test dictionary using name as the key. \n",
" for key in connections:\n if connections[key].client_address[0] == client_address[0]:\n\nThis is simply looking at all the values stored in the connections dictionary, to see if any of their properties named client_address have the same first item (IP address) as the local variable client_address. It's not necessary for the variable to have the same name as the property of the value in the dictionary.\nWhat it's saying is: abort the connection if another connection from the same IP address is already being served. (Except for localhost, which is allowed to have as many connections as it likes.)\nIt could be re-spelled as:\ndef verify_request(self, request, new_client_addr):\n ip= new_client_addr[0]\n active_ips= [value.client_address[0] for value in connections.values()]\n return ip not in active_ips or ip=='127.0.0.1'\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0002741440_oop_python.txt
|
Q:
mysqldb python escaping ? or %s?
I am currently using mysqldb.
What is the correct way to escape strings in mysqldb arguments?
Note that E = lambda x: x.encode('utf-8')
1) so my connection is set with charset='utf8'.
These are the errors I am getting for these arguments: w1, w2 = u'你好', u'我好'
self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)))
ret = self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)))
File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 158, in execute
TypeError: not all arguments converted during string formatting
self.cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (E(w1), E(w2)))
This works fine, but when w1 or w2 has \ inside, then the escaping obviously failed.
I personally know that %s is not a good method to pass in arguments due to injection attacks etc.
A:
To be more specific ... the cursor.execute() method takes an optional argument which contains values to be quoted and interpolated into the SQL template/statement. This is NOT done with a simple % operator! cursor.execute(some_sql, some_params) is NOT the same as cursor.execute(some_sql % some_params)
The Python DB-API specifies that any compliant driver/module must provide a .paramstyle attribute which can be any of 'qmark', 'numeric', 'named', 'format', or 'pyformat' ... so that one could, in theory, adapt your SQL query strings to the supported form through introspection and a little munging. This should still be safer than trying to quote and interpolate values into your SQL strings yourself.
I was particularly amused to read Warning Never, never, NEVER use Python string ... interpolation ... Not even at gunpoint. in the PsycoPG docs.
A:
When I remember it right, then you don't need to manually encode your unicode strings. The mysqldb module will do this for you.
And the mysqldb module uses %s as parameters instead of ?. This is the reason for the error in your first example.
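Putting the two answers together, a sketch of the parameterized call (assuming self.cur and the unicode values w1/w2 from the question, with the connection opened with charset='utf8'):
# %s placeholders with the values passed as a separate tuple; the driver
# does the quoting, so backslashes inside w1/w2 are handled for you
self.cur.execute("SELECT dist FROM distance WHERE w1 = %s AND w2 = %s", (w1, w2))
row = self.cur.fetchone()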
A:
You could use triple quotes and raw string format
self.cur.execute(r"""SELECT dist FROM distance ... """,...)
|
mysqldb python escaping ? or %s?
|
I am currently using mysqldb.
What is the correct way to escape strings in mysqldb arguments?
Note that E = lambda x: x.encode('utf-8')
1) so my connection is set with charset='utf8'.
These are the errors I am getting for these arguments: w1, w2 = u'你好', u'我好'
self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)))
ret = self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)))
File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 158, in execute
TypeError: not all arguments converted during string formatting
self.cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (E(w1), E(w2)))
This works fine, but when w1 or w2 has \ inside, then the escaping obviously failed.
I personally know that %s is not a good method to pass in arguments due to injection attacks etc.
|
[
"To be more specific ... the cursor.execute() method takes an optional argument which contains values to be quoted and interpolated into the SQL template/statement. This is NOT done with a simple % operator! cursor.execute(some_sql, some_params) is NOT the same as cursor.execute(some_sql % some_params)\nThe Python DB-API specifies that any compliant driver/module must provide a .paramstyle attribute which can be any of 'qmark', 'numeric', 'named', 'format', or 'pyformat' ... so that one could, in theory, adapt your SQL query strings to the supported form through introspection and a little munging. This should still be safer than trying to quote and interpolate values into your SQL strings yourself.\nI was particularly amused to read Warning Never, never, NEVER use Python string ... interpolation ... Not even at gunpoint. in the PsycoPG docs.\n",
"When I remember it right, then you don't need to manually encode your unicode strings. The mysqldb module will do this for you.\nAnd the mysqldb module uses %s as parameters instead of ?. This is the reason for the error in your first example.\n",
"You could use triple quotes and raw string format \nself.cur.execute(r\"\"\"SELECT dist FROM distance ... \"\"\",...)\n\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0002490852_mysql_python.txt
|
Q:
Search over multiple fields
I think I don't unterstand django-haystack properly:
I have a data model containing several fields, and I would to have two of them searched:
class UserProfile(models.Model):
user = models.ForeignKey(User, unique=True, default=None)
twitter_account = models.CharField(max_length=50, blank=False)
My search index settings:
class UserProfileIndex(SearchIndex):
text = CharField(document=True, model_attr='user')
twitter_account = CharField(model_attr='twitter_account')
def get_queryset(self):
"""Used when the entire index for model is updated."""
return UserProfile.objects.all()
But when I perform a search, only the field "username" is searched; "twitter_account" is ignored. When I select the Searchresults via dbshell, the objects contain the correct values for "user" and "twitter_account", but the result page shows a "no results":
{% if query %}
<h3>Results</h3>
{% for result in page.object_list %}
<p>
<a href="{{ result.object.get_absolute_url }}">{{ result.object.id }}</a>
</p>
{% empty %}
<p>No results</p>
{% endfor %}
{% endif %}
Any ideas?
A:
I guess that's because haystack uses the document field for generic searches unless you define a specific search for other fields like the twitter_account field.
from haystack documentation
Every SearchIndex requires there be
one (and only one) field with
document=True. This indicates to both
Haystack and the search engine about
which field is the primary field for
searching within.
Try specifing the index as follows
class UserProfileIndex(SearchIndex):
text = CharField(document=True, use_template=True)
user = CharField(model_attr='user')
twitter_account = CharField(model_attr='twitter_account')
and create a file named
search/indexes//userprofile_text.txt
which will contain the following
{{ object.user.get_full_name }}
{{ object.twitter_account}}
now haystack will search in the contents of this file (where you can add whatever you want) when you don't specify an index filter.
|
Search over multiple fields
|
I think I don't unterstand django-haystack properly:
I have a data model containing several fields, and I would to have two of them searched:
class UserProfile(models.Model):
user = models.ForeignKey(User, unique=True, default=None)
twitter_account = models.CharField(max_length=50, blank=False)
My search index settings:
class UserProfileIndex(SearchIndex):
text = CharField(document=True, model_attr='user')
twitter_account = CharField(model_attr='twitter_account')
def get_queryset(self):
"""Used when the entire index for model is updated."""
return UserProfile.objects.all()
But when I perform a search, only the field "username" is searched; "twitter_account" is ignored. When I select the Searchresults via dbshell, the objects contain the correct values for "user" and "twitter_account", but the result page shows a "no results":
{% if query %}
<h3>Results</h3>
{% for result in page.object_list %}
<p>
<a href="{{ result.object.get_absolute_url }}">{{ result.object.id }}</a>
</p>
{% empty %}
<p>No results</p>
{% endfor %}
{% endif %}
Any ideas?
|
[
"I guess thats because haystack uses the document field for generic searches unless you define a specific search for other fields like the twitter_account field.\nfrom haystack documentation\n\nEvery SearchIndex requires there be\n one (and only one) field with\n document=True. This indicates to both\n Haystack and the search engine about\n which field is the primary field for\n searching within.\n\nTry specifing the index as follows\nclass UserProfileIndex(SearchIndex):\n text = CharField(document=True, use_template=True)\n user = CharField(model_attr='user')\n twitter_account = CharField(model_attr='twitter_account')\n\nand create the a file named\nsearch/indexes//userprofile_text.txt\nwhich will contain the following\n{{ object.user.get_full_name }}\n{{ object.twitter_account}}\n\nnow haystack will search in the contents of this file (where you can add whatever you want) when you don't specify an index filter.\n"
] |
[
9
] |
[] |
[] |
[
"django",
"django_haystack",
"python"
] |
stackoverflow_0002732898_django_django_haystack_python.txt
|
Q:
Python Profiling In Windows, How do you ignore Builtin Functions
I have not been capable of finding this anywhere online. I was looking to find out using a profiler how to better optimize my code, and when sorting by which functions use up the most time cumulatively, things like str(), print, and other similar widely used functions eat up much of the profile. What is the best way to profile a python program to get the user-defined functions only to see what areas of their code they can optimize?
I hope that makes sense, any light you can shed on this subject would be very appreciated.
A:
OK, I assume your real goal is to make your code as fast as reasonably possible, right?
It is natural to assume you do that by finding out how long your functions take, but there is another way to look at it.
Consider as your program runs it traces out a call tree, which is kind of like a real tree outside your window. The trunk is like the main function, and where any branch splits off from it is like calling another function.
Suppose each "leaf" takes a certain amount of time, and what you want to do is prune the tree so as to remove as many leaves as possible.
One way is to find branches with lots of leaves and cut off the leaves. Another way is to cut off whole branches if you don't need them. The problem is to find heavy branches that you don't need.
One bone-simple way to do this is to pick several leaves at random, like 10, and on each one, trace a line down its branch all the way back to the trunk. Any branch point will have some number of these lines running through it, from leaf to trunk. The more lines run through that branch point, the more leaves are on that branch, and the more you could save by pruning it.
Here's how you can apply this to your program. To sample a leaf, you pause the program at random and look at the call stack. That is the line back to the trunk. Each call site on it (not function, call site) is a branch point. If that call site is on some fraction of samples, like 40%, then that is roughly how much you could save by pruning it.
So, don't think of it as measuring how long functions take. Think of it as asking which call sites are "heavy". That's all there is to it.
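If pausing by hand under a debugger is awkward on Windows, the same idea can be roughly automated from a background thread with sys._current_frames(); a sketch (run_my_program is a stand-in for your own code):
import collections
import sys
import threading
import time
import traceback

def sample_stack(main_ident, counts, samples=200, interval=0.05):
    # periodically snapshot the main thread's call stack and count call sites
    for _ in xrange(samples):
        frame = sys._current_frames().get(main_ident)
        if frame is not None:
            for filename, lineno, func, _ in traceback.extract_stack(frame):
                counts[(filename, lineno, func)] += 1
        time.sleep(interval)

counts = collections.defaultdict(int)
sampler = threading.Thread(target=sample_stack,
                           args=(threading.current_thread().ident, counts))
sampler.daemon = True
sampler.start()

run_my_program()    # the code you actually want to profile

for site, hits in sorted(counts.items(), key=lambda kv: -kv[1])[:20]:
    print '%5d  %s:%d  %s' % (hits, site[0], site[1], site[2])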
|
Python Profiling In Windows, How do you ignore Builtin Functions
|
I have not been capable of finding this anywhere online. I was looking to find out using a profiler how to better optimize my code, and when sorting by which functions use up the most time cumulatively, things like str(), print, and other similar widely used functions eat up much of the profile. What is the best way to profile a python program to get the user-defined functions only to see what areas of their code they can optimize?
I hope that makes sense, any light you can shed on this subject would be very appreciated.
|
[
"OK, I assume your real goal is to make your code as fast as reasonably possible, right?\nIt is natural to assume you do that by finding out how long your functions take, but there is another way to look at it.\nConsider as your program runs it traces out a call tree, which is kind of like a real tree outside your window. The trunk is like the main function, and where any branch splits off from it is like calling another function.\nSuppose each \"leaf\" takes a certain amount of time, and what you want to do is prune the tree so as to remove as many leaves as possible.\nOne way is to find branches with lots of leaves and cut off the leaves. Another way is to cut off whole branches if you don't need them. The problem is to find heavy branches that you don't need.\nOne bone-simple way to do this is to pick several leaves at random, like 10, and on each one, trace a line down its branch all the way back to the trunk. Any branch point will have some number of these lines running through it, from leaf to trunk. The more lines run through that branch point, the more leaves are on that branch, and the more you could save by pruning it.\nHere's how you can apply this to your program. To sample a leaf, you pause the program at random and look at the call stack. That is the line back to the trunk. Each call site on it (not function, call site) is a branch point. If that call site is on some fraction of samples, like 40%, then that is roughly how much you could save by pruning it.\nSo, don't think of it as measuring how long functions take. Think of it as asking which call sites are \"heavy\". That's all there is to it.\n"
] |
[
9
] |
[] |
[] |
[
"built_in",
"cprofile",
"optimization",
"profiling",
"python"
] |
stackoverflow_0002741520_built_in_cprofile_optimization_profiling_python.txt
|
Q:
Can't run os.system command in Django?
We have a Django app running on an Apache server (mod_python) on a Windows machine which needs to call some R scripts. It would be easiest to call R through os.system, but when Django gets to the os.system command it freezes up. I've also tried subprocess with the same result.
We have a possibly related problem in that Django can only access the file system of the machine it's on; all network drives appear to be invisible to it, which is VERY frustrating.
Any ideas on both of these issues (I'm assuming it's the same limitation in both instances) would be most appreciated.
A:
Instead of os.system, would RPy2 meet your needs? I've used it in a similar case to the one you're describing with Django, and it's worked quite well.
The high-level interface in rpy2 is designed to facilitate the use of R by Python programmers. R objects are exposed as instances of Python-implemented classes, with R functions as bound methods to those objects in a number of cases.
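As a rough sketch of what that can look like in practice (the script path and the run_analysis function are made-up names, and this assumes rpy2 and R are both installed on the server):
import rpy2.robjects as robjects

# Load an R script, then call a function it defines; both names are hypothetical.
robjects.r('source("C:/scripts/analysis.R")')
run_analysis = robjects.r['run_analysis']
result = run_analysis(42)
print list(result)   # R vectors come back as iterable Python objects
Because this keeps everything inside the Django process, it also avoids spawning a child process under mod_python, which may be part of why os.system was hanging.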
|
Can't run os.system command in Django?
|
We have a Django app running on an Apache server (mod_python) on a Windows machine which needs to call some R scripts. It would be easiest to call R through os.system, but when Django gets to the os.system command it freezes up. I've also tried subprocess with the same result.
We have a possibly related problem in that Django can only access the file system of the machine it's on; all network drives appear to be invisible to it, which is VERY frustrating.
Any ideas on both of these issues (I'm assuming it's the same limitation in both instances) would be most appreciated.
|
[
"Instead of os.system, would RPy2 meet your needs? I've used it in a similar case to the one you're describing with Django, and it's worked quite well.\n\nThe high-level interface in rpy2 is designed to facilitate the use of R by Python programmers. R objects are exposed as instances of Python-implemented classes, with R functions as bound methods to those objects in a number of cases. \n\n"
] |
[
1
] |
[] |
[] |
[
"apache",
"django",
"python",
"windows"
] |
stackoverflow_0002741662_apache_django_python_windows.txt
|
Q:
Url for the current page from a Mako template in Pylons
I need to know the full url for the current page from within a Mako template file in Pylons.
The URL will be used in an iframe contained within the page, so it needs to be known when the page is being generated rather than after the page hits the server or from the environment. (Not sure if I am communicating that last bit properly)
A:
Not sure if this is the Pylons way of doing things but ${request.url} seems to work for me.
A:
I think you can use h.url_for('', qualified=True) to get the full URL.
Make sure you have imported url_for in your helpers file (the module Pylons exposes as h): from routes import url_for
Have a look at http://pylonshq.com/docs/en/0.9.7/thirdparty/routes/#routes.util.url_for
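To make that concrete, a minimal sketch of the relevant part of lib/helpers.py (the module Pylons exposes to templates as h), plus an example of dropping the result into the iframe, might look like this:
# lib/helpers.py
from routes import url_for

# then, in the Mako template:
#   <iframe src="${h.url_for('', qualified=True)}"></iframe>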
|
Url for the current page from a Mako template in Pylons
|
I need to know the full url for the current page from within a Mako template file in Pylons.
The URL will be used in an iframe contained within the page, so it needs to be known when the page is being generated rather than after the page hits the server or from the environment. (Not sure if I am communicating that last bit properly)
|
[
"Not sure if this is the Pylons way of doing things but ${request.url} seems to work for me.\n",
"I think you can use h.url_for('', qualified=True) to get the full URL. \nMake sure you have imported url_for in your helper file: from routes.util import helpers as h\nHave a look at http://pylonshq.com/docs/en/0.9.7/thirdparty/routes/#routes.util.url_for\n"
] |
[
4,
0
] |
[] |
[] |
[
"mako",
"pylons",
"python",
"templates"
] |
stackoverflow_0002741893_mako_pylons_python_templates.txt
|
Q:
Unable to open images with Python's Image.open()
My code reads:
import Image
def generateThumbnail(self, width, height):
"""
Generates thumbnails for an image
"""
im = Image.open(self._file)
When I call this function, I get an error:
⇝ AttributeError: type object 'Image' has no attribute 'open'
However in the console:
import Image
im = Image.open('test.jpg')
I have no problem.
Any ideas?
Thanks!
A:
It's odd that you're getting an exception about Image being a type object, not a module. Is 'Image' being assigned to elsewhere in your code?
A:
Does your actual code have the incorrect statements:
from Image import Image
or
from Image import *
The Image module contains an Image class, but they are of course different (the module has an open method). If you use either of these two forms, Image will incorrectly refer to the class.
EDIT: Another possibility is that you are redefining a conflicting Image name yourself. Do you have your own Image class? If so, rename it.
A:
This is consistent with you having created a (new-style) class called Image, or imported it from somewhere else (perhaps inadvertently, from a "*" import), at some point after importing "Image":
>>> import Image
>>> Image.open
<function open at 0x99e3b0>
>>> class Image(object): pass
...
>>> Image.open
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'Image' has no attribute 'open'
>>>
Look for this. You can check with "print Image", which should give you something like:
>>> print Image
<class 'foo.Image'>
>>>
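One way to sidestep the clash once you have found it (just a sketch, assuming the PIL Image module) is to import the module under a distinct name so that no later Image class can shadow it:
import Image as PILImage   # PIL's Image module, renamed to avoid any Image class

def generateThumbnail(self, width, height):
    im = PILImage.open(self._file)
    im.thumbnail((width, height))   # resizes in place to fit within width x height
    return im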
|
Unable to open images with Python's Image.open()
|
My code reads:
import Image
def generateThumbnail(self, width, height):
"""
Generates thumbnails for an image
"""
im = Image.open(self._file)
When I call this function, I get an error:
⇝ AttributeError: type object 'Image' has no attribute 'open'
However in the console:
import Image
im = Image.open('test.jpg')
I have no problem.
Any ideas?
Thanks!
|
[
"It's odd that you're getting an exception about Image being a type object, not a module. Is 'Image' being assigned to elsewhere in your code?\n",
"Does your actual code have the incorrect statements:\nfrom Image import Image\n\nor\nfrom Image import *\n\nThe Image module contains an Image class, but they are of course different (the module has an open method). If you use either of these two forms, Image will incorrectly refer to the class.\nEDIT: Another possiblity is that you re defining a conflicting Image name yourself. Do you have your own Image class? If so, rename it.\n",
"This is consistent with you having created a (new-style) class called Image, or imported it from somewhere else (perhaps inadvertently, from a \"*\" import), at some point after importing \"Image\":\n>>> import Image\n>>> Image.open\n<function open at 0x99e3b0>\n>>> class Image(object): pass\n... \n>>> Image.open\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: type object 'Image' has no attribute 'open'\n>>> \n\nLook for this. You can check with \"print Image\", which should give you something like:\n>>> print Image\n<class 'foo.Image'>\n>>> \n\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"image",
"pylons",
"python"
] |
stackoverflow_0002742085_image_pylons_python.txt
|
Q:
installing python packages on android
I want to install a Python package from source on Android. Is this possible? I tried running the .py install files from the console, but distutils (.core, ccompiler) isn't being found. Is it still possible to install them?
A:
Android does not ship with a Python interpreter, nor does it ship with gcc or other compilers. You will need to get an ARM binary from somewhere or cross-compile one yourself. (BTW, I'm assuming ARM, but substitute in whatever architecture you happen to be running).
A:
If you're using ASE (Android Scripting Environment), and the package you want to install is pure-python (that is, no C compiled libraries), you can simply copy the package's directory manually onto the sdcard, to /sdcard/ase/extras/python (as specified on the ASE FAQ).
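As a small illustration (a sketch only; mypackage is a made-up name), once the package directory has been copied onto the sdcard a script can also extend sys.path itself, which is handy if the package lives outside the default directory:
import sys

# /sdcard/ase/extras/python is the location the ASE FAQ mentions; append
# whichever directory you copied the pure-Python package into.
sys.path.append('/sdcard/ase/extras/python')

import mypackage   # hypothetical pure-Python package copied to that directory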
|
installing python packages on android
|
I want to install a Python package from source on Android. Is this possible? I tried running the .py install files from the console, but distutils (.core, ccompiler) isn't being found. Is it still possible to install them?
|
[
"Android does not ship with a Python interpreter, nor does it ship with gcc or other compilers. You will need to get an ARM binary from somewhere or cross-compile one yourself. (BTW, I'm assuming ARM, but substitute in whatever architecture you happen to be running).\n",
"If you're using ASE (Android Scripting Environment), and the package you want to install is pure-python (that is, no C compiled libraries), you can simply copy the package's directory manually onto the sdcard, to /sdcard/ase/extras/python (as specified on the ASE FAQ).\n"
] |
[
1,
0
] |
[] |
[] |
[
"android",
"android_scripting",
"java",
"python"
] |
stackoverflow_0002733971_android_android_scripting_java_python.txt
|
Q:
Using Android's `Timer.schedule` in Python
Could someone explain how to use the Timer.schedule - that is used for scheduling tasks - in Python?
Note that Python's time.sleep will not work in Android.
A:
time.sleep does work on the Android Scripting Environment (at least for the last 5 releases or so). I've just tested it on the interpreter console.
On the other hand, some dialogs (like makeToast, for instance) won't hold the script execution while they're being displayed. Perhaps that's why you may be feeling time.sleep doesn't work.
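A tiny sketch of the difference on the ASE console (the android module is the one ASE provides to scripts):
import android
import time

droid = android.Android()
droid.makeToast('starting...')   # returns immediately and does not block the script
time.sleep(5)                    # the script itself really does pause here
droid.makeToast('5 seconds later')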
|
Using Android's `Timer.schedule` in Python
|
Could someone explain how to use the Timer.schedule - that is used for scheduling tasks - in Python?
Note that Python's time.sleep will not work in Android.
|
[
"time.sleep does work on the Android Scripting Environment (at least for the last 5 releases or so). I've just tested it on the interpreter console.\nOn the other hand, some dialogs (like makeToast, for instance) won't hold the script execution while they're being displayed. Perhaps that's why you may be feeling time.sleep doesn't work.\n"
] |
[
0
] |
[] |
[] |
[
"android",
"android_scripting",
"python",
"timer"
] |
stackoverflow_0002115227_android_android_scripting_python_timer.txt
|
Q:
Python match and return string in between
I have following code
stringA = "xxxxxxFoundAaaaaaaaaaaaaaaFoundBxxxxxxx"
stringB = "FoundA"
stringC = "FoundB"
How do I write a regular expression in Python to return aaaaaaaaaaaaaa?
A:
>>>
>>> stringA = "xxxxxxFoundAaaaaaaaaaaaaaaFoundBxxxxxxx"
>>> stringB = "FoundA"
>>> stringC = "FoundB"
>>>
>>> import re
>>> re.search(re.escape(stringB)+"(.*?)"+re.escape(stringC),stringA).group(1)
'aaaaaaaaaaaaaa'
>>>
A:
re.search(re.escape(stringB) + "(.*?)" + re.escape(stringC), stringA).group(1)
|
Python match and return string in between
|
I have following code
stringA = "xxxxxxFoundAaaaaaaaaaaaaaaFoundBxxxxxxx"
stringB = "FoundA"
stringC = "FoundB"
How do I write a regular expression in Python to return aaaaaaaaaaaaaa?
|
[
">>>\n>>> stringA = \"xxxxxxFoundAaaaaaaaaaaaaaaFoundBxxxxxxx\"\n>>> stringB = \"FoundA\"\n>>> stringC = \"FoundB\"\n>>>\n>>> import re\n>>> re.search(re.escape(stringB)+\"(.*?)\"+re.escape(stringC),stringA).group(1)\n'aaaaaaaaaaaaaa'\n>>>\n\n",
"re.search(re.escape(stringB) + \"(.*?)\" + re.escape(stringC), stringA).group(1)\n\n"
] |
[
14,
4
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0002742309_python_regex.txt
|