Q:
Python 2.6.4 property decorators not working
I've seen many examples online and in this forum of how to create properties in Python with special getters and setters. However, I can't get the special getter and setter methods to execute, nor can I use the @property decorator to make a property read-only.
I'm using Python 2.6.4 and here is my code. Different methods of using properties are employed, but neither works.
import os

class PathInfo:
    def __init__(self, path):
        self.setpath(path)

    def getpath(self):
        return self.__path

    def setpath(self, path):
        if not path:
            raise TypeError
        if path.endswith('/'):
            path = path[:-1]
        self.__path = path
        self.dirname = os.path.dirname(path)
        self.basename = os.path.basename(path)
        (self.rootname, self.dext) = os.path.splitext(self.basename)
        self.ext = self.dext[1:]

    path = property(fget=getpath, fset=setpath)

    @property
    def isdir(self):
        return os.path.isdir(self.__path)

    @property
    def isfile(self):
        return os.path.isfile(self.__path)
A:
PathInfo must subclass object.
Like this:
class PathInfo(object):
Properties only work on new-style classes.
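A minimal sketch of the failure mode under Python 2 (the class names and the doubling setter are illustrative):
class Broken:                        # classic class: the setter is silently bypassed
    def _get_x(self):
        return self._x
    def _set_x(self, value):
        self._x = value * 2          # should double the value on assignment
    x = property(_get_x, _set_x)

class Works(object):                 # new-style class: the property behaves
    def _get_x(self):
        return self._x
    def _set_x(self, value):
        self._x = value * 2
    x = property(_get_x, _set_x)

b = Broken()
b.x = 10
print b.x   # 10 -- the assignment created a plain attribute that shadows the property

w = Works()
w.x = 10
print w.x   # 20 -- the setter ran as expected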
Q:
Why does Nose not see any of my environmental variables?
I'm just getting started using Nose and Nosetests and my tests are failing because Nose can't see the environmental variables.
So far, the errors:
AttributeError: 'Settings' object has no attribute 'DJANGO_SETTINGS_MODULE'
I fixed this by exporting DJANGO_SETTINGS_MODULE from my .bash_profile:
export DJANGO_SETTINGS_MODULE="settings"
Now I'm seeing:
AttributeError: 'Settings' object has no attribute 'DATABASE_SUPPORTS_TRANSACTIONS'
Why would iPython and the Django webserver be able to see these ENV variables, but Nose can't?
A:
As Alok said, Nose doesn't call BaseDatabaseCreation.create_test_db('None') from django.db.backends.creation, so you will need to set this setting manually.
I was not able to get that to work.
However, I found NoseDjango.
Install NoseDjango with:
easy_install django-nose
Since django-nose extends Django's built-in test command, you should add it to your INSTALLED_APPS in settings.py:
INSTALLED_APPS = (
...
'django_nose',
...
)
Then set TEST_RUNNER in settings.py:
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
Once NoseDjango is set up, you can run your Nose tests via:
manage.py test
A:
Apparently nose doesn't call create_test_db() in django/db/backends/creation.py, so you are seeing this error. Just set it to None, or call the method yourself. Not sure if this is fixed in a recent version of Django.
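For reference, a minimal sketch of that manual workaround (whether the right value is None, False, or True depends on your database backend; the point is just to declare the attribute the test runner looks for):
# settings.py
DATABASE_SUPPORTS_TRANSACTIONS = False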
Q:
blocking channels vs async message passing
I've noticed two methods of "message passing": one I've seen Erlang use, and the other is from Stackless Python. From what I understand, here's the difference:
Erlang Style - Messages are sent and queued into the mailbox of the receiving process. From there they are removed on a FIFO basis. Once the first process sends the message, it is free to continue.
Python Style - Process A queues up to send to process B. B is currently performing some other action, so A is frozen until B is ready to receive. Once B opens a read channel, A sends the data, then they both continue.
Now I see the pro of the Erlang method being that you don't have any blocked processes: if B is never able to receive, A can still continue. However, I have noticed in some programs I have written that Erlang message boxes can fill up with hundreds (or thousands) of messages when the inflow of messages is greater than the outflow.
Now, I haven't written a large program in either framework/language, so I'm wondering what your experiences with this are, and whether it's something I should even worry about.
Yes, I know this is abstract, but I'm also looking for rather abstract answers.
A:
My experience in Erlang programming is that when you expect a high messaging rate (that is, a faster producer than consumer) then you add your own flow control. A simple scenario
The producer will: send a message, wait for an ack, then repeat.
The consumer will: wait for a message, send an ack once the message has been received and processed, then repeat.
One can also invert it: the producer waits for the consumer to come and grab the N next available messages.
These approaches and other flow-control schemes can be hidden behind functions; the first one is mostly already available as gen_server:call/2,3 against a gen_server OTP behavior process.
I see asynchronous messaging as in Erlang as the better approach, since when latencies are high you very much want to avoid a synchronization point when messaging between computers. One can then compose clever ways to implement flow control, say, requiring an ack from the consumer for every N messages the producer has sent it, or sending a special "ping me when you have received this one" message now and then, to measure ping time.
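A rough Python translation of the first scheme (the names are mine; in Erlang, gen_server:call wraps the same send-and-wait-for-reply pattern):
import threading
import Queue  # renamed to 'queue' in Python 3

data = Queue.Queue()
acks = Queue.Queue()

def process(msg):
    print "processed", msg

def producer(messages):
    for msg in messages:
        data.put(msg)   # send the message
        acks.get()      # block until the consumer acknowledges it

def consumer(count):
    for _ in range(count):
        msg = data.get()  # wait for a message
        process(msg)      # handle it first...
        acks.put(True)    # ...then acknowledge

msgs = range(5)
t = threading.Thread(target=consumer, args=(len(msgs),))
t.start()
producer(msgs)
t.join()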
A:
Broadly speaking, this is unbounded queues vs bounded queues. A stackless channel can be considered a special case of a queue with 0 size.
Bounded queues have a tendency to deadlock. Two threads/processes trying to send a message to each other, both with a full queue.
Unbounded queues have a more subtle failure mode. A large mailbox won't meet latency requirements, as you mentioned. Go far enough and it will eventually overflow; there is no such thing as infinite memory, so it's really just a bounded queue with a huge limit that aborts the process when full.
Which is best? That's hard to say. There are no easy answers here.
Q:
Will shell scripts called from python persist after the python script ends?
As part of an automated test, I have a python script that needs to call two shell scripts that start two different servers that need to interact after the calling script ends. (It's actually a jython script, but I'm not sure that matters at this point.) What can I do to ensure that the servers stay up after the python script ends?
At this point they're called something like this:
def runcmd(str, sleep):
    debug('Inside runcmd, executing: ' + str)
    os.chdir("/new/dir/")
    directory = os.getcwd()
    print 'current dir: ' + directory
    os.system(str)

t = threading.Thread(
    target=runcmd,
    args=(cmd, 50,)
)
A:
Python threads will all die with Python. Also, os.system is blocking. But that's okay -- if the command that os.system() runs launches a new process (but not a child process), all will be fine. On Windows, for instance, if the command begins with "start" the "start"'d process will remain after Python dies.
EDIT: nohup is an equivalent to start on Linux. (Thanks to S. Lott).
A:
os.system() does not return until the process it launches has ended. Use subprocess or Runtime.exec() if you want it in a separate process.
A:
I wonder if using subprocess.Popen would work better for you.
Maybe doing something like shell=True.
A:
Threads won't work because they are part of the process. The system call won't work because it blocks as your new process executes.
You will need to use something like os.fork() to spawn a new process and execute it in the new process. Take a look at subprocess for some good cookbook style solutions to this.
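A minimal sketch of the detached-launch idea under CPython on POSIX (the script path is illustrative, and note that Jython's subprocess may not support preexec_fn):
import os
import subprocess

# Launch the server in its own session so it is not killed when this
# script and its process group exit.
proc = subprocess.Popen(
    ['/path/to/start_server.sh'],   # illustrative path
    preexec_fn=os.setsid,           # POSIX only: become a new session leader
    stdout=open(os.devnull, 'w'),
    stderr=subprocess.STDOUT,
)
print 'launched server with pid', proc.pid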
A:
Generally, to launch a long-running server that's independent of its parent, you need to daemonize it. Depending on your environment, there are various wrappers that can assist in this process.
Q:
How can I detect errors programmatically when building an egg with setuptools?
If I have a script that builds eggs, basically by running
python setup.py bdist_egg --exclude-source-files
for a number of setup.py files that use setuptools to define how eggs are built, is there an easy way to determine if there were any errors in building the egg?
A situation I had recently was that there was a syntax error in a module. Setuptools spat out a message onto standard error but continued to create the egg, omitting the broken module. Because this was part of a batch creating a number of eggs, the error was missed, and the result was useless.
Is there a way to detect errors when building an egg programmatically, other than just capturing standard error and parsing that?
A:
distutils uses the py_compile.compile() function to compile source files. This function takes a doraise argument that, when set to True, raises an exception on compilation errors (the default is to print the errors to stderr). distutils doesn't call py_compile.compile() with doraise=True, so compilation is not aborted on compilation errors.
To stop on errors and be able to check the setup.py return code (it will be nonzero on errors), you could patch the py_compile.compile() function. For example, in your setup.py:
from setuptools import setup
import py_compile

# Replace py_compile.compile with a function that calls it with doraise=True
orig_py_compile = py_compile.compile

def doraise_py_compile(file, cfile=None, dfile=None, doraise=False):
    orig_py_compile(file, cfile=cfile, dfile=dfile, doraise=True)

py_compile.compile = doraise_py_compile

# Usual setup...
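With that patch in place, the calling batch script only needs to check each build's exit status. A rough sketch (the project paths are illustrative):
import subprocess
import sys

projects = ['/path/to/project_a', '/path/to/project_b']

failed = []
for path in projects:
    rc = subprocess.call(
        [sys.executable, 'setup.py', 'bdist_egg', '--exclude-source-files'],
        cwd=path,
    )
    if rc != 0:
        failed.append(path)

if failed:
    print 'egg build failed for:', ', '.join(failed)
    sys.exit(1)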
Q:
Is CherryPy a robust webserver (ie, is it reliable under a huge load like Apache)?
I'm wondering because CherryPy is, from my knowledge, built purely in Python, which is obviously slower than C et al. Does this mean that it's only good for dev / testing environments, or could I use it behind NGINX like I use Apache with Fast CGI currently?
A:
CherryPy's WSGI server is about as fast as a pure-Python WSGI server is going to get. I personally use it behind Nginx in production, but even standalone on my dev machine I can load each instance with several hundred requests / sec. without problems.
Can you find a faster server? Yes. Is CherryPy a robust web server, and good enough for most people to use in production? Yes.
A:
You should probably consider Apache + mod_wsgi as the standard front-end for any Python-based web application.
You do not want to serve any static content (.CSS, .JPEG, etc.) from any Python-based application; you want static files served by Apache.
You want the dynamic HTML pages handled separately by the mod_wsgi daemon.
Q:
Is there a significant overhead by using different versions of sha hashing (hashlib module)
The hashlib Python module provides the following hash algorithm constructors: md5(), sha1(), sha224(), sha256(), sha384(), and sha512().
Assuming I don't want to use md5, is there a big difference in using, say, sha1 instead of sha512? I want to use something like hashlib.shaXXX(hashString).hexdigest(), but as it's just for caching, I'm not sure I need the (eventual) extra overhead of 512...
Does this overhead exist, and if so, how big is it?
A:
Why not just benchmark it?
>>> import hashlib, timeit
>>> def sha1(s):
...     return hashlib.sha1(s).hexdigest()
...
>>> def sha512(s):
...     return hashlib.sha512(s).hexdigest()
...
>>> t1 = timeit.Timer("sha1('asdf' * 100)", "from __main__ import sha1")
>>> t512 = timeit.Timer("sha512('asdf' * 100)", "from __main__ import sha512")
>>> t1.timeit()
3.2463729381561279
>>> t512.timeit()
6.5079669952392578
So on my machine, sha512 is twice as slow as sha1. But as GregS said, why would you use a secure hash for caching? Try the builtin hash algorithms, which should be really fast and tuned:
>>> s = "asdf"
>>> hash(s)
-618826466
>>> s = "xxx"
>>> hash(s)
943435
>>> hash("xxx")
943435
Or better yet, use the builtin Python dictionaries. Maybe you can tell us more about what you plan on caching.
EDIT:
I'm thinking that you are trying to achieve something like this:
hash = hashlib.sha1(object_to_cache_as_string).hexdigest()
cache[hash] = object_to_cache
What I was referring to by "use the builtin Python dictionaries" is that you can simplify the above:
cache[object_to_cache_as_string] = object_to_cache
In this way, Python takes care of the hashing so you don't have to!
Regarding your particular problem, you could refer to Python hashable dicts in order to make a dictionary hashable. Then, all you'd need to do to cache the object is:
cache[object_to_cache] = object_to_cache
EDIT - Notes about Python3
Python 3.3 introduces hash randomization, which means that computed hashes might differ across processes, so you should not rely on the computed hash unless you set the PYTHONHASHSEED environment variable to 0.
References:
- https://docs.python.org/3/reference/datamodel.html#object.__hash__
- https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED
A:
Perhaps a naive test... but it looks like it depends on how much you're hashing. 2 blocks of sha512 is faster than 4 blocks of sha256?
>>> import timeit
>>> import hashlib
>>> for sha in [ x for x in dir(hashlib) if x.startswith('sha') ]:
...     t = timeit.Timer("hashlib.%s(data).hexdigest()" % sha, "import hashlib; data=open('/dev/urandom','r').read(1024)")
...     print sha + "\t" + repr(t.timeit(1000))
...
...
sha1 0.0084478855133056641
sha224 0.034898042678833008
sha256 0.034902095794677734
sha384 0.01980900764465332
sha512 0.019846916198730469
Q:
running clock and triggering
I want to run a clock constantly and trigger another function every 5 seconds.
Please give me an idea of how to do this.
Thanks a bunch
A:
>>> import sched, time
>>> s = sched.scheduler(time.time, time.sleep)
>>> def print_time():
...     s.enter(5, 1, print_time, ())
...     print "From print_time", time.time()
...
>>> s.enter(0, 1, print_time, ())
Event(time=1265846894.4069381, priority=1, action=<function print_time at 0xb7d1ab1c>, argument=())
>>> s.run()
From print_time 1265846894.41
From print_time 1265846899.41
From print_time 1265846904.42
From print_time 1265846909.42
A:
import time

while True:
    time.sleep(5)
    someFunction()
As gnibbler says, the interval will actually be (5 seconds + the time it takes to run someFunction). If you need it to be exactly 5 seconds:
targetTime = time.time()
while True:
    someFunction()
    targetTime += 5
    sleepTime = targetTime - time.time()
    if sleepTime > 0:
        time.sleep(sleepTime)
A:
You can use the scheduler from the sched module.
Just make the scheduled function reschedule itself every time it completes.
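Another common variant, not mentioned in the answers above (so treat this as an alternative sketch rather than their approach), is a threading.Timer that reschedules itself:
import threading, time

def tick():
    print "tick", time.time()
    threading.Timer(5, tick).start()  # reschedule in 5 seconds

tick()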
Q:
Is there a Python ebXML client?
I am trying to use a remote web service with an ebXML/SOAP interface from my python application and am hitting a wall about how to best accomplish it.
So far, what I can find are lots of Java interface bindings but none for Python.
Do I have to start my project over in Java?
A:
Doesn't look too hopeful at http://pypi.python.org/pypi?%3Aaction=search&term=ebXML&submit=search
Do you know if there is a C library?
Q:
When Does It Make Sense To Rewrite A Python Module in C?
In a game that I am writing, I use a 2D vector class which I have written to handle the speeds of the objects. This is called a large number of times every frame as there are a lot of objects on the screen, so any increase I can make in its speed will be useful.
It is pretty simple, consisting mostly of wrappers to the related math functions. It would be quite trivial to rewrite in C, but I am not sure whether doing so will make any significant difference as all it really does is call the underlying math functions, add, multiply or divide.
So, my question is: under what circumstances does it make sense to rewrite in C? Where will you see a significant speed boost, and where can you see a reasonable speed boost without rewriting an extensive amount of the program?
A:
If you're vector-munging, give numpy a try first. Chances are you will get speeds not far from C if you utilize numpy's vector manipulation functions wisely (a small sketch follows after the list below).
Other than that, your question is very heuristic. If your code is too slow:
Profile it - chances are you'll be able to improve it in Python
Use the correct optimized C-based libraries (numpy in your case)
Try psyco
Try rewriting parts with cython
If all else fails, rewrite in C
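As a sketch of the numpy suggestion above (the array shapes and frame time are illustrative), updating every object's position in one vectorized call replaces a Python-level loop over per-object vector instances:
import numpy as np

positions = np.random.rand(10000, 2)    # one row per object: x, y
velocities = np.random.rand(10000, 2)

dt = 1.0 / 60.0                         # illustrative frame time
positions += velocities * dt            # one update for all objects at once

speeds = np.sqrt((velocities ** 2).sum(axis=1))  # all magnitudes in one pass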
A:
First measure then optimize
A:
You should never optimize anything, be it in C or any other language, without timing your code before and after your optimization:
your clever optimization could in fact induce a slow down
optimizing something that takes 1% of the total execution time will never give you more than 1% performance
The common approach is:
1. profile your code
2. identify a hotspot
3. time this hotspot
4. optimize it
5. time the hotspot again; see if it's faster. If it's not, go to 3.
If you can't find hotspots, it could mean that your app is already optimized, or that you are not using the right algorithm for your problem. In either case, profiling helps you understand what your code does.
For profiling python code under Linux, you can use pyprof2calltree which works in conjunction with kcachegrind, and is totally awesome.
A:
A nice Profiler I use on Linux is pycallgraph - however, as your program gets bigger it starts to create much larger images which are harder to trace. I'm pretty sure you can exclude modules, though.
A:
Common wisdom is "profile", "measure", etc. Well - maybe. Just get in the debugger and take 10 stackshots. If more than one of them terminates in your wrapper code, then it is costing more than 10% roughly, so you should consider re-doing it in C, to save that time. Chances are you will find other things also that are costing more than that.
Q:
I've got Python built using VS2008, how do I install it?
I'm working with boost::python and wanted to build the whole thing to make sure I can pull it off. However, I don't see any install script or way to build the MSI so I can install it.
Anyone know where the directions are? Or the projects I could use to make an MSI file?
Doing this on linux seems trivial:
make install
How do I do this under Windows?
A:
All of this is much easier with MinGW, plus there's the fact that it's likely to be compatible with the ABI of the official package so that you can just install that instead and only build extensions with MinGW.
A:
Well, the python mailing list was some help.
Turns out there is a Tools/msi directory, and in there is Python code to help build the MSI from the tree you built. The only problem is that you can't use it without having Python and PythonWin installed. So I grabbed 2.6.4 Python and PythonWin and installed them.
It uses COM objects and the CabSDK from MS to build the MSI file. And then it has a couple of "issues" that I had to resolve. First, you need a VS2008 shell so you can run
nmake -f msisupport.mak
then you need to grab a copy of Tix (I didn't have to build it, just have it in place for the license.terms file; I probably could have just removed that list member for the same effect, but I was worried about something else being needed down below):
("Tcl", "tcl8*", "license.terms"),
("Tk", "tk8*", "license.terms"),
("Tix", "Tix-*", "license.terms")):
had to be changed to:
("Tcl", "tcl-8*", "license.terms"),
("Tk", "tk-8*", "license.terms"),
("Tix", "Tix*", "license.terms")):
because the package names have evidently changed in the not too distant past.
After that, I ran c:\python26\python msi.py and then it griped about the python264.chm being missing, so instead of trying to build it, I grabbed the one from the copy of python I had to install in order to build python and dumped it in the expected location.
Oh yeah, I also had to go to the PC directory and run
nmake -f icons.mak
This gave me a runnable msi file to install python (which was already installed, so that I could build the msi file to install my own version). Oh well, at least it is built now. Whew!
Q:
more pythonic way of finding element in list that maximizes a function
OK, I have this simple function that finds the element of the list that maximizes the value of another positive function.
def get_max(f, s):
# f is a function and s is an iterable
best = None
best_value = -1
for element in s:
this_value = f(element)
if this_value > best_value:
best = element
best_value = this_value
return best
But I find it very long for the simple work it does. In fact, it reminds me of Java (brrrr).
Can anyone show me a more pythonic and clean way of doing this?
Thanks!
Manuel
A:
def get_max(f, s):
return max(s, key=f)
|
more pythonic way of finding element in list that maximizes a function
|
OK, I have this simple function that finds the element of the list that maximizes the value of another positive function.
def get_max(f, s):
# f is a function and s is an iterable
best = None
best_value = -1
for element in s:
this_value = f(element)
if this_value > best_value:
best = element
best_value = this_value
return best
But I find it very long for the simple work it does. In fact, it reminds me of Java (brrrr).
Can anyone show me a more pythonic and clean way of doing this?
Thanks!
Manuel
|
[
"def get_max(f, s):\n return max(s, key=f)\n\n"
] |
[
15
] |
[] |
[] |
[
"coding_style",
"list",
"maximize",
"python"
] |
stackoverflow_0002242489_coding_style_list_maximize_python.txt
|
Q:
(python) docstring is causing indentation error
def getText(nodelist):
"""Extracts the text between XML tags
I took this directly from http://docs.python.org/library/xml.dom.minidom.html.
For example, if I have a tag <Tag>525</Tag> this method returns me '525'
"""
rc = ""
for node in nodelist:
if node.nodeType == node.TEXT_NODE:
rc = rc + node.data
return rc
Gives me IndentationError: unindent does not match any outer indentation level
def getText(nodelist):
rc = ""
for node in nodelist:
if node.nodeType == node.TEXT_NODE:
rc = rc + node.data
return rc
Does not. All I am doing is deleting the docstring comment. What is going on?
A:
Your docstring starts with tabs. Make your code only use spaces for indentation (or only tabs), including the indentation for the docstrings.
A:
Make sure you are not mixing spaces and tabs for your indentation
|
(python) docstring is causing indentation error
|
def getText(nodelist):
"""Extracts the text between XML tags
I took this directly from http://docs.python.org/library/xml.dom.minidom.html.
For example, if I have a tag <Tag>525</Tag> this method returns me '525'
"""
rc = ""
for node in nodelist:
if node.nodeType == node.TEXT_NODE:
rc = rc + node.data
return rc
Gives me IndentationError: unindent does not match any outer indentation level
def getText(nodelist):
rc = ""
for node in nodelist:
if node.nodeType == node.TEXT_NODE:
rc = rc + node.data
return rc
Does not. All I am doing is deleting the docstring comment. What is going on?
|
[
"Your docstring starts with tabs. Make your code only use spaces for indentation (or only tabs), including the indentation for the docstrings.\n",
"Make sure you are not mixing spaces and tabs for your indentation\n"
] |
[
16,
3
] |
[] |
[] |
[
"indentation",
"python"
] |
stackoverflow_0002243009_indentation_python.txt
|
Q:
Django and VirtualEnv Development/Deployment Best Practices
Just curious how people are deploying their Django projects in combination with virtualenv
More specifically, how do you keep your production virtualenvs synced correctly with your development machine?
I use git for SCM, but I don't have my virtualenv inside the git repo - should I, or is it best to use pip freeze and then re-create the environment on the server using the freeze output? (If you do this, could you please describe the steps - I am finding very little good documentation on the unfreezing process - is something like pip install -r freeze_output.txt possible?)
A:
I just set something like this up at work using pip, Fabric and git. The flow is basically like this, and borrows heavily from this script:
In our source tree, we maintain a requirements.txt file. We'll maintain this manually.
When we do a new release, the Fabric script creates an archive based on whatever treeish we pass it.
Fabric will find the SHA for what we're deploying with git log -1 --format=format:%h TREEISH. That gives us SHA_OF_THE_RELEASE
Fabric will get the last SHA for our requirements file with git log -1 --format=format:%h SHA_OF_THE_RELEASE requirements.txt. This spits out the short version of the hash, like 1d02afc, which is the SHA for that file for this particular release.
The Fabric script will then look into a directory where our virtualenvs are stored on the remote host(s).
If there is not a directory named 1d02afc, a new virtualenv is created and set up with pip install -E /path/to/venv/1d02afc -r /path/to/requirements.txt
If there is an existing path/to/venv/1d02afc, nothing is done
The little magic part of this is passing whatever tree-ish you want to git, and having it do the packaging (from Fabric). By using git archive my-branch, git archive 1d02afc or whatever else, I'm guaranteed to get the right packages installed on my remote machines.
I went this route since I really didn't want to have extra virtualenvs floating around if the packages hadn't changed between releases. I also don't like the idea of having the actual packages I depend on in my own source tree.
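A condensed, illustrative Fabric-style sketch of that flow (the paths and task layout here are my assumptions, not the poster's actual script):
from fabric.api import local, run

REMOTE_VENVS = '/var/venvs'  # assumed location for per-requirements virtualenvs

def deploy(treeish='master'):
    release_sha = local('git log -1 --format=format:%h ' + treeish, capture=True)
    req_sha = local('git log -1 --format=format:%h ' + release_sha + ' -- requirements.txt',
                    capture=True)
    venv = '%s/%s' % (REMOTE_VENVS, req_sha)
    # Build the virtualenv only if this requirements.txt revision is new
    run('test -d %s || pip install -E %s -r /path/to/requirements.txt' % (venv, venv))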
A:
I use this bootstrap.py: http://github.com/ccnmtl/ccnmtldjango/blob/master/ccnmtldjango/template/bootstrap.py
which expects a directory called 'requirements' that looks something like this: http://github.com/ccnmtl/ccnmtldjango/tree/master/ccnmtldjango/template/requirements/
There's an apps.txt, a libs.txt (which apps.txt includes - I just like to keep Django apps separate from other Python modules) and a src directory which contains the actual tarballs.
When ./bootstrap.py is run, it creates the virtualenv (wiping a previous one if it exists) and installs everything from requirements/apps.txt into it. I do not ever install anything into the virtualenv otherwise. If I want to include a new library, I put the tarball into requirements/src/, add a line to one of the textfiles and re-run ./bootstrap.py.
bootstrap.py and requirements get checked into version control (also a copy of pip.py so I don't even have to have that installed system-wide anywhere). The virtualenv itself isn't. The scripts that I have that push out to production run ./bootstrap.py on the production server each time I push. (bootstrap.py also goes to some lengths to ensure that it's sticking to Python 2.5 since that's what we have on the production servers (Ubuntu Hardy) and my dev machine (Ubuntu Karmic) defaults to Python 2.6 if you're not careful)
Q:
Getting certain attribute value using XPath
From the following HTML snippet:
<link rel="index" href="/index.php" />
<link rel="contents" href="/getdata.php" />
<link rel="copyright" href="/blabla.php" />
<link rel="shortcut icon" href="/img/all/favicon.ico" />
I'm trying to get the href value of the link tag whose rel value is "shortcut icon", and I'm trying to achieve that using XPath.
How to do that in Python?
A:
Like this:
data = """<link rel="index" href="/index.php" />
<link rel="contents" href="/getdata.php" />
<link rel="copyright" href="/blabla.php" />
<link rel="shortcut icon" href="/img/all/favicon.ico" />
"""
from lxml import etree
d = etree.HTML(data)
print d.xpath('//link[@rel="shortcut icon"]/@href')
# ['/img/all/favicon.ico']
Q:
How to distinguish between Django's automatically created ManyToMany through-models and manually defined ones?
Say we have models:
from django.db import models

class AutomaticModel(models.Model):
    others = models.ManyToManyField('OtherModel')

class ManualModel(models.Model):
    others = models.ManyToManyField('OtherModel', through='ThroughModel')

class OtherModel(models.Model):
    pass

class ThroughModel(models.Model):
    pblm = models.ForeignKey('ManualModel')
    other = models.ForeignKey('OtherModel')
After this we can access the through models via
AutomaticModel._meta.get_field('others').rel.through and
ManualModel._meta.get_field('others').rel.through
Problem:
If given either AutomaticModel or ManualModel (or their 'others' fields), how do I determine whether the through-model was created automatically or manually?
Of course, except for testing names - but that doesn't fit the general case, and checking against the contents of models.py seems a bit error-prone as well. And there seems to be nothing in the actual fields' __dict__ or anywhere else.
Any clues?
A:
Well, the South developers seemed to know it: the model is autogenerated if
# Django 1.0/1.1
(not field.rel.through)
or
# Django 1.2+
getattr(getattr(field.rel.through, "_meta", None), "auto_created", False)
Woohoo!
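Wrapped up as a small helper (a sketch: the function name is mine and the version check is simplified):
import django

def m2m_through_is_auto_created(model, field_name):
    """Return True if the M2M field's through-model was auto-generated."""
    field = model._meta.get_field(field_name)
    if django.VERSION < (1, 2):
        return not field.rel.through
    return getattr(getattr(field.rel.through, "_meta", None),
                   "auto_created", False)

# m2m_through_is_auto_created(AutomaticModel, 'others')  -> True
# m2m_through_is_auto_created(ManualModel, 'others')     -> False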
Q:
How to efficiently get the k biggest elements of a list?
What's the most efficient, elegant and pythonic way of solving this problem?
Given a list (or set or whatever) of n elements, we want to get the k biggest ones. (You can assume k < n/2 without loss of generality, I guess.)
For example, if the list were:
l = [9,1,6,4,2,8,3,7,5]
n = 9, and let's say k = 3.
What's the most efficient algorithm for retrieving the 3 biggest ones?
In this case we should get [9,8,7], in no particular order.
Thanks!
Manuel
A:
Use nlargest from heapq module
from heapq import nlargest
lst = [9,1,6,4,2,8,3,7,5]
nlargest(3, lst) # Gives [9,8,7]
You can also give a key to nlargest in case you wanna change your criteria:
from heapq import nlargest
tags = [ ("python", 30), ("ruby", 25), ("c++", 50), ("lisp", 20) ]
nlargest(2, tags, key=lambda e:e[1]) # Gives [ ("c++", 50), ("python", 30) ]
A:
The simple, O(n log n) way is to sort the list then get the last k elements.
The proper way is to use a selection algorithm, which runs in O(n + k log k) time.
Also, heapq.nlargest takes O(n log k) time on average, which may or may not be good enough.
(If k = O(n), then all 3 algorithms have the same complexity (i.e. don't bother). If k = O(log n), then the selection algorithm as described in Wikipedia is O(n) and heapq.nlargest is O(n log log n), but double logarithm is "constant enough" for most practical n that it doesn't matter.)
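For reference, a rough sketch of the selection approach mentioned above (the function name is mine; this is an average-case O(n) quickselect, not a tuned implementation, and it returns the k largest unsorted):
import random

def k_largest(items, k):
    # Partially partition so the k largest end up in the tail, then return them.
    items = list(items)
    lo, hi = 0, len(items)
    target = len(items) - k
    while True:
        pivot = items[random.randrange(lo, hi)]
        less = [x for x in items[lo:hi] if x < pivot]
        equal = [x for x in items[lo:hi] if x == pivot]
        greater = [x for x in items[lo:hi] if x > pivot]
        items[lo:hi] = less + equal + greater
        if target < lo + len(less):
            hi = lo + len(less)          # boundary falls inside 'less': recurse there
        elif target < lo + len(less) + len(equal):
            return items[target:]        # boundary falls inside the pivot run: done
        else:
            lo = lo + len(less) + len(equal)

print k_largest([9,1,6,4,2,8,3,7,5], 3)   # [7, 8, 9] in some order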
A:
l = [9,1,6,4,2,8,3,7,5]
sorted(l)[-k:]
A:
You can use the heapq module.
>>> from heapq import heapify, nlargest
>>> l = [9,1,6,4,2,8,3,7,5]
>>> heapify(l)
>>> nlargest(3, l)
[9, 8, 7]
>>>
A:
sorted(l, reverse=True)[:k]
Q:
Venn Diagram up to 4 lists - outputting the intersections and unique sets
in my work I use a lot of Venn diagrams, and so far I've been relying on the web-based "Venny". This offers the nice option to export the various intersections (i.e., the elements belonging only to that specific intersection). Also, it does diagrams up to 4 lists.
Problem is, doing this with large lists (4K+ elements) and more than 3 sets is a chore (copy, paste, save...). Thus, I have decided to focus on generating the lists myself and use it just to plot.
This lengthy introduction leads to the crux of the matter. Given 3 or 4 lists which partially contain identical elements, how can I process them in Python to obtain the various sets (unique, common to 4, common to just first and second, etc...) as shown on the Venn diagram (3 list graphical example, 4 list graphical example)? It doesn't look too hard for 3 lists but for 4 it gets somewhat complex.
A:
Assuming you have python 2.6 or better:
>>> from itertools import combinations
>>>
>>> data = dict(
... list1 = set(list("alphabet")),
... list2 = set(list("fiddlesticks")),
... list3 = set(list("geography")),
... list4 = set(list("bovinespongiformencephalopathy")),
... )
>>>
>>> variations = {}
>>> for i in range(len(data)):
... for v in combinations(data.keys(),i+1):
... vsets = [ data[x] for x in v ]
... variations[tuple(sorted(v))] = reduce(lambda x,y: x.intersection(y), vsets)
...
>>> for k,v in sorted(variations.items(),key=lambda x: (len(x[0]),x[0])):
... print "%r\n\t%r" % (k,v)
...
('list1',)
set(['a', 'b', 'e', 'h', 'l', 'p', 't'])
('list2',)
set(['c', 'e', 'd', 'f', 'i', 'k', 'l', 's', 't'])
('list3',)
set(['a', 'e', 'g', 'h', 'o', 'p', 'r', 'y'])
('list4',)
set(['a', 'c', 'b', 'e', 'g', 'f', 'i', 'h', 'm', 'l', 'o', 'n', 'p', 's', 'r', 't', 'v', 'y'])
('list1', 'list2')
set(['e', 'l', 't'])
('list1', 'list3')
set(['a', 'h', 'e', 'p'])
('list1', 'list4')
set(['a', 'b', 'e', 'h', 'l', 'p', 't'])
('list2', 'list3')
set(['e'])
('list2', 'list4')
set(['c', 'e', 'f', 'i', 'l', 's', 't'])
('list3', 'list4')
set(['a', 'e', 'g', 'h', 'o', 'p', 'r', 'y'])
('list1', 'list2', 'list3')
set(['e'])
('list1', 'list2', 'list4')
set(['e', 'l', 't'])
('list1', 'list3', 'list4')
set(['a', 'h', 'e', 'p'])
('list2', 'list3', 'list4')
set(['e'])
('list1', 'list2', 'list3', 'list4')
set(['e'])
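If you also need the exclusive regions that Venny exports (elements belonging to exactly that combination of sets and to no other), a hedged follow-up sketch reusing the data dict above:
from itertools import combinations

names = sorted(data)
exclusive = {}
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        inside = reduce(lambda x, y: x & y, (data[n] for n in combo))
        outside = set().union(*[data[n] for n in names if n not in combo])
        exclusive[combo] = inside - outside

For the full four-way combination this equals the plain intersection; for smaller combinations it subtracts everything that appears in any list outside the combination.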
|
Venn Diagram up to 4 lists - outputting the intersections and unique sets
|
in my work I use a lot of Venn diagrams, and so far I've been relying on the web-based "Venny". This offers the nice option to export the various intersections (i.e., the elements belonging only to that specific intersection). Also, it does diagrams up to 4 lists.
Problem is, doing this with large lists (4K+ elements) and more than 3 sets is a chore (copy, paste, save...). Thus, I have decided to focus on generating the lists myself and use it just to plot.
This lengthy introduction leads to the crux of the matter. Given 3 or 4 lists which partially contain identical elements, how can I process them in Python to obtain the various sets (unique, common to 4, common to just first and second, etc...) as shown on the Venn diagram (3 list graphical example, 4 list graphical example)? It doesn't look too hard for 3 lists but for 4 it gets somewhat complex.
|
[
"Assuming you have python 2.6 or better:\n>>> from itertools import combinations\n>>>\n>>> data = dict(\n... list1 = set(list(\"alphabet\")),\n... list2 = set(list(\"fiddlesticks\")),\n... list3 = set(list(\"geography\")),\n... list4 = set(list(\"bovinespongiformencephalopathy\")),\n... )\n>>>\n>>> variations = {}\n>>> for i in range(len(data)):\n... for v in combinations(data.keys(),i+1):\n... vsets = [ data[x] for x in v ]\n... variations[tuple(sorted(v))] = reduce(lambda x,y: x.intersection(y), vsets)\n...\n>>> for k,v in sorted(variations.items(),key=lambda x: (len(x[0]),x[0])):\n... print \"%r\\n\\t%r\" % (k,v)\n...\n('list1',)\n set(['a', 'b', 'e', 'h', 'l', 'p', 't'])\n('list2',)\n set(['c', 'e', 'd', 'f', 'i', 'k', 'l', 's', 't'])\n('list3',)\n set(['a', 'e', 'g', 'h', 'o', 'p', 'r', 'y'])\n('list4',)\n set(['a', 'c', 'b', 'e', 'g', 'f', 'i', 'h', 'm', 'l', 'o', 'n', 'p', 's', 'r', 't', 'v', 'y'])\n('list1', 'list2')\n set(['e', 'l', 't'])\n('list1', 'list3')\n set(['a', 'h', 'e', 'p'])\n('list1', 'list4')\n set(['a', 'b', 'e', 'h', 'l', 'p', 't'])\n('list2', 'list3')\n set(['e'])\n('list2', 'list4')\n set(['c', 'e', 'f', 'i', 'l', 's', 't'])\n('list3', 'list4')\n set(['a', 'e', 'g', 'h', 'o', 'p', 'r', 'y'])\n('list1', 'list2', 'list3')\n set(['e'])\n('list1', 'list2', 'list4')\n set(['e', 'l', 't'])\n('list1', 'list3', 'list4')\n set(['a', 'h', 'e', 'p'])\n('list2', 'list3', 'list4')\n set(['e'])\n('list1', 'list2', 'list3', 'list4')\n set(['e'])\n\n"
] |
[
7
] |
[] |
[] |
[
"list",
"python",
"venn_diagram"
] |
stackoverflow_0002243690_list_python_venn_diagram.txt
|
Q:
Change dynamically the contents of a matplotlib plot
A while ago, I was comparing the output of two functions using python and matplotlib. The result was as good as it was simple, since plotting with matplotlib is quite easy: I just plotted two arrays with different markers. Piece of cake.
Now I find myself with the same problem, but now I have a lot of pair of curves to compare. I initially tried plotting everything with different colors and markers. This did not satisfy me since the ranges of each curve are not quite the same. In addition to this, I quickly ran out of colors and markers that were sufficiently different to identify (RGBCMYK, after that, custom colors resemble any of the previous ones).
I also tried subplotting each pair of curves, obtaining a window with many plots. Too crowded.
I tried one window per plot, too many windows.
So I was just wondering if there is any existing widget or if you have any suggestion (or a different idea) to accomplish this:
I want to see a pair of curves and then easily select the next one, with a slider, button, mouse scroll, or any other widget or event. When changing curves, the previous pair should disappear, and the legend and axes should update as well.
A:
Well I managed to do it with an event handler for mouse clicks. I will change it for something more useful, but I post my solution anyway.
import matplotlib.pyplot as plt
figure = plt.figure()
# plotting
plt.plot([1,2,3],[10,20,30],'bo-')
plt.grid()
plt.legend()
def on_press(event):
print 'you pressed', event.button, event.xdata, event.ydata
event.canvas.figure.clear()
# select new curves to plot, in this example [1,2,3] [0,0,0]
event.canvas.figure.gca().plot([1,2,3],[0,0,0], 'ro-')
event.canvas.figure.gca().grid()
event.canvas.figure.gca().legend()
event.canvas.draw()
figure.canvas.mpl_connect('button_press_event', on_press)
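A hedged variation on the same idea: cycle through your own list of curve pairs with the right-arrow key instead of mouse clicks (the pairs data here is purely illustrative):
import matplotlib.pyplot as plt

pairs = [([1,2,3], [10,20,30], [11,19,31]),
         ([1,2,3], [ 1,  4,  9], [ 2,  3, 10])]
state = {'i': 0}

def draw_pair(ax, i):
    x, y1, y2 = pairs[i]
    ax.clear()
    ax.plot(x, y1, 'bo-', label='curve A %d' % i)
    ax.plot(x, y2, 'ro-', label='curve B %d' % i)
    ax.grid()
    ax.legend()

def on_key(event):
    if event.key == 'right':
        state['i'] = (state['i'] + 1) % len(pairs)
        draw_pair(event.canvas.figure.gca(), state['i'])
        event.canvas.draw()

figure = plt.figure()
draw_pair(figure.gca(), 0)
figure.canvas.mpl_connect('key_press_event', on_key)
plt.show()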
A:
Sounds like you want to embed matplotlib in an application. There are some resources available for that:
user interface examples
Embedding in WX
A:
I really like using traits. If you follow the tutorial Writing a graphical application for scientific programming , you should be able to do what you want. The tutorial shows how to interact with a matplotlib graph using graphical user interface.
|
Change dynamically the contents of a matplotlib plot
|
A while ago, I was comparing the output of two functions using python and matplotlib. The result was as good as it was simple, since plotting with matplotlib is quite easy: I just plotted two arrays with different markers. Piece of cake.
Now I find myself with the same problem, but now I have a lot of pair of curves to compare. I initially tried plotting everything with different colors and markers. This did not satisfy me since the ranges of each curve are not quite the same. In addition to this, I quickly ran out of colors and markers that were sufficiently different to identify (RGBCMYK, after that, custom colors resemble any of the previous ones).
I also tried subplotting each pair of curves, obtaining a window with many plots. Too crowded.
I tried one window per plot, too many windows.
So I was just wondering if there is any existing widget or if you have any suggestion (or a different idea) to accomplish this:
I want to see a pair of curves and then easily select the next one, with a slider, button, mouse scroll, or any other widget or event. When changing curves, the previous pair should disappear, and the legend and axes should update as well.
|
[
"Well I managed to do it with an event handler for mouse clicks. I will change it for something more useful, but I post my solution anyway.\nimport matplotlib.pyplot as plt\n\nfigure = plt.figure()\n# plotting\nplt.plot([1,2,3],[10,20,30],'bo-')\nplt.grid()\nplt.legend()\n\ndef on_press(event):\n print 'you pressed', event.button, event.xdata, event.ydata\n event.canvas.figure.clear()\n # select new curves to plot, in this example [1,2,3] [0,0,0]\n event.canvas.figure.gca().plot([1,2,3],[0,0,0], 'ro-')\n event.canvas.figure.gca().grid()\n event.canvas.figure.gca().legend()\n event.canvas.draw()\n\n\nfigure.canvas.mpl_connect('button_press_event', on_press)\n\n",
"Sounds like you want to embed matplotlib in an application. There are some resources available for that:\n\nuser interface examples\nEmbedding in WX\n\n",
"I really like using traits. If you follow the tutorial Writing a graphical application for scientific programming , you should be able to do what you want. The tutorial shows how to interact with a matplotlib graph using graphical user interface.\n"
] |
[
9,
2,
2
] |
[] |
[] |
[
"matplotlib",
"plot",
"python"
] |
stackoverflow_0002050728_matplotlib_plot_python.txt
|
Q:
How to replace launchd scheduling with a Python program
The Mac OS X system startup program launchd enables job scheduling (similar to cron.) By creating a launchd agent, one can trigger programs through one of the following events:
an interval of time has elapsed
a certain calendar date has come
a file path has been modified
something has been placed in a certain directory (queue directory)
a volume has been mounted
I have previously relied on launchd to start a collection of Python scripts for automating an OS X system. However, since adding a new script also often requires the installation of a new launchd agent to start it, I would like to take launchd out of the equation. A Python program should wait and watch for events like those above to occur, then dispatch the appropriate routine.
Is there a Python module which is appropriate for detecting events like those above? Or, as a more general question, how can I replace launchd in this setting using Python (and possibly AppleScript via the AppScript bridge)? Sorry if the question is rather vague. Reading suggestions are also appreciated.
A:
For the filesystem-monitoring problem, perhaps you are looking for pyfsevents. According to this post,
FSEvents API notifies your application
when changes occur in the file system.
You can use file system events to
monitor directories for any changes,
such as the creation, modification, or
removal of contained files and
directories.
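If a dependency-free fallback is acceptable, a hedged sketch of a plain-Python dispatcher covering two of the trigger types (timed intervals and a queue directory) by simple polling; the paths and handlers are hypothetical:
import os, time

WATCH_DIR = '/tmp/queue'     # hypothetical queue directory
INTERVAL = 60                # seconds between timed jobs

def timed_job():
    print 'timed job fired'

def on_new_file(path):
    print 'new file:', path

seen = set(os.listdir(WATCH_DIR))
next_run = time.time() + INTERVAL
while True:
    if time.time() >= next_run:
        timed_job()
        next_run = time.time() + INTERVAL
    current = set(os.listdir(WATCH_DIR))
    for name in current - seen:
        on_new_file(os.path.join(WATCH_DIR, name))
    seen = current
    time.sleep(1)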
|
How to replace launchd scheduling with a Python program
|
The Mac OS X system startup program launchd enables job scheduling (similar to cron.) By creating a launchd agent, one can trigger programs through one of the following events:
an interval of time has elapsed
a certain calendar date has come
a file path has been modified
something has been placed in a certain directory (queue directory)
a volume has been mounted
I have previously relied on launchd to start a collection of Python scripts for automating an OS X system. However, since adding a new script also often requires the installation of a new launchd agent to start it, I would like to take launchd out of the equation. A Python program should wait and watch for events like those above to occur, then dispatch the appropriate routine.
Is there a Python module which is appropriate for detecting events like those above? Or, as a more general question, how can I replace launchd in this setting using Python (and possibly AppleScript via the AppScript bridge)? Sorry if the question is rather vague. Reading suggestions are also appreciated.
|
[
"For the filesystem-monitoring problem, perhaps you are looking for pyfsevents. According to this post,\n\nFSEvents API notifies your application\n when changes occur in the file system.\n You can use file system events to\n monitor directories for any changes,\n such as the creation, modification, or\n removal of contained files and\n directories.\n\n"
] |
[
1
] |
[] |
[] |
[
"launchd",
"macos",
"python"
] |
stackoverflow_0002244261_launchd_macos_python.txt
|
Q:
Python Virtualbox API
I have made a command-line interface for virtualbox so that virtualbox can be controlled from a remote machine. Now I am trying to implement the command-line interface using the python virtualbox api. For that I have downloaded the pyvb package (the python api documentation shows the functions that can be used for implementing this under the pyvb package), but when I give pyvb.startVM(self,"name of vm",type='gui'), it shows an error:
AttributeError: 'module' object has no attribute 'startVM'
A:
startVM is in the pyvb.vb.VB class. Also, it's not 'name of vm'; as the docs explain, startVM should be called with a pyvb.vm.vbVM as its first parameter, not a string.
|
Python Virtualbox API
|
I have made a command-line interface for virtualbox so that virtualbox can be controlled from a remote machine. Now I am trying to implement the command-line interface using the python virtualbox api. For that I have downloaded the pyvb package (the python api documentation shows the functions that can be used for implementing this under the pyvb package), but when I give pyvb.startVM(self,"name of vm",type='gui'), it shows an error:
AttributeError: 'module' object has no attribute 'startVM'
|
[
"startVM is in pyvb.vb.VB class. Also, it's not 'name of vm', as docs explain startVM should be called with pyvb.vm.vbVM as a first parameter and not a string.\n"
] |
[
3
] |
[] |
[] |
[
"api",
"python",
"virtualbox"
] |
stackoverflow_0002244368_api_python_virtualbox.txt
|
Q:
Comparing list item values to other items in other list in Python
I want to compare the values in one list to the values in a second list and return all those that are in the first list but not in the second i.e.
list1 = ['one','two','three','four','five']
list2 = ['one','two','four']
would return 'three' and 'five'.
I have only a little experience with python, so this may turn out to be a ridiculous and stupid way to attempt to solve it, but this is what I have done so far:
def unusedCategories(self):
unused = []
for category in self.catList:
if category != used in self.usedList:
unused.append(category)
return unused
However, this throws the error 'iteration over non-sequence', which I gather means that one or both 'lists' aren't actually lists (the raw output for both is in the same format as my first example).
A:
set(list1).difference(set(list2))
A:
Use sets to get the difference between the lists:
>>> list1 = ['one','two','three','four','five']
>>> list2 = ['one','two','four']
>>> set(list1) - set(list2)
set(['five', 'three'])
A:
with set.difference:
>>> list1 = ['one','two','three','four','five']
>>> list2 = ['one','two','four']
>>> set(list1).difference(list2)
{'five', 'three'}
you can skip conversion of list2 to set.
A:
You can do it with sets or a list comprehension:
unused = [i for i in list1 if i not in list2]
A:
All the answers here are correct. I would use list comprehension if the lists are short; sets will be more efficient. In exploring why your code doesn't work, I don't get the error. (It doesn't work, but that's a different issue).
>>> list1 = ['a','b','c']
>>> list2 = ['a','b','d']
>>> [c for c in list1 if not c in list2]
['c']
>>> set(list1).difference(set(list2))
set(['c'])
>>> L = list()
>>> for c in list1:
... if c != L in list2:
... L.append(c)
...
>>> L
[]
The problem is that the if statement makes no sense.
Hope this helps.
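Applied back to the original method, a hedged rewrite (assuming self.catList and self.usedList are the sequences from the question) would be:
def unusedCategories(self):
    return sorted(set(self.catList) - set(self.usedList))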
|
Comparing list item values to other items in other list in Python
|
I want to compare the values in one list to the values in a second list and return all those that are in the first list but not in the second i.e.
list1 = ['one','two','three','four','five']
list2 = ['one','two','four']
would return 'three' and 'five'.
I have only a little experience with python, so this may turn out to be a ridiculous and stupid way to attempt to solve it, but this is what I have done so far:
def unusedCategories(self):
unused = []
for category in self.catList:
if category != used in self.usedList:
unused.append(category)
return unused
However, this throws the error 'iteration over non-sequence', which I gather means that one or both 'lists' aren't actually lists (the raw output for both is in the same format as my first example).
|
[
"set(list1).difference(set(list2))\n",
"Use sets to get the difference between the lists:\n>>> list1 = ['one','two','three','four','five']\n>>> list2 = ['one','two','four']\n>>> set(list1) - set(list2)\nset(['five', 'three'])\n\n",
"with set.difference:\n>>> list1 = ['one','two','three','four','five']\n>>> list2 = ['one','two','four']\n>>> set(list1).difference(list2)\n{'five', 'three'}\n\nyou can skip conversion of list2 to set.\n",
"You can do it with sets or a list comprehension:\nunused = [i for i in list1 if i not in list2]\n\n",
"All the answers here are correct. I would use list comprehension if the lists are short; sets will be more efficient. In exploring why your code doesn't work, I don't get the error. (It doesn't work, but that's a different issue).\n>>> list1 = ['a','b','c']\n>>> list2 = ['a','b','d']\n>>> [c for c in list1 if not c in list2]\n['c']\n>>> set(list1).difference(set(list2))\nset(['c'])\n>>> L = list()\n>>> for c in list1:\n... if c != L in list2:\n... L.append(c)\n... \n>>> L\n[]\n\nThe problem is that the if statement makes no sense.\nHope this helps.\n"
] |
[
8,
6,
1,
0,
0
] |
[] |
[] |
[
"plone",
"python",
"zope"
] |
stackoverflow_0002244443_plone_python_zope.txt
|
Q:
Using Djangos ImageField to upload an image, rename it to a random filename and create thumbnail
Hey guys, I've written a simple upload method for my pictures:
class Picture(models.Model):
path = models.CharField(max_length=200)
filename = models.CharField(max_length=200)
car = models.ForeignKey('Car')
thumb_path = models.CharField(max_length=200)
created_on = models.DateField(auto_now_add=True)
updated_on = models.DateField(auto_now=True)
def save(self):
if not self.id:
thumb_size = 128, 128
thumb_path = "assests/uploads/thumb"+self.filename
t = Image.open(self.path)
t.thumbnail(thumb_size,Image.ANTIALIAS)
t.save(thumb_path, "JPEG")
self.thumb_path = thumb_path
super(Picture, self).save()
def delete(self):
os.unlink(self.thumb_path)
os.unlink(self.path)
super(Picture, self).delete()
As you can see, this isn't the best method. I want to move on to ImageField() to do most of my work, but I still want the flexibility to create a thumbnail and a random filename.
Would I need to create another model for PictureThumbnail? I don't really want to use any 3rd-party extensions.
How could I use ImageField to make this work? All the images are going to be uploaded to /assests/uploads/
Thanks
A:
You could use an adapted version of:
http://www.djangosnippets.org/snippets/1100/
Or depending on what exactly you need to do you can consider a template filter based approach something like this:
http://www.djangosnippets.org/snippets/1887/
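If you do want to stay free of third-party apps, a hedged, untested sketch of the ImageField route (the upload path and helper names are illustrative):
import os, uuid
import Image                      # PIL, as in the question's code
from django.db import models

def random_name(instance, filename):
    ext = os.path.splitext(filename)[1]
    return 'assests/uploads/%s%s' % (uuid.uuid4().hex, ext)

class Picture(models.Model):
    image = models.ImageField(upload_to=random_name)

    def save(self, *args, **kwargs):
        super(Picture, self).save(*args, **kwargs)   # the file is on disk now
        base = os.path.basename(self.image.path)
        thumb_path = os.path.join(os.path.dirname(self.image.path),
                                  'thumb' + base)
        t = Image.open(self.image.path)
        t.thumbnail((128, 128), Image.ANTIALIAS)
        t.save(thumb_path, 'JPEG')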
|
Using Djangos ImageField to upload an image, rename it to a random filename and create thumbnail
|
Hey guys, I've written a simple upload method for my pictures:
class Picture(models.Model):
path = models.CharField(max_length=200)
filename = models.CharField(max_length=200)
car = models.ForeignKey('Car')
thumb_path = models.CharField(max_length=200)
created_on = models.DateField(auto_now_add=True)
updated_on = models.DateField(auto_now=True)
def save(self):
if not self.id:
thumb_size = 128, 128
thumb_path = "assests/uploads/thumb"+self.filename
t = Image.open(self.path)
t.thumbnail(thumb_size,Image.ANTIALIAS)
t.save(thumb_path, "JPEG")
self.thumb_path = thumb_path
super(Picture, self).save()
def delete(self):
os.unlink(self.thumb_path)
os.unlink(self.path)
super(Picture, self).delete()
As you can see, this isn't the best method. I want to move on to ImageField() to do most of my work, but I still want the flexibility to create a thumbnail and a random filename.
Would I need to create another model for PictureThumbnail? I don't really want to use any 3rd-party extensions.
How could I use ImageField to make this work? All the images are going to be uploaded to /assests/uploads/
Thanks
|
[
"You could use an adapted version of:\nhttp://www.djangosnippets.org/snippets/1100/\nOr depending on what exactly you need to do you can consider a template filter based approach something like this:\nhttp://www.djangosnippets.org/snippets/1887/\n"
] |
[
1
] |
[] |
[] |
[
"django",
"file",
"python",
"upload"
] |
stackoverflow_0002244537_django_file_python_upload.txt
|
Q:
getting specific xml nodes attributes
This might be a newbie question :) but it's irritating me since I'm new to XML. I have the following xml file:
<assetsMain>
<assetParent type='character' shortName='char'>
<asset>
pub
</asset>
<asset>
car
</asset>
</assetParent>
<assetParent type='par' shortName='pr'>
<asset>
camera
</asset>
<asset>
rig
</asset>
</assetParent>
</assetsMain>
Is it possible to retrieve all <assetParent> nodes along with their attributes and their children's text? For example, to have a result like the following:
[ [['character','char'],['pub','car']]
[['par','pr'],['camera','rig']]
]
By the way, I use DOM and Python 2.6
Thanks in advance.
A:
An answer using lxml.etree. Xpath would probably be reusable in another capable library:
>>> from lxml import etree
>>> data = """<assetsMain>
... <assetParent type='character' shortName='char'>
... <asset>pub</asset>
... <asset>car</asset>
... </assetParent>
... <assetParent type='par' shortName='pr'>
... <asset>camera</asset>
... <asset>rig</asset>
... </assetParent>
... </assetsMain>
... """
>>> doc = etree.XML(data)
>>> for aP in doc.xpath('//assetParent'):
... parent = aP.attrib['type']
... for a in aP.xpath('./asset/text()'):
... print parent, a.strip()
...
character pub
character car
par camera
par rig
A:
This code gives the output you want:
from xml.dom.minidom import parseString
document = """\
<assetsMain>
<assetParent type='character' shortName='char'>
<asset>
pub
</asset>
<asset>
car
</asset>
</assetParent>
<assetParent type='par' shortName='pr'>
<asset>
camera
</asset>
<asset>
rig
</asset>
</assetParent>
</assetsMain>
"""
def getNestedList():
dom = parseString(document)
li = []
for assetParent in dom.childNodes[0].getElementsByTagName("assetParent"):
# read type and shortName
a = [assetParent.getAttribute("type"), assetParent.getAttribute("shortName")]
# read content of asset nodes
b = [asset.childNodes[0].data.strip() for asset in assetParent.getElementsByTagName("asset")]
# put the lists together in a list and add them to the list (!)
li.append([a,b])
return li
if __name__=="__main__":
print getNestedList()
Note that we can select which child nodes we want to read with getElementsByTagName. The attributes are read with getAttribute on a node. Text content inside a node is read through the property data (the text itself is a child node as well). If you are reading text inside a node, you can check that it really is text with:
if node.nodeType == node.TEXT_NODE:
Also note that there is no checking or error handling here. Nodes lacking child nodes will raise an IndexError.
Although, a nested list of three levels makes me want to suggest you use dictionaries instead.
Output:
[[[u'character', u'char'], [u'pub', u'car']], [[u'par', u'pr'], [u'camera', u'rig']]]
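For completeness, a hedged stdlib variant using xml.etree.ElementTree (bundled with Python 2.6), reusing the same document string and producing the nested-list shape from the question:
from xml.etree import ElementTree as ET

root = ET.fromstring(document)
result = [[[ap.get('type'), ap.get('shortName')],
           [a.text.strip() for a in ap.findall('asset')]]
          for ap in root.findall('assetParent')]
print result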
|
getting specific xml nodes attributes
|
This might be a newbie question :) but it's irritating me since I'm new to XML. I have the following xml file:
<assetsMain>
<assetParent type='character' shortName='char'>
<asset>
pub
</asset>
<asset>
car
</asset>
</assetParent>
<assetParent type='par' shortName='pr'>
<asset>
camera
</asset>
<asset>
rig
</asset>
</assetParent>
</assetsMain>
Is it possible to retrieve all <assetParent> nodes along with their attributes and their children's text? For example, to have a result like the following:
[ [['character','char'],['pub','car']]
[['par','pr'],['camera','rig']]
]
By the way, I use DOM and Python 2.6
Thanks in advance.
|
[
"An answer using lxml.etree. Xpath would probably be reusable in another capable library:\n>>> from lxml import etree\n>>> data = \"\"\"<assetsMain>\n... <assetParent type='character' shortName='char'>\n... <asset>pub</asset>\n... <asset>car</asset>\n... </assetParent>\n... <assetParent type='par' shortName='pr'>\n... <asset>camera</asset>\n... <asset>rig</asset>\n... </assetParent>\n... </assetsMain>\n... \"\"\"\n>>> doc = etree.XML(data)\n>>> for aP in doc.xpath('//assetParent'):\n... parent = aP.attrib['type']\n... for a in aP.xpath('./asset/text()'):\n... print parent, a.strip()\n...\ncharacter pub\ncharacter car\npar camera\npar rig\n\n",
"This code gives the output you want:\nfrom xml.dom.minidom import parseString\n\ndocument = \"\"\"\\\n<assetsMain>\n <assetParent type='character' shortName='char'>\n <asset>\n pub\n </asset>\n <asset>\n car\n </asset>\n </assetParent>\n <assetParent type='par' shortName='pr'>\n <asset>\n camera\n </asset>\n <asset>\n rig\n </asset>\n </assetParent>\n</assetsMain>\n\"\"\"\n\ndef getNestedList():\n dom = parseString(document)\n li = []\n for assetParent in dom.childNodes[0].getElementsByTagName(\"assetParent\"):\n # read type and shortName\n a = [assetParent.getAttribute(\"type\"), assetParent.getAttribute(\"shortName\")]\n # read content of asset nodes\n b = [asset.childNodes[0].data.strip() for asset in assetParent.getElementsByTagName(\"asset\")]\n # put the lists together in a list and add them to the list (!)\n li.append([a,b])\n return li\n\nif __name__==\"__main__\":\n print getNestedList()\n\nNote that we can select which child nodes we want to read with getElementsByTagName. The attributes are read with getAttribute on a node. Text content inside a node is read through the property data (the text itself is a child node as well). If you are reading text inside a node, you can check so that it really is text with:\nif node.nodeType == node.TEXT_NODE:\n\nAlso note that there is no checking or error handling here. Nodes lacking child nodes will raise an IndexError.\nAlthough, a nested list of three levels make me want to suggest you use dictionaries instead.\nOutput:\n[[[u'character', u'char'], [u'pub', u'car']], [[u'par', u'pr'], [u'camera', u'rig']]]\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"python",
"xml"
] |
stackoverflow_0002244629_python_xml.txt
|
Q:
Threads in twisted... how to use them properly?
I need to write a simple app that runs two threads:
- thread 1: runs at timed periods, let's say every 1 minute
- thread 2: just a 'normal' while True loop that does 'stuff'
If not for the requirement to run at a timed interval I would not have looked at twisted at all, but a simple sleep(60) is not good enough, and a construction like:
l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.run()
Looked really simple to achieve what I wanted there.
Now, how do I 'properly' add another thread?
I see two options here:
Use the threading library and run two 'python threads', one executing my while loop and another running reactor.run(). But Google seems to object to this approach and suggests using twisted threading.
Use twisted threading. That's what I've tried, but somehow this looks a bit clumsy to me.
Here's what I came up with:
def timed_thread():
print 'i will be called every 1 minute'
return
def normal_thread():
print 'this is a normal thread'
time.sleep(30)
return
l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.callInThread(normal_thread)
reactor.run()
That seems to work, but! I can't stop the app. If I press ^C it wouldn't do anything (without 'callInThread' it just stops as you'd expect it to). ^Z bombs out to shell, and if I then do 'kill %1' it seems to kill the process (shell reports that), but the 'normal' thread keeps on running. kill PID wouldn't get rid of it, and the only cure is kill -9. Really strange.
So. What am I doing wrong? Is this a correct approach to implementing two threads in twisted? Should I not bother with twisted? What other 'standard' alternatives are there to implement timed calls? (By 'standard' I mean I can easy_install or yum install them; I don't want to start downloading and using some random scripts from random web pages.)
A:
You didn't explain why you actually need threads here. If you had, I might have been able to explain why you don't need them. ;)
That aside, I can confirm that your basic understanding of things is correct. One possible misunderstanding I can clear up, though, is the notion that "python threads" and "Twisted threads" are at all different from each other. They're not. Python provides a threading library. All of Twisted's thread APIs are implemented in terms of Python's threading library. Only the API is different.
As far as shutdown goes, you have two options.
Start your run-forever thread using Python's threading APIs directly and make the thread a daemon. Your process can exit even while daemon threads are still running. A possible problem with this solution is that some versions of Python have issues with daemon threads that will lead to a crash at shutdown time.
Create your thread using either Twisted's APIs or the stdlib threading APIs but also add a Twisted shutdown hook using reactor.addSystemEventTrigger('before', 'shutdown', f). In that hook, communicate with the work thread and tell it to shut down. For example, you could share a threading.Event between the Twisted thread and your work thread and have the hook set it. The work thread can periodically check to see if it has been set and exit when it notices that it has been. Aside from not crashing, this gives another advantage over daemon threads - it will let you run some cleanup or finalization code in your work thread before the process exits.
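A minimal sketch of that second option, reusing the function names from the question:
import threading, time
from twisted.internet import reactor, task

stop_flag = threading.Event()

def timed_thread():
    print 'i will be called every 1 minute'

def normal_thread():
    while not stop_flag.is_set():
        print 'this is a normal thread'
        time.sleep(1)        # short sleeps keep shutdown responsive

task.LoopingCall(timed_thread).start(60.0)
reactor.callInThread(normal_thread)
reactor.addSystemEventTrigger('before', 'shutdown', stop_flag.set)
reactor.run()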
A:
Assuming that your main is relatively non-blocking:
import random
from twisted.internet import task
class MyProcess:
def __init__(self):
self.stats = []
self.lp = None
def myloopingCall(self):
print "I have %s stats" % len(self.stats)
def myMainFunction(self,reactor):
self.stats.append(random.random())
reactor.callLater(0,self.myMainFunction,reactor)
def start(self,reactor):
self.lp = task.LoopingCall(self.myloopingCall)
self.lp.start(2)
reactor.callLater(0,self.myMainFunction,reactor)
def stop(self):
if self.lp is not None:
self.lp.stop()
print "I'm done"
if __name__ == '__main__':
myproc = MyProcess()
from twisted.internet import reactor
reactor.callWhenRunning(myproc.start,reactor)
reactor.addSystemEventTrigger('during','shutdown',myproc.stop)
reactor.callLater(10,reactor.stop)
reactor.run()
$ python bleh.py
I have 0 stats
I have 33375 stats
I have 66786 stats
I have 100254 stats
I have 133625 stats
I'm done
|
Threads in twisted... how to use them properly?
|
I need to write a simple app that runs two threads:
- thread 1: runs at timed periods, let's say every 1 minute
- thread 2: just a 'normal' while True loop that does 'stuff'
If not for the requirement to run at a timed interval I would not have looked at twisted at all, but a simple sleep(60) is not good enough, and a construction like:
l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.run()
Looked really simple to achieve what I wanted there.
Now, how do I 'properly' add another thread?
I see two options here:
Use the threading library and run two 'python threads', one executing my while loop and another running reactor.run(). But Google seems to object to this approach and suggests using twisted threading.
Use twisted threading. That's what I've tried, but somehow this looks a bit clumsy to me.
Here's what I came up with:
def timed_thread():
print 'i will be called every 1 minute'
return
def normal_thread():
print 'this is a normal thread'
time.sleep(30)
return
l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.callInThread(normal_thread)
reactor.run()
That seems to work, but! I can't stop the app. If I press ^C it wouldn't do anything (without 'callInThread' it just stops as you'd expect it to). ^Z bombs out to shell, and if I then do 'kill %1' it seems to kill the process (shell reports that), but the 'normal' thread keeps on running. kill PID wouldn't get rid of it, and the only cure is kill -9. Really strange.
So. What am I doing wrong? Is this a correct approach to implementing two threads in twisted? Should I not bother with twisted? What other 'standard' alternatives are there to implement timed calls? (By 'standard' I mean I can easy_install or yum install them; I don't want to start downloading and using some random scripts from random web pages.)
|
[
"You didn't explain why you actually need threads here. If you had, I might have been able to explain why you don't need them. ;)\nThat aside, I can confirm that your basic understanding of things is correct. One possible misunderstanding I can clear up, though, is the notion that \"python threads\" and \"Twisted threads\" are at all different from each other. They're not. Python provides a threading library. All of Twisted's thread APIs are implemented in terms of Python's threading library. Only the API is different.\nAs far as shutdown goes, you have two options.\n\nStart your run-forever thread using Python's threading APIs directly and make the thread a daemon. Your process can exit even while daemon threads are still running. A possible problem with this solution is that some versions of Python have issues with daemon threads that will lead to a crash at shutdown time.\nCreate your thread using either Twisted's APIs or the stdlib threading APIs but also add a Twisted shutdown hook using reactor.addSystemEventTrigger('before', 'shutdown', f). In that hook, communicate with the work thread and tell it to shut down. For example, you could share a threading.Event between the Twisted thread and your work thread and have the hook set it. The work thread can periodically check to see if it has been set and exit when it notices that it has been. Aside from not crashing, this gives another advantage over daemon threads - it will let you run some cleanup or finalization code in your work thread before the process exits.\n\n",
"Assuming that your main is relatively non-blocking:\nimport random\nfrom twisted.internet import task\n\nclass MyProcess:\n def __init__(self):\n self.stats = []\n self.lp = None\n def myloopingCall(self):\n print \"I have %s stats\" % len(self.stats)\n def myMainFunction(self,reactor):\n self.stats.append(random.random())\n reactor.callLater(0,self.myMainFunction,reactor)\n def start(self,reactor):\n self.lp = task.LoopingCall(self.myloopingCall)\n self.lp.start(2)\n reactor.callLater(0,self.myMainFunction,reactor)\n def stop(self):\n if self.lp is not None:\n self.lp.stop()\n print \"I'm done\"\n\nif __name__ == '__main__':\n myproc = MyProcess()\n from twisted.internet import reactor\n reactor.callWhenRunning(myproc.start,reactor)\n reactor.addSystemEventTrigger('during','shutdown',myproc.stop)\n reactor.callLater(10,reactor.stop)\n reactor.run()\n\n\n$ python bleh.py\nI have 0 stats\nI have 33375 stats\nI have 66786 stats\nI have 100254 stats\nI have 133625 stats\nI'm done\n\n"
] |
[
5,
2
] |
[] |
[] |
[
"multithreading",
"python",
"timedelay",
"twisted"
] |
stackoverflow_0002243266_multithreading_python_timedelay_twisted.txt
|
Q:
How to extract unique values from nested dictionary with Python?
I'd like to make a function that outputs a list of all the values that are in a dictionary. The list must not contain any duplicate items, and it also has to be in alphabetical order.
I'm kind of new to Python; I can't get any further than printing all the values of the dictionary with the iteritems() function.
The dictionary is:
critics={'Lisa Rose': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.5,
'Just My Luck': 3.0, 'Superman Returns': 3.5, 'You, Me and Dupree': 2.5,
'The Night Listener': 3.0},
'Gene Seymour': {'Lady in the Water': 3.0, 'Snakes on a Plane': 3.5,
'Just My Luck': 1.5, 'Superman Returns': 5.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 3.5},
'Michael Phillips': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.0,
'Superman Returns': 3.5, 'The Night Listener': 4.0},
'Claudia Puig': {'Snakes on a Plane': 3.5, 'Just My Luck': 3.0,
'The Night Listener': 4.5, 'Superman Returns': 4.0,
'You, Me and Dupree': 2.5},
'Mick LaSalle': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'Just My Luck': 2.0, 'Superman Returns': 3.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 2.0},
'Jack Matthews': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'The Night Listener': 3.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 3.5},
'Toby': {'Snakes on a Plane':4.5,'You, Me and Dupree':1.0,'Superman Returns':4.0}}
So I want to print a list of the movies that have been rated.
Like:
Just My Luck;
Lady in the Water;
Snakes on a Plane;
Superman Returns;
You, me and Dupree;
.
.
.
etcetera..
Can anybody help me out?
A:
the simplest way would be:
>>> d = {1: 'sadf', 2: 'sadf', 3: 'asdf'}
>>> sorted(set(d.itervalues()))
['asdf', 'sadf']
print it as you like.
For your update question answer would be:
>>> films = set()
>>> _ = [films.update(dic) for dic in critics.itervalues()]
>>> sorted(films)
['Just My Luck', 'Lady in the Water', 'Snakes on a Plane', 'Superman Returns', 'The Night Listener', 'You, Me and Dupree']
A:
Another solution:
>>> reduce(lambda x,y: set(x) | set(y),[ y.keys() for y in critics.values() ])
set(['Lady in the Water', 'Snakes on a Plane', 'You, Me and Dupree', 'Just My Luck', 'Superman Returns', 'The Night Listener'])
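Another hedged variant flattens the inner dictionaries with itertools.chain (iterating a dict yields its keys, i.e. the movie titles):
>>> from itertools import chain
>>> sorted(set(chain.from_iterable(critics.itervalues())))
['Just My Luck', 'Lady in the Water', 'Snakes on a Plane', 'Superman Returns', 'The Night Listener', 'You, Me and Dupree']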
|
How to extract unique values from nested dictionary with Python?
|
I'd like to make a function that outputs a list of all the values that are in a dictionary. The list must not contain any duplicate items, and it also has to be in alphabetical order.
I'm kind of new to Python; I can't get any further than printing all the values of the dictionary with the iteritems() function.
The dictionary is:
critics={'Lisa Rose': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.5,
'Just My Luck': 3.0, 'Superman Returns': 3.5, 'You, Me and Dupree': 2.5,
'The Night Listener': 3.0},
'Gene Seymour': {'Lady in the Water': 3.0, 'Snakes on a Plane': 3.5,
'Just My Luck': 1.5, 'Superman Returns': 5.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 3.5},
'Michael Phillips': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.0,
'Superman Returns': 3.5, 'The Night Listener': 4.0},
'Claudia Puig': {'Snakes on a Plane': 3.5, 'Just My Luck': 3.0,
'The Night Listener': 4.5, 'Superman Returns': 4.0,
'You, Me and Dupree': 2.5},
'Mick LaSalle': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'Just My Luck': 2.0, 'Superman Returns': 3.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 2.0},
'Jack Matthews': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'The Night Listener': 3.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 3.5},
'Toby': {'Snakes on a Plane':4.5,'You, Me and Dupree':1.0,'Superman Returns':4.0}}
So I want to print a list of the movies that have been rated.
Like:
Just My Luck;
Lady in the Water;
Snakes on a Plane;
Superman Returns;
You, me and Dupree;
.
.
.
etcetera..
Can anybody help me out?
|
[
"the simplest way would be:\n>>> d = {1: 'sadf', 2: 'sadf', 3: 'asdf'}\n>>> sorted(set(d.itervalues()))\n['asdf', 'sadf']\n\nprint it as you like.\nFor your update question answer would be:\n>>> films = set()\n>>> _ = [films.update(dic) for dic in critics.itervalues()]\n>>> sorted(films)\n['Just My Luck', 'Lady in the Water', 'Snakes on a Plane', 'Superman Returns', 'The Night Listener', 'You, Me and Dupree']\n\n",
"Another solution:\n>>> reduce(lambda x,y: set(x) | set(y),[ y.keys() for y in critics.values() ])\nset(['Lady in the Water', 'Snakes on a Plane', 'You, Me and Dupree', 'Just My Luck', 'Superman Returns', 'The Night Listener'])\n\n"
] |
[
4,
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0002244795_dictionary_python.txt
|
Q:
Spawning WSGI example (practical approach to WSGI)
I'm trying to understand how WSGI works. I know I could read the specs, but I'd still like to know how to create a Spawning application - a complete "hello world".
Could someone show me an example?
With everything: file naming, creating the module, running it. Each and every step. Thanks.
(NB: while spawning seems a great piece of software, it has a stupid name: I cannot find anything successfully on the web on the matter, because everything related to "spawning" also relates to "multithreading" or "IPC").
A:
From what I can see in the documentation, Spawning just runs stock WSGI apps, which means that you just write a WSGI script and then invoke Spawning against it:
spawn helloworld.simple_app
spawn helloworld.simple_app middleware.Upperware
As always, make sure you have installed any modules it depends on, such as paste.deploy.
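For a complete hello world, the module itself can be tiny; a hedged sketch (helloworld.py is a hypothetical filename, and simple_app is just the standard WSGI callable shape):
# helloworld.py -- the app that 'spawn helloworld.simple_app' would load
def simple_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello, world!\n']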
|
Spawning WSGI example (practical approach to WSGI)
|
I'm trying to understand how WSGI works. I know I could read the specs, but I'd still like to know how to create a Spawning application - a complete "hello world".
Could someone show me an example?
With everything: file naming, creating the module, running it. Each and every step. Thanks.
(NB: while spawning seems a great piece of software, it has a stupid name: I cannot find anything successfully on the web on the matter, because everything related to "spawning" also relates to "multithreading" or "IPC").
|
[
"From what I can see in the documentation, Spawning just runs stock WSGI apps, which means that you just write a WSGI script and then invoke Spawning against it:\nspawn helloworld.simple_app\nspawn helloworld.simple_app middleware.Upperware\n\nAs always, make sure you have installed any modules it depends on, such as paste.deploy.\n"
] |
[
3
] |
[] |
[] |
[
"python",
"wsgi",
"wsgiserver"
] |
stackoverflow_0002244897_python_wsgi_wsgiserver.txt
|
Q:
How to count both sides of many-to-many relationship in Google App Engine
Consider a GAE (python) app that lets users comment on songs. The expected number of users is 1,000,000+. The expected number of songs is 5,000.
The app must be able to:
Give the number of songs a user has commented on
Give the number of users who have commented on a song
Counter management must be transactional so that they always reflect the underlying data.
It seems GAE apps must keep these types of counts calculated at all times since querying for them at request time would be inefficient.
My Data Model
class Song(BaseModel):
name = db.StringProperty()
# Number of users commenting on the song
user_count = db.IntegerProperty('user count', default=0, required=True)
date_added = db.DateTimeProperty('date added', False, True)
date_updated = db.DateTimeProperty('date updated', True, False)
class User(BaseModel):
email = db.StringProperty()
# Number of songs commented on by the user
song_count = db.IntegerProperty('song count', default=0, required=True)
date_added = db.DateTimeProperty('date added', False, True)
date_updated = db.DateTimeProperty('date updated', True, False)
class SongUser(BaseModel):
# Will be child of User
song = db.ReferenceProperty(Song, required=True, collection_name='songs')
comment = db.StringProperty('comment', required=True)
date_added = db.DateTimeProperty('date added', False, True)
date_updated = db.DateTimeProperty('date updated', True, False)
Code
This handles the user's song count transactionally but not the song's user count.
s = Song(name='Hey Jude')
s.put()
u = User(email='me@example.com')
u.put()
def add_mapping(song_key, song_comment, user_key):
u = User.get(user_key)
    su = SongUser(parent=u, song=song_key, comment=song_comment)
u.song_count += 1
u.put()
su.put()
# Transactionally add mapping and increase user's song count
db.run_in_transaction(add_mapping, s.key(), 'Awesome', u.key())
# Increase song's user count (non-transactional)
s.user_count += 1
s.put()
The question is: How can I manage both counters transactionally?
Based on my understanding this would be impossible since User, Song, and SongUser would have to be a part of the same entity group. They can't be in one entity group because then all my data would be in one group and it could not be distributed by user.
A:
You really shouldn't have to worry about handling the user's count of songs on which they have commented inside a transaction because it seems unlikely that a User would be able to comment on more than one song at a time, right?
Now, it is definitely the case that many users could be commenting on the same song at one time, so that is where you have to worry about making sure that the data isn't made invalid by a race condition.
However, if you keep the count of the number of users who have commented on a song inside the Song entity, and lock the entity with a transaction, you are going to get very high contention for that entity, and datastore timeouts will make your application have lots of problems.
The answer to this problem is Sharded Counters.
In order to make sure that you can create a new SongUser entity and update the related Song's sharded counter, you should consider having the SongUser entity have the related Song as a parent. That will put them in the same entity group and you can both create the SongUser and update the sharded counter in the same transaction. The SongUser's relationship to the User who created it can be held in a ReferenceProperty.
Regarding your concern about the two updates (the transactional one and the User update) not both succeeding, that is always a possibility, but given that either update can fail, you will need proper exception handling to ensure that both succeed. That's an important point: the in-transaction updates are not guaranteed to succeed. You may get a TransactionFailedError exception if the transaction cannot complete for any reason.
So, if your transaction completes without raising an exception, run the update to User in a transaction. That will get you automatic retries of the update to User, should some error occur. Unless there's something about possible contention on the User entity that I don't understand, the possibility that it will not eventually succeed is surpassingly small. If that is an unacceptable risk, then I don't think that AppEngine has a perfect solution to this problem for you.
First ask yourself: is it really that bad if the count of songs that someone has commented on is off by one? Is this as critical as updating a bank account balance or completing a stock sale?
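To make the sharded-counter suggestion concrete, here is a hedged sketch adapted from the pattern in the App Engine docs (model and function names are illustrative, untested):
import random
from google.appengine.ext import db

NUM_SHARDS = 20

class SongUserCountShard(db.Model):
    song_key = db.StringProperty(required=True)   # str(song.key())
    count = db.IntegerProperty(default=0, required=True)

def increment_user_count(song_key):
    def txn():
        index = random.randint(0, NUM_SHARDS - 1)
        shard_name = '%s-%d' % (song_key, index)
        shard = SongUserCountShard.get_by_key_name(shard_name)
        if shard is None:
            shard = SongUserCountShard(key_name=shard_name, song_key=song_key)
        shard.count += 1
        shard.put()
    db.run_in_transaction(txn)

def get_user_count(song_key):
    shards = SongUserCountShard.all().filter('song_key =', song_key)
    return sum(s.count for s in shards)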
|
How to count both sides of many-to-many relationship in Google App Engine
|
Consider a GAE (python) app that lets users comment on songs. The expected number of users is 1,000,000+. The expected number of songs is 5,000.
The app must be able to:
Give the number of songs a user has commented on
Give the number of users who have commented on a song
Counter management must be transactional so that they always reflect the underlying data.
It seems GAE apps must keep these types of counts calculated at all times since querying for them at request time would be inefficient.
My Data Model
class Song(BaseModel):
name = db.StringProperty()
# Number of users commenting on the song
user_count = db.IntegerProperty('user count', default=0, required=True)
date_added = db.DateTimeProperty('date added', False, True)
date_updated = db.DateTimeProperty('date updated', True, False)
class User(BaseModel):
email = db.StringProperty()
# Number of songs commented on by the user
song_count = db.IntegerProperty('song count', default=0, required=True)
date_added = db.DateTimeProperty('date added', False, True)
date_updated = db.DateTimeProperty('date updated', True, False)
class SongUser(BaseModel):
# Will be child of User
song = db.ReferenceProperty(Song, required=True, collection_name='songs')
comment = db.StringProperty('comment', required=True)
date_added = db.DateTimeProperty('date added', False, True)
date_updated = db.DateTimeProperty('date updated', True, False)
Code
This handles the user's song count transactionally but not the song's user count.
s = Song(name='Hey Jude')
s.put()
u = User(email='me@example.com')
u.put()
def add_mapping(song_key, song_comment, user_key):
u = User.get(user_key)
    su = SongUser(parent=u, song=song_key, comment=song_comment)
u.song_count += 1
u.put()
su.put()
# Transactionally add mapping and increase user's song count
db.run_in_transaction(add_mapping, s.key(), 'Awesome', u.key())
# Increase song's user count (non-transactional)
s.user_count += 1
s.put()
The question is: How can I manage both counters transactionally?
Based on my understanding this would be impossible since User, Song, and SongUser would have to be a part of the same entity group. They can't be in one entity group because then all my data would be in one group and it could not be distributed by user.
|
[
"You really shouldn't have to worry about handling the user's count of songs on which they have commented inside a transaction because it seems unlikely that a User would be able to comment on more than one song at a time, right?\nNow, it is definitely the case that many users could be commenting on the same song at one time, so that is where you have to worry about making sure that the data isn't made invalid by a race condition.\nHowever, if you keep the count of the number of users who have commented on a song inside the Song entity, and lock the entity with a transaction, you are going to get very high contention for that entity and datastore timeouts will make you application have lots of problems.\nThis answer for this problem is Sharded Counters.\nIn order to make sure that you can create a new SongUser entity and update the related Song's sharded counter, you should consider having the SongUser entity have the related Song as a parent. That will put them in the same entity group and you can both create the SongUser and updated the sharded counter in the same transaction. The SongUser's relationship to the User who created it can be held in a ReferenceProperty.\nRegarding your concern about the two updates (the transactional one and the User update) not both succeeding, that is always a possibility, but given that either update can fail, you will need to have proper exception-handling to ensure that both succeed. That's an important point: the in-transaction-updates are not guaranteed to succeed. You may get a TransactionfailedError exception if the transaction can not complete for any reason.\nSo, if your transaction completes without raising an exception, run the update to User in a transaction. That will get you automatic retries of the update to User, should some error occur. Unless there's something about possible contention on the User entity that I don't understand, the possiblity that it will not eventually succeed is surpassingly small. If that is an unacceptable risk, then I don't think that that AppEngine has a perfect solution to this problem for you.\nFirst ask yourself: is it really that bad if the count of songs that someone has commented on is off by one? Is this as critical as updating a bank account balance or completing a stock sale? \n"
] |
[
1
] |
[] |
[] |
[
"data_modeling",
"google_app_engine",
"python",
"python_datamodel"
] |
stackoverflow_0002244850_data_modeling_google_app_engine_python_python_datamodel.txt
|
Q:
Uncatchable exception in Python
The issue came up in this question, which I'll recapitulate in this code:
import csv
FH = open('data.csv','wb')
line1 = [97,44,98,44,99,10]
line2 = [100,44,101,44,102,10]
for n in line1 + line2:
FH.write(chr(n))
FH.write(chr(0))
FH.close()
import _csv
FH = open('data.csv')
reader = csv.reader(FH)
for line in reader:
if '\0' in line: continue
if not line: continue
try:
print line
except _csv.Error:
print 'error'
Run it:
$ python test.py
['a', 'b', 'c']
['d', 'e', 'f']
Traceback (most recent call last):
File "test.py", line 14, in <module>
for line in reader:
_csv.Error: line contains NULL byte
So, I guess the inclusion of NUL in the file causes an "uncatchable" exception.
The question is, besides sanitizing the file first, what's the best way of dealing with this? How common are "uncatchable" exceptions?
A:
You are not putting the "try" block at the right place to catch this exception. In other words, this exception is "catchable", just revisit the question you have referenced.
The traceback clearly states that the problem is on the line with the "for" statement.
A:
It's not uncatchable, you're just trying to catch it in the wrong place. The error is occurring in the line:
for line in reader:
and you are putting your try block around:
print line
The exception has already been raised at this point.
You could wrap the entire block as shown in other answers, or isolate the exception by wrapping the loop to manually drive the iteration of your csv reader:
while 1:
    try:
        line = reader.next()
    except StopIteration:
        break
    except csv.Error:
        print "Error occurred"
        continue
    process_line(line)
This hurts readability in favor of limiting your exception handling to the relevant bit of code. It is probably overkill with an exception as specific as csv.Error, but it's a handy technique when trying to isolate, for instance, an IOError.
A:
The code that's throwing the exception isn't inside a try/except.
Traceback (most recent call last):
File "test.py", line 14, in <module>
for line in reader:
Just like the traceback shows, retrieving the next line from reader is what's causing the exception. You need to have the entire for inside a try.
A:
Try this :
FH = open('data.csv')
try:
reader = csv.reader(FH)
for line in reader:
if '\0' in line: continue
if not line: continue
print line
except _csv.Error:
print 'error'
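If sanitizing on the fly is acceptable, a hedged variant: csv.reader accepts any iterable of lines, so the NUL bytes can be stripped with a generator before the reader ever sees them:
FH = open('data.csv')
reader = csv.reader(line.replace('\0', '') for line in FH)
for line in reader:
    if not line: continue
    print line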
|
Uncatchable exception in Python
|
The issue came up in this question, which I'll recapitulate in this code:
import csv
FH = open('data.csv','wb')
line1 = [97,44,98,44,99,10]
line2 = [100,44,101,44,102,10]
for n in line1 + line2:
FH.write(chr(n))
FH.write(chr(0))
FH.close()
import _csv
FH = open('data.csv')
reader = csv.reader(FH)
for line in reader:
if '\0' in line: continue
if not line: continue
try:
print line
except _csv.Error:
print 'error'
Run it:
$ python test.py
['a', 'b', 'c']
['d', 'e', 'f']
Traceback (most recent call last):
File "test.py", line 14, in <module>
for line in reader:
_csv.Error: line contains NULL byte
So, I guess the inclusion of NUL in the file causes an "uncatchable" exception.
The question is, besides sanitizing the file first, what's the best way of dealing with this? How common are "uncatchable" exceptions?
|
[
"You are not putting the \"try\" block at the right place to catch this exception. In other words, this exception is \"catchable\", just revisit the question you have referenced.\nThe traceback clearly states that the problem is on the line with the \"for\" statement.\n",
"It's not uncatchable, you're just trying to catch it in the wrong place. The error is occurring in the line:\nfor line in reader:\n\nand you are putting your try block around:\nprint line\n\nThe exception has already been raised at this point.\nYou could wrap the entire block as shown in other answers, or isolate the exception by warping the loop to manually manipulate the iteration of your csv reader:\nwhile 1:\n try:\n line = f.next()\n except StopIteration:\n break\n except csv.Error:\n print \"Error occurred\"\n process_line(line)\n\nThis hurts readability in favor of limiting your exception handling to the relevant bit of code. Probably overkill with an exception as specific as csv.error, but it's a handy technique when trying to isolate, for instance, an IOError.\n",
"The code that's throwing the exception isn't inside a try/except.\nTraceback (most recent call last):\n File \"test.py\", line 14, in <module>\n for line in reader:\n\nJust like the traceback shows, retrieving the next line from reader is what's causing the exception. You need to have the entire for inside a try.\n",
"Try this :\nFH = open('data.csv')\ntry:\n reader = csv.reader(FH)\n for line in reader:\n if '\\0' in line: continue\n if not line: continue\n print line\nexcept _csv.Error:\n print 'error'\n\n"
] |
[
7,
4,
0,
0
] |
[] |
[] |
[
"exception",
"python"
] |
stackoverflow_0002245243_exception_python.txt
|
Q:
how to measure execution time of functions (automatically) in Python
I need a base class that other classes will inherit from, and I would like to measure the execution time of the inheriting classes' functions.
So instead of having something like this:
class Worker():
def doSomething(self):
start = time.time()
... do something
elapsed = (time.time() - start)
print "doSomething() took ", elapsed, " time to finish"
#outputs: doSomething() took XX time to finish
I would like to have something like this:
class Worker(BaseClass):
def doSomething(self):
... do something
#outputs the same: doSomething() took XX time to finish
So the BaseClass needs to deal with measuring the time.
A:
One way to do this would be with a decorator (PEP for decorators) (first of a series of tutorial articles on decorators). Here's an example that does what you want.
from functools import wraps
from time import time
def timed(f):
@wraps(f)
def wrapper(*args, **kwds):
start = time()
result = f(*args, **kwds)
elapsed = time() - start
print "%s took %d time to finish" % (f.__name__, elapsed)
return result
return wrapper
This is an example of its use
@timed
def somefunction(countto):
for i in xrange(countto):
pass
return "Done"
To show how it works I called the function from the python prompt:
>>> timedec.somefunction(10000000)
somefunction took 0 time to finish
'Done'
>>> timedec.somefunction(100000000)
somefunction took 2 time to finish
'Done'
>>> timedec.somefunction(1000000000)
somefunction took 22 time to finish
'Done'
A:
Have you checked the "profile" module?
I.e. are you sure you need to implement your own custom framework instead of using the default profiling mechanism for the language?
You could also google for "python hotshot" for a similar solution.
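For example, a quick profiling run from the interpreter might look like this (a sketch; somefunction stands in for whatever you want to measure):
import cProfile
cProfile.run('somefunction(1000000)')
This prints per-function call counts and cumulative times, which gives you more than a single elapsed-time figure.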
A:
There is also timeit, which is part of the standard library, and is really easy to use. Remember: don't reinvent the wheel!
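A minimal sketch of timeit usage (assuming a somefunction like the one in the decorator example above):
import timeit
elapsed = timeit.timeit('somefunction(1000)',
                        setup='from __main__ import somefunction',
                        number=100)
print elapsed
It also works from the command line: python -m timeit -s "from mymodule import somefunction" "somefunction(1000)".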
|
how to measure execution time of functions (automatically) in Python
|
I need a base class that other classes will inherit from, and I would like to measure the execution time of the inheriting classes' functions.
So instead of having something like this:
class Worker():
def doSomething(self):
start = time.time()
... do something
elapsed = (time.time() - start)
print "doSomething() took ", elapsed, " time to finish"
#outputs: doSomething() took XX time to finish
I would like to have something like this:
class Worker(BaseClass):
def doSomething(self):
... do something
#outputs the same: doSomething() took XX time to finish
So the BaseClass needs to deal with measuring the time.
|
[
"One way to do this would be with a decorator (PEP for decorators) (first of a series of tutorial articles on decorators). Here's an example that does what you want.\nfrom functools import wraps\nfrom time import time\n\ndef timed(f):\n @wraps(f)\n def wrapper(*args, **kwds):\n start = time()\n result = f(*args, **kwds)\n elapsed = time() - start\n print \"%s took %d time to finish\" % (f.__name__, elapsed)\n return result\n return wrapper\n\nThis is an example of its use\n@timed\ndef somefunction(countto):\n for i in xrange(countto):\n pass\n return \"Done\"\n\nTo show how it works I called the function from the python prompt:\n>>> timedec.somefunction(10000000)\nsomefunction took 0 time to finish\n'Done'\n>>> timedec.somefunction(100000000)\nsomefunction took 2 time to finish\n'Done'\n>>> timedec.somefunction(1000000000)\nsomefunction took 22 time to finish\n'Done'\n\n",
"Have you checked the \"profile\" module?\nI.e. are you sure you need to implement your own custom framework instead of using the default profiling mechanism for the language?\nYou could also google for \"python hotshot\" for a similar solution.\n",
"There is also timeit, which is part of the standard library, and is really easy to use. Remember: don't reinvent the wheel!\n"
] |
[
64,
10,
6
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0002245161_oop_python.txt
|
Q:
Python - Check network map
I'm looking for some help on logic; the code is not very Pythonic, as I'm still learning. We map the Z: drive to different locations all the time. Here is what I'm trying to accomplish:
1: Check for an old map on Z:, say \\192.168.1.100\old
2: Map the new location to Z:, say \\192.168.1.200\new
3: Make sure the new Z: mapping exists and is still connected
4: If it gets disconnected or unmapped reconnect it and log it
90% of the code works: if I run it as is, it unmaps the old drive and maps the new drive, but the name of the old drive stays the same even though it's mapped to the new location and I can browse it. The other problem is that I only want to run checkOldDrive one time and just let checkDrive run. Any advice is appreciated.
#!/usr/bin/python
import pywintypes
import win32com.client
import os.path
import sys
import string
import fileinput
import time
import win32net
##################################################################
# Check for old Z: map and remove it
# Map the new instance of Z:
# Check if the Z: drive exists
# if the drive exists report to status.log we are working
# if the drive DOES NOT exist map it and report errors to the log
###################################################################
def checkDrive():
if os.path.exists('z:'):
saveout = sys.stdout
fsock = open('status.log', 'a')
sys.stdout = fsock
print os.getenv("COMPUTERNAME"), " - ", time.ctime(), " - Connected"
sys.stdout = saveout
fsock.close()
else:
ivvinetwork = win32com.client.Dispatch('Wscript.Network')
network_drives = ivvinetwork.EnumNetworkDrives()
for mapped_drive in [network_drives.Item(i)
for i in range(0, network_drives.Count() -1 , 2)
if network_drives.Item(i)]:
ivvinetwork.RemoveNetworkDrive(mapped_drive, True, True)
drive_mapping = [
('z:', '\\\\192.168.1.100\\newmap', 'someuser', 'somepass')]
for drive_letter, network_path, user_name, user_pass in drive_mapping:
try:
ivvinetwork.MapNetworkDrive(drive_letter, network_path, True, user_name, user_pass)
saveout = sys.stdout
fsock = open('status.log', 'a')
sys.stdout = fsock
print os.getenv("COMPUTERNAME"), " - ", time.ctime(), " - ", drive_mapping, "Drive Has Been Mapped"
sys.stdout = saveout
fsock.close()
except Exception, err:
saveout = sys.stdout
fsock = open('status.log', 'a')
sys.stdout = fsock
print os.getenv("COMPUTERNAME"), " - ", time.ctime(), " - ", err
sys.stdout = saveout
fsock.close()
def checkOldDrive():
if os.path.exists('z:'):
ivvinetwork = win32com.client.Dispatch('Wscript.Network')
network_drives = ivvinetwork.EnumNetworkDrives()
for mapped_drive in [network_drives.Item(i)
for i in range(0, network_drives.Count() -1 , 2)
if network_drives.Item(i)]:
ivvinetwork.RemoveNetworkDrive(mapped_drive, True, True)
checkOldDrive()
checkDrive()
A:
I've put together a script based on the one you laid out which I believe accomplishes what you have described.
I've tried to do it in a way that's both Pythonic and follows good programming principles.
In particular, I've done the following:
modularize much of the functionality into reusable functions
avoided repetition as much as possible. I did not factor out the hard-coded 'Z:' drive. I leave that to you as an exercise (as you see fit).
factored the logging definition into one location (so the format, etc are consistent and not repeated). The logging module made this easy.
moved all code out of the top level scope (except for some global constants). This allows the script to be run directly or imported by another script as a module.
Added some documentation strings to help document what each function does.
Kept each function short and succinct - so it can be read more easily on a single screen and in an isolated context.
Surely, there is still room for some improvement, but I have tested this script and it is functional. It should provide some good lessons while also helping you accomplish your task. Enjoy.
#!/usr/bin/env python
import os
import time
import win32com.client
import logging
old_mappings = [
r'\\192.168.1.100\old',
]
new_mapping = r'\\192.168.1.200\new'
LOG_FILENAME = 'status.log'
def main():
"""
Check to see if Z: is mapped to the old server; if so remove it and
map the Z: to the new server.
Then, repeatedly monitor the Z: mapping. If the Z: drive exists,
report to status.log that we are working. Otherwise, re-map it and
report errors to the log.
"""
setupLogging()
replaceMapping()
monitorMapping()
def replaceMapping():
if removeMapping():
createNewMapping()
def setupLogging():
format = os.environ['COMPUTERNAME'] + " - %(asctime)s - %(message)s"
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG, format=format)
def getCredentials():
"""
Return one of three things:
- an empty tuple
- a tuple containing just a username (if a password is not required)
- a tuple containing username and password
"""
return ('someuser', 'somepass')
def createNewMapping():
network = win32com.client.Dispatch('WScript.Network')
params = (
'Z:', # drive letter
new_mapping, # UNC path
True, # update profile
)
params += getCredentials()
try:
network.MapNetworkDrive(*params)
msg = '{params} - Drive has been mapped'
logging.getLogger().info(msg.format(**vars()))
except Exception as e:
msg = 'error mapping {params}'
logging.getLogger().exception(msg.format(**vars()))
def monitorMapping():
while True:
# only check once a minute
time.sleep(60)
checkMapping()
def checkMapping():
if getDriveMappings()['Z:'] == new_mapping:
msg = 'Drive is still mapped'
logging.getLogger().info(msg.format(**vars()))
else:
replaceMapping()
# From Python 2.6.4 docs
from itertools import izip_longest
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
def getDriveMappings():
"""
Return a dictionary of drive letter to UNC paths as mapped on the
system.
"""
network = win32com.client.Dispatch('WScript.Network')
# http://msdn.microsoft.com/en-us/library/t9zt39at%28VS.85%29.aspx
drives = network.EnumNetworkDrives()
# EnumNetworkDrives returns an even-length array of drive/unc pairs.
# Use grouper to convert this to a dictionary.
result = dict(grouper(2, drives))
# Potentially several UNC paths will be connected but not assigned
# to any drive letter. Since only the last will be in the
# dictionary, remove it.
if '' in result: del result['']
return result
def getUNCForDrive(drive):
"""
Get the UNC path for a mapped drive.
Throws a KeyError if no mapping exists.
"""
return getDriveMappings()[drive.upper()]
def removeMapping():
"""
Remove the old drive mapping. If it is removed, or was not present,
return True.
Otherwise, return False or None.
"""
mapped_drives = getDriveMappings()
drive_letter = 'Z:'
if not drive_letter in mapped_drives:
return True
if mapped_drives[drive_letter] in old_mappings:
network = win32com.client.Dispatch('WScript.Network')
force = True
update_profile = True
network.RemoveNetworkDrive(drive_letter, force, update_profile)
return True
# return None
if __name__ == '__main__':
main()
|
Python - Check network map
|
I'm looking for some help on logic; the code is not very Pythonic, as I'm still learning. We map the Z: drive to different locations all the time. Here is what I'm trying to accomplish:
1: Check for an old map on Z:, say \\192.168.1.100\old
2: Map the new location to Z:, say \\192.168.1.200\new
3: Make sure the new Z: mapping exists and is still connected
4: If it gets disconnected or unmapped reconnect it and log it
90% of the code works: if I run it as is, it unmaps the old drive and maps the new drive, but the name of the old drive stays the same even though it's mapped to the new location and I can browse it. The other problem is that I only want to run checkOldDrive one time and just let checkDrive run. Any advice is appreciated.
#!/usr/bin/python
import pywintypes
import win32com.client
import os.path
import sys
import string
import fileinput
import time
import win32net
##################################################################
# Check for old Z: map and remove it
# Map the new instance of Z:
# Check if the Z: drive exists
# if the drive exists report to status.log we are working
# if the drive DOES NOT exist map it and report errors to the log
###################################################################
def checkDrive():
if os.path.exists('z:'):
saveout = sys.stdout
fsock = open('status.log', 'a')
sys.stdout = fsock
print os.getenv("COMPUTERNAME"), " - ", time.ctime(), " - Connected"
sys.stdout = saveout
fsock.close()
else:
ivvinetwork = win32com.client.Dispatch('Wscript.Network')
network_drives = ivvinetwork.EnumNetworkDrives()
for mapped_drive in [network_drives.Item(i)
for i in range(0, network_drives.Count() -1 , 2)
if network_drives.Item(i)]:
ivvinetwork.RemoveNetworkDrive(mapped_drive, True, True)
drive_mapping = [
('z:', '\\\\192.168.1.100\\newmap', 'someuser', 'somepass')]
for drive_letter, network_path, user_name, user_pass in drive_mapping:
try:
ivvinetwork.MapNetworkDrive(drive_letter, network_path, True, user_name, user_pass)
saveout = sys.stdout
fsock = open('status.log', 'a')
sys.stdout = fsock
print os.getenv("COMPUTERNAME"), " - ", time.ctime(), " - ", drive_mapping, "Drive Has Been Mapped"
sys.stdout = saveout
fsock.close()
except Exception, err:
saveout = sys.stdout
fsock = open('status.log', 'a')
sys.stdout = fsock
print os.getenv("COMPUTERNAME"), " - ", time.ctime(), " - ", err
sys.stdout = saveout
fsock.close()
def checkOldDrive():
if os.path.exists('z:'):
ivvinetwork = win32com.client.Dispatch('Wscript.Network')
network_drives = ivvinetwork.EnumNetworkDrives()
for mapped_drive in [network_drives.Item(i)
for i in range(0, network_drives.Count() -1 , 2)
if network_drives.Item(i)]:
ivvinetwork.RemoveNetworkDrive(mapped_drive, True, True)
checkOldDrive()
checkDrive()
|
[
"I've put together a script based on the one you laid out which I believe accomplishes what you have described.\nI've tried to do it in a way that's both Pythonic and follows good programming principles.\nIn particular, I've done the following:\n\nmodularize much of the functionality into reusable functions\navoided repetition as much as possible. I did not factor out the hard-coded 'Z:' drive. I leave that to you as an exercise (as you see fit).\nfactored the logging definition into one location (so the format, etc are consistent and not repeated). The logging module made this easy.\nmoved all code out of the top level scope (except for some global constants). This allows the script to be run directly or imported by another script as a module.\nAdded some documentation strings to help document what each function does.\nKept each function short an succinct - so it can be read more easily on a single screen and in an isolated context.\n\nSurely, there is still room for some improvement, but I have tested this script and it is functional. It should provide some good lessons while also helping you accomplish your task. Enjoy.\n#!/usr/bin/env python\nimport os\nimport time\nimport win32com.client\nimport logging\n\nold_mappings = [\n r'\\\\192.168.1.100\\old',\n ]\nnew_mapping = r'\\\\192.168.1.200\\new'\nLOG_FILENAME = 'status.log'\n\ndef main():\n \"\"\"\n Check to see if Z: is mapped to the old server; if so remove it and\n map the Z: to the new server.\n\n Then, repeatedly monitor the Z: mapping. If the Z: drive exists,\n report to status.log that we are working. Otherwise, re-map it and\n report errors to the log.\n \"\"\"\n setupLogging()\n replaceMapping()\n monitorMapping()\n\ndef replaceMapping():\n if removeMapping():\n createNewMapping()\n\ndef setupLogging():\n format = os.environ['COMPUTERNAME'] + \" - %(asctime)s - %(message)s\"\n logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG, format=format)\n\ndef getCredentials():\n \"\"\"\n Return one of three things:\n - an empty tuple\n - a tuple containing just a username (if a password is not required)\n - a tuple containing username and password\n \"\"\"\n return ('someuser', 'somepass')\n\ndef createNewMapping():\n network = win32com.client.Dispatch('WScript.Network')\n params = (\n 'Z:', # drive letter\n new_mapping, # UNC path\n True, # update profile\n )\n params += getCredentials()\n try:\n network.MapNetworkDrive(*params)\n msg = '{params} - Drive has been mapped'\n logging.getLogger().info(msg.format(**vars()))\n except Exception as e:\n msg = 'error mapping {params}'\n logging.getLogger().exception(msg.format(**vars()))\n\ndef monitorMapping():\n while True:\n # only check once a minute\n time.sleep(60)\n checkMapping()\n\ndef checkMapping():\n if getDriveMappings()['Z:'] == new_mapping:\n msg = 'Drive is still mapped'\n logging.getLogger().info(msg.format(**vars()))\n else:\n replaceMapping()\n\n# From Python 2.6.4 docs\nfrom itertools import izip_longest\ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\ndef getDriveMappings():\n \"\"\"\n Return a dictionary of drive letter to UNC paths as mapped on the\n system.\n \"\"\"\n network = win32com.client.Dispatch('WScript.Network')\n # http://msdn.microsoft.com/en-us/library/t9zt39at%28VS.85%29.aspx\n drives = network.EnumNetworkDrives()\n # EnumNetworkDrives returns an even-length array of drive/unc pairs.\n # Use grouper to convert this to a 
dictionary.\n result = dict(grouper(2, drives))\n # Potentially several UNC paths will be connected but not assigned\n # to any drive letter. Since only the last will be in the\n # dictionary, remove it.\n if '' in result: del result['']\n return result\n\ndef getUNCForDrive(drive):\n \"\"\"\n Get the UNC path for a mapped drive.\n Throws a KeyError if no mapping exists.\n \"\"\"\n return getDriveMappings()[drive.upper()]\n\ndef removeMapping():\n \"\"\"\n Remove the old drive mapping. If it is removed, or was not present,\n return True.\n Otherwise, return False or None.\n \"\"\"\n mapped_drives = getDriveMappings()\n drive_letter = 'Z:'\n if not drive_letter in mapped_drives:\n return True\n if mapped_drives[drive_letter] in old_mappings:\n network = win32com.client.Dispatch('WScript.Network')\n force = True\n update_profile = True\n network.RemoveNetworkDrive(drive_letter, force, update_profile)\n return True\n # return None\n\nif __name__ == '__main__':\n main()\n\n"
] |
[
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002244767_python.txt
|
Q:
How to go from list of words to a list of distinct letters in Python
Using Python, I'm trying to convert a sentence of words into a flat list of all distinct letters in that sentence.
Here's my current code:
words = 'She sells seashells by the seashore'
ltr = []
# Convert the string that is "words" to a list of its component words
word_list = [x.strip().lower() for x in words.split(' ')]
# Now convert the list of component words to a distinct list of
# all letters encountered.
for word in word_list:
for c in word:
if c not in ltr:
ltr.append(c)
print ltr
This code returns ['s', 'h', 'e', 'l', 'a', 'b', 'y', 't', 'o', 'r'], which is correct, but is there a more Pythonic way to get this answer, perhaps using list comprehensions or sets?
When I try to combine list-comprehension nesting and filtering, I get lists of lists instead of a flat list.
The order of the distinct letters in the final list (ltr) is not important; what's crucial is that they be unique.
A:
Sets provide a simple, efficient solution.
words = 'She sells seashells by the seashore'
unique_letters = set(words.lower())
unique_letters.discard(' ') # If there was a space, remove it.
A:
set([letter.lower() for letter in words if letter != ' '])
Edit: I just tried it and found this will also work (maybe this is what SilentGhost was referring to):
set(letter.lower() for letter in words if letter != ' ')
And if you need to have a list rather than a set, you can
list(set(letter.lower() for letter in words if letter != ' '))
A:
Make ltr a set and change your loop body a little:
ltr = set()
for word in word_list:
for c in word:
ltr.add(c)
Or using a list comprehension:
ltr = set([c for word in word_list for c in word])
A:
>>> set('She sells seashells by the seashore'.replace(' ', '').lower())
set(['a', 'b', 'e', 'h', 'l', 'o', 's', 'r', 't', 'y'])
>>> set(c.lower() for c in 'She sells seashells by the seashore' if not c.isspace())
set(['a', 'b', 'e', 'h', 'l', 'o', 's', 'r', 't', 'y'])
>>> from itertools import chain
>>> set(chain(*'She sells seashells by the seashore'.lower().split()))
set(['a', 'b', 'e', 'h', 'l', 'o', 's', 'r', 't', 'y'])
A:
here are some timings made with py3k:
>>> import timeit
>>> def t(): # mine (see history)
a = {i.lower() for i in words}
a.discard(' ')
return a
>>> timeit.timeit(t)
7.993071812372081
>>> def b(): # danben
return set(letter.lower() for letter in words if letter != ' ')
>>> timeit.timeit(b)
9.982847967921138
>>> def c(): # ephemient in comment
return {i.lower() for i in words if i != ' '}
>>> timeit.timeit(c)
8.241267610375516
>>> def d(): #Mike Graham
a = set(words.lower())
a.discard(' ')
return a
>>> timeit.timeit(d)
2.7693045186082372
A:
set(l for w in word_list for l in w)
A:
words = 'She sells seashells by the seashore'
ltr = list(set(list(words.lower())))
ltr.remove(' ')
print ltr
|
How to go from list of words to a list of distinct letters in Python
|
Using Python, I'm trying to convert a sentence of words into a flat list of all distinct letters in that sentence.
Here's my current code:
words = 'She sells seashells by the seashore'
ltr = []
# Convert the string that is "words" to a list of its component words
word_list = [x.strip().lower() for x in words.split(' ')]
# Now convert the list of component words to a distinct list of
# all letters encountered.
for word in word_list:
for c in word:
if c not in ltr:
ltr.append(c)
print ltr
This code returns ['s', 'h', 'e', 'l', 'a', 'b', 'y', 't', 'o', 'r'], which is correct, but is there a more Pythonic way to get this answer, perhaps using list comprehensions or sets?
When I try to combine list-comprehension nesting and filtering, I get lists of lists instead of a flat list.
The order of the distinct letters in the final list (ltr) is not important; what's crucial is that they be unique.
|
[
"Sets provide a simple, efficient solution.\nwords = 'She sells seashells by the seashore'\n\nunique_letters = set(words.lower())\nunique_letters.discard(' ') # If there was a space, remove it.\n\n",
"set([letter.lower() for letter in words if letter != ' '])\n\nEdit: I just tried it and found this will also work (maybe this is what SilentGhost was referring to):\nset(letter.lower() for letter in words if letter != ' ')\n\nAnd if you need to have a list rather than a set, you can\nlist(set(letter.lower() for letter in words if letter != ' '))\n\n",
"Make ltr a set and change your loop body a little:\nltr = set()\n\nfor word in word_list:\n for c in word:\n ltr.add(c)\n\nOr using a list comprehension:\nltr = set([c for word in word_list for c in word])\n\n",
"\n>>> set('She sells seashells by the seashore'.replace(' ', '').lower())\nset(['a', 'b', 'e', 'h', 'l', 'o', 's', 'r', 't', 'y'])\n>>> set(c.lower() for c in 'She sells seashells by the seashore' if not c.isspace())\nset(['a', 'b', 'e', 'h', 'l', 'o', 's', 'r', 't', 'y'])\n>>> from itertools import chain\n>>> set(chain(*'She sells seashells by the seashore'.lower().split()))\nset(['a', 'b', 'e', 'h', 'l', 'o', 's', 'r', 't', 'y'])\n\n",
"here are some timings made with py3k:\n>>> import timeit\n>>> def t(): # mine (see history)\n a = {i.lower() for i in words}\n a.discard(' ')\n return a\n\n>>> timeit.timeit(t)\n7.993071812372081\n>>> def b(): # danben\n return set(letter.lower() for letter in words if letter != ' ')\n\n>>> timeit.timeit(b)\n9.982847967921138\n>>> def c(): # ephemient in comment\n return {i.lower() for i in words if i != ' '}\n\n>>> timeit.timeit(c)\n8.241267610375516\n>>> def d(): #Mike Graham\n a = set(words.lower())\n a.discard(' ')\n return a\n\n>>> timeit.timeit(d)\n2.7693045186082372\n\n",
"set(l for w in word_list for l in w)\n\n",
"words = 'She sells seashells by the seashore'\n\nltr = list(set(list(words.lower())))\nltr.remove(' ')\nprint ltr\n\n"
] |
[
13,
3,
3,
2,
2,
0,
0
] |
[] |
[] |
[
"distinct",
"filter",
"letters",
"list_comprehension",
"python"
] |
stackoverflow_0002245903_distinct_filter_letters_list_comprehension_python.txt
|
Q:
Group form fields in django?
Is there a way in Django to group some fields from a ModelForm? For example, if there's a model with fields like: age, gender, dob, q1, q2, q3 and a form is created based in such Model, can I group the fields like: info_fields = (age, gender, dob) and response_fields = (q1, q2, q3). This would be helpful to display all fields in a more organized way on a template.
Thanks in advance
A:
See this post; I believe you're hinting at using fieldsets in a ModelForm.
Django and fieldsets on ModelForm
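If you just want to group the fields manually, a rough sketch would be to expose the groups as methods on the form (field names taken from the question; the Survey model is hypothetical):
from django import forms

class SurveyForm(forms.ModelForm):
    class Meta:
        model = Survey
        fields = ('age', 'gender', 'dob', 'q1', 'q2', 'q3')

    def info_fields(self):
        # form[name] returns a BoundField, ready for template rendering
        return [self[name] for name in ('age', 'gender', 'dob')]

    def response_fields(self):
        return [self[name] for name in ('q1', 'q2', 'q3')]
In the template you can then loop over form.info_fields and form.response_fields separately.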
|
Group form fields in django?
|
Is there a way in Django to group some fields from a ModelForm? For example, if there's a model with fields like: age, gender, dob, q1, q2, q3 and a form is created based in such Model, can I group the fields like: info_fields = (age, gender, dob) and response_fields = (q1, q2, q3). This would be helpful to display all fields in a more organized way on a template.
Thanks in advance
|
[
"See this post, I believe your hinting at using fieldsets in a ModelForm.\nDjango and fieldsets on ModelForm\n"
] |
[
1
] |
[] |
[] |
[
"django",
"django_forms",
"django_templates",
"python"
] |
stackoverflow_0002245612_django_django_forms_django_templates_python.txt
|
Q:
python chaco axis labels time formatting
In Enthought's Chaco, the TimeFormatter class is used to format the time string of the tick
labels. Is there a way to specify the time format (something like time.strftime())?
The source code currently hard-codes the format when displaying the month and day of the month to the American style (MMDD). I would like to add some flexibility so that time/date format hints could somehow be passed to the TimeFormatter.
I don't know of any nice way to do this (other than changing the source code itself (the TimeFormatter._formats dictionary)).
A:
Honestly, the easiest way is going to be to monkeypatch the TimeFormatter's _formats dictionary:
from enthought.chaco.scales.formatters import TimeFormatter
TimeFormatter._formats['days'] = ('%d/%m', '%d%a',)
If you don't want to do this, then you need to subclass TimeFormatter. That's easy. What's more cumbersome is making all the existing scale systems that the chaco.scales package creates use your new subclass rather than the built-in TimeFormatter. If you look at scales.time_scale.TimeScale, it accepts a 'formatter' keyword argument in the constructor. So, at the bottom of time_scale.py, when the MDYScales list is built, you'd have to create your own:
EuroMDYScales = [TimeScale(day_of_month=range(1,31,3), formatter=MyFormatter()),
TimeScale(day_of_month=(1,8,15,22), formatter=MyFormatter()),
TimeScale(day_of_month=(1,15), formatter=MyFormatter()),
TimeScale(month_of_year=range(1,13), formatter=MyFormatter()),
TimeScale(month_of_year=range(1,13,3), formatter=MyFormatter()),
TimeScale(month_of_year=(1,7), formatter=MyFormatter()),
TimeScale(month_of_year=(1,), formatter=MyFormatter())]
Then, when you create the ScalesTickGenerator, you need to pass in these scales to the ScaleSystem:
euro_scale_system = CalendarScaleSystem(*(HMSScales + EuroMDYScales))
tick_gen = ScalesTickGenerator(scale=euro_scale_system)
Then you can create the axis, giving it this tick generator:
axis = PlotAxis(tick_generator = tick_gen)
HTH; sorry this reply comes about a month late. I don't really check StackOverflow very much. If you have other chaco questions, I'd recommend signing up on the chaco-users mailing list...
|
python chaco axis labels time formatting
|
In Enthought's Chaco, the TimeFormatter class is used to format the time string of the tick
labels. Is there a way to specify the time format (something like time.strftime())?
The source code currently hard-codes the format when displaying the month and day of the month to the American style (MMDD). I would like to add some flexibility so that time/date format hints could somehow be passed to the TimeFormatter.
I don't know of any nice way to do this (other than changing the source code itself (the TimeFormatter._formats dictionary)).
|
[
"Honestly, the easiest way is going to be to monkeypatch the TimeFormatter's _formats dictionary:\nfrom enthought.chaco.scales.formatters import TimeFormatter\nTimeFormatter._formats['days'] = ('%d/%m', '%d%a',)\n\nIf you don't want to do this, then you need to subclass TimeFormatter. That's easy. What's more cumbersome is making all the existing scale systems that the chaco.scales package creates use your new subclass rather than the built-in TimeFormatter. If you look at scales.time_scale.TimeScale, it accepts a 'formatter' keyword argument in the constructor. So, at the bottom of time_scale.py, when the MDYScales list is built, you'd have to create your own:\nEuroMDYScales = [TimeScale(day_of_month=range(1,31,3), formatter=MyFormatter()),\n TimeScale(day_of_month=(1,8,15,22), formatter=MyFormatter()),\n TimeScale(day_of_month=(1,15), formatter=MyFormatter()),\n TimeScale(month_of_year=range(1,13), formatter=MyFormatter()),\n TimeScale(month_of_year=range(1,13,3), formatter=MyFormatter()),\n TimeScale(month_of_year=(1,7), formatter=MyFormatter()),\n TimeScale(month_of_year=(1,), formatter=MyFormatter())]\n\nThen, when you create the ScalesTickGenerator, you need to pass in these scales to the ScaleSystem:\neuro_scale_system = CalendarScaleSystem(*(HMSScales + EuroMDYScales))\ntick_gen = ScalesTickGenerator(scale=euro_scale_system)\n\nThen you can create the axis, giving it this tick generator:\naxis = PlotAxis(tick_generator = tick_gen)\n\nHTH, sorry this is about a month lag. I don't really check StackOverflow very much. If you have other chaco questions, I'd recommend signing up on the chaco-users mailing list...\n"
] |
[
4
] |
[] |
[] |
[
"chaco",
"python"
] |
stackoverflow_0002173632_chaco_python.txt
|
Q:
How do I deal with multiple common user interfaces?
I'm working on a python application that runs on 2 different platforms, namely regular desktop linux and Maemo 4. We use PyGTK on both platforms but on Maemo there are a bunch of little tweaks to make it look nice which are implemented as follows:
if util.platform.MAEMO:
# do something fancy for maemo
else:
# regular pygtk
There are roughly 15 of these if statements needed to get the UI looking and working nice on Maemo 4.
This has been very manageable for all this time. The problem is that a while ago there was a new version of Maemo released (5, aka fremantle) and it has some big differences compared to Maemo 4. I don't want to add a bunch of checks throughout the GUI code in order to get all 3 platforms working nicely with the same codebase because that would get messy. I also don't want to create a copy of the original GUI code for each platform and simply modify it for the specific platform (I'd like to re-use as much code as possible).
So, what are ways to have slightly different UIs for different platforms which are based on the same core UI code? I don't think this is a python or Maemo-specific question, I'd just like to know how this is done.
A:
You could wind up much of this in a factory:
def createSpec():
if util.platform.MAEMO: return Maemo4Spec()
elif util.platform.MAEMO5: return Maemo5Spec()
return StandardPyGTKSpec()
Then, somewhere early in your code, you just call that factory:
spec = createSpec()
Now, everywhere else you had conditions, you just call the necessary function:
spec.drawComboBox()
As long as drawComboBox(), handles anything specific to the platform, you should be in good shape.
A:
You could isolate the platform specific stuff you need to do into small consistently named functions inside a platform module, create the right function name using the platform you're running on and then getattr the right one and call it. The if/else boilerplate would disappear then.
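A minimal sketch of that idea (the platform_tweaks module and its function names are invented for illustration):
# platform_tweaks.py -- one function per platform, consistently named
def style_window_maemo4(window):
    pass  # Maemo 4 specific styling

def style_window_default(window):
    pass  # plain PyGTK styling

# in the GUI code, pick the right variant once:
import platform_tweaks
suffix = 'maemo4' if util.platform.MAEMO else 'default'
style_window = getattr(platform_tweaks, 'style_window_' + suffix)
style_window(my_window)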
A:
I've made a separate module to handle all of my specializing between normal Linux, Maemo 4.1, and Maemo 5. It detects what features are available and allows the program to gracefully degrade.
For example
def _fremantle_hildonize_window(app, window):
oldWindow = window
newWindow = hildon.StackableWindow()
oldWindow.get_child().reparent(newWindow)
app.add_window(newWindow)
return newWindow
def _hildon_hildonize_window(app, window):
oldWindow = window
newWindow = hildon.Window()
oldWindow.get_child().reparent(newWindow)
app.add_window(newWindow)
return newWindow
def _null_hildonize_window(app, window):
return window
try:
hildon.StackableWindow
hildonize_window = _fremantle_hildonize_window
except AttributeError:
try:
hildon.Window
hildonize_window = _hildon_hildonize_window
except AttributeError:
hildonize_window = _null_hildonize_window
For more, see
Dialcentral, Gonert, ejpi, or Quicknote's source code for a file called hildonize.py
https://garage.maemo.org/plugins/ggit/browse.php/?p=gc-dialer;a=blob;f=src/hildonize.py;
Another example from The One Ring's GObject Utils (go_utils.py)
def _old_timeout_add_seconds(timeout, callback):
return gobject.timeout_add(timeout * 1000, callback)
def _timeout_add_seconds(timeout, callback):
return gobject.timeout_add_seconds(timeout, callback)
try:
gobject.timeout_add_seconds
timeout_add_seconds = _timeout_add_seconds
except AttributeError:
timeout_add_seconds = _old_timeout_add_seconds
|
How do I deal with multiple common user interfaces?
|
I'm working on a python application that runs on 2 different platforms, namely regular desktop linux and Maemo 4. We use PyGTK on both platforms but on Maemo there are a bunch of little tweaks to make it look nice which are implemented as follows:
if util.platform.MAEMO:
# do something fancy for maemo
else:
# regular pygtk
There are roughly 15 of these if statements needed to get the UI looking and working nice on Maemo 4.
This has been very manageable for all this time. The problem is that a while ago there was a new version of Maemo released (5, aka fremantle) and it has some big differences compared to Maemo 4. I don't want to add a bunch of checks throughout the GUI code in order to get all 3 platforms working nicely with the same codebase because that would get messy. I also don't want to create a copy of the original GUI code for each platform and simply modify it for the specific platform (I'd like to re-use as much code as possible).
So, what are ways to have slightly different UIs for different platforms which are based on the same core UI code? I don't think this is a python or Maemo-specific question, I'd just like to know how this is done.
|
[
"You could wind up much of this in a factory:\ndef createSpec():\n if util.platform.MAEMO: return Maemo4Spec()\n elif util.platform.MAEMO5: return Maemo5Spec()\n return StandardPyGTKSpec()\n\nThen, somewhere early in your code, you just call that factory:\n spec = createSpec()\n\nNow, everywhere else you had conditions, you just call the necessary function:\n spec.drawComboBox()\n\nAs long as drawComboBox(), handles anything specific to the platform, you should be in good shape.\n",
"You could isolate the platform specific stuff you need to do into small consistently named functions inside a platform module, create the right function name using the platform you're running on and then getattr the right one and call it. The if/else boilerplate would disappear then.\n",
"I've made a separate module to handle all of my specializing between normal Linux, Maemo 4.1, and Maemo 5. It detects what features are available and allows the program to gracefully degrade.\nFor example\n def _fremantle_hildonize_window(app, window):\n oldWindow = window\n newWindow = hildon.StackableWindow()\n oldWindow.get_child().reparent(newWindow)\n app.add_window(newWindow)\n return newWindow\n\n\n def _hildon_hildonize_window(app, window):\n oldWindow = window\n newWindow = hildon.Window()\n oldWindow.get_child().reparent(newWindow)\n app.add_window(newWindow)\n return newWindow\n\n\n def _null_hildonize_window(app, window):\n return window\n\n\n try:\n hildon.StackableWindow\n hildonize_window = _fremantle_hildonize_window\n except AttributeError:\n try:\n hildon.Window\n hildonize_window = _hildon_hildonize_window\n except AttributeError:\n hildonize_window = _null_hildonize_window\n\nFor more, see\nDialcentral, Gonert, ejpi, or Quicknote's source code for a file called hildonize.py\nhttps://garage.maemo.org/plugins/ggit/browse.php/?p=gc-dialer;a=blob;f=src/hildonize.py;\nAnother example from The One Ring's GObject Utils (go_utils.py)\n def _old_timeout_add_seconds(timeout, callback):\n return gobject.timeout_add(timeout * 1000, callback)\n\n\n def _timeout_add_seconds(timeout, callback):\n return gobject.timeout_add_seconds(timeout, callback)\n\n\n try:\n gobject.timeout_add_seconds\n timeout_add_seconds = _timeout_add_seconds\n except AttributeError:\n timeout_add_seconds = _old_timeout_add_seconds\n\n"
] |
[
10,
0,
0
] |
[] |
[] |
[
"code_reuse",
"maemo",
"pygtk",
"python",
"user_interface"
] |
stackoverflow_0002022448_code_reuse_maemo_pygtk_python_user_interface.txt
|
Q:
Python: Monitoring and killing/throttling spawned processes based on load, time, etc
I have a queue of workers that spawn external third party apps using subprocess. I'd like to control how much of the overall resources of my server these processes consume. Some of these external apps also tend to hang for unknown reasons, which is fixed with a restart.
What's a good way to:
Monitor the overall server load (say, load average or equivalent of vmstat) in python?
Monitor the cpu load of the processes I spawn?
Kill processes I've spawned if they're taking too long or taking too much cpu?
Basically I need to be able to control the load I'm placing on my server with my spawned threads.
Hopefully there's a package or library that'll do all this for me?
A:
Functions to get load average and kill process are available in standard python library (os.getloadavg(), os.kill(), subprocess.Popen.kill()). There is a psutil package for the rest (psutil.Process.get_cpu_times(), psutil.Process.get_cpu_percent(), psutil.Process.get_memory_info(), psutil.Process.get_memory_percent() and more)
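As a rough sketch, a watchdog built only from the standard-library pieces mentioned above could look like this (the command, the 60-second limit, and the load threshold are all made up):
import os, time, subprocess

proc = subprocess.Popen(['some_third_party_app'])
start = time.time()
while proc.poll() is None:  # still running
    load1, load5, load15 = os.getloadavg()
    if time.time() - start > 60 or load1 > 8.0:
        proc.kill()  # Popen.kill() requires Python 2.6+
        break
    time.sleep(1)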
A:
As for governing CPU, you'll want to use nice to launch your processes.
For monitoring system load and other stats related to currently running processes, you might look into the /proc directory of pseudo-devices.
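For instance, a sketch combining both ideas (Unix-only; the command and the niceness value are arbitrary):
import os
import subprocess

def lower_priority():
    os.nice(10)  # runs in the child process just before exec

proc = subprocess.Popen(['some_app'], preexec_fn=lower_priority)

# read the 1-minute load average from the /proc pseudo-filesystem
with open('/proc/loadavg') as f:
    load1 = float(f.read().split()[0])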
A:
Monitor the overall server load (say, load average or equivalent of vmstat) in python?
>>> import psutil, subprocess
>>> subp = subprocess.Popen('python', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> proc = psutil.Process(subp.pid)
>>> rss, vms = proc.get_memory_info()
>>> print "Resident memory: %s KB" %(rss / 1024)
Resident memory: 136 KB
>>> print "Virtual memory: %s KB" %(vms / 1024)
Virtual memory: 356 KB
>>> print proc.get_memory_percent()
0.00324324118077
Monitor the cpu load of the processes I spawn?
>>> proc.get_cpu_percent()
0.0
Kill processes I've spawned if they're taking too long or taking too much cpu?
>>> proc.kill()
>>>
|
Python: Monitoring and killing/throttling spawned processes based on load, time, etc
|
I have a queue of workers that spawn external third party apps using subprocess. I'd like to control how much of the overall resources of my server these processes consume. Some of these external apps also tend to hang for unknown reasons, which is fixed with a restart.
What's a good way to:
Monitor the overall server load (say, load average or equivalent of vmstat) in python?
Monitor the cpu load of the processes I spawn?
Kill processes I've spawned if they're taking too long or taking too much cpu?
Basically I need to be able to control the load I'm placing on my server with my spawned threads.
Hopefully there's a package or library that'll do all this for me?
|
[
"Functions to get load average and kill process are available in standard python library (os.getloadavg(), os.kill(), subprocess.Popen.kill()). There is a psutil package for the rest (psutil.Process.get_cpu_times(), psutil.Process.get_cpu_percent(), psutil.Process.get_memory_info(), psutil.Process.get_memory_percent() and more)\n",
"As for governing CPU, you'll want to use nice to launch your processes.\nFor monitoring system load and other stats related to currently running processes, you might look into the /proc directory of pseudo-devices.\n",
"Monitor the overall server load (say, load average or equivalent of vmstat) in python?\n\n>>> import psutil, subprocess\n>>> subp = subprocess.Popen('python', stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n>>> proc = psutil.Process(subp.pid)\n>>> rss, vms = proc.get_memory_info()\n>>> print \"Resident memory: %s KB\" %(rss / 1024)\nResident memory: 136 KB\n>>> print \"Virtual memory: %s KB\" %(vms / 1024)\nVirtual memory: 356 KB\n>>> print proc.get_memory_percent()\n0.00324324118077\n\nMonitor the cpu load of the processes I spawn?\n\n>>> proc.get_cpu_percent()\n0.0\n\nKill processes I've spawned if they're taking too long or taking too much cpu?\n\n>>> proc.kill()\n>>>\n\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001654922_python.txt
|
Q:
python web framework large project
I need your advices to choose a Python Web Framework for developing a large project:
Database (PostgreSQL) will have at least 500 tables, most of them with a composite primary
key, lots of constraints, indexes & queries. About 1,500 views to start with. The project belongs to the financial area. New requirements are always coming.
Will an ORM be helpful?
A:
Django has been used by many large organizations (Washington Post, etc.) and can connect with Postgresql easily enough. I use it fairly often and have had no trouble.
A:
Yes. An ORM is essential for mapping SQL stuff to objects.
You have three choices.
Use someone else's ORM
Roll your own.
Try to execute low-level SQL queries and pick out the fields they want from the result set. This is -- actually -- a kind of ORM with the mappings scattered throughout the applications. It may be fast to execute and appear easy to develop, but it is a maintenance nightmare.
If you're designing the tables first, any ORM will be painful. For example, "composite primary key" is generally a bad idea, and with an ORM it's almost always a bad idea. You'll need to have a surrogate primary key. Then you can have all the composite keys with indexes you want. They just won't be "primary".
If you design the objects first, then work out tables that will implement the objects, the ORM will be pleasant, simple and will run quickly, also.
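As an illustration of the surrogate-key pattern, a hedged SQLAlchemy sketch (table and column names are invented):
from sqlalchemy import Table, Column, Integer, String, Date, MetaData, UniqueConstraint

metadata = MetaData()
trade = Table('trade', metadata,
    Column('id', Integer, primary_key=True),          # surrogate primary key
    Column('account_no', String(20), nullable=False),
    Column('trade_date', Date, nullable=False),
    UniqueConstraint('account_no', 'trade_date'),     # the old composite key, now just unique
)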
A:
Since most of your tables have composite primary keys, you'll want an ORM that supports that functionality. Django's default ORM does not support composite primary keys. SQLAlchemy does have that support (http://www.sqlalchemy.org/features.html).
The TurboGears framework uses SQLAlchemy as its default ORM. Pylons lets you use SQLAlchemy as well.
There are also ways to get Django to use SQLAlchemy, though I've not tried them myself. I prefer to use Django myself, but given your needs, I'd go with Pylons or TurboGears rather then shoe-horning a different ORM into the system.
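For reference, declaring a composite primary key in SQLAlchemy is just a matter of marking several columns as primary (a sketch with invented names):
from sqlalchemy import Table, Column, Integer, String, MetaData

metadata = MetaData()
position = Table('position', metadata,
    Column('account_no', String(20), primary_key=True),
    Column('instrument_id', Integer, primary_key=True),  # together they form the composite key
)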
A:
For such horrid data-layer complexity as 500 tables with 1500 views, differently from most answers, I would personally prefer to stick with SQL (PostgreSQL offers a really excellent implementation thereof, especially in the new 8.4 version which you should really lobby for if you have any chance); the only ORM I would [grudgingly] accept is SQLAlchemy (one of the few ORMs I don't really mind -- but the main added value is portability to different DBMSs: if you're committed to just one, and in a project of this DB complexity you'd better be, then my personal opinion is that any ORM is just overhead, as the data-access layer developers will need deep familiarity with the specific DBMS to crawl towards acceptable performance).
Having picked "raw psycopg2" or SQLAlchemy as the technology for my data-access layer, that would rule out Django (which in my experience only works well with its own ORM -- but that's not suitable for a project of such DB complexity, IMNSHO). I'd go with Werkzeug, personally, as the framework most suitable for highly complex projects requiring ridiculous amounts of flexibility and power -- though Pylons and Turbogears 2 on top of it may be acceptable as a fall-back if the team just doesn't have the web app experience and skill it takes to make truly beautiful music with a flexible framework such as Werkzeug.
Last but not least, I'd strongly lobby for Dojo for the presentation layer on the client -- a rich and strongly structured Javascript framework, offering superbly designed functionality for "local data", host access, &c, optimized for the best that each of several browsers (and plug-ins such as Gears) can offer, as well as advanced UI functionality, seems likeliest to lighten the heavy development burden on the back-end team (in fact, I'd strongly recommend looking at offering an essentially RESTful interface on the server side, and delegate all presentation work to Dojo on the client -- see this site for more, except I'd be thinking of JSON rather than XML as the preferred interchange format). But, I'll readily admit to knowing far less about the UI side of things than about back-ends, business logic and overall architecture, so take this last paragraph for what it's worth!-)
A:
Depending on what you want to do, you actually have a few possible frameworks:
[Django] Big, strong (to the limit of what a Python framework can be), and the oldest in the race. Used by a few 'big' sites around the world ([Django sites]). It is still a bit of an overkill for almost everything, and comes with a somewhat dated coding approach.
[Turbogears] is a recent framework based on Pylons. I don't know much about it, but I have gotten a lot of good feedback from friends who tried it.
[Pylons] (which TurboGears 2 is based on). Often seen as the "PHP of Python", it allows very quick development from scratch. Even if it can seem inappropriate for big projects, it's often the faster and easier way to go.
The last option is [Zope] (with or without Plone), but Plone is way too slow, and the Zope learning curve is way too long (not even speaking of replacing the ZODB with an SQL connector), so if you don't know the framework yet, just forget about it.
And yes, an ORM seems mandatory for a project of this size. For Django, you'll have to handle migration to their database models (I don't know how hard it is to plug SQLAlchemy into Django). For TurboGears and Pylons, the most suitable solution is [SQLAlchemy], which is actually the most complete (and rising) ORM for Python. For Zope ... well, never mind.
Last but not least, I'm not sure you're starting on a good basis for your project. 500 tables on any Python framework would scare me to death. A boring but rigid language such as Java (Hibernate+Spring+Tapestry or so) seems really more appropriate.
A:
I would absolutely recommend Repoze.bfg with SQLAlchemy for what you describe. I've done projects now in Django, TurboGears 1, TurboGears 2, Pylons, and dabbled in pure Zope3. BFG is far and away the framework most designed to accommodate a project growing in ways you don't anticipate at the beginning, but is far more lightweight and pared down than Grok or Zope 3. Also, the docs are the best technical docs of all of them, not the easiest, but the ones that best answer the hard questions you're going to encounter. I'm currently doing a similar thing where we are overhauling a bunch of legacy databases into a new web deliverable app and we're using BFG, some Pylons, Zope 3 adapters, Genshi for templating, SQLAlchemy, and Dojo for the front end. We couldn't be happier with BFG, and it's working out great. BFG's classes-as-views, which are actually Zope multi-adapters, are absolutely perfect for being able to override only very specific bits for certain domain resources. And the complete lack of magic globals anywhere makes testing and packaging the easiest we've had with any framework.
ymmv!
A:
New requirements are always coming.
So what you really need is a framework that will allow you to adapt rapidly to changing specs.
From personal experience, I can only discuss django, which is great because it allows you to get up and running quickly.
If you stick to its ORM, you will have a pretty easy time getting your models fleshed out and connected in useful ways. You will need to familiarize yourself with a database migration tool, because Django does not have one built in. dmigrations seems to be a leading tool for this.
Another choice for ORM's is SQLAlchemy, which appears to be a bit more mature out of the box.
|
python web framework large project
|
I need your advices to choose a Python Web Framework for developing a large project:
Database (PostgreSQL) will have at least 500 tables, most of them with a composite primary
key, lots of constraints, indexes & queries. About 1,500 views to start with. The project belongs to the financial area. New requirements are always coming.
Will an ORM be helpful?
|
[
"Django has been used by many large organizations (Washington Post, etc.) and can connect with Postgresql easily enough. I use it fairly often and have had no trouble.\n",
"Yes. An ORM is essential for mapping SQL stuff to objects. \nYou have three choices.\n\nUse someone else's ORM\nRoll your own.\nTry to execute low-level SQL queries and pick out the fields they want from the result set. This is -- actually -- a kind of ORM with the mappings scattered throughout the applications. It may be fast to execute and appear easy to develop, but it is a maintenance nightmare.\n\nIf you're designing the tables first, any ORM will be painful. For example, \"composite primary key\" is generally a bad idea, and with an ORM it's almost always a bad idea. You'll need to have a surrogate primary key. Then you can have all the composite keys with indexes you want. They just won't be \"primary\".\nIf you design the objects first, then work out tables that will implement the objects, the ORM will be pleasant, simple and will run quickly, also.\n",
"Since most of your tables have composite primary keys, you'll want an ORM that supports that functionality. Django's default ORM does not support composite primary keys. SQLAlchemy does have that support (http://www.sqlalchemy.org/features.html). \nThe TurboGears framework uses SQLAlchemy as its default ORM. Pylons lets you use SQLAlchemy as well. \nThere are also ways to get Django to use SQLAlchemy, though I've not tried them myself. I prefer to use Django myself, but given your needs, I'd go with Pylons or TurboGears rather then shoe-horning a different ORM into the system.\n",
"For such horrid data-layer complexity as 500 tables with 1500 views, differently from most answers, I would personally prefer to stick with SQL (PostgreSQL offers a really excellent implementation thereof, expecially in the new 8.4 version which you should really lobby for if you have any chance); the only ORM I would [grudgingly] accept is SQLAlchemy (one of the few ORBs I don't really mind -- but the main added value is portability to different DBMS: if you're committed to just one, and in a project of this DB complexity you'd better be, then my personal opinion is that any ORM is just overhead, as the data-access layer developers will need deep familiarity with the specific DBMS to crawl towards acceptable performance).\nHaving picked \"raw psycopg2\" or SQLAlchemy as the technology for my data-access layer, that would rule out Django (which in my experience only works well with its own ORM -- but that's not suitable for a project of such DB complexity, IMNSHO). I'd go with Werkzeug, personally, as the framework most suitable for highly complex projects requiring ridiculous amounts of flexibility and power -- though Pylons and Turbogears 2 on top of it may be acceptable as a fall-back if the team just doesn't have the web app experience and skill it takes to make truly beautiful music with a flexible framework such as Werkzeug.\nLast but not least, I'd strongly lobby for Dojo for the presentation layer on the client -- a rich and strongly structured Javascript framework, offering superbly designed functionality for \"local data\", host access, &c, optimized for the best that each of several browsers (and plug-ins such as Gears) can offer, as well as advanced UI functionality, seems likeliest to lighten the heavy development burden on the back-end team (in fact, I'd strongly recommend looking at offering an essentially RESTful interface on the server side, and delegate all presentation work to Dojo on the client -- see this site for more, except I'd be thinking of JSON rather than XML as the preferred interchange format). But, I'll readily admit to knowing far less about the UI side of things than about back-ends, business logic and overall architecture, so take this last paragraph for what it's worth!-)\n",
"Depending on what you want to do, you actually have a few possible frameworks :\n[Django] Big, strong (to the limit of what a python framework can be), and the older in the race. Used by a few 'big' sites around the world ([Django sites]). Still is a bit of an overkill for almost everything and with a deprecated coding approach.\n[Turbogears] is a recent framework based on Pylons. Don't know much about it, but got many good feedbacks from friends who tried it.\n[Pylons] ( which Turbogears2 is based on ). Often saw at the \"PHP of Python\" , it allow very quick developements from scratch. Even if it can seem inappropriate for big projects, it's often the faster and easier way to go.\nThe last option is [Zope] ( with or without Plone ), but Plone is way to slow, and Zope learning curve is way too long ( not even speaking in replacing the ZODB with an SQL connector ) so if you don't know the framework yet, just forget about it.\nAnd yes, An ORM seem mandatory for a project of this size. For Django, you'll have to handle migration to their database models (don't know how hard it is to plug SQLAlchemy in Django). For turbogears and Pylons, the most suitable solution is [SQLAlchemy], which is actually the most complete ( and rising ) ORM for python. For zope ... well, nevermind\nLast but not least, I'm not sure you're starting on a good basis for your project. 500 tables on any python framework would scare me to death. A boring but rigid language such as java (hibernate+spring+tapestry or so) seem really more appropriate.\n",
"I would absolutely recommend Repoze.bfg with SQLAlchemy for what you describe. I've done projects now in Django, TurboGears 1, TurboGears 2, Pylons, and dabbled in pure Zope3. BFG is far and away the framework most designed to accomodate a project growing in ways you don't anticipate at the beginning, but is far more lightweight and pared down than Grok or Zope 3. Also, the docs are the best technical docs of all of them, not the easiest, but the ones that answer the hard questions you're going to encounter the best. I'm currently doing a similar thing where we are overhauling a bunch of legacy databases into a new web deliverable app and we're using BFG, some Pylons, Zope 3 adapters, Genshi for templating, SQLAlchemy, and Dojo for the front end. We couldn't be happier with BFG, and it's working out great. BFGs classes as views that are actually zope multi-adapters is absolutely perfect for being able to override only very specific bits for certain domain resources. And the complete lack of magic globals anywhere makes testing and packaging the easiest we've had with any framework.\nymmv!\n",
"\nAlwasy new requirements are coming.\n\nSo what you really need is a framework that will allow you to adapt rapidly to changing specs.\nFrom personal experience, I can only discuss django, which is great because it allows you to get up and running quickly. \nIf you stick to its ORM, you will have a pretty easy time getting your models fleshed out and connected in useful ways. You will need to familiarize yourself with a database migration tool, because Django does not have one built in. dmigrations seems to be a leading tool for this.\nAnother choice for ORM's is SQLAlchemy, which appears to be a bit more mature out of the box. \n"
] |
[
12,
8,
5,
3,
2,
1,
0
] |
[] |
[] |
[
"frameworks",
"python",
"web_frameworks"
] |
stackoverflow_0001003131_frameworks_python_web_frameworks.txt
|
Q:
Is ActiveMQ's failover mechanism supported by C# (openwire) & python (stomp) clients?
I'd like to use ActiveMQ to connect python service with C# clients.
Is there a way to specify failover connection in C# (openwire) and python (Stomp)?
The ActiveMQ will be configured Shared File System Master Slave.
A:
The C# client supports failover; see: http://issues.apache.org/activemq/browse/AMQNET-26.
The Python client probably doesn't support it.
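On the C# (NMS) side the usual spelling is a failover transport URI, e.g. failover:(tcp://primary:61616,tcp://backup:61616). On the Python side, one option is to implement failover by hand: try each broker of the master/slave pair in turn and reconnect on failure. A minimal raw-STOMP sketch (the broker names are placeholders, and a real client would also read the CONNECTED frame back):
import socket

# Hypothetical master/slave pair -- substitute your broker addresses.
BROKERS = [("broker-a.example.com", 61613), ("broker-b.example.com", 61613)]

def connect_with_failover(brokers, timeout=5.0):
    """Try each broker in turn; return a socket with a STOMP CONNECT sent."""
    for host, port in brokers:
        try:
            sock = socket.create_connection((host, port), timeout)
            sock.sendall("CONNECT\n\n\x00")  # minimal STOMP 1.0 CONNECT frame
            return sock
        except socket.error:
            continue  # this broker is down -- fall through to the next one
    raise IOError("no broker reachable: %r" % (brokers,))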
|
Is ActiveMQ's failover mechanism supported by C# (openwire) & python (stomp) clients?
|
I'd like to use ActiveMQ to connect python service with C# clients.
Is there a way to specify failover connection in C# (openwire) and python (Stomp)?
The ActiveMQ will be configured Shared File System Master Slave.
|
[
"C# client supports failover see: http://issues.apache.org/activemq/browse/AMQNET-26.\nPython client probably doesn't support it.\n"
] |
[
2
] |
[] |
[] |
[
"activemq",
"c#",
"nms",
"python",
"stomp"
] |
stackoverflow_0002223460_activemq_c#_nms_python_stomp.txt
|
Q:
How does one run Spawning with Django within a virtualenv?
Because of the way Eventlet, which Spawning depends on, installs itself, it can't be installed into a virtualenv. The following error (wrapped for readability) illustrates:
Running eventlet-0.9.4/setup.py -q bdist_egg --dist-dir \
/tmp/easy_install-m_s75o/eventlet-0.9.4/egg-dist-tmp-fAZK_u
error: SandboxViolation: chmod('/home/myuser/.python-eggs/\
greenlet-0.2-py2.6-linux-i686.egg-tmp/tmpgxa_uc.$extract', 493) {}
Without patching the Python path beyond all recognition, and installing Spawning globally (which would break the whole point of having a virtualenv anyway), how would one install/run this?
A:
The following five commands worked without any problems. How are you installing spawning?
virtualenv test
cd test/
. bin/activate
easy_install spawning
python -c 'import spawning'
|
How does one run Spawning with Django within a virtualenv?
|
Because of the way Eventlet, which Spawning depends on, installs itself, it can't be installed into a virtualenv. The following error (wrapped for readability) illustrates:
Running eventlet-0.9.4/setup.py -q bdist_egg --dist-dir \
/tmp/easy_install-m_s75o/eventlet-0.9.4/egg-dist-tmp-fAZK_u
error: SandboxViolation: chmod('/home/myuser/.python-eggs/\
greenlet-0.2-py2.6-linux-i686.egg-tmp/tmpgxa_uc.$extract', 493) {}
Without patching the Python path beyond all recognition, and installing Spawning globally (which would break the whole point of having a virtualenv anyway), how would one install/run this?
|
[
"The following five commands worked without any problems. How are you installing spawning?\nvirtualenv test\ncd test/\n. bin/activate\neasy_install spawning\npython -c 'import spawning'\n\n"
] |
[
3
] |
[] |
[] |
[
"django",
"python",
"spawning",
"virtualenv",
"wsgi"
] |
stackoverflow_0002245430_django_python_spawning_virtualenv_wsgi.txt
|
Q:
stream socket send/receive broadcast messages?
I browsed the python socket docs and google for two days but I did not find any answer. Yeah I am a network programming newbie :)
I would like to implement some LAN chatting system with specific functions for our needs. I am at the very beginning. I was able to implement a client-server model where the client connects to the server (socket.SOCK_STREAM) and they are able to exchange messages. I want to step forward. I want the client to discover, via a broadcast, how many other clients are available on the LAN.
I failed. Is it possible that a socket.SOCK_STREAM type socket could not be used for this task?
If so, what are my options? Using UDP packets? How do I have to listen for broadcast messages/packets?
A:
The broadcast is defined by the destination address.
For example if your own ip is 192.168.1.2, the broadcast address would be 192.168.1.255 (in most cases)
It is not related directly to python and will probably not be in its documentation. You are searching for network "general" knowledge, to a level much higher than sockets programming
*EDIT
Yes you are right, you cannot use SOCK_STREAM. SOCK_STREAM defines TCP communication. You should use UDP for broadcasting with socket.SOCK_DGRAM
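To make this concrete, here is a minimal sketch of both sides (the port number is arbitrary -- any free port your clients agree on works):
import socket

PORT = 50000  # arbitrary port all chat clients agree on

# Sender: announce yourself to everyone on the subnet.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
send_sock.sendto("who is out there?", ("<broadcast>", PORT))

# Receiver (running in every other client): listen for such announcements.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("", PORT))              # "" means all local interfaces
data, addr = recv_sock.recvfrom(1024)   # addr[0] is the sender's IP
The receiver side would normally loop on recvfrom and send a reply datagram back, which gives you your client discovery.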
|
stream socket send/receive broadcast messages?
|
I browsed the python socket docs and google for two days but I did not find any answer. Yeah I am a network programming newbie :)
I would like to implement some LAN chatting system with specific functions for our needs. I am at the very beginning. I was able to implement a client-server model where the client connects to the server (socket.SOCK_STREAM) and they are able to exchange messages. I want to step forward. I want the client to discover, via a broadcast, how many other clients are available on the LAN.
I failed. Is it possible that a socket.SOCK_STREAM type socket could not be used for this task?
If so, what are my options? Using UDP packets? How do I have to listen for broadcast messages/packets?
|
[
"The broadcast is defined by the destination address.\nFor example if your own ip is 192.168.1.2, the broadcast address would be 192.168.1.255 (in most cases)\nIt is not related directly to python and will probably not be in its documentation. You are searching for network \"general\" knowledge, to a level much higher than sockets programming\n*EDIT\nYes you are right, you cannot use SOCK_STREAM. SOCK_STREAM defines TCP communication. You should use UDP for broadcasting with socket.SOCK_DGRAM\n"
] |
[
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002247228_python.txt
|
Q:
Python String Method Conundrum
The following code is supposed to print MyWords after removing SpamWords[0]. However, instead of returning "yes" it returns "None". Why is it returning "None"?
MyWords = "Spam yes"
SpamWords = ["SPAM"]
SpamCheckRange = 0
print ((MyWords.upper()).split()).remove(SpamWords[SpamCheckRange])
A:
Because remove is a method that changes the mutable list object it's called on, and returns None.
l= MyWords.upper().split()
l.remove(SpamWords[SpamCheckRange])
# l is ['YES']
Perhaps you want:
>>> [word for word in MyWords.split() if word.upper() not in SpamWords]
['yes']
A:
remove is a method of list (str.split returns a list), not str. It mutates the original list (removing what you pass) and returns None, not a modified list.
|
Python String Method Conundrum
|
The following code is supposed to print MyWords after removing SpamWords[0]. However, instead of returning "yes" it returns "None". Why is it returning "None"?
MyWords = "Spam yes"
SpamWords = ["SPAM"]
SpamCheckRange = 0
print ((MyWords.upper()).split()).remove(SpamWords[SpamCheckRange])
|
[
"Because remove is a method that changes the mutable list object it's called on, and returns None.\nl= MyWords.upper().split()\nl.remove(SpamWords[SpamCheckRange])\n# l is ['YES']\n\nPerhaps you want:\n>>> [word for word in MyWords.split() if word.upper() not in SpamWords]\n['yes']\n\n",
"remove is a method of list (str.split returns a list), not str. It mutates the original list (removing what you pass) and returns None, not a modified list.\n"
] |
[
7,
0
] |
[] |
[] |
[
"methods",
"python",
"string"
] |
stackoverflow_0002247600_methods_python_string.txt
|
Q:
Associative Matrices?
I'm working on a project where I need to store a matrix of numbers indexed by two string keys. The matrix is not jagged, i.e. if a column key exists for any row then it should exist for all rows. Similarly, if a row key exists for any column then it should exist for all columns.
The obvious way to express this is with an associative array of associative arrays, but this is both awkward and inefficient, and it doesn't enforce the non-jaggedness property. Do any popular programming languages provide an associative matrix either built into the language or as part of their standard libraries? If so, how do they work, both at the API and implementation level? I'm using Python and D for this project, but examples in other languages would still be useful because I would be able to look at the API and figure out the best way to implement something similar in Python or D.
A:
Why not just use a standard matrix, but then have two dictionaries - one that converts the row keys to row indices and one that converts the columns keys to columns indices. You could make your own structure that would work this way fairly easily I think. You just make a class that contains the matrix and the two dictionaries and go from there.
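A bare-bones sketch of that idea (class and attribute names here are just illustrative):
class AssocMatrix(object):
    """A non-jagged matrix addressed by string row/column keys."""
    def __init__(self, row_keys, col_keys, default=0):
        self.rows = dict((k, i) for i, k in enumerate(row_keys))
        self.cols = dict((k, i) for i, k in enumerate(col_keys))
        self.data = [[default] * len(col_keys) for _ in row_keys]

    def __getitem__(self, key):
        row, col = key
        return self.data[self.rows[row]][self.cols[col]]

    def __setitem__(self, key, value):
        row, col = key
        self.data[self.rows[row]][self.cols[col]] = value

m = AssocMatrix(["alice", "bob"], ["jan", "feb"])
m["alice", "feb"] = 3
Non-jaggedness holds by construction: every row is created with every column, and unknown keys raise KeyError.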
A:
In Python you could have a dict indexed by a tuple of two strings, e.g
>>> d = {}
>>> d["foo","bar"] = 10
>>> d
{('foo', 'bar'): 10}
I am not sure what "enforce non-jaggedness" means for you, but you could either use a defaultdict to return a default value for entries that have not been explicitly set, or initialise the dict with the a known value:
>>> xkeys = "abcdef"
>>> ykeys = "xyz"
>>> d = dict(((x,y), 0) for x in xkeys for y in ykeys)
>>> d
{('b', 'y'): 0, ('a', 'z'): 0, ('b', 'x'): 0, ('e', 'y'): 0, ('a', 'x'): 0, ('f', 'z'): 0, ('a', 'y'): 0, ('f', 'y'): 0, ('d', 'y'): 0, ('f', 'x'): 0, ('d', 'x'): 0, ('e', 'x'): 0, ('e', 'z'): 0, ('c', 'x'): 0, ('d', 'z'): 0, ('c', 'y'): 0, ('c', 'z'): 0, ('b', 'z'): 0}
If you want to enforce that only keys in a known set are allowed then I suggest subclassing dict to add the validation.
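For example, a minimal sketch of such a validating subclass (the class name is illustrative, and note that update() would bypass the check unless it is overridden too):
from itertools import product

class FixedKeyDict(dict):
    """dict that only accepts (x, y) keys drawn from a fixed key set."""
    def __init__(self, xkeys, ykeys, default=0):
        allowed = list(product(xkeys, ykeys))
        dict.__init__(self, ((k, default) for k in allowed))
        self._allowed = set(allowed)

    def __setitem__(self, key, value):
        if key not in self._allowed:
            raise KeyError("not a valid (x, y) pair: %r" % (key,))
        dict.__setitem__(self, key, value)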
A:
The larry module for Python was recently released; I believe it does what you want.
|
Associative Matrices?
|
I'm working on a project where I need to store a matrix of numbers indexed by two string keys. The matrix is not jagged, i.e. if a column key exists for any row then it should exist for all rows. Similarly, if a row key exists for any column then it should exist for all columns.
The obvious way to express this is with an associative array of associative arrays, but this is both awkward and inefficient, and it doesn't enforce the non-jaggedness property. Do any popular programming languages provide an associative matrix either built into the language or as part of their standard libraries? If so, how do they work, both at the API and implementation level? I'm using Python and D for this project, but examples in other languages would still be useful because I would be able to look at the API and figure out the best way to implement something similar in Python or D.
|
[
"Why not just use a standard matrix, but then have two dictionaries - one that converts the row keys to row indices and one that converts the columns keys to columns indices. You could make your own structure that would work this way fairly easily I think. You just make a class that contains the matrix and the two dictionaries and go from there.\n",
"In Python you could have a dict indexed by a tuple of two strings, e.g\n>>> d = {}\n>>> d[\"foo\",\"bar\"] = 10\n>>> d\n{('foo', 'bar'): 10}\n\nI am not sure what \"enforce non-jaggedness\" means for you, but you could either use a defaultdict to return a default value for entries that have not been explicitly set, or initialise the dict with the a known value:\n>>> xkeys = \"abcdef\"\n>>> ykeys = \"xyz\"\n>>> d = dict(((x,y), 0) for x in xkeys for y in ykeys)\n>>> d\n{('b', 'y'): 0, ('a', 'z'): 0, ('b', 'x'): 0, ('e', 'y'): 0, ('a', 'x'): 0, ('f', 'z'): 0, ('a', 'y'): 0, ('f', 'y'): 0, ('d', 'y'): 0, ('f', 'x'): 0, ('d', 'x'): 0, ('e', 'x'): 0, ('e', 'z'): 0, ('c', 'x'): 0, ('d', 'z'): 0, ('c', 'y'): 0, ('c', 'z'): 0, ('b', 'z'): 0}\n\nIf you want to enforce that only keys in a known set are allowed then I suggest subclassing dict to add the validation.\n",
"the larry module for python was recently released. i believe it does what you want.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"associative_array",
"d",
"data_structures",
"matrix",
"python"
] |
stackoverflow_0002247197_associative_array_d_data_structures_matrix_python.txt
|
Q:
How can you access the "calling" object through a RelatedManager in Django?
Say I have a model with a foreign key to a Django Auth user:
class Something(models.Model):
user = models.ForeignKey(User, related_name='something')
I can then access this model through a RelatedManager:
u = User.objects.create_user('richardhenry', 'richard@example.com', 'password')
u.something.all()
My question is, if I create a SomethingManager and define some methods on it:
class SomethingManager(models.Manager):
def do_something(self):
pass
Is it possible to get the original User object (as in, the variable u) within the do_something() method? (Through the related manager; passing it in via the method args isn't what I'm after.)
A:
Managers are only directly connected to the model they manage. So in this case, your Manager would be connected to Something, but not directly to User.
Also, Managers begin with querysets, not objects, so you'll have to work from there.
Keep in mind that to use your custom methods with a RelatedManager you need to set use_for_related_fields = True in your Manager.
So to get to the user, you'd have to be a bit roundabout and get the object, then the user:
def do_something(self):
ids = self.get_query_set().values_list('user__id', flat=True)
return User.objects.filter(id__in=ids).distinct()
The above should return just one user, you could add a .get() at the end to get the object instead of a queryset, but I like returning querysets since you can keep chaining them.
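Putting the pieces together -- the flag plus the method above -- a minimal sketch of the manager would be (using the Django 1.x-era get_query_set / use_for_related_fields spellings from this answer):
from django.contrib.auth.models import User
from django.db import models

class SomethingManager(models.Manager):
    use_for_related_fields = True  # so u.something also uses this manager

    def do_something(self):
        ids = self.get_query_set().values_list('user__id', flat=True)
        return User.objects.filter(id__in=ids).distinct()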
|
How can you access the "calling" object through a RelatedManager in Django?
|
Say I have a model with a foreign key to a Django Auth user:
class Something(models.Model):
user = models.ForeignKey(User, related_name='something')
I can then access this model through a RelatedManager:
u = User.objects.create_user('richardhenry', 'richard@example.com', 'password')
u.something.all()
My question is, if I create a SomethingManager and define some methods on it:
class SomethingManager(models.Manager):
def do_something(self):
pass
Is it possible to get the original User object (as in, the variable u) within the do_something() method? (Through the related manager; passing it in via the method args isn't what I'm after.)
|
[
"Managers are only directly connected to the model they manage. So in this case, your Manager would be connected to Something, but not directly to User. \nAlso, Managers begin with querysets, not objects, so you'll have to work from there.\nKeep in mind that to use your custom methods with a RelatedManager you need to set use_for_related_fields = True in your Manager.\nSo to get to the user, you'd have to be a bit roundabout and get the object, then the user:\ndef do_something(\n ids = self.get_query_set().values_list('user__id', flat=True)\n return User.objects.filter(id__in=ids).distinct()\n\nThe above should return just one user, you could add a .get() at the end to get the object instead of a queryset, but I like returning querysets since you can keep chaining them.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002247508_django_python.txt
|
Q:
Anyone benchmarked virtual machine performance for build servers?
We have been trying to use virtual machines for build servers. Our build servers are all running WinXP32 and we are hosting them on VMWare Server 2.0 running on Ubuntu 9.10. We build a mix of C, C++, python packages, and other various deployment tasks (installers, 7z files, archives, etc). The management using VMWare hosted build servers is great. We can move them around, shared system resources on one large 8-core box, remotely access the systems through a web interface, and just basically manage things better.
But the problem is that the performance compared to using a physical machine seems to range from bad to horrid depending upon what day it is. It has proven very frustrating. Sometimes the system load for the host will go above 20 and sometimes it will be below 1. It doesn't seem to be based on how much work is actually being done on the systems. I suspect there is a bottleneck in the system, but I can't seem to figure out what it is. (most recent suspect is I/O, but we have a dedicated 1TB 7200RPM SATA 2 drive with 32MB of cache doing nothing but the virtual machines. Seems like enough for 1-2 machines. All other specs seem to be enough too. 8GB RAM, 2GB per VM, 8 cores, 1 per VM).
So after exhausting everything I can think of, I wanted to turn to the Stack Overflow community.
Has anyone run or seen anyone else run benchmarks of software build performance within a VM.
What should we expect relative to a physical system?
How much performance are we giving up?
What hardware / vm server configurations are people using?
Any help would be greatly appreciated.
A:
Disk IO is definitely a problem here, you just can't do any significant amount of disk IO activity when you're backing it up with a single spindle. The 32MB cache on a single SATA drive is going to be saturated just by your Host and a couple of Guest OS's ticking over. If you look at the disk queue length counter in your Ubuntu Host OS you should see that it is high (anything above 1 on this system with 2 drives for any length of time means something is waiting for that disk).
When I'm sizing infrastructure for VM's I generally take a ballpark of 30-50 IOPS per VM as an average, and that's for systems that do not exercise the disk subsystem very much. For systems that don't require a lot of IO activity you can drop down a bit but the IO patterns for build systems will be heavily biased towards lots of very random fairly small reads. To compound the issue you want a lot of those VM's building concurrently which will drive contention for the disk through the roof. Overall disk bandwidth is probably not a big concern (that SATA drive can probably push 70-100Meg/sec when the IO pattern is totally sequential) but when the files are small and scattered you are IO bound by the limits of the spindle which will be about 70-100 IO per second on a 7.2k SATA. A host OS running a Type 2 Hypervisor like VMware Server with a single guest will probably hit that under a light load.
My recommendation would be to build a RAID 10 array with smaller and ideally faster drives. 10k SAS drives will give you 100-150 IOPs each so a pack of 4 can handle 600 read IOPS and 300 write IOPs before topping out. Also make sure you align all of the data partitions for the drive hosting the VMDK's and within the Guest OS's if you are putting the VM files on a RAID array. For workloads like these that will give you a 20-30% disk performance improvement. Avoid RAID 5 for something like this, space is cheap and the write penalty on RAID 5 means you need 4 drives in a RAID 5 pack to equal the write performance of a single drive.
One other point I'd add is that VMware Server is not a great Hypervisor in terms of performance, if at all possible move to a Type 1 Hypervisor (like ESXi v4, it's also free). It's not trivial to set up and you lose the Host OS completely so that might be an issue but you'll see far better IO performance across the board particularly for disk and network traffic.
Edited to respond to your comment.
1) To see whether you actually have a problem on your existing Ubuntu host.
I see you've tried dstat, I don't think it gives you enough detail to understand what's happening but I'm not familiar with using it so I might be wrong. Iostat will give you a good picture of what is going on - this article on using iostat will help you get a better picture of the actual IO pattern hitting the disk - http://bhavin.directi.com/iostat-and-disk-utilization-monitoring-nirvana/ . The avgrq-sz and avgwq-sz are the raw indicators of how many requests are queued. High numbers are generally bad but what is actually bad varies with the disk type and RAID geometry. What you are ultimately interested in is seeing whether your disk IO's are spending more\increasing time in the queue than in actually being serviced. The calculation (await-svctim)/await*100 really tells you whether your disk is struggling to keep up, above 50% and your IO's are spending as long queued as being serviced by the disk(s), if it approaches 100% the disk is getting totally slammed. If you do find that the host is not actually stressed and VMware Server is actually just lousy (which it could well be, I've never used it on a Linux platform) then you might want to try one of the alternatives like VirtualBox before you jump onto ESXi.
2) To figure out what you need.
Baseline the IO requirements of a typical build on a system that has good\acceptable performance - on Windows look at the IOPS counters - Disk Reads/sec and Disk Writes/sec counters and make sure the average queue length is <1. You need to know the peak values for both while the system is loaded, instantaneous peaks could be very high if everything is coming from disk cache so watch for sustained peak values over the course of a minute or so. Once you have those numbers you can scope out a disk subsystem that will deliver what you need. The reason you need to look at the IO numbers is that they reflect the actual switching that the drive heads have to go through to complete your reads and writes (the IO's per second, IOPS) and unless you are doing large file streaming or full disk backups they will most accurately reflect the limits your disk will hit when under load.
Modern disks can sustain approximately the following:
7.2k SATA drives - 70-100 IOPS
10k SAS drives - 120-150 IOPS
15k SAS drives - 150-200 IOPS
Note these are approximate numbers for typical drives and represent the saturated capability of the drives under maximum load with unfavourable IO patterns. This is designing for worst case, which is what you should do unless you really know what you are doing.
RAID packs allow you to parallelize your IO workload and with a decent RAID controller an N drive RAID pack will give you N*(Base IOPS for 1 disk) for read IO. For write IO there is a penalty caused by the RAID policy - RAID 0 has no penalty, writes are as fast as reads. RAID 5 requires 2 reads and 2 writes per IO (read parity, read existing block, write new parity, write new block) so it has a penalty of 4. RAID 10 has a penalty of 2 (2 writes per IO). RAID 6 has a penalty of 5. To figure out how many IOPS you need from a RAID array you take the basic read IOPS number your OS needs and add to that the product of the write IOPS number the OS needs and the relevant penalty factor.
3) Now work out the structure of the RAID array that will meet your performance needs
If your analysis of a physical baseline system tells you that you only need 4\5 IOPS then your single drive might be OK. I'd be amazed if it does but don't take my word for it - get your data and make an informed decision.
Anyway let's assume you measured 30 read IOPS and 20 write IOPS during your baseline exercise and you want to be able to support 8 instances of these build systems as VM's. To deliver this your disk subsystem will need to be able to support 240 read IOPS and 160 write IOPS to the OS. Adjust your own calculations to suit the number of systems you really need.
If you choose RAID 10 (and I strongly encourage it, RAID 10 sacrifices capacity for performance but when you design for enough performance you can size the disks to get the capacity you need and the result will usually be cheaper than RAID5 unless your IO pattern involves very few writes) Your disks need to be able to deliver 560 IOPS in total (240 for read, and 320 for write in order to account for the RAID 10 write penalty factor of 2).
This would require:
- 4 15k SAS drives
- 6 10k SAS drives (round up, RAID 10 requires an even no of drives)
- 8 7.2k SATA drives
If you were to choose RAID 5 you would have to adjust for the increased write penalty and will therefore need 880 IOPS to deliver the performance you want.
That would require:
- 6 15k SAS drives
- 8 10k SAS drives
- 14 7.2k SATA drives
You'll have a lot more space this way but it will cost almost twice as much because you need so many more drives and you'll need a fairly big box to fit those into. This is why I strongly recommend RAID 10 if performance is any concern at all.
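As a sanity check, the sizing arithmetic above is easy to script -- a minimal sketch reproducing this answer's numbers:
# RAID write penalties from the paragraphs above.
RAID_WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 5}

def required_array_iops(read_iops, write_iops, raid_level):
    """Back-end IOPS the disk pack must sustain for a given RAID policy."""
    return read_iops + write_iops * RAID_WRITE_PENALTY[raid_level]

vms = 8
read_total = vms * 30    # 30 read IOPS measured per VM in the baseline
write_total = vms * 20   # 20 write IOPS measured per VM in the baseline

print(required_array_iops(read_total, write_total, "raid10"))  # 560
print(required_array_iops(read_total, write_total, "raid5"))   # 880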
Another option is to find a good SSD (like the Intel X-25E, not the X-25M or anything cheaper) that has enough storage to meet your needs. Buy two and set them up for RAID 1, SSD's are pretty good but their failure rates (even for drives like the X-25E's) are currently worse than rotating disks so unless you are prepared to deal with a dead system you want RAID 1 at a minimum. Combined with a good high end controller something like the X-25E will easily sustain 6k IOPS in the real world, that's the equivalent of 30 15k SAS drives. SSD's are quite expensive per GB of capacity but if they are used appropriately they can deliver much more cost effective solutions for tasks that are IO intensive.
|
Anyone benchmarked virtual machine performance for build servers?
|
We have been trying to use virtual machines for build servers. Our build servers are all running WinXP32 and we are hosting them on VMWare Server 2.0 running on Ubuntu 9.10. We build a mix of C, C++, python packages, and other various deployment tasks (installers, 7z files, archives, etc). The management using VMWare hosted build servers is great. We can move them around, shared system resources on one large 8-core box, remotely access the systems through a web interface, and just basically manage things better.
But the problem is that the performance compared to using a physical machine seems to range from bad to horrid depending upon what day it is. It has proven very frustrating. Sometimes the system load for the host will go above 20 and sometimes it will be below 1. It doesn't seem to be based on how much work is actually being done on the systems. I suspect there is a bottleneck in the system, but I can't seem to figure out what it is. (most recent suspect is I/O, but we have a dedicated 1TB 7200RPM SATA 2 drive with 32MB of cache doing nothing but the virtual machines. Seems like enough for 1-2 machines. All other specs seem to be enough too. 8GB RAM, 2GB per VM, 8 cores, 1 per VM).
So after exhausting everything I can think of, I wanted to turn to the Stack Overflow community.
Has anyone run or seen anyone else run benchmarks of software build performance within a VM.
What should we expect relative to a physical system?
How much performance are we giving up?
What hardware / vm server configurations are people using?
Any help would be greatly appreciated.
|
[
"Disk IO is definitely a problem here, you just can't do any significant amount of disk IO activity when you're backing it up with a single spindle. The 32MB cache on a single SATA drive is going to be saturated just by your Host and a couple of Guest OS's ticking over. If you look at the disk queue length counter in your Ubuntu Host OS you should see that it is high (anything above 1 on this system with 2 drive for any length of time means something is waiting for that disk).\nWhen I'm sizing infrastructure for VM's I generally take a ballpark of 30-50 IOPS per VM as an average, and that's for systems that do not exercise the disk subsystem very much. For systems that don't require a lot of IO activity you can drop down a bit but the IO patterns for build systems will be heavily biased towards lots of very random fairly small reads. To compound the issue you want a lot of those VM's building concurrently which will drive contention for the disk through the roof. Overall disk bandwidth is probably not a big concern (that SATA drive can probably push 70-100Meg/sec when the IO pattern is totally sequential) but when the files are small and scattered you are IO bound by the limits of the spindle which will be about 70-100 IO per second on a 7.2k SATA. A host OS running a Type 2 Hypervisor like VMware Server with a single guest will probably hit that under a light load.\nMy recommendation would be to build a RAID 10 array with smaller and ideally faster drives. 10k SAS drives will give you 100-150 IOPs each so a pack of 4 can handle 600 read IOPS and 300 write IOPs before topping out. Also make sure you align all of the data partitions for the drive hosting the VMDK's and within the Guest OS's if you are putting the VM files on a RAID array. For workloads like these that will give you a 20-30% disk performance improvement. Avoid RAID 5 for something like this, space is cheap and the write penalty on RAID 5 means you need 4 drives in a RAID 5 pack to equal the write performance of a single drive. \nOne other point I'd add is that VMware Server is not a great Hypervisor in terms of performance, if at all possible move to a Type 1 Hypervisor (like ESXi v4, it's also free). It's not trivial to set up and you lose the Host OS completely so that might be an issue but you'll see far better IO performance across the board particularly for disk and network traffic. \nEdited to respond to your comment.\n1) To see whether you actually have a problem on your existing Ubuntu host.\nI see you've tried dstat, I don't think it gives you enough detail to understand what's happening but I'm not familiar with using it so I might be wrong. Iostat will give you a good picture of what is going on - this article on using iostat will help you get a better picture of the actual IO pattern hitting the disk - http://bhavin.directi.com/iostat-and-disk-utilization-monitoring-nirvana/ . The avgrq-sz and avgwq-sz are the raw indicators of how many requests are queued. High numbers are generally bad but what is actually bad varies with the disk type and RAID geometry. What you are ultimately interested in is seeing whether your disk IO's are spending more\\increasing time in the queue than in actually being serviced. The calculation (await-svctim)/await*100 really tells you whether your disk is struggling to keep up, above 50% and your IO's are spending as long queued as being serviced by the disk(s), if it approaches 100% the disk is getting totally slammed. 
If you do find that the host is not actually stressed and VMware Server is actually just lousy (which it could well be, I've never used it on a Linux platform) then you might want to try one of the alternatives like VirtualBox before you jump onto ESXi.\n2) To figure out what you need.\nBaseline the IO requirements of a typical build on a system that has good\\acceptable performance - on Windows look at the IOPS counters - Disk Reads/sec and Disk Writes/sec counters and make sure the average queue length is <1. You need to know the peak values for both while the system is loaded, instantaneous peaks could be very high if everything is coming from disk cache so watch for sustained peak values over the course of a minute or so. Once you have those numbers you can scope out a disk subsystem that will deliver what you need. The reason you need to look at the IO numbers is that they reflect the actual switching that the drive heads have to go through to complete your reads and writes (the IO's per second, IOPS) and unless you are doing large file streaming or full disk backups they will most accurately reflect the limits your disk will hit when under load. \nModern disks can sustain approximately the following: \n\n7.2k SATA drives - 70-100 IOPS \n10k SAS drives - 120-150 IOPS \n15k SAS drives - 150-200 IOPS \n\nNote these are approximate numbers for typical drives and represent the saturated capability of the drives under maximum load with unfavourable IO patterns. This is designing for worst case, which is what you should do unless you really know what you are doing.\nRAID packs allow you to parallelize your IO workload and with a decent RAID controller an N drive RAID pack will give you N*(Base IOPS for 1 disk) for read IO. For write IO there is a penalty caused by the RAID policy - RAID 0 has no penalty, writes are as fast as reads. RAID 5 requires 2 reads and 2 writes per IO (read parity, read existing block, write new parity, write new block) so it has a penalty of 4. RAID 10 has a penalty of 2 (2 writes per IO). RAID 6 has a penalty of 5. To figure out how many IOPS you need from a RAID array you take the basic read IOPS number your OS needs and add to that the product of the write IOPS number the OS needs and the relevant penalty factor.\n3) Now work out the structure of the RAID array that will meet your performance needs\nIf your analysis of a physical baseline system tells you that you only need 4\\5 IOPS then your single drive might be OK. I'd be amazed if it does but don't take my word for it - get your data and make an informed decision.\nAnyway let's assume you measured 30 read IOPS and 20 write IOPS during your baseline exercise and you want to be able to support 8 instances of these build systems as VM's. To deliver this your disk subsystem will need to be able to support 240 read IOPS and 160 write IOPS to the OS. 
Adjust your own calculations to suit the number of systems you really need.\nIf you choose RAID 10 (and I strongly encourage it, RAID 10 sacrifices capacity for performance but when you design for enough performance you can size the disks to get the capacity you need and the result will usually be cheaper than RAID5 unless your IO pattern involves very few writes) Your disks need to be able to deliver 560 IOPS in total (240 for read, and 320 for write in order to account for the RAID 10 write penalty factor of 2).\nThis would require:\n - 4 15k SAS drives\n - 6 10k SAS drives (round up, RAID 10 requires an even no of drives)\n - 8 7.2k SATA drives \nIf you were to choose RAID 5 you would have to adjust for the increased write penalty and will therefore need 880 IOPS to deliver the performance you want.\nThat would require:\n - 6 15k SAS drives\n - 8 10k SAS drives\n - 14 7.2k SATA drives \nYou'll have a lot more space this way but it will cost almost twice as much because you need so many more drives and you'll need a fairly big box to fit those into. This is why I strongly recommend RAID 10 if performance is any concern at all.\nAnother option is to find a good SSD (like the Intel X-25E, not the X-25M or anything cheaper) that has enough storage to meet your needs. Buy two and set them up for RAID 1, SSD's are pretty good but their failure rates (even for drives like the X-25E's) are currently worse than rotating disks so unless you are prepared to deal with a dead system you want RAID 1 at a minimum. Combined with a good high end controller something like the X-25E will easily sustain 6k IOPS in the real world, that's the equivalent of 30 15k SAS drives. SSD's are quite expensive per GB of capacity but if they are used appropriately they can deliver much more cost effective solutions for tasks that are IO intensive.\n"
] |
[
8
] |
[] |
[] |
[
"automation",
"build",
"c++",
"python",
"vmware"
] |
stackoverflow_0002247755_automation_build_c++_python_vmware.txt
|
Q:
Python best way to check for existing key
Which is the more efficient/faster/better way to check if a key exists?
if 'subject' in request.POST:
subject = request.POST['subject']
else:
// handle error
OR
try:
subject = request.POST['subject']
except KeyError:
// handle error
A:
The latter (try/except) form is generally the better form.
try blocks are very cheap but catching an exception can be more expensive. A containment check on a dict tends to be cheap, but not cheaper than nothing. I suspect there will be a balance of efficiency depending on how often 'subject' is really there. However, this doesn't matter, since premature optimization is useless, distracting, wasteful, and ineffective. You would go with the better solution.
If the code would actually be of the form
if 'subject' in request.POST:
subject = request.POST['subject']
else:
subject = some_default
then what you actually want is request.POST.get('subject', some_default).
A:
I use the .get() method — it is the preferable method.
Python 2.5.2 (r252:60911, Jul 22 2009, 15:33:10)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dis
>>> def f1(key, d):
... if key in d:
... return d[key]
... else:
... return "default"
...
>>> dis.dis(f1)
2 0 LOAD_FAST 0 (key)
3 LOAD_FAST 1 (d)
6 COMPARE_OP 6 (in)
9 JUMP_IF_FALSE 12 (to 24)
12 POP_TOP
3 13 LOAD_FAST 1 (d)
16 LOAD_FAST 0 (key)
19 BINARY_SUBSCR
20 RETURN_VALUE
21 JUMP_FORWARD 5 (to 29)
>> 24 POP_TOP
5 25 LOAD_CONST 1 ('default')
28 RETURN_VALUE
>> 29 LOAD_CONST 0 (None)
32 RETURN_VALUE
>>> def f2(key, d):
... return d.get(key, "default")
...
>>> dis.dis(f2)
2 0 LOAD_FAST 1 (d)
3 LOAD_ATTR 0 (get)
6 LOAD_FAST 0 (key)
9 LOAD_CONST 1 ('default')
12 CALL_FUNCTION 2
15 RETURN_VALUE
>>> def f3(key, d):
... try:
... return d[key]
... except KeyError:
... return "default"
...
>>> dis.dis(f3)
2 0 SETUP_EXCEPT 12 (to 15)
3 3 LOAD_FAST 1 (d)
6 LOAD_FAST 0 (key)
9 BINARY_SUBSCR
10 RETURN_VALUE
11 POP_BLOCK
12 JUMP_FORWARD 23 (to 38)
4 >> 15 DUP_TOP
16 LOAD_GLOBAL 0 (KeyError)
19 COMPARE_OP 10 (exception match)
22 JUMP_IF_FALSE 11 (to 36)
25 POP_TOP
26 POP_TOP
27 POP_TOP
28 POP_TOP
5 29 LOAD_CONST 1 ('default')
32 RETURN_VALUE
33 JUMP_FORWARD 2 (to 38)
>> 36 POP_TOP
37 END_FINALLY
>> 38 LOAD_CONST 0 (None)
41 RETURN_VALUE
A:
Last time I checked, the first one is a few nanoseconds faster. But most pythonistas seem to favor the second one.
I think I'm not the only one who wants to reserve exceptions for exceptional behavior, so I try to use the first one, reserving the second one for when it's invalid not to have the key.
A:
The second will fail with collections.defaultdict, and the exception will cause a small performance bump. Other than that, there is no real difference between the two.
A:
I think it depends on whether 'subject' not being in POST is actually an exception. If it is not supposed to happen but you are just being extra careful, then your second method would I assume be more efficient and quicker. However if you are using the check to do 1 thing or another then it is not appropriate to use an exception. From the look of your code, I would go with your second option.
A:
I too like get(); you can also specify a default value (other than None) in case that makes sense.
A:
dict and many dict-like objects (including Django's HttpRequest you seem to be using) allow passing a default value to get():
subject = request.POST.get('subject', '[some_default_subject]')
This is the preferable method as it is the shortest and most transparent about your intentions.
|
Python best way to check for existing key
|
Which is the more efficient/faster/better way to check if a key exists?
if 'subject' in request.POST:
subject = request.POST['subject']
else:
// handle error
OR
try:
subject = request.POST['subject']
except KeyError:
// handle error
|
[
"The latter (try/except) form is generally the better form. \ntry blocks are very cheap but catching an exception can be more expensive. A containment check on a dict tends to be cheap, but not cheaper than nothing. I suspect there will be a balance of efficiency depending on how often 'subject' is really there. However, this doesn't matter, since premature optimization is useless, distracting, wasteful, and ineffective. You would go with the better solution.\nIf the code would actually be of the form \nif 'subject' in request.POST:\n subject = request.POST['subject']\nelse:\n subject = some_default\n\nthen what you actually want is request.POST.get('subject', some_default).\n",
"I use .get() method — it is preferable method.\nPython 2.5.2 (r252:60911, Jul 22 2009, 15:33:10)\n[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import dis\n>>> def f1(key, d):\n... if key in d:\n... return d[key]\n... else:\n... return \"default\"\n...\n>>> dis.dis(f1)\n 2 0 LOAD_FAST 0 (key)\n 3 LOAD_FAST 1 (d)\n 6 COMPARE_OP 6 (in)\n 9 JUMP_IF_FALSE 12 (to 24)\n 12 POP_TOP\n\n 3 13 LOAD_FAST 1 (d)\n 16 LOAD_FAST 0 (key)\n 19 BINARY_SUBSCR\n 20 RETURN_VALUE\n 21 JUMP_FORWARD 5 (to 29)\n >> 24 POP_TOP\n\n 5 25 LOAD_CONST 1 ('default')\n 28 RETURN_VALUE\n >> 29 LOAD_CONST 0 (None)\n 32 RETURN_VALUE\n>>> def f2(key, d):\n... return d.get(key, \"default\")\n...\n>>> dis.dis(f2)\n 2 0 LOAD_FAST 1 (d)\n 3 LOAD_ATTR 0 (get)\n 6 LOAD_FAST 0 (key)\n 9 LOAD_CONST 1 ('default')\n 12 CALL_FUNCTION 2\n 15 RETURN_VALUE\n>>> def f3(key, d):\n... try:\n... return d[key]\n... except KeyError:\n... return \"default\"\n...\n>>> dis.dis(f3)\n 2 0 SETUP_EXCEPT 12 (to 15)\n\n 3 3 LOAD_FAST 1 (d)\n 6 LOAD_FAST 0 (key)\n 9 BINARY_SUBSCR\n 10 RETURN_VALUE\n 11 POP_BLOCK\n 12 JUMP_FORWARD 23 (to 38)\n\n 4 >> 15 DUP_TOP\n 16 LOAD_GLOBAL 0 (KeyError)\n 19 COMPARE_OP 10 (exception match)\n 22 JUMP_IF_FALSE 11 (to 36)\n 25 POP_TOP\n 26 POP_TOP\n 27 POP_TOP\n 28 POP_TOP\n\n 5 29 LOAD_CONST 1 ('default')\n 32 RETURN_VALUE\n 33 JUMP_FORWARD 2 (to 38)\n >> 36 POP_TOP\n 37 END_FINALLY\n >> 38 LOAD_CONST 0 (None)\n 41 RETURN_VALUE\n\n",
"Last time I checked, the first one is a few nanoseconds faster. But most phythonistas seem to favor the second one.\nI think I'm not the only one that want to reserve exceptions for exceptional behavior, so I try to use the first one, reserving the second one when it's invalid not to have the key\n",
"The second will fail with collections.defaultdict, and the exception will cause a small performance bump. Other than there there is no real difference between the two.\n",
"I think it depends on whether 'subject' not being in POST is actually an exception. If it is not supposed to happen but you are just being extra careful, then your second method would I assume be more efficient and quicker. However if you are using the check to do 1 thing or another then it is not appropriate to use an exception. From the look of your code, I would go with your second option.\n",
"I too like get() you can also specify a default value (other than none) in case that makes sense.\n",
"dict and many dict-like objects (including Django's HttpRequest you seem to be using) allow passing default value to get():\nsubject = request.POST.get('subject', '[some_default_subject]')\n\nThis is preferrable method as it is the shortest and most transparent about your intentions.\n"
] |
[
47,
6,
4,
2,
1,
1,
0
] |
[
"subject = request.POST.get(\"subject\")\nif subject is None:\n ...\n\n:)\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0002247412_python.txt
|
Q:
Python: Identifying a numeric string?
I tried a couple of approaches, I am really only concerned with performance, not correctness. I noticed that the regex based implementation is about 3-4x slower than the one that uses type coercion. Is there another, more efficient way of doing this?
def IsNumber(x):
try:
_ = float(x)
except ValueError:
return False
return True
def IsNumber2(x):
import re
if re.match("^\d*.?\d*$", x) == None:
return False
return True
Thanks!
A:
First of all, they're not doing the same thing. Floats can be specified as "1e3", for example, and float() will accept that. It's also not coercion, but conversion.
Secondly, don't import re in IsNumber2, especially if you're trying to use it with timeit. Do the import outside of the function.
Finally, it doesn't surprise me that float() is faster. It's a dedicated routine written in C for a very specific purpose, while regex must be converted into a form that's interpreted.
Is your first version, that uses float(), fast enough? It should be, and I don't know of a better way to do the same thing in Python.
A:
Not really. Coercion is the accepted way to do this.
A:
The answer depends a lot on what you mean by 'numeric string'. If your definition of numeric string is 'anything that float accepts', then it's difficult to improve on the try-except method.
But bear in mind that float may be more liberal than you want it to be: on most machines, it'll accept strings representing infinities and nans. On my machine, it accepts 'nan(dead!$#parrot)', for example. It will also accept leading and trailing whitespace. And depending on your application, you may want to exclude exponential representations of floats. In these cases, using a regex would make sense. To just exclude infinities and nans, it might be quicker to use the try-except method and then use math.isnan and math.isinf to check the result of the conversion.
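For example, a small sketch of that stricter check (the function name is just illustrative):
import math

def is_finite_number(s):
    """Accept only strings float() parses to a finite value (no inf/nan)."""
    try:
        x = float(s)
    except ValueError:
        return False
    return not (math.isnan(x) or math.isinf(x))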
Writing a correct regex for numeric strings is a surprisingly error-prone task. Your IsNumber2 function accepts the string '.', for example. You can find a battle-tested version of a numeric-string regex in the decimal module source. Here it is (with some minor edits):
_parser = re.compile(r""" # A numeric string consists of:
(?P<sign>[-+])? # an optional sign, followed by either...
(
(?=\d|\.\d) # ...a number (with at least one digit)
(?P<int>\d*) # having a (possibly empty) integer part
(\.(?P<frac>\d*))? # followed by an optional fractional part
(E(?P<exp>[-+]?\d+))? # followed by an optional exponent, or...
|
Inf(inity)? # ...an infinity, or...
|
(?P<signal>s)? # ...an (optionally signaling)
NaN # NaN
(?P<diag>\d*) # with (possibly empty) diagnostic info.
)
\Z
""", re.VERBOSE | re.IGNORECASE | re.UNICODE).match
This pretty much matches exactly what float accepts, except for the leading and trailing whitespace and some slight differences for nans (the extra 's' for signaling nans, and the diagnostic info). When I need a numeric regex, I usually start with this one and edit out the bits I don't need.
N.B. It's conceivable that float could be slower than a regex, since it not only has to parse the string, but also turn it into a float, which is quite an involved computation; it would still be a surprise if it were, though.
A:
You might try compiling your regular expression first, but I'd imagine it would still be slower.
Also, if you want to know if your string is a number because you're going to do calculations with it, you'll have to coerce it anyway.
|
Python: Identifying a numeric string?
|
I tried a couple of approaches, I am really only concerned with performance, not correctness. I noticed that the regex based implementation is about 3-4x slower than the one that uses type coercion. Is there another, more efficient way of doing this?
def IsNumber(x):
try:
_ = float(x)
except ValueError:
return False
return True
def IsNumber2(x):
import re
if re.match("^\d*.?\d*$", x) == None:
return False
return True
Thanks!
|
[
"First of all, they're not doing the same thing. Floats can be specified as \"1e3\", for example, and float() will accept that. It's also not coercion, but conversion.\nSecondly, don't import re in IsNumber2, especially if you're trying to use it with timeit. Do the import outside of the function.\nFinally, it doesn't surprise me that float() is faster. It's a dedicated routine written in C for a very specific purpose, while regex must be converted into a form that's interpreted.\nIs your first version, that uses float(), fast enough? It should be, and I don't know of a better way to do the same thing in Python.\n",
"Not really. Coercion is the accepted way to do this.\n",
"The answer depends a lot on what you mean by 'numeric string'. If your definition of numeric string is 'anything that float accepts', then it's difficult to improve on the try-except method.\nBut bear in mind that float may be more liberal than you want it to be: on most machines, it'll accept strings representing infinities and nans. On my machine, it accepts 'nan(dead!$#parrot)', for example. It will also accept leading and trailing whitespace. And depending on your application, you may want to exclude exponential representations of floats. In these cases, using a regex would make sense. To just exclude infinities and nans, it might be quicker to use the try-except method and then use math.isnan and math.isinf to check the result of the conversion.\nWriting a correct regex for numeric strings is a surprisingly error-prone task. Your IsNumber2 function accepts the string '.', for example. You can find a battle-tested version of a numeric-string regex in the decimal module source. Here it is (with some minor edits):\n_parser = re.compile(r\"\"\" # A numeric string consists of:\n (?P<sign>[-+])? # an optional sign, followed by either...\n (\n (?=\\d|\\.\\d) # ...a number (with at least one digit)\n (?P<int>\\d*) # having a (possibly empty) integer part\n (\\.(?P<frac>\\d*))? # followed by an optional fractional part\n (E(?P<exp>[-+]?\\d+))? # followed by an optional exponent, or...\n |\n Inf(inity)? # ...an infinity, or...\n |\n (?P<signal>s)? # ...an (optionally signaling)\n NaN # NaN\n (?P<diag>\\d*) # with (possibly empty) diagnostic info.\n )\n \\Z\n\"\"\", re.VERBOSE | re.IGNORECASE | re.UNICODE).match\n\nThis pretty much matches exactly what float accepts, except for the leading and trailing whitespace and some slight differences for nans (the extra 's' for signaling nans, and the diagnostic info). When I need a numeric regex, I usually start with this one and edit out the bits I don't need.\nN.B. It's conceivable that float could be slower than a regex, since it not only has to parse the string, but also turn it into a float, which is quite an involved computation; it would still be a surprise if it were, though.\n",
"You might try compiling your regular expression first, but I'd imagine it would still be slower. \nAlso, if you want to know if your string is a number because you're going to do calculations with it, you'll have to coerce it anyway. \n"
] |
[
6,
2,
2,
0
] |
[] |
[] |
[
"coercion",
"python",
"regex"
] |
stackoverflow_0002248185_coercion_python_regex.txt
|
Q:
Python Jabber/XMPP client library for Twisted
I am looking for a Python library for writing Jabber/XMPP clients using the Twisted framework.
A:
Wokkel is your best bet. It's an enhancement on the core Twisted Words functionality built into Twisted. It has several major users, including the guys behind Stanziq/Strophe.
A:
Twisted Words
|
Python Jabber/XMPP client library for Twisted
|
I am looking for a Python library for writing Jabber/XMPP clients using the Twisted framework.
|
[
"Wokkel is your best bet. It's an enhancement on the core Twisted Words functionality built into Twisted. It has several major users, include the guys behind Stanziq/Strophe.\n",
"Twisted Words\n"
] |
[
12,
3
] |
[] |
[] |
[
"python",
"twisted",
"xmpp"
] |
stackoverflow_0002248587_python_twisted_xmpp.txt
|
Q:
Python: Embed Chaco in PyQt4 Mystery
How do I go about adding Chaco to an existing PyQt4 application?
Hours of searches yielded little (search for yourself). So far I've figured I need the following lines:
import os
os.environ['ETS_TOOLKIT']='qt4'
I could not find PyQt4-Chaco code anywhere on the internets
I would be very grateful to anyone filling in the blanks to show me the simplest line plot possible (with 2 points)
from PyQt4 import QtCore, QtGui
import sys
import os
os.environ['ETS_TOOLKIT']='qt4'
from enthought <blanks>
:
:
app = QtGui.QApplication(sys.argv)
main_window = QtGui.QMainWindow()
main_window.setCentralWidget(<blanks>)
main_window.show()
app.exec_()
print('bye')
what Chaco/Enthought class inherits from QWidget ?
A:
I just saw this today. It is absolutely possible and fairly straightforward to embed Chaco inside Qt as well as WX. In fact, all of the examples, when run with your ETS_TOOLKIT environment var set to "qt4", are doing exactly this. (Chaco requires there to be an underlying GUI toolkit.)
I have written a small, standalone example that fills in the blanks in your code template, and demonstrates how to embed a chaco Plot inside a Qt Window.
qt_example.py:
"""
Example of how to directly embed Chaco into Qt widgets.
The actual plot being created is drawn from the basic/line_plot1.py code.
"""
import sys
from numpy import linspace
from scipy.special import jn
from PyQt4 import QtGui, QtCore
from enthought.etsconfig.etsconfig import ETSConfig
ETSConfig.toolkit = "qt4"
from enthought.enable.api import Window
from enthought.chaco.api import ArrayPlotData, Plot
from enthought.chaco.tools.api import PanTool, ZoomTool
class PlotFrame(QtGui.QWidget):
""" This widget simply hosts an opaque enthought.enable.qt4_backend.Window
object, which provides the bridge between Enable/Chaco and the underlying
UI toolkit (qt4). This code is basically a duplicate of what's in
enthought.enable.example_support.DemoFrame, but is reproduced here to
make this example more stand-alone.
"""
def __init__(self, parent, **kw):
QtGui.QWidget.__init__(self)
def create_chaco_plot(parent):
x = linspace(-2.0, 10.0, 100)
pd = ArrayPlotData(index = x)
for i in range(5):
pd.set_data("y" + str(i), jn(i,x))
# Create some line plots of some of the data
plot = Plot(pd, title="Line Plot", padding=50, border_visible=True)
plot.legend.visible = True
plot.plot(("index", "y0", "y1", "y2"), name="j_n, n<3", color="red")
plot.plot(("index", "y3"), name="j_3", color="blue")
# Attach some tools to the plot
plot.tools.append(PanTool(plot))
zoom = ZoomTool(component=plot, tool_mode="box", always_on=False)
plot.overlays.append(zoom)
# This Window object bridges the Enable and Qt4 worlds, and handles events
# and drawing. We can create whatever hierarchy of nested containers we
# want, as long as the top-level item gets set as the .component attribute
# of a Window.
return Window(parent, -1, component = plot)
def main():
app = QtGui.QApplication(sys.argv)
main_window = QtGui.QMainWindow(size=QtCore.QSize(500,500))
enable_window = create_chaco_plot(main_window)
# The .control attribute references a QWidget that gives Chaco events
# and that Chaco paints into.
main_window.setCentralWidget(enable_window.control)
main_window.show()
app.exec_()
if __name__ == "__main__":
main()
A:
here is what you need:
import os, sys
os.environ['ETS_TOOLKIT'] = 'qt4'
from PyQt4 import QtGui
app = QtGui.QApplication(sys.argv)
from numpy import linspace, pi, sin
from enthought.enable.api import Component, Container, Window
from enthought.chaco.api import create_line_plot, \
add_default_axes, \
add_default_grids, \
OverlayPlotContainer
x = linspace(-pi,pi,100)
y = sin(x)
plot = create_line_plot((x,y))
add_default_grids(plot)
add_default_axes(plot)
container = OverlayPlotContainer(padding = 50)
container.add(plot)
plot_window = Window(None, -1, component=container)
plot_window.control.setWindowTitle('hello')
plot_window.control.resize(400,400)
plot_window.control.show()
app.exec_()
plot_window.control inherits from QWidget
A:
I don't know about Chaco, but I'm using VTK; here is code to draw some lines, given the (x, y, z) coordinates of them.
"""Define an actor and its properties, to be drawn on the scene using 'lines' representation."""
    self.ren = vtk.vtkRenderer()
apd=vtk.vtkAppendPolyData()
for i in xrange(len(coordinates)):
line=vtk.vtkLineSource()
line.SetPoint1(coordinates[i][0]) # 1st atom coordinates for a given bond
line.SetPoint2(coordinates[i][1]) # 2nd atom coordinates for a given bond
line.SetResolution(21)
apd.AddInput(line.GetOutput())
mapper = vtk.vtkPolyDataMapper()
mapper.SetInput(apd.GetOutput())
lines_actor = vtk.vtkActor()
lines_actor.SetMapper(mapper)
lines_actor.GetProperty().SetColor(colorR, colorG, colorB)
lines_actor.GetProperty().SetOpacity(opacity)
# Add newly created actor to the renderer.
    self.ren.AddViewProp(lines_actor) # Prop is the superclass of all actors, composite props etc.
# Update renderer.
self.ren.GetRenderWindow().Render()
It uses QVTKRenderWindowInteractor to interact with the PyQT4.
|
Python: Embed Chaco in PyQt4 Mystery
|
How do I go about adding Chaco to an existing PyQt4 application?
Hours of searches yielded little (search for yourself). So far I've figured I need the following lines:
import os
os.environ['ETS_TOOLKIT']='qt4'
I could not find PyQt4-Chaco code anywhere on the internets
I would be very grateful to anyone filling in the blanks to show me the simplest line plot possible (with 2 points)
from PyQt4 import QtCore, QtGui
import sys
import os
os.environ['ETS_TOOLKIT']='qt4'
from enthought <blanks>
:
:
app = QtGui.QApplication(sys.argv)
main_window = QtGui.QMainWindow()
main_window.setCentralWidget(<blanks>)
main_window.show()
app.exec_()
print('bye')
what Chaco/Enthought class inherits from QWidget ?
|
[
"I just saw this today. It is absolutely possible and fairly straightforward to embed Chaco inside Qt as well as WX. In fact, all of the examples, when run with your ETS_TOOLKIT environment var set to \"qt4\", are doing exactly this. (Chaco requires there to be an underlying GUI toolkit.)\nI have written a small, standalone example that fills in the blanks in your code template, and demonstrates how to embed a chaco Plot inside a Qt Window.\nqt_example.py:\n\"\"\"\nExample of how to directly embed Chaco into Qt widgets.\n\nThe actual plot being created is drawn from the basic/line_plot1.py code.\n\"\"\"\n\nimport sys\nfrom numpy import linspace\nfrom scipy.special import jn\nfrom PyQt4 import QtGui, QtCore\n\nfrom enthought.etsconfig.etsconfig import ETSConfig\nETSConfig.toolkit = \"qt4\"\nfrom enthought.enable.api import Window\n\nfrom enthought.chaco.api import ArrayPlotData, Plot\nfrom enthought.chaco.tools.api import PanTool, ZoomTool\n\n\nclass PlotFrame(QtGui.QWidget):\n \"\"\" This widget simply hosts an opaque enthought.enable.qt4_backend.Window\n object, which provides the bridge between Enable/Chaco and the underlying\n UI toolkit (qt4). This code is basically a duplicate of what's in\n enthought.enable.example_support.DemoFrame, but is reproduced here to\n make this example more stand-alone.\n \"\"\"\n def __init__(self, parent, **kw):\n QtGui.QWidget.__init__(self)\n\ndef create_chaco_plot(parent):\n x = linspace(-2.0, 10.0, 100)\n pd = ArrayPlotData(index = x)\n for i in range(5):\n pd.set_data(\"y\" + str(i), jn(i,x))\n\n # Create some line plots of some of the data\n plot = Plot(pd, title=\"Line Plot\", padding=50, border_visible=True)\n plot.legend.visible = True\n plot.plot((\"index\", \"y0\", \"y1\", \"y2\"), name=\"j_n, n<3\", color=\"red\")\n plot.plot((\"index\", \"y3\"), name=\"j_3\", color=\"blue\")\n\n # Attach some tools to the plot\n plot.tools.append(PanTool(plot))\n zoom = ZoomTool(component=plot, tool_mode=\"box\", always_on=False)\n plot.overlays.append(zoom)\n\n # This Window object bridges the Enable and Qt4 worlds, and handles events\n # and drawing. We can create whatever hierarchy of nested containers we\n # want, as long as the top-level item gets set as the .component attribute\n # of a Window.\n return Window(parent, -1, component = plot)\n\ndef main():\n app = QtGui.QApplication(sys.argv)\n main_window = QtGui.QMainWindow(size=QtCore.QSize(500,500))\n\n enable_window = create_chaco_plot(main_window)\n\n # The .control attribute references a QWidget that gives Chaco events\n # and that Chaco paints into.\n main_window.setCentralWidget(enable_window.control)\n\n main_window.show()\n app.exec_()\n\nif __name__ == \"__main__\":\n main()\n\n",
"here is what you need:\nimport os, sys\nos.environ['ETS_TOOLKIT'] = 'qt4'\n\nfrom PyQt4 import QtGui\napp = QtGui.QApplication(sys.argv)\nfrom numpy import linspace, pi, sin\nfrom enthought.enable.api import Component, Container, Window\nfrom enthought.chaco.api import create_line_plot, \\\n add_default_axes, \\\n add_default_grids, \\\n OverlayPlotContainer\n\n\nx = linspace(-pi,pi,100)\ny = sin(x)\nplot = create_line_plot((x,y))\nadd_default_grids(plot)\nadd_default_axes(plot)\ncontainer = OverlayPlotContainer(padding = 50)\ncontainer.add(plot)\nplot_window = Window(None, -1, component=container)\nplot_window.control.setWindowTitle('hello')\nplot_window.control.resize(400,400)\nplot_window.control.show()\n\napp.exec_()\n\nplot_window.control inherits from QWidget\n",
"I don't know about Chaco, but I'm using VTK, here is code to draw some lines, having a (x,y,z) coordinates of them.\n \"\"\"Define an actor and its properties, to be drawn on the scene using 'lines' representation.\"\"\"\n ren = vtk.vtkRenderer()\n apd=vtk.vtkAppendPolyData()\n\n for i in xrange(len(coordinates)):\n line=vtk.vtkLineSource()\n\n line.SetPoint1(coordinates[i][0]) # 1st atom coordinates for a given bond\n line.SetPoint2(coordinates[i][1]) # 2nd atom coordinates for a given bond\n line.SetResolution(21)\n apd.AddInput(line.GetOutput())\n\n mapper = vtk.vtkPolyDataMapper()\n mapper.SetInput(apd.GetOutput())\n lines_actor = vtk.vtkActor()\n lines_actor.SetMapper(mapper)\n lines_actor.GetProperty().SetColor(colorR, colorG, colorB)\n lines_actor.GetProperty().SetOpacity(opacity)\n\n # Add newly created actor to the renderer.\n self.ren.AddViewProp(actor) # Prop is the superclass of all actors, composite props etc.\n # Update renderer.\n self.ren.GetRenderWindow().Render()\n\nIt uses QVTKRenderWindowInteractor to interact with the PyQT4.\n"
] |
[
8,
7,
0
] |
[
"I don't know Chaco but a quick look tells me that this is not possible.\nBoth Chaco and PyQt are graphical toolkits designed to interact with the user. Chaco is plot oriented and PyQt more application oriented. Each one has its own way of managing what a window is, how to detect user clicks, how to handle paint events, ... so that they don't mix together.\nIf you need plotting software, you can try to use matplotlib to generate static images of graph and show the image in PyQt. Or try a PyQt based graph or plotting toolkit.\n"
] |
[
-1
] |
[
"chaco",
"pyqt",
"pyqt4",
"python"
] |
stackoverflow_0002148279_chaco_pyqt_pyqt4_python.txt
|
Q:
Python error while using MysqlDb - sets module is deprecated
I'm currently getting the warning every time I run a Python script that uses MySQLdb:
/var/lib/python-support/python2.6/MySQLdb/__init__.py:34:
DeprecationWarning: the sets module is deprecated
from sets import ImmutableSet
I'd rather not mess with their lib if possible. I'm on Ubuntu server. Anyone know an easy way to fix that warning message?
Thanks
UPDATE:
Fixed it based on the suggestions below and this link: https://bugzilla.redhat.com/show_bug.cgi?id=505611
import warnings
warnings.filterwarnings('ignore', '.*the sets module is deprecated.*',
DeprecationWarning, 'MySQLdb')
import MySQLdb
A:
Do this before the mysql module is imported
import warnings
warnings.filterwarnings(action="ignore", message='the sets module is deprecated')
import sets
A:
You can ignore the warning using the warnings module, or the -W argument to Python. Don't ignore all DeprecationWarnings, though, just the ones from MySQLdb :)
A:
All it means is that the sets module (more specifically the ImmutableSet part) is deprecated, and you should use its replacement, the built-in set type, which needs no import.
If you need an immutable set, frozenset() should work.
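For example, the built-in replacements look like this:
s = set([1, 2, 3])          # replaces sets.Set
fs = frozenset([1, 2, 3])   # replaces sets.ImmutableSet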
|
Python error while using MysqlDb - sets module is deprecated
|
I'm currently getting the warning every time I run a Python script that uses MySQLdb:
/var/lib/python-support/python2.6/MySQLdb/__init__.py:34:
DeprecationWarning: the sets module is deprecated
from sets import ImmutableSet
I'd rather not mess with their lib if possible. I'm on Ubuntu server. Anyone know an easy way to fix that warning message?
Thanks
UPDATE:
Fixed it based on the suggestions below and this link: https://bugzilla.redhat.com/show_bug.cgi?id=505611
import warnings
warnings.filterwarnings('ignore', '.*the sets module is deprecated.*',
DeprecationWarning, 'MySQLdb')
import MySQLdb
|
[
"Do this before the mysql module is imported\nimport warnings\nwarnings.filterwarnings(action=\"ignore\", message='the sets module is deprecated')\nimport sets\n\n",
"You can ignore the warning using the warnings module, or the -W argument to Python. Don't ignore all DeprecationWarnings, though, just the ones from MySQLdb :)\n",
"All it means is the sets module (more specifically the immutableset part) is deprecated, and you should use it's replacement, set. Set is inbuilt so no need to import.\nIf you need an immutable set, frozenset() should work.\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0002248531_mysql_python.txt
|
Q:
(python) issue bash commands from within python script (alla perl system($cmd))
Within a Python script, I want to issue a command. In Perl, I could define a command, save it as a variable (here, $cmd), then call system($cmd), and the command is executed.
How can I do that in Python?
A:
You can use os.system(), but prefer subprocess instead.
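For instance, a minimal sketch of the subprocess equivalent of Perl's system($cmd) (the command itself is just a placeholder):
import subprocess

# rough equivalent of Perl's system($cmd): run the command and wait for it
cmd = ['ls', '-l']            # placeholder command and arguments
ret = subprocess.call(cmd)    # returns the command's exit status
print 'exit status:', ret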
A:
Another good choice is "commands" module: http://docs.python.org/library/commands.html.
A:
You can use os.system(), or the newer subprocess module. Other possible alternatives (for older Python versions) include os.spawn* and os.popen*.
Lastly, try using Python's modules to do operating-system work instead of calling external commands if possible, unless it's a third-party tool you are executing and Python doesn't have the API.
|
(python) issue bash commands from within python script (alla perl system($cmd))
|
Within a python script, I want to issue a command. In perl, I could define a command, save it as a variable (here, $cmd) then type system($cmd) and then the command is executed.
How can i do that in python?
|
[
"You can use os.system(), but prefer subprocess instead.\n",
"Another good choice is \"commands\" module: http://docs.python.org/library/commands.html.\n",
"you can use os.system(), or the newer subprocess module. Other possible alternatives (for older Python versions) include these. (eg os.spawn*,os.popen*,etc)\nLastly, try using Python's modules to do operating system stuff instead of calling external commands if possible, unless its a third party tool you are executing and Python doesn't have the api. \n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"bash",
"python"
] |
stackoverflow_0002248259_bash_python.txt
|
Q:
Simulate a device driver crash in linux. Have python reload it
I have a web camera running in Linux using the uvcvideo module. And I'm using a python application to access the web camera and display the image.
I want the Python program to handle it if the web camera for some reason doesn't work anymore. I have tested this by just unloading the module. It works fine if I unload the module before I run the Python code, but if I force it to unload while in use, I get the following feedback.
VIDIOC_DQBUF: Inappropriate ioctl for device
And if I kill the Python code and restart it, the whole machine freezes.
The code I'm trying to run is
import pygame
import Image
from pygame.locals import *
import sys
import time, os
import opencv
from opencv import highgui
camera = highgui.cvCreateCameraCapture(0)
fps = 10.0
pygame.init()
window = pygame.display.set_mode((640,480))
pygame.display.set_caption("WebCam Demo")
screen = pygame.display.get_surface()
while True:
events = pygame.event.get()
for event in events:
if event.type == QUIT or event.type == KEYDOWN:
sys.exit(0)
while True:
try:
ima = highgui.cvQueryFrame(camera)
im = opencv.adaptors.Ipl2PIL(ima)
break;
except TypeError:
print 'No camera'
os.system('sudo modprobe uvcvideo')
time.sleep(1)
camera = highgui.cvCreateCameraCapture(0)
pg_img = pygame.image.frombuffer(im.tostring(), im.size, im.mode)
screen.blit(pg_img, (0,0))
pygame.display.flip()
pygame.time.delay(int(1000 * 1.0/fps))
It's a modified version of http://www.jperla.com/blog/2007/09/26/capturing-frames-from-a-webcam-on-linux/. It's using OpenCV version 1.x and not 2.x.
Any idea on how to make this work?
A:
Do you mean a USB camera? I don't know about forced unloading while the module is in use, but this won't happen in practice and is not a good simulation of the camera not working anymore. Try to handle camera disconnection/reconnection gracefully first.
I don't know what you are trying to achieve by simulating a driver crash, but you can't handle a driver crash, which can result in an oops or whatever, with user code. There is no defensive programming that can save you once kernel code is going wild.
Now, if an error (an error is different from a crash) occurs in driver code, then it should be returned to you, and all you can do is retry or exit. If your application is meant to be used with any UVC camera, then buy a USB camera which respects UVC, and play with it (disconnect/reconnect).
As for hardware failure, there is not much you can do, except perhaps setting a timeout.
What you can do within your code, if you discover a specific problem with the driver, is avoid triggering that specific problem. For instance, if you know changing from resolution x to resolution y leads to a frozen camera or a driver oops, then avoid it.
But I would not spend much time trying to handle a hypothetical crash you don't know anything about. Instead, you should try to exercise the error code paths. For example, what happens if your system is low on memory? Or if your system load is such that your app can't keep up with the incoming frames?
A:
The reason your code now is crashing is because when the driver crashes, the device special files representing your hardware disappear. Your code still has open file handles to those devices. Depending on what exactly your code is doing behind the scenes, it is likely trying to issue an IOCTL to a now-invalid file handle, a use case that is typically not handled well by library code because it should only happen in an event like this with some kind of kernel-land fault that user-land code can't do anything about anyway.
Dealing with the camera if it stops working is completely different than dealing with the driver crashing. A malfunctioning camera should never take down a (correctly-written) driver. If the driver goes down, there's not much that your userland code will be able to do about it. Nor should it need to. If the driver crashes, that's the driver writers' problem, not yours. If you have a driver that is crashing on you often enough that you're tempted to try and handle it, then I would go with a different driver or try fixing the one you are using. No amount of application code is going to fix a faulty driver.
Don't forget that your code isn't the only code using the driver. Internal kernel processes or other applications may be using the driver as well. If something else is using the driver when you pull it, you can cause that other code to hang (beyond your control) and potentially take the whole system down.
Now if your webcam hardware has a problem, the driver gracefully should give you a message or an error of some kind that your application code can detect and act on, while doing its own work to get the camera working again. Failing hardware should not pose a burden on application code; let the driver do its job and it will bring the camera back online if possible. If it is unable to do so, then either the camera is in an unrecoverable state or the driver has room for improvement (if that is the case, making an offer to the driver's developers to test their code on your hardware can sometimes be a fast way of getting better driver support for your device).
Instead of trying to tear out the driver while it is running, I would concentrate on having code to handle all the possible error states that the driver can return for your device.
A:
Linux really doesn't like it when you try to remove a kernel driver while any processes are using it. I'm not convinced there's any good way for your userland application to do this (and having your app try to run 'sudo modprobe uvcvideo' is scary enough already).
|
Simulate a device driver crash in linux. Have python reload it
|
I have a web camera running in Linux using the uvcvideo module. And I'm using a python application to access the web camera and display the image.
I want the python program to handle it if the web camera for some reason don't work anymore. Have tested with just unloading the module. Works fine if i just unload the module before I run the python code, but if force it to unload in use, I get the following feedback.
VIDIOC_DQBUF: Inappropriate ioctl for device
And if I kill the python code, and restart it the whole machine freezes.
The code I'm trying to run is
import pygame
import Image
from pygame.locals import *
import sys
import time, os
import opencv
from opencv import highgui
camera = highgui.cvCreateCameraCapture(0)
fps = 10.0
pygame.init()
window = pygame.display.set_mode((640,480))
pygame.display.set_caption("WebCam Demo")
screen = pygame.display.get_surface()
while True:
events = pygame.event.get()
for event in events:
if event.type == QUIT or event.type == KEYDOWN:
sys.exit(0)
while True:
try:
ima = highgui.cvQueryFrame(camera)
im = opencv.adaptors.Ipl2PIL(ima)
break;
except TypeError:
print 'No camera'
os.system('sudo modprobe uvcvideo')
time.sleep(1)
camera = highgui.cvCreateCameraCapture(0)
pg_img = pygame.image.frombuffer(im.tostring(), im.size, im.mode)
screen.blit(pg_img, (0,0))
pygame.display.flip()
pygame.time.delay(int(1000 * 1.0/fps))
It's a modified version of http://www.jperla.com/blog/2007/09/26/capturing-frames-from-a-webcam-on-linux/ It's using openvc version 1.x and not 2.x.
Any idea on how to make this work?
|
[
"Do you mean USB camera ? I don't know about forced unloading while module is in use, but this won't happen and is not a good simulation of camera not working anymore. Try to handle camera disconnection /reconnection gracefully first.\nI don't know what you are trying to achieve when simulating driver crash, but you can't handle a driver crash, which can result in a oops or whatever, with user code. There is no defensive programming that can save you once kernel code is going wild.\nNow, if an error (an error is different from a crash) occurs in driver code, then it should be returned to you, and all you can do is retry or exit. If your application is meant to be used by any UVC camera, then buy an USB camera which respect UVC, and play with it (disconnect / reconnect).\nAs for hardware failure, there is not much you can do, except perhaps setting a timeout. \nWhat you can do within your code is, if you discover a specific problem with the driver, is avoiding to trigger this specific problem. For instance, if you know changing from resolution x to resolution y leads to a freeze camera, or a driver oops, then avoid it.\nBut I would not spend much time trying to handle hypothetical crash you don't know anything about. Instead, you should try to exercise error code path. For example, what happens if your system is low on memory ? Or if your system load is such that your app can't keep up whith the incoming frame.\n",
"The reason your code now is crashing is because when the driver crashes, the device special files representing your hardware disappear. Your code still has open file handles to those devices. Depending on what exactly your code is doing behind the scenes, it is likely trying to issue an IOCTL to a now-invalid file handle, a use case that is typically not handled well by library code because it should only happen in an event like this with some kind of kernel-land fault that user-land code can't do anything about anyway.\nDealing with the camera if it stops working is completely different than dealing with the driver crashing. A malfunctioning camera should never take down a (correctly-written) driver. If the driver goes down, there's not much that your userland code will be able to do about it. Nor should it need to. If the driver crashes, that's the driver writers' problem, not yours. If you have a driver that is crashing on you often enough that you're tempted to try and handle it, then I would go with a different driver or try fixing the one you are using. No amount of application code is going to fix a faulty driver.\nDon't forget that your code isn't the only code using the driver. Internal kernel processes or other applications may be using the driver as well. If something else is using the driver when you pull it, you can cause that other code to hang (beyond your control) and potentially take the whole system down.\nNow if your webcam hardware has a problem, the driver gracefully should give you a message or an error of some kind that your application code can detect and act on, while doing its own work to get the camera working again. Failing hardware should not pose a burden on application code; let the driver do its job and it will bring the camera back online if possible. If it is unable to do so, then either the camera is in an unrecoverable state or the driver has room for improvement (if that is the case, making an offer to the driver's developers to test their code on your hardware can sometimes be a fast way of getting better driver support for your device).\nInstead of trying to tear out the driver while it is running, I would concentrate on having code to handle all the possible error states that the driver can return for your device.\n",
"Linux really doesn't like it when you try to remove a kernel driver while any processes are using it. I'm not convinced there's any good way for your userland application to do this (and having your app try to run 'sudo modprobe uvcvideo' is scary enough already).\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"linux",
"python"
] |
stackoverflow_0002243674_linux_python.txt
|
Q:
SQLAlchemy ForeignKey relation via an intermediate table
Suppose that I have a table Articles, which has fields article_id, content and it contains one article with id 1.
I also have a table Categories, which has fields category_id (primary key), category_name, and it contains one category with id 10.
Now suppose that I have a table ArticleProperties, that adds properties to Articles. This table has fields article_id, property_name, property_value.
Suppose that I want to create a mapping from Categories to Articles via ArticleProperties table.
I do this by inserting the following values in the ArticleProperties table: (article_id=1, property_name="category", property_value=10).
Is there any way in SQLAlchemy to express that rows in table ArticleProperties with property_name "category" are actually FOREIGN KEYS of table Articles to table Categories?
This is a complicated problem and I haven't found an answer myself.
Any help appreciated!
Thanks, Boda Cydo.
A:
Assuming I understand your question correctly, then no, you can't model that relationship as you have suggested. (It would help if you described your desired result, rather than your perceived solution.)
What I think you may want is a many-to-many mapping table called ArticleCategories, consisting of 2 int columns, ArticleID and CategoryID (with respective FKs)
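A minimal sketch of that mapping table in SQLAlchemy (table and column names are assumptions based on the question):
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

metadata = MetaData()

# hypothetical many-to-many link between Articles and Categories
article_categories = Table('article_categories', metadata,
    Column('article_id', Integer, ForeignKey('articles.article_id'),
           primary_key=True),
    Column('category_id', Integer, ForeignKey('categories.category_id'),
           primary_key=True),
)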
|
SQLAlchemy ForeignKey relation via an intermediate table
|
Suppose that I have a table Articles, which has fields article_id, content and it contains one article with id 1.
I also have a table Categories, which has fields category_id (primary key), category_name, and it contains one category with id 10.
Now suppose that I have a table ArticleProperties, that adds properties to Articles. This table has fields article_id, property_name, property_value.
Suppose that I want to create a mapping from Categories to Articles via ArticleProperties table.
I do this by inserting the following values in the ArticleProperties table: (article_id=1, property_name="category", property_value=10).
Is there any way in SQLAlchemy to express that rows in table ArticleProperties with property_name "category" are actually FOREIGN KEYS of table Articles to table Categories?
This is a complicated problem and I haven't found an answer myself.
Any help appreciated!
Thanks, Boda Cydo.
|
[
"Assuming I understand you question correctly, then No, you can't model that relationship as you have suggested. (It would help if you described your desired result, rather than your perceived solution)\nWhat I think you may want is a many-to-many mapping table called ArticleCategories, consisting of 2 int columns, ArticleID and CategoryID (with respective FKs)\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0002234030_python_sqlalchemy.txt
|
Q:
subprocess.Popen(..).communicate(..) throw away data at random when used with graphviz!
I am using graphviz's dot to generate some svg graphs for a web application. I call dot using Popen:
p = subprocess.Popen(u'/usr/bin/dot -Kfdp -Tsvg', shell=True,\
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
str = u'long-unicode-string-i-want-to-convert'
(stdout,stderr) = p.communicate(str)
What happens is that the dot program throws errors like:
Error: not well-formed (invalid token) in line 1
... <tr><td cellpadding="4bgcolor="#EEE8AA"> ...
in label of node n260
That obvious error is most certainly NOT in the input string. In particular, if I save it to str.txt with utf-8 encoding and do
/usr/bin/dot -Kfdp -Tsvg < str.txt > myimg.svg
I get the desired output. The only 'special' thing about str is that it contains characters like the Danish øæå.
Right now I have no clue what I should do. The problem may very well be in dot, but it certainly seems to be triggered by Popen behaving differently than using < from the shell, and I have no idea where to begin. Any help or ideas for calling dot differently (besides writing all the data to a file and using that!) would be very appreciated!
A:
Sounds like you should be doing:
stdout, stderr = p.communicate(str.encode('utf-8'))
(except, of course, that you shouldn't shadow the builtin str.) The unicode type in Python holds unicode data, not UTF-8. If you want UTF-8, you need to explicitly encode it.
On top of that, there's no reason to use shell=True in that snippet, nor is the unicode literal passed to subprocess.Popen a particularly good idea (it just gets encoded to ASCII anyway.) And the backslash at the end is unnecessary -- Python knows the line is continued, because you have an open parenthesis that hasn't been closed yet. So, use:
p = subprocess.Popen(['/usr/bin/dot', '-Kfdp', '-Tsvg'],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
|
subprocess.Popen(..).communicate(..) throw away data at random when used with graphviz!
|
I am using graphviz's dot to generate some svg graphs for a web application. I call dot using Popen:
p = subprocess.Popen(u'/usr/bin/dot -Kfdp -Tsvg', shell=True,\
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
str = u'long-unicode-string-i-want-to-convert'
(stdout,stderr) = p.communicate(str)
What happends is that the dot program throw errors like:
Error: not well-formed (invalid token) in line 1
... <tr><td cellpadding="4bgcolor="#EEE8AA"> ...
in label of node n260
That obvious error is most certainly NOT in the input string. In particular, if I save it to str.txt with utf-8 encoding and do
/usr/bin/dot -Kfdp -Tsvg < str.txt > myimg.svg
I get the desired output. The only 'special' thing about str is that it contain characters like the danish øæå.
Right now I have no clue what I should do. The problem may very well be in dot; but it certainly seem to be triggered by Popen being different than using < from the shell, and i have no idea where to begin. Any help or ideas for alternatively calling dot (besides writing all the data to a file and calling that!) would be very appreciated!
|
[
"Sounds like you should be doing:\nstdout, stderr = p.communicate(str.encode('utf-8'))\n\n(except, of course, that you shouldn't shadow the builtin str.) The unicode type in Python holds unicode data, not UTF-8. If you want UTF-8, you need to explicitly encode it.\nOn top of that, there's no reason to use shell=True in that snippet, nor is the unicode literal passed to subprocess.Popen a particularly good idea (it just gets encoded to ASCII anyway.) And the backslash at the end is unnecessary -- Python knows the line is continued, because you have an open parenthesis that hasn't been closed yet. So, use:\np = subprocess.Popen(['/usr/bin/dot', '-Kfdp', '-Tsvg'],\n stdin=subprocess.PIPE, stdout=subprocess.PIPE)\n\n"
] |
[
3
] |
[] |
[] |
[
"graphviz",
"pipe",
"popen",
"python",
"subprocess"
] |
stackoverflow_0002248795_graphviz_pipe_popen_python_subprocess.txt
|
Q:
AppEngine 'explicitly cancelled' error
I'm using Google AppEngine and the deferred library, with the Mapper class, as described here (with some improvements as in here). In some iterations of the mapper I get the following error:
CancelledError: The API call datastore_v3.Put() was explicitly cancelled.
The Mapper usually runs fine, I used to have a higher batch size, so that it would actually hit the DeadlineExceededError, and that was handled correctly.
Just to be sure, I reduced the batch_size to a very low number, so that it never even hits the DeadlineExceededError but I still get the CancelledError.
The stack trace is as follows:
File "utils.py", line 114, in _continue
self._batch_write()
File "utils.py", line 76, in _batch_write
db.put(self.to_put)
File "/google/appengine/ext/db/__init__.py", line 1238, in put
keys = datastore.Put(entities, rpc=rpc)
File "/google/appengine/api/datastore.py", line 255, in Put
'datastore_v3', 'Put', req, datastore_pb.PutResponse(), rpc)
File "/google/appengine/api/datastore.py", line 177, in _MakeSyncCall
rpc.check_success()
File "/google/appengine/api/apiproxy_stub_map.py", line 474, in check_success
self.__rpc.CheckSuccess()
File "/google/appengine/api/apiproxy_rpc.py", line 126, in CheckSuccess
raise self.exception
CancelledError: The API call datastore_v3.Put() was explicitly cancelled.
I can't really find a lot of information about this 'explicitly cancelled' error, so I was wondering what caused it and how to investigate.
A:
After a DeadlineExceededError, you are allowed a short amount of grace time to handle the exception, e.g. to defer the remainder of the computation.
If you run out of grace time, the CancelledError kicks in.
There should be no way to catch/handle the CancelledError.
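A minimal sketch of using that grace time with the deferred library from the question (process() is a placeholder for the real batch work):
from google.appengine.runtime import DeadlineExceededError
from google.appengine.ext import deferred

def run(data):
    try:
        process(data)  # placeholder for the actual mapper work
    except DeadlineExceededError:
        # re-queue the remainder during the grace period; once that
        # expires, pending API calls fail with CancelledError
        deferred.defer(run, data)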
|
AppEngine 'explicitly cancelled' error
|
I'm using Google AppEngine and the deferred library, with the Mapper class, as described here (with some improvements as in here). In some iterations of the mapper I get the following error:
CancelledError: The API call datastore_v3.Put() was explicitly cancelled.
The Mapper usually runs fine, I used to have a higher batch size, so that it would actually hit the DeadlineExceededError, and that was handled correctly.
Just to be sure, I reduced the batch_size to a very low number, so that it never even hits the DeadlineExceededError but I still get the CancelledError.
The stack trace is as follows:
File "utils.py", line 114, in _continue
self._batch_write()
File "utils.py", line 76, in _batch_write
db.put(self.to_put)
File "/google/appengine/ext/db/__init__.py", line 1238, in put
keys = datastore.Put(entities, rpc=rpc)
File "/google/appengine/api/datastore.py", line 255, in Put
'datastore_v3', 'Put', req, datastore_pb.PutResponse(), rpc)
File "/google/appengine/api/datastore.py", line 177, in _MakeSyncCall
rpc.check_success()
File "/google/appengine/api/apiproxy_stub_map.py", line 474, in check_success
self.__rpc.CheckSuccess()
File "/google/appengine/api/apiproxy_rpc.py", line 126, in CheckSuccess
raise self.exception
CancelledError: The API call datastore_v3.Put() was explicitly cancelled.
I can't really find a lot of information about this 'explicity cancelled' error, so I was wondering what caused it and how to investigate.
|
[
"After a DeadlineExceededError, you are allowed a short amount of grace time to handle the exception, eg defer the remainder of the computation.\nIf you run out of grace time the CancelledError kicks in.\nThere should be no way to catch/handle the CancelledError\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"google_cloud_datastore",
"python",
"scheduled_tasks"
] |
stackoverflow_0002248811_google_app_engine_google_cloud_datastore_python_scheduled_tasks.txt
|
Q:
Formatting an output file
I'm currently indexing my music collection with Python. Ideally I'd like my output file to be formatted as:
"Artist;
Album;
Tracks - length - bitrate - md5
Artist2;
Album2;
Tracks - length - bitrate - md5"
But I can't seem to work out how to achieve this. Any suggestions?
A:
>>> import textwrap
>>> class Album(object):
... def __init__(self, title, artist, tracks, length, bitrate, md5):
... self.title=title
... self.artist=artist
... self.tracks=tracks
... self.length=length
... self.bitrate=bitrate
... self.md5=md5
... def __str__(self):
... return textwrap.dedent("""
... %(artist)s;
... %(title)s;
... %(tracks)s - %(length)s - %(bitrate)s - %(md5)s"""%(vars(self)))
...
>>> a=Album("album title","artist name",10,52.1,"320kb/s","4d53b0cb432ec371ca93ea30b62521d9")
>>> print a
artist name;
album title;
10 - 52.1 - 320kb/s - 4d53b0cb432ec371ca93ea30b62521d9
A:
If your input data is a list of tuples each with 6 strings (artist, album, tracks, length, bitrate, md5):
for artist, album, tracks, length, bitrate, md5 in input_data:
print "%s;" % artist
print "%s;" % album
print " %s - %s - %s - %s" % (tracks, length, bitrate, md5)
It's essentially just as easy if your input data is in a different format, but unless you tell us what format that input data is in, it's pretty silly for us to just try to guess.
|
Formatting an output file
|
I'm currently indexing my music collection with python. Ideally I'd like my output file to be formatted as;
"Artist;
Album;
Tracks - length - bitrate - md5
Artist2;
Album2;
Tracks - length - bitrate - md5"
But I can't seem to work out how to achieve this. Any suggestions?
|
[
">>> import textwrap\n>>> class Album(object):\n... def __init__(self, title, artist, tracks, length, bitrate, md5):\n... self.title=title\n... self.artist=artist\n... self.tracks=tracks\n... self.length=length\n... self.bitrate=bitrate\n... self.md5=md5\n... def __str__(self):\n... return textwrap.dedent(\"\"\"\n... %(artist)s;\n... %(title)s;\n... %(tracks)s - %(length)s - %(bitrate)s - %(md5)s\"\"\"%(vars(self)))\n... \n>>> a=Album(\"album title\",\"artist name\",10,52.1,\"320kb/s\",\"4d53b0cb432ec371ca93ea30b62521d9\")\n>>> print a\n\nartist name;\nalbum title;\n 10 - 52.1 - 320kb/s - 4d53b0cb432ec371ca93ea30b62521d9\n\n",
"If your input data is a list of tuples each with 6 strings (artist, album, tracks, length, bitrate, md5):\nfor artist, album, tracks, length, bitrate, md5 in input_data:\n print \"%s;\" % artist\n print \"%s;\" % album\n print \" %s - %s - %s - %s\" % (tracks, length, bitrate, md5)\n\nIt's essentially just as easy if your input data is in a different format, but unless you tell us what format that input data is in, it's pretty silly for us to just try to guess.\n"
] |
[
1,
1
] |
[] |
[] |
[
"formatting",
"logging",
"python",
"text"
] |
stackoverflow_0002249162_formatting_logging_python_text.txt
|
Q:
How to check contents of a folder using Python
How can you check the contents of a file with Python, and then copy a file from the same folder and move it to a new location?
I have Python 3.1 but I can just as easily port to 2.6.
thank you!
A:
For example:
import os,shutil
root="/home"
destination="/tmp"
directory = os.path.join(root,"mydir")
os.chdir(directory)
for file in os.listdir("."):
flag=""
#check contents of file ?
for line in open(file):
if "something" in line:
flag="found"
if flag=="found":
try:
# or use os.rename() on local
shutil.move(file,destination)
except Exception,e: print e
else:
print "success"
If you look at the shutil doc, under .move() it says
shutil.move(src, dst)
Recursively move a file or directory to another location.
If the destination is on the current filesystem, then simply use rename.
Otherwise, copy src (with copy2()) to the dst and then remove src.
I guess you can use copy2() to move to another file system.
A:
os.listdir() and shutil.move().
|
How to check contents of a folder using Python
|
How can you check the contents of a file with python, and then copy a file from the same folder and move it to a new location?
I have Python 3.1 but i can just as easily port to 2.6
thank you!
|
[
"for example\nimport os,shutil\nroot=\"/home\"\ndestination=\"/tmp\"\ndirectory = os.path.join(root,\"mydir\")\nos.chdir(directory)\nfor file in os.listdir(\".\"):\n flag=\"\"\n #check contents of file ?\n for line in open(file):\n if \"something\" in line:\n flag=\"found\"\n if flag==\"found\":\n try:\n # or use os.rename() on local\n shutil.move(file,destination)\n except Exception,e: print e\n else:\n print \"success\"\n\nIf you look at the shutil doc, under .move() it says\nshutil.move(src, dst)¶\n\n Recursively move a file or directory to another location.\n If the destination is on the current filesystem, then simply use rename. \nOtherwise, copy src (with copy2()) to the dst and then remove src.\n\nI guess you can use copy2() to move to another file system.\n",
"os.listdir() and shutil.move().\n"
] |
[
3,
1
] |
[] |
[] |
[
"cpython",
"directory",
"python"
] |
stackoverflow_0002249132_cpython_directory_python.txt
|
Q:
MongoDB/py-mongo for queries with date functions
I'm looking to use a document database such as MongoDB, but looking through the documentation I can't find much on queries that involve date functions. For example, let's say that I'm asking one of the following questions of the DB:
"Tell me all the people who bought a product on tuesday"
"Get me all sales and group by month"
They are random questions but essentially they could be anything that has date functions. Would you have any idea how I would go about this?
Thanks, Chris.
A:
For the first query the best bet would be to do a range query for dates in between the start and end of tuesday. Something like:
db.foo.find({"purchase_date": {"$gt": monday_midnight, "$lte": tuesday_midnight}})
This will be nicer syntactically when the following case is finished, so you might want to vote for it:
http://jira.mongodb.org/browse/SERVER-465
For the second you'll probably want to check out either PyMongo's group or map_reduce methods, either of which can accomplish aggregation like that.
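Computing those boundary values might look like this sketch (the date is a placeholder for some particular Tuesday; PyMongo accepts datetime objects directly):
import datetime

# midnight ending Monday (start of Tuesday) and midnight ending Tuesday
monday_midnight = datetime.datetime(2010, 2, 16)                  # Tue 00:00
tuesday_midnight = monday_midnight + datetime.timedelta(days=1)   # Wed 00:00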
|
MongoDB/py-mongo for queries with date functions
|
Im looking to use a document database such as MongoDB but looking through the documents I cant find much on queries that involve date functions. For example lets say that I'm asking one of the following questions of the DB:
"Tell me all the people who bought a product on tuesday"
"Get me all sales and group by month"
They are random questions but essentially they could be anything that has date functions. Would you have any idea how I would go about this?
Thanks, Chris.
|
[
"For the first query the best bet would be to do a range query for dates in between the start and end of tuesday. Something like:\ndb.foo.find({\"purchase_date\": {\"$gt\": monday_midnight, \"$lte\": tuesday_midnight}})\n\nThis will be nicer syntactically when the following case is finished, so might want to vote for it:\nhttp://jira.mongodb.org/browse/SERVER-465\nFor the second you'll probably want to check out either PyMongo's group or map_reduce methods, either of which can accomplish aggregation like that.\n"
] |
[
3
] |
[] |
[] |
[
"mongodb",
"pymongo",
"python"
] |
stackoverflow_0002248146_mongodb_pymongo_python.txt
|
Q:
Python MediaWiki table regex (find strings of a particular format, then extract substrings within)
I'm trying to find all strings of the format {{rdex|001|001|Bulbasaur|2|Grass|Poison}} in a large text file, and then extract the substrings corresponding to the first 001 and to Bulbasaur, perhaps as a tuple.
I'm assuming regex with capturing groups can be used for both; could anybody tell me the appropriate regex to use in Python 3.1 as well as a possible code outline? I'm a regex noob.
Thanks!
A:
re.match('^{{[^|]+\|([^|]+)\|[^|]+\|([^|]+)\|[^|]+\|[^|]+\|[^|]+\}}$', S).groups()
A:
import re
text="""{{rdex|001|001|Bulbasaur|2|Grass|Poison}}"""
re.findall("\{\{[^|]+\|(\d+)\|\d+\|([^|]+)",text)
[('001', 'Bulbasaur')]
A:
line="{{rdex|001|001|Bulbasaur|2|Grass|Poison}}"
s=line.find("{{")
e=line.find("}}")
if s != -1 and e != -1:
sub=line[s+2:e].split("|")
print sub[1],sub[3]
output
$ ./python.py
001 Bulbasaur
|
Python MediaWiki table regex (find strings of a particular format, then extract substrings within)
|
I'm trying to find all strings of the format {{rdex|001|001|Bulbasaur|2|Grass|Poison}} in a large text file, and then extract the substrings corresponding to the first 001 and to Bulbasaur, perhaps as a tuple.
I'm assuming regex with capturing groups can be used for both; could anybody tell me the appropriate regex to use in Python 3.1 as well as a possible code outline? I'm a regex noob.
Thanks!
|
[
"re.match('^{{[^|]+\\|([^|]+)\\|[^|]+\\|([^|]+)\\|[^|]+\\|[^|]+\\|[^|]+\\}}$', S).groups()\n\n",
"import re\ntext=\"\"\"{{rdex|001|001|Bulbasaur|2|Grass|Poison}}\"\"\"\nre.findall(\"\\{\\{[^|]+\\|(\\d+)\\|\\d+\\|([^|]+)\",text)\n[('001', 'Bulbasaur')]\n\n",
"line=\"{{rdex|001|001|Bulbasaur|2|Grass|Poison}}\"\ns=line.find(\"{{\")\ne=line.find(\"}}\")\nif s != -1 and e != -1:\n sub=line[s+2:e].split(\"|\")\n print sub[1],sub[3]\n\noutput\n$ ./python.py\n001 Bulbasaur\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"mediawiki",
"python",
"regex"
] |
stackoverflow_0002249340_mediawiki_python_regex.txt
|
Q:
How to improve the throughput of request_logs on Google App Engine
Downloading logs from App Engine is nontrivial. Requests are batched; appcfg.py does not use normal file IO but rather a temporary file (in reverse chronological order) which it ultimately appends to the local log file; when appending, the need to find the "sentinel" makes log rotation difficult since one must leave enough old logs for appcfg.py to remember where it left off. Finally, Google deletes old logs after some time (20 minutes for the app I use).
As an app scales, and the log generation rate grows, how can one increase the speed of fetching the logs so that appcfg.py does not fall behind?
A:
You can increase the per-request batch size of logs. In the latest SDK (1.3.1), check out google_appengine/google/appengine/tools/appcfg.py around line 861 (the RequestLogLines method of the LogsRequester class). You can modify the "limit" parameter.
I am using 1000 and it works pretty well.
|
How to improve the throughput of request_logs on Google App Engine
|
Downloading logs from App Engine is nontrivial. Requests are batched; appcfg.py does not use normal file IO but rather a temporary file (in reverse chronological order) which it ultimately appends to the local log file; when appending, the need to find the "sentinel" makes log rotation difficult since one must leave enough old logs for appcfg.py to remember where it left off. Finally, Google deletes old logs after some time (20 minutes for the app I use).
As an app scales, and the log generation rate grows, how can one increase the speed of fetching the logs so that appcfg.py does not fall behind?
|
[
"You can increase the per-request batch size of logs. In the latest SDK (1.3.1), check out google_appengine/google/appengine/tools/appcfg.py around like 861 (RequestLogLines method of LogsRequester class). You can modify the \"limit\" parameter.\nI am using 1000 and it works pretty well.\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"logging",
"python"
] |
stackoverflow_0002249530_google_app_engine_logging_python.txt
|
Q:
How to generate data model from sql schema in Django?
Our website uses a PHP front-end and a PostgreSQL database. We don't have a back-end at the moment except phpPgAdmin. The database admin has to type data into phpPgAdmin manually, which is error-prone and tedious. We want to use Django to build a back-end.
The database has a few dozen of tables already there. Is it possible to import the database schema into Django and create models automatically?
A:
Yes it is possible, using the inspectdb command:
python manage.py inspectdb
or
python manage.py inspectdb > models.py
to write them into a file.
This will look at the database configured in your settings.py and output model classes to standard output.
As Ignacio pointed out, there is a guide for your situation in the documentation.
A:
If each table has an autoincrement integer PK then you can use the legacy database instructions.
|
How to generate data model from sql schema in Django?
|
Our website uses a PHP front-end and a PostgreSQL database. We don't have a back-end at the moment except phpPgAdmin. The database admin has to type data into phpPgAmin manually, which is error-prone and tedious. We want to use Django to build a back-end.
The database has a few dozen of tables already there. Is it possible to import the database schema into Django and create models automatically?
|
[
"Yes it is possible, using the inspectdb command:\npython manage.py inspectdb\n\nor\npython manage.py inspectdb > models.py\n\nto get them in into the file\nThis will look at the database configured in your settings.py and outputs model classes to standard output.\nAs Ignacio pointed out, there is a guide for your situation in the documentation.\n",
"If each table has an autoincrement integer PK then you can use the legacy database instructions.\n"
] |
[
51,
2
] |
[] |
[] |
[
"database",
"django",
"python"
] |
stackoverflow_0002249489_database_django_python.txt
|
Q:
Python lazy iterator
I am trying to understand how and when iterator expressions get evaluated. The following seems to be a lazy expression:
g = (i for i in range(1000) if i % 3 == i % 2)
This one, however fails on construction:
g = (line.strip() for line in open('xxx', 'r') if len(line) > 10)
I do not have the file named 'xxx'. However, since this thing is lazy, why is it failing so soon?
Thanks.
EDIT: Wow, I made a lazy one!
g = (line.strip() for i in range(3) for line in open(str(i), 'r'))
A:
The iteration over the file returned by the call to open() is lazy. The call to open() is not.
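A quick way to see the same behaviour without files (a sketch):
def noisy(n):
    print 'noisy() called'   # runs immediately, just like open() above
    return range(n)

g = (x for x in noisy(3))    # prints 'noisy() called' right away,
                             # even though nothing has been iterated yet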
A:
From the documentation:
Variables used in the generator expression are evaluated lazily in a separate scope when the next() method is called for the generator object (in the same fashion as for normal generators). However, the in expression of the leftmost for clause is immediately evaluated in the current scope so that an error produced by it can be seen before any other possible error in the code that handles the generator expression.
|
Python lazy iterator
|
I am trying to understand how and when iterator expressions get evaluated. The following seems to be a lazy expression:
g = (i for i in range(1000) if i % 3 == i % 2)
This one, however fails on construction:
g = (line.strip() for line in open('xxx', 'r') if len(line) > 10)
I do not have the file named 'xxx'. However, since this thing is lazy, why is it failing so soon?
Thanks.
EDI: Wow, I made a lazy one!
g = (line.strip() for i in range(3) for line in open(str(i), 'r'))
|
[
"The iteration over the file returned by the call to open() is lazy. The call to open() is not.\n",
"From the documentation:\n\nVariables used in the generator\n expression are evaluated lazily in a\n separate scope when the next()\n method is called for the generator\n object (in the same fashion as for\n normal generators). However, the in\n expression of the leftmost for\n clause is immediately evaluated in the\n current scope so that an error\n produced by it can be seen before any\n other possible error in the code that\n handles the generator expression.\n\n"
] |
[
6,
6
] |
[] |
[] |
[
"lazy_evaluation",
"python"
] |
stackoverflow_0002249651_lazy_evaluation_python.txt
|
Q:
Convert unicode string to array of bytes
I'm using OpenGL and I need to pass an array of bytes to a function.
glCallLists(len('text'), GL_UNSIGNED_BYTES, 'text');
This way it's working fine. But I need to pass unicode text. I think that it should work like this:
text = u'unicode text'
glCallLists(len(text), GL_UNSIGNED_SHORT, convert_to_array_of_words(text));
Here I use GL_UNSIGNED_SHORT, which says I'll give an array where each element takes 2 bytes, and somehow convert the unicode text to an array of words.
So, how can I convert a unicode string to a "raw" array of character codes?
A:
The UTF encoding that takes up 2 bytes per character is UTF-16:
print repr(u'あいうえお'.encode('utf-16be'))
print repr(u'あいうえお'.encode('utf-16le'))
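To turn that encoding into individual 2-byte values, something like this sketch should work ('H' is an unsigned short, normally 2 bytes, matching GL_UNSIGNED_SHORT):
import array

text = u'unicode text'
codes = array.array('H', text.encode('utf-16le'))  # one item per code unit
print codes.tolist()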
|
Convert unicode string to array of bytes
|
I'm using OpenGL and I need to pass to a function array of bytes.
glCallLists(len('text'), GL_UNSIGNED_BYTES, 'text');
This way it's working fine. But I need to pass unicode text. I think that it should work like this:
text = u'unicode text'
glCallLists(len(text), GL_UNSIGNED_SHORT, convert_to_array_of_words(text));
Here I use GL_UNSIGNED_SHORT that says I'll give array where each element takes 2 bytes, and somehow convert unicode text to array of words.
So, how can I convert unicode string to "raw" array of chars' numbers?
|
[
"The UTF encoding that takes up 2 bytes per character is UTF-16:\nprint repr(u'あいうえお'.encode('utf-16be'))\nprint repr(u'あいうえお'.encode('utf-16le'))\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"unicode"
] |
stackoverflow_0002249817_python_unicode.txt
|
Q:
Can we get the following flexibility in Python as In Perl
Sorry, I am not trying to start any flame war. My scripting experience is from Perl, and I am pretty new to Python.
I just want to check whether I can have the same degree of flexibility in Python as in Perl.
In Python :
page = form.getvalue("page")
str = 'This is string : ' + str(int(page) + 1)
In Perl :
$str = 'This is string : ' . ($page + 1);
Is there any way I can avoid int / str conversion?
A:
No, since Python is strongly typed. If you keep page as an int you can do the following:
s = 'This is string : %d' % (page + 1,)
A:
It looks like page is a str
page = form.getvalue("page")
S = 'This is string : %d'%(int(page)+1)
otherwise make page an int
page = int(form.getvalue("page"))
S = 'This is string : %d'%(page+1)
For the record (and to show that this has nothing to do with strong typing), you can also do crazy stuff like this:
>>> class strplus(int):
... def __radd__(self, other):
... return str(int(other).__add__(self))
...
>>> page = form.getvalue("page")
>>> page + strplus(1)
'3'
A:
You could use:
mystr = "This string is: %s" % (int(page) + 1)
... the string conversion will be automatic when interpolating into the %s via the % (string formating operator).
You can't get around the need to convert from string to integer. Python will never conflate strings for other data types. In various contexts Python can return the string or "representation" of an object so there are some implicit data casts into string forms. (Under the hood these call .__str__() or .__repr__() object methods).
(While some folks don't like it I personally think the notion of overloading % for string interpolation is far more sensible than a function named sprintf() (if you have a language with operator overloading support anyway).
A:
No. Python doesn't have the same level of polymorphism as perl. You can print anything, and mix and match floats and ints quite easily, and lots of things (0, '', "", () and []) all end up False, but no, it's not perl in terms of polymorphism.
|
Can we get the following flexibility in Python as In Perl
|
Sorry. I am not trying to start any flame. My scripting experience is from Perl, and I am pretty new in Python.
I just want to check whether I can have the same degree of flexibility as in Python.
In Python :
page = form.getvalue("page")
str = 'This is string : ' + str(int(page) + 1)
In Perl :
$str = 'This is string : ' . ($page + 1);
Is there any way I can avoid int / str conversion?
|
[
"No, since Python is strongly typed. If you keep page as an int you can do the following:\ns = 'This is string : %d' % (page + 1,)\n\n",
"It looks like page is a str\npage = form.getvalue(\"page\")\nS = 'This is string : %d'%(int(page)+1)\n\notherwise make page an int\npage = int(form.getvalue(\"page\"))\nS = 'This is string : %d'%(page+1)\n\nFor the record (and to show that this is nothing to do with strong typing), you can also do crazy stuff like this:\n>>> class strplus(int):\n... def __radd__(self, other):\n... return str(int(other).__add__(self))\n... \n>>> page = form.getvalue(\"page\")\n>>> page + strplus(1)\n'3'\n\n",
"You could use:\nmystr = \"This string is: %s\" % (int(page) + 1)\n\n... the string conversion will be automatic when interpolating into the %s via the % (string formating operator).\nYou can't get around the need to convert from string to integer. Python will never conflate strings for other data types. In various contexts Python can return the string or \"representation\" of an object so there are some implicit data casts into string forms. (Under the hood these call .__str__() or .__repr__() object methods).\n(While some folks don't like it I personally think the notion of overloading % for string interpolation is far more sensible than a function named sprintf() (if you have a language with operator overloading support anyway).\n",
"No. Python doesn't have the same level of polymorphism as perl. You can print anything, and mix and match floats and ints quite easily, and lots of things (0, '', \"\", () and []) all end up False, but no, it's not perl in terms of polymorphism.\n"
] |
[
7,
1,
1,
0
] |
[] |
[] |
[
"perl",
"python"
] |
stackoverflow_0002249419_perl_python.txt
|
Q:
Django not translating Bittorrent query string properly
I'm writing a small Bittorrent tracker on top of the Django framework, as part of a larger project. However, I'm having problems with decoding the "info_hash" parameter of the announce request.
Basically, uTorrent takes the SHA1 hash of the torrent in question and URL encodes the hex representation of it, which is then sent to the tracker in a GET request as the info_hash parameter.
The info_hash
A44B44B0EE8D85A9F7135489D522A19DA2C87C91
gets encoded as:
%a4KD%b0%ee%8d%85%a9%f7%13T%89%d5%22%a1%9d%a2%c8%7c%91
However, Django decodes this to the Unicode string:
u'\ufffdKD\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\x13T\ufffd\ufffd"\ufffd\ufffd\ufffd\ufffd|\ufffd'
instead of a string literal like this:
\xa4KD\xb0\xee\x8d\x85\xa9\xf7\x13T\x89\xd5"\xa1\x9d\xa2\xc8|\x91
How can I stop Django from trying to translate the info_hash to Unicode, so I can then unquote it? My goal is to get a string literal that I can then encode to a hex string.
Any thoughts? Apologies if there's some concept about encoding that I'm missing. Thanks!
A:
What is your settings.DEFAULT_ENCODING? Also, how does the hash look in the HTTP headers? It shouldn't be modified at all during encoding, as below:
>>> import urllib
>>> urllib.urlencode({'hash':"A44B44B0EE8D85A9F7135489D522A19DA2C87C91"})
'hash=A44B44B0EE8D85A9F7135489D522A19DA2C87C91'
Since:
>>> urllib.quote('A44B44B0EE8D85A9F7135489D522A19DA2C87C91') == 'A44B44B0EE8D85A9F7135489D522A19DA2C87C91'
True
And therefore:
>>> urllib.unquote('%a4KD%b0%ee%8d%85%a9%f7%13T%89%d5%22%a1%9d%a2%c8%7c%91') == 'A44B44B0EE8D85A9F7135489D522A19DA2C87C91'
False
A:
Django decodes all GET data using the default encoding. You'll need to get the query string yourself, possibly from os.environ['QUERY_STRING'] or request.environ['QUERY_STRING'].
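A rough sketch of doing that in a view (the parameter name is taken from the question; request.META also exposes the raw query string):
import cgi

def announce(request):
    # bypass Django's unicode handling and parse the raw query string
    qs = request.META['QUERY_STRING']
    params = cgi.parse_qs(qs)
    info_hash = params['info_hash'][0]   # plain byte string
    hex_hash = info_hash.encode('hex')   # e.g. 'a44b44b0...'
    # ... build and return the tracker response here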
|
Django not translating Bittorrent query string properly
|
I'm writing a small Bittorrent tracker on top of the Django framework, as part of a larger project. However, I'm having problems with decoding the "info_hash" parameter of the announce request.
Basically, uTorrent takes the SHA1 hash of the torrent in question and URL encodes the hex representation of it, which is then sent to the tracker in a GET request as the info_hash parameter.
The info_hash
A44B44B0EE8D85A9F7135489D522A19DA2C87C91
gets encoded as:
%a4KD%b0%ee%8d%85%a9%f7%13T%89%d5%22%a1%9d%a2%c8%7c%91
However, Django decodes this to the Unicode string:
u'\ufffdKD\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\x13T\ufffd\ufffd"\ufffd\ufffd\ufffd\ufffd|\ufffd'
instead of a string literal like this:
\xa4KD\xb0\xee\x8d\x85\xa9\xf7\x13T\x89\xd5"\xa1\x9d\xa2\xc8|\x91
How can I stop Django from trying to translate the info_hash to Unicode, so I can then unquote it? My goal is to get a string literal that I can then encode to a hex string.
Any thoughts? Apologies if there's some concept about encoding that I'm missing. Thanks!
|
[
"What is your settings.DEFAULT_ENCODING? Also how deoes the hash look like in HTTP headers? It shouldn't be modified at all during encoding as below:\n>>> import urllib\n>>> urllib.urlencode({'hash':\"A44B44B0EE8D85A9F7135489D522A19DA2C87C91\"})\n'hash=A44B44B0EE8D85A9F7135489D522A19DA2C87C91'\n\nSince:\n>>> urllib.quote('A44B44B0EE8D85A9F7135489D522A19DA2C87C91') == 'A44B44B0EE8D85A9F7135489D522A19DA2C87C91'\nTrue\n\nAnd therefore:\n>>> urllib.unquote('%a4KD%b0%ee%8d%85%a9%f7%13T%89%d5%22%a1%9d%a2%c8%7c%91') == 'A44B44B0EE8D85A9F7135489D522A19DA2C87C91'\nFalse\n\n",
"Django decodes all GET data using the default encoding. You'll need to get the query string yourself, possibly from os.environ['QUERY_STRING'] or request.environ['QUERY_STRING'].\n"
] |
[
1,
0
] |
[] |
[] |
[
"bittorrent",
"django",
"encoding",
"python"
] |
stackoverflow_0002249947_bittorrent_django_encoding_python.txt
|
Q:
Django Master-Detail View Plugins
Let's say I have 3 django apps, app Country, app Social and app Financial.
Country is a 'master navigation' app. It lists all the countries in a 'index' view and shows details for each country on its 'details' view.
Each country's details include their Social details (from the social app) and their Financial details (from the financial app).
Social and Financial both have a detail view (for each country)
Is there an elegant way to 'plug' in those sub-detail views into the master detail view provided by Countries? So for each country detail page I would see 2 tabs showing the social and the financial details for that country.
A:
Two common solutions I use for this problem:
Partial Templates:
Create a template for rendering "social" and "financial" that does not need stuff from the view, other than the object it is working on (and uses the objects functions or template tags to render it).
Then you can easily {% include %} it (and set the needed variable first).
This partial view does not render a full HTML page, but only a single DIV or some other HTML element you wish to use. If you also need a "social-only" page, you can create a page that renders the header and then includes the partial template. You can use a convention like _template.html for the partial template, and template.html for the regular template.
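For example, a country detail template might pull them in like this (template names are made up for the sketch):
<!-- country_detail.html: each partial renders only a single DIV -->
<div class="tabs">
  {% include "social/_social_detail.html" %}
  {% include "financial/_financial_detail.html" %}
</div>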
AJAX:
Make your "social" and "financial" views aware of being called via XMLHttpRequest (request.is_ajax()). If they are, they return only a DIV element, without all the HTML around it. This way your master page can render without it, and add that content on the fly.
The AJAX way has several advantages: you don't render the plugin views on the same request as the whole page, so if you have many of these plugin views, the master page will load faster, and you can have a smart javascript choose only the relevant plugin views to ask for.
Also, you can use the normal view to generate data you need in the template (which you can't really do in the Partial Templates method).
|
Django Master-Detail View Plugins
|
Let's say I have 3 django apps, app Country, app Social and app Financial.
Country is a 'master navigation' app. It lists all the countries in a 'index' view and shows details for each country on its 'details' view.
Each country's details include their Social details (from the social app) and their Financial details (from the financial app).
Social and Financial both have a detail view (for each country)
Is there an elegant way to 'plug' in those sub-detail views into the master detail view provided by Countries? So for each country detail page I would see 2 tabs showing the social and the financial details for that country.
|
[
"2 common solution I use for this problem: \nPartial Templates:\nCreate a template for rendering \"social\" and \"financial\" that does not need stuff from the view, other than the object it is working on (and uses the objects functions or template tags to render it).\nthen you can easily {% include %} it (and set the needed variable first).\nThis partial view does not render a full HTML page, but only a single DIV or some other HTML element you wish to use. If you also need a \"social-only\" page, you can create a page that renders the header and then includes the partial template. You can use a convention like _template.html for the partial template, and template.html for the regular template.\nAJAX:\nMake your \"social\" and \"financial\" views aware of being called in XMLHTTPRequest (request.is_ajax()). If they are, they return only a DIV element, without all the HTML around it. This way your master page can render without it, and add that content on the fly.\nThe AJAX way has several advantages: you don't render the plugin views on the same request as the whole page, so if you have many of these plugin views, the master page will load faster, and you can have a smart javascript choose only the relevant plugin views to ask for.\nAlso, you can use the normal view to generate data you need in the template (which you can't really do in the Partial Templates method).\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_templates",
"django_views",
"master_detail",
"python"
] |
stackoverflow_0002249285_django_django_templates_django_views_master_detail_python.txt
|
Q:
Use psycopg2 to obtain long value from PostgreSQl
I am facing a problem retrieving a long value from PostgreSQL.
I use the following SQL command :
SELECT *, (extract(epoch FROM start_timestamp) * 1000) FROM lot
WHERE EXTRACT(EPOCH FROM lot.start_timestamp) * 1000 >=1265299200000 AND
EXTRACT(EPOCH FROM lot.start_timestamp) * 1000 <=1265990399999
ORDER BY start_timestamp DESC limit 9 offset 0
I am interested in the unix timestamp in ms
I run this command manually through pgAdmin.
From pgAdmin, I saw 1265860762817 for column (extract(epoch FROM start_timestamp) * 1000)
However, when I retrieve through psycopg2, here is what I use :
cur = conn.cursor()
cur.execute(sql)
rows = cur.fetchall()
for row in rows :
print row[3]
And here is what I get :
1.26586076282e+12
I pass this value, from python cgi to JavaScript in JSON
If I do manual conversion in JavaScript
1.26586076282e+12
// summary.date is 1.26586076282e+12
var timestamp = summary.date * 1;
// Get 1265860762820
alert(timestamp);
There is a 3 ms difference:
1265860762817
1265860762820-
==============
3
==============
How can I avoid such an error? How can I make sure psycopg2 returns 1265860762817, not 1.26586076282e+12?
A:
I have the same results using both psycopg2 and pygres. While this select returns a float value, you can modify your code to format the returned value:
def format_float_fld(v):
#return str(v)
return ('%20.0f' % (v)).strip()
If you use str(s) then you will get scientific notation.
You can also change query to return bigint instead of float:
SELECT (EXTRACT(EPOCH FROM TIMESTAMP '2010-02-16 20:38:40.123') * 1000)::bigint;
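As a rough sketch (the connection string is hypothetical), the cast version used from psycopg2 hands back plain Python integers, which survive the trip to JSON exactly:
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")   # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    SELECT (EXTRACT(EPOCH FROM start_timestamp) * 1000)::bigint
    FROM lot
    ORDER BY start_timestamp DESC LIMIT 9
""")
for (ms,) in cur.fetchall():
    print ms   # e.g. 1265860762817, an exact integer rather than a float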
|
Use psycopg2 to obtain long value from PostgreSQl
|
I am facing a problem retrieving a long value from PostgreSQL.
I use the following SQL command :
SELECT *, (extract(epoch FROM start_timestamp) * 1000) FROM lot
WHERE EXTRACT(EPOCH FROM lot.start_timestamp) * 1000 >=1265299200000 AND
EXTRACT(EPOCH FROM lot.start_timestamp) * 1000 <=1265990399999
ORDER BY start_timestamp DESC limit 9 offset 0
I am interested in the unix timestamp in ms
I run this command manually through pgAdmin.
From pgAdmin, I saw 1265860762817 for column (extract(epoch FROM start_timestamp) * 1000)
However, when I retrieve through psycopg2, here is what I use :
cur = conn.cursor()
cur.execute(sql)
rows = cur.fetchall()
for row in rows :
print row[3]
And here is what I get :
1.26586076282e+12
I pass this value, from python cgi to JavaScript in JSON
If I do manual conversion in JavaScript
1.26586076282e+12
// summary.date is 1.26586076282e+12
var timestamp = summary.date * 1;
// Get 1265860762820
alert(timestamp);
There is a 3 ms difference:
1265860762817
1265860762820-
==============
3
==============
How can I avoid such an error? How can I make sure psycopg2 returns 1265860762817, not 1.26586076282e+12?
|
[
"I have the same results using both psycopg2 and pygres. While this select returns float value you can modify your code to format returned value:\ndef format_float_fld(v):\n #return str(v)\n return ('%20.0f' % (v)).strip()\n\nIf you use str(s) then you will get scientific notation.\nYou can also change query to return bigint instead of float:\n SELECT (EXTRACT(EPOCH FROM TIMESTAMP '2010-02-16 20:38:40.123') * 1000)::bigint;\n\n"
] |
[
1
] |
[] |
[] |
[
"postgresql",
"python"
] |
stackoverflow_0002250135_postgresql_python.txt
|
Q:
Replace a word in a file
I am new to Python programming...
I have a .txt file....... It looks like..
0,Salary,14000
0,Bonus,5000
0,gift,6000
I want to replace the first '0' value with '1' in each line. How can I do this? Can anyone help me... with sample code?
Thanks in advance.
Nimmyliji
A:
I know that you're asking about Python, but forgive me for suggesting that perhaps a different tool is better for the job. :) It's a one-liner via sed:
sed 's/^0,/1,/' yourtextfile.txt > output.txt
This applies the regex /^0,/ (which matches any 0, that occurs at the beginning of a line) to each line and replaces the matched text with 1, instead. The output is directed into the file output.txt specified.
A:
inFile = open("old.txt", "r")
outFile = open("new.txt", "w")
for line in inFile:
outFile.write(",".join(["1"] + (line.split(","))[1:]))
inFile.close()
outFile.close()
If you would like something more general, take a look at the Python csv module. It contains utilities for processing comma-separated values (abbreviated as csv) in files. It can work with an arbitrary delimiter, not only a comma. Since your sample is obviously a csv file, you can use it as follows:
import csv
reader = csv.reader(open("old.txt"))
writer = csv.writer(open("new.txt", "w"))
writer.writerows(["1"] + line[1:] for line in reader)
To overwrite the original file with the new one:
import os
os.remove("old.txt")
os.rename("new.txt", "old.txt")
I think that writing to a new file and then renaming it is more fault-tolerant and less likely to corrupt your data than directly overwriting the source file. Imagine that your program raised an exception while the source file was already read into memory and reopened for writing. You would lose the original data, and your new data wouldn't be saved because of the program crash. In my case, I only lose the new data while preserving the original.
A:
f = open(filepath,'r')
data = f.readlines()
f.close()
edited = []
for line in data:
edited.append( '1'+line[1:] )
f = open(filepath,'w')
f.writelines(edited)
f.flush()
f.close()
Or in Python 2.5+:
with open(filepath,'r') as f:
data = f.readlines()
with open(outfilepath, 'w') as f:
for line in data:
f.write( '1' + line[1:] )
This should do it. I wouldn't recommend it for a truly big file though ;-)
What is going on (ex 1):
1: Open the file in read mode
2,3: Read all the lines into a list (each line is a separate index) and close the file.
4,5,6: Iterate over the list, constructing a new list where each line has the first character replaced by a 1. The line[1:] slices the string from index 1 onward. We concatenate the 1 with the truncated string.
7,8,9: Reopen the file in write mode, write the list to the file (overwrite), flush the buffer, and close the file handle.
In Ex. 2:
I use the with statement, which lets the file handle close itself, but do essentially the same thing.
A:
o=open("output.txt","w")
for line in open("file"):
s=line.split(",")
s[0]="1"
o.write(','.join(s))
o.close()
Or you can use fileinput with in place edit
import fileinput
for line in fileinput.FileInput("file",inplace=1):
s=line.split(",")
s[0]="1"
print ','.join(s)
|
Replace a word in a file
|
I am new to Python programming...
I have a .txt file....... It looks like..
0,Salary,14000
0,Bonus,5000
0,gift,6000
I want to replace the first '0' value with '1' in each line. How can I do this? Can anyone help me... with sample code?
Thanks in advance.
Nimmyliji
|
[
"I know that you're asking about Python, but forgive me for suggesting that perhaps a different tool is better for the job. :) It's a one-liner via sed:\nsed 's/^0,/1,/' yourtextfile.txt > output.txt\n\nThis applies the regex /^0,/ (which matches any 0, that occurs at the beginning of a line) to each line and replaces the matched text with 1, instead. The output is directed into the file output.txt specified.\n",
"inFile = open(\"old.txt\", \"r\")\noutFile = open(\"new.txt\", \"w\")\nfor line in inFile: \n outFile.write(\",\".join([\"1\"] + (line.split(\",\"))[1:]))\n\ninFile.close()\noutFile.close()\n\nIf you would like something more general, take a look to Python csv module. It contains utilities for processing comma-separated values (abbreviated as csv) in files. But it can work with arbitrary delimiter, not only comma. So as you sample is obviously a csv file, you can use it as follows:\nimport csv\nreader = csv.reader(open(\"old.txt\"))\nwriter = csv.writer(open(\"new.txt\", \"w\"))\nwriter.writerows([\"1\"] + line[1:] for line in reader)\n\nTo overwrite original file with new one:\nimport os\nos.remove(\"old.txt\")\nos.rename(\"new.txt\", \"old.txt\")\n\nI think that writing to new file and then renaming it is more fault-tolerant and less likely corrupt your data than direct overwriting of source file. Imagine, that your program raised an exception while source file was already read to memory and reopened for writing. So you would lose original data and your new data wouldn't be saved because of program crash. In my case, I only lose new data while preserving original.\n",
"f = open(filepath,'r')\ndata = f.readlines()\nf.close()\n\nedited = []\nfor line in data:\n edited.append( '1'+line[1:] )\n\nf = open(filepath,'w')\nf.writelines(edited)\nf.flush()\nf.close()\n\nOr in Python 2.5+:\nwith open(filepath,'r') as f:\n data = f.readlines()\n\nwith open(outfilepath, 'w') as f:\n for line in data:\n f.write( '1' + line[1:] )\n\nThis should do it. I wouldn't recommend it for a truly big file though ;-)\nWhat is going on (ex 1):\n1: Open the file in read mode\n2,3: Read all the lines into a list (each line is a separate index) and close the file.\n4,5,6: Iterate over the list constructing a new list where each line has the first character replaced by a 1. The line[1:] slices the string from index 1 onward. We concatenate the 1 with the truncated list.\n7,8,9: Reopen the file in write mode, write the list to the file (overwrite), flush the buffer, and close the file handle.\nIn Ex. 2:\nI use the with statement that lets the file handle closing itself, but do essentially the same thing.\n",
"o=open(\"output.txt\",\"w\")\nfor line in open(\"file\"):\n s=line.split(\",\")\n s[0]=\"1\"\n o.write(','.join(s))\no.close()\n\nOr you can use fileinput with in place edit\nimport fileinput\nfor line in fileinput.FileInput(\"file\",inplace=1):\n s=line.split(\",\")\n s[0]=\"1\"\n print ','.join(s)\n\n"
] |
[
4,
3,
2,
2
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0002250357_file_python.txt
|
Q:
Can someone help me with this JAVA SAXParser?
I've been fiddling for 3 hours and I can't get this F***** parser to work. Sorry for cursing.
I don't understand why I can't find one decent tutorial that does exactly what I want.
I just want to send the function a String/XML. Then, parse it. it's not that hard. In python, I can do it with my eyes closed. Awesome, freaking documentation right here: http://www.crummy.com/software/BeautifulSoup/documentation.html
import BeautifulSoup
soup = BeautifulSoup(the_xml)
persons_name = soup.findAll('first_name')[0].string
Why can't I find good, simple documentation that teaches me how to parse XML????? This is my current code for JAVA SAX, and it's not working, and I don't even know why.
public static void parseit(String thexml)
{
SAXParserFactory factory = SAXParserFactory.newInstance();
try {
SAXParser saxParser = factory.newSAXParser();
saxParser.parse( thexml , new DefaultHandler() );
} catch (Throwable err) {
err.printStackTrace ();
}
}
Can someone just write me the code to parse the XML using SAX parser...please...It's just like 5 lines of code.
A:
You have to extend the default handler DefaultHandler. For example, try this:
saxParser.parse( new InputSource(new StringReader(thexml)) , new DefaultHandler()
{
public void startElement(String uri, String localName, String qName, Attributes attributes)
{
System.out.println("Hello "+qName);
}
});
A:
Ok, so what you need to do is implement your own handler (instead of using the default one). So replace
saxParser.parse( thexml , new DefaultHandler() );
with
saxParser.parse( thexml , new MyFreakingHandler() );
where MyFreakingHandler implements the HandlerBase interface or extends the DefaultHandler class. Then simply provide implementations for methods such as
public void startDocument () throws SAXException
public void endElement (String name) throws SAXException
I don't know however why you could not find any tutorial on the web. I haven't been using SAXParser for at least 3 years now and in order to answer your post I just simply asked Google for help.
EDIT:
Ok, so to clear things up: there used to be an official Java tutorial for SAX that somehow I cannot find on the web now; however, there are still a number of decent unofficial tutorials that can be quite helpful. Try this one, for instance: http://www.java-samples.com/showtutorial.php?tutorialid=152
A:
You must extend DefaultHandler with your own implementation. The sax parser is good if you are working with large documents. If not, you might be better off with another xml parser, for example dom4j.
Here's a simple sax tutorial
A:
I don't know if this would be an option for you, but since Groovy and Java play nice together why not try one of the Groovy options to process XML.
In particular look at the XML Slurper (http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper)
def records = new XmlSlurper().parseText(thexml)
def persons_name = records.first_name[0]
In my opinion that is as close as you'll get to BeautifulSoup in a Java compatible way.
A:
Using the Java XPath API
XPathFactory factory = XPathFactory.newInstance();
XPath xPath = factory.newXPath();
XPathExpression xPathExpression = xPath.compile("//first_name");
NodeList nodes = (NodeList) xPathExpression.evaluate(
new InputSource(new FileInputStream(the_xml)), XPathConstants.NODESET);
Yes it is unnecessarily verbose.
|
Can someone help me with this JAVA SAXParser?
|
I've been fiddling for 3 hours and I can't get this F***** parser to work. Sorry for cursing.
I don't understand why I can't find one decent tutorial that does exactly what I want.
I just want to send the function a String/XML. Then, parse it. it's not that hard. In python, I can do it with my eyes closed. Awesome, freaking documentation right here: http://www.crummy.com/software/BeautifulSoup/documentation.html
import BeautifulSoup
soup = BeautifulSoup(the_xml)
persons_name = soup.findAll('first_name')[0].string
Why can't I find good, simple documentation that teaches me how to parse XML????? This is my current code for JAVA SAX, and it's not working, and I don't even know why.
public static void parseit(String thexml)
{
SAXParserFactory factory = SAXParserFactory.newInstance();
try {
SAXParser saxParser = factory.newSAXParser();
saxParser.parse( thexml , new DefaultHandler() );
} catch (Throwable err) {
err.printStackTrace ();
}
}
Can someone just write me the code to parse the XML using SAX parser...please...It's just like 5 lines of code.
|
[
"You have to extends your default handler DefaultHandler. For example, try this:\n saxParser.parse( new InputSource(new StringReader(thexml)) , new DefaultHandler()\n {\n public void startElement(String uri, String localName, String qName, Attributes attributes)\n {\n System.out.println(\"Hello \"+qName);\n } \n });\n\n",
"Ok, so what you need to do is to implement your own handler (instead of using default one). So replace \nsaxParser.parse( thexml , new DefaultHandler() );\n\nwith\n saxParser.parse( thexml , new MyFreakingHandler() );\n\nwhere MyFreakingHandler implements interface HandlerBase or it can extend DefaultHandler class. Then simply provide implementation for such methods like \npublic void startDocument () throws SAXException\npublic void endElement (String name) throws SAXException\n\nI don't know however why you could not find any tutorial on the web. I haven't been using SAXParser for at least 3 years now and in order to answer your post I just simply asked Google for help.\nEDIT:\nOk, so to clear things out. There used to be an official Java tutorial for SAX, that somehow I cannot find on the web now, however there are still number of decent non-official tutorials that can be quite helpful. Try with this on for instance: http://www.java-samples.com/showtutorial.php?tutorialid=152\n",
"You must extend DefaultHandler with your own implementation. The sax parser is good if you are working with large documents. If not, you might be better off with another xml parser, for example dom4j.\nHere's a simple sax tutorial\n",
"I don't know if this would be an option for you, but since Groovy and Java play nice together why not try one of the Groovy options to process XML.\nIn particular look at the XML Slurper (http://groovy.codehaus.org/Reading+XML+using+Groovy's+XmlSlurper)\ndef records = new XmlSlurper().parseText(thexml)\ndef persons_name = records.first_name[0]\n\nIn my opinion that is as close as you'll get to BeautifulSoup in a Java compatible way.\n",
"Using the Java XPath API\nXPathFactory factory = XPathFactory.newInstance();\nXPath xPath = factory.newXPath();\nXPathExpression xPathExpression = xPath.compile(\"//first_name\");\nNodeList nodes = (NodeList) xPathExpression.evaluate(\n new InputSource(new FileInputStream(the_xml)), XPathConstants.NODESET);\n\nYes it is unnecessarily verbose.\n"
] |
[
3,
3,
2,
0,
0
] |
[] |
[] |
[
"java",
"python",
"xml"
] |
stackoverflow_0002250450_java_python_xml.txt
|
Q:
Python returning the wrong length of string when using special characters
I have a string ë́aúlt that I want to get the length of and manipulate based on character positions and so on. The problem is that the first ë́ is being counted twice, or I guess ë is in position 0 and ´ is in position 1.
Is there any possible way in Python to have a character like ë́ be represented as 1?
I'm using UTF-8 encoding for the actual code and web page it is being outputted to.
edit: Just some background on why I need to do this. I am working on a project that translates English to Seneca (a form of Native American language) and ë́ shows up quite a bit. Some rewrite rules for certain words require knowledge of letter position (itself and surrounding letters) and other characteristics, such as accents and other diacritic markings.
A:
UTF-8 is a Unicode encoding which uses more than one byte for special characters. If you don't want the length of the encoded string, simply decode it and use len() on the unicode object (and not the str object!).
Here are some examples:
>>> # creates a str literal (with utf-8 encoding, if this was
>>> # specified on the beginning of the file):
>>> len('ë́aúlt')
9
>>> # creates a unicode literal (you should generally use this
>>> # version if you are dealing with special characters):
>>> len(u'ë́aúlt')
6
>>> # the same str literal (written in an encoded notation):
>>> len('\xc3\xab\xcc\x81a\xc3\xbalt')
9
>>> # you can convert any str to an unicode object by decoding() it:
>>> len('\xc3\xab\xcc\x81a\xc3\xbalt'.decode('utf-8'))
6
Of course, you can also access single characters in an unicode object like you would do in a str object (they are both inheriting from basestring and therefore have the same methods):
>>> test = u'ë́aúlt'
>>> print test[0]
ë
If you develop localized applications, it's generally a good idea to use only unicode-objects internally, by decoding all inputs you get. After the work is done, you can encode the result again as 'UTF-8'. If you keep to this principle, you will never see your server crashing because of any internal UnicodeDecodeErrors you might get otherwise ;)
PS: Please note that the str and unicode datatypes have changed significantly in Python 3. In Python 3 there are only unicode strings and plain byte strings, which can't be mixed anymore. That should help to avoid common pitfalls with unicode handling...
Regards,
Christoph
A:
The problem is that the first ë́ is being counted twice, or I guess ë is in position 0 and ´ is in position 1.
Yes. That's how code points are defined by Unicode. In general, you can ask Python to convert a letter and a separate ‘combining’ diacritical mark like U+0301 COMBINING ACUTE ACCENT using Unicode normalisation:
>>> unicodedata.normalize('NFC', u'a\u0301')
u'\xe1' # single character: á
However, there is no single character in Unicode for “e with diaeresis and acute accent” because no language in the world has ever used the letter ‘ë́’. (Pinyin transliteration has “u with diaeresis and acute accent”, but not ‘e’.) Consequently font support is poor; it renders really badly in many cases and is a messy blob on my web browser.
To work out where the ‘editable points’ in a string of Unicode code points are is a tricky job that requires quite a bit of domain knowledge of languages. It's part of the issue of “complex text layout”, an area which also includes issues such as bidirectional text and contextual glyph shaping and ligatures. To do complex text layout you'll need a library such as Uniscribe on Windows, or Pango generally (for which there is a Python interface).
If, on the other hand, you merely want to completely ignore all combining characters when doing a count, you can get rid of them easily enough:
def withoutcombining(s):
return ''.join(c for c in s if unicodedata.combining(c)==0)
>>> withoutcombining(u'ë́aúlt')
'\xeba\xfalt' # ëaúlt
>>> len(_)
5
A:
The best you can do is to use unicodedata.normalize() to decompose the character and then filter out the accents.
Don't forget to use unicode and unicode literals in your code.
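A minimal sketch of that approach (the function name is made up):
import unicodedata

def visible_length(s):
    # Decompose to NFD, then count only the base (non-combining) characters.
    decomposed = unicodedata.normalize('NFD', s)
    return sum(1 for c in decomposed if not unicodedata.combining(c))

print visible_length(u'ë́aúlt')   # 5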
A:
Which Python version are you using?
Python 3.1 doesn't have this issue.
>>> print(len("ë́aúlt"))
6
Regards
Djoudi
A:
You said: I have a string ë́aúlt that I want to get the length of and manipulate based on character positions and so on. The problem is that the first ë́ is being counted twice, or I guess ë is in position 0 and ´ is in position 1.
The first step in working on any Unicode problem is to know exactly what is in your data; don't guess. In this case your guess is correct; it won't always be.
"Exactly what is in your data": use the repr() built-in function (for lots more things apart from unicode). A useful advantage of showing the repr() output in your question is that answerers then have exactly what you have. Note that your text displays in only FOUR positions instead of 5 with some browsers/fonts -- the 'e' and its diacritics and the 'a' are mangled together in one position.
You can use the unicodedata.name() function to tell you what each component is.
Here's an example:
# coding: utf8
import unicodedata
x = u"ë́aúlt"
print(repr(x))
for c in x:
try:
name = unicodedata.name(c)
except:
name = "<no name>"
print "U+%04X" % ord(c), repr(c), name
Results:
u'\xeb\u0301a\xfalt'
U+00EB u'\xeb' LATIN SMALL LETTER E WITH DIAERESIS
U+0301 u'\u0301' COMBINING ACUTE ACCENT
U+0061 u'a' LATIN SMALL LETTER A
U+00FA u'\xfa' LATIN SMALL LETTER U WITH ACUTE
U+006C u'l' LATIN SMALL LETTER L
U+0074 u't' LATIN SMALL LETTER T
Now read @bobince's answer :-)
|
Python returning the wrong length of string when using special characters
|
I have a string ë́aúlt that I want to get the length of and manipulate based on character positions and so on. The problem is that the first ë́ is being counted twice, or I guess ë is in position 0 and ´ is in position 1.
Is there any possible way in Python to have a character like ë́ be represented as 1?
I'm using UTF-8 encoding for the actual code and web page it is being outputted to.
edit: Just some background on why I need to do this. I am working on a project that translates English to Seneca (a form of Native American language) and ë́ shows up quite a bit. Some rewrite rules for certain words require knowledge of letter position (itself and surrounding letters) and other characteristics, such as accents and other diacritic markings.
|
[
"UTF-8 is an unicode encoding which uses more than one byte for special characters. If you don't want the length of the encoded string, simple decode it and use len() on the unicode object (and not the str object!).\nHere are some examples:\n>>> # creates a str literal (with utf-8 encoding, if this was\n>>> # specified on the beginning of the file):\n>>> len('ë́aúlt') \n9\n>>> # creates a unicode literal (you should generally use this\n>>> # version if you are dealing with special characters):\n>>> len(u'ë́aúlt') \n6\n>>> # the same str literal (written in an encoded notation):\n>>> len('\\xc3\\xab\\xcc\\x81a\\xc3\\xbalt') \n9\n>>> # you can convert any str to an unicode object by decoding() it:\n>>> len('\\xc3\\xab\\xcc\\x81a\\xc3\\xbalt'.decode('utf-8')) \n6\n\nOf course, you can also access single characters in an unicode object like you would do in a str object (they are both inheriting from basestring and therefore have the same methods):\n>>> test = u'ë́aúlt'\n>>> print test[0]\në\n\nIf you develop localized applications, it's generally a good idea to use only unicode-objects internally, by decoding all inputs you get. After the work is done, you can encode the result again as 'UTF-8'. If you keep to this principle, you will never see your server crashing because of any internal UnicodeDecodeErrors you might get otherwise ;)\nPS: Please note, that the str and unicode datatype have changed significantly in Python 3. In Python 3 there are only unicode strings and plain byte strings which can't be mixed anymore. That should help to avoid common pitfalls with unicode handling...\nRegards,\nChristoph\n",
"\nThe problem is that the first ë́ is being counted twice, or I guess ë is in position 0 and ´ is in position 1.\n\nYes. That's how code points are defined by Unicode. In general, you can ask Python to convert a letter and a separate ‘combining’ diacritical mark like U+0301 COMBINING ACUTE ACCENT using Unicode normalisation:\n>>> unicodedata.normalize('NFC', u'a\\u0301')\nu'\\xe1' # single character: á\n\nHowever, there is no single character in Unicode for “e with diaeresis and acute accent” because no language in the world has ever used the letter ‘ë́’. (Pinyin transliteration has “u with diaeresis and acute accent”, but not ‘e’.) Consequently font support is poor; it renders really badly in many cases and is a messy blob on my web browser.\nTo work out where the ‘editable points’ in a string of Unicode code points are is a tricky job that requires quite a bit of domain knowledge of languages. It's part of the issue of “complex text layout”, an area which also includes issues such as bidirectional text and contextual glpyh shaping and ligatures. To do complex text layout you'll need a library such as Uniscribe on Windows, or Pango generally (for which there is a Python interface).\nIf, on the other hand, you merely want to completely ignore all combining characters when doing a count, you can get rid of them easily enough:\ndef withoutcombining(s):\n return ''.join(c for c in s if unicodedata.combining(c)==0)\n\n>>> withoutcombining(u'ë́aúlt')\n'\\xeba\\xfalt' # ëaúlt\n>>> len(_)\n5\n\n",
"The best you can do is to use unicodedata.normalize() to decompose the character and then filter out the accents.\nDon't forget to use unicode and unicode literals in your code.\n",
"which Python version are you using?\nPython 3.1 doesn't have this issue.\n>>> print(len(\"ë́aúlt\"))\n6\n\nRegards\nDjoudi\n",
"You said: I have a string ë́aúlt that I want to get the length of a manipulate based on character positions and so on. The problem is that the first ë́ is being counted twice, or I guess ë is in position 0 and ´ is in position 1.\nThe first step in working on any Unicode problem is to know exactly what is in your data; don't guess. In this case your guess is correct; it won't always be.\n\"Exactly what is in your data\": use the repr() built-in function (for lots more things apart from unicode). A useful advantage of showing the repr() output in your question is that answerers then have exactly what you have. Note that your text displays in only FOUR positions instead of 5 with some browsers/fonts -- the 'e' and its diacritics and the 'a' are mangled together in one position.\nYou can use the unicodedata.name() function to tell you what each component is.\nHere's an example:\n# coding: utf8\nimport unicodedata\nx = u\"ë́aúlt\"\nprint(repr(x))\nfor c in x:\n try:\n name = unicodedata.name(c)\n except:\n name = \"<no name>\"\n print \"U+%04X\" % ord(c), repr(c), name\n\nResults:\nu'\\xeb\\u0301a\\xfalt'\nU+00EB u'\\xeb' LATIN SMALL LETTER E WITH DIAERESIS\nU+0301 u'\\u0301' COMBINING ACUTE ACCENT\nU+0061 u'a' LATIN SMALL LETTER A\nU+00FA u'\\xfa' LATIN SMALL LETTER U WITH ACUTE\nU+006C u'l' LATIN SMALL LETTER L\nU+0074 u't' LATIN SMALL LETTER T\n\nNow read @bobince's answer :-)\n"
] |
[
22,
6,
1,
0,
0
] |
[] |
[] |
[
"character_encoding",
"python"
] |
stackoverflow_0002247205_character_encoding_python.txt
|
Q:
app-engine (python) Modeling relationships, am I doing it wrong?
Using the following models I am trying to figure out a way to generate a news feed of sorts so that when a user logs in they are presented with a list of upcoming events for bars that they choose to follow. Will I be able to query something like “SELECT * FROM Barevent WHERE parent_bar IN UserprofileInstance.following”? If so will this be efficient?
class Barprofile(db.Model):
b_user = db.UserProperty()
created = db.DateTimeProperty(auto_now_add=True)
barname = db.StringProperty()
address = db.PostalAddressProperty()
zipcode = db.StringProperty()
class Barevent(db.Model):
created = db.DateTimeProperty(auto_now_add=True)
when = db.DateProperty()
starttime = db.TimeProperty()
endtime = db.TimeProperty()
description = db.StringProperty(multiline=True)
parent_bar = db.ReferenceProperty(Barprofile, collection_name='bar_events')
class Userprofile(db.Model):
b_user = db.UserProperty()
following = db.ListProperty(db.Key)
A:
Your model and your query should work fine, and it would not be inefficient. The one thing that you will want to keep in mind is that if Barprofile entities are deleted, you may want to manually remove their keys from the following property of each Userprofile. Having an orphaned Barprofile reference will not break the query, but if you ever use that list for other purposes (such as displaying to a user which bars he or she is subscribed to), you will get an error if you try to query on that key.
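A rough sketch of that cleanup (the function is hypothetical; a real app might run it from a task queue and page past the 1000-entity fetch limit):
from google.appengine.ext import db

def delete_bar(bar_key):
    # Equality filters on a ListProperty match membership, so this finds followers.
    followers = Userprofile.all().filter('following =', bar_key).fetch(1000)
    for profile in followers:
        profile.following.remove(bar_key)
    db.put(followers)
    db.delete(bar_key)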
A:
The structure you're describing will work, but bear in mind that the 'IN' operator requires executing one query for each item in the list - so it could result in a lot of queries.
However, what you're dealing with is a 'scatter-gather' problem akin to Twitter, so you don't have much choice - you either gather things at read time like this, or you broadcast them to all listening users at update time.
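For reference, the read-time gather over the models above could look roughly like this (the function name is made up; the datastore caps IN lists at 30 values, so very long following lists need chunking):
def upcoming_events(profile, limit=20):
    # One datastore subquery is issued per key in `following`.
    return (Barevent.all()
            .filter('parent_bar IN', profile.following)
            .order('when')
            .fetch(limit))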
A:
There are a lot of ways to do GAE queries but I don't see why yours wouldn't work.
If you had a specific Barprofile (B) then I think you could (in Python) do;
 query = db.GqlQuery("SELECT * FROM Barevent WHERE parent_bar = :1",
                     B.key())
day = query.get()
But I'm not quite sure why you have both a Userprofile and a Barprofile (so I think you'd want to reconsider those three).
Generally, you don't need (or even want) to normalize data in GAE the way you're used to doing for traditional RDBMS environments. There's no particular advantage to normalizing your data that way anymore; i.e., instead of leaving it up to your DB to ensure consistency, you do it at the application level now.
|
app-engine (python) Modeling relationships, am I doing it wrong?
|
Using the following models I am trying to figure out a way to generate a news feed of sorts so that when a user logs in they are presented with a list of upcoming events for bars that they choose to follow. Will I be able to query something like “SELECT * FROM Barevent WHERE parent_bar IN UserprofileInstance.following”? If so will this be efficient?
class Barprofile(db.Model):
b_user = db.UserProperty()
created = db.DateTimeProperty(auto_now_add=True)
barname = db.StringProperty()
address = db.PostalAddressProperty()
zipcode = db.StringProperty()
class Barevent(db.Model):
created = db.DateTimeProperty(auto_now_add=True)
when = db.DateProperty()
starttime = db.TimeProperty()
endtime = db.TimeProperty()
description = db.StringProperty(multiline=True)
parent_bar = db.ReferenceProperty(Barprofile, collection_name='bar_events')
class Userprofile(db.Model):
b_user = db.UserProperty()
following = db.ListProperty(db.Key)
|
[
"Your model and your query should work fine, and it would not be inefficient. The one thing that you will want to keep in mind is that if Barprofile entities are deleted, you may want to manually remove their keys from the following property of each Userprofile. Having an orphaned Barprofile reference will not break the query, but if you ever use that list for other purposes (such as displaying to a user which bars he or she is subscribed to), you will got an error if you try to query on that key.\n",
"The structure you're describing will work, but bear in mind that the 'IN' operator requires executing one query for each item in the list - so it could result in a lot of queries.\nHowever, what you're dealing with is a 'scatter-gather' problem akin to Twitter, so you don't have much choice - you either gather things at read time like this, or you broadcast them to all listening users at update time.\n",
"There are a lot of ways to do GAE queries but I don't see why yours wouldn't work.\nIf you had a specific Barprofile (B) then I think you could (in Python) do;\n query = db.GqlQuery(\"SELECT * FROM Barevent WHERE parent_bar = :1,\n B.b_user)\n day = query.get()\n\nBut I'm not quite sure why you have both a Userprofile and a Barprofile (so I think you'd want to reconsider those three).\nGenerally, you don't need (or even want) to normalize data in GAE as you're used to doing for traditional RDBMS environments. There's no particular advantage to normalizing your data that way anymore, i.e. instead of making it up to your DB to ensure consistency you do it at the application level now.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"google_app_engine",
"google_cloud_datastore",
"python"
] |
stackoverflow_0002246727_google_app_engine_google_cloud_datastore_python.txt
|
Q:
How to do drag & drop with wxWidgets module of Python?
I'm using Python and I want to do a drag & drop interface.
For example, with a large picture whose size is bigger than the screen, I want to click on it and drag it to see other parts. Something like "Google Maps"!
In Google Maps, if we click twice we zoom, but if we click once and move the mouse while the button is pressed, we pan around the map. That's what I want to do!
Is it possible to do this using wxWidgets module of Python?
If so, how?
Thanks in advance :)
A:
I found the solution over here:
http://www.java2s.com/Code/Python/Event/Mouseactiondrag.htm
With this code I can do what I want!
And the name was only Mouse Drag and not Mouse Drag and Drop (soz)
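For anyone who lands here, a self-contained rough sketch of the same idea (all names are made up; the point is the event wiring, not the linked code):
import wx

class DragPanel(wx.Panel):
    """Pans a large image by click-and-drag, roughly like Google Maps."""
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.offset = [0, 0]   # current view offset into the big image
        self.last = None       # last mouse position while the button is down
        self.Bind(wx.EVT_LEFT_DOWN, self.on_down)
        self.Bind(wx.EVT_LEFT_UP, self.on_up)
        self.Bind(wx.EVT_MOTION, self.on_motion)

    def on_down(self, event):
        self.last = event.GetPosition()
        self.CaptureMouse()

    def on_up(self, event):
        self.last = None
        if self.HasCapture():
            self.ReleaseMouse()

    def on_motion(self, event):
        if event.Dragging() and event.LeftIsDown() and self.last is not None:
            pos = event.GetPosition()
            self.offset[0] += pos.x - self.last.x
            self.offset[1] += pos.y - self.last.y
            self.last = pos
            self.Refresh()   # repaint using self.offset in your paint handler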
|
How to do drag & drop with wxWidgets module of Python?
|
I'm using Python and I want to do a drag & drop interface.
For example, with a large picture whose size is bigger than the screen, I want to click on it and drag it to see other parts. Something like "Google Maps"!
In Google Maps, if we click twice we zoom, but if we click once and move the mouse while the button is pressed, we pan around the map. That's what I want to do!
Is it possible to do this using wxWidgets module of Python?
If so, how?
Thanks in advance :)
|
[
"I found the solution over here:\nhttp://www.java2s.com/Code/Python/Event/Mouseactiondrag.htm\nWith this code I can do what I want!\nAnd the name was only Mouse Drag and not Mouse Drag and Drop (soz)\n"
] |
[
0
] |
[] |
[] |
[
"drag_and_drop",
"python",
"wxpython",
"wxwidgets"
] |
stackoverflow_0002237725_drag_and_drop_python_wxpython_wxwidgets.txt
|
Q:
Django MySql Raw Query Error - Parameter index out of range
This view runs fine on plain Python/Django/MySQL on Windows.
I'm porting this to run over Jython/Django/MySQL and it gives this error -
Exception received is : error setting index [10] [SQLCode: 0]
Parameter index out of range (10 > number of parameters, which is 0). [SQLCode: 0],
[SQLState: S1009]
The Query is -
cursor.execute("select value from table_name
where value_till_dt >= str_to_date('%s,%s,%s,%s,%s', '%%m,%%d,%%Y,%%H,%%i')
AND value_till_dt <= str_to_date('%s,%s,%s,%s,%s', '%%m,%%d,%%Y,%%H,%%i')
and granularity='5'
ORDER BY value_till_dt",
[int(tempStart.month),int(tempStart.day), int(tempStart.year), int(tempStart.hour), int(tempStart.minute),
int(tempEnd.month), int(tempEnd.day), int(tempEnd.year), int(tempEnd.hour), int(tempEnd.minute)])
As you see there are 10 parameters being passed to this query.
Does the error mean that the query is not getting the parameters ?
I have printed out the parameters just before the execution and they are showing as being passed correctly -
1 - Start Parameters being passed are : 1 11 2010 10 0
2 - End Parameters being passed are : 1 11 2010 10 5
The only difference in the second environment is that there is no data available for this date range. But the error does not seem to be related to data.
Any thoughts are appreciated.
A:
It's indeed a parameter style problem. You have to use ? instead of %s.
Here is how you reproduce the error you are getting:
shell> jython
>>> from com.ziclix.python.sql import zxJDBC
>>> (d, v) = "jdbc:mysql://localhost/test", "org.gjt.mm.mysql.Driver"
>>> cnx = zxJDBC.connect(d, None, None, v)
>>> cur = cnx.cursor()
>>> cur.execute("SELECT %s", ('ham',))
..
zxJDBC.Error: error setting index [1] [SQLCode: 0]
Parameter index out of range (1 > number of parameters,
which is 0). [SQLCode: 0], [SQLState: S1009]
Now, if you use quotes around the ?-mark, you'll get the same problem:
>>> cur.execute("SELECT '?'", ('ham',))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
zxJDBC.Error: error setting index [1] [SQLCode: 0]
Parameter index out of range (1 > number of parameters,
which is 0). [SQLCode: 0], [SQLState: S1009]
The point is to not use quotes and let the database interface do it for you:
>>> cur.execute("SELECT ?", ('ham',))
>>> cur.fetchall()
[(u'ham',)]
Here is how I would do it in the code. You first make the strings you are going to use for the str_to_date() functions like this:
start = "%d,%d,%d,%d,%d" % (int(tempStart.month),
int(tempStart.day), int(tempStart.year),int(tempStart.hour),
int(tempStart.minute))
stop = "%d,%d,%d,%d,%d" % (int(tempEnd.month),
int(tempEnd.day), int(tempEnd.year), int(tempEnd.hour),
int(tempEnd.minute))
You make the SELECT statement, but don't use any quotes, and pass it on to the cursor. The database interface will do the job for you. Also, we put the 'granularity' value as a parameter. Note that with ? placeholders there is no %-interpolation step, so the str_to_date format strings keep single % signs:
select = """SELECT value FROM table_name
    WHERE value_till_dt >= str_to_date(?, '%m,%d,%Y,%H,%i')
    AND value_till_dt <= str_to_date(?, '%m,%d,%Y,%H,%i')
    AND granularity=?
    ORDER BY value_till_dt
"""
cursor.execute(select, (start, stop, 5))
I hope this helps!
A:
Are you sure that the parameter marker is %s and not ? or even :parameter? Check the paramstyle argument of the DB-API module to find out.
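For example, under Jython's zxJDBC a quick check (assuming the module exposes the standard DB-API attribute) shows the qmark style:
from com.ziclix.python.sql import zxJDBC
print zxJDBC.paramstyle   # 'qmark' -> use ? placeholders, not %s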
|
Django MySql Raw Query Error - Parameter index out of range
|
This view runs fine on plain Python/Django/MySQL on Windows.
I'm porting this to run over Jython/Django/MySQL and it gives this error -
Exception received is : error setting index [10] [SQLCode: 0]
Parameter index out of range (10 > number of parameters, which is 0). [SQLCode: 0],
[SQLState: S1009]
The Query is -
cursor.execute("select value from table_name
where value_till_dt >= str_to_date('%s,%s,%s,%s,%s', '%%m,%%d,%%Y,%%H,%%i')
AND value_till_dt <= str_to_date('%s,%s,%s,%s,%s', '%%m,%%d,%%Y,%%H,%%i')
and granularity='5'
ORDER BY value_till_dt",
[int(tempStart.month),int(tempStart.day), int(tempStart.year), int(tempStart.hour), int(tempStart.minute),
int(tempEnd.month), int(tempEnd.day), int(tempEnd.year), int(tempEnd.hour), int(tempEnd.minute)])
As you see there are 10 parameters being passed to this query.
Does the error mean that the query is not getting the parameters ?
I have printed out the parameters just before the execution and they are showing as being passed correctly -
1 - Start Parameters being passed are : 1 11 2010 10 0
2 - End Parameters being passed are : 1 11 2010 10 5
The only difference in the second environment is that there is no data available for this date range. But the error does not seem to be related to data.
Any thoughts are appreciated.
|
[
"It's indeed a parameter style problem. You have to use ? instead of %s.\nHere is how you reproduce the error you are getting:\nshell> jython\n>>> from com.ziclix.python.sql import zxJDBC\n>>> (d, v) = \"jdbc:mysql://localhost/test\", \"org.gjt.mm.mysql.Driver\"\n>>> cnx = zxJDBC.connect(d, None, None, v)\n>>> cur = cnx.cursor()\n>>> cur.execute(\"SELECT %s\", ('ham',))\n..\nzxJDBC.Error: error setting index [1] [SQLCode: 0]\nParameter index out of range (1 > number of parameters,\n which is 0). [SQLCode: 0], [SQLState: S1009]\n\nNow, if you use quotes around the ?-mark, you'll get the same problem:\n>>> cur.execute(\"SELECT '?'\", ('ham',)) \nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nzxJDBC.Error: error setting index [1] [SQLCode: 0]\nParameter index out of range (1 > number of parameters,\n which is 0). [SQLCode: 0], [SQLState: S1009]\n\nThe point is to not use quotes and let the database interface do it for you:\n>>> cur.execute(\"SELECT ?\", ('ham',)) \n>>> cur.fetchall()\n[(u'ham',)]\n\nHere is how I would do it in the code. You first make the strings you are going to use for the str_to_date() functions like this:\nstart = \"%d,%d,%d,%d,%d\" % (int(tempStart.month),\n int(tempStart.day), int(tempStart.year),int(tempStart.hour), \n int(tempStart.minute))\nstop = \"%d,%d,%d,%d,%d\" % (int(tempEnd.month),\n int(tempEnd.day), int(tempEnd.year), int(tempEnd.hour),\n int(tempEnd.minute))\n\nYou make the SELECT statement, but don't use any quotes, and pass it on to the cursor. The database interface will do the job for you. Also, we put 'granularity' value as a parameter.\nselect = \"\"\"SELECT value FROM table_name\n WHERE value_till_dt >= str_to_date(?, '%%m,%%d,%%Y,%%H,%%i')\n AND value_till_dt <= str_to_date(?, '%%m,%%d,%%Y,%%H,%%i')\n AND granularity=?\n ORDER BY value_till_dt\n\"\"\"\ncursor.execute(select, (start,stop,5))\n\nI hope this helps!\n",
"Are you sure that the parameter marker is %s and not ? or even :parameter? Check the paramstyle argument of the DB-API module to find out.\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"jython",
"mysql",
"python"
] |
stackoverflow_0002248321_django_jython_mysql_python.txt
|
Q:
Using subprocess.call to crop an image
I'm having trouble in my python script, and I don't understand it :
subprocess.call(['convert', file, '-crop', '80x10+90+980', '+repage', 'test.jpg'])
Returns "invalid argument - -crop"
But if I run this from the command line, it works fine :
convert test.jpg -crop 80x10+90+980 +repage test.jpg
What am I missing here ?
A:
Is there more than one convert in the system? Try an absolute path to the command you want?
A:
What about using the Python Imaging Library (PIL) instead? That seems much more reliable than calling a subprocess (especially for error handling...).
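A minimal sketch of the same crop done with PIL (hypothetical file names; the crop box is (left, upper, right, lower)):
from PIL import Image

img = Image.open('test.jpg')
cropped = img.crop((90, 980, 90 + 80, 980 + 10))   # 80x10 region at offset (90, 980)
cropped.save('test_cropped.jpg')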
A:
file is a builtin class. Overriding it may produce unwanted results. Try using a different variable name.
A:
I've actually tried your code:
>>> import subprocess
>>> subprocess.call(['convert', 'capa.jpg', '-crop', '80x10+90+980', '+repage', 'capa2.jpg'])
0
>>>
And it works for me!
So you must have something wrong somewhere else. Check your assumptions again.
|
Using subprocess.call to crop an image
|
I'm having trouble in my python script, and I don't understand it :
subprocess.call(['convert', file, '-crop', '80x10+90+980', '+repage', 'test.jpg'])
Returns "invalid argument - -crop"
But if I run this from the command line, it works fine :
convert test.jpg -crop 80x10+90+980 +repage test.jpg
What am I missing here ?
|
[
"Is there more than one convert in the system? Try an absolute path to the command you want?\n",
"What about using the python image library instead? That seems much more reliable than to call a subprocess (especially for error handling...).\n",
"file is a _____builtin_____ class. Overriding it may produce unwanted results. Try using a different variable name.\n",
"I've actually tried your code:\n>>> import subprocess\n>>> subprocess.call(['convert', 'capa.jpg', '-crop', '80x10+90+980', '+repage', 'capa2.jpg'])\n0\n>>> \n\nAnd it works for me!\nSo you must have something wrong, somewhere else. Check our assumptions again.\n"
] |
[
2,
1,
1,
1
] |
[] |
[] |
[
"imagemagick",
"python"
] |
stackoverflow_0002250933_imagemagick_python.txt
|
Q:
MySQL select query not working with limit, offset parameters
I am running MySQL 5.1 on my windows vista installation. The table in question uses MyISAM, has about 10 million rows. It is used to store text messages posted by users on a website.
I am trying to run the following query on it,
query = "select id, text from messages order by id limit %d offset %d" %(limit, offset)
where limit is set to a fixed value (in this case 20000) and offset is incremented in steps of 20000.
This query goes into an infinite loop when offset = 240000. This particular value and not any other value.
I isolated this query into a script and ran it, and got the same results. I then tried to run the last query (with offset = 240000) directly, and it worked !
I then tried executing the same queries directly in a mysql client to make sure that the error was not in the python DB accessor module. All the queries returned results, except the one with offset = 240000.
I then looked at the mysql server logs and saw the following.
[ERROR] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Sort aborted
This probably means that when I stopped the python process (out of frustration), the mysqld process was 'sorting' something. When I looked at the my.ini file, I saw a lot of MAX_* options. I am currently experimenting with these, but just throwing it out there in the meanwhile.
Any help appreciated!
A:
Have you checked the table with myisamchk?
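For reference, a check run looks roughly like this (the path is hypothetical, and the server should be stopped or the table flushed and locked first, since myisamchk works on the raw .MYI file):
myisamchk --check "C:\Program Files\MySQL\MySQL Server 5.1\data\mydb\messages.MYI"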
|
MySQL select query not working with limit, offset parameters
|
I am running MySQL 5.1 on my windows vista installation. The table in question uses MyISAM, has about 10 million rows. It is used to store text messages posted by users on a website.
I am trying to run the following query on it,
query = "select id, text from messages order by id limit %d offset %d" %(limit, offset)
where limit is set to a fixed value (in this case 20000) and offset is incremented in steps of 20000.
This query goes into an infinite loop when offset = 240000. This particular value and not any other value.
I isolated this query into a script and ran it, and got the same results. I then tried to run the last query (with offset = 240000) directly, and it worked !
I then tried executing the same queries directly in a mysql client to make sure that the error was not in the python DB accessor module. All the queries returned results, except the one with offset = 240000.
I then looked at the mysql server logs and saw the following.
[ERROR] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Sort aborted
This probably means that when I stopped the python process (out of frustration), the mysqld process was 'sorting' something. When I looked at the my.ini file, I saw a lot of MAX_* options. I am currently experimenting with these, but just throwing it out there in the meanwhile.
Any help appreciated!
|
[
"Have you checked the table with myisamchk?\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0002249331_mysql_python.txt
|
Q:
render users' equations in Python
I am a very new/inexperienced Python programmer. I teach maths and am trying to create a GUI graph-plotting package suitable for schoolchildren.
As well as plotting a graph, I would ideally like to render the equation a user enters [eg. y = (x^2)/3] in a nicely formatted style - ideally updating in real-time as the user enters their expression.
I have looked into the capabilities of packages such as matplotlib, but it seems like the user would have to enter the above expression as something like frac{x^2,3}, which is not ideal for schoolchildren.
Many thanks in advance if anyone can help - sorry if it's a difficult question!
best wishes, Geddes
A:
You could look at how Lybniz does it. Or you could use Lybniz. Just saying.
A:
Perhaps you could make use of SymPy's printing capabilities.
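A minimal sketch of what that looks like (output shapes are approximate):
from sympy import sympify, pretty, latex

expr = sympify("x**2/3")   # parse the user's expression into a SymPy tree
print pretty(expr)         # a 2-D text rendering of x**2 over 3
print latex(expr)          # \frac{x^{2}}{3}, which matplotlib's mathtext can draw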
A:
I am not sure whether you intend to have your students build this plotting tool in python or you want to build the tool yourself so they can use it to e.g., visualize changes in function behavior as inputs are varied. If the latter, then perhaps it's not important which language the tool is implemented in, so i'll mention one app i think is fits your brief description almost perfectly.
As well as plotting a graph, I would ideally like to render the equation a user enters [eg. y = (x^2)/3] in a nicely formatted style - ideally updating in real-time as the user enters their expression.
A free app called "Grapher." It comes packaged with Mac OS X (10.4 and above). The fact that it is Mac-only might be a deal-breaker; still, I wanted to mention it in case your students are using Macs in a computer lab, as many grade-school students are. (Note: not to be confused with "AP Grapher"--also a Mac app but it's a wireless hotspot finder or something like that).
The basic feature set: fully interactive, enter an equation (intuitive--uses a subset of the mac key bindings) to create fairly complex equations from calculus, linear algebra, statistics, differential equations, and the like. Once entered, along with a range of values, the equation is beautifully plotted. Grapher has both a 2D and a 3D mode. Here's a screenshot of Grapher's main app window showing an equation plotted in 3D.
Is there a Windows version? I've heard rumors that one exists, but I wasn't able to find any definitive information about it from a few quick Web searches just now.
A:
I don't know if it will do what you want… But it might be worth looking at Numpy/Scipy/Matplotlib.
|
render users' equations in Python
|
I am a very new/inexperienced Python programmer. I teach maths and am trying to create a GUI graph-plotting package suitable for schoolchildren.
As well as plotting a graph, I would ideally like to render the equation a user enters [eg. y = (x^2)/3] in a nicely formatted style - ideally updating in real-time as the user enters their expression.
I have looked into the capabilities of packages such as matplotlib, but it seems like the user would have to enter the above expression as something like frac{x^2,3}, which is not ideal for schoolchildren.
Many thanks in advance if anyone can help - sorry if it's a difficult question!
best wishes, Geddes
|
[
"You could look at how Lybniz does it. Or you could use Lybniz. Just saying.\n",
"Perhaps you could make use of SymPy's printing capabilities.\n",
"I am not sure whether you intend to have your students build this plotting tool in python or you want to build the tool yourself so they can use it to e.g., visualize changes in function behavior as inputs are varied. If the latter, then perhaps it's not important which language the tool is implemented in, so i'll mention one app i think is fits your brief description almost perfectly.\n\nAs well as plotting a graph, I would ideally like to render the equation a user enters [eg. y = (x^2)/3] in a nicely formatted style - ideally updating in real-time as the user enters their expression.\n\nA free App called \"Grapher.\" It comes packaged with the Mac OS X (10.4 and above). The fact that it is Mac-only might be a deal-breaker, still i wanted mention it in case your students are using Macs in a computer lab, as many grade-school students are. (Note: not to be confused with \"AP Grapher\"--also a Mac app but it's a wireless hotspot finder or something like that).\nThe basic feature set: fully interactive, enter an equation (intuitive--uses a subset of the mac key bindings) to create fairly complex equations from calculus, linear algebra, statistics, differential equations, and the like. Once entered, along with a range of values, the equation is beautifully plotted. Grapher has both a 2D and a 3D mode. Here's a screenshot of Grapher's main app window showing an equation plotted in 3D.\nIs there a windows version? I've heard rumors that one exits, but i wasn't able to find any definitive information about it from a few quick Web searches just now.\n",
"I don't know if it will do what you want… But it might be worth looking at Numpy/Scipy/Matplotlib.\n"
] |
[
8,
3,
2,
0
] |
[] |
[] |
[
"equation",
"matplotlib",
"python",
"wxpython"
] |
stackoverflow_0002247757_equation_matplotlib_python_wxpython.txt
|
Q:
How to test a folder for new files using python
How would you go about testing to see if 2 folders contain the same files, and then to be able to manipulate ONLY the file which is new.
A = listdir('C:/')
B = listdir('D:/')
If A==B
...
I know this could be used to test if directories are different but is there a better way?
And if A and B are the same, except B has one more file than A, how do i use just the new file?
Thank you, I hope my question isn't confusing.
A:
http://docs.python.org/library/filecmp.html
http://docs.python.org/library/filecmp.html#the-dircmp-class
import filecmp
compare = filecmp.dircmp( "C:/", "D:/" )
for f in compare.left_only:
print "C: new", f
for f in compare.right_only:
print "D: new", f
A:
A = set(os.listdir('C:\\'))
B = set(os.listdir('D:\\'))
print 'Files in A but not in B:', A - B
print 'Files in B but not in A:', B - A
|
How to test a folder for new files using python
|
How would you go about testing to see if 2 folders contain the same files, and then to be able to manipulate ONLY the file which is new.
A = listdir('C:/')
B = listdir('D:/')
If A==B
...
I know this could be used to test if directories are different but is there a better way?
And if A and B are the same, except B has one more file than A, how do i use just the new file?
Thank you, I hope my question isn't confusing.
|
[
"http://docs.python.org/library/filecmp.html\nhttp://docs.python.org/library/filecmp.html#the-dircmp-class\nimport filecmp\ncompare = filecmp.dircmp( \"C:/\", \"D:/\" )\nfor f in compare.left_only:\n print \"C: new\", f\nfor f in compare.right_only:\n print \"D: new\", f\n\n",
"A = set(os.listdir('C:\\\\'))\nB = set(os.listdir('D:\\\\'))\n\nprint 'Files in A but not in B:', A - B\nprint 'Files in B but not in A:', B - A\n\n"
] |
[
8,
4
] |
[] |
[] |
[
"directory",
"file",
"python"
] |
stackoverflow_0002251751_directory_file_python.txt
|
Q:
Python# screenshot failure
I am using PIL (Python Imaging Library) for grabbing the image. But grabber() throws the following error message if I minimize the window
img=ImageGrab.grab()
File "C:\Python26\lib\site-packages\PIL\ImageGrab.py", line 47, in grab
size, data = grabber()
IOError: screen grab failed
My browser shot factory is installed in a Win 2003 Server in a VM. I am doing a remote desktop to view the browser shot factory's console.
A:
There is a very similar question located here - it even has some alternatives and solutions, and (as Adam Bernier has written), imagegrab works only on Windows.
|
Python# screenshot failure
|
I am using PIL (Python Imaging Library) for grabbing the image. But grabber() throws the following error message if I minimize the window
img=ImageGrab.grab()
File "C:\Python26\lib\site-packages\PIL\ImageGrab.py", line 47, in grab
size, data = grabber()
IOError: screen grab failed
My browser shot factory is installed in a Win 2003 Server in a VM. I am doing a remote desktop to view the browser shot factory's console.
|
[
"There is a very similar question located here - it even has some alternatives and solutions, and (as Adam Bernier has written), imagegrab works only on Windows. \n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_imaging_library"
] |
stackoverflow_0002116368_python_python_imaging_library.txt
|
Q:
What behaviour is preferred? (Embedding Python)
I'm embedding Python into an application. MyClass.name is a property of str type:
>>> foo = MyClass()
>>> foo.name
'Default Name'
Should I allow users to do this:
>>> foo.name = 123
>>> foo.name
'123'
or not?
>>> foo.name = 123
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: name must be a string
A:
Definitely raise a TypeError rather than attempting to automatically coerce. Normally I would be conservative about saying that something is "pythonic" or not – it's one of those words that really just means "the speaker thinks that this is good" – but if it means anything at all, it must refer to adhering to the Zen of Python, and in this case, it's dictum 12: "In the face of ambiguity, refuse the temptation to guess."
Right now, you're just looking at integers, but what about floats, or Decimals, or Fractions? Are you using the str or the repr of your value? Is the right string value "Decimal('10')" or "10" if you have a Decimal instance? What about tuples, lists, strings, bytes, arrays, dicts, and all the other built-in types in Python? What about user-defined types that you don't know about yet, and can't ever know about because they're part of a program that hasn't been written yet?
It's unlikely that your library is special enough to have weird, automatic behavior that will confuse people used to Python's conventions. (Dictum 8: "Special cases aren't special enough to break the rules.") However, if this really improves the usability of your library, and you feel you can comfortably answer all of these questions for any arbitrary type that a user might assign, then go ahead and do magical coercion. (Dictum 9: "Although practicality beats purity.") But it's a very rare library indeed that really needs this sort of behavior, and if you do do it, document the heck out of it so that when it does something surprising, your users (or even maybe just yourself!) can go back and read up on the specific rules.
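As a concrete sketch of that recommendation (hypothetical names, not the question's actual code), a strict str-only property looks like:
class MyClass(object):
    def _get_name(self):
        return self._name
    def _set_name(self, value):
        if not isinstance(value, str):
            raise TypeError('name must be a string')
        self._name = value
    name = property(_get_name, _set_name)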
A:
That really depends on what you want to achieve. I think the best solution is to allow any object that can be coerced to a string, and then use Python properties.
class MyClass(object):
def set_name(self, name):
self._name = str(name)
def get_name(self):
return self._name
name = property(get_name, set_name)
|
What behaviour is preferred? (Embedding Python)
|
I'm embedding Python into an application. MyClass.name is a property of str type:
>>> foo = MyClass()
>>> foo.name
'Default Name'
Should I allow users to do this:
>>> foo.name = 123
>>> foo.name
'123'
or not?
>>> foo.name = 123
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: name must be a string
|
[
"Definitely raise a TypeError rather than attempting to automatically coerce. Normally I would be conservative about saying that something is \"pythonic\" or not – it's one of those words that really just means \"the speaker thinks that this is good\" – but if it means anything at all, it must refer to adhering to the Zen of Python, and in this case, it's dictum 12: \"In the face of ambiguity, refuse the temptation to guess.\"\nRight now, you're just looking at integers, but what about floats, or Decimals, or Fractions? Are you using the str or the repr of your value? Is the right string value \"Decimal('10')\" or \"10\" if you have a Decimal instance? What about tuples, lists, strings, bytes, arrays, dicts, and all the other built-in types in Python? What about user-defined types that you don't know about yet, and can't ever know about because they're part of a program that hasn't been written yet?\nIt's unlikely that your library is special enough to have weird, automatic behavior that will confuse people used to Python's conventions. (Dictum 8: \"Special cases aren't special enough to break the rules.\") However, if this really improves the usability of your library, and you feel you can comfortably answer all of these questions for any arbitrary type that a user might assign, then go ahead and do magical coercion. (Dictum 9: \"Although practicality beats purity.\") But it's a very rare library indeed that really needs this sort of behavior, and if you do do it, document the heck out of it so that when it does something surprising, your users (or even maybe just yourself!) can go back and read up on the specific rules.\n",
"That depends really on what you want to achieve. I think the best solution is to allow any object, that can be coerced to string and then use python properties.\nclass MyClass(object):\n\n def set_name(self, name):\n self._name = str(name)\n def get_name(self):\n return self._name\n name = propert(get_name, set_name)\n\n"
] |
[
4,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002252089_python.txt
|
Q:
ImageField not failing at IOError exception
Hey, I have a model which saves 2 images
class Picture(models.Model):
picture = models.ImageField(upload_to=make_filename)
thumbnail = models.ImageField(upload_to=make_thumb_filename)
car = models.ForeignKey('Car')
created_on = models.DateField(auto_now_add=True)
updated_on = models.DateField(auto_now=True)
def save(self):
super(Picture, self).save()
size = 200, 200
filename = str(self.thumbnail.path)
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image.save(filename)
As you can see, I have overridden the save() method
In my view I have a simple try/except block, which checks for IOErrors (raised if a file other than an image is uploaded)
def upload(request):
car = Car.objects.get(pk=1)
try:
picture = Picture(picture=request.FILES['image'], thumbnail=request.FILES['image'], car=car)
picture.save()
except IOError:
return HttpResponseRedirect("/test/")
However, the exception is raised, but the files are still written to the server (and db)
Any ideas how to make sure files don't get written if the IOError is raised?
EDIT
Fixed by writing a custom method
def is_acceptable_file(filename):
extension = filename.split('.')[-1]
acceptable_filetypes = ['jpeg','jpg','gif','png']
if extension in acceptable_filetypes:
return True
else:
return False
Then editing my model to
class Picture(models.Model):
picture = models.ImageField(upload_to=make_filename)
thumbnail = models.ImageField(upload_to=make_thumb_filename)
car = models.ForeignKey('Car')
created_on = models.DateField(auto_now_add=True)
updated_on = models.DateField(auto_now=True)
def save(self, *args, **kwargs):
if is_acceptable_file(self.picture.name):
super(Picture, self).save(*args,**kwargs)
size = 200, 200
filename = str(self.thumbnail.path)
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image.save(filename)
return True
else:
return False
and my view to
def upload(request):
car = Car.objects.get(pk=1)
try:
picture = Picture(picture=request.FILES['image'], thumbnail=request.FILES['image'], car=car)
if picture.save():
return HttpResponse("fine")
else:
return HttpResponse("invalid type")
except:
return HttpResponse("no file")
A:
The code that (I assume) throws the IOError is being run after you call the super(Picture,self).save() method. Because of this, the picture is getting written to the database even if the exception is thrown.
You just need to move the super call to after the setup code.
As an aside, if you're overriding save I'd recommend doing it as follows:
def save(self,*args,**kwargs):
...
super(Picture, self).save(*args,**kwargs)
Otherwise you'll get an exception in any case where Django is passing in arguments to save (and I believe there are a few cases where it does, at least in the admin).
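Putting the two points together, a minimal sketch of the reordered save() (an illustration, not the poster's final code; Image is the PIL import from the original module, and PIL validates the upload before anything is persisted):
def save(self, *args, **kwargs):
    # A non-image raises IOError here, before super().save()
    # writes anything to disk or the database.
    Image.open(self.picture).verify()
    self.picture.seek(0)  # rewind so the file can still be saved
    super(Picture, self).save(*args, **kwargs)
    # ... thumbnail generation as in the original save() ...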
|
ImageField not failing at IOError exception
|
Hey, I have a model which saves 2 images
class Picture(models.Model):
picture = models.ImageField(upload_to=make_filename)
thumbnail = models.ImageField(upload_to=make_thumb_filename)
car = models.ForeignKey('Car')
created_on = models.DateField(auto_now_add=True)
updated_on = models.DateField(auto_now=True)
def save(self):
super(Picture, self).save()
size = 200, 200
filename = str(self.thumbnail.path)
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image.save(filename)
As you can see, I have overridden the save() method
In my view I have a simple try/except block, which checks for IOErrors (raised if a file other than an image is uploaded)
def upload(request):
car = Car.objects.get(pk=1)
try:
picture = Picture(picture=request.FILES['image'], thumbnail=request.FILES['image'], car=car)
picture.save()
except IOError:
return HttpResponseRedirect("/test/")
However, the exception is raised, but the files are still written to the server (and db)
Any ideas how to make sure files don't get written if the IOError is raised?
EDIT
Fixed by writing a custom method
def is_acceptable_file(filename):
extension = filename.split('.')[-1]
acceptable_filetypes = ['jpeg','jpg','gif','png']
if extension in acceptable_filetypes:
return True
else:
return False
Then editing my model to
class Picture(models.Model):
picture = models.ImageField(upload_to=make_filename)
thumbnail = models.ImageField(upload_to=make_thumb_filename)
car = models.ForeignKey('Car')
created_on = models.DateField(auto_now_add=True)
updated_on = models.DateField(auto_now=True)
def save(self, *args, **kwargs):
if is_acceptable_file(self.picture.name):
super(Picture, self).save(*args,**kwargs)
size = 200, 200
filename = str(self.thumbnail.path)
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image.save(filename)
return True
else:
return False
and my view to
def upload(request):
car = Car.objects.get(pk=1)
try:
picture = Picture(picture=request.FILES['image'], thumbnail=request.FILES['image'], car=car)
if picture.save():
return HttpResponse("fine")
else:
return HttpResponse("invalid type")
except:
return HttpResponse("no file")
|
[
"The code that (I assume) throws the IOError is being run after you call the super(Picture,self).save() method. Because of this, the picture getting written to the database even if the exception is thrown.\nYou just need to move the super call to after the setup code.\nAs an aside, if you're overriding save I'd recommend doing it as follows:\ndef save(self,*args,**kwargs):\n ...\n super(Picture, self).save(*args,**kwargs)\n\nOtherwise you'll get an exception in any case where Django is passing in arguments to save (and I believe there are a few cases where it does, at least in the admin).\n"
] |
[
1
] |
[] |
[] |
[
"django",
"imagefield",
"ioerror",
"python"
] |
stackoverflow_0002252355_django_imagefield_ioerror_python.txt
|
Q:
Class Inheritance
I am trying to get completely to grips with class inheritance in Python. I have created programs with classes, but they are all in one file. I have also created scripts with multiple files containing just functions. I have started using class inheritance in scripts with multiple files and I am hitting problems. I have 2 basic scripts below and I am trying to get the second script to inherit values from the first script. The code is as follows:
First Script:
class test():
def q():
a = 20
return a
def w():
b = 30
return b
if __name__ == '__main__':
a = q()
b = w()
if __name__ == '__main__':
(a, b) = test()
Second Script:
from class1 import test
class test2(test):
def e(a, b):
print a
print b
e(a, b)
if __name__ == '__main__':
test2(test)
Can anyone explain to me how to get the second file to inherit the first files values? Thanks for any help.
A:
I would say you messed up class definition with function stuff. It should look more like this:
class Test(object):
def __init__(self):
self.a = 20
self.b = 30
if __name__ == '__main__':
test_instance = Test()
and
from class1 import Test
class Test2(Test):
def e(self):
print self.a
print self.b
if __name__ == '__main__':
test_instance = Test2()
test_instance.e() # prints 20 and 30
It looks like your problem is not (only) inheritance, but also how to correctly define classes in Python.
Some notes:
Always use capitalized names for classes. That is more or less convention.
As ruibm pointed out, every (non-static) method of a class has to have a first parameter that is named (by convention) self.
You can create instance variables by setting them as self.variable = value in the __init__ method.
If you call Test() you get an object back. Unless you assign it to a variable, just calling test2() as you did in your second piece of code has no effect. Maybe it had an effect in your case because you defined your class in a weird way.
A:
In Python, each member function (method) of a class should have a first parameter called self, which is pretty much the this pointer/reference in C++, Java, and C#.
Basically, to make your code work add self as the first argument to all methods. To assign/read member variables use self.a and self.b otherwise you're just creating temporary function variables the way you have it right now.
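For instance, a minimal sketch of the first script's methods with self threaded through (values kept from the question):
class Test(object):
    def q(self):        # self must be the first parameter
        self.a = 20     # a member variable, not a temporary
        return self.a
    def w(self):
        self.b = 30
        return self.b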
|
Class Inheritance
|
I am trying to get completely to grips with class inheritance in Python. I have created programs with classes, but they are all in one file. I have also created scripts with multiple files containing just functions. I have started using class inheritance in scripts with multiple files and I am hitting problems. I have 2 basic scripts below and I am trying to get the second script to inherit values from the first script. The code is as follows:
First Script:
class test():
def q():
a = 20
return a
def w():
b = 30
return b
if __name__ == '__main__':
a = q()
b = w()
if __name__ == '__main__':
(a, b) = test()
Second Script:
from class1 import test
class test2(test):
def e(a, b):
print a
print b
e(a, b)
if __name__ == '__main__':
test2(test)
Can anyone explain to me how to get the second file to inherit the first files values? Thanks for any help.
|
[
"I would say you messed up class definition with function stuff. It should look more like this:\nclass Test(object):\n\n def __init__(self):\n self.a = 20\n self.b = 30\n\nif __name__ == '__main__':\n test_instance = Test()\n\nand\nfrom class1 import Test\n\nclass Test2(Test):\n\n def e(self):\n print self.a\n print self.b\n\n\nif __name__ == '__main__':\n test_instance = Test2()\n test_instance.e() # prints 20 and 30\n\nIt looks like your problem is not (only) inheritance, but also how to correctly define classes in Python.\nSome notes:\n\nAlways use capitalized names for classes. That is more or less convention.\nAs ruibm pointed out, every (non-static) method of a class has to have a first parameter that is named (by convention) self.\nYou can create instance variables by setting them as self.variable = value in the __init__ method.\nIf you call Test() you get an object back. Unless you assign it to a variable, just calling test2() as you did in your second piece of code has no effect. Maybe it had in your case because defined your class in a weird way.\n\n",
"In Python, each member function (method) of a class should have a variable called self which is pretty much the this pointer/reference in C++, Java, C#. \nBasically, to make your code work add self as the first argument to all methods. To assign/read member variables use self.a and self.b otherwise you're just creating temporary function variables the way you have it right now.\n"
] |
[
13,
1
] |
[] |
[] |
[
"class",
"inheritance",
"python"
] |
stackoverflow_0002252620_class_inheritance_python.txt
|
Q:
What are my options for doing multithreaded/concurrent programming in Python?
I'm writing a simple site spider and I've decided to take this opportunity to learn something new in concurrent programming in Python. Instead of using threads and a queue, I decided to try something else, but I don't know what would suit me.
I have heard about Stackless, Celery, Twisted, Tornado, and other things. I don't want to have to set up a database and the whole other dependencies of Celery, but I would if it's a good fit for my purpose.
My question is: What is a good balance between suitability for my app and usefulness in general? I have taken a look at the tasklets in Stackless but I'm not sure that the urlopen() call won't block or that they will execute in parallel, I haven't seen that mentioned anywhere.
Can someone give me a few details on my options and what would be best to use?
Thanks.
A:
Tornado is a web server, so it wouldn't help you much in writing a spider. Twisted is much more general (and, inevitably, complex), good for all kinds of networking tasks (and with good integration with the event loop of several GUI frameworks). Indeed, there used to be a twisted.web.spider (but it was removed years ago, since it was unmaintained -- so you'll have to roll your own on top of the facilities Twisted does provide).
A:
I must say that Twisted gets my vote.
Performing event-driven tasks is fairly straightforward in Twisted. Integration with other important system components such as GTK+ and DBus is very easy.
The HTTP client support is basic for now but improving (>9.0.0): see related question.
The added bonus is that Twisted is available in the Ubuntu default repository ;-)
A:
For a quick look at package sizes, see ohloh.net/p/compare. Of course source size is only a rough metric (what I'd really like is nr pages of doc, nr pages of examples, dependencies), but it can help.
|
What are my options for doing multithreaded/concurrent programming in Python?
|
I'm writing a simple site spider and I've decided to take this opportunity to learn something new in concurrent programming in Python. Instead of using threads and a queue, I decided to try something else, but I don't know what would suit me.
I have heard about Stackless, Celery, Twisted, Tornado, and other things. I don't want to have to set up a database and the whole other dependencies of Celery, but I would if it's a good fit for my purpose.
My question is: What is a good balance between suitability for my app and usefulness in general? I have taken a look at the tasklets in Stackless but I'm not sure that the urlopen() call won't block or that they will execute in parallel, I haven't seen that mentioned anywhere.
Can someone give me a few details on my options and what would be best to use?
Thanks.
|
[
"Tornado is a web server, so it wouldn't help you much in writing a spider. Twisted is much more general (and, inevitably, complex), good for all kinds of networking tasks (and with good integration with the event loop of several GUI frameworks). Indeed, there used to be a twisted.web.spider (but it was removed years ago, since it was unmaintained -- so you'll have to roll your own on top of the facilities Twisted does provide).\n",
"I must say that Twisted gets my vote. \nPerforming event-drive tasks is fairly straightforward in Twisted. Integration with other important system components such as GTK+ and DBus is very easy.\nThe HTTP client support is basic for now but improving (>9.0.0): see related question.\nThe added bonus is that Twisted is available in the Ubuntu default repository ;-)\n",
"For a quick look at package sizes, see\nohloh.net/p/compare .\nOf course source size is only a rough metric (what I'd really like is nr pages doc, nr pages examples,\ndependencies), but it can help.\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"concurrency",
"multithreading",
"parallel_processing",
"python",
"python_stackless"
] |
stackoverflow_0002249126_concurrency_multithreading_parallel_processing_python_python_stackless.txt
|
Q:
Getting raw post data in Google App Engine Python API
I am trying to get raw data sent as POST to Google App Engine, using self.request.get('content'), but in vain. It returns empty. I am sure the data is being sent from the client, because I checked with another simple server.
Any idea what I am doing wrong? I am using the following code on the client side generating the POST call (objective-c/cocoa-touch)
NSMutableArray *array = [[NSMutableArray alloc] init];
NSMutableDictionary *diction = [[NSMutableDictionary alloc] init];
NSString *tempcurrentQuestion = [[NSString alloc] initWithFormat:@"%d", (questionNo+1)];
NSString *tempansweredOption = [[NSString alloc] initWithFormat:@"%d", (answeredOption)];
[diction setValue:tempcurrentQuestion forKey:@"questionNo"];
[diction setValue:tempansweredOption forKey:@"answeredOption"];
[diction setValue:country forKey:@"country"];
[array addObject:diction];
NSString *post1 = [[CJSONSerializer serializer] serializeObject:array];
NSString *post = [NSString stringWithFormat:@"json=%@", post1];
NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
NSLog(@"Length: %d", [postData length]);
NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
[request setURL:[NSURL URLWithString:@"http://localhost:8080/userResult/"]];
[request setHTTPMethod:@"POST"];
[request setValue:postLength forHTTPHeaderField:@"Content-Length"];
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[request setHTTPBody:postData];
questionsFlag = FALSE;
[[NSURLConnection alloc] initWithRequest:request delegate:self];
The server side code is:
class userResult(webapp.RequestHandler):
def __init__(self):
self.qNo = 1
def post(self):
return self.request.get('json')
A:
self.request.get('content') will give you data sent with the argument name 'content'. If you want the raw post data, use self.request.body.
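For illustration, a hypothetical handler using it (assuming the simplejson bundled with the App Engine runtime; the split mirrors the "json=" prefix the client adds above):
from google.appengine.ext import webapp
from django.utils import simplejson

class userResult(webapp.RequestHandler):
    def post(self):
        raw = self.request.body                           # the unparsed POST payload
        payload = simplejson.loads(raw.split('=', 1)[1])  # strip the "json=" prefix
        self.response.out.write('received %d items' % len(payload))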
A:
Try submitting the POST data with a content type other than application/x-www-form-urlencoded, which is the default when a form is submitted by a browser. If you use a different content type, the raw post data will be in self.request.body, as Wooble suggested.
If this is actually coming from an HTML form, you can add the enctype attribute to the <form> element to change the encoding used by the browser. Try something like enctype="application/octet-stream".
|
Getting raw post data in Google App Engine Python API
|
I am trying to get raw data sent as POST to Google App Engine, using self.request.get('content'), but in vain. It returns empty. I am sure the data is being sent from the client, because I checked with another simple server.
Any idea what I am doing wrong? I am using the following code on the client side generating the POST call (objective-c/cocoa-touch)
NSMutableArray *array = [[NSMutableArray alloc] init];
NSMutableDictionary *diction = [[NSMutableDictionary alloc] init];
NSString *tempcurrentQuestion = [[NSString alloc] initWithFormat:@"%d", (questionNo+1)];
NSString *tempansweredOption = [[NSString alloc] initWithFormat:@"%d", (answeredOption)];
[diction setValue:tempcurrentQuestion forKey:@"questionNo"];
[diction setValue:tempansweredOption forKey:@"answeredOption"];
[diction setValue:country forKey:@"country"];
[array addObject:diction];
NSString *post1 = [[CJSONSerializer serializer] serializeObject:array];
NSString *post = [NSString stringWithFormat:@"json=%@", post1];
NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
NSLog(@"Length: %d", [postData length]);
NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
[request setURL:[NSURL URLWithString:@"http://localhost:8080/userResult/"]];
[request setHTTPMethod:@"POST"];
[request setValue:postLength forHTTPHeaderField:@"Content-Length"];
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[request setHTTPBody:postData];
questionsFlag = FALSE;
[[NSURLConnection alloc] initWithRequest:request delegate:self];
The server side code is:
class userResult(webapp.RequestHandler):
def __init__(self):
self.qNo = 1
def post(self):
return self.request.get('json')
|
[
"self.request.get('content') will give you data sent with the argument name 'content'. If you want the raw post data, use self.request.body.\n",
"Try submitting the POST data with a content type other than application/x-www-form-urlencoded, which is the default when a form is submitted by a browser. If you use a different content type, the raw post data will be in self.request.body, as Wooble suggested.\nIf this is actually coming from an HTML form, you can add the enctype attribute to the <form> element to change the encoding used by the browser. Try something like enctype=\"application/octet-stream\".\n"
] |
[
7,
4
] |
[] |
[] |
[
"google_app_engine",
"iphone",
"post",
"python",
"request"
] |
stackoverflow_0002251584_google_app_engine_iphone_post_python_request.txt
|
Q:
How to run command line python script in django view?
I have a .py file that a php file runs like this:
$link = exec(dirname(__FILE__) . '/xxx.py ' . escapeshellarg($url) . ' 2>&1', $output, $exit_code);
I want to run xxx.py in my django view and assign the output to a variable. The xxx.py file has a def main(url) function and if __name__ == '__main__': at the bottom. Is there a way I can edit the xxx.py file and call the main function from my view?
Currently it doesn't run like it runs on command line when called directly from the view.
Thanks for your response.
A:
Is there a way I can edit the xxx.py file and call the def main function from my view?
Yes. Modify the main function so that it returns its results as a string rather than printing them to stdout, which is what it appears to be doing at the moment.
Then, from inside your view, you can do something like:
import xxx
results = xxx.main('foo')
# Do something with results
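For reference, a hypothetical sketch of xxx.py reshaped that way (do_work stands in for whatever the script already does):
import sys

def main(url):
    result = do_work(url)    # placeholder for the existing logic
    return result            # return instead of printing

if __name__ == '__main__':
    print main(sys.argv[1])  # command-line behaviour preserved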
|
How to run command line python script in django view?
|
I have a .py file that a php file runs like this:
$link = exec(dirname(__FILE__) . '/xxx.py ' . escapeshellarg($url) . ' 2>&1', $output, $exit_code);
I want to run xxx.py in my django view and assign the output to a variable. The xxx.py file has a def main(url) function and if __name__ == '__main__': at the bottom. Is there a way I can edit the xxx.py file and call the main function from my view?
Currently it doesn't run like it runs on command line when called directly from the view.
Thanks for your response.
|
[
"\nIs there a way I can edit the xxx.py file and call the def main function from my view?\n\nYes. Modify the main function so that it returns its results as a string rather than printing it to stdout, which is what it appears to be doing at the moment.\nThen, from inside your view, you can do something like:\nimport xxx\nresults = xxx.main('foo')\n# Do something with results\n\n"
] |
[
2
] |
[] |
[] |
[
"command_line",
"django",
"python",
"scripting"
] |
stackoverflow_0002252741_command_line_django_python_scripting.txt
|
Q:
Site name appearing in django URLs
I'm having an issue where a call to the url template tag in Django is appending the site name (I don't want it in there.)
Let's say that the site name is 'mysite'.
So for example:
<a href="{% url myapp.views.myview "myparam" %}">Link text</a>
is producing:
<a href="/mysite/foo/bar">Link text</a>
when I want it to produce:
<a href="/foo/bar">Link text</a>
My urls.py is set up like this:
from django.conf.urls.defaults import *
import mysite.myapp.views
urlpatterns = patterns('',
(r'^/foo/bar/$', 'mysite.myapp.views.myview'),
)
Can anyone point me in the right direction?
Edit - when the site was in development, it was on a subdirectory of a test server, with the app as the subdirectory! So it was sitting on http://www.mytestserver.com/mysite. There's no caching in place, and all the references to /mysite were removed prior to going live.
A:
Check your modpython configuration, if you've got one. There may be a line that looks like PythonOption django.root /mysite Remove that.
A:
Are you sure that this is the rendered version? The docs say that an absolute URL should be produced, i.e. /mysite/foo/bar. Are you checking the source in the browser? Try printing out the result of render_to_string (or whichever rendering function you are using) and check if example.com is being added there too.
|
Site name appearing in django URLs
|
I'm having an issue where a call to the url template tag in Django is appending the site name (I don't want it in there.)
Let's say that the site name is 'mysite'.
So for example:
<a href="{% url myapp.views.myview "myparam" %}">Link text</a>
is producing:
<a href="/mysite/foo/bar">Link text</a>
when I want it to produce:
<a href="/foo/bar">Link text</a>
My urls.py is set up like this:
from django.conf.urls.defaults import *
import mysite.myapp.views
urlpatterns = patterns('',
(r'^/foo/bar/$', 'mysite.myapp.views.myview'),
)
Can anyone point me in the right direction?
Edit - when the site was in development, it was on a subdirectory of a test server, with the app as the subdirectory! So it was sitting on http://www.mytestserver.com/mysite. There's no caching in place, and all the references to /mysite were removed prior to going live.
|
[
"Check your modpython configuration, if you've got one. There may be a line that looks like PythonOption django.root /mysite Remove that.\n",
"Are you sure, that this is the rendered version? Docs say, that an absolute url should be produced, i.e. /mysite/foo/bar. Are you checking source in the browser? Try printing out the result of render_to_string (or other rendering function you are using) and check, if there example.com is added too.\n"
] |
[
6,
1
] |
[] |
[] |
[
"django",
"django_urls",
"python",
"url"
] |
stackoverflow_0002252593_django_django_urls_python_url.txt
|
Q:
"De-instrument" an instantiated object from the sqlalchemy ORM
Is there an easy way to "de-instrument" an instantiated class coming from sqlalchemy's ORM, i.e., turn it into a regular object?
I.e., suppose I have a Worker class that's mapped to a worker table:
class Worker(object):
def earnings(self):
return self.wage*self.hours
mapper(Worker,workers)
where workers is a reflected table containing lots of observations. The reason I want to do this is that methods like worker.earnings() are very slow, on account of all the sqlalchemy overhead (which I don't need for my application). E.g., accessing self.wage is about 10 times slower than it would be if self.wage was a property of a regular class.
A:
If you need to permanently deinstrument a class, just dispose of the mapper:
sqlalchemy.orm.class_mapper(Worker).dispose()
SQLAlchemy instrumentation lives as property descriptors on the class object. So if you need separate deinstrumented versions of objects, you'll need to create a version of the class that doesn't have the descriptors in its type hierarchy.
A good way would be to have a persistent subclass for each model class and create the mappers to the persistent classes. Here's a class decorator that creates the subclass for you and adds it as a class attribute on the original:
def deinstrumentable(cls):
"""Create a deinstrumentable subclass of the class."""
def deinstrument(self):
"""Create a non-instrumented copy of the object."""
obj = cls.__new__(cls)
obj.__dict__.update(self.__dict__)
del obj._sa_instance_state
return obj
persistent = type('Persisted%s' % cls.__name__, (cls,), {
'Base': cls,
'deinstrument': deinstrument
})
return persistent
You would use it in the definition like this:
@deinstrumentable
class Worker(object):
def earnings(self):
return self.wage*self.hours
mapper(Worker, workers)
And when you have a persistent object, you can create a deinstrumented version of it like this:
worker = session.query(Worker).first()
detached_worker = worker.deinstrument()
You can create a deinstrumented version directly like this:
detached_worker = Worker.Base()
A:
If you know the names of the fields you want, say you have them in a list of strings called fields, and those of the methods you want, like earnings in your example, in a list of strings called methods, then:
def deinstrument(obj, fields, methods):
cls = type(obj)
class newcls(object): pass
newobj = newcls()
for f in fields:
setattr(newobj, f, getattr(obj, f))
for m in methods:
setattr(newcls, m, getattr(cls, m).im_func)
return newobj
You'll probably want __name__ among the fields strings, so that the new object's class has the same name as the one you're "de-instrumenting".
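For instance, hypothetical usage against the Worker mapping from the question:
worker = session.query(Worker).first()
plain = deinstrument(worker, fields=['wage', 'hours'], methods=['earnings'])
print plain.earnings()   # plain attribute access, no instrumentation overhead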
|
"De-instrument" an instantiated object from the sqlalchemy ORM
|
Is there an easy way to "de-instrument" an instantiated class coming from sqlalchemy's ORM, i.e., turn it into a regular object?
I.e., suppose I have a Worker class that's mapped to a worker table:
class Worker(object):
def earnings(self):
return self.wage*self.hours
mapper(Worker,workers)
where workers is a reflected table containing lots of observations. The reason I want to do this is that methods like worker.earnings() are very slow, on account of all the sqlalchemy overhead (which I don't need for my application). E.g., accessing self.wage is about 10 times slower than it would be if self.wage was a property of a regular class.
|
[
"If you need to permanently deinstrument a class, just dispose of the mapper:\nsqlalchemy.orm.class_mapper(Worker).dispose()\n\nSQLAlchemy instrumentation lives as property descriptors on the class object. So if you need separate deinstrumented versions of objects you'll need to create a version of the class that doesn't have the descriptors in it's type hierarchy.\nA good way would be to have a persistent subclass for each model class and create the mappers to the persistent classes. Here's a class decorator that creates the subclass for you and adds it as a class attribute on the original:\ndef deinstrumentable(cls):\n \"\"\"Create a deinstrumentable subclass of the class.\"\"\"\n def deinstrument(self):\n \"\"\"Create a non-instrumented copy of the object.\"\"\"\n obj = cls.__new__(cls)\n obj.__dict__.update(self.__dict__)\n del obj._sa_instance_state\n return obj\n\n persistent = type('Persisted%s' % cls.__name__, (cls,), {\n 'Base': cls,\n 'deinstrument': deinstrument\n })\n\n return persistent\n\nYou would use it in the definition like this:\n@deinstrumentable\nclass Worker(object):\n def earnings(self):\n return self.wage*self.hours\n\nmapper(Worker, workers)\n\nAnd when you have a persistent object, you can create a deinstrumented version of it like this:\nworker = session.query(Worker).first()\ndetached_worker = worker.deinstrument()\n\nYou can create a deinstrumented version directly like this:\ndetached_worker = Worker.Base()\n\n",
"If you know the names of the fields you want, say you have them in a list of strings called fields, and those of the methods you want, like earnings in your example, in a list of strings called methods, then:\ndef deinstrument(obj, fields, methods):\n cls = type(obj)\n class newcls(object): pass\n newobj = newcls()\n for f in fields:\n setattr(newobj, f, getattr(obj, f))\n for m in methods:\n setattr(newcls, m, getattr(cls, m).im_func)\n return newobj\n\nYou'll probably want __name__ among the fields strings, so that the new object's class has the same name as the one you're \"de-instrumenting\".\n"
] |
[
5,
0
] |
[] |
[] |
[
"optimization",
"python",
"sqlalchemy"
] |
stackoverflow_0002249694_optimization_python_sqlalchemy.txt
|
Q:
"Error when calling the metaclass bases" when declaring class inside a module
Let me start by saying, I also get the same error when defining __init__ and running super()'s __init__. I only simplified it down to this custom method to see if the error still happened.
import HTMLParser
class Spider(HTMLParser):
"""
Just a subclass.
"""
This alone in a module raises the following error:
Traceback (most recent call last):
File "D:\my\path\to\my\file
class Spider(HTMLParser):
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
A:
And the answer is that I'm a complete noob. This is a module, not a class, but I'll leave this up here in case other noobs run into the same problem.
Solution:
from HTMLParser import HTMLParser
Each time I think I'm starting to become a pro, something like this happens :(
|
"Error when calling the metaclass bases" when declaring class inside a module
|
Let me start by saying, I also get the same error when defining __init__ and running super()'s __init__. I only simplified it down to this custom method to see if the error still happened.
import HTMLParser
class Spider(HTMLParser):
"""
Just a subclass.
"""
This alone in a module raises the following error:
Traceback (most recent call last):
File "D:\my\path\to\my\file
class Spider(HTMLParser):
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
|
[
"And the answer is that I'm a complete noob. This is a module, not a class, but I'll leave this up here in case other noobs run into the same problem.\nSolution:\nfrom HTMLParser import HTMLParser\n\nEach time I think I'm starting to become a pro, something like this happens :(\n"
] |
[
33
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002253816_python.txt
|
Q:
Python: ball isn't defined
I get this error:
Traceback (most recent call last):
File "D:/Python26/PYTHON-PROGRAMME/049 bam", line 9, in <module>
ball[i][j]=sphere()
NameError: name 'ball' is not defined
when I run this code. But the ball is defined ( ball[i][j]=sphere() ). Isn't it?
#2D-wave
#VPython
from visual import *
#ball array #ready
for i in range(5):
for y in range(5):
ball[i][j]=sphere()
timer = 0
dt = 0.001
while(1):
timer += dt
for i in range(5):
for y in range(5):
#wave equation
x = sqrt(i**2 + j**2) # x = distance to the source
ball[i][j].pos.y = amplitude * sin (k * x + omega * timer)
if timer > 5:
break
A:
No, ball is not defined. You need to create a list() before you can start assigning to the list's indices. Similarly the nested lists need to be created before you assign to them. Try this:
ball = [None] * 5
for i in range(5):
ball[i] = [None] * 5
for j in range(5):
ball[i][j]=sphere()
or this:
ball = [[sphere() for y in range(5)] for x in range(5)]
The latter syntax which uses two list comprehensions is more idiomatic--more Pythonic, if you will.
A:
When you say ball[i][j], you have to already have some object ball so that you can index (twice) into it. Try this segment instead:
ball = []
for i in range(5):
ball.append([])
for y in range(5):
ball[i].append(sphere())
A:
Python doesn't know that ball is a list. Before using it (in the first for loop), you'll have to initialize it as
ball = []
so Python knows to treat it as a list.
A:
No, ball is not defined. This line: ball[i][j]=sphere() assigns a value to an element of the object that ball points to. ball doesn't point to anything yet, so there is nothing to assign into.
A:
In your program, ball is just a name that doesn't refer to anything. Using indexing like a[i] requires that a refer to an object that already supports indexing. Similarly, a[i][j] requires that a[i] refer to an object that supports indexing.
It sounds like you want it to refer to a list of lists, but this is not a great solution. You may be a lot happier performing your operations on numpy arrays, which abstract away all your looping and can really speed up computations.
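For example, a sketch of the wave update vectorized with numpy (stand-in values for the question's amplitude, k, omega and timer):
import numpy as np

amplitude, k, omega, timer = 1.0, 2.0, 3.0, 0.0   # stand-ins for the question's values

i, j = np.mgrid[0:5, 0:5]        # index grids for a 5x5 array of balls
x = np.sqrt(i**2 + j**2)         # distance of each ball to the source
y = amplitude * np.sin(k * x + omega * timer)     # all 25 heights at once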
|
Python: ball isn't defined
|
I get this error:
Traceback (most recent call last):
File "D:/Python26/PYTHON-PROGRAMME/049 bam", line 9, in <module>
ball[i][j]=sphere()
NameError: name 'ball' is not defined
when I run this code. But the ball is defined ( ball[i][j]=sphere() ). Isn't it?
#2D-wave
#VPython
from visual import *
#ball array #ready
for i in range(5):
for y in range(5):
ball[i][j]=sphere()
timer = 0
dt = 0.001
while(1):
timer += dt
for i in range(5):
for y in range(5):
#wave equation
x = sqrt(i**2 + j**2) # x = distance to the source
ball[i][j].pos.y = amplitude * sin (k * x + omega * timer)
if timer > 5:
break
|
[
"No, ball is not defined. You need to create a list() before you can start assigning to the list's indices. Similarly the nested lists need to be created before you assign to them. Try this:\nball = [None] * 5\n\nfor i in range(5):\n ball[i] = [None] * 5\n\n for j in range(5):\n ball[i][j]=sphere()\n\nor this:\nball = [[sphere() for y in range(5)] for x in range(5)]\n\nThe latter syntax which uses two list comprehensions is more idiomatic--more Pythonic, if you will.\n",
"When you say ball[i][j], you have to already have some object ball so that you can index (twice) into it. Try this segment instead:\nball = [] \nfor i in range(5):\n ball.append([])\n for y in range(5):\n ball[i].append(sphere())\n\n",
"Python doesn't know that ball is a list. Before using it (in the first for loop), you'll have to initialize it as\nball = []\n\nso Python knows to treat it as a list.\n",
"no ball is not defined. this line: ball[i][j]=sphere() assigns value to an element of object ball points to. There is nothing ball points to therefore it's not possible to assign anything.\n",
"In your program, ball is just a name that doesn't refer to anything. Using indexing like a[i] requires that a refer to an object that already supports indexing. Similarly, a[i][j] requires that a[i] refer to an object that supports indexing.\nIt sounds like you want it to refer to a list of lists, but this is not a great solution. You may be a lot happier performing your operations on numpy arrays, which abstract away all your looping and can really speed up computations.\n"
] |
[
3,
3,
1,
1,
1
] |
[] |
[] |
[
"arrays",
"python"
] |
stackoverflow_0002254009_arrays_python.txt
|
Q:
In python, what is more efficient? Modifying lists or strings?
Regardless of ease of use, which is more computationally efficient? Constantly slicing lists and appending to them? Or taking substrings and doing the same?
As an example, let's say I have two binary strings "11011" and "01001". If I represent these as lists, I'll be choosing a random "slice" point. Let's say I get 3. I'll take the first 3 characters of the first string and the remaining characters of the second string (so I'd have to slice both) and create a new string out of it.
Would this be more efficiently done by cutting the substrings or by representing it as a list ( [1, 1, 0, 1, 1] ) rather than a string?
A:
>>> a = "11011"
>>> b = "01001"
>>> import timeit
>>> def strslice():
return a[:3] + b[3:]
>>> def lstslice():
return list(a)[:3] + list(b)[3:]
>>> c = list(a)
>>> d = list(b)
>>> def lsts():
return c[:3] + d[3:]
>>> timeit.timeit(strslice)
0.5103488475836432
>>> timeit.timeit(lstslice)
2.4350100538824613
>>> timeit.timeit(lsts)
1.0648406858527295
A:
timeit is a good tool for micro-benchmarking, but it needs to be used with the utmost care when the operations you want to compare may involve in-place alterations -- in this case, you need to include extra operations designed to make needed copies. Then, first time just the "extra" overhead:
$ python -mtimeit -s'a="11011";b="01001"' 'la=list(a);lb=list(b)'
100000 loops, best of 3: 5.01 usec per loop
$ python -mtimeit -s'a="11011";b="01001"' 'la=list(a);lb=list(b)'
100000 loops, best of 3: 5.06 usec per loop
So making the two brand-new lists we need (to avoid alteration) costs a tad more than 5 microseconds (when focused on small differences, run things at least 2-3 times to eyeball the uncertainty range). After which:
$ python -mtimeit -s'a="11011";b="01001"' 'la=list(a);lb=list(b);x=a[:3]+b[3:]'
100000 loops, best of 3: 5.5 usec per loop
$ python -mtimeit -s'a="11011";b="01001"' 'la=list(a);lb=list(b);x=a[:3]+b[3:]'
100000 loops, best of 3: 5.47 usec per loop
string slicing and concatenation in this case can be seen to cost another 410-490 nanoseconds. And:
$ python -mtimeit -s'a="11011";b="01001"' 'la=list(a);lb=list(b);la[3:]=lb[3:]'
100000 loops, best of 3: 5.99 usec per loop
$ python -mtimeit -s'a="11011";b="01001"' 'la=list(a);lb=list(b);la[3:]=lb[3:]'
100000 loops, best of 3: 5.99 usec per loop
in-place list splicing can be seen to cost 930-980 nanoseconds. The difference is safely above the noise/uncertainty levels, so you can reliably state that for this use case working with strings is going to take roughly half as much time as working in-place with lists. Of course, it's also crucial to measure a range of use cases that are relevant and representative of your typical bottleneck tasks!
A:
In general, modifying lists is more efficient than modifying strings, because strings are immutable.
A:
It really depends on actual use cases, and as others have said, profile it, but in general, appending to lists will be better, because it can be done in place, whereas "appending to strings" actually creates a new string that concatenates the old strings. This can rapidly eat up memory. (Which is a different issue from computational efficiency, really).
Edit: If you want computational efficiency with binary values, don't use strings or lists. Use integers and bitwise operations. With recent versions of python, you can use binary representations when you need them:
>>> bin(42)
'0b101010'
>>> 0b101010
42
>>> int('101010')
101010
>>> int('101010', 2)
42
>>> int('0b101010')
...
ValueError: invalid literal for int() with base 10: '0b101010'
>>> int('0b101010', 2)
42
Edit 2:
def strslice(a, b):
return a[:3] + b[3:]
might be better written something like:
def binsplice(a, b):
mask = 0b11100
return (a & mask) + (b & ~mask)
>>> a = 0b11011
>>> b = 0b1001
>>> bin(binsplice(a, b))
'0b11001'
>>>
It might need to be modified if your binary numbers are different sizes.
|
In python, what is more efficient? Modifying lists or strings?
|
Regardless of ease of use, which is more computationally efficient? Constantly slicing lists and appending to them? Or taking substrings and doing the same?
As an example, let's say I have two binary strings "11011" and "01001". If I represent these as lists, I'll be choosing a random "slice" point. Let's say I get 3. I'll take the first 3 characters of the first string and the remaining characters of the second string (so I'd have to slice both) and create a new string out of it.
Would this be more efficiently done by cutting the substrings or by representing it as a list ( [1, 1, 0, 1, 1] ) rather than a string?
|
[
">>> a = \"11011\"\n>>> b = \"01001\"\n>>> import timeit\n>>> def strslice():\n return a[:3] + b[3:]\n\n>>> def lstslice():\n return list(a)[:3] + list(b)[3:]\n>>> c = list(a)\n>>> d = list(b)\n>>> def lsts():\n return c[:3] + d[3:]\n\n>>> timeit.timeit(strslice)\n0.5103488475836432\n>>> timeit.timeit(lstslice)\n2.4350100538824613\n>>> timeit.timeit(lsts)\n1.0648406858527295\n\n",
"timeit is a good tool for micro-benchmarking, but it needs to be used with the utmost care when the operations you want to compare may involve in-place alterations -- in this case, you need to include extra operations designed to make needed copies. Then, first time just the \"extra\" overhead:\n$ python -mtimeit -s'a=\"11011\";b=\"01001\"' 'la=list(a);lb=list(b)'\n100000 loops, best of 3: 5.01 usec per loop\n$ python -mtimeit -s'a=\"11011\";b=\"01001\"' 'la=list(a);lb=list(b)'\n100000 loops, best of 3: 5.06 usec per loop\n\nSo making the two brand-new lists we need (to avoid alteration) costs a tad more than 5 microseconds (when focused on small differences, run things at least 2-3 times to eyeball the uncertainty range). After which:\n$ python -mtimeit -s'a=\"11011\";b=\"01001\"' 'la=list(a);lb=list(b);x=a[:3]+b[3:]'\n100000 loops, best of 3: 5.5 usec per loop\n$ python -mtimeit -s'a=\"11011\";b=\"01001\"' 'la=list(a);lb=list(b);x=a[:3]+b[3:]'\n100000 loops, best of 3: 5.47 usec per loop\n\nstring slicing and concatenation in this case can be seen to cost another 410-490 nanoseconds. And:\n$ python -mtimeit -s'a=\"11011\";b=\"01001\"' 'la=list(a);lb=list(b);la[3:]=lb[3:]'\n100000 loops, best of 3: 5.99 usec per loop\n$ python -mtimeit -s'a=\"11011\";b=\"01001\"' 'la=list(a);lb=list(b);la[3:]=lb[3:]'\n100000 loops, best of 3: 5.99 usec per loop\n\nin-place list splicing can be seen to cost 930-980 nanoseconds. The difference is safely above the noise/uncertainty levels, so you can reliably state that for this use case working with strings is going to take roughly half as much time as working in-place with lists. Of course, it's also crucial to measure a range of use cases that are relevant and representative of your typical bottleneck tasks!\n",
"In general, modifying lists is more efficient than modifying strings, because strings are immutable.\n",
"It really depends on actual use cases, and as others have said, profile it, but in general, appending to lists will be better, because it can be done in place, whereas \"appending to strings\" actually creates a new string that concatenates the old strings. This can rapidly eat up memory. (Which is a different issue from computational efficiency, really).\nEdit: If you want computational efficiency with binary values, don't use strings or lists. Use integers and bitwise operations. With recent versions of python, you can use binary representations when you need them:\n>>> bin(42)\n'0b101010'\n>>> 0b101010\n42\n>>> int('101010')\n101010\n>>> int('101010', 2)\n42\n>>> int('0b101010')\n...\nValueError: invalid literal for int() with base 10: '0b101010'\n>>> int('0b101010', 2)\n42\n\nEdit 2:\ndef strslice(a, b):\n return a[:3] + b[3:]\n\nmight be better written something like:\ndef binspice(a, b):\n mask = 0b11100\n return (a & mask) + (b & ~mask)\n\n>>> a = 0b11011\n>>> b = 0b1001\n>>> bin(binsplice(a, b))\n'0b11001\n>>> \n\nIt might need to be modified if your binary numbers are different sizes.\n"
] |
[
7,
5,
4,
0
] |
[] |
[] |
[
"list",
"python",
"string"
] |
stackoverflow_0002253234_list_python_string.txt
|
Q:
How to create graphs in Delphi application
I need to create graphs on the fly about specific process, with some informative texts and colors.
In the Unix world there's Graphviz including 'dot' for layout generation, is there something similar which could be used with Delphi?
I'm using Delphi 2007.
Also Python alternative could be considered, but I'd prefer pure Delphi in this case.
A:
You can use SimpleGraph from DelphiArea.
I have tested and used it, and it's a great component. Freeware with sources.
Regards.
A:
@Harriv, you can try WinGraphviz, which is a COM wrapper for Graphviz.
check this link for more info.
A:
TMS also have a diagram studio and a workflow studio
and a post about the ways to make it
A:
Steema Software has a Delphi VCL TeeChart product that you may find interesting depending on your needs.
Steema Software TeeChart VCL
I have experimented with the trial version of this. I was able to create some very nice looking graphs. I was also able to use a shape file of the counties in our state to show statistics per county in a 3D view where the counties with the highest values stood out to the user.
|
How to create graphs in Delphi application
|
I need to create graphs on the fly about specific process, with some informative texts and colors.
In the Unix world there's Graphviz including 'dot' for layout generation, is there something similar which could be used with Delphi?
I'm using Delphi 2007.
Also Python alternative could be considered, but I'd prefer pure Delphi in this case.
|
[
"You can use SimpleGraph from DelphiArea.\nA have test and use it and it's a great component. Freeware with sources. \n\nRegards.\n",
"@Harriv, You can try WinGraphviz wich is a COM Wrapper for Graphviz.\ncheck this link for more info.\n\n",
"TMS also have a diagram studio and a workflow studio\nand a post about the ways to make it\n",
"Steema Software has a Delphi VCL TeeChart product that you may find interesting depending on your needs.\nSteema Software TeeChart VCL\nI have experimented with the trial version of this. I was able to create some very nice looking graphs. I was also able to use a shape file of the counties in our state to show statistics per county in a 3D view where the counties with the highest values stood out to the user.\n"
] |
[
6,
3,
1,
1
] |
[] |
[] |
[
"delphi",
"graph",
"python"
] |
stackoverflow_0002252779_delphi_graph_python.txt
|
Q:
Django flatpages and a catchall startpage
I'm using django 1.1 and flatpages. It works pretty well, but I didn't manage to get a catchall or default page running.
As soon as I add a entry to url.py for my startpage, the flatpages aren't displayed anymore.
(r'^', 'myproject.mysite.views.startpage'),
I know flatpages uses a 404 hook, but how do you configure the default website?
A:
I believe this is what you want (with a $):
(r'^$', 'myproject.mysite.views.startpage')
It should catch only empty requests.
A:
This regex matches everything, so it's no wonder that flatpages are not working - they are only a fallback, activated on a 404 error. And with this regex you don't give the 404 error a chance to occur.
So, what you want to do is not possible with such regex catchall and flatpages.
Personally, if I want to do catch-all, I put all 'normal' URLs above it - but flatpages are not using URLs so...
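A sketch of that arrangement (the $ is what keeps 404s - and therefore the flatpages fallback - possible):
from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^$', 'myproject.mysite.views.startpage'),
    # further explicit URLs here; anything unmatched raises 404,
    # which is what triggers the flatpages middleware
)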
|
Django flatpages and a catchall startpage
|
I'm using django 1.1 and flatpages. It works pretty well, but I didn't manage to get a catchall or default page running.
As soon as I add an entry to urls.py for my startpage, the flatpages aren't displayed anymore.
(r'^', 'myproject.mysite.views.startpage'),
I know flatpages uses a 404 hook, but how do you configure the default website?
|
[
"I believe this is what you want (with a $):\n(r'^$', 'myproject.mysite.views.startpage')\n\nIt should catch only empty requests.\n",
"This regex matches everything, so no wonder that flatpages are not working - they are only fallback, activated on 404 error. And with this regex you don't give a chance for 404 error to show.\nSo, what you want to do is not possible with such regex catchall and flatpages.\nPersonally, if I want to do catch-all, I put all 'normal' URLs above it - but flatpages are not using URLs so... \n"
] |
[
4,
2
] |
[] |
[] |
[
"django",
"django_flatpages",
"python"
] |
stackoverflow_0002254366_django_django_flatpages_python.txt
|
Q:
Django-tinymce not working; Getting a normal textarea instead
I'm trying to use django-tinymce to make fields that are editable through Django's admin with a TinyMCE field. I am using tinymce.models.HTMLField as the field for this.
The problem is it's not working. I get a normal textarea. I check the HTML source, and it seems like all the code needed for TinyMCE is there. I also confirmed that the statically-served JavaScript file is indeed being served. But for some reason it isn't working.
What I did notice though, is that if I avoid setting TINYMCE_COMPRESSOR = True in the settings file, it does start to work. What can cause this behavior?
A:
What are your webserver and web browser? Perhaps it is trying to set the gzip/bzip header and the server isn't processing it... so it goes out plaintext but the client expects compressed?
|
Django-tinymce not working; Getting a normal textarea instead
|
I'm trying to use django-tinymce to make fields that are editable through Django's admin with a TinyMCE field. I am using tinymce.models.HTMLField as the field for this.
The problem is it's not working. I get a normal textarea. I check the HTML source, and it seems like all the code needed for TinyMCE is there. I also confirmed that the statically-served JavaScript file is indeed being served. But for some reason it isn't working.
What I did notice though, is that if I avoid setting TINYMCE_COMPRESSOR = True in the settings file, it does start to work. What can cause this behavior?
|
[
"What are your webserver and web browser. Perhaps it is trying to set the gzip/bzip header and the server isn't processing it... so it goes out plaintext but the client expects compressed?\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_admin",
"javascript",
"python",
"tinymce"
] |
stackoverflow_0002254398_django_django_admin_javascript_python_tinymce.txt
|
Q:
Python Import and 'object has no attribute' with Qt
From research on Stack Overflow and other sites I'm 99% sure that the problem I'm having is due to incorrect importing. Below is a QLabel sub class that I'm using to respond to some mouse events:
import Qt
import sys
class ASMovableLabel(Qt.QLabel):
def mouseReleaseEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
def mousePressEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
elif button == 3:
print ('RIGHT CLICK')
self.setLayout()
def mouseMoveEvent(self, event):
print ("you moved the mouse: %f, %f", event.x, event.y)
self.frameRect.setTopLeft(Qt.QPoint(event.x, event.y))
When mouseMoveEvent is triggered I get the following error:
self.frameRect.setTopLeft(Qt.QPoint(event.x, event.y))
AttributeError: 'builtin_function_or_method' object has no attribute 'setTopLeft'
The other solutions to this type of error I've seen have revolved around the name space, so I would or would not need to include Qt. before all the Qt classes but this error is much farther down in the Qt objects. Please point out my mistake!
I have also tried:
from PyQt4 import Qt
It gives the same error
UPDATE: based on Messa's comment I made a few changes:
import Qt
import sys
class ASMovableLabel(Qt.QLabel):
def mouseReleaseEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
def mousePressEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
elif button == 3:
print ('RIGHT CLICK')
self.setLayout() #this won't set to nil
def mouseMoveEvent(self, event):
self.frameRect().setTopLeft(Qt.QPoint(event.globalX(), event.globalY()))
So it seems that in Python (at least with PyQt), accessors like frameRect are method calls and need the trailing "()". This doesn't apply to self itself (i.e. it's self.frameRect(), not self().frameRect()).
A:
Try
self.frameRect().setTopLeft(Qt.QPoint(event.x, event.y))
instead of
self.frameRect.setTopLeft(Qt.QPoint(event.x, event.y))
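Note, for completeness, that event.x and event.y in the original handler are also bound methods, so the same trailing parentheses apply there:
self.frameRect().setTopLeft(Qt.QPoint(event.x(), event.y()))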
|
Python Import and 'object has no attribute' with Qt
|
From research on Stack Overflow and other sites I'm 99% sure that the problem I'm having is due to incorrect importing. Below is a QLabel sub class that I'm using to respond to some mouse events:
import Qt
import sys
class ASMovableLabel(Qt.QLabel):
def mouseReleaseEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
def mousePressEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
elif button == 3:
print ('RIGHT CLICK')
self.setLayout()
def mouseMoveEvent(self, event):
print ("you moved the mouse: %f, %f", event.x, event.y)
self.frameRect.setTopLeft(Qt.QPoint(event.x, event.y))
When mouseMoveEvent is triggered I get the following error:
self.frameRect.setTopLeft(Qt.QPoint(event.x, event.y))
AttributeError: 'builtin_function_or_method' object has no attribute 'setTopLeft'
The other solutions to this type of error I've seen have revolved around the namespace (whether or not I need to include Qt. before all the Qt classes), but this error comes from much deeper in the Qt objects. Please point out my mistake!
I have also tried:
from PyQt4 import Qt
It gives the same error
UPDATE: based on Messa's comment I made few changes:
import Qt
import sys
class ASMovableLabel(Qt.QLabel):
def mouseReleaseEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
def mousePressEvent(self, event):
button = event.button()
if button == 1:
print ('LEFT CLICK')
elif button == 3:
print ('RIGHT CLICK')
self.setLayout() #this won't set to nil
def mouseMoveEvent(self, event):
self.frameRect().setTopLeft(Qt.QPoint(event.globalX(), event.globalY()))
So it seems that in Python, a name reached with dot syntax can itself be a method, and you need the trailing "()" to call it before using its result; frameRect is a method, not a plain attribute. This doesn't apply to self itself (i.e. there is no self().something()).
|
[
"Try\nself.frameRect().setTopLeft(Qt.QPoint(event.x, event.y))\n\ninstead of\nself.frameRect.setTopLeft(Qt.QPoint(event.x, event.y))\n\n"
] |
[
1
] |
[] |
[] |
[
"import",
"pyqt",
"python",
"qt"
] |
stackoverflow_0002254708_import_pyqt_python_qt.txt
|
Q:
Receive output of python script from PHP?
I want to launch a python script similar to this web crawler, wait for it to finish, process the data in php, then return the results to the user.
From what I hear, getting the output from python is trivial, but the above script is doing stuff in parallel, so just printing stuff as it finishes won't give me any kind of usable structure.
What would you suggest to use to pass an array of html data from the python script to php? A temporary file? mysql? I have no experience whatsoever in python, so you'll need to be pretty explicit.
Cheers.
A:
I suggest a file; you can dump structured data, for example to JSON or YAML (both are easily writable and readable in both Python and PHP).
Why is "printing stuff as it finishes" not usable? Writing a file and then reading it is basically the same. You don't have to use only print in Python; you can use standard output (sys.stdout) in the same way as an opened file.
|
Receive output of python script from PHP?
|
I want to launch a python script similar to this web crawler, wait for it to finish, process the data in php, then return the results to the user.
From what I hear, getting the output from python is trivial, but the above script is doing stuff in parallel, so just printing stuff as it finishes won't give me any kind of usable structure.
What would you suggest to use to pass an array of html data from the python script to php? A temporary file? mysql? I have no experience whatsoever in python, so you'll need to be pretty explicit.
Cheers.
|
[
"I suggest a file, you can dump structured data for example to JSON or YAML (both is easily writable and readable in both Python and PHP).\nWhy is \"printing stuff as it finishes\" not usable? Writing file and then reading it is basically the same. You don't have to use only print in Python, you can use standard output (sys.stdout) in the same way as a opened file.\n"
] |
[
1
] |
[] |
[] |
[
"php",
"python"
] |
stackoverflow_0002254827_php_python.txt
|
Q:
tarfile: determine compression of an open tarball
I am working on a Python script which is supposed to process a tarball and output a new one, trying to keep the format of the original. Thus, I am looking for a way to look up the compression method used in an open tarball, so I can open the new one with the same compression.
AFAICS the TarFile class doesn't provide any public interface to get the needed information directly. And I would like to avoid reading the file independently of the tarfile module.
I am currently considering looking up the class of the underlying file object (t.fileobj.__class__) or trying to open the input file in all possible modes and choosing the correct format based on which one succeeds.
A:
Ok, I have found a better solution.
f = t.fileobj.__class__(newfn, 'w')
A:
Tar doesn't compress, it concatenates (which is why TarFile won't tell you what compression method is used, because there isn't one).
Are you trying to find out if it's a tar.gz, tar.bz2, or tar.Z ?
A:
When you open the tarfile, you can choose the mode. From the docs:
If mode is not suitable to open a certain (compressed) file for reading, ReadError is raised.
So why not try opening the file as a .gz, .bz2 etc., catching the exception each time? The one that opens without an exception tells you the type of compression you want to replicate.
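A sketch of that probing approach (mode strings from the tarfile docs; no with-statement because TarFile only gained context-manager support in Python 2.7):
import tarfile

def detect_write_mode(path):
    # Try each compressed read mode; tarfile raises ReadError on a mismatch.
    for read_mode, write_mode in (('r:gz', 'w:gz'), ('r:bz2', 'w:bz2'), ('r:', 'w')):
        try:
            t = tarfile.open(path, read_mode)
            t.close()
            return write_mode
        except tarfile.ReadError:
            continue
    raise tarfile.ReadError('cannot determine compression of %r' % path)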
|
tarfile: determine compression of an open tarball
|
I am working on a Python script which is supposed to process a tarball and output a new one, trying to keep the format of the original. Thus, I am looking for a way to look up the compression method used in an open tarball, so I can open the new one with the same compression.
AFAICS the TarFile class doesn't provide any public interface to get the needed information directly. And I would like to avoid reading the file independently of the tarfile module.
I am currently considering looking up the class of the underlying file object (t.fileobj.__class__) or trying to open the input file in all possible modes and choosing the correct format based on which one succeeds.
|
[
"Ok, I have found a better solution.\nf = t.fileobj.__class__(newfn, 'w')\n\n",
"Tar doesn't compress, it concatenates (which is why TarFile won't tell you what compression method is used, because there isn't one). \nAre you trying to find out if it's a tar.gz, tar.bz2, or tar.Z ?\n",
"When you open the tarfile, you can choose the mode. From the docs:\n\nIf mode is not suitable to open a certain (compressed) file for reading, ReadError is raised.\n\nSo why not try opening the file as a .gz, .bz2 etc., catching the exception each time? The one that opens without an exception tells you the type of compression you want to replicate.\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"python",
"tarfile"
] |
stackoverflow_0002254017_python_tarfile.txt
|
Q:
Return all non-overlapping matches of pattern in string
In Python I can use re.findall(pattern, string) to return all non-overlapping matches of pattern in a string.
For example, in the following SVG path command:
import re
spam = "M317.0,169.7C311.1,170.5 285.7,146.8 300.7,178.57 L 321.4,175.01"
eggs = re.findall("([A-Za-z]|-?[0-9]+\.?[0-9]*(?:e-?[0-9]*)?)", spam)
print(eggs)
['M', '317.0', '169.7', 'C', '311.1', '170.5', '285.7', '146.8', '300.7', '178.5', 'L', '321.4', '175.0']
Is there a light-weight, clean, and efficient way to do this type of regular expression pattern matching in C or C++? Please note that I'm not looking for a solution that relies upon Boost. Ideally I would like to minimize dependencies and keep my code lean...
A:
SLRE - Super Light Regular Expression library
SLRE is an ANSI C library that implements a tiny subset of Perl regular expressions. It is primarily targeted at developers who want to parse configuration files, where speed is unimportant. It is in a single .c file, easily modifiable for custom needs. For example, if one wants to introduce a new metacharacter, '\i', that means 'IP address', it is easy to do so.
Features
* Crossplatform - pure ANSI C
* Very simple API
* Light: about 5kB of code when compiled
* Uses no dynamic memory allocation
* Thread safe
Supported RE Syntax
^ Match beginning of a buffer
$ Match end of a buffer
() Grouping and substring capturing
[...] Match any character from set
[^...] Match any character but ones from set
\s Match whitespace
\S Match non-whitespace
\d Match decimal digit
\r Match carriage return
\n Match newline
+ Match one or more times (greedy)
+? Match one or more times (non-greedy)
* Match zero or more times (greedy)
*? Match zero or more times (non-greedy)
? Match zero or once
\xDD Match byte with hex value 0xDD
\meta Match one of the meta character: ^$().[*+?\
/*
* ----------------------------------------------------------------------------
* "THE BEER-WARE LICENSE" (Revision 42):
* Sergey Lyubka wrote this file. As long as you retain this notice you
* can do whatever you want with this stuff. If we meet some day, and you think
* this stuff is worth it, you can buy me a beer in return.
* ----------------------------------------------------------------------------
*/
A:
You'll need a C++ regular expression library. There are a number of them but they will create a dependency. C++ has no native (or STL) regular expression support.
|
Return all non-overlapping matches of pattern in string
|
In Python I can use re.findall(pattern, string) to return all non-overlapping matches of pattern in a string.
For example, in the following SVG path command:
import re
spam = "M317.0,169.7C311.1,170.5 285.7,146.8 300.7,178.57 L 321.4,175.01"
eggs = re.findall("([A-Za-z]|-?[0-9]+\.?[0-9]*(?:e-?[0-9]*)?)", spam)
print(eggs)
['M', '317.0', '169.7', 'C', '311.1', '170.5', '285.7', '146.8', '300.7', '178.5', 'L', '321.4', '175.0']
Is there a light-weight, clean, and efficient way to do this type of regular expression pattern matching in C or C++? Please note that I'm not looking for a solution that relies upon Boost. Ideally I would like to minimize dependencies and keep my code lean...
|
[
"SLRE - Super Light Regular Expression library\nSLRE is an ANSI C library that implements a tiny subset of Perl regular expressions. It is primarily targeted for developers who want to parse configuation files, where speed is unimportant. It is in single .c file, easily modifiable for custom needs. For example, if one wants to introduce a new metacharacter, '\\i', that means 'IP address', it is easy to do so.\nFeatures\n* Crossplatform - pure ANSI C\n* Very simple API\n* Light: about 5kB of code when compiled\n* Uses no dynamic memory allocation\n* Thread safe \nSupported RE Syntax\n^ Match beginning of a buffer\n$ Match end of a buffer\n() Grouping and substring capturing\n[...] Match any character from set\n[^...] Match any character but ones from set\n\\s Match whitespace\n\\S Match non-whitespace\n\\d Match decimal digit\n\\r Match carriage return\n\\n Match newline\n+ Match one or more times (greedy)\n+? Match one or more times (non-greedy)\n* Match zero or more times (greedy)\n*? Match zero or more times (non-greedy)\n? Match zero or once\n\\xDD Match byte with hex value 0xDD\n\\meta Match one of the meta character: ^$().[*+?\\ \n/*\n * ----------------------------------------------------------------------------\n * \"THE BEER-WARE LICENSE\" (Revision 42):\n * Sergey Lyubka wrote this file. As long as you retain this notice you\n * can do whatever you want with this stuff. If we meet some day, and you think\n * this stuff is worth it, you can buy me a beer in return.\n * ----------------------------------------------------------------------------\n */\n\n",
"You'll need a C++ regular expression library. There are a number of them but they will create a dependency. C++ has no native (or STL) regular expression support.\n"
] |
[
5,
2
] |
[] |
[] |
[
"c++",
"pattern_matching",
"python",
"regex"
] |
stackoverflow_0002255302_c++_pattern_matching_python_regex.txt
|
Q:
Extra data about objects in django templates
I have django objects:
class Event(models.Model):
title = models.CharField(max_length=255)
event_start_date = models.DateField(null=True, blank='true')
...
class RegistrationDate(models.Model):
event = models.ForeignKey(tblEvents)
date_type = models.CharField(max_length=10, choices=registration_date_type)
start_date = models.DateField(blank='true', null='true')
end_date = models.DateField(blank='true', null='true')
An Event can have early, normal, and late registration periods.
I wrote a function that takes in an event and returns one of: None, "Early", "Normal", or "Late"
All that works great.
In my app, I want to display a list of events and what their registration status is. So I did a query as such.
Events = tblEvents.objects.all()
So I have all of the info about the event, but not the status.
What is the easiest/best way to get the status for each event displayed in the template?
I figure that I can write a template tag, but that seems like more work than should be necessary.
A:
Add a property to your Event class e.g.:
class Event:
# stuff here
@property
def status(self):
# do the same thing here as in your status function
return status
Then you can do this in your template:
{{ event.status }}
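A slightly fuller sketch of that property against the models above; the date comparison is an assumption, since the original status function isn't shown, and null start/end dates would need extra handling:
import datetime

class Event(models.Model):
    # ... fields as above ...

    @property
    def status(self):
        today = datetime.date.today()
        # registrationdate_set is Django's default reverse name for the ForeignKey
        for period in self.registrationdate_set.all():
            if period.start_date <= today <= period.end_date:
                return period.date_type  # "Early", "Normal" or "Late"
        return None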
A:
I think you can make that function you wrote a method of Event. Then you can just call it from the template. For example...
{% if event %}
    {{ event.getStatus }}
{% endif %}
...but I haven't done Django in a little while.
|
Extra data about objects in django templates
|
I have django objects:
class Event(models.Model):
title = models.CharField(max_length=255)
event_start_date = models.DateField(null=True, blank='true')
...
class RegistrationDate(models.Model):
event = models.ForeignKey(tblEvents)
date_type = models.CharField(max_length=10, choices=registration_date_type)
start_date = models.DateField(blank='true', null='true')
end_date = models.DateField(blank='true', null='true')
An Event can have early, normal, and late registration periods.
I wrote a function that takes in an event and returns one of: None, "Early", "Normal", or "Late"
All that works great.
In my app, I want to display a list of events and what their registration status is. So I did a query as such.
Events = tblEvents.objects.all()
So I have all of the info about the event, but not the status.
What is the easiest/best way to get the status for each event displayed in the template?
I figure that I can write a template tag, but that seems like more work than should be necessary.
|
[
"Add a property to your Event class e.g.:\nclass Event:\n # stuff here\n\n @property\n def status(self):\n # do the same thing here as in your status function\n return status\n\nThe you can do in your template:\n{{ event.status }}\n\n",
"I think you can make that function you wrote a class method of Event. Then you can just call it from the template. For example...\n{% if event %}\n event.getStatus\n{% endif %}\n\n...but I haven't done Django in a little while.\n"
] |
[
5,
2
] |
[] |
[] |
[
"django",
"django_models",
"django_templates",
"python"
] |
stackoverflow_0002255488_django_django_models_django_templates_python.txt
|
Q:
How can I import a C++ python extension into a module in another directory?
Here is the directory structure:
app/
__init__.py
sub1/
__init__.py
mod1.py
sub2/
__init__.py
sub2.so
test_sub2.py
The folder app is on my PYTHONPATH
All of the __init__.py files are empty.
The shared library sub2.so is a C++ extension module that I compiled using cmake and boost-python.
test_sub2.py is a test script for the class defined in sub2.so.
If I run test_sub2.py from the sub2 directory, it imports the module correctly and the test passes.
How do I import the class A from sub2.so into mod1.py?
A:
The way to import it is to import app.sub2.sub2, from any source file. Your test should actually live outside of app and use that module-path to get to the extension module.
A:
Try
import app.sub2.sub2
in your mod1.py file
A:
Use relative imports:
from ..sub2.sub2 import A
This is similar to a relative path "../sub2/sub2.so".
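Either way, mod1.py ends up looking something like this sketch (class name A comes from the question; the absolute form assumes the directory containing app is on sys.path):
# app/sub1/mod1.py (sketch)
from app.sub2.sub2 import A  # the class compiled into sub2.so

a = A()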
|
How can I import a C++ python extension into a module in another directory?
|
Here is the directory structure:
app/
__init__.py
sub1/
__init__.py
mod1.py
sub2/
__init__.py
sub2.so
test_sub2.py
The folder app is on my PYTHONPATH
All of the __init__.py files are empty.
The shared library sub2.so is a C++ extension module that I compiled using cmake and boost-python.
test_sub2.py is a test script for the class defined in sub2.so.
If I run test_sub2.py from the sub2 directory, it imports the module correctly and the test passes.
How do I import the class A from sub2.so into mod1.py?
|
[
"The way to import it is to import app.sub2.sub2, from any source file. Your test should actually live outside of app and use that module-path to get to the extension module.\n",
"Try \nimport .app.sub2.sub2 \n\nin your mod1.py file\n",
"Use relative imports:\nfrom ..sub2.sub2 import A\n\nThis is similar to a relative path \"../sub2/sub2.so\".\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"import",
"module",
"python"
] |
stackoverflow_0002255543_import_module_python.txt
|
Q:
Different results from converting a file from iso-8859-1 to utf-8 iconv in shell vs calling it from python with subprocess
Well, this could be a simple question; to be frank, I'm a little confused with encodings and all those things.
Let's suppose I have the file 01234.txt which is iso-8859-1.
When I do:
iconv --from-code=iso-8859-1 --to-code=utf-8 01234.txt > 01234_utf8.txt
It gives me the desired result, but when I do the same thing with python and using subprocess:
import subprocess
p0 = subprocess.Popen([<here the same command>], shell=True)
p0.wait()
I get almost the same result, but the new file is truncated: part of the second-to-last line and the whole last line are missing.
Here the last three lines of both files:
iconv result:
795719000|MARIA TERESA MARROU VILLALOBOS|107
259871385|CHRISTIAM ALBERTO SUAREZ VILLALOBOS|107
311015100|JORGE MEZA CERVANTES|09499386
python result:
795719000|MARIA TERESA MARROU VILLALOBOS|107
259871385|CHRISTIAM
EDIT: In the python file I've tried using coding: utf-8 and coding: iso-8859-1 (not both at the same time).
EDIT: I've used codecs in bpython and it works great. When using it from a file I don't get the desired result.
EDIT: I'm using linux (Ubuntu 9.10) and python 2.6.2.
Any suggestions?
A:
You wrote: "In the python file I've used coding: utf-8 and coding: iso-8859-1."
Only the first of those will be used. Secondly, that specifies the encoding of the Python source file in which it appears, so that the Python compiler can do its job. Consequently it is absolutely nothing to do with the encodings of your input file and output file. A script to transcode data from encoding X to encoding Y can be written using only ASCII characters.
Now to your problem:
You wrote: "p0 = subprocess.Popen([<here the same command>], shell=True)"
Please (always) when asking a question, show the EXACT code that was run, not what you hoped/thought was run. Use copy/paste, don't retype it. Don't try to put it in a comment; edit your question.
Update: Here is a GUESS, based on the symptoms: you are losing the last few bytes of a file -- looks like failure to flush a buffer before fading away. Is the size of the truncated output file an integral power of 2?
Perhaps you should not rely on the command line processor doing > 01234_utf8.txt reliably. If you omit that part of the command, does the full payload appear on stdout? If so, you may be able to work around the problem by opening the output file yourself, passing its handle as the stdout arg, and later doing handle.flush() and handle.close().
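A sketch of that workaround, letting Python own the output file instead of the shell redirection (command string kept as in the question):
import subprocess

out = open('01234_utf8.txt', 'w')
p0 = subprocess.Popen('iconv --from-code=iso-8859-1 --to-code=utf-8 01234.txt',
                      shell=True, stdout=out)
p0.wait()
out.flush()
out.close()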
|
Different results from converting a file from iso-8859-1 to utf-8 iconv in shell vs calling it from python with subprocess
|
Well, this could be a simple question; to be frank, I'm a little confused with encodings and all those things.
Let's suppose I have the file 01234.txt which is iso-8859-1.
When I do:
iconv --from-code=iso-8859-1 --to-code=utf-8 01234.txt > 01234_utf8.txt
It gives me the desired result, but when I do the same thing with python and using subprocess:
import subprocess
p0 = subprocess.Popen([<here the same command>], shell=True)
p0.wait()
I get almost the same result, but the new file is truncated: part of the second-to-last line and the whole last line are missing.
Here the last three lines of both files:
iconv result:
795719000|MARIA TERESA MARROU VILLALOBOS|107
259871385|CHRISTIAM ALBERTO SUAREZ VILLALOBOS|107
311015100|JORGE MEZA CERVANTES|09499386
python result:
795719000|MARIA TERESA MARROU VILLALOBOS|107
259871385|CHRISTIAM
EDIT: In the python file I've tried using coding: utf-8 and coding: iso-8859-1 (not both at the same time).
EDIT: I've used codecs in bpython and it works great. When using it from a file I don't get the desired result.
EDIT: I'm using linux (Ubuntu 9.10) and python 2.6.2.
Any suggestions?
|
[
"You wrote: \"In the python file I've used coding: utf-8 and coding: iso-8859-1.\"\nOnly the first of those will be used. Secondly, that specifies the encoding of the Python source file in which it appears, so that the Python compiler can do its job. Consequently it is absolutely nothing to do with the encodings of your input file and output file. A script to transcode data from encoding X to encoding Y can be written using only ASCII characters.\nNow to your problem:\nYou wrote: \"p0 = subprocess.Popen([<here the same command>], shell=True)\"\nPlease (always) when asking a question, show the EXACT code that was run, not what you hoped/thought was run. Use copy/paste, don't retype it. Don't try to put it in a comment; edit your question.\nUpdate: Here is a GUESS, based on the symptoms: you are losing the last few bytes of a file -- looks like failure to flush a buffer before fading away. Is the size of the truncated output file an integral power of 2?\nPerhaps you should not rely on the command line processor doing > 01234_utf8.txt reliably. If you omit that part of the command, does the full payload appear on stdout? If, so you may be able to work around the problem by opening the output file yourself, passing its handle as the stdout arg, and later doing handle.flush() and handle.close().\n"
] |
[
1
] |
[] |
[] |
[
"iconv",
"python",
"subprocess"
] |
stackoverflow_0002255509_iconv_python_subprocess.txt
|
Q:
Pylons: Routes information availability from templates
I'm building a Pylons application using evoque as our templating engine, though I think my question is relevant to other template engines. I have a base template that I'm using for our pages, and that base template does all the includes for CSS and JavaScript files. I'd like to perform a conditional test to include/exclude CSS and JavaScript files based on the actual page being displayed. Is there a way to access the routes information from the template, in other words to get the /{controller}/{action} information? This would allow me to get only the relevant CSS and JavaScript files for that page based on the controller/action combination.
Thanks in advance,
Doug
A:
You can pull the controller and action information from environ['pylons.routes_dict']['controller'] and ['action'].
I'm not sure if environ is passed into the tmpl_context by default, but if not, you can just add something like this to the BaseController.__before__ method:
c.routes_dict = environ['pylons.routes_dict']
Then reference c.routes_dict['controller'] in your template.
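A sketch of that hook, with imports assumed from a standard Pylons project layout:
# lib/base.py (sketch)
from pylons import request, tmpl_context as c
from pylons.controllers import WSGIController

class BaseController(WSGIController):
    def __before__(self):
        c.routes_dict = request.environ['pylons.routes_dict']

An evoque template can then branch on ${c.routes_dict['controller']} to pick its CSS/JS includes.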
|
Pylons: Routes information availability from templates
|
I'm building a Pylons application using evoque as our templating engine, though I think my question is relevant to other template engines. I have a base template that I'm using for our pages, and that base template does all the includes for CSS and JavaScript files. I'd like to perform a conditional test to include/exclude CSS and JavaScript files based on the actual page being displayed. Is there a way to access the routes information from the template, in other words to get the /{controller}/{action} information? This would allow me to get only the relevant CSS and JavaScript files for that page based on the controller/action combination.
Thanks in advance,
Doug
|
[
"You can pull the controller and action information from environ['pylons.routes_dict']['controller'] and ['action'].\nI'm not sure if environ is passed into the tmpl_context by default, but if not, you can just add something like this to the BaseController.__before__ method:\nc.routes_dict = environ['pylons.routes_dict']\n\nThen reference c.routes_dict['controller'] in your template.\n"
] |
[
2
] |
[] |
[] |
[
"pylons",
"python",
"routes",
"templates"
] |
stackoverflow_0002255026_pylons_python_routes_templates.txt
|
Q:
Sharing scripts that require a virtualenv to be activated
I have virtualenv and virtualenvwrapper installed on a shared Linux server with default settings (virtualenvs are in ~/.virtualenvs). I have several Python scripts that can only be run when the correct virtualenv is activated.
Now I want to share those scripts with other users on the server, but without requiring them to know anything about virtualenv... so they can run python scriptname or ./scriptname and the script will run with the libraries available in my virtualenv.
What's the cleanest way to do this? I've toyed with a few options (like changing the shebang line to point at the virtualenv provided interpreter), but they seem quite inflexible. Any suggestions?
Edit: This is a development server where several other people have accounts. However, none of them are Python programmers (I'm currently trying to convert them). I just want to make it easy for them to run these scripts and possibly inspect their logic, without exposing non-Pythonistas to environment details. Thanks.
A:
Use the following magic(5) at the start of the script.
#!/usr/bin/env python
Change which virtualenv is active and it'll use the python from that virtualenv.
Deactivate the virtualenv, it still runs.
A:
I would vote for adding a shebang line in scriptname pointing to the correct virtualenv python. You just tell your users the full path to scriptname (or put it in their PATH), and they don't even need to know it is a Python script.
If your users are programmers, then I don't see why you wouldn't want them to know/learn about virtualenv.
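A sketch of what that looks like in practice; the user and environment names here are made up:
#!/home/doug/.virtualenvs/myenv/bin/python
# scriptname: runs under myenv with no activation required
import sys
print sys.executable  # prints the virtualenv's interpreter path

Users just run ./scriptname; the kernel picks the virtualenv's interpreter from the shebang.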
A:
If it's only on one server, then flexibility is irrelevant. Modify the shebang. If you're worried about that, make a packaged, installed copy on the dev server that doesn't use the virtualenv. Once it's out of development, whether that's for local users or users in Guatemala, virtualenv is no longer the right tool.
|
Sharing scripts that require a virtualenv to be activated
|
I have virtualenv and virtualenvwrapper installed on a shared Linux server with default settings (virtualenvs are in ~/.virtualenvs). I have several Python scripts that can only be run when the correct virtualenv is activated.
Now I want to share those scripts with other users on the server, but without requiring them to know anything about virtualenv... so they can run python scriptname or ./scriptname and the script will run with the libraries available in my virtualenv.
What's the cleanest way to do this? I've toyed with a few options (like changing the shebang line to point at the virtualenv provided interpreter), but they seem quite inflexible. Any suggestions?
Edit: This is a development server where several other people have accounts. However, none of them are Python programmers (I'm currently trying to convert them). I just want to make it easy for them to run these scripts and possibly inspect their logic, without exposing non-Pythonistas to environment details. Thanks.
|
[
"Use the following magic(5) at the start of the script.\n#!/usr/bin/env python\n\nChange which virtualenv is active and it'll use the python from that virtualenv.\nDeactivate the virtualenv, it still runs.\n",
"I would vote for adding a shebang line in scriptname pointing to the correct virtualenv python. You just tell your users the full path to scriptname (or put it in their PATH), and they don't even need to know it is a Python script.\nIf your users are programmers, then I don't see why you wouldn't want them to know/learn about virtualenv.\n",
"If it's only on one server, then flexibility is irrelevant. Modify the shebang. If you're worried about that, make a packaged, installed copy on the dev server that doesn't use the virtualenv. Once it's out of develepment, whether that's for local users or users in guatemala, virtualenv is no longer the right tool.\n"
] |
[
106,
6,
-1
] |
[] |
[] |
[
"python",
"virtualenv"
] |
stackoverflow_0002253712_python_virtualenv.txt
|