| content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35-137 chars) |
|---|---|---|---|---|---|---|---|---|
Q:
Can I override a query in Django?
I know you can override the delete and save methods in Django models, but can you override a select query somehow to intercept and change a parameter slightly? I have a hashed value I want to check for, and would like to keep the hashing internal to the model.
A:
You don't make it absolutely clear what you want to do, but I think there are two possibilities here.
The general way to override the database query is to define a custom Manager, and override get_query_set method. You can add extra filtering criteria here.
However, if I understand your question properly, you are trying to change the query for a particular field only. In this case, I think the better answer is to define a custom Field. Here you can override get_db_prep_lookup which allows you to customise the value that is used in the database lookup.
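Neither approach is spelled out in code above, so here is a minimal, hedged sketch of both, assuming the Django 1.1/1.2-era APIs the answer names (get_query_set and the two-argument get_db_prep_lookup); the model, manager and field names are made up for illustration:
import hashlib
from django.db import models

class PublishedManager(models.Manager):
    def get_query_set(self):
        # Extra filtering criteria applied to every query made through this manager.
        return super(PublishedManager, self).get_query_set().filter(published=True)

class HashedCharField(models.CharField):
    def get_db_prep_lookup(self, lookup_type, value):
        # Hash the caller-supplied value so the comparison is done against the
        # hashed column, keeping the hashing internal to the model/field.
        if lookup_type == 'exact':
            value = hashlib.sha1(value).hexdigest()
        return super(HashedCharField, self).get_db_prep_lookup(lookup_type, value)

class Article(models.Model):
    published = models.BooleanField(default=False)
    token = HashedCharField(max_length=40)
    objects = PublishedManager()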
A:
if you're using 1.2, you can try raw(), which seems like exactly what you're looking for.
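For example (raw() exists from Django 1.2 onwards; the model, table and variable names here are hypothetical):
for article in Article.objects.raw('SELECT * FROM myapp_article WHERE token = %s', [hashed_value]):
    print article.id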
|
Can I override a query in DJango?
|
I know you can override delete and save methods in DJango models, but can you override a select query somehow to intercept and change a parameter slightly. I have a hashed value I want to check for, and would like to keep the hashing internal to the model.
|
[
"You don't make it absolutely clear what you want to do, but I think there are two possibilities here.\nThe general way to override the database query is to define a custom Manager, and override get_query_set method. You can add extra filtering criteria here.\nHowever, if I understand your question properly, you are trying to change the query for a particular field only. In this case, I think the better answer is to define a custom Field. Here you can override get_db_prep_lookup which allows you to customise the value that is used in the database lookup.\n",
"if you're using 1.2, you can try raw(), which seems like exactly what you're looking for.\n"
] |
[
3,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002439631_django_python.txt
|
Q:
How to make socket.listen(1) work for some time and then continue rest of code?
I'm making a server that creates a TCP socket and works over a port range; for each port it will listen on that port for some time, then continue with the rest of the code.
Like this:
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
msg =''
ports = [x for x in xrange(4000)]
while True:
try:
for i in ports:
sck.bind(('',i))
## sck.listen(1)
## make it just for some time and then continue this
## if there a connection do this
conn, addr = sck.accept()
msg = conn.recv(2048)
## do something
##if no connection continue the for loop
conn.close()
except KeyboardInterrupt:
exit()
So how could I make sck.listen(1) work for just some time?
A:
You can settimeout on the socket to the maximum amount of time you want to wait on it each time (call it again before every listen to the time you want to wait this time around) -- you'll get an exception, socket.timeout, if the timer expires, so be sure to have a try/except socket.timeout: around it to catch that case. (A select.select with a timeout would also work, and has the advantage of being able to wait on multiple sockets and for various conditions, but it's a bit less direct as an answer to your very specific question).
I got many downvotes last time I gave such an answer... presumably by purists who want to make sure nobody, ever, programs in way they disapprove of (e.g. through a very peculiar construct such as yours as opposed to the many normal, usual way of writing servers). Let's see what happens this time around!-)
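A minimal sketch of the settimeout approach for a single port (port stands for whichever port in the range is being probed, and the five-second timeout is arbitrary):
import socket

sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sck.bind(('', port))
sck.listen(1)
sck.settimeout(5.0)            # wait at most 5 seconds for a client
try:
    conn, addr = sck.accept()  # raises socket.timeout if nobody connects
    msg = conn.recv(2048)
    conn.close()
except socket.timeout:
    pass                       # no connection in time -- move on to the next port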
|
How to make socket.listen(1) work for some time and then continue rest of code?
|
I'm making server that make a tcp socket and work over port range, with each port it will listen on that port for some time, then continue the rest of the code.
like this::
import socket
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
msg =''
ports = [x for x in xrange(4000)]
while True:
try:
for i in ports:
sck.bind(('',i))
## sck.listen(1)
## make it just for some time and then continue this
## if there a connection do this
conn, addr = sck.accept()
msg = conn.recv(2048)
## do something
##if no connection continue the for loop
conn.close()
except KeyboardInterrupt:
exit()
so how i could make sck.listen(1) work just for some time ??
|
[
"You can settimeout on the socket to the maximum amount of time you want to wait on it each time (call it again before every listen to the time you want to wait this time around) -- you'll get an exception, socket.timeout, if the timer expires, so be sure to have a try/except socket.timeout: around it to catch that case. (A select.select with a timeout would also work, and has the advantage of being able to wait on multiple sockets and for various conditions, but it's a bit less direct as an answer to your very specific question).\nI got many downvotes last time I gave such an answer... presumably by purists who want to make sure nobody, ever, programs in way they disapprove of (e.g. through a very peculiar construct such as yours as opposed to the many normal, usual way of writing servers). Let's see what happens this time around!-)\n"
] |
[
13
] |
[] |
[] |
[
"ports",
"python",
"sockets"
] |
stackoverflow_0002444178_ports_python_sockets.txt
|
Q:
how to use cherrypy's built-in data storage
Ok, I have been reading the cherrypy documents for some time and have not found a simple example yet. Let's say I have a simple hello world site; how do I store data? Let's say I want to store a = 1 and b = 2 to a dictionary using cherrypy. The config files are confusing as hell. Does anyone have a very simple example of storing values from a simple site in cherrypy?
Here is my code; what am I doing wrong? I made a tmp file, c:/tmp. Where is the config file, and/or where do I put it? This code worked before I tried adding the config.
import cherrypy
import os
cherrypy.config.update({'tools.sessions.on': True,
'tools.sessions.storage_type': "file",
'tools.sessions.storage_path': "/tmp",
'tools.sessions.timeout': 60})
class Application:
def hello(self,what='Hello', who='world'):
cherrypy.session['a'] = 1
return '%s, %s!' % (what, who)
hello.explose=True
root = Application()
cherrypy.quickstart(root)
A:
Edit your config file:
[/]
tools.sessions.on = True
tools.sessions.storage_type = "file" # leave blank for in-memory
tools.sessions.storage_path = "/home/site/sessions"
tools.sessions.timeout = 60
Setting data on a session:
cherrypy.session['fieldname'] = 'fieldvalue'
Getting data:
cherrypy.session.get('fieldname')
Source: http://www.cherrypy.org/wiki/CherryPySessions
A:
You configure cherrypy to use sessions and store them to a file, e.g. in this way:
cherrypy.config.update({'tools.sessions.on': True,
'tools.sessions.storage_type': "file",
'tools.sessions.storage_path': "/tmp/cherrypy_mysessions",
'tools.sessions.timeout': 60})
(or similarly in the config file of course), then cherrypy.session is the "per-user" dict you want, and cherrypy.session['a'] = 1 and similarly for 'b' is how you can store data there.
|
how to use cherrpy built in data storage
|
Ok I have been reading the cherrypy documents for sometime and have not found a simple example yet. Let say I have a simple hello world site, how do I store data? Lets say I want to store a = 1, and b =2 to a dictionary using cherrypy. The config files are confusing as hell. Anyone have very simple example of storing values from a simple site in cherrypy?
Here is my code what am I doing wrong? I made a tmp file c:/tmp, where is the config file, and or where do I put it? This code worked before I try adding config?
import cherrypy
import os
cherrypy.config.update({'tools.sessions.on': True,
'tools.sessions.storage_type': "file",
'tools.sessions.storage_path': "/tmp",
'tools.sessions.timeout': 60})
class Application:
def hello(self,what='Hello', who='world'):
cherrypy.session['a'] = 1
return '%s, %s!' % (what, who)
hello.explose=True
root = Application()
cherrypy.quickstart(root)
|
[
"Edit your config file:\n[/]\ntools.sessions.on = True\ntools.sessions.storage_type = \"file\" # leave blank for in-memory\ntools.sessions.storage_path = \"/home/site/sessions\"\ntools.sessions.timeout = 60\n\nSetting data on a session:\ncherrypy.session['fieldname'] = 'fieldvalue'\n\nGetting data:\ncherrypy.session.get('fieldname')\n\nSource: http://www.cherrypy.org/wiki/CherryPySessions\n",
"You configure cherrypy to use sessions and store them to a file, e.g. in this way:\n cherrypy.config.update({'tools.sessions.on': True,\n 'tools.sessions.storage_type': \"file\",\n 'tools.sessions.storage_path': \"/tmp/cherrypy_mysessions\",\n 'tools.sessions.timeout': 60})\n\n(or similarly in the config file of course), then cherrypy.session is the \"per-user\" dict you want, and cherrypy.session['a'] = 1 and similarly for 'b' is how you can store data there.\n"
] |
[
2,
1
] |
[] |
[] |
[
"cherrypy",
"python"
] |
stackoverflow_0002444270_cherrypy_python.txt
|
Q:
Problem with for-loop in python
This code is supposed to be able to sort the items in self.array based upon the order of the characters in self.order. The method sort runs properly until the third iteration, when for some reason the for loop seems to repeat indefinitely. What is going on here?
Edit: I'm making my own sort function because it is a bonus part of a python assignment I have.
class sorting_class:
def __init__(self):
self.array = ['ca', 'bd', 'ac', 'ab'] #An array of strings
self.arrayt = []
self.globali = 0
self.globalii = 0
self.order = ['a', 'b', 'c', 'd'] #Order of characters
self.orderi = 0
self.carry = []
self.leave = []
self.sortedlist = []
def sort(self):
for arrayi in self.arrayt: #This should only loop for the number items in self.arrayt. However, the third time this is run it seems to loop indefinitely.
print ('run', arrayi) #Shows the problem
if self.order[self.orderi] == arrayi[self.globali]:
self.carry.append(arrayi)
else:
if self.globali != 0:
self.leave.append(arrayi)
def srt(self):
self.arrayt = self.array
my.sort() #First this runs the first time.
while len(self.sortedlist) != len(self.array):
if len(self.carry) == 1:
self.sortedlist.append(self.carry)
self.arrayt = self.leave
self.leave = []
self.carry = []
self.globali = 1
self.orderi = 0
my.sort()
elif len(self.carry) == 0:
if len(self.leave) != 0: #Because nothing matches 'aa' during the second iteration, this code runs the third time"
self.arrayt = self.leave
self.globali = 1
self.orderi += 1
my.sort()
else:
self.arrayt = self.array
self.globalii += 1
self.orderi = self.globalii
self.globali = 0
my.sort()
self.orderi = 0
else: #This is what runs the second time.
self.arrayt = self.carry
self.carry = []
self.globali += 1
my.sort()
my = sorting_class()
my.srt()
A:
During the third pass of your loop you are appending new elements to the list you are iterating over therefore you can never leave the loop:
self.arrayt = self.leave - this assignment leads to the fact that self.leave.append(arrayi) will append elements to the list self.arrayt refers to.
In general you may think about creating copies of lists not just assigning different variables/members to the same list instances.
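For example, a minimal sketch of taking a copy rather than another reference:
self.arrayt = list(self.leave)   # or self.leave[:] -- iterate over a snapshot, not the live list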
A:
You have self.arrayt = self.leave which makes arrayt refer to exactly the same array as leave (it's not a copy of the contents!!!), then in the loop for arrayi in self.arrayt: you perpetrate a self.leave.append(arrayi) -- which lengthens self.leave, which is just another name for the very list self.arrayt you're looping on. Appending to the list you're looping on is a good recipe for infinite loops.
This is just one symptom of this code's inextricable messiness. I recommend you do your sorting with the built-in sort method and put your energy into defining the right key= key-extractor function to get things sorted the exact way you want -- a much more productive use of your time.
A:
The key-extractor Alex mentions is trivial enough to put in a lambda function
>>> array = ['ca', 'bd', 'ac', 'ab']
>>> order = ['a', 'b', 'c', 'd']
>>> sorted(array, key=lambda v:map(order.index,v))
['ab', 'ac', 'bd', 'ca']
>>> order = ['b', 'a', 'c', 'd']
>>> sorted(array, key=lambda v:map(order.index,v))
['bd', 'ab', 'ac', 'ca']
>>> order = ['d', 'c', 'b', 'a']
>>> sorted(array, key=lambda v:map(order.index,v))
['ca', 'bd', 'ac', 'ab']
Let's see how this works:
map calls the method order.index for each item in v and uses those return values to create a list.
v will be one of the elements of array
>>> order = ['a', 'b', 'c', 'd']
>>> map(order.index,array[0])
[2, 0]
>>> map(order.index,array[1])
[1, 3]
>>> map(order.index,array[2])
[0, 2]
>>> map(order.index,array[3])
[0, 1]
The function is supplied as a key= to sort, so internally those lists are being sorted instead of the strings.
|
Problem with for-loop in python
|
This code is supposed to be able to sort the items in self.array based upon the order of the characters in self.order. The method sort runs properly until the third iteration, unil for some reason the for loop seems to repeat indefinitely. What is going on here?
Edit: I'm making my own sort function because it is a bonus part of a python assignment I have.
class sorting_class:
def __init__(self):
self.array = ['ca', 'bd', 'ac', 'ab'] #An array of strings
self.arrayt = []
self.globali = 0
self.globalii = 0
self.order = ['a', 'b', 'c', 'd'] #Order of characters
self.orderi = 0
self.carry = []
self.leave = []
self.sortedlist = []
def sort(self):
for arrayi in self.arrayt: #This should only loop for the number items in self.arrayt. However, the third time this is run it seems to loop indefinitely.
print ('run', arrayi) #Shows the problem
if self.order[self.orderi] == arrayi[self.globali]:
self.carry.append(arrayi)
else:
if self.globali != 0:
self.leave.append(arrayi)
def srt(self):
self.arrayt = self.array
my.sort() #First this runs the first time.
while len(self.sortedlist) != len(self.array):
if len(self.carry) == 1:
self.sortedlist.append(self.carry)
self.arrayt = self.leave
self.leave = []
self.carry = []
self.globali = 1
self.orderi = 0
my.sort()
elif len(self.carry) == 0:
if len(self.leave) != 0: #Because nothing matches 'aa' during the second iteration, this code runs the third time"
self.arrayt = self.leave
self.globali = 1
self.orderi += 1
my.sort()
else:
self.arrayt = self.array
self.globalii += 1
self.orderi = self.globalii
self.globali = 0
my.sort()
self.orderi = 0
else: #This is what runs the second time.
self.arrayt = self.carry
self.carry = []
self.globali += 1
my.sort()
my = sorting_class()
my.srt()
|
[
"During the third pass of your loop you are appending new elements to the list you are iterating over therefore you can never leave the loop:\nself.arrayt = self.leave - this assignment leads to the fact that self.leave.append(arrayi) will append elements to the list self.arrayt refers to. \nIn general you may think about creating copies of lists not just assigning different variables/members to the same list instances.\n",
"You have self.arrayt = self.leave which makes arrayt refer to exactly the same array as leave (it's not a copy of the contents!!!), then in the loop for arrayi in self.arrayt: you perpetrate a self.leave.append(arrayi) -- which lenghtens self.leave, which is just another name for the very list self.arrayt you're looping on. Appending to the list you're looping on is a good recipe for infinite loops.\nThis is just one symptom of this code's inextricable messiness. I recommend you do your sorting with the built-in sort method and put your energy into defining the right key= key-extractor function to get things sorted the exact way you want -- a much more productive use of your time.\n",
"The key-extractor Alex mentions is trivial enough to put in a lambda function\n>>> array = ['ca', 'bd', 'ac', 'ab']\n>>> order = ['a', 'b', 'c', 'd']\n>>> sorted(array, key=lambda v:map(order.index,v))\n['ab', 'ac', 'bd', 'ca']\n\n>>> order = ['b', 'a', 'c', 'd']\n>>> sorted(array, key=lambda v:map(order.index,v))\n['bd', 'ab', 'ac', 'ca']\n\n>>> order = ['d', 'c', 'b', 'a']\n>>> sorted(array, key=lambda v:map(order.index,v))\n['ca', 'bd', 'ac', 'ab']\n\nLet's see how this works:\nmap calls the method order.index for each item in v and uses those return values to create a list.\nv will be one of the elements of array\n>>> order = ['a', 'b', 'c', 'd']\n>>> map(order.index,array[0])\n[2, 0]\n>>> map(order.index,array[1])\n[1, 3]\n>>> map(order.index,array[2])\n[0, 2]\n>>> map(order.index,array[3])\n[0, 1]\n\nThe function is supplied as a key= to sort, so internally those lists are being sorted instead of the strings.\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"class",
"for_loop",
"python"
] |
stackoverflow_0002444337_class_for_loop_python.txt
|
Q:
Problem running python/matplotlib in background after ending ssh session
I have to VPN and then ssh from home to my work server and want to run a python script in the background, then log out of the ssh session. My script makes several histogram plots using matplotlib, and as long as I keep the connection open everything is fine, but if I log out I keep getting an error message in the log file I created for the script.
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 2058, in loglog
ax = gca()
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 582, in gca
ax = gcf().gca(**kwargs)
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 276, in gcf
return figure()
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 254, in figure
**kwargs)
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line 90, in new_figure_manager
window = Tk.Tk()
File "/Home/eud/jmcohen/.local/lib/python2.5/lib-tk/Tkinter.py", line 1647, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: couldn't connect to display "localhost:10.0"
I'm assuming that it doesn't know where to create the figures I want since I close my X11 ssh session. If I'm logged in while the script is running I don't see any figures popping up (although that's because I don't have the show() command in my script), and I thought that python uses tkinter to display figures. The way that I'm creating the figures is,
loglog()
hist(list,x)
ylabel('y')
xlabel('x')
savefig('%s_hist.ps' %source.name)
close()
The script requires some initial input, so the way I'm running it in the background is
python scriptToRun.py << start>& logfile.log&
Is there a way around this, or do I just have to stay ssh'd into my machine?
Thanks.
A:
I believe your matplotlib backend requires X11. Look in your matplotlibrc file to determine what your default is (from the error, I'm betting TkAgg). To run without X11, use the Agg backend. Either set it globally in the matplotlibrc file or on a script by script by adding this to the python program:
import matplotlib
matplotlib.use('Agg')
A:
It looks like you're running in interactive mode by default, so matplotlib wants to plot everything to the screen first, which of course it can't do.
Try putting
ioff()
at the top of your script, along with making the backend change.
reference: http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ioff
A:
Sorry if this is a stupid answer, but if you're just running a console session, would 'screen' not suffice? Detachable sessions, etc.
A:
If you are running on a *nix OS, the problem is that your session is terminated and all processes requiring a session are also terminated when you disconnect. More specifically, all your processes are sent a SIGHUP (signal hang-up). The default handling of SIGHUP is to terminate the process. If you want your script to continue, it needs to ignore the signal. The easiest way to do that, assuming you start your script via the command line, is to run it using the nohup command:
nohup python scriptToRun.py << start>& logfile.log&
nohup normally sends standard out and standard error to the file nohup.out in the current directory. Since you're already redirecting output, nohup.out will not be created.
|
Problem running python/matplotlib in background after ending ssh session
|
I have to VPN and then ssh from home to my work server and want to run a python script in the background, then log out of the ssh session. My script makes several histogram plots using matplotlib, and as long as I keep the connection open everything is fine, but if I log out I keep getting an error message in the log file I created for the script.
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 2058, in loglog
ax = gca()
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 582, in gca
ax = gcf().gca(**kwargs)
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 276, in gcf
return figure()
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 254, in figure
**kwargs)
File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line 90, in new_figure_manager
window = Tk.Tk()
File "/Home/eud/jmcohen/.local/lib/python2.5/lib-tk/Tkinter.py", line 1647, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: couldn't connect to display "localhost:10.0"
I'm assuming that it doesn't know where to create the figures I want since I close my X11 ssh session. If I'm logged in while the script is running I don't see any figures popping up (although that's because I don't have the show() command in my script), and I thought that python uses tkinter to display figures. The way that I'm creating the figures is,
loglog()
hist(list,x)
ylabel('y')
xlabel('x')
savefig('%s_hist.ps' %source.name)
close()
The script requires some initial input, so the way I'm running it in the background is
python scriptToRun.py << start>& logfile.log&
Is there a way around this, or do I just have to stay ssh'd into my machine?
Thanks.
|
[
"I believe your matplotlib backend requires X11. Look in your matplotlibrc file to determine what your default is (from the error, I'm betting TkAgg). To run without X11, use the Agg backend. Either set it globally in the matplotlibrc file or on a script by script by adding this to the python program:\nimport matplotlib\nmatplotlib.use('Agg')\n\n",
"It looks like you're running in interactive mode by default, so matplotlib wants to plot everything to the screen first, which of course it can't do.\nTry putting\nioff()\n\nat the top of your script, along with making the backend change.\nreference: http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ioff\n",
"Sorry if this is a stupid answer, but if you're just running a console session, would 'screen' not suffice? Detachable sessions, etc.\n",
"If you are running on a *nix OS the problem is your session is terminated and all processes requiring a session are also terminated when you disconnect. More specifically all your processes are sent a SIGHUP (signal hang-up). The default handling of SITHUP is to terminate the process. If you want you script to continue it needs to ignore the signal. The easiest way to do that assuming you start your script via the command line it to run it using the nohup command:\nnohup python scriptToRun.py << start>& logfile.log&\n\nnohup normally sends standard out and standard error to the file nohup.out in the current directory. Since you're redirecting already output nohup.out will not be created.\n"
] |
[
25,
12,
2,
0
] |
[] |
[] |
[
"background",
"matplotlib",
"python",
"ssh",
"tkinter"
] |
stackoverflow_0002443702_background_matplotlib_python_ssh_tkinter.txt
|
Q:
Dynamically expanding Django forms
I would like to create a form where a user can enter an arbitrary # of items in separate textboxes. The user could add (and potentially remove) fields as needed. Something like this:
(source: eggdrop.ch)
I found the following different solutions:
http://www.eggdrop.ch/blog/2007/02/15/django-dynamicforms/
http://dewful.com/?p=100
These methods both appear a bit involved. Is there a simpler way?
A:
I know this is a new feature in the admin in Django 1.2.
Maybe you can take a look at the way they implemented it there.
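The admin feature mentioned is built on Django formsets, which can also be used directly in your own views; a hedged sketch (the form and field names are made up):
from django import forms
from django.forms.formsets import formset_factory

class ItemForm(forms.Form):
    item = forms.CharField(required=False)

ItemFormSet = formset_factory(ItemForm, extra=3, can_delete=True)

# In a view:
# formset = ItemFormSet(request.POST or None)
# if formset.is_valid():
#     items = [f.cleaned_data.get('item') for f in formset.forms if f.cleaned_data]

Adding and removing rows on the client side still takes a little JavaScript to clone a blank form and bump the formset's TOTAL_FORMS management field, which is roughly what the admin's own script does.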
|
Dynamically expanding Django forms
|
I would like to create a form where a user can enter an arbitrary # of items in separate textboxes. The user could add (and potentially remove) fields as needed. Something like this:
(source: eggdrop.ch)
I found the following different solutions:
http://www.eggdrop.ch/blog/2007/02/15/django-dynamicforms/
http://dewful.com/?p=100
These methods both appear a bit involved. Is there a simpler way?
|
[
"I know this is a new feature in the admin in Django 1.2. \nMaybe you can take a look at the way they implemented it there.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002444656_django_python.txt
|
Q:
Python 2 dict_items.sort() in Python 3
I'm porting some code from Python 2 to 3. This is valid code in Python 2 syntax:
def print_sorted_dictionary(dictionary):
items=dictionary.items()
items.sort()
In Python 3, the dict_items object has no 'sort' method - how can I work around this in Python 3?
A:
Use items = sorted(dictionary.items()), it works great in both Python 2 and Python 3.
A:
dict.items returns a view instead of a list in Python 3 (somewhat similarly to the iteritems method in Python 2.x). To get a sorted list of the items use
sorted_items = sorted(d.items())
The sorted builtin takes an iterable and returns a new list of its items, sorted.
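A quick interactive illustration (the dictionary contents are arbitrary):
>>> d = {'b': 1, 'a': 3, 'c': 2}
>>> sorted(d.items())                        # sorted by key
[('a', 3), ('b', 1), ('c', 2)]
>>> sorted(d.items(), key=lambda kv: kv[1])  # sorted by value instead
[('b', 1), ('c', 2), ('a', 3)]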
|
Python 2 dict_items.sort() in Python 3
|
I'm porting some code from Python 2 to 3. This is valid code in Python 2 syntax:
def print_sorted_dictionary(dictionary):
items=dictionary.items()
items.sort()
In Python 3, the dict_items have no method 'sort' - how can I make a workaround for this in Python 3?
|
[
"Use items = sorted(dictionary.items()), it works great in both Python 2 and Python 3.\n",
"dict.items returns a view instead of a list in Python 3 (somewhat similarly to the iteritems method in Python 2.x). To get a sorted list of the items use\nsorted_items = sorted(d.items())\n\nThe sorted builtin takes an iterable and returns a new list of its items, sorted. \n"
] |
[
10,
3
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0002444697_python_python_3.x.txt
|
Q:
Dynamic mass hosting using mod_wsgi
I am trying to configure an apache server using mod_wsgi for dynamic mass hosting. Each user will have its own instance of a python application located in /mnt/data/www/domains/[user_name], and there will be a vhost.map telling me which domain maps to each user's directory (the directory will have the same name as the user). What I do not know is how to write the WSGIScriptAliasMatch line so that it also takes the path from the vhost.map file.
What I want to do is something like this: I can have different domains on my server, like www.virgilbalibanu.com or virgil.balibanu.com and flaviu.balibanu.com, where each domain would belong to a different user, the user name having no necessary connection to the domain name. I want to do this because when a user makes an account he receives something like virgil.mydomain.com, but if he has his own domain he can later change it to that, for example www.virgilbalibanu.ro; this way I would only need to change the line in the vhost.map file.
So far I have something like this:
Alias /media/ /mnt/data/www/iitcms/media/
#all media is taken from here
RewriteEngine on
RewriteMap lowercase int:tolower
# define the map file
RewriteMap vhost txt:/mnt/data/www/domains/vhost.map
#this does not work either, can;t say why atm
RewriteCond %{REQUEST_URI} ^/uploads/
RewriteCond ${lowercase:%{SERVER_NAME}} ^(.+)$
RewriteCond ${vhost:%1} ^(/.*)$
RewriteRule ^/(.*)$ %1/media/uploads/$1
#---> this I have no ideea how i could do
WSGIScriptAliasMatch ^([^/]+) /mnt/data/www/domains/$1/apache/django.wsgi
<Directory "/mnt/data/www/domains">
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
<DirectoryMatch ^/mnt/data/www/domains/([^/]+)/apache>
AllowOverride None
Options FollowSymLinks ExecCGI
Order deny,allow
Allow from all
</DirectoryMatch>
<Directory /mnt/data/www/iitcms/media>
AllowOverride None
Options Indexes FollowSymLinks MultiViews
Order allow,deny
Allow from all
</Directory>
<DirectoryMatch ^/mnt/data/www/domains/([^/]+)/media/uploads>
AllowOverride None
Options Indexes FollowSymLinks MultiViews
Order allow,deny
Allow from all
</DirectoryMatch>
I know the part I did with mod_rewrite doesn't work; I couldn't really say why, but that's not as important so far. I am curious how I could write the WSGIScriptAliasMatch line to accomplish my objective.
I would be very grateful for any help, or any other ideas on how I can deal with this. Also, it would be great if I managed to get each site to run in wsgi daemon mode, though that is not as important.
Thanks,
Virgil
A:
Discussion thread about this at:
http://groups.google.com/group/modwsgi/browse_frm/thread/2a9905f24c10a967
|
Dynamic mass hosting using mod_wsgi
|
I am trying to configure an apache server using mod_wsgi for dynamic mass hosting. Each user will have it's own instance of a python application located in /mnt/data/www/domains/[user_name] and there will be a vhost.map telling me which domain maps to each user's directory (the directory will have the same name as the user). What i do not know is how to write the WSGIScriptAliasMatch line so that it also takes the path from the vhost.map file.
What i want to do is something like this: I can have on my server different domains like www.virgilbalibanu.com or virgil.balibanu.com and flaviu.balibanu.com where each domain would belog to another user, the user name having no neccesary connection to the domain name. I want to do this beacuse a user, wehn he makes an acoount receives something like virgil.mydomain.com but if he has his own domain he can change it later to that, for example www.virgilbalibanu.ro, and this way I would only need to chenage the line in the vhost.map file
So far I have something like this:
Alias /media/ /mnt/data/www/iitcms/media/
#all media is taken from here
RewriteEngine on
RewriteMap lowercase int:tolower
# define the map file
RewriteMap vhost txt:/mnt/data/www/domains/vhost.map
#this does not work either, can;t say why atm
RewriteCond %{REQUEST_URI} ^/uploads/
RewriteCond ${lowercase:%{SERVER_NAME}} ^(.+)$
RewriteCond ${vhost:%1} ^(/.*)$
RewriteRule ^/(.*)$ %1/media/uploads/$1
#---> this I have no ideea how i could do
WSGIScriptAliasMatch ^([^/]+) /mnt/data/www/domains/$1/apache/django.wsgi
<Directory "/mnt/data/www/domains">
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
<DirectoryMatch ^/mnt/data/www/domains/([^/]+)/apache>
AllowOverride None
Options FollowSymLinks ExecCGI
Order deny,allow
Allow from all
</DirectoryMatch>
<Directory /mnt/data/www/iitcms/media>
AllowOverride None
Options Indexes FollowSymLinks MultiViews
Order allow,deny
Allow from all
</Directory>
<DirectoryMatch ^/mnt/data/www/domains/([^/]+)/media/uploads>
AllowOverride None
Options Indexes FollowSymLinks MultiViews
Order allow,deny
Allow from all
</DirectoryMatch>
I know the part i did with mod_rewrite doesn't work, couldn't really say why not but that's not as important so far, I am curious how could i write the WSGIScriptAliasMatch line so that to accomplish my objective.
I would be very grateful for any help, or any other ideas related to how i can deal with this. Also it would be great if I'd manage to get each site to run in wsgi daemon mode, thou that is not as important.
Thanks,
Virgil
|
[
"Discussion thread about this at:\nhttp://groups.google.com/group/modwsgi/browse_frm/thread/2a9905f24c10a967\n"
] |
[
1
] |
[] |
[] |
[
"apache2",
"django",
"mod_wsgi",
"python"
] |
stackoverflow_0002426637_apache2_django_mod_wsgi_python.txt
|
Q:
Generating two thumbnails from the same image in Django
this seems like quite an easy problem but I can't figure out what is going on here.
Basically, what I'd like to do is create two different thumbnails from one image on a Django model. What ends up happening is that it seems to be looping and recreating the same image (while appending an underscore to it each time) until it throws up an error that the filename is too long. So, you end up with something like:
OSError: [Errno 36] File name too long: 'someimg________________etc.jpg'
Here is the code (the save method is on the Artist model):
def save(self, *args, **kwargs):
if self.image:
iname = os.path.split(self.image.name)[-1]
fname, ext = os.path.splitext(iname)
tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext
self.thumb_large.save(tlname, make_thumb(self.image, size=(250,250)))
self.thumb_small.save(tsname, make_thumb(self.image, size=(100,100)))
super(Artist, self).save(*args, **kwargs)
def make_thumb(infile, size=(100,100)):
infile.seek(0)
image = Image.open(infile)
if image.mode not in ('L', 'RGB'):
image.convert('RGB')
image.thumbnail(size, Image.ANTIALIAS)
temp = StringIO()
image.save(temp, 'png')
return ContentFile(temp.getvalue())
I didn't show imports for the sake of brevity. Assume there are two ImageFields on the Artist model: thumb_large, and thumb_small.
The way I am testing if this works is, in the shell:
artist = Artist.objects.get(id=1)
artist.save()
#error here after a little wait (until I assume it generates enough images that the OSError gets raised)
If this isn't the correct way to do it, I'd appreciate any feedback. Thanks!
A:
Generally I like to give thumbnailing capabilities to the template author as much as possible. That way they can adjust the size of the things in the template. Whereas building it into the business logic layer is more fixed. You might have a reason though.
This template filter should generate the file on first load, then load the file on future loads. It's borrowed from some blog a long time back, although I think I added the center-crop feature. There are most likely others with even more features.
{% load thumbnailer %}
...
<img src="{{someimage|thumbnail_crop:'200x200'}}" />
file appname/templatetags/thumbnailer.py
import os
import Image
from django.template import Library
register = Library()
from settings import MEDIA_ROOT, MEDIA_URL
def thumbnail_crop(file, size='104x104', noimage=''):
# defining the size
x, y = [int(x) for x in size.split('x')]
# defining the filename and the miniature filename
try:
filehead, filetail = os.path.split(file.path)
except:
return '' # '/media/img/noimage.jpg'
basename, format = os.path.splitext(filetail)
#quick fix for format
if format.lower() =='.gif':
return (filehead + '/' + filetail).replace(MEDIA_ROOT, MEDIA_URL)
miniature = basename + '_' + size + format
filename = file.path
miniature_filename = os.path.join(filehead, miniature)
filehead, filetail = os.path.split(file.url)
miniature_url = filehead + '/' + miniature
if os.path.exists(miniature_filename) and os.path.getmtime(filename)>os.path.getmtime(miniature_filename):
os.unlink(miniature_filename)
# if the image wasn't already resized, resize it
if not os.path.exists(miniature_filename):
try:
image = Image.open(filename)
except:
return noimage
src_width, src_height = image.size
src_ratio = float(src_width) / float(src_height)
dst_width, dst_height = x, y
dst_ratio = float(dst_width) / float(dst_height)
if dst_ratio < src_ratio:
crop_height = src_height
crop_width = crop_height * dst_ratio
x_offset = float(src_width - crop_width) / 2
y_offset = 0
else:
crop_width = src_width
crop_height = crop_width / dst_ratio
x_offset = 0
y_offset = float(src_height - crop_height) / 3
image = image.crop((x_offset, y_offset, x_offset+int(crop_width), y_offset+int(crop_height)))
image = image.resize((dst_width, dst_height), Image.ANTIALIAS)
try:
image.save(miniature_filename, image.format, quality=90, optimize=1)
except:
try:
image.save(miniature_filename, image.format, quality=90)
except:
return '' #'/media/img/noimage.jpg'
return miniature_url
register.filter(thumbnail_crop)
|
Generating two thumbnails from the same image in Django
|
this seems like quite an easy problem but I can't figure out what is going on here.
Basically, what I'd like to do is create two different thumbnails from one image on a Django model. What ends up happening is that it seems to be looping and recreating the same image (while appending an underscore to it each time) until it throws up an error that the filename is to big. So, you end up something like:
OSError: [Errno 36] File name too long: 'someimg________________etc.jpg'
Here is the code (the save method is on the Artist model):
def save(self, *args, **kwargs):
if self.image:
iname = os.path.split(self.image.name)[-1]
fname, ext = os.path.splitext(iname)
tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext
self.thumb_large.save(tlname, make_thumb(self.image, size=(250,250)))
self.thumb_small.save(tsname, make_thumb(self.image, size=(100,100)))
super(Artist, self).save(*args, **kwargs)
def make_thumb(infile, size=(100,100)):
infile.seek(0)
image = Image.open(infile)
if image.mode not in ('L', 'RGB'):
image.convert('RGB')
image.thumbnail(size, Image.ANTIALIAS)
temp = StringIO()
image.save(temp, 'png')
return ContentFile(temp.getvalue())
I didn't show imports for the sake of brevity. Assume there are two ImageFields on the Artist model: thumb_large, and thumb_small.
The way I am testing if this works is, in the shell:
artist = Artist.objects.get(id=1)
artist.save()
#error here after a little wait (until I assume it generates enough images that the OSError gets raised)
If this isn't the correct way to do it, I'd appreciate any feedback. Thanks!
|
[
"Generally I like to give thumbnailing capabilities to the template author as much as possible. That way they can adjust the size of the things in the template. Whereas building it into the business logic layer is more fixed. You might have a reason though.\nThis template filter should generate the file on first load then load the file on future loads. Its borrowed from some blog a long time back although I think I added the center crop feature. There are most likely others with even more features.\n{% load thumbnailer %}\n...\n<img src=\"{{someimage|thumbnail_crop:'200x200'}}\" />\n\nfile appname/templatetags/thumbnailer.py\nimport os\nimport Image\nfrom django.template import Library\n\nregister.filter(thumbnail)\nfrom settings import MEDIA_ROOT, MEDIA_URL\n\ndef thumbnail_crop(file, size='104x104', noimage=''):\n # defining the size\n x, y = [int(x) for x in size.split('x')]\n # defining the filename and the miniature filename\n try:\n filehead, filetail = os.path.split(file.path)\n except:\n return '' # '/media/img/noimage.jpg'\n\n basename, format = os.path.splitext(filetail)\n #quick fix for format\n if format.lower() =='.gif':\n return (filehead + '/' + filetail).replace(MEDIA_ROOT, MEDIA_URL)\n\n miniature = basename + '_' + size + format\n filename = file.path\n miniature_filename = os.path.join(filehead, miniature)\n filehead, filetail = os.path.split(file.url)\n miniature_url = filehead + '/' + miniature\n if os.path.exists(miniature_filename) and os.path.getmtime(filename)>os.path.getmtime(miniature_filename):\n os.unlink(miniature_filename)\n # if the image wasn't already resized, resize it\n if not os.path.exists(miniature_filename):\n try:\n image = Image.open(filename)\n except:\n return noimage\n\n src_width, src_height = image.size\n src_ratio = float(src_width) / float(src_height)\n dst_width, dst_height = x, y\n dst_ratio = float(dst_width) / float(dst_height)\n\n if dst_ratio < src_ratio:\n crop_height = src_height\n crop_width = crop_height * dst_ratio\n x_offset = float(src_width - crop_width) / 2\n y_offset = 0\n else:\n crop_width = src_width\n crop_height = crop_width / dst_ratio\n x_offset = 0\n y_offset = float(src_height - crop_height) / 3\n image = image.crop((x_offset, y_offset, x_offset+int(crop_width), y_offset+int(crop_height)))\n image = image.resize((dst_width, dst_height), Image.ANTIALIAS)\n try:\n image.save(miniature_filename, image.format, quality=90, optimize=1)\n except:\n try:\n image.save(miniature_filename, image.format, quality=90)\n except:\n return '' #'/media/img/noimage.jpg'\n\n return miniature_url\n\nregister.filter(thumbnail_crop)\n\n"
] |
[
3
] |
[] |
[] |
[
"django",
"python",
"python_imaging_library",
"thumbnails"
] |
stackoverflow_0002444691_django_python_python_imaging_library_thumbnails.txt
|
Q:
Unpacking tuples/arrays/lists as indices for Numpy Arrays
I would love to be able to do
>>> A = numpy.array(((1,2),(3,4)))
>>> idx = (0,0)
>>> A[*idx]
and get
1
however this is not valid syntax. Is there a way of doing this without explicitly writing out
>>> A[idx[0], idx[1]]
?
EDIT: Thanks for the replies. In my program I was indexing with a Numpy array rather than a tuple and getting strange results. Converting to a tuple as Alok suggests does the trick.
A:
It's easier than you think:
>>> import numpy
>>> A = numpy.array(((1,2),(3,4)))
>>> idx = (0,0)
>>> A[idx]
1
A:
Try
A[tuple(idx)]
Unless you have a more complex use case that's not as simple as this example, the above should work for all arrays.
A:
No unpacking is necessary—when you have a comma between [ and ], you are making a tuple, not passing arguments. foo[bar, baz] is equivalent to foo[(bar, baz)]. So if you have a tuple t = bar, baz you would simply say foo[t].
A:
Indexing an object calls:
object.__getitem__(index)
When you do A[1, 2], it's the equivalent of:
A.__getitem__((1, 2))
So when you do:
b = (1, 2)
A[1, 2] == A[b]
A[1, 2] == A[(1, 2)]
Both statements will evaluate to True.
If you happen to index with a list, it might not index the same, as [1, 2] != (1, 2)
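An interactive illustration of that last point, using the array from the question:
>>> import numpy
>>> A = numpy.array(((1, 2), (3, 4)))
>>> A[(0, 1)]          # tuple index: a single element
2
>>> A[[0, 1]]          # list index: fancy indexing, selects rows 0 and 1
array([[1, 2],
       [3, 4]])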
|
Unpacking tuples/arrays/lists as indices for Numpy Arrays
|
I would love to be able to do
>>> A = numpy.array(((1,2),(3,4)))
>>> idx = (0,0)
>>> A[*idx]
and get
1
however this is not valid syntax. Is there a way of doing this without explicitly writing out
>>> A[idx[0], idx[1]]
?
EDIT: Thanks for the replies. In my program I was indexing with a Numpy array rather than a tuple and getting strange results. Converting to a tuple as Alok suggests does the trick.
|
[
"It's easier than you think:\n>>> import numpy\n>>> A = numpy.array(((1,2),(3,4)))\n>>> idx = (0,0)\n>>> A[idx]\n1\n\n",
"Try\nA[tuple(idx)]\n\nUnless you have a more complex use case that's not as simple as this example, the above should work for all arrays.\n",
"No unpacking is necessary—when you have a comma between [ and ], you are making a tuple, not passing arguments. foo[bar, baz] is equivalent to foo[(bar, baz)]. So if you have a tuple t = bar, baz you would simply say foo[t].\n",
"Indexing an object calls:\nobject.__getitem__(index)\n\nWhen you do A[1, 2], it's the equivalent of:\nA.__getitem__((1, 2))\n\nSo when you do:\nb = (1, 2)\n\nA[1, 2] == A[b]\nA[1, 2] == A[(1, 2)]\n\nBoth statements will evaluate to True.\nIf you happen to index with a list, it might not index the same, as [1, 2] != (1, 2)\n"
] |
[
24,
21,
5,
4
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0002444923_numpy_python.txt
|
Q:
Can a CLSID be different for the same program installed on two different machines?
I am using comtypes to generate wrappers for a certain COM library. I am having certain issues with a few things that are not being generated properly. I can get around this by doing the missing work manually. However, can I depend on the fact that CLSIDs will not change?
Let's say:
I install a program with the COM library Foo 1.0; now I install the exact same version of that program on another PC. Will the CLSIDs of the interfaces change?
This might be a terribly dumb question.
A:
The CLSID is at least supposed not to change. Naturally, a program can do a lot of stupid things that break the rules. But: as the CLSID is how the class is loaded, a changed CLSID would mean the program using the class would also have to use the changed CLSID.
So, your assumption is right - if the same program in the same version is installed on two computers, it is safe to assume the CLSID does not change.
This is even supposed to be so between versions... but if the library Foo 1.0 is only used by one program, the programmer may get away with a changed CLSID. It is not supposed to change, though.
A:
Disclaimer: Done a lot of COM, but never with python.
The UUID for a COM interface is part of the definition of the interface. It should be the same on every machine, and for all time.
Also, in ATL COM land, classes have CLSIDs, interfaces have IIDs. They both have UUIDs (or possibly GUIDs). Not sure about python.
|
Can a CLSID be different for the same program installed on two different machines?
|
I am using comtypes to generate wrappers for a certain com library. I am having certain issues with a few things, that are not being generated properly. I can get around this by doing the missing work, manually. However can i depend on the fact that CLSID's will not change?
Lets say:
I install a program with the com library Foo 1.0, now i install the exact same version of that program on another PC, will the CLSID's of the interfaces change?
This might be a terribly dumb question.
|
[
"The CLSID is at least supposed not to change. Naturally a program can do a lot many stupid things breaking regulations. But: AS the CLSID is how the class is loaded, a changed CLSID would mean the USING program of a class would also have to use the changed CLSID.\nSu, yous assumption is right - if the same program in the same version is installed on two computers, it is safe to assume the CLSID does not change.\nThis is even supposed t obe so between versions.... but if the library Foo 1.0 is only used by one program, the programmer may get away with a changed CLSID. It is not supposed to change, though.\n",
"Disclaimer: Done a lot of COM, but never with python.\nThe UUID for a COM interface is part of the definition of the interface. It should be the same on every machine, and for all time.\nAlso, in ATL COM land, classes have CLSIDs, interfaces have IIDs. They both have UUIDs (or possibly GUIDs). Not sure about python.\n"
] |
[
1,
1
] |
[] |
[] |
[
"com",
"python"
] |
stackoverflow_0002444897_com_python.txt
|
Q:
How does Python differentiate between the different data types?
Sorry if this is quite noobish to you, but I'm just starting out to learn Python after learning C++ & Java, and I am wondering how in the world I could just declare variables like id = 0 and name = 'John' without any int's or string's in front! I figured out that perhaps it's because there are no ''s in a number, but how would Python figure that out in something like def increase(first, second) instead of something like int increase(int first, int second) in C++?!
A:
The literal objects you mention carry (pointers to;-) their own types with them of course, so when a name's bound to that object the problem of type doesn't arise -- the object always has a type, the name doesn't -- just delegates that to the object it's bound to.
There's no "figuring out" in def increase(first, second): -- name increase gets bound to a function object, names first and second are recorded as parameters-names and will get bound (quite possibly to objects of different types at various points) as increase gets called.
So say the body is return first + second -- a call to increase('foo', 'bar') will then happily return 'foobar' (delegating the addition to the objects, which in this case are strings), and maybe later a call to increase(23, 45) will just as happily return 68 -- again by delegating the addition to the objects bound to those names at the point of call, which in this case are ints. And if you call with incompatible types you'll get an exception as the delegated addition operation can't make sense of the situation -- no big deal!
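The same example as an interpreter session (Python 2 shown, matching the era of the question; the exact TypeError wording varies between versions):
>>> def increase(first, second):
...     return first + second
...
>>> increase('foo', 'bar')
'foobar'
>>> increase(23, 45)
68
>>> increase('foo', 45)
Traceback (most recent call last):
  ...
TypeError: cannot concatenate 'str' and 'int' objects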
A:
Python is dynamically typed: all variables can refer to an object of any type. id and name can be anything, but the actual objects are of types like int and str. 0 is a literal that is parsed to make an int object, and 'John' a literal that makes a str object. Many object types do not have literals and are returned by a callable (like frozenset—there's no way to make a literal frozenset, you must call frozenset.)
Consequently, there is no such thing as declaration of variables, since you aren't defining anything about the variable. id = 0 and name = 'John' are just assignment.
increase returns an int because that's what you return in it; nothing in Python forces it not to be any other object. first and second are only ints if you make them so.
Objects, to a certain extent, share a common interface. You can use the same operators and functions on them all, and if they support that particular operation, it works. It is a common, recommended technique to use different types that behave similarly interchangeably; this is called duck typing. For example, if something takes a file object you can instead pass a cStringIO.StringIO object, which supports the same methods as a file (like read and write) but is a completely different type. This is sort of like Java interfaces, but does not require any formal usage; you just define the appropriate methods.
A:
Python uses the duck-typing method - if it walks, looks and quacks like a duck, then it's a duck. If you pass in a string, and try to do something numerical on it, then it will fail.
Have a look at: http://en.wikipedia.org/wiki/Python_%28programming_language%29#Typing and http://en.wikipedia.org/wiki/Duck_typing
A:
When it comes to assigning literal values to variables, the type of the literal value can be inferred at the time of lexical analysis. For example, anything matching the regular expression (-)?[1-9][0-9]* can be inferred to be an integer literal. If you want to convert it to a float, there needs to be an explicit cast. Similarly, a string literal is any sequence of characters enclosed in single or double quotes.
In a method call, the parameters are not type-checked. You only need to pass in the correct number of them to be able to call the method. So long as the body of the method does not cause any errors with respect to the arguments, you can call the same method with lots of different types of arguments.
A:
In Python, Unlike in C++ and Java, numbers and strings are both objects. So this:
id = 0
name = 'John'
is equivalent to:
id = int(0)
name = str('John')
Since variables id and name are references that may address any Python object, they don't need to be declared with a particular type.
|
How does Python differentiate between the different data types?
|
Sorry if this is quite noobish to you, but I'm just starting out to learn Python after learning C++ & Java, and I am wondering how in the world I could just declare variables like id = 0 and name = 'John' without any int's or string's in front! I figured out that perhaps it's because there are no ''s in a number, but how would Python figure that out in something like def increase(first, second) instead of something like int increase(int first, int second) in C++?!
|
[
"The literal objects you mention carry (pointers to;-) their own types with them of course, so when a name's bound to that object the problem of type doesn't arise -- the object always has a type, the name doesn't -- just delegates that to the object it's bound to.\nThere's no \"figuring out\" in def increase(first, second): -- name increase gets bound to a function object, names first and second are recorded as parameters-names and will get bound (quite possibly to objects of different types at various points) as increase gets called.\nSo say the body is return first + second -- a call to increase('foo', 'bar') will then happily return 'foobar' (delegating the addition to the objects, which in this case are strings), and maybe later a call to increase(23, 45) will just as happily return 68 -- again by delegating the addition to the objects bound to those names at the point of call, which in this case are ints. And if you call with incompatible types you'll get an exception as the delegated addition operation can't make sense of the situation -- no big deal!\n",
"Python is dynamically typed: all variables can refer to an object of any type. id and name can be anything, but the actual objects are of types like int and str. 0 is a literal that is parsed to make an int object, and 'John' a literal that makes a str object. Many object types do not have literals and are returned by a callable (like frozenset—there's no way to make a literal frozenset, you must call frozenset.)\nConsequently, there is no such thing as declaration of variables, since you aren't defining anything about the variable. id = 0 and name = 'John' are just assignment.\nincrease returns an int because that's what you return in it; nothing in Python forces it not to be any other object. first and second are only ints if you make them so.\nObjects, to a certain extent, share a common interface. You can use the same operators and functions on them all, and if they support that particular operation, it works. It is a common, recommended technique to use different types that behave similarly interchangably; this is called duck typing. For example, if something takes a file object you can instead pass a cStringIO.StringIO object, which supports the same method as a file (like read and write) but is a completely different type. This is sort of like Java interfaces, but does not require any formal usage, you just define the appropriate methods.\n",
"Python uses the duck-typing method - if it walks, looks and quacks like a duck, then it's a duck. If you pass in a string, and try to do something numerical on it, then it will fail.\nHave a look at: http://en.wikipedia.org/wiki/Python_%28programming_language%29#Typing and http://en.wikipedia.org/wiki/Duck_typing\n",
"When it comes to assigning literal values to variables, the type of the literal value can be inferred at the time of lexical analysis. For example, anything matching the regular expression (-)?[1-9][0-9]* can be inferred to be an integer literal. If you want to convert it to a float, there needs to be an explicit cast. Similarly, a string literal is any sequence of characters enclosed in single or double quotes.\nIn a method call, the parameters are not type-checked. You only need to pass in the correct number of them to be able to call the method. So long as the body of the method does not cause any errors with respect to the arguments, you can call the same method with lots of different types of arguments.\n",
"In Python, Unlike in C++ and Java, numbers and strings are both objects. So this:\n id = 0\n name = 'John'\n\nis equivalent to:\n id = int(0)\n name = str('John')\n\nSince variables id and name are references that may address any Python object, they don't need to be declared with a particular type.\n"
] |
[
13,
7,
6,
4,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002445193_python.txt
|
Q:
Convert PPT to PNG via python
I want to convert PPT to png, or other image formats using Python.
This question has been asked on SO, but the answers essentially recommend running OpenOffice in a headless X server, which was an absolute pain the last time I used it. (Mostly due to hard-to-replicate bugs caused by OO crashing.)
Is there any other way? (Hopefully using Linux CLI utilities only, and pure Python on top of them.)
A:
A basic workflow :
convert your ppt to pdf by using a pdf printer from PowerPoint or OpenOffice's built in PDF converter
use ghostscript to convert the pdf to png or other image format (something along the line of gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -r100 -sOutputFile=out.png in.pdf)
You can use Python to script this (and pilot OOo / MSPP using Uno / COM), or any script you want.
As far as I know, there is no Python library handling PPT files or converting PDF files to PNG.
As for the OOo crash handling, I would catch Exceptions and attempt a restart of OOo when such event occurs (and probably skip the file, adding it to a list of suspicious files requiring manual processing).
You may find this article http://www.linuxjournal.com/node/1007788 interesting as it provides a class which uses an existing OOo instance to connect or launches one if required in a transparent fashion. It comes with an example of xls -> csv conversion (http://www.linuxjournal.com/content/convert-spreadsheets-csv-files-python-and-pyuno) which can be used as a basis for the conversion you want to attempt.
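The ghostscript step can be driven from Python with the standard subprocess module; a small sketch (it assumes a gs binary on the PATH, and the file names are illustrative):
import subprocess

def pdf_to_png(pdf_path, png_pattern, dpi=100):
    # png_pattern may contain %03d so that each page gets its own file.
    subprocess.check_call([
        'gs', '-dSAFER', '-dBATCH', '-dNOPAUSE',
        '-sDEVICE=png16m', '-r%d' % dpi,
        '-sOutputFile=%s' % png_pattern,
        pdf_path,
    ])

# pdf_to_png('in.pdf', 'out-%03d.png')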
|
Convert PPT to PNG via python
|
I want to convert PPT to png, or other image formats using Python.
This question has been asked on SO, but essentially recommends running OpenOffice in headless X server, which was an absolute pain last time I used it. (Mostly due to hard to replicate bugs due to OO crashing.)
Is there any other way, (Hopefully using Linux CLI utilities only, and pure Python above them?)
|
[
"A basic workflow : \n\nconvert your ppt to pdf by using a pdf printer from PowerPoint or OpenOffice's built in PDF converter\nuse ghostscript to convert the pdf to png or other image format (something along the line of gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -r100 -sOutputFile=out.png in.pdf) \n\nYou can use Python to script this (and pilot OOo / MSPP using Uno / COM), or any script you want. \nAs far as I know, there is no Python library handling PPT files or converting PDF files to PNG. \nAs for the OOo crash handling, I would catch Exceptions and attempt a restart of OOo when such event occurs (and probably skip the file, adding it to a list of suspicious files requiring manual processing). \nYou may find this article http://www.linuxjournal.com/node/1007788 interesting as it provides a class which uses an existing OOo instance to connect or launches one if required in a transparent fashion. It comes with an example of xls -> csv conversion (http://www.linuxjournal.com/content/convert-spreadsheets-csv-files-python-and-pyuno) which can be used as a basis for the conversion you want to attempt. \n"
] |
[
2
] |
[] |
[] |
[
"file_conversion",
"powerpoint",
"python"
] |
stackoverflow_0002443464_file_conversion_powerpoint_python.txt
|
Q:
What are the semantics of the 'is' operator in Python?
How does the is operator determine if two objects are the same? How does it work? I can't find it documented.
A:
From the documentation:
Every object has an identity, a type
and a value. An object’s identity
never changes once it has been
created; you may think of it as the
object’s address in memory. The ‘is‘
operator compares the identity of two
objects; the id() function returns an
integer representing its identity
(currently implemented as its
address).
This would seem to indicate that it compares the memory addresses of the arguments, though the fact that it says "you may think of it as the object's address in memory" might indicate that the particular implementation is not guranteed; only the semantics are.
A:
Comparison Operators
Is works by comparing the object referenced to see if the operands point to the same object.
>>> a = [1, 2]
>>> b = a
>>> a is b
True
>>> c = [1, 2]
>>> a is c
False
c is not the same list as a therefore the is relation is false.
A:
To add to the other answers, you can think of a is b working as if it was is_(a, b):
def is_(a, b):
return id(a) == id(b)
Note that you cannot reliably replace a is b with id(a) == id(b) on arbitrary expressions, because an object's id may be reused after the object is garbage-collected; the function above avoids that pitfall because its parameters keep both objects alive for the comparison.
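To see why the parameters matter, here is a sketch of the pitfall with temporaries; the second comparison often prints True in CPython because the first list is freed before the second is built, so its address can be reused (this is implementation behavior, not guaranteed):
>>> a = [1, 2]; b = [3, 4]
>>> id(a) == id(b)   # both objects alive: reliably False
False
>>> id([1, 2]) == id([3, 4])
True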
|
What are the semantics of the 'is' operator in Python?
|
How does the is operator determine if two objects are the same? How does it work? I can't find it documented.
|
[
"From the documentation:\n\nEvery object has an identity, a type\n and a value. An object’s identity\n never changes once it has been\n created; you may think of it as the\n object’s address in memory. The ‘is‘\n operator compares the identity of two\n objects; the id() function returns an\n integer representing its identity\n (currently implemented as its\n address).\n\nThis would seem to indicate that it compares the memory addresses of the arguments, though the fact that it says \"you may think of it as the object's address in memory\" might indicate that the particular implementation is not guranteed; only the semantics are.\n",
"Comparison Operators\nIs works by comparing the object referenced to see if the operands point to the same object.\n>>> a = [1, 2]\n>>> b = a\n>>> a is b\nTrue\n>>> c = [1, 2]\n>>> a is c\nFalse\n\nc is not the same list as a therefore the is relation is false.\n",
"To add to the other answers, you can think of a is b working as if it was is_(a, b):\ndef is_(a, b):\n return id(a) == id(b)\n\nNote that you cannot directly replace a is b with id(a) == id(b), but the above function avoids that through parameters.\n"
] |
[
14,
14,
7
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002438667_python.txt
|
Q:
How to control a subthread process in python?
Code first:
'''this is main structure of my program'''
from twisted.web import http
from twisted.protocols import basic
import threading
threadstop = False #thread trigger,to be done
class MyThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.start()
def run(self):
while True:
if threadstop:
return
dosomething()
'''def some function'''
if __name__ == '__main__':
from twisted.internet import reactor
t = MyThread()
reactor.listenTCP(serverport,myHttpFactory())
reactor.run()
As my first multithreaded program, I was happy that it worked as expected. But now I find I cannot control it. If I run it in the foreground, Control+C only stops the main process, and I can still find it in the process list; if I run it in the background, I have to use kill -9 pid to stop it. I wonder if there's a way to control the subthread by a trigger variable, or a better way to stop the whole process than kill -9.
A:
Use the atexit module to register (in the main thread) a function that set the global threadstop to True, or, more simply, set the daemon attribute of the thread object to True so it won't keep the process alive if the main thread exits.
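A minimal sketch of the daemon approach, reusing the structure from the question (dosomething is assumed to be defined elsewhere; daemon must be set before start()):
import threading

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True   # the process may exit even while this thread runs
        self.start()
    def run(self):
        while True:
            dosomething()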
A:
This is not a direct answer to your question and Alex has already addressed your query, but here's a thought.
I see you're using python's threading. Did you try using twisted.internet.threads ?
When I find myself using threads in a twisted application, I go to twisted.internet.threads
|
How to control a subthread process in python?
|
Code first:
'''this is main structure of my program'''
from twisted.web import http
from twisted.protocols import basic
import threading
threadstop = False #thread trigger,to be done
class MyThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.start()
def run(self):
while True:
if threadstop:
return
dosomething()
'''def some function'''
if __name__ == '__main__':
from twisted.internet import reactor
t = MyThread()
reactor.listenTCP(serverport,myHttpFactory())
reactor.run()
As my first multithreaded program, I was happy that it worked as expected. But now I find I cannot control it. If I run it in the foreground, Control+C only stops the main process, and I can still find it in the process list; if I run it in the background, I have to use kill -9 pid to stop it. I wonder if there's a way to control the subthread by a trigger variable, or a better way to stop the whole process than kill -9.
|
[
"Use the atexit module to register (in the main thread) a function that set the global threadstop to True, or, more simply, set the daemon attribute of the thread object to True so it won't keep the process alive if the main thread exits.\n",
"This is not a direct answer to your question and Alex has already addressed your query, but here's a thought.\nI see you're using python's threading. Did you try using twisted.internet.threads ?\nWhen I find myself using threads in a twisted application, I go to twisted.internet.threads\n"
] |
[
2,
2
] |
[] |
[] |
[
"multithreading",
"python",
"twisted"
] |
stackoverflow_0002444616_multithreading_python_twisted.txt
|
Q:
Test directory permissions in Python?
In Python on Windows, is there a way to determine if a user has permission to access a directory? I've taken a look at os.access but it gives false results.
>>> os.access('C:\haveaccess', os.R_OK)
False
>>> os.access(r'C:\haveaccess', os.R_OK)
True
>>> os.access('C:\donthaveaccess', os.R_OK)
False
>>> os.access(r'C:\donthaveaccess', os.R_OK)
True
Am I doing something wrong? Is there a better way to check if a user has permission to access a directory?
A:
It can be complicated to check for permissions in Windows (beware of issues in Vista with UAC, for example! -- see this related question).
Are you talking about simple read access, i.e. reading the directory's contents?
The surest way of testing permissions would be to try to access the directory (e.g. do an os.listdir) and catch the exception.
Also, in order for paths to be interpreted correctly you have to use raw strings or escape the backslashes ('\\'), -- or use forward slashes instead.
(EDIT: you can avoid slashes altogether by using os.path.join -- the recommended way to build paths)
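A sketch of that try-and-catch approach:
import os

def can_list_dir(path):
    """Return True if the current user can read the directory's contents."""
    try:
        os.listdir(path)
        return True
    except OSError:   # WindowsError is a subclass of OSError
        return False

print can_list_dir(r'C:\haveaccess')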
A:
While os.access tries its best to tell if a path is accessible or not, it doesn't claim to be perfect. From the Python docs:
Note: I/O operations may fail even
when access() indicates that they
would succeed, particularly for
operations on network filesystems
which may have permissions semantics
beyond the usual POSIX permission-bit
model.
The recommended way to find out if the user has access to do whatever is to try to do it, and catch any exceptions that occur.
A:
Actually, 'C:\haveaccess' is different from r'C:\haveaccess'.
From Python's point of view, 'C:\haveaccess' is not a valid path, so use 'C:\\haveaccess' instead.
I think os.access works just fine.
|
Test directory permissions in Python?
|
In Python on Windows, is there a way to determine if a user has permission to access a directory? I've taken a look at os.access but it gives false results.
>>> os.access('C:\haveaccess', os.R_OK)
False
>>> os.access(r'C:\haveaccess', os.R_OK)
True
>>> os.access('C:\donthaveaccess', os.R_OK)
False
>>> os.access(r'C:\donthaveaccess', os.R_OK)
True
Am I doing something wrong? Is there a better way to check if a user has permission to access a directory?
|
[
"It can be complicated to check for permissions in Windows (beware of issues in Vista with UAC, for example! -- see this related question).\nAre you talking about simple read access, i.e. reading the directory's contents?\nThe surest way of testing permissions would be to try to access the directory (e.g. do an os.listdir) and catch the exception.\nAlso, in order for paths to be interpreted correctly you have to use raw strings or escape the backslashes ('\\\\'), -- or use forward slashes instead. \n(EDIT: you can avoid slashes altogether by using os.path.join -- the recommended way to build paths)\n",
"While os.access tries its best to tell if a path is accessible or not, it doesn't claim to be perfect. From the Python docs:\n\nNote: I/O operations may fail even\n when access() indicates that they\n would succeed, particularly for\n operations on network filesystems\n which may have permissions semantics\n beyond the usual POSIX permission-bit\n model.\n\nThe recommended way to find out if the user has access to do whatever is to try to do it, and catch any exceptions that occur.\n",
"Actually 'C:\\haveaccess' is different than r'C:\\haveaccess'.\nFrom Python point of view 'C:\\haveaccess' is not a valid path, so use 'C:\\\\haveaccess' instead.\nI think os.access works just fine.\n"
] |
[
7,
5,
0
] |
[] |
[] |
[
"directory",
"permissions",
"python",
"windows"
] |
stackoverflow_0000539133_directory_permissions_python_windows.txt
|
Q:
In Django, what is a one-to-one relationship?
I've always been using ForeignKeys.
A:
A one-to-one relationship is a unique relation between two entities in both directions. I.e. for an entity A there exists only one entity B and vice versa.
The documentation says:
Conceptually, this is similar to a ForeignKey with unique=True, but the "reverse" side of the relation will directly return a single object.
This is most useful as the primary key of a model which "extends" another model in some way; Multi-table inheritance is implemented by adding an implicit one-to-one relation from the child model to the parent model, for example.
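For illustration, a minimal sketch of the "extends another model" case (the model names are made up):
from django.db import models

class Place(models.Model):
    name = models.CharField(max_length=50)

class Restaurant(models.Model):
    # exactly one Restaurant per Place; reachable in reverse as place.restaurant
    place = models.OneToOneField(Place, primary_key=True)
    serves_pizza = models.BooleanField()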
|
In Django, what is a one-to-one relationship?
|
I've always been using ForeignKeys.
|
[
"A one-to-one relationship is a unique relation between two entities in both directions. I.e. for an entity A there exists only one entity B and vice versa.\nThe documentation says:\n\nConceptually, this is similar to a ForeignKey with unique=True, but the \"reverse\" side of the relation will directly return a single object.\nThis is most useful as the primary key of a model which \"extends\" another model in some way; Multi-table inheritance is implemented by adding an implicit one-to-one relation from the child model to the parent model, for example.\n\n"
] |
[
3
] |
[] |
[] |
[
"database",
"django",
"mysql",
"python"
] |
stackoverflow_0002445823_database_django_mysql_python.txt
|
Q:
how to add markup to text using JavaScript regex
I need to add markup to some text using JavaScript regular expressions. In Python I could do this with:
>>> import re
>>> re.sub('(banana|apple)', r'<b>\1</b>', 'I have 1 banana and 2 apples!')
'I have 1 <b>banana</b> and 2 <b>apple</b>s!'
What is the equivalent in JavaScript?
string.replace(regex, newstring) seems to only take a raw string for replacing.
A:
In the new string, you can reference capture groups via the tokens $1, $2, etc. A lot of the high-level reference sites (like w3schools) fail to document that. It's in the spec, of course, or more accessibly discussed on MDC.
So taking your example:
"I have 1 banana and 2 apples!".replace(/(banana|apple)/gi, "<b>$1</b>");
(Of course, the "s" in "apples" won't be inside the tag...) I used the 'i' flag there assuming you want to process "Banana" and "Apple" as well as "banana" and "apple".
A:
You can use String.replace() for this and use $n to reference captured groups from the regular expression:
var input = 'I have 1 banana and 2 apples!'; // "in" is a reserved word in JavaScript
var out = input.replace(/(banana|apple)/g, "<b>$1</b>");
|
how to add markup to text using JavaScript regex
|
I need to add markup to some text using JavaScript regular expressions. In Python I could do this with:
>>> import re
>>> re.sub('(banana|apple)', r'<b>\1</b>', 'I have 1 banana and 2 apples!')
'I have 1 <b>banana</b> and 2 <b>apple</b>s!'
What is the equivalent in JavaScript?
string.replace(regex, newstring) seems to only take a raw string for replacing.
|
[
"In the new string, you can reference capture groups via the tokens $1, $2, etc. A lot of the high-level reference sites (like w3schools) fail to document that. It's in the spec, of course, or more accessibly discussed on MDC.\nSo taking your example:\n\"I have 1 banana and 2 apples!\".replace(/(banana|apple)/gi, \"<b>$1</b>\");\n\n(Of course, the \"s\" in \"apples\" won't be inside the tag...) I used the 'i' flag there assuming you want to process \"Banana\" and \"Apple\" as well as \"banana\" and \"apple\".\n",
"You can use String.replace() for this and use $n to reference captured groups from the regular expression:\nvar in = 'I have 1 banana and 2 apples!';\nvar out = in.replace(/(banana|apple)/g, \"<b>$1</b>\");\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"javascript",
"python",
"regex"
] |
stackoverflow_0002446056_javascript_python_regex.txt
|
Q:
Is there an API for Aardvark?
Is there an API for Aardvark (http://vark.com)? How can I programmatically ask questions and get answers?
A:
Since their website does not seem to provide an API, you'll have to rely upon the old, brutal and nevertheless efficient (as long as the site does not change) HTML scanning of pages.
On the Java platform, I would suggest you use Groovy goodness, like XmlSlurper, which allows one to parse an XML document with ease.
Besides, a quick googling told me that they're thinking about it.
|
Is there an API for Aardvark?
|
Is there an API for Aardvark (http://vark.com)? How can I programmatically ask questions and get answers?
|
[
"Since their website do not seems to provide an API, you'll have to rely upon the old, brutal and nevertheless efficient (as long as the site do not change) html scanning of pages.\nUsing java platform, i would suggest you to use Groovy goodness, like XmlSlurper, which allow one to parse an XML document with ease.\nBesides, a fast googling told me that they're thinking about it.\n"
] |
[
1
] |
[] |
[] |
[
"java",
"php",
"python"
] |
stackoverflow_0002441803_java_php_python.txt
|
Q:
read a text field in Python using regular expressions
I have text file, like
FILED AS OF DATE: 20090209
DATE AS OF CHANGE: 20090209
I need to find the position using FILED AS OF DATE: and read the date. I know how to do it using python strings. But using a regular expression seems cooler:)
Btw, how to parse the date?
Thanks!
A:
#!/usr/bin/env python
import datetime, fileinput, re
for line in fileinput.input():
if 'FILED AS OF DATE' in line:
line = line.rstrip()
dt = datetime.datetime.strptime(line, 'FILED AS OF DATE: %Y%m%d')
# or with regex
date_str, = re.findall(r'\d+', line)
dt = datetime.datetime.strptime(date_str, '%Y%m%d')
print dt.date()
Example:
$ ./finddate.py input.txt
Output:
2009-02-09
A:
Is this what you need?
/FILED.*([0-9]{4})([0-9]{2})([0-9]{2})$/
Search for FILED, then anything, then parse the date into 3 groups.
A:
You really do not need to use RE for this.
Regarding parsing date, you can use datetime.strptime(date_string, format). Then you can convert it from datetime.datetime to datetime.date if required.
Alternatively use python-dateutil parse() function, which is quite handy when the format of your date(time) value is not fixed.
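For example, a quick sketch with dateutil (assuming the python-dateutil package is installed):
>>> from dateutil.parser import parse
>>> parse('20090209').date()
datetime.date(2009, 2, 9)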
|
read a text field in Python using regular expressions
|
I have text file, like
FILED AS OF DATE: 20090209
DATE AS OF CHANGE: 20090209
I need to find the position using FILED AS OF DATE: and read the date. I know how to do it using python strings. But using a regular expression seems cooler:)
Btw, how to parse the date?
Thanks!
|
[
"#!/usr/bin/env python\nimport datetime, fileinput, re\n\nfor line in fileinput.input():\n if 'FILED AS OF DATE' in line:\n line = line.rstrip()\n dt = datetime.datetime.strptime(line, 'FILED AS OF DATE: %Y%m%d')\n\n # or with regex\n date_str, = re.findall(r'\\d+', line)\n dt = datetime.datetime.strptime(date_str, '%Y%m%d')\n\n print dt.date()\n\nExample:\n$ ./finddate.py input.txt\n\nOutput:\n2009-02-09\n\n",
"Is this what you need?\n/FILED.*([0-9]{4})([0-9]{2})([0-9]{2})$/\n\nSearch for FILED then anything then parses date divided in 3 groups.\n",
"You really do not need to use RE for this. \nRegarding parsing date, you can use datetime.strptime(date_string, format). Then you can convert it from datetime.datetime to datetime.date if required. \nAlternatively use python-dateutil parse() function, which is quite handy when the format of your date(time) value is not fixed.\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0002446447_python_regex.txt
|
Q:
django deployment apache
I would like to create a python script, which will:
Create a django project in the current directory. Fix settings.py, urls.py.
Do syncdb
Install new apache instance listening on specific port (command line argument), with WSGI configured to serve my project.
I can't figure out how to do point 3.
EDIT:
Peter Rowell:
I need the solution for both Linux and Windows
I have root access
This is a dedicated host
Apache only
A:
Jacob Kaplan-Moss's Django Deployment Workshop assets have some nice examples. You'll probably still need to do some legwork on your end to automate things to your taste, but there may be some stuff in there you can use as a starting point.
http://github.com/jacobian/django-deployment-workshop
A:
One way is to use Apache's mod_wsgi. After installing, you create a wsgi file and point Apache's config to it.
Sample wsgi file:
import os
import sys
sys.path.append('/path/to/project')  # the directory that contains mydjangoapp/
os.environ['DJANGO_SETTINGS_MODULE'] = 'mydjangoapp.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Add this to your apache config (on Linux it is in /etc/apache2/sites-available/default):
<VirtualHost *:1234>
ServerName my.host.name.com
WSGIScriptAlias / /path/to/wsgi/file/django.wsgi
</VirtualHost>
(assuming the port is 1234)
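Building on that, a rough sketch of automating point 3 from Python; the Debian-style paths, the a2ensite helper, and the site name are assumptions, so adjust for your platform (a non-default port also needs a matching Listen directive in ports.conf):
import subprocess

VHOST_TEMPLATE = """<VirtualHost *:%(port)d>
    ServerName %(server_name)s
    WSGIScriptAlias / %(wsgi_file)s
</VirtualHost>
"""

def install_vhost(port, server_name, wsgi_file):
    conf = VHOST_TEMPLATE % {'port': port,
                             'server_name': server_name,
                             'wsgi_file': wsgi_file}
    # hypothetical Debian/Ubuntu layout
    with open('/etc/apache2/sites-available/myproject', 'w') as f:
        f.write(conf)
    subprocess.check_call(['a2ensite', 'myproject'])
    subprocess.check_call(['apache2ctl', 'graceful'])

install_vhost(1234, 'my.host.name.com', '/path/to/wsgi/file/django.wsgi')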
|
django deployment apache
|
I would like to create a python script, which will:
Create a django project in the current directory. Fix settings.py, urls.py.
Do syncdb
Install new apache instance listening on specific port (command line argument), with WSGI configured to serve my project.
I can't figure out how to do point 3.
EDIT:
Peter Rowell:
I need the solution for both Linux and Windows
I have root access
This is a dedicated host
Apache only
|
[
"Jacob Kaplan Moss' Django Deployment Workshop assets have some nice examples. You'll probably still need to do some legwork on your end to automate things to your taste but there may be some stuff in there you can use as a starting point.\nhttp://github.com/jacobian/django-deployment-workshop\n",
"One way is to use Apache's mod_wsgi. After installing, you create a wsgi file and point Apache's config to it.\nSample wsgi file:\nimport os\nimport sys\n\nsys.path.append('/path/to/settings.py')\nos.environ['DJANGO_SETTINGS_MODULE'] = 'mydjangoapp.settings'\n\nimport django.core.handlers.wsgi\napplication = django.core.handlers.wsgi.WSGIHandler()\n\nAdd this to your apache config (on Linux it is in /etc/apache2/sites-available/default):\n<VirtualHost *:1234>\n ServerName my.host.name.com\n WSGIScriptAlias / /path/to/wsgi/file/django.wsgi\n</VirtualHost>\n\n(assuming the port is 1234)\n"
] |
[
1,
1
] |
[] |
[] |
[
"apache",
"deployment",
"django",
"python"
] |
stackoverflow_0002419279_apache_deployment_django_python.txt
|
Q:
monkey patching time.time() in python
I've an application where, for testing, I need to replace the time.time() call with a specific timestamp, I've done that in the past using ruby
(code available here: http://github.com/zemariamm/Back-to-Future/blob/master/back_to_future.rb )
However I do not know how to do this using Python.
Any hints ?
Cheers,
Ze Maria
A:
You can simply set time.time to point to your new time function, like this:
import time
def my_time():
return 0.0
old_time = time.time
time.time = my_time
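Restoring the original function afterwards is easy to forget; here is a small sketch wrapping the patch in a context manager so it cannot leak:
import time

class FrozenTime(object):
    """Temporarily make time.time() return a fixed timestamp."""
    def __init__(self, timestamp):
        self.timestamp = timestamp
    def __enter__(self):
        self.old_time = time.time
        time.time = lambda: self.timestamp
    def __exit__(self, *exc_info):
        time.time = self.old_time   # always restore, even after an exception

with FrozenTime(1234567890.0):
    print time.time()   # 1234567890.0
print time.time()       # back to the real clock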
|
monkey patching time.time() in python
|
I've an application where, for testing, I need to replace the time.time() call with a specific timestamp, I've done that in the past using ruby
(code available here: http://github.com/zemariamm/Back-to-Future/blob/master/back_to_future.rb )
However I do not know how to do this using Python.
Any hints ?
Cheers,
Ze Maria
|
[
"You can simply set time.time to point to your new time function, like this:\nimport time\n\ndef my_time():\n return 0.0\n\nold_time = time.time\ntime.time = my_time\n\n"
] |
[
14
] |
[] |
[] |
[
"datetime",
"monkeypatching",
"python",
"ruby",
"time"
] |
stackoverflow_0002446987_datetime_monkeypatching_python_ruby_time.txt
|
Q:
How would I write this query in GeoDjango? (It's a library for geographic calculations in Django)
Right now I'm using raw SQL to find people within 500 meters of the current user.
cursor.execute("SELECT user_id FROM myapp_location WHERE\
GLength(LineStringFromWKB(LineString(asbinary(utm), asbinary(PointFromWKB(point(%s, %s)))))) < %s"\
,(user_utm_easting, user_utm_northing, 500));
How would I do this in GeoDjango? It gets a little tiring writing custom SQL everywhere.
A:
Well assuming you have the appropriate model,
from django.contrib.gis.db import models
class User(models.Model):
location = models.PointField()
objects = models.GeoManager()
it would look like:
User.objects.filter(location__dwithin=(current_user.location, D(m=500)))
But note that such distance lookups are not supported by MySQL.
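For completeness, D in the snippet above is GeoDjango's distance object:
from django.contrib.gis.measure import D

# D(m=500) expresses a distance of 500 meters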
|
How would I write this query in GeoDjango? (It's a library for geographic calculations in Django)
|
Right now I'm using raw SQL to find people within 500 meters of the current user.
cursor.execute("SELECT user_id FROM myapp_location WHERE\
GLength(LineStringFromWKB(LineString(asbinary(utm), asbinary(PointFromWKB(point(%s, %s)))))) < %s"\
,(user_utm_easting, user_utm_northing, 500));
How would I do this in GeoDjango? It gets a little tiring writing custom SQL everywhere.
|
[
"Well assuming you have the appropriate model, \nfrom django.contrib.gis.db import models\n\nclass User(models.Model):\n location = models.PointField()\n objects = models.GeoManager()\n\nit would look like:\nUser.objects.filter(location__dwithin=(current_user.location, D(m=500)))\n\nBut note that such distance lookups are not supported by MySQL.\n"
] |
[
1
] |
[] |
[] |
[
"database",
"django",
"location",
"mysql",
"python"
] |
stackoverflow_0002447257_database_django_location_mysql_python.txt
|
Q:
How to define initialized C-array in the Pyrex?
I want to define initialized C-array in Pyrex, e.g. equivalent of:
unsigned char a[8] = {0,1,2,3,4,5,6,7};
What will be equivalent in Pyrex?
Just array is
cdef unsigned char a[8]
But how can I initialize it with my values?
A:
In Cython, Pyrex's successor, this feature was added over a year a go to fix this feature request, so for example the following works in Cython now:
cdef double a[] = [0.5, 0.3, 0.1, 0.1]
However, Pyrex's development is proceeding much more slowly (which is why Cython was forked years ago by developers rarin' for faster action), so I doubt it's picked up this feature (though you can try, esp. if you're using the very latest release of Pyrex, 0.9.8.6).
If Pyrex isn't giving you the features you want, may I suggest switching to Cython instead? Most Pyrex code should just recompile smoothly in Cython, and you do get the extra features this way.
|
How to define initialized C-array in the Pyrex?
|
I want to define initialized C-array in Pyrex, e.g. equivalent of:
unsigned char a[8] = {0,1,2,3,4,5,6,7};
What will be equivalent in Pyrex?
Just array is
cdef unsigned char a[8]
But how can I initialize it with my values?
|
[
"In Cython, Pyrex's successor, this feature was added over a year a go to fix this feature request, so for example the following works in Cython now:\ncdef double a[] = [0.5, 0.3, 0.1, 0.1]\n\nHowever, Pyrex's development is proceeding much more slowly (which is why Cython was forked years ago by developers rarin' for faster action), so I doubt it's picked up this feature (though you can try, esp. if you're using the very latest release of Pyrex, 0.9.8.6).\nIf Pyrex isn't giving you the features you want, may I suggest switching to Cython instead? Most Pyrex code should just recompile smoothly in Cython, and you do get the extra features this way.\n"
] |
[
4
] |
[] |
[] |
[
"c",
"pyrex",
"python",
"python_c_extension"
] |
stackoverflow_0002446873_c_pyrex_python_python_c_extension.txt
|
Q:
Date versus time interval plotting in Matplotlib
The pyplot plot_date function expects pairs of dates and values to be plotted with a certain line style. Is there a recommended approach to plot multiple values or interval data against date/time values?
A:
To plot interval data, you may use the error bars provided by the errorbar() function, and then use the Axes method xaxis_date() to make matplotlib format the axis the way the plot_date() function does.
Here is an example:
#!/usr/bin/python
import datetime
import numpy as np
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
# dates for xaxis
event_date = [datetime.datetime(2008, 12, 3), datetime.datetime(2009, 1, 5), datetime.datetime(2009, 2, 3)]
# base date for yaxis can be anything, since information is in the time
anydate = datetime.date(2001,1,1)
# event times
event_start = [datetime.time(20, 12), datetime.time(12, 15), datetime.time(8, 1,)]
event_finish = [datetime.time(23, 56), datetime.time(16, 5), datetime.time(18, 34)]
# translate times and dates lists into matplotlib date format numpy arrays
start = np.fromiter((mdates.date2num(datetime.datetime.combine(anydate, event)) for event in event_start), dtype = 'float', count = len(event_start))
finish = np.fromiter((mdates.date2num(datetime.datetime.combine(anydate, event)) for event in event_finish), dtype = 'float', count = len(event_finish))
date = mdates.date2num(event_date)
# calculate events durations
duration = finish - start
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# use errorbar to represent event duration
ax.errorbar(date, start, [np.zeros(len(duration)), duration], linestyle = '')
# make matplotlib treat both axis as times
ax.xaxis_date()
ax.yaxis_date()
plt.show()
|
Date versus time interval plotting in Matplotlib
|
The pyplot plot_date function expects pairs of dates and values to be plotted with a certain line style. Is there a recommended approach to plot multiple values or interval data against date/time values?
|
[
"To plot interval data, you may use the error bar provided by the errorbar() function and the use axis.xaxis_date() to make matplotlib format the axis like plot_date() function does.\nHere is an example:\n#!/usr/bin/python\n\nimport datetime\nimport numpy as np\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\n\n# dates for xaxis\nevent_date = [datetime.datetime(2008, 12, 3), datetime.datetime(2009, 1, 5), datetime.datetime(2009, 2, 3)]\n\n# base date for yaxis can be anything, since information is in the time\nanydate = datetime.date(2001,1,1)\n\n# event times\nevent_start = [datetime.time(20, 12), datetime.time(12, 15), datetime.time(8, 1,)]\nevent_finish = [datetime.time(23, 56), datetime.time(16, 5), datetime.time(18, 34)]\n\n# translate times and dates lists into matplotlib date format numpy arrays\nstart = np.fromiter((mdates.date2num(datetime.datetime.combine(anydate, event)) for event in event_start), dtype = 'float', count = len(event_start))\nfinish = np.fromiter((mdates.date2num(datetime.datetime.combine(anydate, event)) for event in event_finish), dtype = 'float', count = len(event_finish))\ndate = mdates.date2num(event_date)\n\n# calculate events durations\nduration = finish - start\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# use errorbar to represent event duration\nax.errorbar(date, start, [np.zeros(len(duration)), duration], linestyle = '')\n# make matplotlib treat both axis as times\nax.xaxis_date()\nax.yaxis_date()\n\nplt.show()\n\n"
] |
[
5
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0002207670_matplotlib_python.txt
|
Q:
How to customize pickle for django model objects
My app uses a "per-user session" to allow multiple sessions from the same user to share state. It operates very similarly to the django session by pickling objects.
I need to pickle a complex object that refers to django model objects. The standard pickling process stores a denormalized object in the pickle. So if the object changes on the database between pickling and unpickling, the model is now out of date. (I know this is true with in-memory objects too, but the pickling is a convenient time to address it.)
Clearly it would be cleaner to store this complex object in the database, but it's not practical. The code for it is necessarily changing rapidly as the project evolves. Having to update the database schema every time the object's data model changes would slow the project down a lot.
So what I'd like is a way to not pickle the full django model object. Instead just store its class and id, and re-fetch the contents from the database on load. Can I specify a custom pickle method for this class? I'm happy to write a wrapper class around the django model to handle the lazy fetching from db, if there's a way to do the pickling.
A:
It's unclear what your goal is.
"But if I just store the id and class in a tuple then I'm necessarily going back to the database every time I use any of the django objects. I'd like to be able to keep the ones I'm using in memory over the course of a page request."
This doesn't make sense, since a view function is a page request and you have local variables in your view function that keep your objects around until you're finished.
Further, Django's ORM has a cache.
Finally, the Django-supplied session is the usual place for "in-memory objects" between requests.
You shouldn't need to pickle anything.
A:
You can overload the serialization methods. But it would be simpler to put the id and class in a tuple or dict and pickle that.
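As a sketch of the first suggestion, here is a hypothetical wrapper that pickles only the model class and primary key and refetches the row on unpickle (error handling for rows deleted in the meantime is omitted):
class ModelRef(object):
    """Wraps a Django model instance; pickles as (class, pk)."""
    def __init__(self, instance):
        self.instance = instance
    def __getstate__(self):
        # the class itself is pickled by reference, not by value
        return (self.instance.__class__, self.instance.pk)
    def __setstate__(self, state):
        model_class, pk = state
        self.instance = model_class.objects.get(pk=pk)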
|
How to customize pickle for django model objects
|
My app uses a "per-user session" to allow multiple sessions from the same user to share state. It operates very similarly to the django session by pickling objects.
I need to pickle a complex object that refers to django model objects. The standard pickling process stores a denormalized object in the pickle. So if the object changes on the database between pickling and unpickling, the model is now out of date. (I know this is true with in-memory objects too, but the pickling is a convenient time to address it.)
Clearly it would be cleaner to store this complex object in the database, but it's not practical. The code for it is necessarily changing rapidly as the project evolves. Having to update the database schema every time the object's data model changes would slow the project down a lot.
So what I'd like is a way to not pickle the full django model object. Instead just store its class and id, and re-fetch the contents from the database on load. Can I specify a custom pickle method for this class? I'm happy to write a wrapper class around the django model to handle the lazy fetching from db, if there's a way to do the pickling.
|
[
"It's unclear what your goal is.\n\"But if I just store the id and class in a tuple then I'm necessarily going back to the database every time I use any of the django objects. I'd like to be able to keep the ones I'm using in memory over the course of a page request.\"\nThis doesn't make sense, since a view function is a page request and you have local variables in your view function that keep your objects around until you're finished.\nFurther, Django's ORM bas a cache.\nFinally, the Django-supplied session is the usual place for \"in-memory objects\" between requests.\nYou shouldn't need to pickle anything.\n",
"You can overload the serialization methods. But it would be simpler to put the id and class in a tuple or dict and pickle that.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"pickle",
"python"
] |
stackoverflow_0002448035_django_pickle_python.txt
|
Q:
How can I use functools.partial on multiple methods on an object, and freeze parameters out of order?
I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameters being frozen (think of it as generalizing partial to apply to classes). And I'd prefer to do this without editing the original object, just like partial doesn't change its original function.
I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works:
>>> def foo(x, y):
... print x, y
...
>>> bar = bind(foo, y=3)
>>> bar(2)
2 3
But my proxy class does not work, and I'm not sure why:
>>> class Foo(object):
... def bar(self, x, y):
... print x, y
...
>>> a = Foo()
>>> b = PureProxy(a, bar=bind(Foo.bar, y=3))
>>> b.bar(2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: bar() takes exactly 3 arguments (2 given)
I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows.
import inspect
from copy import copy
from types import MethodType
class PureProxy(object):
def __init__(self, underlying, **substitutions):
self.underlying = underlying
for name in substitutions:
subst_attr = substitutions[name]
if hasattr(subst_attr, "underlying"):
setattr(self, name, MethodType(subst_attr, self, PureProxy))
def __getattribute__(self, name):
return getattr(object.__getattribute__(self, "underlying"), name)
def bind(f, *args, **kwargs):
""" Lets you freeze arguments of a function be certain values. Unlike
functools.partial, you can freeze arguments by name, which has the bonus
of letting you freeze them out of order. args will be treated just like
partial, but kwargs will properly take into account if you are specifying
a regular argument by name. """
argspec = inspect.getargspec(f)
argdict = copy(kwargs)
if hasattr(f, "im_func"):
f = f.im_func
args_idx = 0
for arg in argspec.args:
if args_idx >= len(args):
break
argdict[arg] = args[args_idx]
args_idx += 1
num_plugged = args_idx
def new_func(*inner_args, **inner_kwargs):
args_idx = 0
for arg in argspec.args[num_plugged:]:
if arg in argdict:
continue
if args_idx >= len(inner_args):
# We can't raise an error here because some remaining arguments
# may have been passed in by keyword.
break
argdict[arg] = inner_args[args_idx]
args_idx += 1
        return f(**dict(argdict, **inner_kwargs))
new_func.underlying = f
return new_func
Update: In case anyone can benefit, here's the final implementation I went with:
import inspect
from copy import copy
from types import MethodType
class PureProxy(object):
""" Intended usage:
>>> class Foo(object):
... def bar(self, x, y):
... print x, y
...
>>> a = Foo()
>>> b = PureProxy(a, bar=FreezeArgs(y=3))
>>> b.bar(1)
1 3
"""
def __init__(self, underlying, **substitutions):
self.underlying = underlying
for name in substitutions:
subst_attr = substitutions[name]
if isinstance(subst_attr, FreezeArgs):
underlying_func = getattr(underlying, name)
new_method_func = bind(underlying_func, *subst_attr.args, **subst_attr.kwargs)
setattr(self, name, MethodType(new_method_func, self, PureProxy))
def __getattr__(self, name):
return getattr(self.underlying, name)
class FreezeArgs(object):
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
def bind(f, *args, **kwargs):
""" Lets you freeze arguments of a function be certain values. Unlike
functools.partial, you can freeze arguments by name, which has the bonus
of letting you freeze them out of order. args will be treated just like
partial, but kwargs will properly take into account if you are specifying
a regular argument by name. """
argspec = inspect.getargspec(f)
argdict = copy(kwargs)
if hasattr(f, "im_func"):
f = f.im_func
args_idx = 0
for arg in argspec.args:
if args_idx >= len(args):
break
argdict[arg] = args[args_idx]
args_idx += 1
num_plugged = args_idx
def new_func(*inner_args, **inner_kwargs):
args_idx = 0
for arg in argspec.args[num_plugged:]:
if arg in argdict:
continue
if args_idx >= len(inner_args):
# We can't raise an error here because some remaining arguments
# may have been passed in by keyword.
break
argdict[arg] = inner_args[args_idx]
args_idx += 1
        return f(**dict(argdict, **inner_kwargs))
return new_func
A:
You're "binding too deep": change def __getattribute__(self, name): to def __getattr__(self, name): in class PureProxy. __getattribute__ intercepts every attribute access and so bypasses everything that you've set with setattr(self, name, ... making those setattr bereft of any effect, which obviously's not what you want; __getattr__ is called only for access to attributes not otherwise defined so those setattr calls become "operative" & useful.
In the body of that override, you can and should also change object.__getattribute__(self, "underlying") to self.underlying (since you're not overriding __getattribute__ any more). There are other changes I'd suggest (enumerate in lieu of the low-level logic you're using for counters, etc) but they wouldn't change the semantics.
With the change I suggest, your sample code works (you'll have to keep testing with more subtle cases, of course). BTW, the way I debugged this was simply to stick print statements in the appropriate places (a jurassic-era approach, but still my favorite ;-).
|
How can I use functools.partial on multiple methods on an object, and freeze parameters out of order?
|
I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameters being frozen (think of it as generalizing partial to apply to classes). And I'd prefer to do this without editing the original object, just like partial doesn't change its original function.
I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works:
>>> def foo(x, y):
... print x, y
...
>>> bar = bind(foo, y=3)
>>> bar(2)
2 3
But my proxy class does not work, and I'm not sure why:
>>> class Foo(object):
... def bar(self, x, y):
... print x, y
...
>>> a = Foo()
>>> b = PureProxy(a, bar=bind(Foo.bar, y=3))
>>> b.bar(2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: bar() takes exactly 3 arguments (2 given)
I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows.
import inspect
from copy import copy
from types import MethodType
class PureProxy(object):
def __init__(self, underlying, **substitutions):
self.underlying = underlying
for name in substitutions:
subst_attr = substitutions[name]
if hasattr(subst_attr, "underlying"):
setattr(self, name, MethodType(subst_attr, self, PureProxy))
def __getattribute__(self, name):
return getattr(object.__getattribute__(self, "underlying"), name)
def bind(f, *args, **kwargs):
""" Lets you freeze arguments of a function be certain values. Unlike
functools.partial, you can freeze arguments by name, which has the bonus
of letting you freeze them out of order. args will be treated just like
partial, but kwargs will properly take into account if you are specifying
a regular argument by name. """
argspec = inspect.getargspec(f)
argdict = copy(kwargs)
if hasattr(f, "im_func"):
f = f.im_func
args_idx = 0
for arg in argspec.args:
if args_idx >= len(args):
break
argdict[arg] = args[args_idx]
args_idx += 1
num_plugged = args_idx
def new_func(*inner_args, **inner_kwargs):
args_idx = 0
for arg in argspec.args[num_plugged:]:
if arg in argdict:
continue
if args_idx >= len(inner_args):
# We can't raise an error here because some remaining arguments
# may have been passed in by keyword.
break
argdict[arg] = inner_args[args_idx]
args_idx += 1
        return f(**dict(argdict, **inner_kwargs))
new_func.underlying = f
return new_func
Update: In case anyone can benefit, here's the final implementation I went with:
import inspect
from copy import copy
from types import MethodType
class PureProxy(object):
""" Intended usage:
>>> class Foo(object):
... def bar(self, x, y):
... print x, y
...
>>> a = Foo()
>>> b = PureProxy(a, bar=FreezeArgs(y=3))
>>> b.bar(1)
1 3
"""
def __init__(self, underlying, **substitutions):
self.underlying = underlying
for name in substitutions:
subst_attr = substitutions[name]
if isinstance(subst_attr, FreezeArgs):
underlying_func = getattr(underlying, name)
new_method_func = bind(underlying_func, *subst_attr.args, **subst_attr.kwargs)
setattr(self, name, MethodType(new_method_func, self, PureProxy))
def __getattr__(self, name):
return getattr(self.underlying, name)
class FreezeArgs(object):
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
def bind(f, *args, **kwargs):
""" Lets you freeze arguments of a function be certain values. Unlike
functools.partial, you can freeze arguments by name, which has the bonus
of letting you freeze them out of order. args will be treated just like
partial, but kwargs will properly take into account if you are specifying
a regular argument by name. """
argspec = inspect.getargspec(f)
argdict = copy(kwargs)
if hasattr(f, "im_func"):
f = f.im_func
args_idx = 0
for arg in argspec.args:
if args_idx >= len(args):
break
argdict[arg] = args[args_idx]
args_idx += 1
num_plugged = args_idx
def new_func(*inner_args, **inner_kwargs):
args_idx = 0
for arg in argspec.args[num_plugged:]:
if arg in argdict:
continue
if args_idx >= len(inner_args):
# We can't raise an error here because some remaining arguments
# may have been passed in by keyword.
break
argdict[arg] = inner_args[args_idx]
args_idx += 1
        return f(**dict(argdict, **inner_kwargs))
return new_func
|
[
"You're \"binding too deep\": change def __getattribute__(self, name): to def __getattr__(self, name): in class PureProxy. __getattribute__ intercepts every attribute access and so bypasses everything that you've set with setattr(self, name, ... making those setattr bereft of any effect, which obviously's not what you want; __getattr__ is called only for access to attributes not otherwise defined so those setattr calls become \"operative\" & useful.\nIn the body of that override, you can and should also change object.__getattribute__(self, \"underlying\") to self.underlying (since you're not overriding __getattribute__ any more). There are other changes I'd suggest (enumerate in lieu of the low-level logic you're using for counters, etc) but they wouldn't change the semantics.\nWith the change I suggest, your sample code works (you'll have to keep testing with more subtle cases of course). BTW, the way I debugged this was simply to stick in print statements in the appropriate places (a jurassic=era approach but still my favorite;-).\n"
] |
[
3
] |
[] |
[] |
[
"functional_programming",
"metaprogramming",
"python",
"standard_library"
] |
stackoverflow_0002448187_functional_programming_metaprogramming_python_standard_library.txt
|
Q:
Python - platform-independent 5.1 Sound Library
Is there any dolby/5.1/7.1 audio processing Python library? It would be best if it is platform independent.
Would be nice if it looks like:
import lib
f = lib.open("8channels_audiofile")
lib.play(from=f.channel3, to="left rear");
A:
http://pysonic.sourceforge.net/ - this depends on FMOD, which is free for non-commercial use, and supported on many platforms.
See the FMOD website for details: http://www.fmod.org/
|
Python - platform-independent 5.1 Sound Library
|
Is there any dolby/5.1/7.1 audio processing Python library? It would be best if it is platform independent.
Would be nice if it looks like:
import lib
f = lib.open("8channels_audiofile")
lib.play(from=f.channel3, to="left rear");
|
[
"http://pysonic.sourceforge.net/ - this depends on FMOD, which is free for non-commercial use, and supported on many platforms.\nSee the FMOD website for details: http://www.fmod.org/\n"
] |
[
1
] |
[] |
[] |
[
"audio",
"multimedia",
"python",
"signal_processing"
] |
stackoverflow_0002448652_audio_multimedia_python_signal_processing.txt
|
Q:
Check result of AX_PYTHON_MODULE in configure.ac
When using the m4_ax_python_module.m4 macro in configure.ac (AX_PYTHON_MODULE), one can know at configure time if a given module is installed. It takes two arguments: the module name, and a second argument which, if not empty, causes a fatal exit (useful when the module is a must-have).
In the case where you don't want a fatal exit, how do you test in configure.ac which modules were found? They output "yes" or "no" when configure is run, but that's all I've found so far. Basically, if I have these lines in configure.ac:
EDIT: added square brackets around module names
AX_PYTHON_MODULE([json],[])
AX_PYTHON_MODULE([simplejson],[])
How do I test which of the two modules were found?
See http://www.gnu.org/software/autoconf-archive/ax_python_module.html#ax_python_module for documentation about this macro.
A:
Ok the best solution I've found so far was:
EDIT: using AS_IF instead of just if test
AS_IF([test "x${HAVE_PYMOD_JSON}" = "xno"],
AS_IF([test "x${HAVE_PYMOD_SIMPLEJSON}" = "xno"],
[AC_MSG_ERROR([Requires one of json or simplejson])]))
What threw me off was that, in the macro, AS_TR_CPP transforms its arguments into #define-style macros, i.e. all upper case.
|
Check result of AX_PYTHON_MODULE in configure.ac
|
When using the m4_ax_python_module.m4 macro in configure.ac (AX_PYTHON_MODULE), one can know at configure time if a given module is installed. It takes two arguments: the module name, and a second argument which, if not empty, causes a fatal exit (useful when the module is a must-have).
In the case where you don't want a fatal exit, how do you test in configure.ac which modules were found? They output "yes" or "no" when configure is run, but that's all I've found so far. Basically, if I have these lines in configure.ac:
EDIT: added square brackets around module names
AX_PYTHON_MODULE([json],[])
AX_PYTHON_MODULE([simplejson],[])
How do I test which of the two modules were found?
See http://www.gnu.org/software/autoconf-archive/ax_python_module.html#ax_python_module for documentation about this macro.
|
[
"Ok the best solution I've found so far was:\nEDIT: using AS_IF instead of just if test\nAS_IF([test \"x${HAVE_PYMOD_JSON}\" = \"xno\"], \n AS_IF([test \"x${HAVE_PYMOD_SIMPLEJSON}\" = \"xno\"],\n [AC_MSG_ERROR([Requires one of json or simplejson])]))\n\nWhat through me off was in the macro, the AS_TR_CPP transforms its arguments into #define style macros, i.e. all upper case.\n"
] |
[
1
] |
[] |
[] |
[
"autotools",
"configure",
"python"
] |
stackoverflow_0002448756_autotools_configure_python.txt
|
Q:
Getting a Jabber status via Python
I'm developing a website using the Django framework, and I need to retrieve Jabber (okay, Google Talk) statuses for a user. Most of the Jabber python libraries seem like an incredible amount of overkill (and overhead) for a simple task. Is there any simple way to do this?
I know very little about XMPP/Jabber, though of course I'm willing to learn. Do you need to be an authenticated and "friended" user to retrieve another user's status?
A:
I recommend checking out Google AppEngine's XMPP API (Django runs on AppEngine, too). AFAIK you have to be authorized to check a user's status.
A:
Do you need to be an authenticated and
"friended" user to retrieve another
user's status?
Yes.
To get the status of a given user, you should write a jabber bot and the user should add your bot as a friend. Then you would be able to get the status of that user. FriendFeed and other services do that.
Google Buzz is from Google, so they already have access to your chat status...
|
Getting a Jabber status via Python
|
I'm developing a website using the Django framework, and I need to retrieve Jabber (okay, Google Talk) statuses for a user. Most of the Jabber python libraries seem like an incredible amount of overkill (and overhead) for a simple task. Is there any simple way to do this?
I know very little about XMPP/Jabber, though of course I'm willing to learn. Do you need to be an authenticated and "friended" user to retrieve another user's status?
|
[
"I recommend checking out Google AppEngine's XMPP API (Django runs on AppEngine, too). AFAIK you have to be authorized to check a user's status.\n",
"\nDo you need to be an authenticated and\n \"friended\" user to retrieve another\n user's status?\n\nYes.\nTo get the status of a given user, you should write a jabber bot and the user should add your bot as a friend. Then you would be able to get the status of that user. FriendFeed and other services do that.\nGoogle Buzz is from Google, so they already have access to your chat status...\n"
] |
[
0,
0
] |
[] |
[] |
[
"django",
"google_talk",
"python",
"xmpp"
] |
stackoverflow_0002375705_django_google_talk_python_xmpp.txt
|
Q:
SQLAlchemy custom query column
I have a declarative table defined like this:
class Transaction(Base):
__tablename__ = "transactions"
id = Column(Integer, primary_key=True)
account_id = Column(Integer)
transfer_account_id = Column(Integer)
amount = Column(Numeric(12, 2))
...
The query should be:
SELECT id, (CASE WHEN transfer_account_id=1 THEN -amount ELSE amount END) AS amount
FROM transactions
WHERE account_id = 1 OR transfer_account_id = 1
My code is:
query = Transaction.query.filter_by(account_id=1, transfer_account_id=1)
query = query.add_column(case(...).label("amount"))
But it doesn't replace the amount column.
Been trying to do this for hours and I don't want to use raw SQL.
A:
The construct you are looking for is called column_property. You could use a secondary mapper to actually replace the amount column. Are you sure you are not making things too difficult for yourself by not just storing the negative values in the database directly or giving the "corrected" column a different name?
from sqlalchemy import case
from sqlalchemy.orm import mapper, column_property

wrongmapper = mapper(Transaction, Transaction.__table__,
    non_primary = True,
    properties = {'amount':
        column_property(case([(Transaction.transfer_account_id==1, -1*Transaction.amount)],
                        else_=Transaction.amount))})

Session.query(wrongmapper).filter(...)
A:
Any query you do will not replace original amount column. But you can load another column using following query:
q = session.query(Transaction,
case([(Transaction.transfer_account_id==1, -1*Transaction.amount)], else_=Transaction.amount).label('special_amount')
)
q = q.filter(or_(Transaction.account_id==1, Transaction.transfer_account_id==1))
This will not return only Transaction objects, but rather tuple(Transaction, Decimal)
But if you want this property be part of your object, then:
Since your case when ... function is completely independent from the condition in WHERE, I would suggest that you change your code in following way:
1) add a property to you object, which does the case when ... check as following:
@property
def special_amount(self):
return -self.amount if self.transfer_account_id == 1 else self.amount
You can completely wrap this special handling of the amount providing a setter property as well:
@special_amount.setter
def special_amount(self, value):
if self.transfer_account_id is None:
raise Exception('Cannot decide on special handling, because transfer_account_id is not set')
self.amount = -value if self.transfer_account_id == 1 else value
2) fix your query to only have a filter clause with or_ clause (it looks like your query does not work at all):
q = session.query(Transaction).filter(
or_(Transaction.account_id==1,
Transaction.transfer_account_id==1)
)
# then get your results with the proper amount sign:
for t in q.all():
    print t.id, t.special_amount
|
SQLAlchemy custom query column
|
I have a declarative table defined like this:
class Transaction(Base):
__tablename__ = "transactions"
id = Column(Integer, primary_key=True)
account_id = Column(Integer)
transfer_account_id = Column(Integer)
amount = Column(Numeric(12, 2))
...
The query should be:
SELECT id, (CASE WHEN transfer_account_id=1 THEN -amount ELSE amount END) AS amount
FROM transactions
WHERE account_id = 1 OR transfer_account_id = 1
My code is:
query = Transaction.query.filter_by(account_id=1, transfer_account_id=1)
query = query.add_column(case(...).label("amount"))
But it doesn't replace the amount column.
Been trying to do this for hours and I don't want to use raw SQL.
|
[
"The construct you are looking for is called column_property. You could use a secondary mapper to actually replace the amount column. Are you sure you are not making things too difficult for yourself by not just storing the negative values in the database directly or giving the \"corrected\" column a different name?\nfrom sqlalchemy.orm import mapper, column_property\nwrongmapper = sqlalchemy.orm.mapper(Transaction, Transaction.__table,\n non_primary = True,\n properties = {'amount':\n column_property(case([(Transaction.transfer_account_id==1, -1*Transaction.amount)], \n else_=Transaction.amount)})\n\nSession.query(wrongmapper).filter(...)\n\n",
"Any query you do will not replace original amount column. But you can load another column using following query:\nq = session.query(Transaction,\n case([(Transaction.transfer_account_id==1, -1*Transaction.amount)], else_=Transaction.amount).label('special_amount')\n )\nq = q.filter(or_(Transaction.account_id==1, Transaction.transfer_account_id==1))\n\nThis will not return only Transaction objects, but rather tuple(Transaction, Decimal)\n\nBut if you want this property be part of your object, then:\nSince your case when ... function is completely independent from the condition in WHERE, I would suggest that you change your code in following way:\n1) add a property to you object, which does the case when ... check as following:\n@property\ndef special_amount(self):\n return -self.amount if self.transfer_account_id == 1 else self.amount\n\nYou can completely wrap this special handling of the amount providing a setter property as well:\n@special_amount.setter\ndef special_amount(self, value):\n if self.transfer_account_id is None:\n raise Exception('Cannot decide on special handling, because transfer_account_id is not set')\n self.amount = -value if self.transfer_account_id == 1 else value\n\n2) fix your query to only have a filter clause with or_ clause (it looks like your query does not work at all):\nq = session.query(Transaction).filter(\n or_(Transaction.account_id==1, \n Transaction.transfer_account_id==1)\n)\n\n# then get your results with the proper amount sign:\nfor t in q.all():\n print q.id, q.special_amount\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"declarative",
"python",
"sqlalchemy"
] |
stackoverflow_0002444679_declarative_python_sqlalchemy.txt
|
Q:
What's the best way to record the type of every variable assignment in a Python program?
Python is so dynamic that it's not always clear what's going on in a large program, and looking at a tiny bit of source code does not always help. To make matters worse, editors tend to have poor support for navigating to the definitions of tokens or import statements in a Python file.
One way to compensate might be to write a special profiler that, instead of timing the program, would record the runtime types and paths of objects of the program and expose this data to the editor.
This might be implemented with sys.settrace() which sets a callback for each line of code and is how pdb is implemented, or by using the ast module and an import hook to instrument the code, or is there a better strategy? How would you write something like this without making it impossibly slow, and without running afoul of extreme dynamism, e.g. side effects on property access?
A:
I don't think you can help making it slow, but it should be possible to detect the address of each variable when you encounter a STORE_FAST STORE_NAME STORE_* opcode.
Whether or not this has been done before, I do not know.
If you need debugging, look at PDB, this will allow you to step through your code and access any variables.
import pdb
def test():
print 1
pdb.set_trace() # you will enter an interpreter here
print 2
A:
What if you monkey-patched object's class or another prototypical object?
This might not be the easiest if you're not using new-style classes.
A:
You might want to check out PyChecker's code - it does (I think) what you are looking to do.
A:
Pythoscope does something very similar to what you describe and it uses a combination of static information in a form of AST and dynamic information through sys.settrace.
BTW, if you have problems refactoring your project, give Pythoscope a try.
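As a rough illustration of the sys.settrace side of that combination, here is a minimal sketch (all names here are invented for the example) that records every type each local variable takes on while a function runs - slow, but it shows the mechanism:
import sys
from collections import defaultdict

# (function name, variable name) -> set of type names observed
seen_types = defaultdict(set)

def tracer(frame, event, arg):
    if event == 'line':
        for name, value in frame.f_locals.items():
            seen_types[(frame.f_code.co_name, name)].add(type(value).__name__)
    return tracer  # keep tracing nested calls and subsequent lines

def record(func, *args, **kwargs):
    sys.settrace(tracer)
    try:
        return func(*args, **kwargs)
    finally:
        sys.settrace(None)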
|
What's the best way to record the type of every variable assignment in a Python program?
|
Python is so dynamic that it's not always clear what's going on in a large program, and looking at a tiny bit of source code does not always help. To make matters worse, editors tend to have poor support for navigating to the definitions of tokens or import statements in a Python file.
One way to compensate might be to write a special profiler that, instead of timing the program, would record the runtime types and paths of objects of the program and expose this data to the editor.
This might be implemented with sys.settrace() which sets a callback for each line of code and is how pdb is implemented, or by using the ast module and an import hook to instrument the code, or is there a better strategy? How would you write something like this without making it impossibly slow, and without running afoul of extreme dynamism, e.g. side effects on property access?
|
[
"I don't think you can help making it slow, but it should be possible to detect the address of each variable when you encounter a STORE_FAST STORE_NAME STORE_* opcode.\nWhether or not this has been done before, I do not know.\nIf you need debugging, look at PDB, this will allow you to step through your code and access any variables.\nimport pdb\ndef test():\n print 1\n pdb.set_trace() # you will enter an interpreter here\n print 2\n\n",
"What if you monkey-patched object's class or another prototypical object?\nThis might not be the easiest if you're not using new-style classes.\n",
"You might want to check out PyChecker's code - it does (i think) what you are looking to do.\n",
"Pythoscope does something very similar to what you describe and it uses a combination of static information in a form of AST and dynamic information through sys.settrace.\nBTW, if you have problems refactoring your project, give Pythoscope a try.\n"
] |
[
3,
1,
1,
1
] |
[] |
[] |
[
"profiling",
"python"
] |
stackoverflow_0000823103_profiling_python.txt
|
Q:
Python Image Library, Close method
I have been using PIL for the first time today. I wanted to resize an image, assuming it was larger than 800x600, and also create a thumbnail. I could do either of these tasks separately, but not together in one method (I am doing a custom save method in Django admin). This returns a "cannot identify image file" error message.
The error is on the line "image = Image.open(self.photo)" after "#if image size is greater than 800 x 600 then resize image".
I thought this may be because the image is already open, but if I remove the line I still get issues. So I thought I could try closing after creating a thumbnail and then reopening.
But I couldn't find a close method....
A:
Ah, if I only open the original image once and create the thumbnail after resizing, the problem is solved.
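A minimal sketch of that single-open approach - the model class, field names, and thumbnail destination below are only illustrative, not the original code:
from PIL import Image

def save(self, *args, **kwargs):
    image = Image.open(self.photo)  # open the uploaded file exactly once
    if image.size[0] > 800 or image.size[1] > 600:
        image.thumbnail((800, 600), Image.ANTIALIAS)  # resize in place, keeps aspect ratio
        image.save(self.photo.path)
    thumb = image.copy()  # derive the thumbnail from the already-open image
    thumb.thumbnail((128, 128), Image.ANTIALIAS)
    thumb.save(self.photo.path + '.thumb.jpg')  # hypothetical destination
    super(MyModel, self).save(*args, **kwargs)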
|
Python Image Library, Close method
|
I have been using PIL for the first time today. I wanted to resize an image, assuming it was larger than 800x600, and also create a thumbnail. I could do either of these tasks separately, but not together in one method (I am doing a custom save method in Django admin). This returns a "cannot identify image file" error message.
The error is on the line "image = Image.open(self.photo)" after "#if image size is greater than 800 x 600 then resize image".
I thought this may be because the image is already open, but if I remove the line I still get issues. So I thought I could try closing after creating a thumbnail and then reopening.
But I couldn't find a close method....
|
[
"Ah, if i only open the orginal image once and create the thumbnail after resizing then the problem is solved\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python",
"python_imaging_library"
] |
stackoverflow_0002449115_django_python_python_imaging_library.txt
|
Q:
How to get the list of price offers on an item from Amazon with python-amazon-product-api item_lookup function?
I am trying to write a function to get a list of offers (their prices) for an item based on the ASIN:
def price_offers(asin):
from amazonproduct import API, ResultPaginator, AWSError
from config import AWS_KEY, SECRET_KEY
api = API(AWS_KEY, SECRET_KEY, 'de')
str_asin = str(asin)
node = api.item_lookup(id=str_asin, ResponseGroup='Offers', Condition='All', MerchantId='All')
for a in node:
print a.Offer.OfferListing.Price.FormattedPrice
I am reading
http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?ItemLookup.html and trying to make this work, but all the time it just says:
Failure instance: Traceback: <type 'exceptions.AttributeError'>: no such child: {http://webservices.amazon.com/AWSECommerceService/2009-10-01}Offer
A:
Seems like there is no Offer element in your response. Try
node = api.item_lookup(...)
from lxml import etree
print etree.tostring(node, pretty_print=True)
to see what the returned XML looks like.
A:
OK, thanks. To answer my own question for others who might have the same problem, the right way to do the above is:
def price_offers(asin):
from amazonproduct import API, ResultPaginator, AWSError
from config import AWS_KEY, SECRET_KEY
api = API(AWS_KEY, SECRET_KEY, 'de')
str_asin = str(asin)
node = api.item_lookup(id=str_asin, ResponseGroup='Offers', Condition='All', MerchantId='All')
for a in node.Items.Item.Offers.Offer:
print a.OfferListing.Price.FormattedPrice
amazonproduct comes from
http://pypi.python.org/pypi/python-amazon-product-api
|
How to get the list of price offers on an item from Amazon with python-amazon-product-api item_lookup function?
|
I am trying to write a function to get a list of offers (their prices) for an item based on the ASIN:
def price_offers(asin):
from amazonproduct import API, ResultPaginator, AWSError
from config import AWS_KEY, SECRET_KEY
api = API(AWS_KEY, SECRET_KEY, 'de')
str_asin = str(asin)
node = api.item_lookup(id=str_asin, ResponseGroup='Offers', Condition='All', MerchantId='All')
for a in node:
print a.Offer.OfferListing.Price.FormattedPrice
I am reading
http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?ItemLookup.html and trying to make this work, but all the time it just says:
Failure instance: Traceback: <type 'exceptions.AttributeError'>: no such child: {http://webservices.amazon.com/AWSECommerceService/2009-10-01}Offer
|
[
"Seems like there is no Offer element in your response. Try\nnode = api.item_lookup(...)\nfrom lxml import etree\nprint etree.tostring(node, pretty_print=True)\n\nto see how the returned XML looks like.\n",
"OK, thanks. To anwser my own question for others who might have the same problem, the right way to do the above is:\ndef price_offers(asin):\n from amazonproduct import API, ResultPaginator, AWSError\n from config import AWS_KEY, SECRET_KEY\n api = API(AWS_KEY, SECRET_KEY, 'de')\n str_asin = str(asin)\n node = api.item_lookup(id=str_asin, ResponseGroup='Offers', Condition='All', MerchantId='All')\n for a in node.Items.Item.Offers.Offer:\n print a.OfferListing.Price.FormattedPrice\n\namazonproduct comes from\nhttp://pypi.python.org/pypi/python-amazon-product-api\n"
] |
[
6,
6
] |
[] |
[] |
[
"amazon",
"amazon_web_services",
"python"
] |
stackoverflow_0002445420_amazon_amazon_web_services_python.txt
|
Q:
Many to many relation SQLAlchemy (does relation exist attribute)
I'm re-asking this question but with a different framework this time. I have two Models: User and Book with a M2M-relation. I want Book to have an attribute "read" that is True when the relation exists. Is this possible in SQLAlchemy?
A:
Take a look at SQL Expressions as Mapped Attributes. Something like this should do the job for you:
Book.read = column_property(
select(
[func.count(user_to_book_table.c.user_id)],
user_to_book_table.c.book_id == book_table.c.id
).label('read')
)
Even though it is not Boolean, you can still use it in the IF statements correctly:
if mybook.read:
print 'very popular book indeed'
Alternatively you can just add a computed (read-only) property on the Book object, but this will load all the Users into your session:
@property
def read(self):
return len(self.books)!=0
|
Many to many relation SQLAlchemy (does relation exist attribute)
|
I'm re-asking this question but with a different framework this time. I have two Models: User and Book with a M2M-relation. I want Book to have an attribute "read" that is True when the relation exists. Is this possible in SQLAlchemy?
|
[
"Take a look at SQL Expressions as Mapped Attributes. Something like this should do the job for you: \nBook.read = column_property(\n select(\n [func.count(user_to_book_table.c.user_id)],\n user_to_book_table.c.book_id == book_table.c.id\n ).label('read')\n )\n\nEven though it is not Boolean, you can still use it in the IF statements correctly:\nif mybook.read:\n print 'very popular book indeed'\n\nAlternatively you can just add a computed (read-only) property on the Book object, but this will load all the Users into your session:\n@property\ndef read(self):\n return len(self.books)!=0\n\n"
] |
[
1
] |
[] |
[] |
[
"pylons",
"python",
"sqlalchemy"
] |
stackoverflow_0002449258_pylons_python_sqlalchemy.txt
|
Q:
GQL Request BadArgument Error. How to get around with my case?
My query is essentially the following:
entries=Entry.all().order("-votes").order("-date").filter("votes >", VOTE_FILTER).fetch(PAGE_SIZE+1, page* PAGE_SIZE)
I want to grab N of the latest entries that have a voting score above some benchmark (VOTE_FILTER). Google currently says that I cannot filter on 'votes' because I order by 'date.' I don't see a way that I can do this the way I want to, so I'd appreciate any advice.
A:
Assuming your 'vote filter' is a fixed threshold, you need to add a property to your model that records if it's above that threshold or not, enabling you to do a simple equality test to determine which records should be included.
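A sketch of that idea - the property name and the sync-on-save hook are assumptions layered on the question's model, not verified code:
from google.appengine.ext import db

class Entry(db.Model):
    votes = db.IntegerProperty(default=0)
    date = db.DateTimeProperty(auto_now_add=True)
    above_threshold = db.BooleanProperty(default=False)

    def put(self):
        # keep the flag in sync with the threshold whenever the entity is saved
        self.above_threshold = self.votes > VOTE_FILTER
        return super(Entry, self).put()

# an equality filter plus a sort on a different property is allowed
entries = (Entry.all()
           .filter("above_threshold =", True)
           .order("-date")
           .fetch(PAGE_SIZE + 1, page * PAGE_SIZE))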
A:
Yep, there are Restrictions on Queries, as this is GQL, not SQL. It looks like you'll need to use a Query Cursor and reject entries with votes <= VOTE_FILTER in your code.
The semantics of Bigtable are certainly different from an RDBMS, and I'm still trying to wrap my head around them, too.
|
GQL Request BadArgument Error. How to get around with my case?
|
My query is essentially the following:
entries=Entry.all().order("-votes").order("-date").filter("votes >", VOTE_FILTER).fetch(PAGE_SIZE+1, page* PAGE_SIZE)
I want to grab N of the latest entries that have a voting score above some benchmark (VOTE_FILTER). Google currently says that I cannot filter on 'votes' because I order by 'date.' I don't see a way that I can do this the way I want to, so I'd appreciate any advice.
|
[
"Assuming your 'vote filter' is a fixed threshold, you need to add a property to your model that records if it's above that threshold or not, enabling you to do a simple equality test to determine which records should be included.\n",
"Yep, there are Restrictions on Queries as this is Gql not Sql. It looks like you'll need to use a Query Cursor and reject entries on votes <= VOTEFILTER in your code.\nThe semantics of Bigtable are certainly different than an RDBM and I'm still trying to wrap my head around them, too.\n"
] |
[
4,
0
] |
[] |
[] |
[
"google_app_engine",
"gql",
"python"
] |
stackoverflow_0002449090_google_app_engine_gql_python.txt
|
Q:
Django: Adding inline formset rows without javascript
This post relates to this:
Add row to inlines dynamically in django admin
Is there a way to achieve adding inline formsets WITHOUT using javascript? Obviously, there would be a page-refresh involved.
So, if the form had a button called 'add'...
I figured I could do it like this:
if request.method=='POST':
if 'add' in request.POST:
PrimaryFunctionFormSet = inlineformset_factory(Position,Function,extra=1)
prims = PrimaryFunctionFormSet(request.POST)
Which I thought would add 1 each time, then populate the form with the post data. However, it seems that the extra=1 does not add 1 to the post data.
A:
Got it.
Sometimes it's the simplest solution. Just make a copy of the request.POST data and modify the TOTAL-FORMS.
for example..
if request.method=='POST':
PrimaryFunctionFormSet = inlineformset_factory(Position,Function)
if 'add' in request.POST:
cp = request.POST.copy()
cp['prim-TOTAL_FORMS'] = int(cp['prim-TOTAL_FORMS'])+ 1
prims = PrimaryFunctionFormSet(cp,prefix='prim')
Then just spit the form out as normal. Keeps your data, adds an inline editor.
|
Django: Adding inline formset rows without javascript
|
This post relates to this:
Add row to inlines dynamically in django admin
Is there a way to achieve adding inline formsets WITHOUT using javascript? Obviously, there would be a page-refresh involved.
So, if the form had a button called 'add'...
I figured I could do it like this:
if request.method=='POST':
if 'add' in request.POST:
PrimaryFunctionFormSet = inlineformset_factory(Position,Function,extra=1)
prims = PrimaryFunctionFormSet(request.POST)
Which I thought would add 1 each time, then populate the form with the post data. However, it seems that the extra=1 does not add 1 to the post data.
|
[
"Got it.\nSometimes it's the simplest solution. Just make a copy of the request.POST data and modify the TOTAL-FORMS.\nfor example..\nif request.method=='POST':\n PrimaryFunctionFormSet = inlineformset_factory(Position,Function)\n if 'add' in request.POST:\n cp = request.POST.copy()\n cp['prim-TOTAL_FORMS'] = int(cp['prim-TOTAL_FORMS'])+ 1\n prims = PrimaryFunctionFormSet(cp,prefix='prim')\n\nThen just spit the form out as normal. Keeps your data, adds an inline editor.\n"
] |
[
6
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"django_views",
"python"
] |
stackoverflow_0002448970_django_django_forms_django_models_django_views_python.txt
|
Q:
Extending appengine's db.Property with caching
I'm looking to implement a property class for appengine, very similar to the existing db.ReferenceProperty. I am implementing my own version because I want some other default return values. My question is, how do I make the property remember its returned value, so that the datastore query is only performed the first time the property is fetched? What I had is below, and it does not work. I read that the Property classes do not belong to the instances, but to the model definition, so I guess that the return value is not cached for each instance, but overwritten on the model every time. Where should I store this _resolved variable?
class PageProperty(db.Property):
data_type = Page
def get_value_for_datastore(self, model_instance):
page = super(PageProperty, self).get_value_for_datastore(model_instance)
self._resolved = page
return page.key().name()
def make_value_from_datastore(self, value):
if not hasattr(self, '_resolved'):
self._resolved = Page.get_by_name(value)
return self._resolved
Edit
Alex' answer is certainly usable. But it seems that the built-in db.ReferenceProperty does store the _RESOLVED variable on the model instance. As evidenced by:
[...]
setattr(model_instance, self.__resolved_attr_name(), value)
[...]
def __resolved_attr_name(self):
return '_RESOLVED' + self._attr_name()
The get_value_for_datastore method is passed the model instance, but make_value_from_datastore is not, so how do they find the _RESOLVED property from that method?
Edit 2
From the code I gather that Google is using the __get__() and __set__() methods, both of which do get the model instance as an argument. Are those usable in custom classes? What is the difference between get_value_for_datastore and its counterpart?
A:
A PageProperty instance exists per-model, not per-entity (where an entity is an instance of the model class). So I think you need a dictionary that maps pagename -> Page entity, instead of a single attribute per PageProperty instance. E.g., maybe something like...:
class PageProperty(db.Property):
data_type = Page
def __init__(self, *a, **k):
super(PageProperty, self).__init__(*a, **k)
self._mycache = {}
def get_value_for_datastore(self, model_instance):
page = super(PageProperty, self).get_value_for_datastore(model_instance)
name = page.key().name()
self._mycache[name] = page
return name
def make_value_from_datastore(self, value):
if value not in self._mycache:
self._mycache[value] = Page.get_by_name(value)
return self._mycache[value]
A:
If you only want to change some small part of the behaviour of ReferenceProperty, you may want to simply extend it, overriding its default_value method. You may find the source for ReferenceProperty to be instructive.
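As a rough sketch of that extension route (the fallback page key is invented for the example):
from google.appengine.ext import db

class PageProperty(db.ReferenceProperty):
    def __init__(self, **kwargs):
        super(PageProperty, self).__init__(reference_class=Page, **kwargs)

    def default_value(self):
        # return a fallback Page instead of ReferenceProperty's default of None
        return Page.get_by_key_name('default-page')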
|
Extending appengine's db.Property with caching
|
I'm looking to implement a property class for appengine, very similar to the existing db.ReferenceProperty. I am implementing my own version because I want some other default return values. My question is, how do I make the property remember its returned value, so that the datastore query is only performed the first time the property is fetched? What I had is below, and it does not work. I read that the Property classes do not belong to the instances, but to the model definition, so I guess that the return value is not cached for each instance, but overwritten on the model every time. Where should I store this _resolved variable?
class PageProperty(db.Property):
data_type = Page
def get_value_for_datastore(self, model_instance):
page = super(PageProperty, self).get_value_for_datastore(model_instance)
self._resolved = page
return page.key().name()
def make_value_from_datastore(self, value):
if not hasattr(self, '_resolved'):
self._resolved = Page.get_by_name(value)
return self._resolved
Edit
Alex' answer is certainly usable. But it seems that the built-in db.ReferenceProperty does store the _RESOLVED variable on the model instance. As evidenced by:
[...]
setattr(model_instance, self.__resolved_attr_name(), value)
[...]
def __resolved_attr_name(self):
return '_RESOLVED' + self._attr_name()
The get_value_for_datastore method is passed the model instance, but make_value_from_datastore is not, so how do they find the _RESOLVED property from that method?
Edit 2
From the code I gather that Google is using the __get__() and __set__() methods, both of which do get the model instance as an argument. Are those usable in custom classes? What is the difference between get_value_for_datastore and its counterpart?
|
[
"A PageProperty instance exists per-model, not per-entity (where an entity is an instance of the model class). So I think you need a dictionary that maps pagename -> Page entity, instead of a single attribute per PageProperty instance. E.g., maybe something like...:\nclass PageProperty(db.Property):\n data_type = Page\n\n def __init__(self, *a, **k):\n super(PageProperty, self).__init__(*a, **k)\n self._mycache = {} \n\n def get_value_for_datastore(self, model_instance):\n page = super(PageProperty, self).get_value_for_datastore(model_instance) \n name = page.key().name()\n self._mycache[name] = page\n return name\n\n def make_value_from_datastore(self, value):\n if value not in self._mycache:\n self._mycache[value] = Page.get_by_name(value)\n return self._mycache[value]\n\n",
"If you only want to change some small part of the behaviour of ReferenceProperty, you may want to simply extend it, overriding its default_value method. You may find the source for ReferenceProperty to be instructive.\n"
] |
[
2,
1
] |
[] |
[] |
[
"descriptor",
"google_app_engine",
"python"
] |
stackoverflow_0002438496_descriptor_google_app_engine_python.txt
|
Q:
Scale 2D coordinates and keep their relative euclidean distances intact?
I have a set of points like: pointA(3302.34,9392.32), pointB(34322.32,11102.03), etc.
I need to scale these so each x- and y-coordinate is in the range (0.0 - 1.0).
I tried doing this by first finding the minimum and maximum x and y values in the data set (minimum_x_value, maximum_x_value, minimum_y_value, maximum_y_value). I then did the following:
pointA.x = (pointA.x - minimum_x_value) / (maximum_x_value - minimum_x_value)
pointA.y = (pointA.y - minimum_y_value) / (maximum_y_value - minimum_y_value)
This changes the relative distances(?), and therefore makes the data useless for my purposes. Is there a way to scale these coordinates while keeping their relative distances intact?
A:
You need to scale the x values and the y values by the same amount! I would suggest scaling by the larger of the two ranges (either x or y). In pseudocode, you'd have something like
scale = max(maximum_x_value - minimum_x_value,
maximum_y_value - minimum_y_value)
Then all the distances between points will be scaled by scale, which is what I presume you're asking for, so if point p_1 was twice as far from point p_2 as from p_3 before rescaling, it will be twice as far after rescaling as well. You should be able to prove this to yourself pretty easily using the Pythagorean theorem.
A:
Assuming you want your entire data set to be centered on (0.5, 0.5) with a range of (0,1) in both axes, it's easiest to think of the total transformation needed in three steps:
Center the data on the origin:
P.x -= (maxX + minX) / 2
P.y -= (maxY + minY) / 2
Scale it down by the same amount in both dimensions, such that the larger of the two ranges becomes (-0.5, 0.5):
scale = max(maxX - minX, maxY - minY)
P.x /= scale
P.y /= scale
Translate the points by (0.5, 0.5) to bring everything where you want it:
P.x += 0.5
P.y += 0.5
This approach has the advantage of working perfectly for any given input data, and also filling as much of the unit square as possible while maintaining aspect ratio (and hence relative distances).
A:
Step 1: Re-Locate the origin
Let your new "origin" be (minimum_x_value, minimum_y_value). Shift all your data points by subtracting minimum_x_value from all x-coordinates and by subtracting minimum_y_value from all y-coordinates.
Step 2: Normalize the remaining data
Scale the rest of your data down to fit within the 0.0-1.0 window. Find max_coord as the larger of your maximum x-value or your maximum y-value. Divide all x- and y-coordinates by max_coord.
A:
If you mean that you are not keeping the aspect ratio: just scale to the minimum bounding square instead of the minimum bounding rectangle. You should choose the scale factor along both axes as max(dx, dy).
A:
You have to scale them by the same factor to keep the distances the same.
I'd forget about subtracting the minimum (Note: this part is only true if the points are always positive, which is my usual use case), and just divide by the maximum of the two maxes:
maxval = max(max(A.x), max(A.y)) #or however you find these
A.x = A.x/maxval
A.y = A.y/maxval
|
Scale 2D coordinates and keep their relative euclidean distances intact?
|
I have a set of points like: pointA(3302.34,9392.32), pointB(34322.32,11102.03), etc.
I need to scale these so each x- and y-coordinate is in the range (0.0 - 1.0).
I tried doing this by first finding the minimum and maximum x and y values in the data set (minimum_x_value, maximum_x_value, minimum_y_value, maximum_y_value). I then did the following:
pointA.x = (pointA.x - minimum_x_value) / (maximum_x_value - minimum_x_value)
pointA.y = (pointA.y - minimum_y_value) / (maximum_y_value - minimum_y_value)
This changes the relative distances(?), and therefore makes the data useless for my purposes. Is there a way to scale these coordinates while keeping their relative distances intact?
|
[
"You need to scale the x values and the y values by the same amount! I would suggest scaling by the larger of the two ranges (either x or y). In pseudocode, you'd have something like \nscale = max(maximum_x_value - minimum_x_value,\n maximum_y_value - minimum_y_value)\n\nThen all the distances between points will be scaled by scale, which is what I presume you're asking for, so if point p_1 was twice as far from point p_2 as from p_3 before rescaling, it will be twice as far after rescaling as well. You should be able to prove this to yourself pretty easily using the Pythagorean theorem.\n",
"Assuming you want your entire data set to be centered on (0.5, 0.5) with a range of (0,1) in both axes, it's easiest to think of the total transformation needed in three steps: \n\nCenter the data on the origin:\nP.x -= (maxX + minX) / 2\nP.y -= (maxY + minY) / 2\nScale it down by the same amount in both dimensions, such that the larger of the two ranges becomes (-0.5, 0.5):\nscale = max(maxX - minX, maxY - minY)\nP.x /= scale\nP.y /= scale\nTranslate the points by (0.5, 0.5) to bring everything where you want it:\nP.x += 0.5\nP.y += 0.5\n\nThis approach has the advantage of working perfectly for any given input data, and also filling as much of the unit square as possible while maintaining aspect ratio (and hence relative distances).\n",
"Step 1: Re-Locate the origin\nLet your new \"origin\" be (minimum_x_value, minimum_y_value). Shift all your data points by subtracting minimum_x_value from all x-coordinates and by subtracting minimum_y_value from all y-coordinates.\nStep 2: Normalize the remaining data\nScale the rest of your data down to fit within the 0.0-1.0 window. Find max_coord as the larger of your maximum x-value or your maximum y-value. Divide all x- and y-coordinates by max_coord.\n",
"If you mean that you are not keeping aspect ratio: just scale to the minimum bounding square instead of minimum bounding rectangle. You should choose the scale factor along both axises to max(dx,dy).\n",
"You have to scale them by the same factor to keep the distances the same. \nI'd forget about subtracting the minimum (Note: this part is only true if the points are always positive, which is my usual use case), and just divide by the maximum of the two maxes:\nmaxval = max(max(A.x), max(A.y)) #or however you find these\nA.x = A.x/maxval\nA.y = A.y/maxval\n\n"
] |
[
10,
9,
4,
3,
3
] |
[] |
[] |
[
"coordinates",
"math",
"python",
"scale"
] |
stackoverflow_0002450035_coordinates_math_python_scale.txt
|
Q:
Django: How to detect if translation is activated?
django.utils.translation.get_language() returns default locale if translation is not activated. Is there a way to find out whether the translation is activated (via translation.activate()) or not?
A:
Horribly hacky, but should work in at least 1.1.1:
import django.utils.translation.trans_real as trans
from django.utils.thread_support import currentThread
def isactive():
return currentThread() in trans._active
A:
Depends on the application and architecture...
The hack provided by Ignacio should work, but what if you run it in a thread that has not been activated yet?
I would use Ignacio's solution, plus add a Queue visible to all threads, monkeypatch the trans_real.activate function, and set an attribute in the queue.
|
Django: How to detect if translation is activated?
|
django.utils.translation.get_language() returns default locale if translation is not activated. Is there a way to find out whether the translation is activated (via translation.activate()) or not?
|
[
"Horribly hacky, but should work in at least 1.1.1:\nimport django.utils.translation.trans_real as trans\nfrom django.utils.thread_support import currentThread\n\ndef isactive():\n return currentThread() in trans._active\n\n",
"Depends on application and architecture...\nHack provided by Ignacio should works, but what is you will run in non activated yet thread?\nI would use Ignacio solution + add Queue visible by all threads, monkeypatch trans_real.activate function and set attribute in queue.\n"
] |
[
3,
0
] |
[
"Always inspect source code for such question, it's faster than posting to Web!\nDjango does it's black magic behind the scene, and uses some kind of dispatcher to simulate disabled translations.\nThe best way for you to do is:\nimport setttings\nassert settings.USE_i18N == True\n\n"
] |
[
-2
] |
[
"django",
"internationalization",
"python"
] |
stackoverflow_0001605706_django_internationalization_python.txt
|
Q:
String formatting error
Using the code print('{0} is not'.format('That that is not')) in Python 3.1.1, I get the following error:
AttributeError: 'str' object has no attribute 'format'
when I delete the line Netbeans automatically inserted at the beginning:
from distutils.command.bdist_dumb import format
which itself causes an error of
ImportError: cannot import name format
What am I doing wrong here?
A:
You must be running an older version of Python. This does work in Python 3.1.1+:
$ python3
Python 3.1.1+ (r311:74480, Nov 2 2009, 14:49:22)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> '{0} is not'.format('That that is not')
'That that is not is not'
You will, however, get this error in Python 2.5.4:
$ python2.5
Python 2.5.4 (r254:67916, Jan 20 2010, 21:44:03)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> '{0} is not'.format('That that is not')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'str' object has no attribute 'format'
This feature seems to have been backported to Python 2.6, so you won't get this error there. You must be running Python < 2.6.
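If you are stuck on a pre-2.6 interpreter, the % operator gives the same result:
>>> '%s is not' % 'That that is not'
'That that is not is not'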
|
String formatting error
|
Using the code print('{0} is not'.format('That that is not')) in Python 3.1.1, I get the following error:
AttributeError: 'str' object has no attribute 'format'
when I delete the line Netbeans automatically inserted at the beginning:
from distutils.command.bdist_dumb import format
which itself causes an error of
ImportError: cannot import name format
What am I doing wrong here?
|
[
"You must be running an older version of Python. This does work in Python 3.1.1+:\n$ python3\nPython 3.1.1+ (r311:74480, Nov 2 2009, 14:49:22) \n[GCC 4.4.1] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> '{0} is not'.format('That that is not')\n'That that is not is not'\n\nYou will, however, get this error in Python 2.5.4:\n$ python2.5\nPython 2.5.4 (r254:67916, Jan 20 2010, 21:44:03) \n[GCC 4.4.1] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> '{0} is not'.format('That that is not')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'str' object has no attribute 'format'\n\nThis feature seems to have been backported to Python 2.6, so you won't get this error there. You must be running Python < 2.6.\n"
] |
[
6
] |
[] |
[] |
[
"format",
"python",
"string"
] |
stackoverflow_0002450188_format_python_string.txt
|
Q:
Scheduling a JasperServer Report via SOAP using Python
I was able to figure out how to run reports, download files, list folders, etc. on a JasperServer using Python with SOAPpy and xml.dom minidom.
Here's an example execute report request, which works:
repositoryURL = 'http://user:pass@myjasperserver:8080/jasperserver/services/repository'
repositoryWSDL = repositoryURL + '?wsdl'
server = SOAPProxy(repositoryURL, repositoryWSDL)
print server._ns(repositoryWSDL).runReport('''
<request operationName="runReport" locale="en">
<argument name="RUN_OUTPUT_FORMAT">PDF</argument>
<resourceDescriptor name="" wsType="" uriString="/reports/baz">
<label>null</label>
<parameter name="foo">bar</parameter>
</resourceDescriptor>
</request>
''')
However, I'm having trouble formatting my requests properly for the "ReportScheduler" section of the server. I've consulted the documentation located here (http://jasperforge.org/espdocs/docsbrowse.php?id=74&type=docs&group_id=112&fid=305), and have tried to model my requests after their samples with no luck (see page 27).
Here are two examples that I've tried, which both return the same error:
schedulingURL = 'http://user:pass@myjasperserver:8080/jasperserver/services/ReportScheduler'
schedulingWSDL = schedulingURL + '?wsdl'
server = SOAPProxy(schedulingURL, schedulingWSDL)
# first request
print server._ns(schedulingWSDL).scheduleJob('''
<request operationName="scheduleJob" locale="en">
<job>
<reportUnitURI>/reports/baz</reportUnitURI>
<label>baz</label>
<description>baz</description>
<simpleTrigger>
<startDate>2009-05-15T15:45:00.000Z</startDate>
<occurenceCount>1</occurenceCount>
</simpleTrigger>
<baseOutputFilename>baz</baseOutputFilename>
<outputFormats>
<outputFormats>PDF</outputFormats>
</outputFormats>
<repositoryDestination>
<folderURI>/reports_generated</folderURI>
<sequentialFilenames>true</sequentialFilenames>
<overwriteFiles>false</overwriteFiles>
</repositoryDestination>
<mailNotification>
<toAddresses>my@email.com</toAddresses>
<subject>test</subject>
<messageText>test</messageText>
<resultSendType>SEND_ATTACHMENT</resultSendType>
</mailNotification>
</job>
</request>''')
# second request (trying different format here)
print server._ns(schedulingWSDL).scheduleJob('''
<ns1:scheduleJob soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns1="http://www.jasperforge.org/jasperserver/ws">
<job xsi:type="ns1:Job">
<reportUnitURI xsi:type="xsd:string">/reports/baz</reportUnitURI>
<username xsi:type="xsd:string" xsi:nil="true"/>
<label xsi:type="xsd:string">baz</label>
<description xsi:type="xsd:string">baz</description>
<simpleTrigger xsi:type="ns1:JobSimpleTrigger">
<timezone xsi:type="xsd:string" xsi:nil="true"/>
<startDate xsi:type="xsd:dateTime">2008-10-09T09:25:00.000Z</startDate>
<endDate xsi:type="xsd:dateTime" xsi:nil="true"/>
<occurrenceCount xsi:type="xsd:int">1</occurrenceCount>
<recurrenceInterval xsi:type="xsd:int" xsi:nil="true"/>
<recurrenceIntervalUnit xsi:type="ns1:IntervalUnit" xsi:nil="true"/>
</simpleTrigger>
<calendarTrigger xsi:type="ns1:JobCalendarTrigger" xsi:nil="true"/>
<parameters soapenc:arrayType="ns1:JobParameter[4]" xsi:type="soapenc:Array" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">
</parameters>
<baseOutputFilename xsi:type="xsd:string">test</baseOutputFilename>
<outputFormats soapenc:arrayType="xsd:string[1]" xsi:type="soapenc:Array" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">
<outputFormats xsi:type="xsd:string">PDF</outputFormats>
</outputFormats>
<outputLocale xsi:type="xsd:string" xsi:nil="true"/>
<repositoryDestination xsi:type="ns1:JobRepositoryDestination">
<folderURI xsi:type="xsd:string">/reports_generated</folderURI>
<sequentialFilenames xsi:type="xsd:boolean">false</sequentialFilenames>
<overwriteFiles xsi:type="xsd:boolean">false</overwriteFiles>
</repositoryDestination>
<mailNotification xsi:type="ns1:JobMailNotification" xsi:nil="true"/>
</job>
</ns1:scheduleJob>''')
Each of these requests result in errors:
SOAPpy.Types.faultType: <Fault soapenv:Server.userException: org.xml.sax.SAXException:
Bad types (class java.lang.String -> class com.jaspersoft.jasperserver.ws.scheduling.Job):
<SOAPpy.Types.structType detail at 14743952>: {'hostname': 'myhost'}>
Any help/guidance would be appreciated. Thank you.
A:
I've had a lot of bad experiences with minidom. I recommend you use lxml. I haven't had any experience with soap itself, so I can't speak to the rest of the issue.
A:
Without knowing anything at all about Jasper, I can guarantee you that you'll do better to replace your hardcoded SOAP requests with a simple client based on the excellent suds library. It abstracts away the SOAP and leaves you with squeaky-clean API access.
easy_install suds and the docs should be enough to get you going.
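A rough sketch of what the suds version might look like - the endpoint comes from the question, but the type name ('Job') and its fields are guesses that must be checked against the actual WSDL:
from suds.client import Client

url = 'http://myjasperserver:8080/jasperserver/services/ReportScheduler?wsdl'
client = Client(url, username='user', password='pass')

job = client.factory.create('Job')  # typed object built from the WSDL
job.reportUnitURI = '/reports/baz'
job.label = 'baz'
job.baseOutputFilename = 'baz'
job.outputFormats = ['PDF']

result = client.service.scheduleJob(job)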
|
Scheduling a JasperServer Report via SOAP using Python
|
I was able to figure out how to run reports, download files, list folders, etc. on a JasperServer using Python with SOAPpy and xml.dom minidom.
Here's an example execute report request, which works:
repositoryURL = 'http://user:pass@myjasperserver:8080/jasperserver/services/repository'
repositoryWSDL = repositoryURL + '?wsdl'
server = SOAPProxy(repositoryURL, repositoryWSDL)
print server._ns(repositoryWSDL).runReport('''
<request operationName="runReport" locale="en">
<argument name="RUN_OUTPUT_FORMAT">PDF</argument>
<resourceDescriptor name="" wsType="" uriString="/reports/baz">
<label>null</label>
<parameter name="foo">bar</parameter>
</resourceDescriptor>
</request>
''')
However, I'm having trouble formatting my requests properly for the "ReportScheduler" section of the server. I've consulted the documentation located here (http://jasperforge.org/espdocs/docsbrowse.php?id=74&type=docs&group_id=112&fid=305), and have tried to model my requests after their samples with no luck (see page 27).
Here are two examples that I've tried, which both return the same error:
schedulingURL = 'http://user:pass@myjasperserver:8080/jasperserver/services/ReportScheduler'
schedulingWSDL = schedulingURL + '?wsdl'
server = SOAPProxy(schedulingURL, schedulingWSDL)
# first request
print server._ns(schedulingWSDL).scheduleJob('''
<request operationName="scheduleJob" locale="en">
<job>
<reportUnitURI>/reports/baz</reportUnitURI>
<label>baz</label>
<description>baz</description>
<simpleTrigger>
<startDate>2009-05-15T15:45:00.000Z</startDate>
<occurenceCount>1</occurenceCount>
</simpleTrigger>
<baseOutputFilename>baz</baseOutputFilename>
<outputFormats>
<outputFormats>PDF</outputFormats>
</outputFormats>
<repositoryDestination>
<folderURI>/reports_generated</folderURI>
<sequentialFilenames>true</sequentialFilenames>
<overwriteFiles>false</overwriteFiles>
</repositoryDestination>
<mailNotification>
<toAddresses>my@email.com</toAddresses>
<subject>test</subject>
<messageText>test</messageText>
<resultSendType>SEND_ATTACHMENT</resultSendType>
</mailNotification>
</job>
</request>''')
# second request (trying different format here)
print server._ns(schedulingWSDL).scheduleJob('''
<ns1:scheduleJob soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns1="http://www.jasperforge.org/jasperserver/ws">
<job xsi:type="ns1:Job">
<reportUnitURI xsi:type="xsd:string">/reports/baz</reportUnitURI>
<username xsi:type="xsd:string" xsi:nil="true"/>
<label xsi:type="xsd:string">baz</label>
<description xsi:type="xsd:string">baz</description>
<simpleTrigger xsi:type="ns1:JobSimpleTrigger">
<timezone xsi:type="xsd:string" xsi:nil="true"/>
<startDate xsi:type="xsd:dateTime">2008-10-09T09:25:00.000Z</startDate>
<endDate xsi:type="xsd:dateTime" xsi:nil="true"/>
<occurrenceCount xsi:type="xsd:int">1</occurrenceCount>
<recurrenceInterval xsi:type="xsd:int" xsi:nil="true"/>
<recurrenceIntervalUnit xsi:type="ns1:IntervalUnit" xsi:nil="true"/>
</simpleTrigger>
<calendarTrigger xsi:type="ns1:JobCalendarTrigger" xsi:nil="true"/>
<parameters soapenc:arrayType="ns1:JobParameter[4]" xsi:type="soapenc:Array" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">
</parameters>
<baseOutputFilename xsi:type="xsd:string">test</baseOutputFilename>
<outputFormats soapenc:arrayType="xsd:string[1]" xsi:type="soapenc:Array" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">
<outputFormats xsi:type="xsd:string">PDF</outputFormats>
</outputFormats>
<outputLocale xsi:type="xsd:string" xsi:nil="true"/>
<repositoryDestination xsi:type="ns1:JobRepositoryDestination">
<folderURI xsi:type="xsd:string">/reports_generated</folderURI>
<sequentialFilenames xsi:type="xsd:boolean">false</sequentialFilenames>
<overwriteFiles xsi:type="xsd:boolean">false</overwriteFiles>
</repositoryDestination>
<mailNotification xsi:type="ns1:JobMailNotification" xsi:nil="true"/>
</job>
</ns1:scheduleJob>''')
Each of these requests result in errors:
SOAPpy.Types.faultType: <Fault soapenv:Server.userException: org.xml.sax.SAXException:
Bad types (class java.lang.String -> class com.jaspersoft.jasperserver.ws.scheduling.Job):
<SOAPpy.Types.structType detail at 14743952>: {'hostname': 'myhost'}>
Any help/guidance would be appreciated. Thank you.
|
[
"I've had a lot of bad experiences with minidom. I recommend you use lxml. I haven't had any experience with soap itself, so I can't speak to the rest of the issue. \n",
"Without knowing anything at all about Jasper, I can guarantee you that you'll do better to replace your hardcoded SOAP requests with a simple client based on the excellent suds library. It abstracts away the SOAP and leaves you with squeaky-clean API access. \neasy_install suds and the docs should be enough to get you going.\n"
] |
[
1,
1
] |
[] |
[] |
[
"jasper_reports",
"jasperserver",
"python",
"soap"
] |
stackoverflow_0000870188_jasper_reports_jasperserver_python_soap.txt
|
Q:
Python for loop question
I was wondering how to achieve the following in python:
for( int i = 0; cond...; i++)
if cond...
i++; //to skip a run-through
I tried this with no luck.
for i in range(whatever):
if cond... :
i += 1
A:
Python's for loops are different. i gets reassigned to the next value every time through the loop.
The following will do what you want, because it is taking the literal version of what C++ is doing:
i = 0
while i < some_value:
if cond...:
i+=1
...code...
i+=1
Here's why:
in C++, the following code segments are equivalent:
for(..a..; ..b..; ..c..) {
...code...
}
and
..a..
while(..b..) {
..code..
..c..
}
whereas the python for loop looks something like:
for x in ..a..:
..code..
turns into
my_iter = iter(..a..)
while (my_iter is not empty):
x = my_iter.next()
..code..
A:
There is a continue keyword which skips the current iteration and advances to the next one (and a break keyword which skips all loop iterations and exits the loop):
for i in range(10):
if i % 2 == 0:
# skip even numbers
continue
print i
A:
Remember that you are iterating over the elements in the list, and not iterating over a number.
For example consider the following:
for i in ["cat", "dog"]:
print i
What would happen if you did i+1 there? You can see now why it doesn't skip the next element in the list.
Instead of actually iterating over all values, you could try to adjust what is contained inside the list you are iterating over.
Example:
r = range(10)
for i in filter(lambda x: x % 2 == 0, r):
print i
You can also consider breaking up the for body into 2. The first part will skip to the next element by using continue, and the second part will do the action if you did not skip.
A:
You can explicitly increment the iterator.
whatever = iter(whatever)
for i in whatever:
if cond:
whatever.next()
You will need to catch StopIteration if cond can be True on the last element.
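For instance, a guarded version of the skip:
whatever = iter(whatever)
for i in whatever:
    if cond:
        try:
            whatever.next()  # consume (skip) the following element
        except StopIteration:
            break  # cond was True on the last element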
A:
There is an alternate approach to this, depending on the task you are trying to accomplish. If cond is entirely a function of the input data you are looping over, you might try something like the following:
def check_cond(item):
if item satisfies cond:
return True
return False
for item in filter(check_cond, list):
...
This is the functional programming way to do this, sort of like LINQ in C# 3.0+. I'm not so certain it's all that pythonic (for a while Guido van Rossum wanted to remove filter, map and reduce from Python 3) but it certainly is elegant and the way I would do it.
A:
You can't trivially "skip the next leg" (you can of course skip this leg with a continue). If you really insist you can do it with an auxiliary bool, e.g.
skipping = False
for i in whatever:
if skipping:
skipping = False
continue
skipping = cond
...
or for generality with an auxiliary int:
skipping = 0
for i in whatever:
if skipping:
skipping -= 1
continue
if badcond:
skipping = 5 # skip 5 legs
...
However, it would be better to encapsulate such complex looping logic in an appropriate generator -- hard to give examples unless you can be a bit more concrete about what you want though (that "pseudo-C" with two presumably 100% different uses of the same boolean cond is REALLY hard to follow;-).
A:
for i in filter(lambda x: x != 2, range(5)):
    print i
|
Python for loop question
|
I was wondering how to achieve the following in python:
for( int i = 0; cond...; i++)
if cond...
i++; //to skip a run-through
I tried this with no luck.
for i in range(whatever):
if cond... :
i += 1
|
[
"Python's for loops are different. i gets reassigned to the next value every time through the loop.\nThe following will do what you want, because it is taking the literal version of what C++ is doing:\ni = 0\nwhile i < some_value:\n if cond...:\n i+=1\n ...code...\n i+=1\n\nHere's why:\nin C++, the following code segments are equivalent:\nfor(..a..; ..b..; ..c..) {\n ...code...\n}\n\nand\n..a..\nwhile(..b..) {\n ..code..\n ..c..\n}\n\nwhereas the python for loop looks something like:\nfor x in ..a..:\n ..code..\n\nturns into\nmy_iter = iter(..a..)\nwhile (my_iter is not empty):\n x = my_iter.next()\n ..code..\n\n",
"There is a continue keyword which skips the current iteration and advances to the next one (and a break keyword which skips all loop iterations and exits the loop):\nfor i in range(10):\n if i % 2 == 0:\n # skip even numbers\n continue \n print i\n\n",
"Remember that you are iterating over the elements in the list, and not iterating over a number.\nFor example consider the following:\nfor i in [\"cat\", \"dog\"]:\n print i\n\nWhat would happen if you did i+1 there? You can see now why it doesn't skip the next element in the list.\nInstead of actually iterating over all values, you could try to adjust what is contained inside the list you are iterating over.\nExample:\nr = range(10)\nfor i in filter(lambda x: x % 2 == 0, r):\n print i\n\nYou can also consider breaking up the for body into 2. The first part will skip to the next element by using continue, and the second part will do the action if you did not skip.\n",
"You can explicitly increment the iterator.\nwhatever = iter(whatever)\nfor i in whatever:\n if cond:\n whatever.next()\n\nYou will need to catch StopIteration if cond can be True on the last element.\n",
"There is an alternate approach to this, depending on the task you are trying to accomplish. If cond is entirely a function of the input data you are looping over, you might try something like the following:\ndef check_cond(item):\n if item satisfies cond:\n return True\n return False\n\nfor item in filter(check_cond, list):\n ...\n\nThis is the functional programming way to do this, sort of like LINQ in C# 3.0+. I'm not so certain it's all that pythonic (for a while Guido van Rossum wanted to remove filter, map and reduce from Python 3) but it certainly is elegant and the way I would do it.\n",
"You can't trivially \"skip the next leg\" (you can of course skip this leg with a continue). If you really insist you can do it with an auxiliary bool, e.g.\nskipping = False\nfor i in whatever:\n if skipping:\n skipping = False\n continue\n skipping = cond\n ...\n\nor for generality with an auxiliary int:\nskipping = 0\nfor i in whatever:\n if skipping:\n skipping -= 1\n continue\n if badcond:\n skipping = 5 # skip 5 legs\n ...\n\nHowever, it would be better to encapsulate such complex looping logic in an appropriate generator -- hard to give examples unless you can be a bit more concrete about what you want though (that \"pseudo-C\" with two presumably 100% different uses of the same boolean cond is REALLY hard to follow;-).\n",
"for i in filter(lambda x:x!=2,range(5)):\n\n"
] |
[
44,
13,
4,
4,
2,
1,
0
] |
[] |
[] |
[
"for_loop",
"loops",
"python"
] |
stackoverflow_0002429560_for_loop_loops_python.txt
|
Q:
Desktop Application Development with Javascript, Python / Ruby
Besides using Appcelerator's Titanium Desktop, are there other approaches to integrating Javascript and Ruby/Python into cross-platform desktop applications? Just trying to get a sense of the landscape here. From searching the web, it seems Titanium may be leading the charge in terms of this type of integration. I wasn't able to find references that suggest you can do something similar in Adobe AIR.
I am interested in building desktop applications that exploit Protovis and possibly other Javascript interactive vis packages for the UI. At the end of the day, I can go the web app route if need be, but being able to develop desktop apps is helpful.
Would appreciate your perspective on this...
Chris
A:
There is Pyjamas Desktop, but might be a bit out of date.
A:
You can also script Swing or SWT with JRuby, either directly, or via one of the numerous frameworks.
You might manage to integrate protovis via a webkit or gecko (like redcar does) embedding, or a java html renderer, there are some. Or just use a java viz kit.
|
Desktop Application Development with Javascript, Python / Ruby
|
Besides using Appcelerator's Titanium Desktop, are there other approaches to integrating Javascript and Ruby/Python into cross-platform desktop applications? Just trying to get a sense of the landscape here. From searching the web, it seems Titanium may be leading the charge in terms of this type of integration. I wasn't able to find references that suggest you can do something similar in Adobe AIR.
I am interested in building desktop applications that exploit Protovis and possibly other Javascript interactive vis packages for the UI. At the end of the day, I can go the web app route if need be, but being able to develop desktop apps is helpful.
Would appreciate your perspective on this...
Chris
|
[
"There is Pyjamas Desktop, but might be a bit out of date.\n",
"You can also script Swing or SWT with JRuby, either directly, or via one of the numerous frameworks.\nYou might manage to integrate protovis via a webkit or gecko (like redcar does) embedding, or a java html renderer, there are some. Or just use a java viz kit.\n"
] |
[
2,
0
] |
[] |
[] |
[
"desktop_application",
"javascript",
"python",
"ruby"
] |
stackoverflow_0002436078_desktop_application_javascript_python_ruby.txt
|
Q:
How can I make a wxPython app constantly update and execute code?
Given the following simple program:
import wx
class TestDraw(wx.Panel):
def __init__(self,parent=None,id=-1):
wx.Panel.__init__(self,parent,id,style=wx.TAB_TRAVERSAL)
self.SetBackgroundColour("#FFFFFF")
self.Bind(wx.EVT_PAINT,self.onPaint)
self.SetDoubleBuffered(True)
self.circleX=320
self.circleY=240
def onPaint(self, event):
event.Skip()
dc=wx.PaintDC(self)
dc.BeginDrawing()
dc.DrawCircle(self.circleX,self.circleY,100)
dc.EndDrawing()
class TestFrame(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(640,480))
self.mainPanel=TestDraw(self,-1)
self.Show(True)
app = wx.App(False)
frame = TestFrame(None,"Test App")
app.MainLoop()
How can I change it so that I can execute logic and repaint the panel at a constant rate? I'd like the circle to bounce around the screen, but I just can't figure out the place I would change its x and y variables.
A:
You can use a wx.Timer to periodically call an onTimer(self) method.
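A minimal sketch of wiring that into the panel above - the velocities and the 16 ms interval are arbitrary choices:
# inside TestDraw.__init__, after the existing setup:
self.dx, self.dy = 3, 2
self.timer = wx.Timer(self)
self.Bind(wx.EVT_TIMER, self.onTimer, self.timer)
self.timer.Start(16)  # fire roughly 60 times per second

# a new method on TestDraw:
def onTimer(self, event):
    self.circleX += self.dx
    self.circleY += self.dy
    w, h = self.GetSize()
    if not 100 <= self.circleX <= w - 100:
        self.dx = -self.dx  # bounce off the left/right edges
    if not 100 <= self.circleY <= h - 100:
        self.dy = -self.dy  # bounce off the top/bottom edges
    self.Refresh()  # schedules a repaint, which fires EVT_PAINT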
|
How can I make a wxPython app constantly update and execute code?
|
Given the following simple program:
import wx
class TestDraw(wx.Panel):
def __init__(self,parent=None,id=-1):
wx.Panel.__init__(self,parent,id,style=wx.TAB_TRAVERSAL)
self.SetBackgroundColour("#FFFFFF")
self.Bind(wx.EVT_PAINT,self.onPaint)
self.SetDoubleBuffered(True)
self.circleX=320
self.circleY=240
def onPaint(self, event):
event.Skip()
dc=wx.PaintDC(self)
dc.BeginDrawing()
dc.DrawCircle(self.circleX,self.circleY,100)
dc.EndDrawing()
class TestFrame(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(640,480))
self.mainPanel=TestDraw(self,-1)
self.Show(True)
app = wx.App(False)
frame = TestFrame(None,"Test App")
app.MainLoop()
How can I change it so that I can execute logic and repaint the panel at a constant rate? I'd like the circle to bounce around the screen, but I just can't figure out the place I would change its x and y variables.
|
[
"Your can use a wxTimer to periodically call an onTimer(self) method.\n"
] |
[
3
] |
[] |
[] |
[
"python",
"user_interface",
"wxpython"
] |
stackoverflow_0002450972_python_user_interface_wxpython.txt
|
Q:
How to write services with CPython?
Does CPython have any library that helps to write binding-independent services?
I have found some SOAP libraries for Python, but they lack the flexibility of choosing the binding at runtime.
A:
Packages such as SimpleXMLRPCServer (part of the Python standard library), SimpleJSONRPCServer, and probably at least some of the SOAP server-side libraries you found (the good ones;-), are based on the concept of registering functions and instances with the package to make them available to clients of the service -- basically you write your service's functionality independently, just exposing that functionality as functions and classes (much as you would for any other application's core logic, not just a service), and then, at runtime (presumably, mostly at server startup time), you register those functions, and instances of those classes, so they become accessible as "the service". I'd call that pretty much a "binding-independent" approach;-).
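For example, the XML-RPC flavor of that registration pattern looks like this (Python 2 module name; the add function is a stand-in for real service logic):
from SimpleXMLRPCServer import SimpleXMLRPCServer

def add(a, b):  # plain function, written independently of any binding
    return a + b

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(add, 'add')  # expose it to XML-RPC clients as 'add'
server.serve_forever()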
|
How to write services with CPython?
|
Does CPython have any library that helps to write binding-independent services?
I have found some SOAP libraries for Python, but they lack the flexibility of choosing the binding at runtime.
|
[
"Packages such as SimpleXMLRPCServer (part of the Python standard library), SimpleJSONRPCServer, and probably at least some of the SOAP server-side libraries you found (the good ones;-), are based on the concept of registering functions and instances with the package to make them available to clients of the service -- basically you write your service's functionality independently, just exposing that functionality as functions and classes (much as you would for any other application's core logic, not just a service), and then, at runtime (presumably, mostly at server startup time), you register those functions, and instances of those classes, so they become accessible as \"the service\". I'd call that pretty much a \"binding-independent\" approach;-).\n"
] |
[
2
] |
[] |
[] |
[
"cpython",
"python",
"service",
"soa",
"soap"
] |
stackoverflow_0002450839_cpython_python_service_soa_soap.txt
|
Q:
Storing multiple discarded data values in a single variable using a string accumulator
For an assignment for my intro to python course, we are to write a program that generates 100 sets of x,y coordinates.
X must be a float between -100.0 and 100.0 inclusive, but not 0.
Y is Y = ((1/x) * 3070) but if the absolute value of Y is greater than 100, both numbers must be discarded (BUT STORED) and another set generated.
The results must be displayed in a table, and then after the table, the discarded results must be shown.
The teacher said we should use a "string accumulator" to store the discarded data.
This is what I have so far, and I'm stuck at storing the discarded data.
EDIT: got it! thanks!
# import random.py
import random
# import math.py
import math
# define main
def main():
xDiscarded = 'Discarded X Values'
yDiscarded = 'Discarded Y Values'
# print header
print(" x \t y ")
x = random.uniform(-100.0, 100.0)
while x == 0:
x = random.uniform(-100.0, 100.0)
y = ((1/x) * 3070)
if math.fabs(y) > 100:
xDiscarded += ", " + str(x)
yDiscarded += ", " + str(y)
else:
print(x, '\t', y)
print(xDiscarded)
print(yDiscarded)
As you can see, I run into the problem of when abs(y) > 100, I'm not too sure how to store the discarded data and let it accumulate every time abs(y) > 100. I'm cool with the data being stored as "351.2, 231.1, 152.2" I just don't know how to turn the variable into a string and store it. We haven't learned arrays yet so I can't do that.
Any help would be much appreciated. Thanks!
A:
"string accumulator" is not a Python "terms of art". Maybe the teacher meant "accumulate it all into a single string" (a horrible approach in Python), or maybe (if the course has already covered lists) he mean a list of strings (the proper Python approach).
Other answers already cover the first possibility, but in case the second (good) one is meant, what you need is:
a) change the initialization to
xDiscarded = []
yDiscarded = []
so they're both empty lists;
b) change the "conditional discard" to something like
if math.fabs(y) > 100:
xDiscarded.append(str(x))
yDiscarded.append(str(y))
to accumulate in the strings-lists (you should probably also do some neat formatting here, but that's not strictly speaking necessary);
c) change the output part to
print('Discarded X Values: ' + ', '.join(xDiscarded))
print('Discarded Y Values: ' + ', '.join(yDiscarded))
to do the nice output with proper "titles" and punctuation.
A:
You can convert a number to a string like so:
x = 100.0
xstr = str(x)
You can add on to a string like so:
xstr += 'another string'
It's also ok to have an empty string:
emptystring = ''
Hopefully those ideas will get you in the right direction.
A:
You can turn the variable into a string representation using str(y). To control formatting you can use string interpolation e.g. "%.3f" % y.
You can add strings together and assign the results to a variable. E.g.:
string_a = string_b + string_c
or:
string_a += string_b
It's not necessarily efficient, but it works.
A:
Very simple. You can "turn numbers into strings" using the formatting operator (%), as follows:
import random
def getRandomX():
x = 0
while not x:
x = random.uniform(-100.0, 100.0)
return x
def main():
discarded = ''
for i in range(100):
x = getRandomX()
y = 3070 / x
while abs(y) > 100:
discarded += '(%f, %f)\n' % (x, y)
x = getRandomX()
y = 3070 / x
print('(%f, %f)' % (x, y))
print('\nDiscarded:\n' + discarded)
main()
Note also that you don't need to import math, as abs() is a built-in. Also note that you need to recreate X if Y is invalid.
|
Storing multiple discarded datas in a single variable using a string accumulator
|
For an assignment for my intro to python course, we are to write a program that generates 100 sets of x,y coordinates.
X must be a float between -100.0 and 100.0 inclusive, but not 0.
Y is Y = ((1/x) * 3070) but if the absolute value of Y is greater than 100, both numbers must be discarded (BUT STORED) and another set generated.
The results must be displayed in a table, and then after the table, the discarded results must be shown.
The teacher said we should use a "string accumulator" to store the discarded data.
This is what I have so far, and I'm stuck at storing the discarded data.
EDIT: got it! thanks!
# import random.py
import random
# import math.py
import math
# define main
def main():
xDiscarded = 'Discarded X Values'
yDiscarded = 'Discarded Y Values'
# print header
print(" x \t y ")
x = random.uniform(-100.0, 100.0)
while x == 0:
x = random.uniform(-100.0, 100.0)
y = ((1/x) * 3070)
if math.fabs(y) > 100:
xDiscarded += ", " + str(x)
yDiscarded += ", " + str(y)
else:
print(x, '\t', y)
print(xDiscarded)
print(yDiscarded)
As you can see, I run into the problem of when abs(y) > 100, I'm not too sure how to store the discarded data and let it accumulate every time abs(y) > 100. I'm cool with the data being stored as "351.2, 231.1, 152.2" I just don't know how to turn the variable into a string and store it. We haven't learned arrays yet so I can't do that.
Any help would be much appreciated. Thanks!
|
[
"\"string accumulator\" is not a Python \"terms of art\". Maybe the teacher meant \"accumulate it all into a single string\" (a horrible approach in Python), or maybe (if the course has already covered lists) he mean a list of strings (the proper Python approach).\nOther answers already cover the first possibility, but in case the second (good) one is meant, what you need is:\na) change the initialization to\n xDiscarded = []\n yDiscarded = []\n\nso they're both empty lists;\nb) change the \"conditional discard\" to something like\n if math.fabs(y) > 100:\n xDiscarded.append(str(x))\n yDiscarded.append(str(y))\n\nto accumulate in the strings-lists (you should probably also do some neat formatting here, but that's not strictly speaking necessary);\nc) change the output part to\nprint('Discarded X Values: ' + ', '.join(xDiscarded))\nprint('Discarded Y Values: ' + ', '.join(yDiscarded))\n\nto do the nice output with proper \"titles\" and punctuation.\n",
"You can convert a number to a string like so:\nx = 100.0\nxstr = str(x)\n\nYou can add on to a string like so:\nxstr += 'another string'\n\nIt's also ok to have an empty string:\nemptystring = ''\n\nHopefully those ideas will get you in the right direction.\n",
"You can turn the variable into a string representation using str(y). To control formatting you can use string interpolation e.g. \"%.3f\" % y.\nYou can add strings together and assign the results to a variable. E.g.:\nstring_a = string_b + string_c\n\nor:\nstring_a += string_b\n\nIt's not necessarily efficient, but it works.\n",
"Very simple. You can \"turn numbers into strings\" using the formatting operator (%), as follows:\nimport random\n\ndef getRandomX():\n x = 0\n while not x:\n x = random.uniform(-100.0, 100.0)\n return x\n\ndef main():\n discarded = ''\n\n for i in range(100):\n x = getRandomX()\n y = 3070 / x\n\n while abs(y) > 100:\n discarded += '(%f, %f)\\n' % (x, y)\n x = getRandomX()\n y = 3070 / x\n\n print('(%f, %f)' % (x, y))\n\n print('\\nDiscarded:\\n' + discarded)\n\nmain()\n\nNote also that you don't need to import math, as abs() is a built-in. Also note that you need to recreate X if Y is invalid.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002450898_python.txt
|
Q:
Deleting object in function
Let's say I have created two objects from class foo and now want to combine the two. How, if at all possible, can I accomplish that within a function like this:
def combine(first, second):
first.value += second.value
del second #this doesn't work, though first.value *does* get changed
instead of doing something like
def combine(first, second):
first.value += second.value
in the function and putting del second immediately after the function call?
A:
No. All del does against names is unbind them. This only removes the local reference. The object will be destroyed when there are no references to it anywhere, or all the references are in a reference loop.
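A tiny illustration of that point (the names here are arbitrary):
class Foo(object):
    pass

a = Foo()
b = a        # two names bound to one object
del a        # unbinds only the name 'a'
print(b)     # the object is still alive, reachable through 'b'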
|
Deleting object in function
|
Let's say I have created two objects from class foo and now want to combine the two. How, if at all possible, can I accomplish that within a function like this:
def combine(first, second):
first.value += second.value
del second #this doesn't work, though first.value *does* get changed
instead of doing something like
def combine(first, second):
first.value += second.value
in the function and putting del second immediately after the function call?
|
[
"No. All del does against names is unbind them. This only removes the local reference. The object will be destroyed when there are no references to it anywhere, or all the references are in a reference loop.\n"
] |
[
4
] |
[] |
[] |
[
"function",
"python",
"python_3.x"
] |
stackoverflow_0002451467_function_python_python_3.x.txt
|
Q:
How can I draw to a MemoryDC using the GraphicsContext, and then blit that to a PaintDC?
I'm looking to add double buffering to a drawing function like this.
dc = wx.PaintDC(self)
gc = wx.GraphicsContext.Create(dc)
#draw GraphicsPaths to the gc
I tried to first draw to a MemoryDC and then blit that back to the PaintDC:
dc = wx.MemoryDC()
dc.SelectObject(wx.NullBitmap)
gc = wx.GraphicsContext.Create(dc)
#draw GraphicsPaths to the gc
dc2=wx.PaintDC(self)
dc2.Blit(0,0,640,480,dc,0,0)
However, this gives me nothing but a blank screen. Am I misunderstanding how the MemoryDC is supposed to work?
A:
You need to create a bitmap, not use wx.NullBitmap.
bitmap = wx.EmptyBitmap(w, h)
dc = wx.MemoryDC(bitmap)
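Putting that together with the double-buffering attempt from the question, an untested sketch might look like this (the 640x480 size is taken from the question; the panel class is invented for illustration):
import wx

class Canvas(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent, size=(640, 480))
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        bitmap = wx.EmptyBitmap(640, 480)
        mdc = wx.MemoryDC(bitmap)            # draw off-screen into the bitmap
        gc = wx.GraphicsContext.Create(mdc)
        # ... draw GraphicsPaths to gc here ...
        mdc.SelectObject(wx.NullBitmap)      # detach the bitmap when done
        pdc = wx.PaintDC(self)
        pdc.DrawBitmap(bitmap, 0, 0)         # copy the finished image to screen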
|
How can I draw to a MemoryDC using the GraphicsContext, and then blit that to a PaintDC?
|
I'm looking to add double buffering to a drawing function like this.
dc = wx.PaintDC(self)
gc = wx.GraphicsContext.Create(dc)
#draw GraphicsPaths to the gc
I tried to first draw to a MemoryDC and then blit that back to the PaintDC:
dc = wx.MemoryDC()
dc.SelectObject(wx.NullBitmap)
gc = wx.GraphicsContext.Create(dc)
#draw GraphicsPaths to the gc
dc2=wx.PaintDC(self)
dc2.Blit(0,0,640,480,dc,0,0)
However, this gives me nothing but a blank screen. Am I misunderstanding how the MemoryDC is supposed to work?
|
[
"You need to create a bitmap, not use wx.NullBitmap.\nbitmap = wx.EmptyBitmap(w, h)\ndc = wx.MemoryDC(bitmap)\n\n"
] |
[
1
] |
[] |
[] |
[
"graphicscontext",
"python",
"wxpython"
] |
stackoverflow_0002451610_graphicscontext_python_wxpython.txt
|
Q:
Quicker way than "try" and "except" ? - Python
I'm often having code written as follows
try:
self.title = item.title().content.string
except AttributeError, e:
self.title = None
Is there a quicker way of dealing with this? a one-liner?
A:
What exceptions are you getting from item.title()? The bare except (horrible practice!) doesn't tell us. If it's an AttributeError (where item doesn't have a title method, for example),
self.title = getattr(item, 'title', lambda: None)()
might be the one-liner you seek (but performance won't be enormously different, mind you;-).
Edit: as the OP entirely changed the question (it was originally just using self.title(), it's now using self.title().content.string, and does specifically catch AttributeError rather than using a bare except), the previous version of this answer of course doesn't apply any more. The proper answer now is: attempting a one-liner is an absurd approach, when the chain of attribute references &c keeps growing longer and longer (how many will there be next time, nine? Since they jumped from one to three with the first edit...;-).
And with no idea of which of the many elementary operations expressed by that long, Law of Demeter-scoffing chain of references might raise the AttributeError, any attempt at optimization would be flying rather blind, too.
A:
In one line, although I’d only recommend this in 5% of all use cases.
self.title = item.title().content.string if hasattr(item, 'title') else None
A:
Assuming the AttributeError happens on string:
self.title = getattr(item.title().content, 'string', None)
A:
You should know ahead of time whether or not an object has a given attribute. It is a bad sign when you have an object but do not know what it is.
You retrieve three attributes in your try block. A try block should contain as little code as possible. You could let an error pass silently if a different attribute is missing than you think.
getattr lets you have a default value, but typically should not be used for this purpose.
A:
Your question focuses on the speed of this operation. First, why do you think this operation is slow? Second, there isn't a faster way to access the attributes. Even trying to avoid the catch by checking for the attribute first will likely be slower, simply because of the Python conditionals needed to check if the attribute exists. Also, hasattr attempts to read the attribute, and catches AttributeError, returning False. So checking for the attribute will actually involve a try/except anyway.
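In other words, hasattr is built on the very same mechanism; a rough Python-level equivalent, for illustration only:
def my_hasattr(obj, name):
    # roughly what hasattr(obj, name) does internally
    try:
        getattr(obj, name)
        return True
    except AttributeError:
        return False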
A:
How about a two-liner?
try: self.title = item.title().content.string
except AttributeError, e: self.title = None
Denser, less readable, but you really save two keypresses!
|
Quicker way than "try" and "except" ? - Python
|
I'm often having code written as follows
try:
self.title = item.title().content.string
except AttributeError, e:
self.title = None
Is there a quicker way of dealing with this? a one-liner?
|
[
"What exceptions are you getting from item.title()? The bare except (horrible practice!) doesn't tell us. If it's an AttributeError (where item doesn't have a title method, for example),\nself.title = getattr(item, 'title', lambda: None)()\n\nmight be the one-liner you seek (but performance won't be enormously different, mind you;-).\nEdit: as the OP entirely changed the question (it was originally just using self.title(), it's now using self.title().content.string, and does specifically catch AttributeError rather than using a bare except), the previous version of this answer of course doesn't apply any more. The proper answer now is: attempting a one-liner is an absurd approach, when the chain of attribute references &c keeps growing longer and longer (how many will there be next time, nine? Since they jumped from one to three with the first edit...;-).\nAnd with no idea of which of the many elementary operations expressed by that long, Law of Demeter-scoffing chain of references might raise the AttributeError, any attempt at optimization would be flying rather blind, too.\n",
"In one line, although I’d only recommend this in 5% of all use cases.\nself.title = item.title().content.string if hasattr(item, 'title') else None\n\n",
"Assuming the AttributeError happens on string:\nself.title = getattr(item.title().content, 'string', None)\n\n",
"\nYou should know ahead of time whether or not an object has a given attribute. It is a bad sign when you have an object but do not know what it is.\nYou retrieve three attributes in your try block. A try block should contain as little code as possible. You could let an error pass silently if a different attribute is missing than you think.\ngetattr lets you have a default value, but typically should not be used for this purpose.\n\n",
"Your question focuses on the speed of this operation. First, why do you think this operation is slow? Second, there isn't a faster way to access the attributes. Even trying to avoid the catch by checking for the attribute first will likely be slower, simply because of the Python conditionals needed to check if the attribute exists. Also, hasattr attempts to read the attribute, and catches AttributeError, returning False. So checking for the attribute will actually involve a try/except anyway.\n",
"How about a two-liner?\ntry: self.title = item.title().content.string\nexcept AttributeError, e: self.title = None\n\nDenser, less readable, but you really save two keypresses!\n"
] |
[
6,
2,
2,
0,
0,
0
] |
[] |
[] |
[
"beautifulsoup",
"python"
] |
stackoverflow_0002443489_beautifulsoup_python.txt
|
Q:
How to get a template tag to auto-check a checkbox in Django
I'm using a ModelForm class to generate a bunch of checkboxes for a ManyToManyField but I've run into one problem: while the default behaviour automatically checks the appropriate boxes (when I'm editing an object), I can't figure out how to get that information in my own custom templatetag.
Here's what I've got in my model:
from myproject.interests.models import Interest
class Node(models.Model):
interests = models.ManyToManyField(Interest, blank=True, null=True)
class MyForm(ModelForm):
from django.forms import CheckboxSelectMultiple, ModelMultipleChoiceField
interests = ModelMultipleChoiceField(
widget=CheckboxSelectMultiple(),
queryset=Interest.objects.all(),
required=False
)
class Meta:
model = MyModel
And in my view:
from myproject.myapp.models import MyModel,MyForm
obj = MyModel.objects.get(pk=1)
f = MyForm(instance=obj)
return render_to_response(
"path/to/form.html", {
"form": f,
},
context_instance=RequestContext(request)
)
And in my template:
{{ form.interests|alignboxes:"CheckOption" }}
And here's my templatetag:
@register.filter
def alignboxes(boxes, cls):
"""
Details on how this works can be found here:
http://docs.djangoproject.com/en/1.1/howto/custom-template-tags/
"""
r = ""
i = 0
for box in boxes.field.choices.queryset:
r += "<label for=\"id_%s_%d\" class=\"%s\"><input type=\"checkbox\" name=\"%s\" value=\"%s\" id=\"id_%s_%d\" /> %s</label>\n" % (
boxes.name,
i,
cls,
boxes.name,
box.id,
boxes.name,
i,
box.name
)
i = i + 1
return mark_safe(r)
The thing is, I'm only doing this so I can wrap some simpler markup around these boxes, so if someone knows how to make that happen in an easier way, I'm all ears. I'd be happy with knowing a way to access whether or not a box should be checked though.
A:
In your input tag for the checkbox, you can just add the checked attribute based on some condition. Say your box object has property checked which value is either "checked" or empty string ""
r += "<label for=\"id_%s_%d\" class=\"%s\"><input type=\"checkbox\" name=\"%s\" value=\"%s\" id=\"id_%s_%d\" %s /> %s</label>\n" % (
boxes.name,
i,
cls,
boxes.name,
box.id,
boxes.name,
i,
box.checked,
box.name
)
A:
Turns out the value I was looking for, the elements in the list that were "checked" isn't in the field, but rather part of the form object. I re-worked the template tag to look like this and it does exactly what I need:
@register.filter
def alignboxes(boxes, cls):
r = ""
i = 0
for box in boxes.field.choices.queryset:
checked = "checked=checked" if i in boxes.form.initial[boxes.name] else ""
r += "<label for=\"id_%s_%d\" class=\"%s\"><input type=\"checkbox\" name=\"%s\" value=\"%s\" id=\"id_%s_%d\" %s /> %s</label>\n" % (
boxes.name,
i,
cls,
boxes.name,
box.pk,
boxes.name,
i,
checked,
box.name
)
i = i + 1
return r
For those who might come after, note that the checked value above was found in boxes.form.initial[boxes.name]
|
How to get a template tag to auto-check a checkbox in Django
|
I'm using a ModelForm class to generate a bunch of checkboxes for a ManyToManyField but I've run into one problem: while the default behaviour automatically checks the appropriate boxes (when I'm editing an object), I can't figure out how to get that information in my own custom templatetag.
Here's what I've got in my model:
from myproject.interests.models import Interest
class Node(models.Model):
interests = models.ManyToManyField(Interest, blank=True, null=True)
class MyForm(ModelForm):
from django.forms import CheckboxSelectMultiple, ModelMultipleChoiceField
interests = ModelMultipleChoiceField(
widget=CheckboxSelectMultiple(),
queryset=Interest.objects.all(),
required=False
)
class Meta:
model = MyModel
And in my view:
from myproject.myapp.models import MyModel,MyForm
obj = MyModel.objects.get(pk=1)
f = MyForm(instance=obj)
return render_to_response(
"path/to/form.html", {
"form": f,
},
context_instance=RequestContext(request)
)
And in my template:
{{ form.interests|alignboxes:"CheckOption" }}
And here's my templatetag:
@register.filter
def alignboxes(boxes, cls):
"""
Details on how this works can be found here:
http://docs.djangoproject.com/en/1.1/howto/custom-template-tags/
"""
r = ""
i = 0
for box in boxes.field.choices.queryset:
r += "<label for=\"id_%s_%d\" class=\"%s\"><input type=\"checkbox\" name=\"%s\" value=\"%s\" id=\"id_%s_%d\" /> %s</label>\n" % (
boxes.name,
i,
cls,
boxes.name,
box.id,
boxes.name,
i,
box.name
)
i = i + 1
return mark_safe(r)
The thing is, I'm only doing this so I can wrap some simpler markup around these boxes, so if someone knows how to make that happen in an easier way, I'm all ears. I'd be happy with knowing a way to access whether or not a box should be checked though.
|
[
"In your input tag for the checkbox, you can just add the checked attribute based on some condition. Say your box object has property checked which value is either \"checked\" or empty string \"\"\nr += \"<label for=\\\"id_%s_%d\\\" class=\\\"%s\\\"><input type=\\\"checkbox\\\" name=\\\"%s\\\" value=\\\"%s\\\" id=\\\"id_%s_%d\\\" %s /> %s</label>\\n\" % (\n boxes.name,\n i,\n cls,\n boxes.name,\n box.id,\n boxes.name,\n i,\n box.checked,\n box.name\n)\n\n",
"Turns out the value I was looking for, the elements in the list that were \"checked\" isn't in the field, but rather part of the form object. I re-worked the template tag to look like this and it does exactly what I need:\n@register.filter\ndef alignboxes(boxes, cls):\n\n r = \"\"\n i = 0\n for box in boxes.field.choices.queryset:\n checked = \"checked=checked\" if i in boxes.form.initial[boxes.name] else \"\"\n r += \"<label for=\\\"id_%s_%d\\\" class=\\\"%s\\\"><input type=\\\"checkbox\\\" name=\\\"%s\\\" value=\\\"%s\\\" id=\\\"id_%s_%d\\\" %s /> %s</label>\\n\" % (\n boxes.name,\n i,\n cls,\n boxes.name,\n box.pk,\n boxes.name,\n i,\n checked,\n box.name\n )\n i = i + 1\n\n return r\n\nFor those who might come after, note that the checked value above was found in boxes.form.initial[boxes.name]\n"
] |
[
3,
0
] |
[] |
[] |
[
"django",
"python",
"templatetags"
] |
stackoverflow_0002447261_django_python_templatetags.txt
|
Q:
how to get the index or the element itself of an element found with "if element in list"
Does a direct way to do this exists?
if element in aList:
#get the element from the list
I'm thinking something like this:
aList = [ ([1,2,3],4) , ([5,6,7],8) ]
element = [5,6,7]
if element in aList
#print the 8
A:
L = [([1, 2, 3], 4), ([5, 6, 7], 8)]
element = [5, 6, 7]
for a, b in L:
if a == element:
print b
break
else:
print "not found"
But it sounds like you want to use a dictionary:
L = [([1, 2, 3], 4), ([5, 6, 7], 8)]
element = [5, 6, 7]
D = dict((tuple(a), b) for a, b in L)
# keys must be hashable: list is not, but tuple is
# or you could just build the dict directly:
#D = {(1,2,3): 4, (5,6,7): 8}
v = D.get(tuple(element))
if v is not None:
print v
else:
print "not found"
Note that while there are more compact forms using next below, I imagined the reality of your code (rather than the contrived example) to be doing something at least slightly more complicated, so that using a block for the if and else becomes more readable with multiple statements.
A:
(Note: this answer refers to the question text, not the example given in the code, which doesn't quite match.)
Printing the element itself doesn't make any sense, because you already have it in the test:
if element in lst:
print element
If you want the index, there's an index method:
if element in lst:
print lst.index(element)
And, on the off chance that you're asking this because you want to loop through a list and do things with both the value and the index, be sure to use the enumerate idiom:
for i, val in enumerate(lst):
    print "list index:", i
    print "corresponding value:", val
A:
>>> aList = [ ([1,2,3],4) , ([5,6,7],8) ]
>>> element = [5,6,7]
if you only wish to check if the first element is present
>>> any(element==x[0] for x in aList)
True
to find the corresponding value
>>> next(x[1] for x in aList if element==x[0])
8
A:
>>> aList = [ ([1,2,3],4) , ([5,6,7],8) ]
>>> for i in aList:
... if [5,6,7] in i:
... print i[-1]
...
8
A:
[5, 6, 7] is not an item of the aList you show, so the if will fail, and your question as posed just doesn't pertain. More generally, the loop implied in such an if tosses away the index anyway. A way to make your code snippet work would be, instead of the if, to have something like (Python 2.6 or better -- honk if you need to work on different versions):
where = next((x for x in aList if x[0] == element), None)
if where:
    print(where[1])
More generally, the expressions in the next and in the print must depend on the exact "fine grained" structure of aList -- in your example, x[0] and where[1] work just fine, but in a slightly different example you may need different expressions. There is no "generic" way that totally ignores how your data is actually structured and "magically works anyway"!-)
A:
One possible solution.
aList = [ ([1,2,3],4) , ([5,6,7],8) ]
element = [5,6,7]
>>> print(*[y for x,y in aList if element == x])
8
A:
The code in your question is sort of weird. But, assuming you're learning the basics:
Getting the index of an element:
it's actually simple: list.index(element). Assuming of course, the element only appears once. If it appears more than once, you can use the extra parameters:
list.index(element, start_index): here it will start searching from start_index. There's also:
list.index(element, start_index, end_index): I think it's self-explanatory.
Getting the index in a for loop
If you're looping on a list and you want to loop on both the index and the element, the pythonic way is to enumerate the list:
for index, element in enumerate(some_list):
# here, element is some_list[index]
Here, enumerate is a function that takes a list and yields (index, element) tuples, with the index starting at 0. Say your list is ['a', 'b', 'c']; then enumerate would produce: [ (0, 'a'), (1, 'b'), (2, 'c') ]
When you iterate over that, each item is a tuple, and you can unpack that tuple.
tuple unpacking is basically like this:
>>> t = (1, 'a')
>>> x, y = t
>>> t
(1, 'a')
>>> x
1
>>> y
'a'
>>>
|
how to get the index or the element itself of an element found with "if element in list"
|
Does a direct way to do this exists?
if element in aList:
#get the element from the list
I'm thinking something like this:
aList = [ ([1,2,3],4) , ([5,6,7],8) ]
element = [5,6,7]
if element in aList
#print the 8
|
[
"L = [([1, 2, 3], 4), ([5, 6, 7], 8)]\nelement = [5, 6, 7]\n\nfor a, b in L:\n if a == element:\n print b\n break\nelse:\n print \"not found\"\n\nBut it sounds like you want to use a dictionary:\nL = [([1, 2, 3], 4), ([5, 6, 7], 8)]\nelement = [5, 6, 7]\n\nD = dict((tuple(a), b) for a, b in L)\n# keys must be hashable: list is not, but tuple is\n# or you could just build the dict directly:\n#D = {(1,2,3): 4, (5,6,7): 8}\n\nv = D.get(tuple(element))\nif v is not None:\n print v\nelse:\n print \"not found\"\n\nNote that while there are more compact forms using next below, I imagined the reality of your code (rather than the contrived example) to be doing something at least slightly more complicated, so that using an block for the if and else becomes more readable with multiple statements.\n",
"(Note: this answer refers to the question text, not the example given in the code, which doesn't quite match.)\nPrinting the element itself doesn't make any sense, because you already have it in the test:\nif element in lst:\n print element\n\nIf you want the index, there's an index method:\nif element in lst:\n print lst.index(element)\n\nAnd, on the off chance that you're asking this because you want to loop through a list and do things with both the value and the index, be sure to use the enumerate idiom:\nfor i, val in enumerate(lst):\n print \"list index\": i\n print \"corresponding value\": val\n\n",
">>> aList = [ ([1,2,3],4) , ([5,6,7],8) ]\n>>> element = [5,6,7]\n\nif you only wish to check if the first element is present\n>>> any(element==x[0] for x in aList)\nTrue\n\nto find the corresponding value\n>>> next(x[1] for x in aList if element==x[0])\n8\n\n",
">>> aList = [ ([1,2,3],4) , ([5,6,7],8) ]\n>>> for i in aList:\n... if [5,6,7] in i:\n... print i[-1]\n...\n8\n\n",
"[5, 6, 7] is not an item of the aList you show, so the if will fail, and your question as posed just doesn't pertain. More generally, the loop implied in such an if tosses away the index anyway. A way to make your code snippet work would be, instead of the if, to have something like (Python 2.6 or better -- honk if you need to work on different versions):\nwhere = next((x for x in aList if x[0] == element), None)\nif where:\n print(x[1])\n\nMore generally, the expressions in the next and in the print must depend on the exact \"fine grained\" structure of aList -- in your example, x[0] and x[1] work just fine, but in a slightly different example you may need different expressions. There is no \"generic\" way that totally ignores how your data is actually structured and \"magically works anyway\"!-)\n",
"One possible solution.\naList = [ ([1,2,3],4) , ([5,6,7],8) ]\nelement = [5,6,7]\n\n>>> print(*[y for x,y in aList if element == x])\n8\n\n",
"The code in your question is sort of weird. But, assuming you're learning the basics:\nGetting the index of an element:\nit's actually simple: list.index(element). Assuming of course, the element only appears once. If it appears more than once, you can use the extra parameters:\nlist.index(element, start_index): here it will start searching from start_index. There's also:\nlist.index(element, start_index, end_index): I think it's self explanitory.\nGetting the index in a for loop\nIf you're looping on a list and you want to loop on both the index and the element, the pythonic way is to enumerate the list:\nfor index, element in enumerate(some_list):\n # here, element is some_list[index]\n\nHere, enumerate is a function that takes a list and returns a list of tuples. Say your list is ['a', 'b', 'c'], then enumerate would return: [ (1, 'a'), (2, 'b'), (3, 'c') ]\nWhen you iterate over that, each item is a tuple, and you can unpack that tuple.\ntuple unpacking is basically like this:\n>>> t = (1, 'a')\n>>> x, y = t\n>>> t\n(1, 'a')\n>>> x\n1\n>>> y\n'a'\n>>> \n\n"
] |
[
3,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"list",
"python",
"tuples"
] |
stackoverflow_0002452093_list_python_tuples.txt
|
Q:
GetAuthSubToken returns None
Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:
client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
logging.info(client.GetAuthSubToken())
what gets logged is "None", so that doesn't seem right :-(
if I use this:
temp = client.upgrade_to_session_token(sessionToken)
logging.info(dump(temp))
I get this:
{'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}
so I can see that I am getting a AuthSub Token and I guess I could just parse that and grab the token but that doesn't seem like the way things should work.
If I try to use AuthSubTokenInfo I get this:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "controllers/indexController.py", line 47, in get
logging.info(client.AuthSubTokenInfo())
File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
token = self.token_store.find_token(scopes[0])
TypeError: 'NoneType' object is unsubscriptable
so it looks like my token_store is not getting filled in correctly, is that something I should be doing?
Also I am using gdata 2.0.9
Thanks
Matt
A:
To answer my own question:
When you get the Token just call:
client.token_store.add_token(sessionToken)
and App Engine will store it in a new entity type for you. Then when making calls to the calendar service just don't set the authsubtoken, as it will take care of that for you also.
|
GetAuthSubToken returns None
|
Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:
client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
logging.info(client.GetAuthSubToken())
what gets logged is "None", so that doesn't seem right :-(
if I use this:
temp = client.upgrade_to_session_token(sessionToken)
logging.info(dump(temp))
I get this:
{'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}
so I can see that I am getting a AuthSub Token and I guess I could just parse that and grab the token but that doesn't seem like the way things should work.
If I try to use AuthSubTokenInfo I get this:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "controllers/indexController.py", line 47, in get
logging.info(client.AuthSubTokenInfo())
File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
token = self.token_store.find_token(scopes[0])
TypeError: 'NoneType' object is unsubscriptable
so it looks like my token_store is not getting filled in correctly, is that something I should be doing?
Also I am using gdata 2.0.9
Thanks
Matt
|
[
"To answer my own question:\nWhen you get the Token just call:\nclient.token_store.add_token(sessionToken)\n\nand App Engine will store it in a new entity type for you. Then when making calls to the calendar service just dont set the authsubtoken as it will take care of that for you also.\n"
] |
[
0
] |
[] |
[] |
[
"gdata",
"gdata_api",
"google_app_engine",
"python"
] |
stackoverflow_0002441813_gdata_gdata_api_google_app_engine_python.txt
|
Q:
Python: Create a duplicate of an array
I have a double array
alist[1][1]=-1
alist2=[]
for x in xrange(10):
alist2.append(alist[x])
alist2[1][1]=15
print alist[1][1]
and I get 15. Clearly I'm passing a pointer rather than an actual variable... Is there an easy way to make a separate double array (no shared pointers) without having to do a double for loop?
Thanks,
Dan
A:
I think copy.deepcopy() is for just this case.
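A short sketch of how that applies to the question's nested list:
import copy

alist = [[0, 0], [0, -1]]
alist2 = copy.deepcopy(alist)  # copies the inner lists as well

alist2[1][1] = 15
print(alist[1][1])             # still -1: the original is untouched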
A:
You can use somelist[:], that is a slice like somelist[1:2] from beginning to end, to create a (shallow) copy of a list. Applying this to your for-loop gives:
alist2 = []
for x in xrange(10):
alist2.append(alist[x][:])
This can also be written as a list comprehension:
alist2 = [item[:] for item in alist]
A:
A list of lists is not usually a great solution for making a 2d array. You probably want to use numpy, which provides a very useful, efficient n-dimensional array type. numpy arrays can be copied.
Other solutions that are usually better than a plain list of lists include a dict with tuples as keys (d[1, 1] would be the 1, 1 component) or defining your own 2d array class. Of course, dicts can be copied and you could abstract copying away for your class.
To copy a list of lists, you can use copy.deepcopy, which will go one level deep when copying.
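For the numpy route mentioned above, a small sketch (assuming numpy is installed):
import numpy as np

a = np.zeros((10, 10))  # a real 2-d array rather than a list of lists
b = a.copy()            # an independent copy of the data

b[1, 1] = 15
print(a[1, 1])          # still 0.0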
A:
make a copy of the list when append.
alist2.append(alist[x][:])
A:
If you're already looping over the list anyway then just copying the inner lists as you go is easiest, as per seanmonstar's answer.
If you just want to do a deep copy of the list you could call copy.deepcopy() on it.
A:
Usually you can do something like:
new_list = old_list[:]
So you could perhaps throw that in your singular for loop?
for x in range(10):
alist2.append(alist[x][:])
|
Python: Create a duplicate of an array
|
I have a double array
alist[1][1]=-1
alist2=[]
for x in xrange(10):
alist2.append(alist[x])
alist2[1][1]=15
print alist[1][1]
and I get 15. Clearly I'm passing a pointer rather than an actual variable... Is there an easy way to make a separate double array (no shared pointers) without having to do a double for loop?
Thanks,
Dan
|
[
"I think copy.deepcopy() is for just this case.\n",
"You can use somelist[:], that is a slice like somelist[1:2] from beginning to end, to create a (shallow) copy of a list. Applying this to your for-loop gives:\nalist2 = []\nfor x in xrange(10):\n alist2.append(alist[x][:])\n\nThis can also be written as a list comprehension:\nalist2 = [item[:] for item in alist]\n\n",
"A list of lists is not usually a great solution for making a 2d array. You probably want to use numpy, which provides a very useful, efficient n-dimensional array type. numpy arrays can be copied.\nOther solutions that are usually better than a plain list of lists include a dict with tuples as keys (d[1, 1] would be the 1, 1 component) or defining your own 2d array class. Of course, dicts can be copied and you could abstract copying away for your class.\nTo copy a list of lists, you can use copy.deepcopy, which will go one level deep when copying.\n",
"make a copy of the list when append.\n alist2.append(alist[x][:])\n\n",
"If you're already looping over the list anyway then just copying the inner lists as you go is easiest, as per seanmonstar's answer.\nIf you just want to do a deep copy of the list you could call copy.deepcopy() on it.\n",
"Usually you can do something like:\nnew_list = old_list[:]\n\nSo you could perhaps throw that in your singular for loop?\nfor x in range(10):\n alist2.append(alist[x][:])\n\n"
] |
[
9,
8,
4,
1,
1,
0
] |
[] |
[] |
[
"arrays",
"list",
"pointers",
"python"
] |
stackoverflow_0002452321_arrays_list_pointers_python.txt
|
Q:
Organizing Python objects for retrieval
I have a Club class and a Player Class. The player class has an attribute Fav.clubs which will have unique club values. So the user is supposed to enter various club names. Based on the club names I must retrieve those club objects and establish the relationship that this particular player has this Fav.clubs.
The attribute Fav.clubs in Player class should store the names of Club. Now what I have to do is, take input from user about Fav.clubs (a list). After that traverse each element in the list and access the string name to find the corresponding club object and then store that object instance in Player class.
A:
Store all clubs in a dictionary called all_clubs. The key should be the club-name and the value the club object itself. Then you can do all_clubs[clubname] to retrieve the club object for a given name.
The player might have an attribute club_names which is the list of the unique names you described and a property clubs which might look like this:
class Player(object):
# ...
@property
def clubs(self):
result = []
for name in self.club_names:
result.append(all_clubs.get(name))
return result
Alternatively, it might also be a good idea to use an ORM tool like sqlalchemy and a simple file-based or in-memory sqlite database. Then you have the power of SQL and an extremely good relational mapping. But if you are new to Python, I wouldn't use something like that, because sqlalchemy is quite a complex topic and the mapping uses some python magic in the background, which you might not understand at the beginning. Therefore, I would suggest the first method.
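A small usage sketch of that approach (the Club and Player classes and the club names below are invented for illustration):
class Club(object):
    def __init__(self, name):
        self.name = name

# name -> Club object, filled in wherever clubs are created
all_clubs = dict((name, Club(name)) for name in ['chess', 'tennis', 'chelsea'])

class Player(object):
    def __init__(self, club_names):
        self.club_names = club_names  # the strings the user typed

    @property
    def clubs(self):
        return [all_clubs[n] for n in self.club_names if n in all_clubs]

p = Player(['chess', 'tennis'])
print([c.name for c in p.clubs])  # ['chess', 'tennis']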
A:
Use a dict to store a map between strings and instances of Club:
clubmap = {
'basketball': Club("Basketball club"),
'chess': Club("Chess club"),
}
someplayer.joinclub(clubmap['basketball'])
|
Organizing Python objects for retrieval
|
I have a Club class and a Player Class. The player class has an attribute Fav.clubs which will have unique club values. So the user is supposed to enter various club names. Based on the club names I must retrieve those club objects and establish the relationship that this particular player has this Fav.clubs.
The attribute Fav.clubs in Player class should store the names of Club. Now what I have to do is, take input from user about Fav.clubs (a list). After that traverse each element in the list and access the string name to find the corresponding club object and then store that object instance in Player class.
|
[
"Store all clubs in a dictionary called all_clubs. The key should be the club-name and the value the club object itself. Then you can do all_clubs[clubname] to retrieve the club object for a given name.\nThe player might have an attribute club_names which is the list of the unique names you described and a property clubs which might look like this:\nclass Player(object):\n # ...\n\n @property\n def clubs(self):\n result = []\n for name in self.club_names:\n result.append(all_clubs.get(name))\n return result\n\nAlternatively, it might also be a good idea to use a ORM tool like sqlalchemy and a simple file-based or in-memory sqlite database. Then you have the power of SQL and a extremely good relational mapping. But if you are new to Python, I wouldn't use something like that, because sqlalchemy is quite a complex topic and the mapping uses some python magic in the background, which you might not understand at the beginning. Therefore, I would suggest the first method.\n",
"Use a dict to store a map between strings and instances of Club:\nclubmap = {\n 'basketball': Club(\"Basketball club\"),\n 'chess': Club(\"Chess club\"),\n}\n\nsomeplayer.joinclub(clubmap['basketball'])\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002452688_python.txt
|
Q:
Encrypting XML database in python
I am using XML as my backend for the application.
lxml is used to parse the XML.
How can I encrypt this XML file to make sure that the data is protected?
Thanks in advance.
A:
As XML contains repetitive structure it is better to first compress and then encrypt it.
Download and install PyDes.
from pyDes import *
import bz2
def encrypt(data,password):
k = des(password, CBC, "\0\0\0\0\0\0\0\0", pad=None, padmode=PAD_PKCS5)
d = k.encrypt(data)
return d
def decrypt(data,password):
k = des(password, CBC, "\0\0\0\0\0\0\0\0", pad=None, padmode=PAD_PKCS5)
d = k.decrypt(data)
return d
password = "eight222" # password length should be 8
data = '''
<?xml version="1.0"?>
<library>
<shelf id="fiction">
<book>
<title>Of Mice and Men</title>
<author>John Steinbeck</author>
</book>
<book>
<title>Harry Potter and the Philosopher's Stone</title>
<author>J.K. Rowling</author>
</book>
</shelf>
</library>
'''
print len(data)
compressed_data = bz2.compress(data)
print len(compressed_data)
encrypted_data = encrypt(compressed_data,password)
print "%r"%encrypted_data
uncompressed_encrypted_data = encrypt(data,password)
print len(encrypted_data)
print len(uncompressed_encrypted_data)
print bz2.decompress(decrypt(encrypted_data,password))
There are lots of cryptography libraries available in python
Pure-Python RSA implementation
Python Encryption Examples
PyXMLSec
PyCrypto - The Python Cryptography Toolkit
|
Encrypting XML database in python
|
I am using XML as my backend for the application.
lxml is used to parse the XML.
How can I encrypt this XML file to make sure that the data is protected?
Thanks in advance.
|
[
"As XML contains repetitive structure it is better to first compress and then encrypt it.\nDownload and install PyDes.\nfrom pyDes import *\nimport bz2\n\ndef encrypt(data,password):\n k = des(password, CBC, \"\\0\\0\\0\\0\\0\\0\\0\\0\", pad=None, padmode=PAD_PKCS5)\n d = k.encrypt(data)\n return d\n\ndef decrypt(data,password):\n k = des(password, CBC, \"\\0\\0\\0\\0\\0\\0\\0\\0\", pad=None, padmode=PAD_PKCS5)\n d = k.decrypt(data)\n return d\n\npassword = \"eight222\" # password length should be 8\n\ndata = '''\n<?xml version=\"1.0\"?>\n <library>\n <shelf id=\"fiction\">\n <book>\n <title>Of Mice and Men</title>\n <author>John Steinbeck</author>\n </book>\n <book>\n <title>Harry Potter and the Philosopher's Stone</title>\n <author>J.K. Rowling</author>\n </book>\n </shelf>\n </library> \n'''\n\nprint len(data)\n\ncompressed_data = bz2.compress(data)\nprint len(compressed_data)\n\nencrypted_data = encrypt(compressed_data,password)\n\nprint \"%r\"%encrypted_data\n\nuncompressed_encrypted_data = encrypt(data,password)\n\nprint len(encrypted_data)\nprint len(uncompressed_encrypted_data)\nprint bz2.decompress(decrypt(encrypted_data,password))\n\nThere are lots of cryptography libraries available in python \n\nPure-Python RSA implementation\nPython Encryption Examples\nPyXMLSec\nPyCrypto - The Python Cryptography Toolkit\n\n"
] |
[
4
] |
[] |
[] |
[
"encryption",
"lxml",
"python",
"xml"
] |
stackoverflow_0002452744_encryption_lxml_python_xml.txt
|
Q:
Python evaluation order
Here's the code, I don't quite understand, how does it work. Could anyone tell, is that an expected behavior?
$ipython
In [1]: 1 in [1] == True
Out[1]: False
In [2]: (1 in [1]) == True
Out[2]: True
In [3]: 1 in ([1] == True)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/dmedvinsky/projects/condo/condo/<ipython console> in <module>()
TypeError: argument of type 'bool' is not iterable
In [4]: from sys import version_info
In [5]: version_info
Out[5]: (2, 6, 4, 'final', 0)
A:
This is an example of "chaining" which is a gotcha in Python. It's a (possibly silly) trick of Python that:
a op b op c
is equivalent to:
(a op b) and (b op c)
for all comparison operators, which all share the same precedence and chain together. Unfortunately, in and == are both comparison operators, as are is and the ordering comparisons.
So, here is your unexpected case:
1 in [1] == True # -> (1 in [1]) and ([1] == True) -> True and False -> False
See http://docs.python.org/reference/expressions.html#summary for the precedence table.
|
Python evaluation order
|
Here's the code, I don't quite understand, how does it work. Could anyone tell, is that an expected behavior?
$ipython
In [1]: 1 in [1] == True
Out[1]: False
In [2]: (1 in [1]) == True
Out[2]: True
In [3]: 1 in ([1] == True)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/dmedvinsky/projects/condo/condo/<ipython console> in <module>()
TypeError: argument of type 'bool' is not iterable
In [4]: from sys import version_info
In [5]: version_info
Out[5]: (2, 6, 4, 'final', 0)
|
[
"This is an example of \"chaining\" which is a gotcha in Python. It's a (possibly silly) trick of Python that:\na op b op c\n\nis equivalent to:\n(a op b) and (b op c)\n\nfor all operators of the same precedence. Unfortunately, in and == have the same precedence, as do is and all comparisons. \nSo, here is your unexpected case:\n1 in [1] == True # -> (1 in [1]) and ([1] == True) -> True and False -> False\n\nSee See http://docs.python.org/reference/expressions.html#summary for the precedence table.\n"
] |
[
15
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002452837_python.txt
|
Q:
Beginner python - stuck in a loop
I have two beginner programs, both using the 'while' loop; one works correctly, and the other gets me stuck in a loop. The first program is this:
num=54
bob = True
print('The guess a number Game!')
while bob == True:
guess = int(input('What is your guess? '))
if guess==num:
print('wow! You\'re awesome!')
print('but don\'t worry, you still suck')
bob = False
elif guess>num:
print('try a lower number')
else:
print('close, but too low')
print('game over')
and it gives the predictable output of;
The guess a number Game!
What is your guess? 12
close, but too low
What is your guess? 56
try a lower number
What is your guess? 54
wow! You're awesome!
but don't worry, you still suck
game over
However, I also have this program, which doesn't work;
#define vars
a = int(input('Please insert a number: '))
b = int(input('Please insert a second number: '))
#try a function
def func_tim(a,b):
bob = True
while bob == True:
if a == b:
print('nice and equal')
bob = False
elif b > a:
print('b is picking on a!')
else:
print('a is picking on b!')
#call a function
func_tim(a,b)
Which outputs;
Please insert a number: 12
Please insert a second number: 14
b is picking on a!
b is picking on a!
b is picking on a!
...(repeat in a loop)....
Can someone please let me know why these programs are different? Thank you!
A:
In the second example, the user doesn't get a chance to enter a new guess inside the loop, so a and b remain the same.
A:
In the second program you never give the user a chance to pick two new numbers if they're not equal. Put the lines where you get input from the user inside the loop, like this:
#try a function
def func_tim():
bob = True
while bob == True:
#define vars
a = int(input('Please insert a number: '))
b = int(input('Please insert a second number: '))
if a == b:
print('nice and equal')
bob = False
elif b > a:
print('b is picking on a!')
else:
print('a is picking on b!')
#call a function
func_tim()
A:
in your 2nd program, if b > a, you will go back to the loop because bob is still true. You forgot to ask the user to input again.. try it this way
def func_tim():
while 1:
a = int(input('Please insert a number: '))
b = int(input('Please insert a second number: '))
if a == b:
print('nice and equal')
break
elif b > a:
print('b is picking on a!')
else:
print('a is picking on b!')
func_tim()
A:
Your second program doesn't allow the user to reenter his guess if it's not correct. Put the input into the while loop.
Additional hint: Don't make checks like variable == True, just say while variable:.
|
Beginner python - stuck in a loop
|
I have two beginner programs, both using the 'while' loop; one works correctly, and the other gets me stuck in a loop. The first program is this:
num=54
bob = True
print('The guess a number Game!')
while bob == True:
guess = int(input('What is your guess? '))
if guess==num:
print('wow! You\'re awesome!')
print('but don\'t worry, you still suck')
bob = False
elif guess>num:
print('try a lower number')
else:
print('close, but too low')
print('game over')
and it gives the predictable output of;
The guess a number Game!
What is your guess? 12
close, but too low
What is your guess? 56
try a lower number
What is your guess? 54
wow! You're awesome!
but don't worry, you still suck
game over
However, I also have this program, which doesn't work;
#define vars
a = int(input('Please insert a number: '))
b = int(input('Please insert a second number: '))
#try a function
def func_tim(a,b):
bob = True
while bob == True:
if a == b:
print('nice and equal')
bob = False
elif b > a:
print('b is picking on a!')
else:
print('a is picking on b!')
#call a function
func_tim(a,b)
Which outputs;
Please insert a number: 12
Please insert a second number: 14
b is picking on a!
b is picking on a!
b is picking on a!
...(repeat in a loop)....
Can someone please let me know why these programs are different? Thank you!
|
[
"In the second example, the user doesn't get a chance to enter a new guess inside the loop, so a and b remain the same. \n",
"In the second program you never give the user a chance to pick two new numbers if they're not equal. Put the lines where you get input from the user inside the loop, like this:\n#try a function\ndef func_tim():\n bob = True\n while bob == True:\n #define vars\n a = int(input('Please insert a number: '))\n b = int(input('Please insert a second number: '))\n\n if a == b:\n print('nice and equal')\n bob = False\n elif b > a:\n print('b is picking on a!')\n else:\n print('a is picking on b!')\n#call a function\nfunc_tim()\n\n",
"in your 2nd program, if b > a, you will go back to the loop because bob is still true. You forgot to ask the user to input again.. try it this way\n def func_tim():\n while 1:\n a = int(input('Please insert a number: '))\n b = int(input('Please insert a second number: '))\n if a == b:\n print('nice and equal')\n break\n elif b > a:\n print('b is picking on a!')\n else:\n print('a is picking on b!')\n\n\nfunc_tim()\n\n",
"Your second program doesn't allow the user to reenter his guess if it's not correct. Put the input into the while loop. \nAdditional hint: Don't make checks like variable == True, just say while variable:.\n"
] |
[
4,
3,
2,
2
] |
[] |
[] |
[
"infinite_loop",
"python",
"python_3.x"
] |
stackoverflow_0002452961_infinite_loop_python_python_3.x.txt
|
Q:
ZSI.generate.Wsdl2PythonError: unsupported local simpleType restriction
I have this simple type from an external web service:
<xsd:element name="card_number" maxOccurs="1"
minOccurs="1">
<xsd:simpleType>
<xsd:restriction base="tns:PanType">
<xsd:pattern value="\d{16}"></xsd:pattern>
<xsd:whiteSpace value="collapse"></xsd:whiteSpace>
</xsd:restriction>
</xsd:simpleType>
</xsd:element>
but when I launch wsdl2py -b filename.wsdl I get this error:
ZSI.generate.Wsdl2PythonError: unsupported local simpleType restriction: <schema targetNamespace="https://xxxxx.yyyyy.zz/sss/"><complexType name="PaymentReq"><sequence><element name="card_number"><simpleType>
How can I fix this? I tried changing the simpleType to a complexType, and wsdl2py then generated Python code without a problem, but that way I can't use card_number in my Python object.
Thanks for helping.
A:
I'm not sure if this is still the case, but a quick google suggests that simpleTypes with user-defined restriction bases aren't supported by ZSI.
If this is still the case, then you could modify the restriction for "card_number" to remove the base and update the restriction-facets within the simpleType-restriction to reflect what the base would have provided.
If you post the content of restriction facets for PanType, we can tell you what that would be.
|
ZSI.generate.Wsdl2PythonError: unsupported local simpleType restriction
|
I have this simple type from an external web service:
<xsd:element name="card_number" maxOccurs="1"
minOccurs="1">
<xsd:simpleType>
<xsd:restriction base="tns:PanType">
<xsd:pattern value="\d{16}"></xsd:pattern>
<xsd:whiteSpace value="collapse"></xsd:whiteSpace>
</xsd:restriction>
</xsd:simpleType>
</xsd:element>
but when I launch wsdl2py -b filename.wsdl I get this error:
ZSI.generate.Wsdl2PythonError: unsupported local simpleType restriction: <schema targetNamespace="https://xxxxx.yyyyy.zz/sss/"><complexType name="PaymentReq"><sequence><element name="card_number"><simpleType>
How can I fix this? I tried changing the simpleType to a complexType, and wsdl2py then generated Python code without a problem, but that way I can't use card_number in my Python object.
Thanks for helping.
|
[
"I'm not sure if this is still the case, but a quick google suggests that simpleTypes with user-defined restriction bases aren't supported by ZSI.\nIf this is still the case, then you could modify the restriction for \"card_number\" to remove the base and update the restriction-facets within the simpleType-restriction to reflect what the base would have provided.\nIf you post the content of restriction facets for PanType, we can tell you what that would be.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"soap",
"xml",
"zsi"
] |
stackoverflow_0002453186_python_soap_xml_zsi.txt
|
Q:
How to stop a QDialog from executing while still in the __init__ statement (or immediately after)?
I am wondering how I can go about stopping a dialog from opening if certain conditions are met in its __init__ statement.
The following code tries to call the 'self.close()' function and it does, but (I'm assuming) since the dialog has not yet started its event loop, that it doesn't trigger the close event? So is there another way to close and/or stop the dialog from opening without triggering an event?
Example code:
from PyQt4 import QtCore, QtGui
class dlg_closeInit(QtGui.QDialog):
'''
Close the dialog if a certain condition is met in the __init__ statement
'''
def __init__(self):
QtGui.QDialog.__init__(self)
self.txt_mytext = QtGui.QLineEdit('some text')
self.btn_accept = QtGui.QPushButton('Accept')
self.myLayout = QtGui.QVBoxLayout(self)
self.myLayout.addWidget(self.txt_mytext)
self.myLayout.addWidget(self.btn_accept)
self.setLayout(self.myLayout)
# Connect the button
self.connect(self.btn_accept,QtCore.SIGNAL('clicked()'), self.on_accept)
self.close()
def on_accept(self):
# Get the data...
self.mydata = self.txt_mytext.text()
self.accept()
def get_data(self):
return self.mydata
def closeEvent(self, event):
print 'Closing...'
if __name__ == '__main__':
import sys
app = QtGui.QApplication(sys.argv)
dialog = dlg_closeInit()
if dialog.exec_():
print dialog.get_data()
else:
print "Failed"
A:
The dialog is only run when its exec_ method is called. You can therefore check the conditions in an overridden exec_ and, only if they are met, call QDialog's own exec_.
Another method is to raise an exception inside the constructor (though I am not sure it is good practice; in other languages you generally shouldn't allow such behaviour inside a constructor) and catch it outside. If you catch the exception, simply don't run exec_.
Remember that unless you run exec_, you don't need to close the window: the dialog is constructed, but not shown yet.
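A minimal sketch of the first suggestion (PyQt4; the condition checked here is invented):
from PyQt4 import QtGui

class MyDialog(QtGui.QDialog):
    def __init__(self, data):
        QtGui.QDialog.__init__(self)
        self._ok_to_show = bool(data)      # whatever condition __init__ needs to check
        # ... build widgets as usual ...

    def exec_(self):
        if not self._ok_to_show:
            return QtGui.QDialog.Rejected  # behave as if the user cancelled
        return QtGui.QDialog.exec_(self)   # otherwise run the dialog normally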
|
How to stop a QDialog from executing while still in the __init__ statement (or immediately after)?
|
I am wondering how I can go about stopping a dialog from opening if certain conditions are met in its __init__ statement.
The following code tries to call the 'self.close()' function and it does, but (I'm assuming) since the dialog has not yet started its event loop, that it doesn't trigger the close event? So is there another way to close and/or stop the dialog from opening without triggering an event?
Example code:
from PyQt4 import QtCore, QtGui
class dlg_closeInit(QtGui.QDialog):
'''
Close the dialog if a certain condition is met in the __init__ statement
'''
def __init__(self):
QtGui.QDialog.__init__(self)
self.txt_mytext = QtGui.QLineEdit('some text')
self.btn_accept = QtGui.QPushButton('Accept')
self.myLayout = QtGui.QVBoxLayout(self)
self.myLayout.addWidget(self.txt_mytext)
self.myLayout.addWidget(self.btn_accept)
self.setLayout(self.myLayout)
# Connect the button
self.connect(self.btn_accept,QtCore.SIGNAL('clicked()'), self.on_accept)
self.close()
def on_accept(self):
# Get the data...
self.mydata = self.txt_mytext.text()
self.accept()
def get_data(self):
return self.mydata
def closeEvent(self, event):
print 'Closing...'
if __name__ == '__main__':
import sys
app = QtGui.QApplication(sys.argv)
dialog = dlg_closeInit()
if dialog.exec_():
print dialog.get_data()
else:
print "Failed"
|
[
"The dialog will be run only if exec_ method is called. You should therefore check conditions in the exec_ method and if they are met, run exec_ from QDialog.\nOther method is to raise an exception inside the constructor (though I am not sure, it is a good practice; in other languages you generally shouldn't allow such behaviour inside constructor) and catch it outside. If you catch an exception, simply don't run exec_ method.\nRemember, that unless you run exec_, you don't need to close the window. The dialog is constructed, but not shown yet.\n"
] |
[
1
] |
[] |
[] |
[
"pyqt4",
"python",
"qdialog"
] |
stackoverflow_0002405750_pyqt4_python_qdialog.txt
|
Q:
Doxygen C++ comment string parser in python?
Does anybody know of a python module to parse a doxygen style C++ comment string? I mean a string like this (simple example):
/**
* A constructor.
* A more elaborate description of the constructor.
* @param param1 test1
* @param param2 test2
*/
and I would like to extract the brief, the long description, the parameters, the return value etc. I'm currently doing this using string methods and regular expressions but my solution is not very robust.
Alternatively can anybody recommend an easy to use python parser lib that I can set up quickly?
Thanks in advance
A:
You might be able to set something up using the SimpleParse module, but this does require creating an EBNF grammar which might be more investment than you are interested in.
The Sphinx/Doxygen bridge (Breathe) uses the xml output of Doxygen and acts on that instead. Perhaps a similar approach could work here - run Doxygen to extract xml formatted docs and then leverage some of the code from Breathe to get at the data you require.
A:
You should take a look at how doxygen is implemented to see how it handles parsing. I very much doubt it uses regex.
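For reference, a minimal hand-rolled sketch for the exact comment format shown in the question (the answers above point to more robust routes such as Doxygen's XML output):
import re

comment = """/**
 * A constructor.
 * A more elaborate description of the constructor.
 * @param param1 test1
 * @param param2 test2
 */"""

# drop the /** ... */ frame and the leading " * " decoration
lines = [re.sub(r'^\s*\*\s?', '', line)
         for line in comment.splitlines()
         if '/**' not in line and '*/' not in line]

params = dict(re.findall(r'@param\s+(\w+)\s+(.*)', '\n'.join(lines)))
description = [line for line in lines if not line.startswith('@')]
brief = description[0] if description else ''

print brief    # 'A constructor.'
print params   # {'param1': 'test1', 'param2': 'test2'}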
|
Doxygen C++ comment string parser in python?
|
Does anybody know of a python module to parse a doxygen style C++ comment string? I mean a string like this (simple example):
/**
* A constructor.
* A more elaborate description of the constructor.
* @param param1 test1
* @param param2 test2
*/
and I would like to extract the brief, the long description, the parameters, the return value etc. I'm currently doing this using string methods and regular expressions but my solution is not very robust.
Alternatively can anybody recommend an easy to use python parser lib that I can set up quickly?
Thanks in advance
|
[
"You might be able to set something up using the SimpleParse module, but this does require creating an EBNF grammar which might be more investment than you are interested in.\nThe Sphinx/Doxygen bridge (Breathe) uses the xml output of Doxygen and acts on that instead. Perhaps a similar approach could work here - run Doxygen to extract xml formatted docs and then leverage some of the code from Breathe to get at the data you require.\n",
"You should take a look at how doxygen is implemented to see how it handles parsing. I very much doubt it uses regex.\n"
] |
[
5,
1
] |
[] |
[] |
[
"c++",
"doxygen",
"parsing",
"python"
] |
stackoverflow_0002377985_c++_doxygen_parsing_python.txt
|
Q:
Python list is not the same reference
This is the code:
L=[1,2]
L is L[:]
False
Why is this False?
A:
L[:] (slice notation) means: Make a copy of the entire list, element by element.
So you have two lists that have identical content, but are separate entities. Since is evaluates object identity, it returns False.
L == L[:] returns True.
A:
When in doubt ask for id ;)
>>> li = [1,2,4]
>>> id(li)
18686240
>>> id(li[:])
18644144
>>>
A:
The __getslice__ method of list, which is called when you do L[:], returns a list; so, when you call it with the ':' argument, it doesn't behave differently: it returns a new list with the same elements as the original.
>>> id(L)
>>> id(L[:])
>>> L[:] == L
True
>>> L[:] is L
False
|
Python list is not the same reference
|
This is the code:
L=[1,2]
L is L[:]
False
Why is this False?
|
[
"L[:] (slice notation) means: Make a copy of the entire list, element by element.\nSo you have two lists that have identical content, but are separate entities. Since is evaluates object identity, it returns False.\nL == L[:] returns True.\n",
"When in doubt ask for id ;)\n>>> li = [1,2,4]\n>>> id(li)\n18686240\n>>> id(li[:])\n18644144\n>>> \n\n",
"The getslice method of list, which is called when you to L[], returns a list; so, when you call it with the ':' argument, it doesn't behave differently, it returns a new list with the same elements as the original.\n>>> id(L)\n>>> id(L[:])\n>>> L[:] == L \nTrue\n>>> L[:] is L\nFalse\n\n"
] |
[
14,
6,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002453672_python.txt
|
Q:
etree.findall: 'OR'-lookup?
I want to find all stylesheet definitions in a XHTML file with lxml.etree.findall. This could be as simple as
elems = tree.findall('link[@rel="stylesheet"]') + tree.findall('style')
But the problem with CSS style definitions is that the order matters, e.g.
<link rel="stylesheet" type="text/css" href="/media/css/first.css" />
<style>body:{font-size: 10px;}</style>
<link rel="stylesheet" type="text/css" href="/media/css/second.css" />
if the contents of the style tag is applied after the rules in the two link tags, the result may be completely different from the one where the rules are applied in order of definition.
So, how would I do a lookup that includes both link[@rel="stylesheet"] and style?
A:
Possible using XPATH:
data = """<link rel="stylesheet" type="text/css" href="/media/css/first.css" />
<style>body:{font-size: 10px;}</style>
<link rel="stylesheet" type="text/css" href="/media/css/second.css" />
"""
from lxml import etree
h = etree.HTML(data)
h.xpath('//link[@rel="stylesheet"]|//style')
[<Element link at 97a007c>,
<Element style at 97a002c>,
<Element link at 97a0054>]
|
etree.findall: 'OR'-lookup?
|
I want to find all stylesheet definitions in a XHTML file with lxml.etree.findall. This could be as simple as
elems = tree.findall('link[@rel="stylesheet"]') + tree.findall('style')
But the problem with CSS style definitions is that the order matters, e.g.
<link rel="stylesheet" type="text/css" href="/media/css/first.css" />
<style>body:{font-size: 10px;}</style>
<link rel="stylesheet" type="text/css" href="/media/css/second.css" />
if the contents of the style tag is applied after the rules in the two link tags, the result may be completely different from the one where the rules are applied in order of definition.
So, how would I do a lookup that includes both link[@rel="stylesheet"] and style?
|
[
"Possible using XPATH:\ndata = \"\"\"<link rel=\"stylesheet\" type=\"text/css\" href=\"/media/css/first.css\" />\n<style>body:{font-size: 10px;}</style>\n<link rel=\"stylesheet\" type=\"text/css\" href=\"/media/css/second.css\" />\n\"\"\"\n\nfrom lxml import etree\n\nh = etree.HTML(data)\n\nh.xpath('//link[@rel=\"stylesheet\"]|//style')\n\n[<Element link at 97a007c>,\n <Element style at 97a002c>,\n <Element link at 97a0054>]\n\n"
] |
[
3
] |
[] |
[] |
[
"elementtree",
"lxml",
"python",
"xpath"
] |
stackoverflow_0002453891_elementtree_lxml_python_xpath.txt
|
Q:
Django QuerySet ordering by number of reverse ForeignKey matches
I have the following Django models:
class Foo(models.Model):
title = models.CharField(_(u'Title'), max_length=600)
class Bar(models.Model):
foo = models.ForeignKey(Foo)
eg_id = models.PositiveIntegerField(_(u'Example ID'), default=0)
I wish to return a list of Foo objects which have a reverse relationship with Bar objects that have a eg_id value contained in a list of values. So I have:
id_list = [7, 8, 9, 10]
qs = Foo.objects.filter(bar__eg_id__in=id_list)
How do I order the matching Foo objects according to the number of related Bar objects which have an eg_id value in the id_list?
A:
Use Django's lovely aggregation features.
from django.db.models import Count
qs = Foo.objects.filter(
bar__eg_id__in=id_list
).annotate(
bar_count=Count('bar')
).order_by('bar_count')
A:
You can do that by using aggregation, and more specifically annotation and order_by.
In your example, it would be:
from django.db.models import Count
id_list = [7, 8, 9, 10]
qs = Foo.objects.filter(bar__eg_id__in=id_list)
qs = qs.annotate(count=Count("bar"))
qs = qs.order_by('-count')
|
Django QuerySet ordering by number of reverse ForeignKey matches
|
I have the following Django models:
class Foo(models.Model):
title = models.CharField(_(u'Title'), max_length=600)
class Bar(models.Model):
foo = models.ForeignKey(Foo)
eg_id = models.PositiveIntegerField(_(u'Example ID'), default=0)
I wish to return a list of Foo objects which have a reverse relationship with Bar objects that have a eg_id value contained in a list of values. So I have:
id_list = [7, 8, 9, 10]
qs = Foo.objects.filter(bar__eg_id__in=id_list)
How do I order the matching Foo objects according to the number of related Bar objects which have an eg_id value in the id_list?
|
[
"Use Django's lovely aggregation features.\nfrom django.db.models import Count\nqs = Foo.objects.filter(\n bar__eg_id__in=id_list\n ).annotate(\n bar_count=Count('bar')\n ).order_by('bar_count')\n\n",
"You can do that by using aggregation, and more especifically annotation and order_by.\nIn your example, it would be:\nfrom django.db.models import Count\n\nid_list = [7, 8, 9, 10]\nqs = Foo.objects.filter(bar__eg_id__in=id_list)\nqs = qs.annotate(count=Count(\"bar\"))\nqs = qs.order_by('-count')\n\n"
] |
[
22,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002453821_django_python.txt
|
Q:
What is a Pythonic way to get a list of tuples of all the possible combinations of the elements of two lists?
Suppose I have two differently-sized lists
a = [1, 2, 3]
b = ['a', 'b']
What is a Pythonic way to get a list of tuples c of all the possible combinations of one element from a and one element from b?
>>> print c
[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]
The order of elements in c does not matter.
The solution with two for loops is trivial, but it doesn't seem particularly Pythonic.
A:
Use a list comprehension:
>>> a = [1, 2, 3]
>>> b = ['a', 'b']
>>> c = [(x,y) for x in a for y in b]
>>> print c
[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]
A:
Try itertools.product.
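For example (itertools.product is available from Python 2.6 onwards):
>>> import itertools
>>> a = [1, 2, 3]
>>> b = ['a', 'b']
>>> c = list(itertools.product(a, b))
>>> print c
[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]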
|
What is a Pythonic way to get a list of tuples of all the possible combinations of the elements of two lists?
|
Suppose I have two differently-sized lists
a = [1, 2, 3]
b = ['a', 'b']
What is a Pythonic way to get a list of tuples c of all the possible combinations of one element from a and one element from b?
>>> print c
[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]
The order of elements in c does not matter.
The solution with two for loops is trivial, but it doesn't seem particularly Pythonic.
|
[
"Use a list comprehension:\n>>> a = [1, 2, 3]\n>>> b = ['a', 'b']\n>>> c = [(x,y) for x in a for y in b]\n>>> print c\n[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a'), (3, 'b')]\n\n",
"Try itertools.product.\n"
] |
[
13,
10
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0002454626_list_python.txt
|
Q:
How do you get SQLAlchemy to override MySQL "on update CURRENT_TIMESTAMP"
I've inherited an older database that was setup with a "on update CURRENT_TIMESTAMP" put on a field that should only describe an item's creation. With PHP I have been using "timestamp=timestamp" on UPDATE clauses, but in SQLAlchemy I can't seem to force the system to use the set timestamp.
Do I have no choice and need to update the MySQL table (millions of rows)?
foo = session.query(f).get(int(1))
ts = foo.timestamp
setattr(foo, 'timestamp', ts)
setattr(foo, 'bar', bar)
www_model.www_Session.commit()
I have also tried:
foo = session.query(f).get(int(1))
setattr(foo, 'timestamp', foo.timestamp)
setattr(foo, 'bar', bar)
www_model.www_Session.commit()
A:
SQLAlchemy doesn't try to set the field because it thinks the value hasn't changed.
You can tell SQLAlchemy to reassign the value by specifying the onupdate attribute on the Column:
Column('timestamp', ..., onupdate=literal_column('timestamp'))
This will result in SQLAlchemy automatically adding timestamp=timestamp to all update queries.
If you need to do it one off on an instance, you can assign the column to it:
foo.timestamp = literal_column('timestamp')
# or
foo.timestamp = foo_tbl.c.timestamp
|
How do you get SQLAlchemy to override MySQL "on update CURRENT_TIMESTAMP"
|
I've inherited an older database that was setup with a "on update CURRENT_TIMESTAMP" put on a field that should only describe an item's creation. With PHP I have been using "timestamp=timestamp" on UPDATE clauses, but in SQLAlchemy I can't seem to force the system to use the set timestamp.
Do I have no choice and need to update the MySQL table (millions of rows)?
foo = session.query(f).get(int(1))
ts = foo.timestamp
setattr(foo, 'timestamp', ts)
setattr(foo, 'bar', bar)
www_model.www_Session.commit()
I have also tried:
foo = session.query(f).get(int(1))
setattr(foo, 'timestamp', foo.timestamp)
setattr(foo, 'bar', bar)
www_model.www_Session.commit()
|
[
"SQLAlchemy doesn't try to set the field because it thinks the value hasn't changed.\nYou can tell SQLAlchemy to reassign the value by specifying the onupdate attribute on the Column:\n Column('timestamp', ..., onupdate=literal_column('timestamp'))\n\nThis will result in SQLAlchemy automatically adding timestamp=timestamp to all update queries.\nIf you need to do it one off on an instance, you can assign the column to it:\nfoo.timestamp = literal_column('timestamp')\n# or \nfoo.timestamp = foo_tbl.c.timestamp\n\n"
] |
[
6
] |
[] |
[] |
[
"mysql",
"python",
"sqlalchemy"
] |
stackoverflow_0002450593_mysql_python_sqlalchemy.txt
|
Q:
How do you run python scripts from other script and have their root in my root?
I have a module "B" that I want to run from a script "C", and I want to use the global variables defined in "B" as if they were defined at "C"'s top level. Another problem: if I import sys in "B", then when I run "C" it doesn't see sys
# NameError: global name 'sys' is not defined #
What shall I do?
A:
When you import a module B (like import B), every line in B will be interpreted. I assume this is what you mean when you say you want to run it. To reference members in B's namespace, you can get them like:
B.something_defined_in_B.
If you wish to use sys explicitly in C, you will need to import it within C as well.
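A minimal sketch (B.some_global is a hypothetical name standing in for whatever B actually defines):
# C.py
import sys   # import sys here too; B's own import doesn't leak into C's namespace
import B     # this executes every top-level statement in B

print B.some_global            # read B's global through the module object
some_global = B.some_global    # or bind it into C's own namespace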
A:
is it in your PYTHON_PATH?
if not, in script C's __init__.py
import os, sys
sys.path.append('/PATH/TO/MODULE/B')
then, in module C
from B import *
something_defined_in_B()
|
How do you run python scripts from other script and have their root in my root?
|
I have a module "B" that I want to run from a script "C", and I want to use the global variables defined in "B" as if they were defined at "C"'s top level. Another problem: if I import sys in "B", then when I run "C" it doesn't see sys
# NameError: global name 'sys' is not defined #
What shall I do?
|
[
"When you import a module B (like import B), every line in B will be interpreted. I assume this is what you mean when you say you want to run it. To reference members in B's namespace, you can get them like:\nB.something_defined_in_B.\nIf you wish to use sys explicitly in C, you will need to import it within C as well.\n",
"is it in your PYTHON_PATH?\nif not, in script C's init.py\nimport os, sys\nsys.path.append('/PATH/TO/MODULE/B')\n\nthen, in module C\nfrom B import *\nsomething_defined_in_B()\n\n"
] |
[
5,
1
] |
[] |
[] |
[
"global_variables",
"import",
"module",
"python"
] |
stackoverflow_0002454576_global_variables_import_module_python.txt
|
Q:
Python ctypes callback function to SWIG
I have a SWIG C++ function that expects a function pointer (WNDPROC), and want to give it a Python function that has been wrapped by ctypes.WINFUNCTYPE.
It seems to me that this should be compatible, but SWIG's type checking throws an exception because it doesn't know that the ctypes.WINFUNCTYPE type is actually a WNDPROC.
What can I do to pass my callback to SWIG so that it understands it?
A:
I don't have a windows machine to really check this, but I think you need to create a typemap to tell swig how to convert the PyObject wrapper to a WNDPROC:
// assuming the wrapped object has an attribute "pointer" which contains
// the numerical address of the WNDPROC
%typemap(in) WNDPROC {
PyObject * addrobj = PyObject_GetAttrString($input, "pointer");
void * ptr = PyLong_AsVoidPt(addrobj);
$1 = (WNDPROC)ptr;
}
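On the Python side, one way to expose the callback's numeric address under the "pointer" name that this typemap looks up might be the following sketch (the WNDPROC argument types are only an approximation, and set_wndproc is a hypothetical SWIG-wrapped call):
import ctypes

# rough WNDPROC signature: LRESULT (HWND, UINT, WPARAM, LPARAM)
WNDPROC = ctypes.WINFUNCTYPE(ctypes.c_long, ctypes.c_void_p,
                             ctypes.c_uint, ctypes.c_uint, ctypes.c_long)

def py_wndproc(hwnd, msg, wparam, lparam):
    return 0

callback = WNDPROC(py_wndproc)   # keep a reference so it isn't garbage collected

class CallbackHolder(object):
    """Adapter exposing the address as .pointer for the typemap above."""
    def __init__(self, func):
        self.pointer = ctypes.cast(func, ctypes.c_void_p).value

# my_module.set_wndproc(CallbackHolder(callback))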
|
Python ctypes callback function to SWIG
|
I have a SWIG C++ function that expects a function pointer (WNDPROC), and want to give it a Python function that has been wrapped by ctypes.WINFUNCTYPE.
It seems to me that this should be compatible, but SWIG's type checking throws an exception because it doesn't know that the ctypes.WINFUNCTYPE type is actually a WNDPROC.
What can I do to pass my callback to SWIG so that it understands it?
|
[
"I don't have a windows machine to really check this, but I think you need to create a typemap to tell swig how to convert the PyObject wrapper to a WNDPROC:\n// assuming the wrapped object has an attribute \"pointer\" which contains \n// the numerical address of the WNDPROC\n%typemap(in) WNDPROC {\n PyObject * addrobj = PyObject_GetAttrString($input, \"pointer\");\n void * ptr = PyLong_AsVoidPt(addrobj);\n $1 = (WNDPROC)ptr;\n}\n\n"
] |
[
4
] |
[] |
[] |
[
"c++",
"ctypes",
"python",
"swig"
] |
stackoverflow_0002032470_c++_ctypes_python_swig.txt
|
Q:
how to make a variable change from the text "1m" into "1000000" in python
I have variables with values like 1.7m, 1.8k and 1.2b. How can I convert them to a real number value? For example:
1.7m = 1700000
1.8k = 1800
1.2b = 1200000000
A:
I would define a dictionary:
tens = dict(k=1e3, m=1e6, b=1e9)
then
x='1.7m'
factor, exp = x[0:-1], x[-1].lower()
ans = int(float(factor) * tens[exp])
A:
You might be interested in a units library like quantities or unum.
A:
Using lambda:
>>> tens = {'k': 1e3, 'm': 1e6, 'b': 1e9}
>>> f = lambda x: int(float(x[:-1])*tens[x[-1]])
>>> f('1.7m')
1700000
>>> f('1.8k')
1800
>>> f('1.2b')
1200000000
A:
Here is an example using re:
input = '17k, 14.05m, 1.235b'
multipliers = { 'k': 1e3,
'm': 1e6,
'b': 1e9,
}
pattern = r'([0-9.]+)([bkm])'
for number, suffix in re.findall(pattern, input):
    number = float(number)
    print number * multipliers[suffix]
|
how to make a variable change from the text "1m" into "1000000" in python
|
I have variables with values like 1.7m, 1.8k and 1.2b. How can I convert them to a real number value? For example:
1.7m = 1700000
1.8k = 1800
1.2b = 1200000000
|
[
"I would define a dictionary:\ntens = dict(k=10e3, m=10e6, b=10e9)\n\nthen \nx='1.7m'\nfactor, exp = x[0:-1], x[-1].lower()\nans = int(float(factor) * tens[exp])\n\n",
"You might be interested in a units library like quantities or unum.\n",
"Using lambda:\n>>> tens = {'k': 10e3, 'm': 10e6, 'b': 10e9}\n>>> f = lambda x: int(float(x[:-1])*tens[x[-1]])\n>>> f('1.7m')\n17000000\n>>> f('1.8k')\n18000\n>>> f('1.2b')\n12000000000\n\n",
"Here is an example using re:\ninput = '17k, 14.05m, 1.235b'\n\nmultipliers = { 'k': 1e3,\n 'm': 1e6,\n 'b': 1e9,\n }\n\npattern = r'([0-9.]+)([bkm])'\n\nfor number, suffix in re.findall(pattern, input):\n number = float(number)\n print number * multipliers[suffix]\n\n"
] |
[
11,
1,
1,
0
] |
[] |
[] |
[
"numbers",
"python"
] |
stackoverflow_0002449848_numbers_python.txt
|
Q:
How do we override the choice field display of a reference property in appengine using Django?
The default choice field display of a reference property in appengine returns the choices
as the string representation of the entire object. What is the best method to override this behaviour? I tried to override __str__() in the referenced class. But it does not work.
A:
I got it to work by overriding the __init__ method of the ModelForm to pick up the correct fields, as I had to do filtering of the choices as well.
A:
The correct way would be to override the __unicode__ method of the class, like:
def __unicode__(self):
    return self.name
where name is the value that you want to display.
|
How do we override the choice field display of a reference property in appengine using Django?
|
The default choice field display of a reference property in appengine returns the choices
as the string representation of the entire object. What is the best method to override this behaviour? I tried to override __str__() in the referenced class. But it does not work.
|
[
"I got it to work by overriding the init method of the modelform to pick up the correct fields as I had to do filtering of the choices as well. \n",
"The correct way would be to override the __unicode__ method of the class, like:\ndef __unicode__(self):\n return self.name\n\nwhere name is the value that you want to display.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"google_app_engine",
"python",
"referenceproperty"
] |
stackoverflow_0001635638_django_google_app_engine_python_referenceproperty.txt
|
Q:
Matplotlib installation problems
I need to install matplotlib in a remote linux machine, and I am a normal user there.
I download the source and run
python setup.py build
but I get errors related to numpy, which is not installed, so I decided to install it first. I download and compile it with
python setup.py build
My question now is: how do I tell the matplotlib installation where the numpy files have been installed?
Thanks
A:
Since you are a user and not root on the remote machine, it may be that your environment is not configured correctly.
Check that you can load numpy from the interpreter.
import numpy
If that fails, you may need to add its installed location to sys.path
import sys
sys.path.append("/usr/local/numpy")   # use whatever directory numpy was actually installed into
import numpy
Once you know where it is and can get it to load in the interpreter, you can modify your site.py to add the path automatically.
A:
Hmm, I wonder if you just need the numpy directory in PYTHONPATH before executing the installer.
export PYTHONPATH="/path/to/numpy"
Then run the build command.
|
Matplotlib installation problems
|
I need to install matplotlib in a remote linux machine, and I am a normal user there.
I download the source and run
python setup.py build
but I get errors related to numpy, which is not installed, so I decided to install it first. I download and compile it with
python setup.py build
My question now is: how do I tell the matplotlib installation where the numpy files have been installed?
Thanks
|
[
"Since you are a user and not root on the remote machine, it may be that your environement is not configured correctly.\nCheck that you can load numpy from the interperter.\n\n\nimport numpy\n\n\nIf that fails, you may need to add its installed location to sys.path\nimport sys\nsys.path.append(\"\\user\\local\\numpy\")\nimport numpy\nOnce you know where it is and can get it to load in the interperter, you can modify your site.py to add the path automatically.\n",
"Hmm, I wonder if you just need the numpy directory in PYTHONPATH before executing the installer.\nexport PYTHONPATH=\"/path/to/numpy\"\n\nThen run the build command.\n"
] |
[
1,
0
] |
[] |
[] |
[
"matplotlib",
"numpy",
"python"
] |
stackoverflow_0002453324_matplotlib_numpy_python.txt
|
Q:
How to spell check python docstring with emacs?
I'd like to run a spell checker on the docstrings of my Python code, if possible from within emacs.
I've found the ispell-check-comments setting which can be used to spell check only comments in code, but I was not able to target only the docstrings which are a fairly python-specific thing.
A:
I recommend you to try flyspell-mode. You could use something like:
(add-hook 'python-mode-hook 'flyspell-prog-mode)
in your Emacs configuration.
|
How to spell check python docstring with emacs?
|
I'd like to run a spell checker on the docstrings of my Python code, if possible from within emacs.
I've found the ispell-check-comments setting which can be used to spell check only comments in code, but I was not able to target only the docstrings which are a fairly python-specific thing.
|
[
"I recommend you to try flyspell-mode. You could use something like:\n(add-hook 'python-mode-hook 'flyspell-prog-mode)\nin your Emacs configuration.\n"
] |
[
19
] |
[] |
[] |
[
"docstring",
"emacs",
"python",
"spell_checking"
] |
stackoverflow_0002455062_docstring_emacs_python_spell_checking.txt
|
Q:
GAE-mechanize sourcecode
I saw a project that made mechanize compatible with Google App Engine, but I couldn't find the source code for it.
It would be very nice if someone could give me the source, because I will most likely need this in the app I'm currently creating.
A:
See Mechanize and Google App Engine
I got this by googling mechanize google app engine.
|
GAE-mechanize sourcecode
|
I saw a project that made mechanize compatible with Google App Engine, but I couldn't find the source code for it.
It would be very nice if someone could give me the source, because I will most likely need this in the app I'm currently creating.
|
[
"See Mechanize and Google App Engine\nI got this by googling mechanize google app engine.\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"mechanize",
"python"
] |
stackoverflow_0002456321_google_app_engine_mechanize_python.txt
|
Q:
BasicHTTPServer, SimpleHTTPServer and concurrency
I'm writing a small web server for testing purposes using Python, BaseHTTPServer and SimpleHTTPServer. It looks like it's processing one request at a time. Is there any way to make it a little faster without messing around too deeply?
Basically my code looks like the following, and I'd like to keep it this simple ;)
os.chdir(webroot)
httpd = BaseHTTPServer.HTTPServer(("", port), SimpleHTTPServer.SimpleHTTPRequestHandler)
print("Serving directory %s on port %i" %(webroot, port) )
try:
httpd.serve_forever()
except KeyboardInterrupt:
print("Server stopped.")
A:
You can make your own threading or forking class with a mixin inheritance from SocketServer:
import SocketServer
import BaseHTTPServer
class ThreadingHTTPServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
    pass
This has its limits as it doesn't use a thread pool, is limited by the GIL, etc., but it could help a little (with relatively little effort). Remember that requests will be served simultaneously by multiple threads, so be sure to put proper locking around accesses to global/shared data (unless such data's immutable after startup) done in the course of serving a request.
This SO question covers the same ground (not particularly at length).
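Usage would then mirror the question's snippet, swapping the mixin-based class in for BaseHTTPServer.HTTPServer (a sketch; port is whatever port you already use):
import SimpleHTTPServer

httpd = ThreadingHTTPServer(("", port), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.serve_forever()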
A:
You might also look at CherryPy -- it's pretty simple, too, and has multiple request threads with no additional effort. Although your needs may be modest now, CP has a lot of nice capabilities that may benefit you in the future.
A:
Depending on what your requirements are, another option might be to hook in Paste. Though, based on your example, it may be overkill. Something to keep in the toolbox.
|
BasicHTTPServer, SimpleHTTPServer and concurrency
|
I'm writing a small web server for testing purposes using Python, BaseHTTPServer and SimpleHTTPServer. It looks like it's processing one request at a time. Is there any way to make it a little faster without messing around too deeply?
Basically my code looks like the following, and I'd like to keep it this simple ;)
os.chdir(webroot)
httpd = BaseHTTPServer.HTTPServer(("", port), SimpleHTTPServer.SimpleHTTPRequestHandler)
print("Serving directory %s on port %i" %(webroot, port) )
try:
httpd.serve_forever()
except KeyboardInterrupt:
print("Server stopped.")
|
[
"You can make your own threading or forking class with a mixin inheritance from SocketServer:\nimport SocketServer\nimport BaseHTTPServer\n\nclass ThreadingHTTPServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):\n pass\n\nThis has its limits as it doesn't use a thread pool, is limited by the GIT, etc, but it could help a little (with relatively little effort). Remember that requests will be served simultaneously by multiple threads, so be sure to put proper locking around accesses to global/shared data (unless such data's immutable after startup) done in the course of serving a request.\nThis SO question covers the same ground (not particularly at length).\n",
"You might also look at CherryPy -- it's pretty simple, too, and has multiple request threads with no additional effort. Although your needs may be modest now, CP has a lot of nice capabilities that may benefit you in the future.\n",
"Depending on what your requirements are, another option might be to hook in Paste. Though, based on your example, it may be overkill. Something to keep in the toolbox.\n"
] |
[
9,
1,
0
] |
[] |
[] |
[
"concurrency",
"python",
"simplehttpserver"
] |
stackoverflow_0002455606_concurrency_python_simplehttpserver.txt
|
Q:
can't figure out serving static images in django dev environment
I've read the article (and a few others on the subject), but still can't figure out how to show an image unless a link to a file existing on a web-service is hard-coded into the html template.
I've got in urls.py:
...
(r'^galleries/(landscapes)/(?P<path>.jpg)$',
'django.views.static.serve', {'document_root': settings.MEDIA_URL}),
...
where 'landscapes' is one of the albums I'm trying to show images from. (There are several more of them.)
In views.py it calls the template with code like that:
...
<li><img src=160.jpg alt='' title='' /></li>
...
which resolves the image link in html into:
http://127.0.0.1:8000/galleries/landscapes/160.jpg
In settings.py I have:
MEDIA_ROOT = 'C:/siteURL/galleries/'
MEDIA_URL = 'http://some-good-URL/galleries/'
In file system there is a file C:/siteURL/galleries/landscapes/160.jpg and I do have the same file at http://some-good-URL/galleries/landscapes/160.jpg
No matter what I use in urls.py — MEDIA_ROOT or MEDIA_URL (expecting to have images served either locally or from the web server) — I get the following in the source code in the browser:
<li><img src=160.jpg /></li>
There is no image shown in the browser.
What am I doing wrong?
A:
This is a long post, basically summarizing all the things I learned about Django in order to get static files to work (it took me a while to understand how all the different parts fit together).
To serve static images in your development server (and later, your real server), you're going to have to do a few things (note specifically the third and fourth steps):
Set MEDIA_ROOT
MEDIA_ROOT is a constant which tells Django the physical path of the file (on your filesystem). Using your example, MEDIA_ROOT needs to be set to 'C:/siteURL/galleries/', like you wrote. MEDIA_ROOT is going to be used in one of the following steps, which is why we set it.
Set MEDIA_URL
MEDIA_URL is the "url" at which your images sit. In other words, whenever you want to get an image, the url to look for starts with MEDIA_URL. Usually this is not going to start with "http", since you're serving from your own server (my MEDIA_URL is usually set to '/site_media/', meaning to start from the root domain, then go to site_media etc.)
Use MEDIA_URL
MEDIA_URL doesn't work by magic, you actually have to use it. For example, when you're writing the HTML which gets a file, it needs to look like this:
<li><img src="{{MEDIA_URL}}/160.jpg" /></li>
See how I'm telling the template to use the MEDIA_URL prefix? That eventually translates to 'http://some-good-URL/galleries/160.jpg'.
Note that to be able to actually use the MEDIA_URL in your templates, you're going to have to add the line 'django.core.context_processors.media' to your TEMPLATE_CONTEXT_PROCESSORS setting in your settings.py file, if I'm not mistaken.
Make your dev server serve static files
In a real environment, you will configure files with addresses like "static_media" to be served without going through Django. But in a dev environment, you'll want to serve them from Django as well, so you should add this generic line to the end of your urls.py file:
if settings.DEBUG:
    # Serve static files in debug.
    urlpatterns += patterns('',
        (r'^site_media/(?P<path>.*)$', 'django.views.static.serve',
            {'document_root': settings.MEDIA_ROOT,
             'show_indexes' : True}),
    )
Note how that takes anything with the url "site_media/*" (which is actually my MEDIA_URL) and serves it from my MEDIA_ROOT folder, which is the place where the MEDIA_ROOT setting comes into play.
Final note
What confused me is that a lot of the things here are for convenience. For example, MEDIA_ROOT is only used in your debug url pattern, to tell Django where to load from. And MEDIA_URL is only there to encourage you not to put in absolute URLs in all your HTML files, because then when you decide to move the files to a different server, you'd have to manually change them all (instead of just changing the MEDIA_URL constant).
Of course, none of this is necessary: you can hard-code the debug url pattern with your own folder, make sure that the static files really are being served from the url (by visiting it in your browser), and then hand-code the path into the HTML file without using the MEDIA_URL setting, just to make sure things work.
A:
This looks buggy...:
r'^galleries/(landscapes)/(?P<path>.jpg)$'
this RE will only match image names with a single character before the jpg suffix, not four (as in, e.g., '160.jpg'). Maybe you meant...
r'^galleries/(landscapes)/(?P<path>.*jpg)$'
...?
A:
Take this as a cross between the two previous answers, both of which are good. First, your regex is incorrect as Alex pointed out. I would suggest setting it as:
(r'^local_media/(?P<path>.*)$', 'django.views.static.serve',{'document_root': settings.MEDIA_ROOT}), # static content
Because you're probably going to want to serve css and js files too, not just images. This regex takes care of any and every static file you might want to serve.
Next, you'll want to specify the MEDIA_URL for your img tags. You currently have:
<img src=160.jpg />
Instead, it needs to be something like:
<img src=[YOUR MEDIA_URL]160.jpg />
The trick I use is easy. At the top of my views.py, I have the following code:
from django.conf import settings
resp = {}
resp['MEDIA_URL'] = settings.MEDIA_URL
and then I just pass the resp dictionary to every template I render. Now I can write those same img tags as:
<img src={{MEDIA_URL}}160.jpg />
Best of all, this part of your code can be used in production as well (not the regexes, just the MEDIA_URL bit).
|
can't figure out serving static images in django dev environment
|
I've read the article (and a few others on the subject), but still can't figure out how to show an image unless a link to a file existing on a web-service is hard-coded into the html template.
I've got in urls.py:
...
(r'^galleries/(landscapes)/(?P<path>.jpg)$',
'django.views.static.serve', {'document_root': settings.MEDIA_URL}),
...
where 'landscapes' is one of the albums I'm trying to show images from. (There are several more of them.)
In views.py it calls the template with code like that:
...
<li><img src=160.jpg alt='' title='' /></li>
...
which resolves the image link in html into:
http://127.0.0.1:8000/galleries/landscapes/160.jpg
In settings.py I have:
MEDIA_ROOT = 'C:/siteURL/galleries/'
MEDIA_URL = 'http://some-good-URL/galleries/'
In file system there is a file C:/siteURL/galleries/landscapes/160.jpg and I do have the same file at http://some-good-URL/galleries/landscapes/160.jpg
No matter what I use in urls.py — MEDIA_ROOT or MEDIA_URL (expecting to have images served either locally or from the web server) — I get the following in the source code in the browser:
<li><img src=160.jpg /></li>
There is no image shown in the browser.
What am I doing wrong?
|
[
"This is a long post, basically summarizing all the things I learned about Django in order to get static files to work (it took me a while to understand how all the different parts fit together). \nTo serve static images in your development server (and later, your real server), you're going to have to do a few things (note specifically the third and fourth steps):\nSet MEDIA_ROOT\nMEDIA_ROOT is a constant which tells Django the physical path of the file (on your filesystem). Using your example, MEDIA_ROOT needs to be set to 'C:/siteURL/galleries/', like you wrote. MEDIA_ROOT is going to be used in one of the following steps, which is why we set it.\nSet MEDIA_URL\nMEDIA_URL is the \"url\" at which your images sit. In other words, whenever you want to get an image, the url to look for starts with MEDIA_URL. Usually this is not going to start with \"http\", since you're serving from your own server (my MEDIA_URL is usually set to '/site_media/', meaning to start from the root domain, then go to site_media etc.)\nUse MEDIA_URL\nMEDIA_URL doesn't work by magic, you actually have to use it. For example, when you're writing the HTML which gets a file, it needs to look like this:\n<li><img src=\"{{MEDIA_URL}}/160.jpg\" /></li>\n\nSee how I'm telling the template to use the MEDIA_URL prefix? That eventually translates to 'http://some-good-URL/galleries/160.jpg'.\nNote that to be able to actually use the MEDIA_URL in your templates, you're going to have to add the line 'django.core.context_processors.media' to your TEMPLATE_CONTEXT_PROCESSORS setting in your settings.py file, if I'm not mistaken.\nMake your dev server serve static files\nIn a real environment, you will configure files with addresses like \"static_media\" to be served without going through Django. But in a dev environment, you'll want to server them from Django as well, so you should add this generic line to the end of your urls.py file:\nif settings.DEBUG:\n# Serve static files in debug.\nurlpatterns += patterns('',\n (r'^site_media/(?P<path>.*)$', 'django.views.static.serve',\n {'document_root': settings.MEDIA_ROOT,\n 'show_indexes' : True}),\n)\n\nNote how that takes anything with the url \"site_media/*\" (which is actually my MEDIA_URL) and serves it from my MEDIA_ROOT folder, which is the place where the MEDIA_ROOT setting comes into play.\nFinal note\nWhat confused me is that a lot of the things here are for convenience. For example, MEDIA_ROOT is only used in your debug url pattern, to tell Django where to load from. And MEDIA_URL is only there to encourage you not to put in absolute URLs in all your HTML files, because then when you decide to move the files to a different server, you'd have to manually change them all (instead of just changing the MEDIA_URL constant).\nOf course, none of this is necessary: you can hard-code the debug url patter with your own folder, make sure that the static files really are being server from the url (by visiting it in your browser), and then hand-code that without using the MEDIA_URL setting into the HTML file, just to make sure things work.\n",
"This looks buggy...:\nr'^galleries/(landscapes)/(?P<path>.jpg)$'\n\nthis RE will only match image names with a single character begore the jpg suffix, not four (as in, e.g., '160.jpg'). Maybe you meant...\nr'^galleries/(landscapes)/(?P<path>.*jpg)$'\n\n...?\n",
"Take this as a cross between the two previous answers, both of which are good. First, your regex is incorrect as Alex pointed out. I would suggest setting it as:\n (r'^local_media/(?P<path>.*)$', 'django.views.static.serve',{'document_root': settings.MEDIA_ROOT}), # static content\n\nBecause you're probably going to want to server css and js files too, not just images. This regex takes care of any and every static file you might want to serve.\nNext, you'll want to specify the MEDIA_URL for your img tags. You currently have:\n<img src=160.jpg />\n\nInstead, it needs to be something like:\n<img src=[YOUR MEDIA_URL]160.jpg />\n\nThe trick I use is easy. At the top of my views.py, I have the following code:\nfrom django.conf import settings\nresp = {}\nresp['MEDIA_URL'] = settings.MEDIA_URL\n\nand then I just pass the resp dictionary to every template I render. Now I can write those same img tags as:\n<img src={{MEDIA_URL}}160.jpg />\n\nBest of all, this part of your code can be used in production as well (not the regexes, just the MEDIA_URL bit).\n"
] |
[
11,
5,
1
] |
[] |
[] |
[
"django",
"image",
"python"
] |
stackoverflow_0002451352_django_image_python.txt
|
Q:
Using Python, what's the best way to create a set of files on disk for testing?
I'm looking for a way to create a tree of test files to unit test a packaging tool. Basically, I want to create some common file system structures -- directories, nested directories, symlinks within the selected tree, symlinks outside the tree, &c.
Ideally I want to do this with as little boilerplate as possible. Of course, I could hand-write the set of files I want to see, but I'm thinking that somebody has to have automated this for a test suite somewhere. Any suggestions?
A:
Automated in what way?
You could write a simple format to define a basic file structure using nested dictionaries:
## if you saved this as tree.py
## you could use it by doing:
# from tree import *
## then following the examples at the bottom of this file
import os, shutil, time
class Node:
def __init__(self, name):
self.name = name
self.parent = None
def setParent(self, parent):
self.parent = parent
def resolve(self):
if self.parent: assert self.parent.exists()
self.create()
def exists(self):
if os.path.exists(self.getPath()):
return True
else:
return False
def getPath(self): # you can nest things in symlinks
if self.parent:
return os.path.join(self.parent.getPath(), self.name)
else:
return self.name
def create(self):
raise NotImplementedError, 'you must subclass node with your file type'
def __repr__(self):
return '<Node: %s>' % self.name
class Symlink(Node):
def __init__(self, target, name):
self.name = name
self.target = target
self.parent = None
def create(self, basePath=None):
os.symlink(self.target, self.getPath())
assert 'symlink' in dir(os), "tried to create a symlink, but operating system doesn't support it"
class Folder(Node):
def create(self):
## swap os.mkdir() for os.makedirs() if you want parent
## directories to be created if they don't already exist
# os.makedirs(self.getPath()
os.mkdir(self.getPath())
class File(Node):
def __init__(self, name, contents=''):
self.name = name
self.contents = contents
self.parent = None
def create(self):
f = open(self.getPath(), 'wb')
f.write(self.contents)
f.close()
def createAll(tree, parent=None):
for node in tree:
next = None
if type(tree) == dict:
if tree[node]:
next = tree[node]
if type(node) in (str, unicode):
# coerce string to folder
node = Folder(node)
if parent:
node.setParent(parent)
node.resolve()
if next: createAll(next, node)
if __name__ == '__main__':
for name in ('src', 'src2', 'src3'):
if os.path.exists(name):
shutil.rmtree(name)
time.sleep(0.1) # give it time to delete, took a second in one of my tests and denied access to creation
empty = None # probably better than using None syntactically to indicate closed nodes of the tree
test = {
Folder('src'): {
# if you *know* your folder won't contain any more levels, you can use a list instead of a dict
# which means you don't need to specify None as the value for the folder key
Folder('test'): [
Symlink('..', 'recursive'),
Symlink('..', 'still recursive'),
Symlink('..', 'another recursion'),
],
Folder('whee'): {
Folder('nested'): {
Folder('nested'): {
Folder('done'): empty,
Symlink('recursive', '..'): empty,
}
}
}
}
}
# the same structure expressed in a cleaner way, made possible by coercing strings to folder nodes:
test2 = {
'src2': {
File('blank'): empty,
File('whee.txt', 'this file is named whee.txt'): empty,
# see above comment about using list as a container
'test': [
Symlink('..', 'recursive'),
Symlink('..', 'still recursive'),
Symlink('..', 'another recursion'),
],
'whee': {
'nested': {
'nested': {
'done': empty,
Symlink('..', 'recursive'): empty,
}
}
}
}
}
test3 = {
'src2': {
File('blank'): empty,
File('whee.txt', 'this file is named whee.txt'): empty,
# see above comment about using list as a container
'test': [
File('file1.txt', 'poor substitute for a symlink'),
File('file2.txt', 'I wish I could be a symlink'),
File('file3.txt', "I'm hungry"),
],
'nest': {
'nested': {
'nested': {
'done': empty,
File('rawr.txt', 'I like pie.'): empty,
}
}
}
}
}
if 'symlink' in dir(os): # these tests are no good if the OS doesn't support symlinks
createAll(test)
createAll(test2)
createAll(test3)
You could also zip the required files and have the script unzip them when it runs.
A:
I do this type of thing for testing Unix user creation and home directory copies. The Zip suggestion is a good one.
I personally keep two directory structures -- one is a source and one becomes the test structure. I just sync the source to the destination via shutil.copytree as part of the test setup.
That makes it easy to change the test structure on the fly (and not have to unzip).
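A rough sketch of that setup (the paths are just placeholders):
import os, shutil, unittest

SOURCE = 'fixtures/source_tree'    # hand-maintained tree, kept under version control
WORKDIR = 'fixtures/scratch_tree'  # rebuilt before every test

class PackagingToolTest(unittest.TestCase):
    def setUp(self):
        if os.path.exists(WORKDIR):
            shutil.rmtree(WORKDIR)
        shutil.copytree(SOURCE, WORKDIR, symlinks=True)   # symlinks=True keeps links as links

    def tearDown(self):
        shutil.rmtree(WORKDIR)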
|
Using Python, what's the best way to create a set of files on disk for testing?
|
I'm looking for a way to create a tree of test files to unit test a packaging tool. Basically, I want to create some common file system structures -- directories, nested directories, symlinks within the selected tree, symlinks outside the tree, &c.
Ideally I want to do this with as little boilerplate as possible. Of course, I could hand-write the set of files I want to see, but I'm thinking that somebody has to have automated this for a test suite somewhere. Any suggestions?
|
[
"Automated in what way?\nYou could write a simple format to define a basic file structure using nested dictionaries:\n## if you saved this as tree.py\n## you could use it by doing:\n# from tree import *\n## then following the examples at the bottom of this file\n\nimport os, shutil, time\n\nclass Node:\n def __init__(self, name):\n self.name = name\n self.parent = None\n\n def setParent(self, parent):\n self.parent = parent\n\n def resolve(self):\n if self.parent: assert self.parent.exists()\n self.create()\n\n def exists(self):\n if os.path.exists(self.getPath()):\n return True\n else:\n return False\n\n def getPath(self): # you can nest things in symlinks\n if self.parent:\n return os.path.join(self.parent.getPath(), self.name)\n else:\n return self.name\n\n def create(self):\n raise NotImplemented, 'you must subclass node with your file type'\n\n def __repr__(self):\n return '<Node: %s>' % self.name\n\nclass Symlink(Node):\n def __init__(self, target, name):\n self.name = name\n self.target = target\n self.parent = None\n\n def create(self, basePath=None):\n os.symlink(self.target, self.getFilePath())\n\n assert 'symlink' in dir(os), \"tried to create a symlink, but operating system doesn't support it\"\n\nclass Folder(Node):\n def create(self):\n ## swap os.mkdir() for os.makedirs() if you want parent\n ## directories to be created if they don't already exist\n\n # os.makedirs(self.getPath()\n os.mkdir(self.getPath())\n\nclass File(Node):\n def __init__(self, name, contents=''):\n self.name = name\n self.contents = contents\n self.parent = None\n\n def create(self):\n f = open(self.getPath(), 'wb')\n f.write(self.contents)\n f.close()\n\ndef createAll(tree, parent=None):\n for node in tree:\n next = None\n if type(tree) == dict:\n if tree[node]:\n next = tree[node]\n\n if type(node) in (str, unicode):\n # coerce string to folder\n node = Folder(node)\n\n if parent:\n node.setParent(parent)\n\n node.resolve()\n if next: createAll(next, node)\n\nif __name__ == '__main__':\n\n for name in ('src', 'src2', 'src3'):\n if os.path.exists(name):\n shutil.rmtree(name)\n\n time.sleep(0.1) # give it time to delete, took a second in one of my tests and denied access to creation\n\n empty = None # probably better than using None syntactically to indicate closed nodes of the tree\n\n test = {\n Folder('src'): {\n # if you *know* your folder won't contain any more levels, you can use a list instead of a dict\n # which means you don't need to specify None as the value for the folder key\n Folder('test'): [\n Symlink('..', 'recursive'),\n Symlink('..', 'still recursive'),\n Symlink('..', 'another recursion'),\n ],\n Folder('whee'): {\n Folder('nested'): {\n Folder('nested'): {\n Folder('done'): empty,\n Symlink('recursive', '..'): empty,\n }\n }\n }\n }\n }\n\n # the same structure expressed in a cleaner way, made possible by coercing strings to folder nodes:\n test2 = {\n 'src2': {\n File('blank'): empty,\n File('whee.txt', 'this file is named whee.txt'): empty,\n # see above comment about using list as a container\n 'test': [\n Symlink('..', 'recursive'),\n Symlink('..', 'still recursive'),\n Symlink('..', 'another recursion'),\n ],\n 'whee': {\n 'nested': {\n 'nested': {\n 'done': empty,\n Symlink('..', 'recursive'): empty,\n }\n }\n }\n }\n }\n\n test3 = {\n 'src2': {\n File('blank'): empty,\n File('whee.txt', 'this file is named whee.txt'): empty,\n # see above comment about using list as a container\n 'test': [\n File('file1.txt', 'poor substitute for a symlink'),\n File('file2.txt', 'I wish I 
could be a symlink'),\n File('file3.txt', \"I'm hungry\"),\n ],\n 'nest': {\n 'nested': {\n 'nested': {\n 'done': empty,\n File('rawr.txt', 'I like pie.'): empty,\n }\n }\n }\n }\n }\n\n if 'symlink' in dir(os): # these tests are no good if the OS doesn't support symlinks\n createAll(test)\n createAll(test2)\n\n createAll(test3)\n\nYou could also zip the required files and have the script unzip them when it runs.\n",
"I do this type of thing for testing Unix user creation and home directory copies. The Zip suggestion is a good one. \nI personally keep two directory structures -- one is a source and one becomes the test structure. I just sync the source to the destination via shutil.copytree as part of the test setup. \nThat makes it easy to change the test structure on the fly (and not have to unzip).\n"
] |
[
1,
1
] |
[] |
[] |
[
"automation",
"python",
"unit_testing"
] |
stackoverflow_0002456226_automation_python_unit_testing.txt
|
Q:
how to build good python web application
I have never worked with web programming, and
I've been asked lately to write web-based software to manage assets and tasks, to be used by more than 900 people.
What are the recommended modules, frameworks, and libraries for this task?
It would be highly appreciated if you guys could recommend some books and articles that might help me. Thanks in advance.
A:
Check out Django. I would say it is the most comprehensive and easy to use python web framework.
They have a book and tutorial as well.
You might also like to visit Python wiki about web frameworks for more suggestions. But still, I highly recommend Django.
A:
I've really enjoyed working with CherryPy in my project. Django had a little more of a CMS feel than I needed. As a Python novice, CherryPy was very approachable to me. After several months of working with it, I often find interesting ways to use and extend it. Not sure how good a match it might be for your project, but it's at least worth checking out as an alternative to Django.
A:
I've been working with Pylons for a while now and I highly recommend it. Before using it I evaluated Django as well. I found Pylons was a better fit due to how easy it was to customize and fit into my work flow. Django seemed to be great for quickly starting projects, but I felt it was tough to make more complicated tasks work. I've developed a task/inventory/contact management system with Pylons and I've been nothing but amazed with how quickly it's allowed me to develop and deploy.
A:
let's just get all the frameworks mentioned again.
Turbogears
bfg
webob
web2py
zope
grok
etc etc etc....
A:
I've never built a web application with a Python-based framework, but if I had to I would try Django.
I know people who have worked with it and were very satisfied.
A:
Django is a great place to start since it is the most widely used web framework for Python. You could also look at Pinax, which is built off of Django. Pinax is typically used for rapid development; its templates are great for that. Web.py is another great Python web framework worth looking into.
A:
I haven't used it but one of my co-workers has used the Python Google Data API and has said good things. Not positive of everything it can do but it might be helpful to you.
|
how to build good python web application
|
I have never worked with web programming, and
I've been asked lately to write web-based software to manage assets and tasks, to be used by more than 900 people.
What are the recommended modules, frameworks, and libraries for this task?
It would be highly appreciated if you guys could recommend some books and articles that might help me. Thanks in advance.
|
[
"Check out Django. I would say it is the most comprehensive and easy to use python web framework.\nThey have a book and tutorial as well.\nYou might also like to visit Python wiki about web frameworks for more suggestions. But still, I highly recommend Django.\n",
"I've really enjoyed working with CherryPy in my project. Django had a little more of a CMS feel than I needed. As a Python novice, CherryPy was very approachable to me. After several months of working with it, I often find interesting ways to use and extend it. Not sure how good a match it might be for your project, but it's at least worth checking out as an alternative to Django.\n",
"I've been working with Pylons for a while now and I highly recommend it. Before using it I evaluated Django as well. I found Pylons was a better fit due to how easy it was to customize and fit into my work flow. Django seemed to be great for quickly starting projects, but I felt it was tough to make more complicated tasks work. I've developed a task/inventory/contact management system with Pylons and I've been nothing but amazed with how quickly it's allowed me to develop and deploy.\n",
"let's just get all the frameworks mentioned again.\nTurbogears\nbfg\nwebob\nweb2py\nzope\ngrok\netc etc etc....\n",
"I've never built web application with python-based framework but if I had to I would try Django\nI know people who worked with it and were very satisfied\n",
"Django is a great place to start since it is the most widely used web framework for Python. You could also look at Pinax which is build off of Django. Pinax is typically used for rapid development. Its templates are great for that. Web.py is also another great Python web framework worth looking in to.\n",
"I haven't used it but one of my co-workers has used the Python Google Data API and has said good things. Not positive of everything it can do but it might be helpful to you.\n"
] |
[
10,
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"python",
"web_applications"
] |
stackoverflow_0002455996_python_web_applications.txt
|
Q:
Reading object tree from file into Python
I have a Python app that contains an object structure that the user can manipulate. What I want to do is allow the user to create a file declaring how the object structure should be created.
For example, I would like the user to be able to create the following file:
foo.bar.baz = true
x.y.z = 12
and for my app to then create that object tree automatically. What's the best way to do something like this?
A:
Typically problems like this one are solved with XML. However in your case you can do something even easier.
Assuming the dots represent hierarchy delimiters, you could read in the left hand side of the = sign (input.split('=')[0]), and then perform a split('.') on the dots. Next, create a nested dictionary structure to match this. i.e. object[foo][bar][baz] returns True and object[x][y][z] returns 12
If you don't feel like coding this by hand, try out one of the many Configuration File parsers for python
|
Reading object tree from file into Python
|
I have a Python app that contains an object structure that the user can manipulate. What I want to do is allow the user to create a file declaring how the object structure should be created.
For example, I would like the user to be able to create the following file:
foo.bar.baz = true
x.y.z = 12
and for my app to then create that object tree automatically. What's the best way to do something like this?
|
[
"Typically problems like this one are solved with XML. However in your case you can do something even easier.\nAssuming the dots represent hierarchy delimiters, you could read in the left hand side of the = sign (input.split('=')[0]), and then perform a split('.') on the dots. Next, create a nested dictionary structure to match this. i.e. object[foo][bar][baz] returns True and object[x][y][z] returns 12\nIf you don't feel like coding this by hand, try out one of the many Configuration File parsers for python\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002456828_python.txt
|
Q:
Serialize a message format to xml
I have a python list as
[
(A,{'a':1,'b':2,'c':3,'d':4}),
(B,{'a':1,'b':2,'c':3,'d':4}),
...
]
I want to know if there is a standard library for serializing this kind of list to xml, or should I hand-code it to a file.
Edit : Added Detail
Assuming this is used to construct a message such that
message = A( Field attributes{'a':1,'b':2,'c':3,'d':4}) || B Field attributes{'a':1,'b':2,'c':3,'d':4}) || C Field attributes{'a':1,'b':2,'c':3,'d':4})
A:
Does it need to be XML? This is the usual domain of the pickle module.
But, no, there's no standard serialize-Python-object-to-XML library. (I have one I wrote a while ago, it's not published, much less "standard".) There are libraries like lxml for converting XML to Python objects and back, and the usual sax, dom, and expat libraries for reading XML.
A:
"use json/yaml/whitespace" comments aside (I suppose you have your reasons to do so, instead go for pickle/json),
you can try the very pythonic elementtree library (in the standardlib), or even use some advice from google : search "converting python dictionary to xml"
(not to sound too rude .. take it with a wink)
looking at your example, what are A and B ? integers ? strings ? classmethods ?
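For instance, assuming A and B are plain strings, a minimal ElementTree sketch (the 'message' and 'field' element names are just placeholders):
import xml.etree.ElementTree as ET

data = [('A', {'a': 1, 'b': 2, 'c': 3, 'd': 4}),
        ('B', {'a': 1, 'b': 2, 'c': 3, 'd': 4})]

root = ET.Element('message')
for name, attrs in data:
    field = ET.SubElement(root, 'field', name=name)
    for k, v in attrs.items():
        field.set(k, str(v))   # XML attribute values must be strings

print ET.tostring(root)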
A:
Why are you using XML? There are often better solutions, like JSON, which is plenty portable and standard.
The easiest way might be to use YAML. YAML's main representation is not XML, but there is a canonical way (YAXML) to represent YAML serialized data as XML.
|
Serialize a message format to xml
|
I have a python list as
[
(A,{'a':1,'b':2,'c':3,'d':4}),
(B,{'a':1,'b':2,'c':3,'d':4}),
...
]
I want to know if there is a standard library of serializing this kind of list to xml or should I hand code it to a file.
Edit : Added Detail
Assuming this is used to construct a message such that
message = A( Field attributes{'a':1,'b':2,'c':3,'d':4}) || B Field attributes{'a':1,'b':2,'c':3,'d':4}) || C Field attributes{'a':1,'b':2,'c':3,'d':4})
|
[
"Does it need to be XML? This is the usual domain of the pickle module.\nBut, no, there's no standard serialize-Python-object-to-XML library. (I have one I wrote a while ago, it's not published, much less \"standard\".) There are libraries like lxml for converting XML to Python objects and back, and the usual sax, dom, and expat libraries for reading XML.\n",
"\"use json/yaml/whitespace\" comments aside (I suppose you have your reasons to do so, instead go for pickle/json), \nyou can try the very pythonic elementtree library (in the standardlib), or even use some advice from google : search \"converting python dictionary to xml\"\n(not to sound too rude .. take it with a wink)\nlooking at your example, what are A and B ? integers ? strings ? classmethods ?\n",
"\nWhy are you using XML? There are often better solutions, like JSON, which is plenty portable and standard.\nThe easiest way might be to use YAML. YAML's main representation is not XML, but there is a canonical way (YAXML) to represent YAML serialized data as XML.\n\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"python",
"xml",
"xml_serialization"
] |
stackoverflow_0002456722_python_xml_xml_serialization.txt
|
Q:
fastest way to search through this data object? (python)
I have a data object that looks like this:
{
'node-16': {
'tags': ['cuda'],
'localNodes': [
{
'name': 'nC',
'consumesFrom': ['nA', 'nB'],
'classType': 'VectorAdder.VectorAdder'
},
{
'name': 'nB',
'consumesFrom': None,
'classType': 'RandomVector'
}
]
},
'node-17': {
'tags': ['boring'],
'localNodes': [
{
'name': 'nA',
'consumesFrom': None,
'classType': 'RandomVector'
}
]
}
}
Notice that node nA is a producer for nC. What's the fastest way to find out if a given localNode is a producer for another localnode in the data structure (and not within the same list)?
For example, I would like to know that nA (node-17) produces for nC (exists on node-16). But I don't need to know that nB produces for nC, since they exist in the same localNodes list.
A:
namedict = dict((x['name'], y) for y in data for x in data[y]['localNodes'])
proddict = dict((z['name'], [y for y in z['consumesFrom'] if namedict[y] != x])
for x in data for z in data[x]['localNodes'] if z['consumesFrom'] is not None)
print 'nA' in proddict['nC']
|
fastest way to search through this data object? (python)
|
I have a data object that looks like this:
{
'node-16': {
'tags': ['cuda'],
'localNodes': [
{
'name': 'nC',
'consumesFrom': ['nA', 'nB'],
'classType': 'VectorAdder.VectorAdder'
},
{
'name': 'nB',
'consumesFrom': None,
'classType': 'RandomVector'
}
]
},
'node-17': {
'tags': ['boring'],
'localNodes': [
{
'name': 'nA',
'consumesFrom': None,
'classType': 'RandomVector'
}
]
}
}
Notice that node nA is a producer for nC. What's the fastest way to find out if a given localNode is a producer for another localnode in the data structure (and not within the same list)?
For example, I would like to know that nA (node-17) produces for nC (exists on node-16). But I don't need to know that nB produces for nC, since they exist in the same localNodes list.
|
[
"namedict = dict((x['name'], y) for y in data for x in data[y]['localNodes'])\nproddict = dict((z['name'], [y for y in z['consumesFrom'] if namedict[y] != x])\n for x in data for z in data[x]['localNodes'] if z['consumesFrom'] is not None)\n\nprint 'nA' in proddict['nC']\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002457064_python.txt
|
Q:
Looping through columns in a .csv files in Python
I want to be able to use Python to open a .csv file like this:
5,26,42,2,1,6,6
and then perform some operation on them like addition.
total = 0
with open("file.csv") as csv_file:
for row in csv.reader(csv_file, delimiter=','):
for number in range(7):
total += int(row[number])
The problem is that since the .csv file only has one row and an unknown number of columns, I don't know how to make this work without either hard-coding it like the example above or using really ugly code.
Is there any way of looping through the columns using something like for columns in file in Python?
A:
You can just say
for col in row:
total += int(col)
For example:
import csv
from StringIO import StringIO
total = 0
for row in csv.reader(StringIO("1,2,3,4")):
for col in row:
total += int(col)
print total # prints 10
The reason why you can do this is that csv.reader returns a simple list for every row, so you can iterate over it as you would any other list in Python.
However, in your case, since you know that you have a file with a single line of comma-separated integers, you could make this much simpler:
line = open("ints.txt").read().split(",")
total = sum(int(i) for i in line)
A:
You can iterate over a list of columns just as you iterate over the rows in a csv reader:
total = 0
with open("file.csv") as csv_file:
for row in csv.reader(csv_file, delimiter=','):
for col in row:
total += int(col)
Or you can add the sum of each row on each pass, and skip the inside loop:
total = 0
with open("file.csv") as csv_file:
for row in csv.reader(csv_file, delimiter=','):
total += sum(map(int, row))
Or you can save creating an extra list by using itertools.imap instead of map.
|
Looping through columns in a .csv files in Python
|
I want to be able to use Python to open a .csv file like this:
5,26,42,2,1,6,6
and then perform some operation on them like addition.
total = 0
with open("file.csv") as csv_file:
for row in csv.reader(csv_file, delimiter=','):
for number in range(7):
total += int(row[number])
The problem is that since the .csv file only has one row and an unknown number of columns, I don't know how to make this work without either hard-coding it like the example above or using really ugly code.
Is there any way of looping through the columns using something like for columns in file in Python?
|
[
"You can just say\nfor col in row:\n total += int(col)\n\nFor example:\nimport csv\nfrom StringIO import StringIO\n\ntotal = 0\nfor row in csv.reader(StringIO(\"1,2,3,4\")):\n for col in row:\n total += int(col)\n\nprint total # prints 10\n\nThe reason why you can do this is that csv.reader returns a simple list for every row, so you can iterate over it as you would any other list in Python.\nHowever, in your case, since you know that you have a file with a single line of comma-separated integers, you could make this much simpler:\nline = open(\"ints.txt\").read().split(\",\")\ntotal = sum(int(i) for i in line)\n\n",
"You can iterate over a list of columns just as you iterate over the rows in a csv reader:\ntotal = 0\nwith open(\"file.csv\") as csv_file:\n for row in csv.reader(csv_file, delimiter=','):\n for col in row:\n total += int(col)\n\nOr you can add the sum of each row on each pass, and skip the inside loop:\ntotal = 0\nwith open(\"file.csv\") as csv_file:\n for row in csv.reader(csv_file, delimiter=','):\n total += sum(map(int, row))\n\nOr you can save creating an extra list by using itertools.imap instead of map.\n"
] |
[
9,
3
] |
[] |
[] |
[
"csv",
"python",
"sum"
] |
stackoverflow_0002457193_csv_python_sum.txt
|
Q:
Checkboxes with pylons
I have been trying to add some check boxes in a pylons mako. However I don't know how to get their values in the controller. It seems that it can only get the first value of the check boxes. I tried using form encode but i got several errors. Is there an easier way to do this?
Thanks
A:
I'm assuming that "I can only get the first value" means you've got a series of checkboxes with the same value for the 'name' attribute within your form?
Now, if that's the case and you're wanting a list of boolean values based on whether or not the boxes are checked or not, you'll need to do two things:
First, when you define your form elements using form encode on your checkbox, set it up such that a missing value on a checkbox element returns 'False.' This way, as the browser won't send a value over unless a checkbox is "on", you validation coerces the missing value to False.
class Registration(formencode.Schema):
box = formencode.validators.StringBoolean(if_missing=False)
Next, assuming you want a list returned, you'll not be able to name all of your elements the same. Pylons supports a nested structure, though. Look at formencode.variabledecode.NestedVariables. In short, you'll need to define a NestedVariables instance as one of your class attributes and your form 'name' attributes will need to change in order to contain explicit indexes.
Edit.. here's a complete example I did real quick:
import logging
import pprint
import formencode
from pylons import request, response, session, tmpl_context as c, url
from pylons.controllers.util import abort, redirect
from pylons.decorators import validate
from testproj.lib.base import BaseController, render
log = logging.getLogger(__name__)
class CheckList(formencode.Schema):
box = formencode.validators.StringBoolean(if_missing=False)
hidden = formencode.validators.String()
class EnclosingForm(formencode.Schema):
pre_validators = [formencode.NestedVariables()]
boxes = formencode.ForEach(CheckList())
class MyformController(BaseController):
def index(self):
schema = EnclosingForm()
v = schema.to_python(dict(request.params))
# Return a rendered template
#return render('/myform.mako')
# or, return a response
response.content_type = 'text/plain'
return pprint.pformat(v)
And then the query string?
boxes-0.box=true&boxes-0.hidden=hidden&boxes-1.box=true&
boxes-1.hidden=hidden&boxes-2.hidden=hidden
And lastly, the response:
{'boxes': [{'box': True, 'hidden': u'hidden'},
{'box': True, 'hidden': u'hidden'},
{'box': False, 'hidden': u'hidden'}]}
HTH
|
Checkboxes with pylons
|
I have been trying to add some check boxes in a pylons mako. However I don't know how to get their values in the controller. It seems that it can only get the first value of the check boxes. I tried using form encode but i got several errors. Is there an easier way to do this?
Thanks
|
[
"I'm assuming that \"I can only get the first value\" means you've got a series of checkboxes with the same value for the 'name' attribute within your form? \nNow, if that's the case and you're wanting a list of boolean values based on whether or not the boxes are checked or not, you'll need to do two things:\nFirst, when you define your form elements using form encode on your checkbox, set it up such that a missing value on a checkbox element returns 'False.' This way, as the browser won't send a value over unless a checkbox is \"on\", you validation coerces the missing value to False.\n\n\n class Registration(formencode.Schema): \n box = formencode.validators.StringBoolean(if_missing=False)\n\n\nNext, assuming you want a list returned, you'll not be able to name all of your elements the same. Pylons supports a nested structure, though. Look at formencode.variabledecode.NestedVariables. In short, you'll need to define a NestedVariables instance as one of your class attributes and your form 'name' attributes will need to change in order to contain explicit indexes. \nEdit.. here's a complete example I did real quick:\n\n\nimport logging\nimport pprint\n\nimport formencode\nfrom pylons import request, response, session, tmpl_context as c, url\nfrom pylons.controllers.util import abort, redirect\nfrom pylons.decorators import validate\n\nfrom testproj.lib.base import BaseController, render\n\nlog = logging.getLogger(__name__)\n\nclass CheckList(formencode.Schema):\n box = formencode.validators.StringBoolean(if_missing=False)\n hidden = formencode.validators.String()\n\nclass EnclosingForm(formencode.Schema):\n pre_validators = [formencode.NestedVariables()]\n boxes = formencode.ForEach(CheckList())\n\nclass MyformController(BaseController):\n\n def index(self):\n schema = EnclosingForm()\n v = schema.to_python(dict(request.params))\n # Return a rendered template\n #return render('/myform.mako')\n # or, return a response\n response.content_type = 'text/plain'\n return pprint.pformat(v)\n\n\n\nAnd then the query string?\n\nboxes-0.box=true&boxes-0.hidden=hidden&boxes-1.box=true&\nboxes-1.hidden=hidden&boxes-2.hidden=hidden\nAnd lastly, the response:\n\n{'boxes': [{'box': True, 'hidden': u'hidden'},\n {'box': True, 'hidden': u'hidden'},\n {'box': False, 'hidden': u'hidden'}]}\n\nHTH\n"
] |
[
0
] |
[] |
[] |
[
"pylons",
"python"
] |
stackoverflow_0002456926_pylons_python.txt
|
Q:
Python | efficiency and performance
Let's say I'm going to save 100 floating point numbers in a list by running a single script; most probably it will take some memory to process. So if this code executes every time as a requirement of an application there will be performance hits, so my question is how to maintain efficiency in order to gain performance.
Mock-up code:
def generate_lglt():
float1, float2 = 27.2423423, 12.2323245
lonlats = []
for val in range(100, 0, -1):
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
print lonlats
Thanks.
A:
Bottlenecks occur at unexpected places, so never optimize code just because you think it might be the right code to try to improve. What you need to do is
Write your program so that it runs completely.
Develop tests to make sure your program is correct.
Decide whether your program is too slow.
There is a good chance you will quit at this step.
Develop performance tests that run your program realistically.
Profile the code in its realistic performance tests using the cProfile module.
Figure out what algorithmic improvements can improve your code's performance.
This is usually the way to improve speed the most.
If you are using the best algorithm for the job, perform micro-optimizations.
Rewriting critical parts in C (possibly using Cython) is often more effective than in-Python micro-optimizations.
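As one concrete illustration of the cProfile step, and assuming the question's generate_lglt() is already defined in the current namespace, a minimal sketch looks like:
import cProfile
cProfile.run('generate_lglt()')   # prints per-function call counts and timings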
A:
If generate_lglt() is going to be called a lot of different times, you may want to keep from regenerating the same range(100,0,-1) with every call of the code. You may want to cache that generated range somewhere and use it over and over again.
Also, if you are going to be exiting a for loop without completing each iteration, use xrange instead of range.
|
Python | efficiency and performance
|
Let's say I'm going to save 100 floating point numbers in a list by running a single script; most probably it will take some memory to process. So if this code executes every time as a requirement of an application there will be performance hits, so my question is how to maintain efficiency in order to gain performance.
Mock-up code:
def generate_lglt():
float1, float2 = 27.2423423, 12.2323245
lonlats = []
for val in range(100, 0, -1):
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
lonlats.append(random.uniform(float1, float2))
print lonlats
Thanks.
|
[
"Bottlenecks occur at unexpected places, so never optimize code just because you think it might be the right code to try to improve. What you need to do is\n\nWrite your program so that it runs completely.\nDevelop tests to make sure your program is correct.\nDecide whether your program is too slow.\n\n\nThere is a good chance you will quit at this step.\n\nDevelop performance tests that run your program realistically.\nProfile the code in its realistic performance tests using the cProfile module.\nFigure out what algorithmic improvements can improve your code's performance.\n\n\nThis is usually the way to improve speed the most.\n\nIf you are using the best algorithm for the job, perform micro-optimizations.\n\n\nRewriting critical parts in C (possibly using Cython) is often more effective than in-Python micro-optimizations.\n\n\n",
"If generate_lglt() is going to be called a lot of different times, you may want to keep from regenerating the same range(100,0,-1) with every call of the code. You may want to cache that generated range somewhere and use it over and over again.\nAlso, if you are going to be exiting a for loop without completing each iteration, use xrange instead of range.\n"
] |
[
8,
2
] |
[] |
[] |
[
"optimization",
"premature_optimization",
"python"
] |
stackoverflow_0002457363_optimization_premature_optimization_python.txt
|
Q:
faking a filesystem / virtual filesystem
I have a web service to which users upload python scripts that are run on a server. Those scripts process files that are on the server and I want them to be able to see only a certain hierarchy of the server's filesystem (best: a temporary folder on which I copy the files I want processed and the scripts).
The server will ultimately be a linux based one but if a solution is also possible on Windows it would be nice to know how.
What I thought of is creating a user with restricted access to folders of the FS - ultimately only the folder containing the scripts and files - and launch the python interpreter using this user.
Can someone give me a better alternative? as relying only on this makes me feel insecure, I would like a real sandboxing or virtual FS feature where I could run safely untrusted code.
A:
Either a chroot jail or a higher-order security mechanism such as SELinux can be used to restrict access to specific resources.
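For instance, a minimal sketch of confining the interpreter with a chroot before running the uploaded script (the paths are hypothetical, and os.chroot() requires root privileges):
import os

os.chroot('/srv/sandbox')    # hypothetical jail directory holding the scripts and files
os.chdir('/')                # make sure the working directory is inside the jail
execfile('user_script.py')   # hypothetical uploaded script, now limited to the jail's view of the filesystem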
A:
You are probably best to use a virtual machine like VirtualBox or VMware (perhaps even creating one per user/session).
That will allow you some control over other resources such as memory and network as well as disk
The only python that I know of that has such features built in is the one on Google App Engine. That may be a workable alternative for you too.
A:
This is inherently insecure software. By letting users upload scripts you are introducing a remote code execution vulnerability. You have more to worry about than just modifying files: what's stopping the python script from accessing the network or other resources?
To solve this problem you need to use a sandbox. To better harden the system you can use a layered security approach.
The first layer, and the most important layer is a python sandbox. User supplied scripts will be executed within a python sandbox. This will give you the fine grained limitations that you need. Then, the entire python app should run within its own dedicated chroot. I highly recommend using the grsecurity kernel modules which improve the strength of any chroot. For instance a grsecurity chroot cannot be broken unless the attacker can rip a hole into kernel land which is very difficult to do these days. Make sure your kernel is up to date.
The end result is that you are trying to limit the resources that an attacker's script has. Layers are a proven approach to security, as long as the layers are different enough such that the same attack won't break both of them. You want to isolate the script form the rest of the system as much as possible. Any resources that are shared are also paths for an attacker.
|
faking a filesystem / virtual filesystem
|
I have a web service to which users upload python scripts that are run on a server. Those scripts process files that are on the server and I want them to be able to see only a certain hierarchy of the server's filesystem (best: a temporary folder on which I copy the files I want processed and the scripts).
The server will ultimately be a linux based one but if a solution is also possible on Windows it would be nice to know how.
What I thought of is creating a user with restricted access to folders of the FS - ultimately only the folder containing the scripts and files - and launch the python interpreter using this user.
Can someone give me a better alternative? as relying only on this makes me feel insecure, I would like a real sandboxing or virtual FS feature where I could run safely untrusted code.
|
[
"Either a chroot jail or a higher-order security mechanism such as SELinux can be used to restrict access to specific resources. \n",
"You are probably best to use a virtual machine like VirtualBox or VMware (perhaps even creating one per user/session). \nThat will allow you some control over other resources such as memory and network as well as disk\nThe only python that I know of that has such features built in is the one on Google App Engine. That may be a workable alternative for you too. \n",
"This is inherently insecure software. By letting users upload scripts you are introducing a remote code execution vulnerability. You have more to worry about than just modifying files, whats stopping the python script from accessing the network or other resources?\nTo solve this problem you need to use a sandbox. To better harden the system you can use a layered security approach. \nThe first layer, and the most important layer is a python sandbox. User supplied scripts will be executed within a python sandbox. This will give you the fine grained limitations that you need. Then, the entire python app should run within its own dedicated chroot. I highly recommend using the grsecurity kernel modules which improve the strength of any chroot. For instance a grsecuirty chroot cannot be broken unless the attacker can rip a hole into kernel land which is very difficult to do these days. Make sure your kernel is up to date. \nThe end result is that you are trying to limit the resources that an attacker's script has. Layers are a proven approach to security, as long as the layers are different enough such that the same attack won't break both of them. You want to isolate the script form the rest of the system as much as possible. Any resources that are shared are also paths for an attacker. \n"
] |
[
5,
3,
0
] |
[] |
[] |
[
"filesystems",
"python",
"sandbox",
"security"
] |
stackoverflow_0002452488_filesystems_python_sandbox_security.txt
|
Q:
What is the paste deploy uri syntax?
Paste Deploy can reference code with uris such as
[section]
use = egg:FooBar#baz
What is the full syntax for these uris?
A:
Those URIs are fully detailed in the documentation. It boils down to config:, egg:, and prefix-less URIs that point to other sections.
|
What is the paste deploy uri syntax?
|
Paste Deploy can reference code with uris such as
[section]
use = egg:FooBar#baz
What is the full syntax for these uris?
|
[
"Those URIs are fully detailed in the documentation. It boils down to config:, egg:, and prefix-less URIs that point to other sections.\n"
] |
[
2
] |
[] |
[] |
[
"paster",
"python"
] |
stackoverflow_0002435865_paster_python.txt
|
Q:
recurse over a list
I'm trying to recurse over a list (eg. [True, [[True, False], [False, [False, True]]]]) using Python. I know that the list length will always be 2 and both values will be boolean. I'd like to take those values and substitute them back into the list until there are only 2 values left (or 1 boolean value). Any help would be much appreciated.
A:
You haven't said how to combine the two parts, so I'm assuming or but you could use another function instead.
l = [True, [[True, False], [False, [False, True]]]]
def foo(x):
if isinstance(x, list):
return foo(x[0]) or foo(x[1])
else:
return x
print foo(l)
A:
say your list is l
def print_list(list):
t = type(list())
for item in list:
if type(item) is t:
print_list(item)
else:
print item
print_list(l)
Something simple like that would print every item in your list.
|
recurse over a list
|
I'm trying to recurse over a list (eg. [True, [[True, False], [False, [False, True]]]]) using Python. I know that the list length will always be 2 and both values will be boolean. I'd like to take those values and substitute them back into the list until there are only 2 values left (or 1 boolean value). Any help would be much appreciated.
|
[
"You haven't said how to combine the two parts, so I'm assuming or but you could use another function instead.\nl = [True, [[True, False], [False, [False, True]]]]\n\ndef foo(x):\n if isinstance(x, list):\n return foo(x[0]) or foo(x[1])\n else:\n return x\n\nprint foo(l)\n\n",
"say your list is l\ndef print_list(list):\n t = type(list())\n for item in list:\n if type(item) is t:\n print_list(item)\n else:\n print item\n print_list(l)\n\nSomething simple like that would print every item in your list. \n"
] |
[
4,
0
] |
[] |
[] |
[
"python",
"recursion"
] |
stackoverflow_0002458075_python_recursion.txt
|
Q:
How to model a social news feed on Google App Engine
We want to implement a "News feed" where a user can see messages
broadcasted by her friends, sorted with newest message first. But the
feed should reflect changes in her friends list. (If she adds new
friends, messages from those should be included in the feed, and if
she removes friends their messages should not be included.) If we use
the pubsub-test example and attach a recipient list to each message
this means a lot of manipulation of the message recipients lists when users
connect and disconnect friends.
We first modeled publish-subscribe "fan out" using conventional RDBMS
thinking. It seemed to work at first, but then, since the IN operator
works the way it does, we quickly realized we couldn't continue on
that path. We found Brett Slatkin's presentation from last year's
Google I/O and we have now watched it a few times but it isn't clear to
us how to do it with "dynamic" recipient lists.
What we need are some hints on how to "think" when modeling this.
A:
Pasting the answer I got for this question in the Google Group for Google App Engine http://groups.google.com/group/google-appengine/browse_thread/thread/09a05c5f41163b4d# By Ikai L (Google)
A couple of thoughts here:
is removing of friends a common event? similarly, is adding of
friends a common event? (All relative,
relative to "reads" of the news feed)
From what I remember, the only way to make heavy reads scale is to write
the data multiple times in peoples'
streams. Twitter does this, from what
I remember, using a "eventually
consistent" model. This is why your
feed will not update for several
minutes when they are under heavy
load. The general consensus, though,
is that a relational, normalized
model simply will not work.
the Jaiku engine is open source for your study:
http://code.google.com/p/jaikuengine.
This runs on App Engine Hope these
help when you're considering a design.
|
How to model a social news feed on Google App Engine
|
We want to implement a "News feed" where a user can see messages
broadcasted by her friends, sorted with newest message first. But the
feed should reflect changes in her friends list. (If she adds new
friends, messages from those should be included in the feed, and if
she removes friends their messages should not be included.) If we use
the pubsub-test example and attach a recipient list to each message
this means a lot of manipulation of the message recipients lists when users
connect and disconnect friends.
We first modeled publish-subscribe "fan out" using conventional RDBMS
thinking. It seemed to work at first, but then, since the IN operator
works the way it does, we quickly realized we couldn't continue on
that path. We found Brett Slatkin's presentation from last year's
Google I/O and we have now watched it a few times but it isn't clear to
us how to do it with "dynamic" recipient lists.
What we need are some hints on how to "think" when modeling this.
|
[
"Pasting the answer I got for this question in the Google Group for Google App Engine http://groups.google.com/group/google-appengine/browse_thread/thread/09a05c5f41163b4d# By Ikai L (Google) \n\nA couple of thoughts here: \n\nis removing of friends a common event? similarly, is adding of \n friends a common event? (All relative,\n relative to \"reads\" of the news feed)\nFrom what I remember, the only way to make heavy reads scale is to write\n the data multiple times in peoples'\n streams. Twitter does this, from what\n I remember, using a \"eventually\n consistent\" model. This is why your\n feed will not update for several\n minutes when they are under heavy\n load. The general consensus, though,\n is that a relational, normalized\n model simply will not work. \nthe Jaiku engine is open source for your study: \n http://code.google.com/p/jaikuengine.\n This runs on App Engine Hope these\n help when you're considering a design.\n\n\n"
] |
[
3
] |
[] |
[] |
[
"google_app_engine",
"google_cloud_datastore",
"python",
"social_networking"
] |
stackoverflow_0002447488_google_app_engine_google_cloud_datastore_python_social_networking.txt
|
Q:
Should I strip the XML declaration from suds output before parsing with lxml?
I’m trying to implement a SOAP webservice in Python 2.6 using the suds library. That is working well, but I’ve run into a problem when trying to parse the output with lxml.
Suds returns a suds.sax.text.Text object with the reply from the SOAP service. The suds.sax.text.Text class is a subclass of the Python built-in Unicode class. In essence, it would be comparable with this Python statement:
u'<?xml version="1.0" encoding="utf-8" ?><root><lotsofelements \></root>'
Which is incongruous, since if the XML declaration is correct, the contents are UTF-8 encoded, and thus not a Python Unicode object (because those are stored in some internal encoding like UCS4).
lxml will refuse to parse this, as documented, since there is no clear answer to what encoding it should be interpreted as.
As I see it, there are two ways out of this bind:
Strip the <?xml> declaration, including the encoding.
Convert the output from Suds into a bytestring, using the specified encoding.
Currently, the data I’m receiving from the webservice is within the ASCII-range, so either way will work, but both feels very much like ugly hacks to me, and I’m not quite sure what would happen, if I start to receive data that would need a wider range of Unicode characters.
Any good ideas? I can’t imagine I’m the first one in this position…
A:
You and lxml are correct; a valid XML document must be a stream of bytes encoded as declared in the <?xml ..... header (default: UTF-8).
I'd suggest a third option: leave it in unicode with an XML header that omits the encoding declaration but leaves the version in there (future-safe). That will keep lxml happy and avoid the overhead of you encoding it again.
I'd also suggest some gentle enquiry at the suds site and having a poke around in their source.
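As one concrete illustration of option 2 from the question (re-encoding the suds reply so the declared encoding and the actual bytes agree; 'reply' is an assumed variable name):
from lxml import etree

xml_bytes = reply.encode('utf-8')   # now a bytestring matching the declared encoding
root = etree.fromstring(xml_bytes)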
A:
Hmm, I'm currently implementing my first Suds-based solution and parsing my responses with lxml without a problem, but I think this could be because I'm doing it in a pretty blunt and dumb way. Here's what my code looks like:
try:
result = self.client.service.ExportOwnersDetails(fAccess=self.access_id, fParams=params)
except URLError:
# TODO: Log timeout here, handle
return
response = str(result.fReturn)
if len(response) == 0 or response.find('<?xml ') == -1:
# TODO: Log import error here, handle
return
response = StringIO(response)
xml = etree.parse(response)
Like I said, not very clever (and obviously I still have some logging to do), but that's my approach. The fAccess, fParams, fReturn nonsense is the naming convention at the third-party provider I'm integrating with.
|
Should I strip the XML declaration from suds output before parsing with lxml?
|
I’m trying to implement a SOAP webservice in Python 2.6 using the suds library. That is working well, but I’ve run into a problem when trying to parse the output with lxml.
Suds returns a suds.sax.text.Text object with the reply from the SOAP service. The suds.sax.text.Text class is a subclass of the Python built-in Unicode class. In essence, it would be comparable with this Python statement:
u'<?xml version="1.0" encoding="utf-8" ?><root><lotsofelements \></root>'
Which is incongruous, since if the XML declaration is correct, the contents are UTF-8 encoded, and thus not a Python Unicode object (because those are stored in some internal encoding like UCS4).
lxml will refuse to parse this, as documented, since there is no clear answer to what encoding it should be interpreted as.
As I see it, there are two ways out of this bind:
Strip the <?xml> declaration, including the encoding.
Convert the output from Suds into a bytestring, using the specified encoding.
Currently, the data I’m receiving from the webservice is within the ASCII-range, so either way will work, but both feels very much like ugly hacks to me, and I’m not quite sure what would happen, if I start to receive data that would need a wider range of Unicode characters.
Any good ideas? I can’t imagine I’m the first one in this position…
|
[
"You and lxml are correct; a valid XML document must be a stream of bytes encoded as declared in the <?xml ..... header (default: UTF-8).\nI'd suggest a third option: leave it in unicode with an XML header that omits the encoding declaration but leaves the version in there (future-safe). That will keep lxml happy and avoid the overhead of you encoding it again.\nI'd also suggest some gentle enquiry at the suds site and having a poke around in their source.\n",
"Hmm, I'm currently implementing my first Suds-based solution and parsing my responses with lxml without a problem, but I think this could be because I'm doing it in a pretty blunt and dumb way. Here's what my code looks like:\ntry:\n result = self.client.service.ExportOwnersDetails(fAccess=self.access_id, fParams=params)\nexcept URLError:\n # TODO: Log timeout here, handle\n return\nresponse = str(result.fReturn)\n\nif len(response) == 0 or response.find('<?xml ') == -1:\n # TODO: Log import error here, handle\n return\nresponse = StringIO(response)\nxml = etree.parse(response)\n\nLike I said, not very clever (and obviously I still have some logging to do), but that's my approach. The fAccess, fParams, fReturn nonsense is the naming convention at the third-party provider I'm integrating with.\n"
] |
[
2,
1
] |
[] |
[] |
[
"lxml",
"python",
"soap",
"suds",
"unicode"
] |
stackoverflow_0002458244_lxml_python_soap_suds_unicode.txt
|
Q:
Giving users a "reputation system" - Should I...?
I'm thinking of adding a reputation system to my Django web application; the site is already being used so I'm trying to be careful about my choices.
Reputation is generated in all actions that contribute to the site, similar to Stackoverflow's system.
I know there are literally millions of ways of implementing this, and this is why I feel quite lost.
Two alternatives I am not sure about are:
Keep track of reasons why reputation was incremented
Ignore reasons in order to reduce complexity of the site and overhead
Would be happy with a few pointers, and directions. Would be very much appreciated!
A:
In Django, I'd suggest having a property on the User (or Profile) model that calculates a user's reputation on-demand. Then, cache the reputation with your caching framework and/or store to the database for fast retrieval.
This way, in addition to having the records of what impacts reputation, you can change your reputation criteria at will.
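A minimal sketch of that idea, assuming a hypothetical ReputationEvent model that records each point change with a ForeignKey back to Profile:
from django.core.cache import cache
from django.db import models
from django.db.models import Sum

class Profile(models.Model):
    # ... existing profile fields ...

    @property
    def reputation(self):
        key = 'reputation-%d' % self.pk
        rep = cache.get(key)
        if rep is None:
            rep = self.reputationevent_set.aggregate(
                total=Sum('points'))['total'] or 0
            cache.set(key, rep, 300)   # cache for five minutes
        return rep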
A:
keep track of the reasons, IMHO. It surely wouldn't be that complex, and you don't need to store a huge amount of information, just a datetime, point value, command, target, and originator. If the data gets to be too much after some time dump the DB to a backup medium and clear the history.
|
Giving users a "reputation system" - Should I...?
|
I'm thinking of adding a reputation system to my Django web application; the site is already being used so I'm trying to be careful about my choices.
Reputation is generated in all actions that contribute to the site, similar to Stackoverflow's system.
I know there are literally millions of ways of implementing this, and this is why I feel quite lost.
Two alternatives I am not sure about are:
Keep track of reasons why reputation was incremented
Ignore reasons in order to reduce complexity of the site and overhead
Would be happy with a few pointers, and directions. Would be very much appreciated!
|
[
"In Django, I'd suggest having a property on the User (or Profile) model that calculates a user's reputation on-demand. Then, cache the reputation with your caching framework and/or store to the database for fast retrieval.\nThis way, in addition to having the records of what impacts reputation, you can change your reputation criteria at will.\n",
"keep track of the reasons, IMHO. It surely wouldn't be that complex, and you don't need to store a huge amount of information, just a datetime, point value, command, target, and originator. If the data gets to be too much after some time dump the DB to a backup medium and clear the history.\n"
] |
[
6,
4
] |
[] |
[] |
[
"django",
"python",
"web_applications"
] |
stackoverflow_0002458355_django_python_web_applications.txt
|
Q:
Concatenate multi value into one record
I joined two tables together and what I'd like to do is concatenate multiple values into one record without duplicate values.
Input Table
Table name: TAXLOT_ZONE
TID ZONE
1 A
1 A
1 B
1 C
2 D
2 D
2 E
3 A
3 B
4 C
5 D
Desirable output table looks like;
table name: Taxlot_zone_out
TID ZONE
1 A, B, C
2 D, E
3 A, B
4 C
5 D
A:
Assuming your table is in sorted order and is iterable, you can use itertools.groupby to group rows with the same first element.
l = [(1, 'A'), (1, 'A'), (1, 'B'), (1, 'C'),
(2, 'D'), (2, 'D'), (2, 'E'),
(3, 'A'), (3, 'B'),
(4, 'C'),
(5, 'D')]
from itertools import groupby
from operator import itemgetter
result = [(taxlot, list(set(v for k,v in g)))
for taxlot, g in groupby(l, itemgetter(0))]
Result:
[(1, ['A', 'C', 'B']),
(2, ['E', 'D']),
(3, ['A', 'B']),
(4, ['C']),
(5, ['D'])]
|
Concatenate multi value into one record
|
I joined two tables together and what I'd like to do is concatenate multiple values into one record without duplicate values.
Input Table
Table name: TAXLOT_ZONE
TID ZONE
1 A
1 A
1 B
1 C
2 D
2 D
2 E
3 A
3 B
4 C
5 D
Desirable output table looks like;
table name: Taxlot_zone_out
TID ZONE
1 A, B, C
2 D, E
3 A, B
4 C
5 D
|
[
"Assuming your table is in sorted order and is iterable, you can use itertools.groupby to group rows with the same first element.\nl = [(1, 'A'), (1, 'A'), (1, 'B'), (1, 'C'),\n (2, 'D'), (2, 'D'), (2, 'E'),\n (3, 'A'), (3, 'B'),\n (4, 'C'),\n (5, 'D')]\n\nfrom itertools import groupby\nfrom operator import itemgetter\nresult = [(taxlot, list(set(v for k,v in g)))\n for taxlot, g in groupby(l, itemgetter(0))]\n\nResult: \n[(1, ['A', 'C', 'B']),\n (2, ['E', 'D']),\n (3, ['A', 'B']),\n (4, ['C']),\n (5, ['D'])]\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002458585_python.txt
|
Q:
How to apply a function to a collection of elements
Consider I have an array of elements out of which I want to create a new 'iterable' that applies a custom 'transformation' on every next(). What's the proper way of doing it under python 2.x?
For people familiar with Java, the equivalent is Iterables#transform from google's collections framework.
Ok as for a dummy example (coming from Java)
Iterable<Foo> foos = Iterables.transform(strings, new Function<String, Foo>()
{
public Foo apply(String string) {
return new Foo(string);
}
});
//use foos below
A:
A generator expression:
(foobar(x) for x in S)
A:
Another way of doing it:
from itertools import imap
my_generator = imap(my_function, my_iterable)
That's the way I'd do it myself, but I'm kind of weird in that I actually like map.
A:
Or by using map():
def foo(x):
return x**x
for y in map(foo,S):
bar(y)
# for simple functions, lambda's are applicable as well
for y in map(lambda x: x**x,S):
bar(y)
|
How to apply a function to a collection of elements
|
Consider I have an array of elements out of which I want to create a new 'iterable' that applies a custom 'transformation' on every next(). What's the proper way of doing it under python 2.x?
For people familiar with Java, the equivalent is Iterables#transform from google's collections framework.
Ok as for a dummy example (coming from Java)
Iterable<Foo> foos = Iterables.transform(strings, new Function<String, Foo>()
{
public Foo apply(String string) {
return new Foo(string);
}
});
//use foos below
|
[
"A generator expression:\n(foobar(x) for x in S)\n\n",
"Another way of doing it:\nfrom itertools import imap\nmy_generator = imap(my_function, my_iterable)\n\nThat's the way I'd do it myself, but I'm kind of weird in that I actually like map.\n",
"Or by using map():\ndef foo(x):\n return x**x \n\nfor y in map(foo,S):\n bar(y)\n\n# for simple functions, lambda's are applicable as well\nfor y in map(lambda x: x**x,S):\n bar(y)\n\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002458621_python.txt
|
Q:
Abort a slow flush to disk after write?
Is there a way to abort a python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disc?
I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on, and writes it out to the USB device slowly. If at some point during the write, my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking. Several seconds, up to about 10 seconds typically. I find that the app is stuck in the close() method, waiting, I assume, for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the python docs for an answer but have found nothing.
A:
It's somewhat filesystem dependent, but in some filesystems, if you delete a file before (all of) it is allocated, the IO to write the blocks will never happen. This might also be true if you truncate it so that the part which is still being written is chopped off.
Not sure that you can really abort a write if you want to still access the data. Also the kinds of filesystems that support this (e.g. xfs, ext4) are not normally used on USB sticks.
If you want to flush data to the disc, use fdatasync(). Merely flushing your IO library's buffer into the OS one will not achieve any physical flushing.
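A minimal sketch of that, with f being the already-open file object:
import os

f.flush()                  # empty Python's userspace buffer into the OS page cache
os.fdatasync(f.fileno())   # ask the kernel to actually write the data out to the device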
A:
Assuming I am understanding this correct, you want to be able to 'abort' and NOT flush the data. This IS possible using a ctype and a little pokery. This is very OS dependent so I'll give you the OSX version and then what you can do to change it to Linux:
f = open('flibble1.txt', 'w')
f.write("hello world")
import ctypes
x = ctypes.cdll.LoadLibrary('/usr/lib/libc.dylib')
x.close(f.fileno())
try:
del f
catch IOError:
pass
If you change /usr/lib/libc.dylib to the libc.so.6 in /usr/lib for Linux then you should be good to go. Basically by calling close() instead of fclose(), no call to fsync() is done and nothing is flushed.
Hope that's useful.
A:
When you abort the write operation, trying doing file.truncate(0); before closing it.
|
Abort a slow flush to disk after write?
|
Is there a way to abort a python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disc?
I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on, and writes it out to the USB device slowly. If at some point during the write, my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking. Several seconds, up to about 10 seconds typically. I find that the app is stuck in the close() method, waiting, I assume, for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the python docs for an answer but have found nothing.
|
[
"It's somewhat filesystem dependent, but in some filesystems, if you delete a file before (all of) it is allocated, the IO to write the blocks will never happen. This might also be true if you truncate it so that the part which is still being written is chopped off.\nNot sure that you can really abort a write if you want to still access the data. Also the kinds of filesystems that support this (e.g. xfs, ext4) are not normally used on USB sticks. \nIf you want to flush data to the disc, use fdatasync(). Merely flushing your IO library's buffer into the OS one will not achieve any physical flushing.\n",
"Assuming I am understanding this correct, you want to be able to 'abort' and NOT flush the data. This IS possible using a ctype and a little pokery. This is very OS dependent so I'll give you the OSX version and then what you can do to change it to Linux:\nf = open('flibble1.txt', 'w')\nf.write(\"hello world\")\nimport ctypes\nx = ctypes.cdll.LoadLibrary('/usr/lib/libc.dylib')\nx.close(f.fileno())\ntry:\n del f\ncatch IOError:\n pass\n\nIf you change /usr/lib/libc.dylib to the libc.so.6 in /usr/lib for Linux then you should be good to go. Basically by calling close() instead of fclose(), no call to fsync() is done and nothing is flushed. \nHope that's useful.\n",
"When you abort the write operation, trying doing file.truncate(0); before closing it.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"file_io",
"flush",
"linux",
"python"
] |
stackoverflow_0002458624_file_io_flush_linux_python.txt
|
Q:
How efficient is Python substring extraction?
I've got the entire contents of a text file (at least a few KB) in string myStr.
Will the following code create a copy of the string (less the first character) in memory?
myStr = myStr[1:]
I'm hoping it just refers to a different location in the same internal buffer. If not, is there a more efficient way to do this?
Thanks!
Note: I'm using Python 2.5.
A:
At least in 2.6, slices of strings are always new allocations; string_slice() calls PyString_FromStringAndSize(). It doesn't reuse memory--which is a little odd, since with invariant strings, it should be a relatively easy thing to do.
Short of the buffer API (which you probably don't want), there isn't a more efficient way to do this operation.
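Should you want to experiment with it anyway, a minimal sketch of the buffer approach on Python 2.5 (no copy of the underlying bytes is made):
view = buffer(myStr, 1)   # read-only view of myStr starting at offset 1
print len(view)           # len(myStr) - 1
print str(view)[:20]      # converting back to str does make a copy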
A:
As with most garbage collected languages, strings are created as often as needed, which is very often. The reason for this is because tracking substrings as described would make garbage collection more difficult.
What is the actual algorithm you are trying to implement. It might be possible to give you advice for ways to get better results if we knew a bit more about it.
As for an alternative, what is it you really need to do? Could you use a different way of looking at the issue, such as just keeping an integer index into the string? Could you use a array.array('u')?
A:
One (albeit slightly hacky) solution would be something like this:
f = open("test.c")
f.read(1)
myStr = f.read()
print myStr
It will skip the first character, and then read the data into your string variable.
A:
Depending on what you are doing, itertools.islice may be a suitable memory-efficient solution (should one become necessary).
|
How efficient is Python substring extraction?
|
I've got the entire contents of a text file (at least a few KB) in string myStr.
Will the following code create a copy of the string (less the first character) in memory?
myStr = myStr[1:]
I'm hoping it just refers to a different location in the same internal buffer. If not, is there a more efficient way to do this?
Thanks!
Note: I'm using Python 2.5.
|
[
"At least in 2.6, slices of strings are always new allocations; string_slice() calls PyString_FromStringAndSize(). It doesn't reuse memory--which is a little odd, since with invariant strings, it should be a relatively easy thing to do.\nShort of the buffer API (which you probably don't want), there isn't a more efficient way to do this operation.\n",
"As with most garbage collected languages, strings are created as often as needed, which is very often. The reason for this is because tracking substrings as described would make garbage collection more difficult.\nWhat is the actual algorithm you are trying to implement. It might be possible to give you advice for ways to get better results if we knew a bit more about it.\nAs for an alternative, what is it you really need to do? Could you use a different way of looking at the issue, such as just keeping an integer index into the string? Could you use a array.array('u')?\n",
"One (albeit slightly hacky) solution would be something like this:\nf = open(\"test.c\")\nf.read(1)\nmyStr = f.read()\nprint myStr\n\nIt will skip the first character, and then read the data into your string variable. \n",
"Depending on what you are doing, itertools.islice may be a suitable memory-efficient solution (should one become necessary).\n"
] |
[
4,
3,
1,
1
] |
[] |
[] |
[
"memory",
"performance",
"python",
"substring"
] |
stackoverflow_0002457367_memory_performance_python_substring.txt
|
Q:
Extract substructure from a text file using bash or python
I have a huge text file, which follows the structure:
SET
TAG1
...
...
SET
...
SET
TAG2
...
...
SET
...
...
I would like to extract for a specific TAG, (i.e. TAG54) its individual "substructure", which would be
SET
TAG54
...
...
SET
Each substructure, for a given TAG_i contains always:
first line:SET
second line:TAG_i (in this case TAG54)
an arbitrary number of lines
last line:SET
I wonder what would be the best way to do this, whether in bash or python, so for a given TAG, one can "extract" this substructure.
Thanks
A:
Here's a Python approach: you pass in the open file handle as the first argument, the tag number as second argument, and get back as the result a list of the relevant lines (including newline characters), or an empty line if the tag is not found in the file:
def lookfor(f, tagnum):
tag = 'TAG%s\n' % tagnum
for line in f:
if line == tag:
break
else: # file finished, tag not found
return []
result = ['SET\n', tag]
for line in f:
result.append(line)
if line == 'SET\n':
break
return result
This should be reasonably well-performing. If you want other forms of arguments and/or results, it shouldn't be hard to tweak accordingly, of course.
A:
If your system's grep supports -P for perl regexp:
grep -P 'SET\nTAG54\n[.\n]*\nSET' file.txt
A:
csplit -f tags input.txt '%^TAG54$%-1' '/^SET$/+1' '%.*%' '{*}'
A:
gawk:
BEGIN {
state=0
}
state==0 && $0=="TAG54" {
print "SET"
state=1
}
state==1 {
print
}
state==1 && $0=="SET" {
exit
}
A:
$ awk -vRS="SET" '/TAG54/{print RT$0RT}' file
SET
TAG54
...
...
SET
if you are doing it with shell scripting, pass your shell variable to awk using -v. eg
#!/bin/bash
read -r -p "what's your tag? " tag
awk -vRS="SET" -vt="$tag" '$0~tag{print RT$0RT}' file
|
Extract substructure from a text file using bash or python
|
I have a huge text file, which follows the structure:
SET
TAG1
...
...
SET
...
SET
TAG2
...
...
SET
...
...
I would like to extract for a specific TAG, (i.e. TAG54) its individual "substructure", which would be
SET
TAG54
...
...
SET
Each substructure, for a given TAG_i contains always:
first line:SET
second line:TAG_i (in this case TAG54)
an arbitrary number of lines
last line:SET
I wonder what would be the best way to do this, whether in bash or python, so for a given TAG, one can "extract" this substructure.
Thanks
|
[
"Here's a Python approach: you pass in the open file handle as the first argument, the tag number as second argument, and get back as the result a list of the relevant lines (including newline characters), or an empty line if the tag is not found in the file:\ndef lookfor(f, tagnum):\n tag = 'TAG%s\\n' % tagnum\n for line in f:\n if line == tag:\n break\n else: # file finished, tag not found\n return []\n result = ['SET\\n', tag]\n for line in f:\n result.append(line)\n if line == 'SET\\n':\n break\n return result\n\nThis should be reasonably well-performing. If you want other forms of arguments and/or results, it shouldn't be hard to tweak accordingly, of course.\n",
"If your system's grep supports -P for perl regexp:\ngrep -P 'SET\\nTAG54\\n[.\\n]*\\nSET' file.txt\n\n",
"csplit -f tags input.txt '%^TAG54$%-1' '/^SET$/+1' '%.*%' '{*}'\n\n",
"gawk:\nBEGIN {\n state=0\n}\n\nstate==0 && $0==\"TAG54\" {\n print \"SET\"\n state=1\n}\n\nstate==1 {\n print\n}\n\nstate==1 && $0==\"SET\" {\n exit\n}\n\n",
"$ awk -vRS=\"SET\" '/TAG54/{print RT$0RT}' file\nSET\nTAG54\n...\n...\nSET\n\nif you are doing it with shell scripting, pass your shell variable to awk using -v. eg\n#!/bin/bash\nread -r -p \"what's your tag? \" tag\nawk -vRS=\"SET\" -vt=\"$tag\" '$0~tag{print RT$0RT}' file\n\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"bash",
"python"
] |
stackoverflow_0002456813_bash_python.txt
|
Q:
How do I copy only the values and not the references from a Python list?
Specifically, I want to create a backup of a list, then make some changes to that list, append all the changes to a third list, but then reset the first list with the backup before making further changes, etc, until I'm finished making changes and want to copy back all the content in the third list to the first one. Unfortunately, it seems that whenever I make changes to the first list in another function, the backup gets changed also. Using original = backup didn't work too well; nor did using
def setEqual(restore, backup):
restore = []
for number in backup:
restore.append(number)
solve my problem; even though I successfully restored the list from the backup, the backup nevertheless changed whenever I changed the original list.
How would I go about solving this problem?
A:
You want copy.deepcopy() for this.
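For example, a minimal sketch showing that the backup is untouched by later changes to the original:
import copy

original = [[1, 2], [3, 4]]
backup = copy.deepcopy(original)
original[0].append(99)
print backup   # [[1, 2], [3, 4]] -- the nested lists were copied, not shared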
A:
The first thing to understand is why that setEqual method can't work: you need to know how identifiers work. (Reading that link should be very helpful.) For a quick rundown with probably too much terminology: in your function, the parameter restore is bound to an object, and you are merely re-binding that identifier with the = operator. Here are some examples of binding the identifier restore to things.
# Bind the identifier `restore` to the number object 1.
restore = 1
# Bind the identifier `restore` to the string object 'Some string.'
# The original object that `restore` was bound to is unaffected.
restore = 'Some string.'
So, in your function, when you say:
restore = []
You are actually binding restore to a new list object you're creating. Because Python has function-local scoping, restore in your example is binding the function-local identifier restore to the new list. This will not change anything you're passing in to setEqual as restore. For example,
test_variable = 1
setEqual(test_variable, [1, 2, 3, 4])
# Passes, because the identifier test_variable
# CAN'T be rebound within this scope from setEqual.
assert test_variable == 1
Simplifying a bit, you can only bind identifiers in the currently executing scope -- you can never write a function like def set_foo_to_bar(foo, bar) that affects the scope outside of that function. As @Ignacio says, you can use something like a copy function to rebind the identifier in the current scope:
original = [1, 2, 3, 4]
backup = list(original) # Make a shallow copy of the original.
backup.remove(3)
assert original == [1, 2, 3, 4] # It's okay!
|
How do I copy only the values and not the references from a Python list?
|
Specifically, I want to create a backup of a list, then make some changes to that list, append all the changes to a third list, but then reset the first list with the backup before making further changes, etc, until I'm finished making changes and want to copy back all the content in the third list to the first one. Unfortunately, it seems that whenever I make changes to the first list in another function, the backup gets changed also. Using original = backup didn't work too well; nor did using
def setEqual(restore, backup):
restore = []
for number in backup:
restore.append(number)
solve my problem; even though I successfully restored the list from the backup, the backup nevertheless changed whenever I changed the original list.
How would I go about solving this problem?
|
[
"You want copy.deepcopy() for this.\n",
"The first thing to understand is why that setEqual method can't work: you need to know how identifiers work. (Reading that link should be very helpful.) For a quick rundown with probably too much terminology: in your function, the parameter restore is bound to an object, and you are merely re-binding that identifier with the = operator. Here are some examples of binding the identifier restore to things.\n# Bind the identifier `restore` to the number object 1.\nrestore = 1\n# Bind the identifier `restore` to the string object 'Some string.'\n# The original object that `restore` was bound to is unaffected.\nrestore = 'Some string.'\n\nSo, in your function, when you say:\nrestore = []\n\nYou are actually binding restore to a new list object you're creating. Because Python has function-local scoping, restore in your example is binding the function-local identifier restore to the new list. This will not change anything you're passing in to setEqual as restore. For example,\ntest_variable = 1\nsetEqual(test_variable, [1, 2, 3, 4])\n# Passes, because the identifier test_variable\n# CAN'T be rebound within this scope from setEqual.\nassert test_variable == 1 \n\nSimplifying a bit, you can only bind identifiers in the currently executing scope -- you can never write a function like def set_foo_to_bar(foo, bar) that affects the scope outside of that function. As @Ignacio says, you can use something like a copy function to rebind the identifier in the current scope:\noriginal = [1, 2, 3, 4]\nbackup = list(original) # Make a shallow copy of the original.\nbackup.remove(3)\nassert original == [1, 2, 3, 4] # It's okay!\n\n"
] |
[
8,
5
] |
[] |
[] |
[
"backup",
"list",
"python",
"python_3.x",
"restore"
] |
stackoverflow_0002458904_backup_list_python_python_3.x_restore.txt
|
Q:
Is there a library in Python that can convert user-dates to timestamp?
If the month is: "12"
Day is: "05"
Year is: "2010"
Can this be converted into a timestamp somehow, in a very simple way?
A:
You can use the datetime module:
import datetime
d = datetime.date(year, month, day)
At this point, d is a date object.
If you want a timestamp from that, you can do the following:
import time
timestamp = time.mktime(d.timetuple())
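Since the question's month, day and year arrive as strings, a sketch of the whole conversion (assuming valid values and local time) might be:
import datetime
import time

month, day, year = "12", "05", "2010"            # user-supplied strings from the question
d = datetime.date(int(year), int(month), int(day))
timestamp = time.mktime(d.timetuple())           # seconds since the epoch, local time
print timestamp                                  # e.g. 1291500000.0, depending on timezone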
A:
import datetime
d = datetime.datetime(year=2010,day=5,month=12)
d
datetime.datetime(2010, 12, 5, 0, 0)
A:
In the interest of showing a man how to fish vs giving a man a fish...
A good starting point for these sorts of questions is the Python library documentation. If you look on that page for the word "date" you will easily find the datetime module.
A:
import time
>>> time.mktime((2010,12,5,0,0,0,0,0,0))
1291500000.0
|
Is there a library in Python that can convert user-dates to timestamp?
|
If the month is: "12"
Day is: "05"
Year is: "2010"
Can this be converted into a timestamp somehow, in a very simple way?
|
[
"You can use the datetime module:\nimport datetime\n\nd = datetime.date(year, month, day)\n\nAt this point, d is a date object.\nIf you want a timestamp from that, you can do the following:\nimport time\n\ntimestamp = time.mktime(d.timetuple())\n\n",
"import datetime\n\nd = datetime.datetime(year=2010,day=5,month=12)\n\nd\ndatetime.datetime(2010, 12, 5, 0, 0)\n\n",
"In the interest of showing a man how to fish vs giving a man a fish...\nA good starting point for these sorts of questions is the Python library documentation. If you look on that page for the word \"date\" you will easily find the datetime module.\n",
"import time\n\n>>> time.mktime((2010,12,5,0,0,0,0,0,0))\n1291500000.0\n\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"date",
"datetime",
"python",
"timestamp"
] |
stackoverflow_0002454088_date_datetime_python_timestamp.txt
|
Q:
Strange python error
I am trying to write a python program that calculates a histogram, given a list of numbers like:
1
3
2
3
4
5
3.2
4
2
2
so the input parameters are the filename and the number of intervals.
The program code is:
#!/usr/bin/env python
import os, sys, re, string, array, math
import numpy
Lista = []
db = sys.argv[1]
db_file = open(db,"r")
ic=0
nintervals= int(sys.argv[2])
while 1:
line = db_file.readline()
if not line:
break
ll=string.split(line)
#print ll[6]
Lista.insert(ic,float(ll[0]))
ic=ic+1
lmin=min(Lista)
print "min= ",lmin
lmax=max(Lista)
print "max= ",lmax
width=666.666
width=(lmax-lmin)/nintervals
print "width= ",width
nelements=len(Lista)
print "nelements= ",nelements
print " "
Histogram = numpy.zeros(shape=(nintervals))
for item in Lista:
#print item
int_number = 1 + int((item-lmin)/width)
print " "
print "item,lmin= ",item,lmin
print "(item-lmin)/width= ",(item-lmin)," / ",width," ====== ",(float(item)-float(lmin))/float(width)
print "int((item-lmin)/width)= ",int((item-lmin)/width)
print item , " belongs to interval ", int_number, " which is from ", lmin+width*(int_number-1), " to ",lmin+width*int_number
Histogram[int_number] = Histogram[int_number] + 1
4
but somehow I am completely lost, I get strange errors, can anybody help?
Thanks
P.D. These are the results of the output:
item,lmin= 1.0 1.0
(item-lmin)/width= 0.0 / 0.666666666667 ====== 0.0
int((item-lmin)/width)= 0
1.0 belongs to interval 1 which is from 1.0 to 1.66666666667
item,lmin= 2.0 1.0
(item-lmin)/width= 1.0 / 0.666666666667 ====== 1.5
int((item-lmin)/width)= 1
2.0 belongs to interval 2 which is from 1.66666666667 to 2.33333333333
item,lmin= 3.0 1.0
(item-lmin)/width= 2.0 / 0.666666666667 ====== 3.0
int((item-lmin)/width)= 3
3.0 belongs to interval 4 which is from 3.0 to 3.66666666667
Traceback (most recent call last):
File "from_list_to_histogram.py", line 43, in <module>
Histogram[int_number] = Histogram[int_number] + 1
IndexError: index out of bounds
The most important errors are:
(item-lmin)/width= 1.0 / 0.666666666667 ====== 1.5
and
IndexError: index out of bounds
A:
I believe the problem may be a peculiar off-by-one in the line:
int_number = 1 + int((item-lmin)/width)
Why that 1 +? Python indices on an array of length N are from 0 to N-1 included. The 1 + here makes int_number go from 1 to 1 + (lmax-lmin)/width, i.e. to 1 + nintervals given the formula for width, while you've sized Histogram to nintervals items -- so it's actually an off-by-two, worsened by the 1 + but present (for lmax only) even without it. Make the intervals an epsilon wider, so lmax falls inside the last one and not just beyond it, lose the 1 +, and things might work better.
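Applying that fix, a minimal self-contained sketch (using a hard-coded list instead of the original file loading) might look like:
import numpy

Lista = [1, 3, 2, 3, 4, 5, 3.2, 4, 2, 2]
nintervals = 3
lmin, lmax = min(Lista), max(Lista)
width = float(lmax - lmin) / nintervals
Histogram = numpy.zeros(nintervals)
for item in Lista:
    int_number = int((item - lmin) / width)   # 0-based bin index, no "1 +"
    if int_number >= nintervals:              # clamp so lmax falls in the last bin
        int_number = nintervals - 1
    Histogram[int_number] += 1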
A:
Here is a more Pythonic approach.
from itertools import groupby
from math import floor
data = [1,3,2,3,4,5,3.2,4,2,2,3.6]
data.sort()
nintervals = 3
lmax = max(data)
lmin = min(data)
width = 1.0*(lmax-lmin)/nintervals
def grouper(item):
return floor(1.0*(item-lmin)/width)
for i, b in groupby(data, grouper):
print '%.3f <= i < %.3f ' %(lmin + i * width, lmin + (i+1) * width), list(b)
A:
On the last line you access Histogram with an index that is too big. You should make sure that 'int_number' is at most len(Histogram) - 1.
There's probably a bug, which causes this problem.
A:
I just removed the code that loads from a file and rewrote it into something more readable.
from math import floor
Lista = [1,3,2,3,4,5,3.2,4,2,2]
ic=0
nintervals= 3
lmin=min(Lista)
print "min= ",lmin
lmax=max(Lista)
print "max= ",lmax
width=1.0*(lmax-lmin)/nintervals
print "width= ",width
nelements=len(Lista)
print "nelements= ",nelements
print " "
histogram =[0]*nintervals
for item in Lista:
ind = int(floor(1.0*(item-lmin)/width))
if ind==nintervals:
ind=ind-1
histogram[ind]+=1
for i,v in enumerate(histogram):
print "from", lmin+i*width, "to", lmin+(i+1)*width, "are",v,"values"
for i,v in enumerate(histogram):
print "Visual presentation:","="*int(round(v*40.0/lmax))
|
Strange python error
|
I am trying to write a python program that calculates a histogram, given a list of numbers like:
1
3
2
3
4
5
3.2
4
2
2
so the input parameters are the filename and the number of intervals.
The program code is:
#!/usr/bin/env python
import os, sys, re, string, array, math
import numpy
Lista = []
db = sys.argv[1]
db_file = open(db,"r")
ic=0
nintervals= int(sys.argv[2])
while 1:
line = db_file.readline()
if not line:
break
ll=string.split(line)
#print ll[6]
Lista.insert(ic,float(ll[0]))
ic=ic+1
lmin=min(Lista)
print "min= ",lmin
lmax=max(Lista)
print "max= ",lmax
width=666.666
width=(lmax-lmin)/nintervals
print "width= ",width
nelements=len(Lista)
print "nelements= ",nelements
print " "
Histogram = numpy.zeros(shape=(nintervals))
for item in Lista:
#print item
int_number = 1 + int((item-lmin)/width)
print " "
print "item,lmin= ",item,lmin
print "(item-lmin)/width= ",(item-lmin)," / ",width," ====== ",(float(item)-float(lmin))/float(width)
print "int((item-lmin)/width)= ",int((item-lmin)/width)
print item , " belongs to interval ", int_number, " which is from ", lmin+width*(int_number-1), " to ",lmin+width*int_number
Histogram[int_number] = Histogram[int_number] + 1
4
but somehow I am completely lost, I get strange errors, can anybody help?
Thanks
P.D. These are the results of the output:
item,lmin= 1.0 1.0
(item-lmin)/width= 0.0 / 0.666666666667 ====== 0.0
int((item-lmin)/width)= 0
1.0 belongs to interval 1 which is from 1.0 to 1.66666666667
item,lmin= 2.0 1.0
(item-lmin)/width= 1.0 / 0.666666666667 ====== 1.5
int((item-lmin)/width)= 1
2.0 belongs to interval 2 which is from 1.66666666667 to 2.33333333333
item,lmin= 3.0 1.0
(item-lmin)/width= 2.0 / 0.666666666667 ====== 3.0
int((item-lmin)/width)= 3
3.0 belongs to interval 4 which is from 3.0 to 3.66666666667
Traceback (most recent call last):
File "from_list_to_histogram.py", line 43, in <module>
Histogram[int_number] = Histogram[int_number] + 1
IndexError: index out of bounds
The most important errors are:
(item-lmin)/width= 1.0 / 0.666666666667 ====== 1.5
and
IndexError: index out of bounds
|
[
"I believe the problem may be a peculiar off-by one in the line:\nint_number = 1 + int((item-lmin)/width)\n\nWhy that 1 +? Python indices on an array of length N are from 0 to N-1 included. The 1 + here makes int_number go from 1 to 1 + (lmax-lmin)/width i.e. to 1 + nintervals given the formula for width, while you've sized Histogram to nintervals items -- so it's actually an off-by-two, worsened by the 1 + but it would be there (for lmax only) even without it. make the intervals an epsilon wider, so lmax falls inside the last one and not just beyond it, and lose the 1 +, and things might work better.\n",
"Here is a more Pythonic approach.\nfrom itertools import groupby\nfrom math import floor\n\ndata = [1,3,2,3,4,5,3.2,4,2,2,3.6]\ndata.sort()\n\nnintervals = 3\nlmax = max(data)\nlmin = min(data)\n\nwidth = 1.0*(lmax-lmin)/nintervals\n\ndef grouper(item): \n return floor(1.0*(item-lmin)/width)\n\nfor i, b in groupby(data, grouper):\n print '%.3f <= i < %.3f ' %(lmin + i * width, lmin + (i+1) * width), list(b)\n\n",
"On the last line you access Histogram with a too big index. You should make sure that 'int_number' is at most len(Histogram) - 1\nThere's probably a bug, which causes this problem.\n",
"I just removed code that loads from file and rewrite to something more readable \nfrom math import floor\n\nLista = [1,3,2,3,4,5,3.2,4,2,2]\nic=0\nnintervals= 3\n\nlmin=min(Lista)\nprint \"min= \",lmin\nlmax=max(Lista)\nprint \"max= \",lmax\n\nwidth=1.0*(lmax-lmin)/nintervals\nprint \"width= \",width\n\nnelements=len(Lista)\nprint \"nelements= \",nelements\nprint \" \"\nhistogram =[0]*nintervals\n\nfor item in Lista:\n ind = int(floor(1.0*(item-lmin)/width))\n if ind==nintervals:\n ind=ind-1\n histogram[ind]+=1\n\nfor i,v in enumerate(histogram):\n print \"from\", lmin+i*width, \"to\", lmin+(i+1)*width, \"are\",v,\"values\"\n\nfor i,v in enumerate(histogram):\n print \"Visual presentation:\",\"=\"*int(round(v*40.0/lmax))\n\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002457529_python.txt
|
Q:
Python importing
I have a file, myfile.py, which imports Class1 from file.py and file.py contains imports to different classes in file2.py, file3.py, file4.py.
In my myfile.py, can I access these classes or do I need to again import file2.py, file3.py, etc.?
Does Python automatically add all the imports included in the file I imported, and can I use them automatically?
A:
Best practice is to import every module that defines identifiers you need, and use those identifiers as qualified by the module's name; I recommend using from only when what you're importing is a module from within a package. The question has often been discussed on SO.
Importing a module, say moda, from many modules (say modb, modc, modd, ...) that need one or more of the identifiers moda defines, does not slow you down: moda's bytecode is loaded (and possibly build from its sources, if needed) only once, the first time moda is imported anywhere, then all other imports of the module use a fast path involving a cache (a dict mapping module names to module objects that is accessible as sys.modules in case of need... if you first import sys, of course!-).
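A tiny illustration of that cache, using the always-available os module:
import sys
import os

assert 'os' in sys.modules          # the module object is cached after the first import
assert sys.modules['os'] is os      # later imports just re-bind the same object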
A:
Python doesn't automatically introduce anything into the namespace of myfile.py, but you can access everything that is in the namespaces of all the other modules.
That is to say, if in file1.py you did from file2 import SomeClass and in myfile.py you did import file1, then you can access it within myfile as file1.SomeClass. If in file1.py you did import file2 and in myfile.py you did import file1, then you can access the class from within myfile as file1.file2.SomeClass. (These aren't generally the best ways to do it, especially not the second example.)
This is easily tested.
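For instance, with a hypothetical layout (file and class names invented for this sketch) the first case looks like:
# file2.py contains:  class SomeClass(object): pass
# file1.py contains:  from file2 import SomeClass
# myfile.py can then do:
import file1
obj = file1.SomeClass()      # works: SomeClass lives in file1's namespace too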
A:
In the myfile module, you can either do from file import ClassFromFile2 or from file2 import ClassFromFile2 to access ClassFromFile2, assuming that the class is also imported in file.
This technique is often used to simplify the API a bit. For example, a db.py module might import various things from the modules mysqldb, sqlalchemy and some other helpers. Then, everything can be accessed via the db module.
A:
If you are using a wildcard import, yes: a wildcard import creates new names in your current namespace for the contents of the imported module. If not, you need to qualify names with the namespace of the module you imported, as usual.
|
Python importing
|
I have a file, myfile.py, which imports Class1 from file.py and file.py contains imports to different classes in file2.py, file3.py, file4.py.
In my myfile.py, can I access these classes or do I need to again import file2.py, file3.py, etc.?
Does Python automatically add all the imports included in the file I imported, and can I use them automatically?
|
[
"Best practice is to import every module that defines identifiers you need, and use those identifiers as qualified by the module's name; I recommend using from only when what you're importing is a module from within a package. The question has often been discussed on SO.\nImporting a module, say moda, from many modules (say modb, modc, modd, ...) that need one or more of the identifiers moda defines, does not slow you down: moda's bytecode is loaded (and possibly build from its sources, if needed) only once, the first time moda is imported anywhere, then all other imports of the module use a fast path involving a cache (a dict mapping module names to module objects that is accessible as sys.modules in case of need... if you first import sys, of course!-).\n",
"Python doesn't automatically introduce anything into the namespace of myfile.py, but you can access everything that is in the namespaces of all the other modules. \nThat is to say, if in file1.py you did from file2 import SomeClass and in myfile.py you did import file1, then you can access it within myfile as file1.SomeClass. If in file1.py you did import file2 and in myfile.py you did import file1, then you can access the class from within myfile as file1.file2.SomeClass. (These aren't generally the best ways to do it, especially not the second example.)\nThis is easily tested.\n",
"In the myfile module, you can either do from file import ClassFromFile2 or from file2 import ClassFromFile2 to access ClassFromFile2, assuming that the class is also imported in file.\nThis technique is often used to simplify the API a bit. For example, a db.py module might import various things from the modules mysqldb, sqlalchemy and some other helpers. Than, everything can be accessed via the db module.\n",
"If you are using wildcard import, yes, wildcard import actually is the way of creating new aliases in your current namespace for contents of the imported module. If not, you need to use the namespace of the module you have imported as usual.\n"
] |
[
11,
1,
0,
0
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0002459300_import_python.txt
|
Q:
How to determine if the variable is a function in Python?
Since functions are values in Python, how do I determine if the variable is a function?
For example:
boda = len # boda is the length function now
if ?is_var_function(boda)?:
print "Boda is a function!"
else:
print "Boda is not a function!"
Here hypothetical ?is_var_function(x)? should return true if x is a callable function, and false if it is not.
A:
The callable built-in mentioned in other answers doesn't answer your question as posed, because it also returns True, besides functions, for methods, classes, instances of classes which define a __call__ method. If your question's title and text are wrong, and you don't care if something is in fact a function but only if it's callable, then use that builtin. But the best answer to your question as posed is: import the inspect method of Python's standard library, and use inspect.isfunction. (There are other, lower-abstraction ways, but it's always a good idea to use functionality of the inspect module for introspection when it's there, in preference to lower-level approaches: inspect helps keep your code concise, clear, robust, and future-proof).
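A short comparison sketch; note that a built-in such as len needs inspect.isroutine (or inspect.isbuiltin) rather than inspect.isfunction:
import inspect

def my_func():
    pass

print inspect.isfunction(my_func)    # True: a plain Python-level function
print inspect.isfunction(len)        # False in CPython: len is a built-in
print inspect.isroutine(len)         # True: isroutine also covers built-ins and methods
print callable(len)                  # True: callable is the loosest check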
A:
You may use inspect.isfunction(object). See: docs.python.org
That said, you should avoid using this technique in your day-to-day code. Sometimes you actually need to use reflection - for example, an MVC framework might need to load a class and inspect its members. But usually you should return/pass/deal with objects that have the same "interface". For example, do not return an object that may be an integer or a function - always return the same "type" of object so your code can be consistent.
A:
There is a callable function in python.
if callable(boda):
A:
Use callable(boda) to determine whether boda is callable or not.
Callable here means being a function, a method, or even a class. But since you only want to distinguish between variables and functions, it should work nicely.
Your code would then look like:
boda = len # boda is the length function now
if callable(boda):
print "Boda is a function!"
else:
print "Boda is not a function!"
A:
>>> import types
>>> def f(): pass
...
>>> x = 0
>>> type(f) == types.FunctionType
True
>>> type(x) == types.FunctionType
False
That will check if it's a function. callable() will check if it's callable (that is, it has a __call__ method), so it will return true on a class as well as a function or method.
A:
It's not a 100% perfect solution, but you might want to check out the "callable" built-in function:
http://docs.python.org/library/functions.html
A:
And what should it return for a property, which you access like a value attribute but actually invokes a function? Your seemingly simple question actually opens up a whole can of worms in Python. Which means things like callable and isfunction are good to know about for your education and introspection, but are probably not things you want to rely on when interfacing with other code.
Tangentially: See the Turtles All The Way Down presentation for more on how Python is put together.
A:
The philosophy of how to use different objects in a Python program is called duck typing: if it looks like a duck, quacks like a duck, and walks like a duck, it's a duck. Objects are grouped not by what their type is but by what they are capable of doing, and this even extends to functions. When writing a Python program, you should always know what all your objects can do and use them without checking.
For example, I could define a function
def add_three(a, b, c):
return a + b + c
and mean for it to be used with three floats. But by not checking this, I have a much more useful function—I can use it with ints, with decimal.Decimals, or with fractions.Fractions, for example.
The same applies to having a function. If I know I have a function and want to call it, I should just call it. Maybe what I have is a function and maybe I have a different callable (like a bound method or an instance of an arbitrary class that defines __call__) that could be just as good. By not checking anything, I make my code able to handle a wide range of circumstances I might not even have thought of in advance.
In the case of callables, I can pretty reliably determine whether I have one or not, but for the sake of my code's simplicity, I shouldn't want to. If someone passes in something that's not callable, you'll get an error when you call it anyways. If I'm writing code that accepts a parameter that may be a callable or not and I do different things depending on it, it sounds like I should improve my API by defining two functions to do these two different things.
If you did have a way to handle the case where the caller passed in something that isn't function-like (and this wasn't just the result of an insane API), the right solution would be to catch the TypeError that is raised when you try to call something that can't be called. In general, it's better to try to do something and recover if it fails rather than to check ahead of time. (Recall the cliché, "It's easier to ask forgiveness than permission.") Checking ahead of time can lead to unexpected problems based on subtle errors in logic and can lead to race conditions.
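As a sketch of that try-it-and-recover approach (the helper name is invented for illustration):
def run_if_callable(task):
    try:
        return task()
    except TypeError:
        # task was not callable (or was called wrongly); recover here
        return None

print run_if_callable(list)   # [] -- list is callable with no arguments
print run_if_callable(42)     # None -- 42 is not callable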
Why do you think you need to typecheck?
|
How to determine if the variable is a function in Python?
|
Since functions are values in Python, how do I determine if the variable is a function?
For example:
boda = len # boda is the length function now
if ?is_var_function(boda)?:
print "Boda is a function!"
else:
print "Boda is not a function!"
Here hypothetical ?is_var_function(x)? should return true if x is a callable function, and false if it is not.
|
[
"The callable built-in mentioned in other answers doesn't answer your question as posed, because it also returns True, besides functions, for methods, classes, instances of classes which define a __call__ method. If your question's title and text are wrong, and you don't care if something is in fact a function but only if it's callable, then use that builtin. But the best answer to your question as posed is: import the inspect method of Python's standard library, and use inspect.isfunction. (There are other, lower-abstraction ways, but it's always a good idea to use functionality of the inspect module for introspection when it's there, in preference to lower-level approaches: inspect helps keep your code concise, clear, robust, and future-proof).\n",
"You may use inspect.isfunction(object). See: docs.python.org\nThat said, you should avoid using this technique in your day-to-day code. Sometimes you actually need to use reflection - for example, an MVC framework might need to load a class and inspect its members. But usually you should return/pass/deal with objects that have the same \"interface\". For example, do not return an object that may be an integer or a function - always return the same \"type\" of object so your code can be consistent.\n",
"There is a callable function in python.\n if callable(boda):\n\n",
"Use callable(boda) to determine whether boda is callable or not. \nCallable means here being a function, method or even a class. But since you want to distinguish only between variables and functions it should work nicely.\nYour code would then look like:\nboda = len # boda is the length function now\nif callable(boda):\n print \"Boda is a function!\"\nelse:\n print \"Boda is not a function!\"\n\n",
">>> import types\n>>> def f(): pass\n...\n>>> x = 0\n>>> type(f) == types.FunctionType\nTrue\n>>> type(x) == types.FunctionType\nFalse\n\nThat will check if it's a function. callable() will check if it's callable (that is, it has a __call__ method), so it will return true on a class as well as a function or method.\n",
"It's not a 100% perfect solution, but you might want to check out the \"callable\" built-in function:\nhttp://docs.python.org/library/functions.html\n",
"And what should it return for a property, which you access like a value attribute but actually invokes a function? Your seemingly simple question actually opens up a whole can of worms in Python. Which means things like callable and isfunction are good to know about for your education and introspection, but are probably not things you want to rely on when interfacing with other code.\nTangentially: See the Turtles All The Way Down presentation for more on how Python is put together.\n",
"The philosophy of how to use different objects in a Python program is called duck typing—if it looks like a duck, quacks like duck, and walks like a duck, it's a duck. Objects are grouped not by what their type is but what they are capable of doing, and this even extends to functions. When writing a Python program, you should always know what all your objects can do and use them without checking.\nFor example, I could define a function\ndef add_three(a, b c):\n return a + b + c\n\nand mean for it to be used with three floats. But by not checking this, I have a much more useful function—I can use it with ints, with decimal.Decimals, or with fractions.Fractions, for example.\nThe same applies to having a function. If I know I have a function and want to call it, I should just call it. Maybe what I have is a function and maybe I have a different callable (like a bound method or an instance of an arbitrary class that defines __call__) that could be just as good. By not checking anything, I make my code able to handle a wide range of circumstances I might not even have thought of in advance.\nIn the case of callables, I can pretty reliably determine whether I have one or not, but for the sake of my code's simplicity, I shouldn't want to. If someone passes in something that's not callable, you'll get an error when you call it anyways. If I'm writing code that accepts a parameter that may be a callable or not and I do different things depending on it, it sounds like I should improve my API by defining two functions to do these two different things.\nIf you did have a way to handle the case where the caller passed in something that isn't function-like (and this wasn't just the result of an insane API), the right solution would be to catch the TypeError that is raised when you try to call something that can't be called. In general, it's better to try to do something and recover if it fails rather than to check ahead of time. (Recall the cliché, \"It's easier to ask forgiveness than permission.\") Checking ahead of time can lead to unexpected problems based on subtle errors in logic and can lead to race conditions.\nWhy do you think you need to typecheck?\n"
] |
[
18,
14,
9,
4,
2,
1,
0,
0
] |
[] |
[] |
[
"function",
"python"
] |
stackoverflow_0002459329_function_python.txt
|
Q:
How to schedule hundreds of thousands of tasks?
We have hundreds of thousands of tasks that need to be run at a variety of arbitrary intervals, some every hour, some every day, and so on. The tasks are resource intensive and need to be distributed across many machines.
Right now tasks are stored in a database with an "execute at this time" timestamp. To find tasks that need to be executed, we query the database for jobs that are due to be executed, then update the timestamps when the task is complete. Naturally this leads to a substantial write load on the database.
As far as I can tell, we are looking for something to release tasks into a queue at a set interval. (Workers could then request tasks from that queue.)
What is the best way to schedule recurring tasks at scale?
For what it's worth we're largely using Python, although we have no problems using components (RabbitMQ?) written in other languages.
UPDATE: Right now we have about 350,000 tasks that run every half hour or so, with some variation. 350,000 tasks * 48 times per day is 16,800,000 tasks executed per day.
UPDATE 2: There are no dependencies. The tasks do not have to be executed in order and do not rely on previous results.
A:
Since ACID isn't needed and you're okay with tasks potentially running twice, I wouldn't keep the timestamps in the database at all. For each task, create a list of [timestamp_of_next_run, task_id] and use a min-heap to store all of the lists. Python's heapq module can maintain the heap for you. You'll be able to very efficiently pop off the task with the soonest timestamp. When you need to run a task, use its task_id to look up in the database what the task needs to do. When a task completes, update the timestamp and put it back into the heap. (Just be careful not to change an item that's currently in the heap, as that will break the heap properties).
Use the database only to store information that you will still care about after a crash and reboot. If you won't need the information after a reboot, don't spend the time writing to disk. You will still have a lot of database read operations to load the information about a task that needs to run, but a read is much cheaper than a write.
If you don't have enough RAM to store all of the tasks in memory at the same time, you could go with a hybrid setup where you keep the tasks for the next 24 hours (for example) in RAM and everything else stays in the database. Alternately, you could rewrite the code in C or C++, which are less memory hungry.
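A rough, bounded sketch of that heap loop (task ids and the half-hour interval are placeholders, and the database lookup is left as a comment):
import heapq
import time

# entries are [next_run_timestamp, task_id]; ids 0..4 are placeholders
heap = [[time.time() + i, i] for i in range(5)]
heapq.heapify(heap)

for _ in range(5):                                 # bounded demo; a real scheduler loops forever
    next_run, task_id = heapq.heappop(heap)        # task with the soonest timestamp
    delay = next_run - time.time()
    if delay > 0:
        time.sleep(delay)
    print "running task", task_id                  # look the task up in the DB by task_id here
    heapq.heappush(heap, [next_run + 1800, task_id])   # reschedule half an hour later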
A:
If you don't want a database, you could store just the next run timestamp and task id in memory. You could store the properties for each task in a file named [task_id].txt. You would need a data structure to store all the tasks, sorted by timestamp in memory; an AVL tree seems like it would work, and here's a simple one for Python: http://bjourne.blogspot.com/2006/11/avl-tree-in-python.html. Hopefully Linux (I assume that's what you are running on) can handle millions of files in a directory; otherwise you might need to hash on the task id to get a sub folder.
Your master server would just need to run a loop, popping off tasks out of the AVL tree until the next task's timestamp is in the future. Then you could sleep for a few seconds and start checking again. Whenever a task runs, you would update the next run timestamp in the task file and re-insert it into the AVL tree.
When the master server reboots, there would be the overhead of reloading all tasks id and next run timestamp back into memory, so that might be painful with millions of files. Maybe you just have one giant file and give each task 1K space in the file for properties and next run timestamp and then use [task_id] * 1K to get to the right offset for the task properties.
If you are willing to use a database, I am confident MySQL could handle whatever you throw at it given the conditions you describe, assuming you have 4GB+ RAM and several hard drives in RAID 0+1 on your master server.
Finally, if you really want to get complicated, Hadoop might work too: http://hadoop.apache.org/
A:
If you're worried about writes, you can have a set of servers that dispatch the tasks (maybe stripe the servers to equalize load) and have each server write bulk checkpoints to the DB (this way, you will not have so many write queries). You still have to write to be able to recover if a scheduling server dies, of course.
In addition, if you don't have a clustered index on timestamp, you will avoid having a hot-spot at the end of the table.
A:
350,000 tasks * 48 times per day is 16,800,000 tasks executed per day.
To schedule the jobs, you don't need a database.
Databases are for things that are updated. The only update visible here is a change to the schedule to add, remove or reschedule a job.
Cron does this in a totally scalable fashion with a single flat file.
Read the entire flat file into memory, start spawning jobs. Periodically, check the fstat to see if the file changed. Or, even better, wait for a HUP signal and use that to reread the file. Use kill -HUP to signal the scheduler to reread the file.
It's unclear what you're updating the database for.
If the database is used to determine the future schedule based on job completion, then a single database is a Very Bad Idea.
If you're using the database to do some analysis of job history, then you have a simple data warehouse.
Record completion information (start time, end time, exit status, all that stuff) in a simple flat log file.
Process the flat log files to create a fact table and dimension updates.
When someone has the urge to do some analysis, load relevant portions of the flat log files into a datamart so they can do queries and counts and averages and the like.
Do not directly record 17,000,000 rows per day into a relational database. No one wants all that data. They want summaries: counts and averages.
A:
Why hundreds of thousands and not hundreds of millions? :evil:
I think you need Stackless Python, http://www.stackless.com/, created by the genius Christian Tismer.
Quoting:
Stackless Python is an enhanced version of the Python programming language. It allows programmers to reap the benefits of thread-based programming without the performance and complexity problems associated with conventional threads. The microthreads that Stackless adds to Python are a cheap and lightweight convenience which can, if used properly, give the following benefits: improved program structure, more readable code, and increased programmer productivity.
It is used for massively multiplayer games.
|
How to schedule hundreds of thousands of tasks?
|
We have hundreds of thousands of tasks that need to be run at a variety of arbitrary intervals, some every hour, some every day, and so on. The tasks are resource intensive and need to be distributed across many machines.
Right now tasks are stored in a database with an "execute at this time" timestamp. To find tasks that need to be executed, we query the database for jobs that are due to be executed, then update the timestamps when the task is complete. Naturally this leads to a substantial write load on the database.
As far as I can tell, we are looking for something to release tasks into a queue at a set interval. (Workers could then request tasks from that queue.)
What is the best way to schedule recurring tasks at scale?
For what it's worth we're largely using Python, although we have no problems using components (RabbitMQ?) written in other languages.
UPDATE: Right now we have about 350,000 tasks that run every half hour or so, with some variation. 350,000 tasks * 48 times per day is 16,800,000 tasks executed per day.
UPDATE 2: There are no dependencies. The tasks do not have to be executed in order and do not rely on previous results.
|
[
"Since ACID isn't needed and you're okay with tasks potentially running twice, I wouldn't keep the timestamps in the database at all. For each task, create a list of [timestamp_of_next_run, task_id] and use a min-heap to store all of the lists. Python's heapq module can maintain the heap for you. You'll be able to very efficiently pop off the task with the soonest timestamp. When you need to run a task, use its task_id to look up in the database what the task needs to do. When a task completes, update the timestamp and put it back into the heap. (Just be careful not to change an item that's currently in the heap, as that will break the heap properties).\nUse the database only to store information that you will still care about after a crash and reboot. If you won't need the information after a reboot, don't spend the time writing to disk. You will still have a lot of database read operations to load the information about a task that needs to run, but a read is much cheaper than a write.\nIf you don't have enough RAM to store all of the tasks in memory at the same time, you could go with a hybrid setup where you keep the tasks for the next 24 hours (for example) in RAM and everything else stays in the database. Alternately, you could rewrite the code in C or C++, which are less memory hungry.\n",
"If you don't want a database, you could store just the next run timestamp and task id in memory. You could store the properties for each task in a file named [task_id].txt. You would need a data structure to store all the tasks, sorted by timestamp in memory, an AVL tree seems like it would work, here's a simple one for python: http://bjourne.blogspot.com/2006/11/avl-tree-in-python.html. Hopefully Linux (I assume that's what you are running on) could handle millions of files in a directory, otherwise you might need to hash on the task id to get a sub folder).\nYour master server would just need to run a loop, popping off tasks out of the AVL tree until the next task's timestamp is in the future. Then you could sleep for a few seconds and start checking again. Whenever a task runs, you would update the next run timestamp in the task file and re-insert it into the AVL tree.\nWhen the master server reboots, there would be the overhead of reloading all tasks id and next run timestamp back into memory, so that might be painful with millions of files. Maybe you just have one giant file and give each task 1K space in the file for properties and next run timestamp and then use [task_id] * 1K to get to the right offset for the task properties.\nIf you are willing to use a database, I am confident MySQL could handle whatever you throw at it given the conditions you describe, assuming you have 4GB+ RAM and several hard drives in RAID 0+1 on your master server.\nFinally, if you really want to get complicated, Hadoop might work too: http://hadoop.apache.org/\n",
"If you're worried about writes, you can have a set of servers that dispatch the tasks (may be stripe the servers to equalize load) and have each server write bulk checkpoints to the DB (this way, you will not have so many write queries). You still have to write to be able to recover if scheduling server dies, of course.\nIn addition, if you don't have a clustered index on timestamp, you will avoid having a hot-spot at the end of the table.\n",
"\n350,000 tasks * 48 times per day is\n 16,800,000 tasks executed per day.\n\nTo schedule the jobs, you don't need a database.\nDatabases are for things that are updated. The only update visible here is a change to the schedule to add, remove or reschedule a job.\nCron does this in a totally scalable fashion with a single flat file.\nRead the entire flat file into memory, start spawning jobs. Periodically, check the fstat to see if the file changed. Or, even better, wait for a HUP signal and use that to reread the file. Use kill -HUP to signal the scheduler to reread the file.\nIt's unclear what you're updating the database for.\nIf the database is used to determine future schedule based on job completion, then a single database is a Very Dad Idea.\nIf you're using the database to do some analysis of job history, then you have a simple data warehouse.\n\nRecord completion information (start time, end time, exit status, all that stuff) in a simple flat log file.\nProcess the flat log files to create a fact table and dimension updates.\n\nWhen someone has the urge to do some analysis, load relevant portions of the flat log files into a datamart so they can do queries and counts and averages and the like.\nDo not directly record 17,000,000 rows per day into a relational database. No one wants all that data. They want summaries: counts and averages. \n",
"Why hundreds of thousands and not hundreds of millions ? :evil:\nI think you need stackless python, http://www.stackless.com/. created by the genius of Christian Tismer.\nQuoting \n\nStackless Python is an enhanced\n version of the Python programming\n language. It allows programmers to\n reap the benefits of thread-based\n programming without the performance\n and complexity problems associated\n with conventional threads. The\n microthreads that Stackless adds to\n Python are a cheap and lightweight\n convenience which can if used\n properly, give the following benefits:\n Improved program structure. More\n readable code. Increased programmer\n productivity.\n\nIs used for massive multiplayer games.\n"
] |
[
5,
3,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002458296_python.txt
|