Q:
SQLAlchemy autocommitting?
I have an issue with SQLAlchemy apparently committing. A rough sketch of my code:
trans = self.conn.begin()
try:
    assert not self.conn.execute(my_obj.__table__.select(my_obj.id == id)).first()
    self.conn.execute(my_obj.__table__.insert().values(id=id))
    assert not self.conn.execute(my_obj.__table__.select(my_obj.id == id)).first()
except:
    trans.rollback()
    raise
I don't commit, and the second assert always fails! In other words, it seems the data is getting inserted into the database even though the code is within a transaction! Is this assessment accurate?
A:
You're right that the changes aren't being committed to the DB. But they are auto-flushed by SQLAlchemy when you perform a query; in your case, the flush happens on the lines with the asserts. So if you never explicitly call commit, you will never see these changes in the DB among the real data. However, you will get them back as long as you use the same conn object.
You can pass autoflush=False to the session constructor to disable this behavior.
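The behaviour the second assert trips over can be reproduced with the standard library's sqlite3 module alone, with no SQLAlchemy involved: a connection always sees its own uncommitted writes until they are rolled back.

```python
import sqlite3

# Minimal sketch of the same visibility rule: inside an open
# transaction, the writing connection sees its own uncommitted rows,
# but a rollback discards them and nothing ever reaches the database.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions by hand
conn.execute("CREATE TABLE my_obj (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("INSERT INTO my_obj (id) VALUES (1)")
visible = conn.execute("SELECT id FROM my_obj WHERE id = 1").fetchone()
assert visible == (1,)   # the same connection sees the uncommitted row

conn.execute("ROLLBACK")
gone = conn.execute("SELECT id FROM my_obj WHERE id = 1").fetchone()
assert gone is None      # nothing was committed, so the row is gone
```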
Q:
Google OAuth and local dev
I am trying to use Google OAuth to import a user's contacts. In order to get a consumer key and secret key for your app, you have to verify your domain at https://www.google.com/accounts/ManageDomains. Google only allows domains without ports. I want to test and build the app locally, so usually (for Facebook and LinkedIn apps) I use a reverse SSH tunnel, for example http://6pna.com:30002
Has anyone used a tunnel with Google OAuth? Does it work? So far I have only verified my app's domain, but my requests come from the tunnel (a different domain), so OAuth fails (although I get to Google and authorize my app).
Any tips or hints? Thanks
A:
Well, after trial and error I found out that the request's domain is irrelevant.
A:
I just use the official gdata Google auth library: http://code.google.com/p/gdata-python-client
Here is some code:
google_auth_url = None
if not current_user.gmail_authorized:
    google = gdata.contacts.service.ContactsService(source=GOOGLE_OAUTH_SETTINGS['APP_NAME'])
    google.SetOAuthInputParameters(GOOGLE_OAUTH_SETTINGS['SIG_METHOD'],
                                   GOOGLE_OAUTH_SETTINGS['CONSUMER_KEY'],
                                   consumer_secret=GOOGLE_OAUTH_SETTINGS['CONSUMER_SECRET'])
    if not request.vars.oauth_verifier:
        req_token = google.FetchOAuthRequestToken(scopes=GOOGLE_OAUTH_SETTINGS['SCOPES'],
            oauth_callback="http://" + request.env.http_host + URL(r=request, c='default', f='import_accounts'))
        session['oauth_token_secret'] = req_token.secret
        google_auth_url = google.GenerateOAuthAuthorizationURL()
    else:
        oauth_token = gdata.auth.OAuthTokenFromUrl(request.env.request_uri)
        if oauth_token:
            oauth_token.secret = session['oauth_token_secret']
            oauth_token.oauth_input_params = google.GetOAuthInputParameters()
            google.SetOAuthToken(oauth_token)
            access_token = google.UpgradeToOAuthAccessToken(oauth_verifier=request.vars.oauth_verifier)
            # store access_token
            #google.GetContactsFeed()  # do the processing, or do it in AJAX (but first update the user)
Q:
Getting values from Multiple Text Entry using Pygtk and Python
A click on a button named "Add Textbox" calls a function which creates a single textbox using gtk.Entry. So each time I click that button, it creates a textbox. I have a submit button which should fetch the values of all the textboxes (say 10 textboxes) generated with the name "entry". It works for one textbox but not for multiple. In PHP we can create dynamic textboxes by declaring an array name, name=entry[]. Do we have similar functionality in Python?
Environment: FC10, Glade 3, Python 2.5, GTK.
A:
You could be a bit clearer, it's not obvious what you do with your GtkEntry after creating it. The easiest thing would be to just add it to a Python list, so you can iterate over all created GtkEntry widgets later.
Or, you could "tag" the widgets with something to make them identifiable, and iterate over the containing widgets (assuming you really do add the widget to a window or something).
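A runnable sketch of the "keep the widgets in a list" pattern. The Entry class below is a stand-in so the example runs without PyGTK; in the real application you would append gtk.Entry() instances and read them with entry.get_text() in exactly the same way.

```python
# Stand-in for gtk.Entry so this sketch runs without PyGTK installed.
class Entry(object):
    def __init__(self):
        self._text = ""
    def set_text(self, text):
        self._text = text
    def get_text(self):
        return self._text

entries = []  # shared list, filled by the "Add Textbox" handler

def on_add_textbox_clicked():
    entry = Entry()        # with PyGTK: entry = gtk.Entry()
    entries.append(entry)  # remember it so submit can reach it later
    return entry

def on_submit_clicked():
    # The Python equivalent of PHP's name="entry[]": iterate the list.
    return [e.get_text() for e in entries]

for i in range(3):
    on_add_textbox_clicked().set_text("value %d" % i)

print(on_submit_clicked())  # ['value 0', 'value 1', 'value 2']
```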
Q:
Can we display a glass bar chart in Python with Google App Engine?
I am using a bar chart and I want to use a glass bar chart instead, but tutorials are given for PHP only.
A:
Disclaimer: I don't know what a "glass bar chart" is.
You cannot (or at least it is not efficient to) generate graphics (charts) on the App Engine servers. However, if you want to display bar charts or any other kind of plots and charts in your App Engine applications, you have two other solutions:
A) Use an external chart plotting service to produce plots. Google Charts is a popular choice. There are some Python libraries which can help here:
pygooglechart
google-chartwrapper
graphy
B) Plot anything on the client (use Javascript to plot charts). There are some Javascript libraries which may help you:
Raphaël
gRaphaël
and many others
Generally, A is easier and more accessible, but B can produce more eye-candy. If you need interactive charts, you should choose B.
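A sketch of option A without any helper library: composing an image URL for the classic Google Image Charts endpoint by hand. The parameter names (cht, chs, chd, chds) are from that API; the endpoint and the exact "glass" styling options would need checking against its documentation.

```python
from urllib.parse import urlencode  # urllib.urlencode on Python 2

def bar_chart_url(values, size="400x200"):
    params = {
        "cht": "bvs",                                    # vertical bar chart
        "chs": size,                                     # width x height in pixels
        "chd": "t:" + ",".join(str(v) for v in values),  # text-encoded data
        "chds": "0,%d" % max(values),                    # data scaling range
    }
    return "https://chart.googleapis.com/chart?" + urlencode(params)

url = bar_chart_url([10, 40, 25])
print(url)
```

The returned URL can be dropped straight into an img tag in the rendered page, so nothing is generated server-side.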
Q:
What is a faster way of merging the values of this Python structure into a single dictionary?
I've refactored how the merged-dictionary (all_classes) below is created, but I'm wondering if it can be more efficient.
I have a dictionary of dictionaries, like this:
groups_and_classes = {'group_1': {'class_A': [1, 2, 3],
                                  'class_B': [1, 3, 5, 7],
                                  'class_c': [1, 2],  # ...many more items like this
                                  },
                      'group_2': {'class_A': [11, 12, 13],
                                  'class_C': [5, 6, 7, 8, 9]
                                  },  # ...and many more items like this
                      }
A function creates a new object from groups_and_classes like this (the function to create this is called often):
all_classes = {'class_A': [1, 2, 3, 11, 12, 13],
               'class_B': [1, 3, 5, 7, 9],
               'class_C': [1, 2, 5, 6, 7, 8, 9]
               }
Right now, there is a loop that does this:
all_classes = {}
for group in groups_and_classes.values():
    for c, vals in group.iteritems():
        for v in vals:
            if all_classes.has_key(c):
                if v not in all_classes[c]:
                    all_classes[c].append(v)
            else:
                all_classes[c] = [v]
So far, I changed the code to use a set instead of a list since the order of the list doesn't matter and the values need to be unique:
all_classes = {}
for group in groups_and_classes.values():
    for c, vals in group.iteritems():
        try:
            all_classes[c].update(set(vals))
        except KeyError:
            all_classes[c] = set(vals)
This is a little nicer, and I didn't have to convert the sets to lists because of how all_classes is used in the code.
Question: Is there a more efficient way of creating all_classes (aside from building it at the same time groups_and_classes is built, and changing everywhere this function is called)?
A:
Here's a tweak for conciseness, though I'm not sure about performance:
from collections import defaultdict

all_classes = defaultdict(set)
for group in groups_and_classes.values():
    for c, vals in group.iteritems():
        all_classes[c].update(set(vals))
Defaultdicts are not quite the greatest thing since sliced bread, but they're pretty cool. :)
A:
One thing that might improve things slightly is to avoid the redundant conversion to a set, and just use:
all_classes[c].update(vals)
update can actually take an arbitrary iterable, as it essentially just iterates and adds, so you can avoid an extra conversion step.
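Both suggestions combined into one runnable sketch; dict.items() here stands in for Python 2's iteritems() used elsewhere on this page, and the input data is a shortened version of the example above.

```python
from collections import defaultdict

groups_and_classes = {
    'group_1': {'class_A': [1, 2, 3], 'class_B': [1, 3, 5, 7]},
    'group_2': {'class_A': [11, 12, 13], 'class_C': [5, 6, 7, 8, 9]},
}

all_classes = defaultdict(set)
for group in groups_and_classes.values():
    for c, vals in group.items():
        all_classes[c].update(vals)  # update() takes any iterable, no set() needed

assert all_classes['class_A'] == {1, 2, 3, 11, 12, 13}
assert all_classes['class_B'] == {1, 3, 5, 7}
assert all_classes['class_C'] == {5, 6, 7, 8, 9}
```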
A:
Combining Dictionaries Of Lists In Python.
def merge_dols(dol1, dol2):
    result = dict(dol1, **dol2)
    result.update((k, dol1[k] + dol2[k]) for k in set(dol1).intersection(dol2))
    return result
g1 = groups_and_classes['group_1']
g2 = groups_and_classes['group_2']
all_classes = merge_dols(g1,g2)
OR
all_classes = reduce(merge_dols,groups_and_classes.values())
--copied from Alex Martelli
If you have more than two groups, you can use reduce (functools.reduce in Python 3):
all_classes = reduce(merge_dols,groups_and_classes.values())
Q:
how to read url data
from google.appengine.ext import webapp
from google.appengine.ext.webapp import util
from google.appengine.ext import db
from google.appengine.api import urlfetch

class TrakHtml(db.Model):
    hawb = db.StringProperty(required=False)
    htmlData = db.TextProperty()

class MainHandler(webapp.RequestHandler):
    def get(self):
        Traks = list()
        Traks.append('93332134')
        #Traks.append('91779831')
        #Traks.append('92782244')
        #Traks.append('38476214')
        for st in Traks:
            trak = TrakHtml()
            trak.hawb = st
            url = 'http://etracking.cevalogistics.com/eTrackResultsMulti.aspx?sv=' + st
            result = urlfetch.fetch(url)
            self.response.out.write(result.read())
            ***trak.htmlData = result.read()
            trak.put()
            #self.response.out.write(st)

def main():
    application = webapp.WSGIApplication([('/', MainHandler)], debug=True)
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
I am getting an error at the *** line; it is not reading the URL data.
A:
You read the result twice (once in self.response.out.write and once a line below).
Store the value as a string first:
htmlData = result.read()
self.response.out.write(htmlData)
trak.htmlData = htmlData
I would expect result.read() to move to the end of the result stream - think of it like a book: Reading a book, you flip page by page. When you get to the end, trying to read gets difficult - unless you rewind to the beginning.
Also, please state the error message - that is often a tremendous help in diagnosing a problem!
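The read-once behaviour is easy to demonstrate with any file-like object; the sketch below uses io.BytesIO as a stand-in for the fetch result. (As an aside, App Engine's urlfetch.fetch() returns a response object whose body is normally accessed through its content attribute rather than a file-like read(), which may itself be the error here.)

```python
import io

# io.BytesIO stands in for a file-like HTTP response body.
result = io.BytesIO(b"<html>tracking page</html>")

first = result.read()   # consumes the whole stream
second = result.read()  # the stream is already exhausted
assert first == b"<html>tracking page</html>"
assert second == b""    # the second read() comes back empty

# The fix: read once, keep the value, reuse it everywhere.
result = io.BytesIO(b"<html>tracking page</html>")
htmlData = result.read()
# self.response.out.write(htmlData)
# trak.htmlData = htmlData
```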
Q:
Any downsides to UPX-ing my 32-bit Python 2.6.4 development environment EXE/PYD/DLL files?
Are there any downsides to UPX-ing my 32-bit Python 2.6.4 development environment EXE/PYD/DLL files?
The reason I'm asking is that I frequently use a custom PY2EXE script that UPX's copies of these files on every build.
Yes, I could get fancy and try to cache UPXed files, but I think a simpler, safer, and higher performance solution would be for me to just UPX my Python 2.6.4 directory once and be done with it.
Thoughts?
Malcolm
A:
I have experienced significant increases in start-up time when UPX-compressed executables are run on systems with certain virus scanners. I was only compressing single executables, but I expect that each compressed DLL would add to the start-up time.
Is it really necessary to use UPX? I can't imagine the space savings being significant enough to be worth the trouble.
Q:
Many producer, single consumer with python/mod_wsgi
I have a Pylons web application served by Apache (mod_wsgi, prefork). Because of Apache, there are multiple separate processes running my application code concurrently. I want to defer some of the non-critical tasks that the application does for processing in the background, to improve "live" response times. So I'm thinking of a task queue: many Apache processes add tasks to this queue, and a single separate Python process processes them one by one, removing each from the queue.
The queue should preferably be persisted to disk so queued unprocessed tasks are not lost because of a power outage, server restart, etc. The question is: what would be a reasonable way to implement such a queue?
As for the things I've tried: I started with a simple SQLite database with a single table in it for storing queue items. In load testing, when increasing the level of concurrency, I started getting "database locked" errors, as expected. The quick'n'dirty fix was to replace SQLite with MySQL; it handles the concurrency issues well but feels like overkill for the simple thing I need to do. Queue-related DB operations also show up prominently in my profiling reports.
A:
A message broker like Apache's ActiveMQ is an ideal solution here.
The pipeline could be the following:
The application process that is responsible for handling HTTP requests generates replies quickly and sends the low-priority, heavy tasks to an AMQ queue.
One or more other processes subscribe to the AMQ queue and do whatever is intended with these heavy tasks.
The requirement of queue persistence is fulfilled out of the box, since ActiveMQ stores messages that are not yet consumed in persistent storage. Furthermore, it scales quite well, since you're free to deploy the multiple HTTP apps, the multiple consumer apps, and AMQ itself on different machines.
We use something like this in our project, written in Python, with STOMP as the underlying communication protocol.
A:
A web server (any web server) is a multi-producer, single-consumer process.
A simple solution is to build a wsgiref or Werkzeug backend server to handle your backend requests.
Since this "backend" server is built using WSGI technology, it's very similar to the front-end web server, except that it doesn't produce HTML responses (JSON is usually simpler). Other than that, it's very straightforward.
You design RESTful transactions for this backend. You use all of the various WSGI features for URI parsing, authorization, authentication, etc. You generally don't need session management, since RESTful servers don't usually offer sessions.
If you get into serious scalability issues, you simply wrap your backend server in lighttpd or some other web engine to create a multi-threaded backend.
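For completeness, the SQLite route from the question can also be made to survive concurrent writers instead of switching engines. This is a hypothetical minimal sketch (class and table names invented here): a busy timeout plus BEGIN IMMEDIATE serializes writers, which avoids most "database is locked" errors.

```python
import os
import sqlite3
import tempfile

class SqliteQueue(object):
    def __init__(self, path):
        # timeout=30 makes writers wait for the lock instead of failing fast
        self.conn = sqlite3.connect(path, timeout=30, isolation_level=None)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tasks "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

    def put(self, payload):  # called from many Apache processes
        self.conn.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))

    def get(self):  # called from the single consumer process
        self.conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
        row = self.conn.execute(
            "SELECT id, payload FROM tasks ORDER BY id LIMIT 1").fetchone()
        if row is None:
            self.conn.execute("ROLLBACK")
            return None
        self.conn.execute("DELETE FROM tasks WHERE id = ?", (row[0],))
        self.conn.execute("COMMIT")  # the task is removed atomically
        return row[1]

q = SqliteQueue(os.path.join(tempfile.mkdtemp(), "queue.db"))
q.put("resize image 42")
q.put("send email 7")
print(q.get())  # -> resize image 42
```

Because each task is deleted inside the same transaction that selected it, a crash between select and commit leaves the task in the queue, which preserves the persistence requirement.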
Q:
vectorize is indeterminate
I'm trying to vectorize a simple function in numpy and getting inconsistent behavior. I expect my code to return 0 for values < 0.5 and the unchanged value otherwise. Strangely, different runs of the script from the command line yield varying results: sometimes it works correctly, and sometimes I get all 0's. It doesn't matter which of the three lines I use for the case when d <= T. It does seem to be correlated with whether the first value to be returned is 0. Any ideas? Thanks.
import numpy as np

def my_func(d, T=0.5):
    if d > T: return d
    #if d <= T: return 0
    else: return 0
    #return 0

N = 4
A = np.random.uniform(size=N**2)
A.shape = (N, N)
print A
f = np.vectorize(my_func)
print f(A)
$ python x.py
[[ 0.86913815 0.96833127 0.54539153 0.46184594]
[ 0.46550903 0.24645558 0.26988519 0.0959257 ]
[ 0.73356391 0.69363161 0.57222389 0.98214089]
[ 0.15789303 0.06803493 0.01601389 0.04735725]]
[[ 0.86913815 0.96833127 0.54539153 0. ]
[ 0. 0. 0. 0. ]
[ 0.73356391 0.69363161 0.57222389 0.98214089]
[ 0. 0. 0. 0. ]]
$ python x.py
[[ 0.37127366 0.77935622 0.74392301 0.92626644]
[ 0.61639086 0.32584431 0.12345342 0.17392298]
[ 0.03679475 0.00536863 0.60936931 0.12761859]
[ 0.49091897 0.21261635 0.37063752 0.23578082]]
[[0 0 0 0]
[0 0 0 0]
[0 0 0 0]
[0 0 0 0]]
A:
If this really is the problem you want to solve, then there's a much better solution:
A[A<=0.5] = 0.0
The problem with your code, however, is that if the condition passes, you are returning the integer 0, not the float 0.0. From the documentation:
The data type of the output of vectorized is determined by calling the function with the first element of the input. This can be avoided by specifying the otypes argument.
So when the very first entry is <0.5, it tries to create an integer, not float, array.
You should change return 0 to
return 0.0
Alternately, if you don't want to touch my_func, you can use
f = np.vectorize(my_func, otypes=[np.float])
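A sketch of both suggestions side by side (the array values here are made up): np.where avoids the Python-level function entirely, while otypes pins the output dtype if vectorize is kept.

```python
import numpy as np

A = np.array([[0.2, 0.8], [0.6, 0.4]])

thresholded = np.where(A > 0.5, A, 0.0)  # elementwise: keep the value or zero it

def my_func(d, T=0.5):
    return d if d > T else 0  # still returns an int in the zero branch

f = np.vectorize(my_func, otypes=[float])  # but the output dtype is now pinned
assert np.array_equal(f(A), thresholded)
assert thresholded.dtype == np.float64
```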
Q:
Python Libraries and drivers
I have no knowledge of Python. I started with .NET and then learned PHP. Someone later asked me to learn Ruby as well, and I started learning it. Over the last few months I have been seeing many libraries and drivers written in Python. I want to know: what are the advantages of Python over PHP/Ruby? What type of language is it, and is there a need to learn Python as well?
Which is the purest version of Python? I can see many variants, like IronPython.
A:
Nobody can tell you the exact answer because everybody has their own "holy grail". You will just have to find out for yourself which one suits you best for the task you want to perform. Case closed.
A:
If you're just getting started in python, chances are the standard python distribution will work just fine. Once you get into the guts of your project, changing to IronPython (etc) is not a big deal.
I think the most important part is the "getting started" piece. Start writing python and you'll never look back.
Q:
SQLAlchemy - full load instance before detach
Is there a way to fully load an SQLAlchemy ORM mapped instance (together with its related objects) before detaching it from the Session? I want to send it via a pipe into another process, and I don't want to merge it into a session in the new process.
Thank you
Jan
A:
I believe you'll want to use the options() method on the Query, with eagerload() or eagerload_all().
Here's an example of use from one of our apps, where the class Controlled has a relation called changes which brings in a bunch of DocumentChange records, which themselves have a relation dco that brings in one Dco object per instance. This is a two-level eager-load, thus the use of the eagerload_all(). We're using the declarative extension (in case that matters) and m.Session is a "scoped" (thread-local) session.
from sqlalchemy.orm import eagerload, eagerload_all
...
controlled_docs = (m.Session.query(m.Controlled)
.options(eagerload_all('changes.dco'))
.order_by('number')
.all())
If that's not sufficient, perhaps include a snippet or text showing how the relevant ORM classes are related and I could update the answer to show how those options would be used in your case.
|
SQLAlchemy - full load instance before detach
|
Is there a way to fully load an SQLAlchemy ORM mapped instance (together with its related objects) before detaching it from the Session? I want to send it via a pipe into another process, and I don't want to merge it into a session in the new process.
Thank you
Jan
|
[
"I believe you'll want to use the options() method on the Query, with eagerload() or eagerload_all().\nHere's an example of use from one of our apps, where the class Controlled has a relation called changes which brings in a bunch of DocumentChange records, which themselves have a relation dco that brings in one Dco object per instance. This is a two-level eager-load, thus the use of the eagerload_all(). We're using the declarative extension (in case that matters) and m.Session is a \"scoped\" (thread-local) session.\nfrom sqlalchemy.orm import eagerload, eagerload_all\n...\ncontrolled_docs = (m.Session.query(m.Controlled)\n .options(eagerload_all('changes.dco'))\n .order_by('number')\n .all())\n\nIf that's not sufficient, perhaps include a snippet or text showing how the relevant ORM classes are related and I could update the answer to show how those options would be used in your case.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0002432941_python_sqlalchemy.txt
|
Q:
Python ImportError when executing 'import.py', but not when executing 'python import.py'
I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
When I execute "python import.py", it works:
C:\Temp>python import.py
Success!
When I run the python interpreter and type the commands, it works:
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
But when I execute "import.py', it does not work:
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks.
A:
I have the feeling that
C:\Temp>import.py
uses a different interpreter. Can you try with the following scripts:
#!/usr/bin/env python
import sys
print sys.executable
import xml.etree.ElementTree as ET
print "Success!"
A:
Probably the .py extension is associated with some other Python interpreter than the one in /usr/bin/python
A:
Try:
./import.py
Most people don't have "." in their path.
just typing python will call the cygwin python.
import.py will likely call whichever python is associated with .py files under windows.
You are using two different python executables.
A:
Create a batch file next to your program that calls it the right way ... and I'm fairly sure you've got the problem because of an ambiguity between "windows python" (a python interpreter compiled for windows) and "cygwin python" (a python interpreter running on cygwin).
|
Python ImportError when executing 'import.py', but not when executing 'python import.py'
|
I am running Cygwin Python version 2.5.2.
I have a three-line source file, called import.py:
#!/usr/bin/python
import xml.etree.ElementTree as ET
print "Success!"
When I execute "python import.py", it works:
C:\Temp>python import.py
Success!
When I run the python interpreter and type the commands, it works:
C:\Temp>python
Python 2.5.2 (r252:60911, Dec 2 2008, 09:26:14)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import xml.etree.ElementTree as ET
>>> print "Success!"
Success!
>>>
But when I execute "import.py', it does not work:
C:\Temp>which python
/usr/bin/python
C:\Temp>import.py
Traceback (most recent call last):
File "C:\Temp\import.py", line 2, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
When I remove the first line (#!/usr/bin/python), I get the same error. I need that line in there, though, for when this script runs on Linux. And it works fine on Linux.
Any ideas?
Thanks.
|
[
"I have the feeling that \nC:\\Temp>import.py\n\nuses a different interpreter. Can you try with the following scripts:\n#!/usr/bin/env python\nimport sys\nprint sys.executable\nimport xml.etree.ElementTree as ET\nprint \"Success!\"\n\n",
"Probably the .py extension is associated with some other Python interpreter than the one in /usr/bin/python\n",
"Try:\n./import.py\n\nMost people don't have \".\" in their path.\njust typing python will call the cygwin python.\nimport.py will likely call whichever python is associated with .py files under windows. \nYou are using two different python executables.\n",
"Create a batch file next to your program that calls it the right way ... and I'm fairly sure you've got the problem because of an ambiguity between \"windows python\" (a python interpreter compiled for windows) and \"cygwin python\" (a python interpreter running on cygwin).\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0002433703_import_python.txt
|
Q:
Hide filter items that produce zero results in django-filter
I have an issue with the django-filter application: how to hide the items that will produce zero results. I think there is a simple method to do this, but I don't know how.
I'm using the LinkWidget on a ModelChoiceFilter, like this:
provider = django_filters.ModelChoiceFilter(queryset=Provider.objects.all(),
widget=django_filters.widgets.LinkWidget)
What I need to do is filter the queryset and select only the Providers that will produce at least one result, excluding the others.
Is there a way to do that?
A:
Basically, you need to apply filters, and then apply them again, but on newly-generated queryset. Something like this:
f = SomeFilter(request.GET)
f = SomeFilter(request.GET, queryset=f.qs)
Now when you have correct queryset, you can override providers dynamically in init:
def __init__(self, **kw):
super(SomeFilter, self).__init__(**kw)
self.filters['provider'].extra['queryset'] = Provider.objects.filter(foo__in=self.queryset)
Not pretty but it works. You should probably encapsulate those two calls into more-efficient method on filter.
A:
Maybe the queryset can be a callable instead of a 'real' queryset object. This way, it can be generated dynamically. At least this works in Django Models for references to other models.
The callable can be a class method in your Model.
A:
If I understand your question correctly I believe you want to use the AllValuesFilter.
import django_tables
provider = django_filters.AllValuesFilter(
widget=django_filters.widgets.LinkWidget)
More information is available here: http://github.com/alex/django-filter/blob/master/docs/ref/filters.txt#L77
|
Hide filter items that produce zero results in django-filter
|
I have an issue with the django-filter application: how to hide the items that will produce zero results. I think there is a simple method to do this, but I don't know how.
I'm using the LinkWidget on a ModelChoiceFilter, like this:
provider = django_filters.ModelChoiceFilter(queryset=Provider.objects.all(),
widget=django_filters.widgets.LinkWidget)
What I need to do is filter the queryset and select only the Providers that will produce at least one result, excluding the others.
Is there a way to do that?
|
[
"Basically, you need to apply filters, and then apply them again, but on newly-generated queryset. Something like this:\nf = SomeFilter(request.GET) \nf = SomeFilter(request.GET, queryset=f.qs)\n\nNow when you have correct queryset, you can override providers dynamically in init:\ndef __init__(self, **kw):\n super(SomeFilter, self).__init__(**kw)\n self.filters['provider'].extra['queryset'] = Provider.objects.filter(foo__in=self.queryset)\n\nNot pretty but it works. You should probably encapsulate those two calls into more-efficient method on filter.\n",
"Maybe the queryset can be a callable instead of a 'real' queryset object. This way, it can be generated dynamically. At least this works in Django Models for references to other models.\nThe callable can be a class method in your Model.\n",
"If I understand your question correctly I believe you want to use the AllValuesFilter.\nimport django_tables\n\nprovider = django_filters.AllValuesFilter(\n widget=django_filters.widgets.LinkWidget)\n\nMore information is available here: http://github.com/alex/django-filter/blob/master/docs/ref/filters.txt#L77\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"django",
"django_filter",
"filter",
"python"
] |
stackoverflow_0002183008_django_django_filter_filter_python.txt
|
Q:
How do constructors and destructors work?
I'm trying to understand this code:
class Person:
'''Represents a person '''
population = 0
def __init__(self,name):
# some statements and population += 1
def __del__(self):
# some statements and population -= 1
def sayHi(self):
'''greetings from person'''
print 'Hi My name is %s' % self.name
def howMany(self):
'''Prints the current population'''
if Person.population == 1:
print 'i am the only one here'
else:
print 'There are still %d guyz left ' % Person.population
rohan = Person('Rohan')
rohan.sayHi()
rohan.howMany()
sanju = Person('Sanjivi')
sanju.howMany()
del rohan # am I doing this correctly?
How does the destructor get invoked -- automatically or do I have to add something in the "main" program/class like above?
Output:
Initializing person data
******************************************
Initializing Rohan
******************************************
Population now is: 1
Hi My name is Rohan
i am the only one here
Initializing person data
******************************************
Initializing Sanjivi
******************************************
Population now is: 2
In case Person dies:
******************************************
Sanjivi Bye Bye world
there are still 1 people left
i am the only one here
In case Person dies:
******************************************
Rohan Bye Bye world
i am the last person on earth
Population now is: 0
If required I can paste the whole lesson as well. I'm learning from:
http://www.ibiblio.org/swaroopch/byteofpython/read/
A:
Here is a slightly opinionated answer.
Don't use __del__. This is not C++ or a language built for destructors. The __del__ method really should be gone in Python 3.x, though I'm sure someone will find a use case that makes sense. If you need to use __del__, be aware of the basic limitations per http://docs.python.org/reference/datamodel.html:
__del__ is called when the garbage collector happens to be collecting the objects, not when you lose the last reference to an object and not when you execute del object.
__del__ is responsible for calling any __del__ in a superclass, though it is not
clear if this is in method resolution order (MRO) or just calling each superclass.
Having a __del__ means that the garbage collector gives up on detecting and cleaning any cyclic links, such as losing the last reference to a linked list. You can get a list of the objects ignored from gc.garbage. You can sometimes use weak references to avoid the cycle altogether. This gets debated now and then: see http://mail.python.org/pipermail/python-ideas/2009-October/006194.html.
The __del__ function can cheat, saving a reference to an object, and stopping the garbage collection.
Exceptions explicitly raised in __del__ are ignored.
__del__ complements __new__ far more than __init__. This gets confusing. See http://www.algorithm.co.il/blogs/index.php/programming/python/python-gotchas-1-del-is-not-the-opposite-of-init/ for an explanation and gotchas.
__del__ is not a "well-loved" child in Python. You will notice that the sys.exit() documentation does not specify if garbage is collected before exiting, and there are lots of odd issues. Calling __del__ on globals causes odd ordering issues, e.g., http://bugs.python.org/issue5099. Should __del__ be called even if __init__ fails? See http://mail.python.org/pipermail/python-dev/2000-March/thread.html#2423 for a long thread.
But, on the other hand:
__del__ means you do not forget to call a close statement. See http://eli.thegreenplace.net/2009/06/12/safely-using-destructors-in-python/ for a pro __del__ viewpoint. This is usually about freeing ctypes or some other special resource.
And my personal reason for not liking the __del__ function.
Every time someone brings up __del__ it devolves into thirty messages of confusion.
It breaks these items in the Zen of Python:
Complex is better than complicated.
Special cases aren't special enough to break the rules.
Errors should never pass silently.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
If the implementation is hard to explain, it's a bad idea.
So, find a reason not to use __del__.
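To make the recommended alternative concrete, here is a minimal sketch (my own illustration, not part of the original answer) of deterministic cleanup via the context-manager protocol, which is the usual replacement for __del__-based resource management:

```python
# A made-up Resource class: cleanup happens via close()/__exit__, not __del__,
# so it runs at a predictable point regardless of the garbage collector.

class Resource(object):
    def __init__(self, name):
        self.name = name
        self.closed = False

    def close(self):
        # Idempotent cleanup that the caller (or __exit__) invokes explicitly.
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return False  # do not suppress exceptions raised in the with-block

with Resource("db-handle") as r:
    pass  # use the resource here

print(r.closed)  # True -- closed as soon as the with-block exits
```

Unlike __del__, this runs even if an exception escapes the block, and it does not interfere with cycle collection.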
A:
As I understood them from my early CPTS experience:
Constructors: Constructors are mainly used in classes to initialize the class with values, and give an opportunity to do some background work on creation. If you pass in values during the creation of an object, this is where you can handle the assignment of those values to variables within your class. (In this case, upon construction you are incrementing a variable that keeps track of population.)
Destructors: Destructors clean up a class. In Python, due to the garbage collector, they're not as important as in languages that can leave dangling pointers (C++). (In this case you are decrementing the population variable on destruction of the object.)
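The two definitions above map directly onto __init__ and __del__. A compact, runnable reconstruction of the pattern in the question (my own sketch; the tutorial's actual code differs slightly):

```python
# Population counting via constructor/destructor, as in the question.

class Person(object):
    population = 0  # class attribute shared by all instances

    def __init__(self, name):
        # Constructor: runs when Person(...) is called.
        self.name = name
        Person.population += 1

    def __del__(self):
        # Destructor: CPython calls this when the last reference is dropped,
        # e.g. via `del rohan` -- but in general the timing is up to the
        # garbage collector, so don't rely on it for critical cleanup.
        Person.population -= 1

rohan = Person("Rohan")
sanju = Person("Sanjivi")
print(Person.population)  # 2
del rohan                 # refcount hits zero -> __del__ runs (in CPython)
print(Person.population)  # 1
```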
|
How do constructors and destructors work?
|
I'm trying to understand this code:
class Person:
'''Represents a person '''
population = 0
def __init__(self,name):
# some statements and population += 1
def __del__(self):
# some statements and population -= 1
def sayHi(self):
'''greetings from person'''
print 'Hi My name is %s' % self.name
def howMany(self):
'''Prints the current population'''
if Person.population == 1:
print 'i am the only one here'
else:
print 'There are still %d guyz left ' % Person.population
rohan = Person('Rohan')
rohan.sayHi()
rohan.howMany()
sanju = Person('Sanjivi')
sanju.howMany()
del rohan # am I doing this correctly?
How does the destructor get invoked -- automatically or do I have to add something in the "main" program/class like above?
Output:
Initializing person data
******************************************
Initializing Rohan
******************************************
Population now is: 1
Hi My name is Rohan
i am the only one here
Initializing person data
******************************************
Initializing Sanjivi
******************************************
Population now is: 2
In case Person dies:
******************************************
Sanjivi Bye Bye world
there are still 1 people left
i am the only one here
In case Person dies:
******************************************
Rohan Bye Bye world
i am the last person on earth
Population now is: 0
If required I can paste the whole lesson as well. I'm learning from:
http://www.ibiblio.org/swaroopch/byteofpython/read/
|
[
"Here is a slightly opinionated answer.\nDon't use __del__. This is not C++ or a language built for destructors. The __del__ method really should be gone in Python 3.x, though I'm sure someone will find a use case that makes sense. If you need to use __del __, be aware of the basic limitations per http://docs.python.org/reference/datamodel.html:\n\n__del__ is called when the garbage collector happens to be collecting the objects, not when you lose the last reference to an object and not when you execution del object.\n__del__ is responsible for calling any __del__ in a superclass, though it is not \nclear if this is in method resolution order (MRO) or just calling each superclass.\nHaving a __del__ means that the garbage collector gives up on detecting and cleaning any cyclic links, such as losing the last reference to a linked list. You can get a list of the objects ignored from gc.garbage. You can sometimes use weak references to avoid the cycle altogether. This gets debated now and then: see http://mail.python.org/pipermail/python-ideas/2009-October/006194.html.\nThe __del__ function can cheat, saving a reference to an object, and stopping the garbage collection.\nExceptions explicitly raised in __del__ are ignored.\n__del__ complements __new__ far more than __init__. This gets confusing. See http://www.algorithm.co.il/blogs/index.php/programming/python/python-gotchas-1-del-is-not-the-opposite-of-init/ for an explanation and gotchas.\n__del__ is not a \"well-loved\" child in Python. You will notice that sys.exit() documentation does not specify if garbage is collected before exiting, and there are lots of odd issues. Calling the __del__ on globals causes odd ordering issues, e.g., http://bugs.python.org/issue5099. Should __del__ called even if the __init__ fails? See http://mail.python.org/pipermail/python-dev/2000-March/thread.html#2423 for a long thread. \n\nBut, on the other hand:\n\n__del__ means you do not forget to call a close statement. 
See http://eli.thegreenplace.net/2009/06/12/safely-using-destructors-in-python/ for a pro __del__ viewpoint. This is usually about freeing ctypes or some other special resource.\n\nAnd my pesonal reason for not liking the __del__ function.\n\nEverytime someone brings up __del__ it devolves into thirty messages of confusion.\nIt breaks these items in the Zen of Python:\n\n\nComplex is better than complicated.\nSpecial cases aren't special enough to break the rules.\nErrors should never pass silently.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nIf the implementation is hard to explain, it's a bad idea.\n\n\nSo, find a reason not to use __del__.\n",
"As I understood them from my early CPTS experience:\nConstructors: Constructors are mainly used in classes to initialize the class with values, and gives an opportunity to do some background work based on creation. If you pass in values during the creation of an object, this is where you can handle assignment of those values to variables within your class. (In this case, upon construction you are incrementing a variable that keeps track of population).\nDestructors: Destructors cleanup a class. In python, due to the garbage collector it's not as important as languages that can leave hanging pointers (c++). (In this case you are decrementing the population variable on destruction of the object). \n"
] |
[
22,
1
] |
[] |
[] |
[
"class",
"destructor",
"python"
] |
stackoverflow_0002433130_class_destructor_python.txt
|
Q:
Match multiline regex in file object
How can I extract the groups from this regex from a file object (data.txt)?
import numpy as np
import re
import os
ifile = open("data.txt",'r')
# Regex pattern
pattern = re.compile(r"""
^Time:(\d{2}:\d{2}:\d{2}) # Time: 12:34:56 at beginning of line
\r{2} # Two carriage return
\D+ # 1 or more non-digits
storeU=(\d+\.\d+)
\s
uIx=(\d+)
\s
storeI=(-?\d+.\d+)
\s
iIx=(\d+)
\s
avgCI=(-?\d+.\d+)
""", re.VERBOSE | re.MULTILINE)
time = [];
for line in ifile:
match = re.search(pattern, line)
if match:
time.append(match.group(1))
The problem in the last part of the code, is that I iterate line by line, which obviously doesn't work with multiline regex. I have tried to use pattern.finditer(ifile) like this:
for match in pattern.finditer(ifile):
print match
... just to see if it works, but the finditer method requires a string or buffer.
I have also tried this method, but can't get it to work
matches = [m.groups() for m in pattern.finditer(ifile)]
Any idea?
After comments from Mike and Tuomas, I was told to use .read(). Something like this:
ifile = open("data.txt",'r').read()
This works fine, but would this be the correct way to search through the file? Can't get it to work...
for i in pattern.finditer(ifile):
match = re.search(pattern, i)
if match:
time.append(match.group(1))
Solution
# Open file as file object and read to string
ifile = open("data.txt",'r')
# Read file object to string
text = ifile.read()
# Close file object
ifile.close()
# Regex pattern
pattern_meas = re.compile(r"""
^Time:(\d{2}:\d{2}:\d{2}) # Time: 12:34:56 at beginning of line
\n{2} # Two newlines
\D+ # 1 or more non-digits
storeU=(\d+\.\d+) # Decimal-number
\s
uIx=(\d+) # Fetch uIx-variable
\s
storeI=(-?\d+.\d+) # Fetch storeI-variable
\s
iIx=(\d+) # Fetch iIx-variable
\s
avgCI=(-?\d+.\d+) # Fetch avgCI-variable
""", re.VERBOSE | re.MULTILINE)
file_times = open("output_times.txt","w")
for match in pattern_meas.finditer(text):
output = "%s,\t%s,\t\t%s,\t%s,\t\t%s,\t%s\n" % (match.group(1), match.group(2), match.group(3), match.group(4), match.group(5), match.group(6))
file_times.write(output)
file_times.close()
Maybe it can be written more compactly and Pythonically, though...
A:
You can read the data from the file object into a string with ifile.read()
A:
times = [match.group(1) for match in pattern.finditer(ifile.read())]
finditer yield MatchObjects. If the regex doesn't match anything times will be an empty list.
You can also modify your regex to use non-capturing groups for storeU, storeI, iIx and avgCI, then pattern.findall will contain only matched times.
Note: naming variable time might shadow standard library module. times would be a better option.
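For illustration, a tiny self-contained run of that comprehension over made-up sample text (the real pattern in the question is much longer):

```python
import re

# finditer yields match objects, so a comprehension over .group(1)
# collects just the captured times from the whole multiline string.
text = """Time:12:00:01
storeU=1.5 uIx=3
Time:12:00:02
storeU=2.5 uIx=4
"""

pattern = re.compile(r"^Time:(\d{2}:\d{2}:\d{2})$", re.MULTILINE)
times = [m.group(1) for m in pattern.finditer(text)]
print(times)  # ['12:00:01', '12:00:02']
```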
A:
Why don't you read the whole file into a buffer using
buffer = open("data.txt").read()
and then do a search with that?
|
Match multiline regex in file object
|
How can I extract the groups from this regex from a file object (data.txt)?
import numpy as np
import re
import os
ifile = open("data.txt",'r')
# Regex pattern
pattern = re.compile(r"""
^Time:(\d{2}:\d{2}:\d{2}) # Time: 12:34:56 at beginning of line
\r{2} # Two carriage return
\D+ # 1 or more non-digits
storeU=(\d+\.\d+)
\s
uIx=(\d+)
\s
storeI=(-?\d+.\d+)
\s
iIx=(\d+)
\s
avgCI=(-?\d+.\d+)
""", re.VERBOSE | re.MULTILINE)
time = [];
for line in ifile:
match = re.search(pattern, line)
if match:
time.append(match.group(1))
The problem in the last part of the code, is that I iterate line by line, which obviously doesn't work with multiline regex. I have tried to use pattern.finditer(ifile) like this:
for match in pattern.finditer(ifile):
print match
... just to see if it works, but the finditer method requires a string or buffer.
I have also tried this method, but can't get it to work
matches = [m.groups() for m in pattern.finditer(ifile)]
Any idea?
After comments from Mike and Tuomas, I was told to use .read(). Something like this:
ifile = open("data.txt",'r').read()
This works fine, but would this be the correct way to search through the file? Can't get it to work...
for i in pattern.finditer(ifile):
match = re.search(pattern, i)
if match:
time.append(match.group(1))
Solution
# Open file as file object and read to string
ifile = open("data.txt",'r')
# Read file object to string
text = ifile.read()
# Close file object
ifile.close()
# Regex pattern
pattern_meas = re.compile(r"""
^Time:(\d{2}:\d{2}:\d{2}) # Time: 12:34:56 at beginning of line
\n{2} # Two newlines
\D+ # 1 or more non-digits
storeU=(\d+\.\d+) # Decimal-number
\s
uIx=(\d+) # Fetch uIx-variable
\s
storeI=(-?\d+.\d+) # Fetch storeI-variable
\s
iIx=(\d+) # Fetch iIx-variable
\s
avgCI=(-?\d+.\d+) # Fetch avgCI-variable
""", re.VERBOSE | re.MULTILINE)
file_times = open("output_times.txt","w")
for match in pattern_meas.finditer(text):
output = "%s,\t%s,\t\t%s,\t%s,\t\t%s,\t%s\n" % (match.group(1), match.group(2), match.group(3), match.group(4), match.group(5), match.group(6))
file_times.write(output)
file_times.close()
Maybe it can be written more compactly and Pythonically, though...
|
[
"You can read the data from the file object into a string with ifile.read()\n",
"times = [match.group(1) for match in pattern.finditer(ifile.read())]\n\nfinditer yield MatchObjects. If the regex doesn't match anything times will be an empty list.\nYou can also modify your regex to use non-capturing groups for storeU, storeI, iIx and avgCI, then pattern.findall will contain only matched times.\nNote: naming variable time might shadow standard library module. times would be a better option.\n",
"Why don't you read the whole file into a buffer using\nbuffer = open(\"data.txt\").read()\n\nand then do a search with that?\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"multiline",
"python",
"regex"
] |
stackoverflow_0002433648_multiline_python_regex.txt
|
Q:
What is the difference between trapping and handling an exception?
I'm looking into exception handling in python and a blog post I read differentiated between trapping and handling an exception. Can someone explain the core difference between these two, both in python specifically and the overall conceptual difference? A google search for 'exception trapping handling' isn't super-useful.
A:
I would say that "trapping" and "catching" an exception are the same thing: you have to trap/catch it to be able to handle it, but the act of trapping it is not the same as handling it.
Trapping-but-not-handling = suppressing, in other words. Handling implies that you actually do something with the information at your disposal: log it, throw it to the next level, perform some action if the exception is not entirely unexpected, etc.
Or to put it another way, trapping an exception means that you have a code construct into which exception-al circumstances will flow, and where you can choose to handle the information that you find there.
A:
In terms of a conceptual difference, I'd define trapping as adding code to limit the impact of an error, stopping it from extending to other parts of the code or being displayed by the OS to the user.
Handling an error would be doing something appropriate in response to the error.
From a pseudo-code stance:
try
// Something which may cause an error - this is trapped by wrapping in a try/catch
catch
// doing something appropriate in response to the error occurring - handle it
finally
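The pseudo-code above can be made concrete in Python. A short sketch (my own example, not from either answer) contrasting a trap that merely suppresses with one that genuinely handles:

```python
# Both functions trap ZeroDivisionError; only the second handles it.

def suppressed(x):
    try:
        return 1 / x
    except ZeroDivisionError:
        pass  # trapped but silently swallowed -- usually a bad idea

def handled(x, default=float("inf")):
    try:
        return 1 / x
    except ZeroDivisionError:
        # Handled: respond deliberately (log, substitute, or re-raise).
        return default

print(suppressed(0))  # None -- the error vanished without a trace
print(handled(0))     # inf  -- a deliberate, documented response
```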
|
What is the difference between trapping and handling an exception?
|
I'm looking into exception handling in python and a blog post I read differentiated between trapping and handling an exception. Can someone explain the core difference between these two, both in python specifically and the overall conceptual difference? A google search for 'exception trapping handling' isn't super-useful.
|
[
"I would say that \"trapping\" and \"catching\" an exception are the same thing: you have to trap/catch it to be able to handle it, but the act of trapping it is not the same as handling it. \nTrapping-but-not-handling = suppressing, in other words. Handling implies that you actually do something with the information at your disposal: log it, throw it to the next level, perform some action if the exception is not entirely unexpected etc.etc.\nOr to put it another way, trapping an exception means that you have a code construct into which exception-al circumstances will flow, and where you can choose to handle the information that you find there.\n",
"In terms of a conceptual difference, I'd define Trapping as adding code to limit the impact of an error extending to other parts of the code or being displayed by the OS to the user.\nHandling an error would be doing something appropriate in response to the error.\nFrom a pseudo-code stance:\ntry\n // Something which may cause an error - this is trapped by wrapping in a try/catch\ncatch\n // doing something appropriate in response to the error occurring - handle it\nfinally\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"exception_handling",
"python"
] |
stackoverflow_0002433816_exception_handling_python.txt
|
Q:
Trying to write to binary plist format from Python (w/PyObjC) to be fetch and read in by Cocoa Touch
I'm trying to serve a property list of search results to my iPhone app. The server is a prototype, written in Python.
First I found Python's built-in plistlib, which is awesome. I want to give search-as-you-type a shot, so I need it to be as small as possible, and xml was too big. The binary plist format seems like a good choice. Unfortunately plistlib doesn't do binary files, so step right up PyObjC.
(Segue: I'm very open to any other thoughts on how to accomplish live search. I already pared down the data as much as possible, including only displaying enough results to fill the window with the iPhone keyboard up, which is 5.)
Unfortunately, although I know Python and am getting pretty decent with Cocoa, I still don't get PyObjC.
This is the Cocoa equivalent of what I want to do:
NSArray *plist = [NSArray arrayWithContentsOfFile:read_path];
NSError *err;
NSData *data = [NSPropertyListSerialization dataWithPropertyList:plist
format:NSPropertyListBinaryFormat_v1_0
options:0 // docs say this must be 0, go figure
error:&err];
[data writeToFile:write_path atomically:YES];
I thought I should be able to do something like this, but dataWithPropertyList isn't in the NSPropertyListSerialization object's dir() listing. I should also probably convert the list to NSArray. I tried the PyObjC docs, but it's so tangential to my real work that I thought I'd try an SO SOS, too.
from Cocoa import NSArray, NSData, NSPropertyListSerialization, NSPropertyListBinaryFormat_v1_0
plist = [dict(key1='val', key2='val2'), dict(key1='val', key2='val2')]
NSPropertyListSerialization.dataWithPropertyList_format_options_error(plist,
NSPropertyListBinaryFormat_v1_0,
?,
?)
This is how I'm reading in the plist on the iPhone side.
NSData *data = [NSData dataWithContentsOfURL:url];
NSPropertyListFormat format;
NSString *err;
id it = [NSPropertyListSerialization
propertyListFromData:data
mutabilityOption:0
format:&format
errorDescription:&err];
Happy to clarify if any of this doesn't make sense.
A:
I believe the correct function name is
NSPropertyListSerialization.dataWithPropertyList_format_options_error_
because of the ending :.
(BTW, if the object is always an array or dictionary, -writeToFile:atomically: will write the plist (as XML format) already.)
A:
As KennyTM said, you're missing the trailing underscore in the method name. In PyObjC you need to take the Objective-C selector name (dataWithPropertyList:format:options:error:) and replace all of the colons with underscores (don't forget the last colon, too!). That gives you dataWithPropertyList_format_options_error_ (note the trailing underscore). Also, for the error parameter, you can just use None. That makes your code look like this:
bplist = NSPropertyListSerialization.dataWithPropertyList_format_options_error_(
plist,
NSPropertyListBinaryFormat_v1_0,
0,
None)
# bplist is an NSData object that you can operate on directly or
# write to a file...
bplist.writeToFile_atomically_(pathToFile, True)
If you test the resulting file, you'll see that it's a Binary PList file, as desired:
Jagaroth:~/Desktop $ file test.plist
test.plist: Apple binary property list
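For reference, on Python 3.4 and later the standard-library plistlib can write binary plists directly, so the PyObjC round trip is only needed on older Pythons. A minimal sketch:

```python
import plistlib

# Binary plist round trip with only the standard library (Python 3.4+).
plist = [{'key1': 'val', 'key2': 'val2'}, {'key1': 'val', 'key2': 'val2'}]
data = plistlib.dumps(plist, fmt=plistlib.FMT_BINARY)
print(data[:8])  # binary plists start with the b'bplist00' magic
assert plistlib.loads(data) == plist
```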
Q:
Odd nested dictionary behavior in python
I'm new to Python and am trying to grow a dictionary of dictionaries. I have done this in PHP and Perl, but Python is behaving very differently. I'm sure it makes sense to those more familiar with Python. Here is my code:
colnames = ['name','dob','id'];
tablehashcopy = {};
tablehashcopy = dict.fromkeys(colnames,{});
tablehashcopy['name']['hi'] = 0;
print(tablehashcopy);
Output:
{'dob': {'hi': 0}, 'name': {'hi': 0}, 'id': {'hi': 0}}
The problem arises from the 2nd-to-last statement (I put the print in for convenience). I expected to find that one element had been added to the 'name' dictionary with the key 'hi' and the value 0. But this key-value pair has been added to EVERY sub-dictionary. Why?
I have tested this on my ubuntu machine in both python 2.6 and python 3.1 the behaviour is the same.
A:
The issue is with your fromkeys call. You passed it an empty dictionary, and it used that exact same empty dictionary as the value for all of the keys. Since the exact same dictionary is being used as the value, adding an element to it means you'll see that same element for all the keys' dictionaries.
A:
As msalib says, your problem is that dict.fromkeys makes all the entries in the dictionary point to the same object. Try this instead to create a separate empty dictionary for each key:
tablehashcopy=dict((key,{}) for key in colnames)
A:
dict.fromkeys(seq, [value]) - value defaults to None if nothing is passed, but since you passed a dict instead, that's what it's using for each key-value pair.
A:
In your source, only one inner dictionary object is created and used 3 times. So when you modify it, you will see the change 3 times too, because every key is bound to the same object.
You can do something like the following to solve the problem. This will create several (key, {}) tuples and use them to generate a dictionary:
colnames = ['name','dob','id']
tablehashcopy = dict((k, {}) for k in colnames)
tablehashcopy['name']['hi'] = 0
print tablehashcopy # use "print(tablehashcopy)" in Python 3
# output: {'dob': {}, 'name': {'hi': 0}, 'id': {}}
Please also take a look at the formatting (no semicolons) and some statements I've deleted because they were unnecessary.
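The aliasing all of the answers describe is easy to see with the is operator. A short sketch (dict comprehensions need Python 2.7+; on 2.6 use the dict((k, {}) for k in colnames) form shown above):

```python
colnames = ['name', 'dob', 'id']

# dict.fromkeys stores the single dict object you pass as the value for every key:
shared = dict.fromkeys(colnames, {})
assert shared['name'] is shared['dob']     # same object behind every key
shared['name']['hi'] = 0                   # visible through every key

# A comprehension evaluates {} once per key, giving independent sub-dicts:
separate = {key: {} for key in colnames}
separate['name']['hi'] = 0
assert separate['dob'] == {}               # other keys are unaffected
```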
Q:
EOL character in Linux and Windows
I'm writing a simple script that just connects to a telnet port, listens to everything on it while staying connected, and when some string, for example '123', appears, the script does something.
I use tn.read_until("123", 2), but when '123' appears, the script just disconnects. How do I make it stay online?
A:
Put tn.read_until("123", 2) in a loop.
A:
You can try this:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 21))
s.listen(1)
conn, addr = s.accept()  # accept() returns a (connection, address) pair
while True:
    data = conn.recv(1000)
    if not data:  # an empty read means the peer closed the connection
        break
    if data == '123':
        pass  # do something
    else:
        pass  # do something else
conn.close()

Is this what you want?
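The listen-in-a-loop idea from both answers can be sketched without a live telnet server by using a local socket pair as a stand-in for the connection:

```python
import socket

# One end plays the remote host, the other is our listener.
server, client = socket.socketpair()
client.sendall(b'noise 123 more 123 tail')
client.close()

hits = 0
buf = b''
while True:
    chunk = server.recv(1024)
    if not chunk:                      # empty read: peer closed, stop listening
        break
    buf += chunk
    while b'123' in buf:
        hits += 1                      # "do something", then keep listening
        buf = buf.split(b'123', 1)[1]  # drop the part already handled
server.close()
print(hits)
```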
Q:
Object for storing strings in Python
class MyWriter:
def __init__(self, stdout):
self.stdout = stdout
self.dumps = []
def write(self, text):
self.stdout.write(smart_unicode(text).encode('cp1251'))
self.dumps.append(text)
def close(self):
self.stdout.close()
writer = MyWriter(sys.stdout)
save = sys.stdout
sys.stdout = writer
I use the self.dumps list to store data obtained from prints. Is there a more convenient object for storing string lines in memory? Ideally I want to dump it to one big string. I can get it like this: "\n".join(self.dumps) from the code above. Maybe it's better to just concatenate strings: self.dumps += text?
A:
A list of strings to be joined with ''.join is just fine. However, if you prefer a more direct solution:
import cStringIO
class MyWriter(object):
def __init__(self, stdout):
self.stdout = stdout
self.dumps = cStringIO.StringIO()
self.final = None
def write(self, text):
self.stdout.write(smart_unicode(text).encode('cp1251'))
self.dumps.write(text)
def close(self):
self.stdout.close()
self.final = self.dumps.getvalue()
self.dumps.close()
def getvalue(self):
if self.final is not None:
return self.final
return self.dumps.getvalue()
getvalue cannot be called on a string-io object after it's closed (closing the object makes it drop its own buffer memory) which is why I make self.final just before that happens. Apart from the getvalue, a string-io object is a pretty faithful implementation of the "file-like object" interface, so it often comes in handy when you just want to have some piece of code, originally designed to print results, keep them in memory instead; but it's also a potentially neat way to "build up a string by pieces" -- just write each piece and getvalue when done (or at any time during the process to see what you've built up so far).
Modern Python style for this task is often to prefer the lower-abstraction approach (explicitly build a list of strings and join them up at need), but there's nothing wrong with the slightly higher-abstraction "string I/O" approach either.
(A third approach that seems a bit out of favor is to keep extending an array.array of characters, just to be comprehensive in listing these;-).
A:
I am quite sure that a single '\n'.join(self.dumps) will be much faster than self.dumps += text.
Explanation: In Python, strings are immutable, so if you concatenate two strings, a new string is generated and the two other strings are copied into it. That's not a problem if you do it once, but inside a loop this will copy the whole text in every iteration. join(), on the other hand, is a built-in function written in C, which has the ability to reallocate memory efficiently and change the end of the string. So it should be much faster.
So, your source is perfectly fine. Great work!
PS: the flush() function is missing
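A Python 3 version of the same pattern (cStringIO is Python 2 only; io.StringIO is the modern replacement), including the flush() the answer notes is missing. The class name here is illustrative:

```python
import io
import sys

class TeeWriter:
    """Echo writes to a real stream while keeping a copy in memory."""

    def __init__(self, stream):
        self.stream = stream
        self.buffer = io.StringIO()

    def write(self, text):
        self.stream.write(text)
        self.buffer.write(text)

    def flush(self):                      # keeps print()/logging happy
        self.stream.flush()

    def getvalue(self):
        return self.buffer.getvalue()

writer = TeeWriter(sys.stdout)
old, sys.stdout = sys.stdout, writer
print('hello')
sys.stdout = old
```

After this runs, writer.getvalue() holds everything that was printed ('hello\n').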
Q:
error in fetching url data
from google.appengine.ext import webapp
from google.appengine.ext.webapp import util
from google.appengine.ext import db
from google.appengine.api import urlfetch
class TrakHtml(db.Model):
hawb = db.StringProperty(required=False)
htmlData = db.TextProperty()
class MainHandler(webapp.RequestHandler):
def get(self):
Traks = list()
Traks.append('93332134')
#Traks.append('91779831')
#Traks.append('92782244')
#Traks.append('38476214')
for st in Traks :
trak = TrakHtml()
trak.hawb = st
url = 'http://etracking.cevalogistics.com/eTrackResultsMulti.aspx?sv='+st
result = urlfetch.fetch(url)
self.response.out.write(result.read())
trak.htmlData = result.read()
trak.put()
result.read() is not giving the whole file; it is giving only a portion. trak.htmlData is a TextProperty(), so it has to store the whole file, and that is what I want.
A:
you call result.read() twice. That's probably why it's fragmented.
A:
This link has info on the return value of urlfetch.fetch(url)
http://code.google.com/appengine/docs/python/urlfetch/responseobjects.html
It looks like you want result.content, which holds the full response body as a string (the response object has no read() method).
A:
I note that you are calling read() twice, which may be the problem.
When I look at the specs for urlfetch.fetch(), it returns a response object.
The contents are directly accessible as result.content, so you shouldn't need to call the (undefined??) read function.
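Whatever the exact urlfetch attribute, the double-read pitfall the first answer points at is easy to reproduce with any file-like object: the second read() starts at the end of the stream and returns nothing.

```python
import io

stream = io.StringIO('full response body')
first = stream.read()    # consumes the whole stream
second = stream.read()   # position is now at EOF, so nothing comes back
assert first == 'full response body'
assert second == ''      # read once and keep the result in a variable instead
```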
Q:
How can I set controls for a web page?
I have this login page with https, and I arrived at this approach:
import ClientForm
import urllib2
request = urllib2.Request("http://ritaj.birzeit.edu")
response = urllib2.urlopen(request)
forms = ClientForm.ParseResponseEx(response)
response.close()
f = forms[0]
username = str(raw_input("Username: "))
password = str(raw_input("Password: "))
## Here What To Do
request2 = f.click()
i get the controls of that page
>>> f = forms[0]
>>> [c.name for c in f.controls]
['q', 'sitesearch', 'sa', 'domains', 'form:mode', 'form:id', '__confirmed_p', '__refreshing_p', 'return_url', 'time', 'token_id', 'hash', 'username', 'password', 'persistent_p', 'formbutton:ok']
So how can I set the username and password controls of the "non-form form" f?
And I have another problem: how do I know if it's the right username and password?
A:
You set f['username'] = username and f['password'] = password, and when you f.click() you'll get a response that you'll need to examine in order to determine whether those strings were the ones the site you're visiting expected -- how the site communicates that depends on the site, it should use an HTTP status for the purpose but some sites are very sloppy that way, so you may have to scrape their response page instead.
Q:
Python to MATLAB: exporting list of strings using scipy.io
I am trying to export a list of text strings from Python to MATLAB using scipy.io. I would like to use scipy.io because my desired .mat file should include both numerical matrices (which I learned to do here) and text cell arrays.
I tried:
import scipy.io
my_list = ['abc', 'def', 'ghi']
scipy.io.savemat('test.mat', mdict={'my_list': my_list})
In MATLAB, I load test.mat and get a character array:
my_list =
adg
beh
cfi
How do I make scipy.io export a list into a MATLAB cell array?
A:
You need to make my_list an array of numpy objects:
import scipy.io
import numpy as np
my_list = np.zeros((3,), dtype=np.object)
my_list[:] = ['abc', 'def', 'ghi']
scipy.io.savemat('test.mat', mdict={'my_list': my_list})
Then it will be saved in a cell format. There might be a better way of building the np.object array, but I took this approach from the SciPy documentation.
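If NumPy is available, the key step is easy to check before calling savemat. Note that the np.object alias was removed in newer NumPy releases; the plain object built-in works on all versions:

```python
import numpy as np

# An object-dtype array is what makes scipy.io.savemat emit a MATLAB cell
# array instead of a character matrix.
my_list = np.zeros((3,), dtype=object)
my_list[:] = ['abc', 'def', 'ghi']
assert my_list.dtype == object
assert list(my_list) == ['abc', 'def', 'ghi']
# scipy.io.savemat('test.mat', mdict={'my_list': my_list}) then writes cells
```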
A:
It looks like the contents of the list are exported properly, they are just transposed and placed in a character array. You can easily convert it to the desired cell array of strings in MATLAB by transposing it and using CELLSTR, which places each row in a separate cell:
>> my_list = ['adg';'beh';'cfi']; %# Your example
>> my_list = cellstr(my_list') %'# A 3-by-1 cell array of strings
my_list =
'abc'
'def'
'ghi'
Granted, this doesn't address the more general issue of exporting data as a cell array from Python to MATLAB, but it should help with the specific problem you list above.
Q:
A RAM error of big array
I need to get the numbers of one line randomly, put each line in another array, and then get the numbers of one column.
I have a big file, more than 400 MB. In that file there are 13496*13496 numbers, meaning 13496 rows and 13496 columns. I want to read them into an array.
This is my code:
_L1 = [[0 for col in range(13496)] for row in range(13496)]
_L1file = open('distanceCMD.function.txt')
i = 0  # the loop counter was never initialized in the original snippet
while (i < 13496):
    print "i=" + str(i)
    _strlf = _L1file.readline()
    _strlf = _strlf.split('\t')
    _strlf = _strlf[:-1]
    _L1[i] = _strlf
    i += 1
_L1file.close()
And this is my error message:
MemoryError:
File "D:\research\space-function\ART3.py", line 30, in <module>
_strlf = _strlf.split('\t')
A:
You might want to approach your problem in another way: process the file line by line. I don't see a need to store the whole big file in an array. Otherwise, you might want to tell us what you are actually trying to do.
for line in open("400MB_file"):
# do something with line.
Or
f=open("file")
for linenum,line in enumerate(f):
if linenum+1 in [2,3,10]:
print "there are ", len(line.split())," columns" #assuming you want to split on spaces
print "100th column value is: ", line.split()[99]
if linenum+1>10:
break # break if you want to stop after the 10th line
f.close()
A:
This is a simple case of your program demanding more memory than is available to the computer. An array of 13496x13496 elements requires 182,142,016 'cells', where a cell is a minimum of one byte (if storing chars) and potentially several bytes (if storing floating-point numerics, for example). I'm not even taking your particular runtime's array metadata into account, though this would typically be a tiny overhead on a simple array.
Assuming each array element is just a single byte, your computer needs around 180 MB of RAM to hold it in memory in its entirety. Trying to process it could be impractical.
You need to think about the problem a different way; as has already been mentioned, a line-by-line approach might be a better option. Or perhaps processing the grid in smaller units, perhaps 10x10 or 100x100, and aggregating the results. Or maybe the problem itself can be expressed in a different form, which avoids the need to process the entire dataset altogether...?
If you give us a little more detail on the nature of the data and the objective, perhaps someone will have an idea to make the task more manageable.
A:
Short answer: the Python object overhead is killing you. In Python 2.x on a 64-bit machine, a list of strings consumes 48 bytes per list entry even before accounting for the content of the strings. That's over 8.7 GB of overhead for the size of array you describe.
On a 32-bit machine it'll be a bit better: only 28 bytes per list entry.
Longer explanation: you should be aware that Python objects themselves can be quite large: even simple objects like ints, floats and strings. In your code you're ending up with a list of lists of strings. On my (64-bit) machine, even an empty string object takes up 40 bytes, and to that you need to add 8 bytes for the list pointer that's pointing to this string object in memory. So that's already 48 bytes per entry, or around 8.7 GB. Given that Python allocates memory in multiples of 8 bytes at a time, and that your strings are almost certainly non-empty, you're actually looking at 56 or 64 bytes (I don't know how long your strings are) per entry.
Possible solutions:
(1) You might do (a little) better by converting your entries from strings to ints or floats as appropriate.
(2) You'd do much better by either using Python's array type (not the same as list!) or by using numpy: then your ints or floats would only take 4 or 8 bytes each.
Since Python 2.6, you can get basic information about object sizes with the sys.getsizeof function. Note that if you apply it to a list (or other container) then the returned size doesn't include the size of the contained list objects; only of the structure used to hold those objects. Here are some values on my machine.
>>> import sys
>>> sys.getsizeof("")
40
>>> sys.getsizeof(5.0)
24
>>> sys.getsizeof(5)
24
>>> sys.getsizeof([])
72
>>> sys.getsizeof(range(10)) # 72 + 8 bytes for each pointer
152
A:
MemoryError exception:
Raised when an operation runs out of
memory but the situation may still be
rescued (by deleting some objects).
The associated value is a string
indicating what kind of (internal)
operation ran out of memory. Note that
because of the underlying memory
management architecture (C’s malloc()
function), the interpreter may not
always be able to completely recover
from this situation; it nevertheless
raises an exception so that a stack
traceback can be printed, in case a
run-away program was the cause.
It seems that, at least in your case, reading the entire file into memory is not a doable option.
A:
Replace this:
_strlf = _strlf[:-1]
with this:
_strlf = [float(val) for val in _strlf[:-1]]
You are making a big array of strings. I can guarantee that the string "123.00123214213" takes a lot less memory when you convert it to floating point.
You might want to include some handling for null values.
You can also go to numpy's array type, but your problem may be too small to bother.
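A rough way to see the per-element overhead the answers describe is to compare a list of float objects with a compact array.array of C doubles (exact byte counts vary by platform and Python version):

```python
import array
import sys

n = 10000
as_list = [float(i) for i in range(n)]    # n separate float objects plus pointers
as_array = array.array('d', as_list)      # one contiguous block of 8-byte doubles

list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)
print(list_bytes, array_bytes)            # the list side is several times larger
```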
"Replace this:\n_strlf = _strlf[:-1]\n\nwith this:\n_strlf = [float(val) for val in _strlf[:-1]]\n\nYou are making a big array of strings. I can guarantee that the string \"123.00123214213\" takes a lot less memory when you convert it to floating point.\nYou might want to include some handling for null values. \nYou can also go to numpy's array type, but your problem may be too small to bother.\n"
] |
[
7,
3,
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002432521_python.txt
|
Q:
Finding unique maximum values in a list using python
I have a list of points as shown below
points=[ [x0,y0,v0], [x1,y1,v1], [x2,y2,v2].......... [xn,yn,vn]]
Some of the points have duplicate x,y values. What I want to do is to extract the unique maximum value x,y points
For example, if I have points [1,2,5] [1,1,3] [1,2,7] [1,7,3]
I would like to obtain the list [1,1,3] [1,2,7] [1,7,3]
How can I do this in python?
Thanks
A:
For example:
import itertools
import operator

def getxy(point): return point[:2]

sortedpoints = sorted(points, key=getxy)
results = []
for xy, g in itertools.groupby(sortedpoints, key=getxy):
    results.append(max(g, key=operator.itemgetter(2)))
that is: sort and group the points by xy, for every group with fixed xy pick the point with the maximum z. Seems straightforward if you're comfortable with itertools (and you should be, it's really a very powerful and useful module!).
Alternatively you could build a dict with (x,y) tuples as keys and lists of z as values and do one last pass on that one to pick the max z for each (x, y), but I think the sort-and-group approach is preferable (unless you have many millions of points so that the big-O performance of sorting worries you for scalability purposes, I guess).
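As a quick sanity check, here is the sort-and-group approach run against the sample points from the question (a self-contained sketch):

```python
import itertools
import operator

points = [[1, 2, 5], [1, 1, 3], [1, 2, 7], [1, 7, 3]]

def getxy(point):
    # group key: the (x, y) part of each [x, y, v] point
    return tuple(point[:2])

results = []
for xy, group in itertools.groupby(sorted(points, key=getxy), key=getxy):
    # keep the point with the largest third value in each (x, y) group
    results.append(max(group, key=operator.itemgetter(2)))

print(results)  # [[1, 1, 3], [1, 2, 7], [1, 7, 3]]
```

This matches the output asked for in the question.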
A:
You can use dict achieve this, using the property that "If a given key is seen more than once, the last value associated with it is retained in the new dictionary." This code sorts the points to make sure that the highest values come later, creates a dictionary whose keys are a tuple of the first two values and whose value is the third coordinate, then translates that back into a list
points = [[1,2,5], [1,1,3], [1,2,7], [1,7,3]]
sp = sorted(points)
d = dict( ( (a,b), c) for (a,b,c) in sp)
results = [list(k) + [v] for (k,v) in d.iteritems()]
There may be a way to further improve that, but it satisfies all your requirements.
A:
If I understand your question .. maybe use a dictionary to map (x,y) to the max z
something like this (not tested)
maxima = {}
for x, y, z in points:
    if (x, y) in maxima:
        maxima[(x, y)] = max(maxima[(x, y)], z)
    else:
        maxima[(x, y)] = z
Though the ordering will be lost
|
Finding unique maximum values in a list using python
|
I have a list of points as shown below
points=[ [x0,y0,v0], [x1,y1,v1], [x2,y2,v2].......... [xn,yn,vn]]
Some of the points have duplicate x,y values. What I want to do is to extract the unique maximum value x,y points
For example, if I have points [1,2,5] [1,1,3] [1,2,7] [1,7,3]
I would like to obtain the list [1,1,3] [1,2,7] [1,7,3]
How can I do this in python?
Thanks
|
[
"For example:\nimport itertools\n\ndef getxy(point): return point[:2]\n\nsortedpoints = sorted(points, key=getxy)\n\nresults = []\n\nfor xy, g in itertools.groupby(sortedpoints, key=getxy):\n results.append(max(g, key=operator.itemgetter(2)))\n\nthat is: sort and group the points by xy, for every group with fixed xy pick the point with the maximum z. Seems straightforward if you're comfortable with itertools (and you should be, it's really a very powerful and useful module!).\nAlternatively you could build a dict with (x,y) tuples as keys and lists of z as values and do one last pass on that one to pick the max z for each (x, y), but I think the sort-and-group approach is preferable (unless you have many millions of points so that the big-O performance of sorting worries you for scalability purposes, I guess).\n",
"You can use dict achieve this, using the property that \"If a given key is seen more than once, the last value associated with it is retained in the new dictionary.\" This code sorts the points to make sure that the highest values come later, creates a dictionary whose keys are a tuple of the first two values and whose value is the third coordinate, then translates that back into a list\npoints = [[1,2,5], [1,1,3], [1,2,7], [1,7,3]]\nsp = sorted(points)\nd = dict( ( (a,b), c) for (a,b,c) in sp)\nresults = [list(k) + [v] for (k,v) in d.iteritems()]\n\nThere may be a way to further improve that, but it satisfies all your requirements.\n",
"If I understand your question .. maybe use a dictionary to map (x,y) to the max z\nsomething like this (not tested)\ndict = {}\nfor x,y,z in list\n if dict.has_key((x,y)):\n dict[(x,y)] = max(dict[(x,y)], z)\n else:\n dict[(x,y)] = z\n\nThough the ordering will be lost\n"
] |
[
9,
0,
0
] |
[] |
[] |
[
"python",
"set",
"unique"
] |
stackoverflow_0002434251_python_set_unique.txt
|
Q:
Last matching symbol in Regex
I couldn't find a more descriptive title, but here there is an example:
import re
m = re.search(r"\((?P<remixer>.+) (Remix)\)", "Title (Menda Remix)")
m.group("remixer") # returns 'Menda' OK
m = re.search(r"\((?P<remixer>.+) (Remix)\)", "Title (Blabla) (Menda Remix)")
m.group("remixer") # returns 'Blabla) (Menda' FAIL
This regex finds the first parenthesis, and I would like to match the last parenthesis for always getting 'Menda'. I've made a workaround to this using extra functions, but I would like a cleaner and a more consistent way using the same regex.
Thanks a lot guys.
A:
re.search(r"\((?P<remixer>[^)]+) (Remix)\)", "Title (Blabla) (Menda Remix)")
A:
Use [^()]+ instead of .+ to not to match the parenthesis.
A:
I would probably do this:
m = re.search(r".*\((?P<remixer>.+) (Remix)\)", "Title (Blabla) (Menda Remix)")
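A quick check of this greedy-prefix variant: the leading .* consumes as much of the string as possible, which forces the named group onto the last parenthesized chunk.

```python
import re

m = re.search(r".*\((?P<remixer>.+) (Remix)\)", "Title (Blabla) (Menda Remix)")
print(m.group("remixer"))  # Menda
```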
A:
Just add a $ to the end of the pattern and you're done :)
import re
m = re.search(r"\((?P<remixer>[^)]+) (Remix)\)$", "Title (Menda Remix)")
print m.group("remixer") # returns 'Menda' OK
m = re.search(r"\((?P<remixer>[^)]+) (Remix)\)$", "Title (Blabla) (Menda Remix)")
print m.group("remixer") # returns 'Blabla) (Menda' FAIL
PS: I've also changed the .+ to [^)]+ so you won't match any ) in the process.
|
Last matching symbol in Regex
|
I couldn't find a more descriptive title, but here there is an example:
import re
m = re.search(r"\((?P<remixer>.+) (Remix)\)", "Title (Menda Remix)")
m.group("remixer") # returns 'Menda' OK
m = re.search(r"\((?P<remixer>.+) (Remix)\)", "Title (Blabla) (Menda Remix)")
m.group("remixer") # returns 'Blabla) (Menda' FAIL
This regex finds the first parenthesis, and I would like to match the last parenthesis for always getting 'Menda'. I've made a workaround to this using extra functions, but I would like a cleaner and a more consistent way using the same regex.
Thanks a lot guys.
|
[
"re.search(r\"\\((?P<remixer>[^)]+) (Remix)\\)\", \"Title (Blabla) (Menda Remix)\")\n\n",
"Use [^()]+ instead of .+ to not to match the parenthesis.\n",
"I would probably do this:\nm = re.search(r\".*\\((?P<remixer>.+) (Remix)\\)\", \"Title (Blabla) (Menda Remix)\")\n\n",
"Just add a $ to the end of the pattern and you're done :)\nimport re\nm = re.search(r\"\\((?P<remixer>[^)]+) (Remix)\\)$\", \"Title (Menda Remix)\")\nprint m.group(\"remixer\") # returns 'Menda' OK\nm = re.search(r\"\\((?P<remixer>[^)]+) (Remix)\\)$\", \"Title (Blabla) (Menda Remix)\")\nprint m.group(\"remixer\") # returns 'Blabla) (Menda' FAIL\n\nPS: I've also changed the .+ to [^)]+ so you won't match any ) in the process.\n"
] |
[
3,
1,
1,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0002434749_python_regex.txt
|
Q:
How to get path to the installed GIT in Python?
I need to get a path to the GIT on Mac OS X 10.6 using Python 2.6.1 into script variables. I use this code for that:
r = subprocess.Popen(shlex.split("which git"), stdout=subprocess.PIPE)
print r.stdout.read()
but the problem is that the output is empty (I tried stderr too). It works fine with other commands such as pwd or ls.
Can anyone help me with that?
UPDATE: When I run which git from Terminal it prints out path as expected. So, which can find it.
UPDATE 2: I just created the bash script
#!/usr/bin/env bash
GP=`/usr/bin/which git`
PWD=`pwd`
echo "PATH IS: ${GP}"
echo "PWD IS: ${PWD}"
and output is
PATH IS:
PWD IS: /Users/user/tmp
A:
All which does is iterate over the directories in $PATH, checking to see if the file is there. Just write a small method to do likewise.
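A minimal sketch of that idea; the function name and behaviour loosely mirror which(1), returning the first executable match on $PATH or None:

```python
import os

def which(program):
    """Return the first executable named `program` on $PATH, or None."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, program)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

If this still comes up empty when run under Apache, cron, or similar, the usual culprit is a stripped-down $PATH in that environment rather than the lookup itself.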
|
How to get path to the installed GIT in Python?
|
I need to get a path to the GIT on Mac OS X 10.6 using Python 2.6.1 into script variables. I use this code for that:
r = subprocess.Popen(shlex.split("which git"), stdout=subprocess.PIPE)
print r.stdout.read()
but the problem is that the output is empty (I tried stderr too). It works fine with other commands such as pwd or ls.
Can anyone help me with that?
UPDATE: When I run which git from Terminal it prints out path as expected. So, which can find it.
UPDATE 2: I just created the bash script
#!/usr/bin/env bash
GP=`/usr/bin/which git`
PWD=`pwd`
echo "PATH IS: ${GP}"
echo "PWD IS: ${PWD}"
and output is
PATH IS:
PWD IS: /Users/user/tmp
|
[
"All which does is iterate over the directories in $PATH, checking to see if the file is there. Just write a small method to do likewise.\n"
] |
[
2
] |
[] |
[] |
[
"osx_snow_leopard",
"python",
"subprocess"
] |
stackoverflow_0002435015_osx_snow_leopard_python_subprocess.txt
|
Q:
Passing a Python list using JSON and Django
I'm trying to send a Python list in to client side (encoded as JSON). This is the code snippet which I have written:
array_to_js = [vld_id, vld_error, False]
array_to_js[2] = True
jsonValidateReturn = simplejson.dumps(array_to_js)
return HttpResponse(jsonValidateReturn, mimetype='application/json')
How do I access it from the client side? Can I access it like the following?
jsonValidateReturn[0]
Or how do I assign a name to the returned JSON array in order to access it?
Actually I'm trying to convert a server side Ajax script that returns an array (see Stack Overflow question Creating a JSON response using Django and Python that handles client side POST requests, so I wanted the same thing in return with Python, but it didn't go well.
A:
The JSON array will be dumped without a name / assignment.
That is, in order to give it a name, in your JavaScript code you would do something like this:
var my_json_data_dump = function_that_gets_json_data();
If you want to visualize it, for example, substitute:
var my_json_data_dump = { 'first_name' : Bob, 'last_name': smith };
Also, like Iganacio said, you're going to need something like json2.js to parse the string into the object in the last example. You could wrap that parsing step inside of function_that_gets_json_data, or if you're using jQuery you can do it with a function like jQuery.getJSON().
json2.js is still nice to have, though.
In response to the comment (I need space and markup):
Yes, of course. All the Python side is doing is encoding a string representation (JSON) for you. You could do something like 'var blah = %s' % json.dumps(obj_to_encode) and then on the client side, instead of simply parsing the response as JSON, you parse it as JavaScript.
I wouldn't recommend this for a few reasons:
You're no longer outputting JSON. What if you want to use it in a context where you don't want the variable name, or can't parse JavaScript?
You're evaluating JavaScript instead of simply parsing JSON. It's an operation that's open to security holes (if someone can seed the data, they might be able to execute a XSS attack).
I guess you're facing something I think every Ajax developer runs in to. You want one place of truth in your application, but now you're being encouraged to define variables and whatnot in JavaScript. So you have to cross reference your Python code with the JavaScript code that uses it.
I wouldn't get too hung up on it. I can't see why you would absolutely need to control the name of the variable from Python in this manner. If you're counting on the variable name being the same so that you can reference it in subsequent JavaScript or Python code, it's something you might obviate by simply restructuring your code. I don't mean that as a criticism, just a really helpful (in general) suggestion!
A:
If both client and server are in Python, here's what you need to know.
Server. Use a dictionary to get labels on the fields. Write this as the response.
>>> import json
>>> json.dumps( {'vld_id':1,'vls_error':2,'something_else':True} )
'{"vld_id": 1, "something_else": true, "vls_error": 2}'
Client. After reading the response string, create a Python dictionary this way.
>>> json.loads( '{"vld_id": 1, "something_else": true, "vls_error": 2}' )
{u'vld_id': 1, u'something_else': True, u'vls_error': 2}
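For reference, a round trip of the list from the question (a sketch using the standard json module in place of simplejson; the values are made-up placeholders):

```python
import json

array_to_js = ["vld_id_value", "vld_error_value", True]
payload = json.dumps(array_to_js)
print(payload)  # ["vld_id_value", "vld_error_value", true]

# Whatever receives this response parses it back and indexes into it,
# e.g. jsonValidateReturn[0] in JavaScript after JSON.parse / $.getJSON.
decoded = json.loads(payload)
print(decoded[0])  # vld_id_value
```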
|
Passing a Python list using JSON and Django
|
I'm trying to send a Python list to the client side (encoded as JSON). This is the code snippet which I have written:
array_to_js = [vld_id, vld_error, False]
array_to_js[2] = True
jsonValidateReturn = simplejson.dumps(array_to_js)
return HttpResponse(jsonValidateReturn, mimetype='application/json')
How do I access it from the client side? Can I access it like the following?
jsonValidateReturn[0]
Or how do I assign a name to the returned JSON array in order to access it?
Actually I'm trying to convert a server side Ajax script that returns an array (see Stack Overflow question Creating a JSON response using Django and Python that handles client side POST requests, so I wanted the same thing in return with Python, but it didn't go well.
|
[
"The JSON array will be dumped without a name / assignment.\nThat is, in order to give it a name, in your JavaScript code you would do something like this:\nvar my_json_data_dump = function_that_gets_json_data();\n\nIf you want to visualize it, for example, substitute:\nvar my_json_data_dump = { 'first_name' : Bob, 'last_name': smith };\n\nAlso, like Iganacio said, you're going to need something like json2.js to parse the string into the object in the last example. You could wrap that parsing step inside of function_that_gets_json_data, or if you're using jQuery you can do it with a function like jQuery.getJSON().\njson2.js is still nice to have, though.\n\nIn response to the comment (I need space and markup):\nYes, of course. All the Python side is doing is encoding a string representation (JSON) for you. You could do something like 'var blah = %s' % json.dumps(obj_to_encode) and then on the client side, instead of simply parsing the response as JSON, you parse it as JavaScript. \nI wouldn't recommend this for a few reasons:\n\nYou're no longer outputting JSON. What if you want to use it in a context where you don't want the variable name, or can't parse JavaScript?\nYou're evaluating JavaScript instead of simply parsing JSON. It's an operation that's open to security holes (if someone can seed the data, they might be able to execute a XSS attack).\n\nI guess you're facing something I think every Ajax developer runs in to. You want one place of truth in your application, but now you're being encouraged to define variables and whatnot in JavaScript. So you have to cross reference your Python code with the JavaScript code that uses it.\nI wouldn't get too hung up on it. I can't see why you would absolutely need to control the name of the variable from Python in this manner. If you're counting on the variable name being the same so that you can reference it in subsequent JavaScript or Python code, it's something you might obviate by simply restructuring your code. 
I don't mean that as a criticism, just a really helpful (in general) suggestion! \n",
"If both client and server are in Python, here's what you need to know.\nServer. Use a dictionary to get labels on the fields. Write this as the response.\n>>> import json\n>>> json.dumps( {'vld_id':1,'vls_error':2,'something_else':True} )\n'{\"vld_id\": 1, \"something_else\": true, \"vls_error\": 2}'\n\nClient. After reading the response string, create a Python dictionary this way.\n>>> json.loads( '{\"vld_id\": 1, \"something_else\": true, \"vls_error\": 2}' )\n{u'vld_id': 1, u'something_else': True, u'vls_error': 2}\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"json",
"python"
] |
stackoverflow_0002435261_django_json_python.txt
|
Q:
Database: storing data from user registration form
Let's say I have a user registration form. In this form, I have the option for the user to upload a photo. I have a User table and a Photo table. My User table has a "PathToPhoto" column. My question is how do I fill in the "PathToPhoto" column if the photo is uploaded and inserted into the Photo table before the user is created? Another way to phrase my question is how to get the newly uploaded photo to be associated with the user that may or may not be created next.
I'm using python and postgresql.
A:
To make sure we're on the same page, is the following correct?
You're inserting the photo information into the Photo table immediately after the user uploads the photo but before he/she submits the form;
When the user submits the form, you're inserting a row into the User table;
One of the items in that row is information about the previously created photo entry.
If so, you should be able to store the "path to photo" information in a Python variable until the user submits the form, and then use the value from that variable in your User-table insert.
|
Database: storing data from user registration form
|
Let's say I have a user registration form. In this form, I have the option for the user to upload a photo. I have a User table and a Photo table. My User table has a "PathToPhoto" column. My question is how do I fill in the "PathToPhoto" column if the photo is uploaded and inserted into the Photo table before the user is created? Another way to phrase my question is how to get the newly uploaded photo to be associated with the user that may or may not be created next.
I'm using python and postgresql.
|
[
"To make sure we're on the same page, is the following correct?\n\nYou're inserting the photo information into the Photo table immediately after the user uploads the photo but before he/she submits the form;\nWhen the user submits the form, you're inserting a row into the User table;\nOne of the items in that row is information about the previously created photo entry.\n\nIf so, you should be able to store the \"path to photo\" information in a Python variable until the user submits the form, and then use the value from that variable in your User-table insert.\n"
] |
[
0
] |
[] |
[] |
[
"database",
"postgresql",
"python"
] |
stackoverflow_0002435281_database_postgresql_python.txt
|
Q:
File size in Python server
We have a server in Python and a client + web service in Ruby. It works only if the file from the URL is less than 800 k. It seems like "socket.puts data" in the client works, but "output = socket.gets" does not. I think the problem is in the Python part. For big files the tests fail with "Connection reset by peer". Is there a buffer size variable by default somewhere in Python?
A:
Could you add a little more information and code to your example?
Are you thinking about sock.recv_into() which takes a buffer and buffer size as arguments? Alternately, are you hitting a timeout issue by failing to have a keepalive on the Ruby side?
Guessing in advance of knowledge.
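The question doesn't include the server code, so this is only a guess, but a classic cause of truncation and resets around large payloads is reading with a single recv() call; the usual fix is to loop until the full length has arrived, along these lines:

```python
def recv_all(sock, length, bufsize=4096):
    """Read exactly `length` bytes from a stream socket, looping over recv()."""
    chunks = []
    remaining = length
    while remaining > 0:
        chunk = sock.recv(min(bufsize, remaining))
        if not chunk:  # peer closed the connection early
            raise EOFError("socket closed with %d bytes still expected" % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```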
|
File size in Python server
|
We have a server in Python and a client + web service in Ruby. It works only if the file from the URL is less than 800 k. It seems like "socket.puts data" in the client works, but "output = socket.gets" does not. I think the problem is in the Python part. For big files the tests fail with "Connection reset by peer". Is there a buffer size variable by default somewhere in Python?
|
[
"Could you add a little more information and code to your example?\nAre you thinking about sock.recv_into() which takes a buffer and buffer size as arguments? Alternately, are you hitting a timeout issue by failing to have a keepalive on the Ruby side?\nGuessing in advance of knowledge.\n"
] |
[
0
] |
[] |
[] |
[
"client",
"python",
"ruby",
"size"
] |
stackoverflow_0002435294_client_python_ruby_size.txt
|
Q:
How can I convert an integer to 'binary' in Python
In Ruby i do so
asd = 123
asd = '%b' % asd # => "1111011"
A:
in Python >= 2.6 with bin():
asd = bin(123) # => '0b1111011'
To remove the leading 0b you can just take the substring bin(123)[2:].
bin(x)
Convert an integer number to a binary string. The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer.
New in version 2.6.
A:
you can also do string formatting, which doesn't contain '0b':
>>> '{:b}'.format(123) #{0:b} in python 2.6
'1111011'
A:
bin() works, as Felix mentioned. For completeness, you can go the other way as
well.
>>> int('01101100',2)
108
>>> bin(108)
'0b1101100'
>>> bin(108)[2:]
'1101100'
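For completeness, the format() built-in (also 2.6+) accepts the same 'b' spec and supports zero-padding, which the Ruby '%b' snippet doesn't need but often comes in handy:

```python
n = 123
print('{:b}'.format(n))   # 1111011
print(format(n, '010b'))  # 0001111011  (zero-padded to width 10)
print(int('1111011', 2))  # 123, converting back
```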
|
How can I convert an integer to 'binary' in Python
|
In Ruby i do so
asd = 123
asd = '%b' % asd # => "1111011"
|
[
"in Python >= 2.6 with bin():\nasd = bin(123) # => '0b1111011'\n\nTo remove the leading 0b you can just take the substring bin(123)[2:].\n\nbin(x)\nConvert an integer number to a binary string. The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer.\nNew in version 2.6.\n\n",
"you can also do string formatting, which doesn't contain '0b':\n>>> '{:b}'.format(123) #{0:b} in python 2.6\n'1111011'\n\n",
"bin() works, as Felix mentioned. For completeness, you can go the other way as\nwell.\n>>> int('01101100',2)\n108\n>>> bin(108)\n'0b1101100'\n>>> bin(108)[2:]\n'1101100'\n\n"
] |
[
7,
7,
0
] |
[] |
[] |
[
"python",
"ruby"
] |
stackoverflow_0002434806_python_ruby.txt
|
Q:
Django : proper way to use model, duplicates!
I have a question about the proper, best way to manage the model.
I am relative newbie to django, so I think I need to read more docs, tutorials,etc (suggestions for this would be cool!).
Anyway, this is my question :
I have a python web crawler, that is "connected" with django model.
Crawling is done once a day, so it's really common to find "duplicates". To avoid duplicates I do this:
cars = Car.objects.filter(name=crawledItem['name'])
if len(cars) > 0:
    #object already exists, update it
    car = cars[0]
else:
    car = Car()
#some non-relevant code here
car.save()
I want to know if this is the proper/correct way to do it, or if there is an "automatic" way to do it.
It's also possible to put the logic inside the Car() constructor; should I do that?
Thanks a lot!
A:
Use the get_or_create() method of the manager, then modify the returned instance as needed.
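Applied to the snippet in the question, that looks roughly like this (a sketch; Car and crawledItem are the names from the question, and the update logic depends on your model's fields):

```python
car, created = Car.objects.get_or_create(name=crawledItem['name'])
# `created` is True when a new row was just inserted, False when an
# existing Car with that name was fetched from the database.
# ... update the remaining fields on `car` here ...
car.save()
```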
|
Django : proper way to use model, duplicates!
|
I have a question about the proper, best way to manage the model.
I am relative newbie to django, so I think I need to read more docs, tutorials,etc (suggestions for this would be cool!).
Anyway, this is my question :
I have a python web crawler, that is "connected" with django model.
Crawling is done once a day, so it's really common to find "duplicates". To avoid duplicates I do this:
cars = Car.objects.filter(name=crawledItem['name'])
if len(cars) > 0:
    #object already exists, update it
    car = cars[0]
else:
    car = Car()
#some non-relevant code here
car.save()
I want to know if this is the proper/correct way to do it, or if there is an "automatic" way to do it.
It's also possible to put the logic inside the Car() constructor; should I do that?
Thanks a lot!
|
[
"Use the get_or_create() method of the manager, then modify the returned instance as needed.\n"
] |
[
6
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0002435825_django_python.txt
|
Q:
Python library for creating stubs/fake objects
I am looking for a Python stubbing library, something that could be used to create fake classes/methods in my unit tests. Is there a simple way to achieve this in Python?
Thanks
PS: I am not looking for mocking library where you would record and replay expectation.
Difference between mock and stubs
A:
We do this.
class FakeSomethingOrOther( object ):
    def __init__( self ):
        self._count_me = 0
    def method_required_by_test( self ):
        return self.special_answer_required_by_test
    def count_this_method( self, *args, **kw ):
        self._count_me += 1
It doesn't take much to set them up
class TestSomething( unittest.TestCase ):
    def setUp( self ):
        self.requiredSomething = FakeSomethingOrOther()
        self.requiredSomething.attribute_required_by_test = 12
        self.requiredSomething.special_answer_required_by_test = 32
        self.to_be_tested = ActualThing( self.requiredSomething )
Since you don't require complex statically checked type declarations, all you need is a class with the right methods. You can force test attribute values in trivially.
These things are really, really easy to write. You don't need a lot of support or libraries.
In other languages (i.e., Java) it's very hard to write something that will pass muster with static compile-time checking. Since Python doesn't have this problem, it's trivial to write mocks or fake implementations for testing purposes.
A:
Python mocker looks nice.
A Mocker instance is used to command recording and replaying of
expectations on any number of mock objects.
|
Python library for creating stubs/fake objects
|
I am looking for a Python stubbing library, something that could be used to create fake classes/methods in my unit tests. Is there a simple way to achieve this in Python?
Thanks
PS: I am not looking for mocking library where you would record and replay expectation.
Difference between mock and stubs
|
[
"We do this.\nclass FakeSomethingOrOther( object ):\n def __init__( self ):\n self._count_me= 0\n def method_required_by_test( self ):\n return self.special_answer_required_by_test\n def count_this_method( self, *args, *kw ):\n self._count_me += 1\n\nIt doesn't take much to set them up\nclass TestSomething( unittest.TestCase ):\n def setUp( self ):\n self.requiredSomething = FakeSomethingOrOther()\n self.requiredSomething.attribute_required_by_test= 12\n self.requiredSomething.special_answer_required_by_test = 32\n self.to_be_tested = ActualThing( self.requiredSomething )\n\nSince you don't require complex statically checked type declarations, all you need is a class with the right methods. You can force test attribute values in trivially.\nThese things are really, really easy to write. You don't need a lot of support or libraries.\nIn other languages (i.e., Java) it's very hard to write something that will pass muster with static compile-time checking. Since Python doesn't have this problem, it's trivial to write mocks or fake implementations for testing purposes.\n",
"Python mocker looks nice.\n\nA Mocker instance is used to command recording and replaying of\n expectations on any number of mock objects.\n\n"
] |
[
9,
0
] |
[] |
[] |
[
"mocking",
"python",
"stub",
"testing"
] |
stackoverflow_0002436220_mocking_python_stub_testing.txt
|
Q:
Trying to get django app to work with mod_wsgi on CentOS 5
I'm running CentOS 5, and am trying to get a django application working with mod_wsgi. I'm using .wsgi settings I got working on Ubuntu. I'm also using an alternate installation of python (/opt/python2.6/) since my django application needs >2.5 and the OS uses 2.3
Here is the error:
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Target WSGI script '/data/hosting/cubedev/apache/django.wsgi' cannot be loaded as Python module.
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Exception occurred processing WSGI script '/data/hosting/cubedev/apache/django.wsgi'.
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] Traceback (most recent call last):
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/data/hosting/cubedev/apache/django.wsgi", line 8, in <module>
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] import django.core.handlers.wsgi
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 1, in <module>
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from threading import Lock
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/threading.py", line 13, in <module>
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from functools import wraps
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/functools.py", line 10, in <module>
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from _functools import partial, reduce
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly
And here is my .wsgi file
import os
import sys
os.environ['PYTHON_EGG_CACHE'] = '/tmp/django/' # This line was added for CentOS.
os.environ['DJANGO_SETTINGS_MODULE'] = 'cube.settings'
sys.path.append('/data/hosting/cubedev')
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
output of ldd /usr/lib/httpd/modules/mod_wsgi.so
linux-gate.so.1 => (0x00250000)
libpython2.6.so.1.0 => /opt/python2.6/lib/libpython2.6.so.1.0 (0x00be6000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00110000)
libdl.so.2 => /lib/libdl.so.2 (0x00557000)
libutil.so.1 => /lib/libutil.so.1 (0x00128000)
libm.so.6 => /lib/libm.so.6 (0x0012c000)
libc.so.6 => /lib/libc.so.6 (0x00251000)
/lib/ld-linux.so.2 (0x0039a000)
vhost config
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerAlias cube-dev.example.com
ServerName cube-dev.example.com
ErrorLog logs/cube-dev.example.com.error_log
CustomLog logs/cube-dev.example.com.access_log common
Alias /phpMyAdmin /var/www/phpMyAdmin/
# DocumentRoot /data/hosting/cubedev
WSGIScriptAlias / /data/hosting/cubedev/apache/django.wsgi
WSGIProcessGroup cubedev.example.com
WSGIDaemonProcess cubedev.example.com
Alias /media/ /data/hosting/cubedev/media/
Alias /adminmedia/ /opt/python2.6/lib/python2.6/site-packages/django/contrib/admin/media/
Alias /media /data/hosting/cubedev/media
<Directory "/data/hosting/cubedev/media">
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
A:
SystemError: dynamic module not initialized properly is the exception that is thrown when a dll (or .so) that is being loaded cannot be properly initialized. In function _PyImport_LoadDynamicModule of Python/importdl.c in case anyone is interested.
Now, the dll/so in question (the dynamic module in Python parlance) is _functools.so, which is part of the Python standard library. I see that it is being loaded from /opt/python2.6 so we know that this is not the system python. My guess is that this is not the python against which mod_wsgi was compiled. To check whether this is the case run ldd mod_wsgi.so and look at what libpython is returned.
Therefore my suggestion is either to recompile mod_wsgi against the interpreter in /opt/python2.6 by running in the mod_wsgi source directory
./configure --with-python=/opt/python2.6/bin/python2.6
or make sure that sys.prefix points to the python installation that mod_wsgi expects by setting its value with the WSGIPythonHome directive.
UPDATE after ldd output
The second line in the ldd output shows that mod_wsgi loads the python lib in /usr/lib instead of /opt/python2.6. To instruct mod_wsgi to load the one in /opt/python2.6 you should probably prepend it to the LD_LIBRARY_PATH environment variable.
Try it first on the command line:
LD_LIBRARY_PATH=/opt/python2.6/lib:$LD_LIBRARY_PATH ldd mod_wsgi.so
and then make sure that the correct LD_LIBRARY_PATH is specified in the script that starts Apache.
Yet another update
You'll have to debug your mod_wsgi configuration. Just try with the following .wsgi file in place of yours and tell us what you get:
def application(environ, start_response):
    status = '200 OK'
    start_response(status, [('Content-type', 'text/plain')])

    try:
        import sys
        return ['\n'.join([sys.prefix, sys.executable])]
    except:
        import traceback as tb
        return [tb.format_exc()]
If what you get is not `/opt/python2.6', try with the option
WSGIPythonHome /opt/python2.6
See also http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives
|
Trying to get django app to work with mod_wsgi on CentOS 5
|
I'm running CentOS 5, and am trying to get a django application working with mod_wsgi. I'm using .wsgi settings I got working on Ubuntu. I'm also using an alternate installation of python (/opt/python2.6/) since my django application needs >2.5 and the OS uses 2.3
Here is the error:
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Target WSGI script '/data/hosting/cubedev/apache/django.wsgi' cannot be loaded as Python module.
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] mod_wsgi (pid=23630): Exception occurred processing WSGI script '/data/hosting/cubedev/apache/django.wsgi'.
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] Traceback (most recent call last):
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/data/hosting/cubedev/apache/django.wsgi", line 8, in
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] import django.core.handlers.wsgi
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 1, in
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from threading import Lock
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/threading.py", line 13, in
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from functools import wraps
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] File "/opt/python2.6/lib/python2.6/functools.py", line 10, in
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] from _functools import partial, reduce
[Thu Mar 04 10:52:15 2010] [error] [client 10.1.0.251] SystemError: dynamic module not initialized properly
And here is my .wsgi file
import os
import sys
os.environ['PYTHON_EGG_CACHE'] = '/tmp/django/' # This line was added for CentOS.
os.environ['DJANGO_SETTINGS_MODULE'] = 'cube.settings'
sys.path.append('/data/hosting/cubedev')
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
output of ldd /usr/lib/httpd/modules/mod_wsgi.so
linux-gate.so.1 => (0x00250000)
libpython2.6.so.1.0 => /opt/python2.6/lib/libpython2.6.so.1.0 (0x00be6000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00110000)
libdl.so.2 => /lib/libdl.so.2 (0x00557000)
libutil.so.1 => /lib/libutil.so.1 (0x00128000)
libm.so.6 => /lib/libm.so.6 (0x0012c000)
libc.so.6 => /lib/libc.so.6 (0x00251000)
/lib/ld-linux.so.2 (0x0039a000)
vhost config
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerAlias cube-dev.example.com
ServerName cube-dev.example.com
ErrorLog logs/cube-dev.example.com.error_log
CustomLog logs/cube-dev.example.com.access_log common
Alias /phpMyAdmin /var/www/phpMyAdmin/
# DocumentRoot /data/hosting/cubedev
WSGIScriptAlias / /data/hosting/cubedev/apache/django.wsgi
WSGIProcessGroup cubedev.example.com
WSGIDaemonProcess cubedev.example.com
Alias /media/ /data/hosting/cubedev/media/
Alias /adminmedia/ /opt/python2.6/lib/python2.6/site-packages/django/contrib/admin/media/
Alias /media /data/hosting/cubedev/media
<Directory "/data/hosting/cubedev/media">
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
|
[
"SystemError: dynamic module not initialized properly is the exception that is thrown when a dll (or .so) that is being loaded cannot be properly initialized. In function _PyImport_LoadDynamicModule of Python/importdl.c in case anyone is interested.\nNow, the dll/so in question (the dynamic module in Python parlance) is _functools.so, which is part of the Python standard library. I see that it is being loaded from /opt/python2.6 so we know that this is not the system python. My guess is that this is not the python against which mod_wsgi was compiled. To check whether this is the case run ldd mod_wsgi.so and look at what libpython is returned.\nTherefore my suggestion is either to recompile mod_wsgi against the interpreter in /opt/python2.6 by running in the mod_wsgi source directory\n./configure --with-python=/opt/python2.6/bin/python2.6\n\nor make sure that sys.prefix points to the python installation that mod_wsgi expects by setting its value with the WSGIPythonHome directive.\nUPDATE after ldd output\nThe second line in the ldd output shows that mod_wsgi loads the python lib in /usr/lib instead of /opt/python2.6. To instruct mod_wsgi to load the one in /opt/python2.6 you should probably prepend it to the LD_LIBRARY_PATH environment variable.\nTry it first on the command line:\nLD_LIBRARY_PATH=/opt/python2.6/lib:$LD_LIBRARY_PATH ldd mod_wsgi.so\n\nand then make sure that the correct LD_LIBRARY_PATH is specified in the script that starts Apache.\nYet another update\nYou'll have to debug your mod_wsgi configuration. 
Just try with the following .wsgi file in place of yours and tell us what you get:\ndef application(environ, start_response):\n status = '200 OK'\n start_response(status, [('Content-type', 'text/plain')])\n\n try:\n import sys\n return ['\\n'.join([sys.prefix, sys.executable])]\n except:\n import traceback as tb\n return [tb.format_exc()]\n\nIf what you get is not `/opt/python2.6', try with the option\nWSGIPythonHome /opt/python2.6\n\nSee also http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives\n"
] |
[
7
] |
[] |
[] |
[
"centos5",
"django",
"mod_wsgi",
"python"
] |
stackoverflow_0002435125_centos5_django_mod_wsgi_python.txt
|
Q:
resolving overloads in boost.python
I have a C++ class like this:
class ConnectionBase
{
public:
    ConnectionBase();
    template <class T> Publish(const T&);
private:
    virtual void OnEvent(const Overload_a&) {}
    virtual void OnEvent(const Overload_b&) {}
};
My templates & overloads are a known fixed set of types at compile time. The application code derives from ConnectionBase and overrides OnEvent for the events it cares about. I can do this because the set of types is known. OnEvent is private because the user never calls it, the class creates a thread that calls it as a callback. The C++ code works.
I have wrapped this in boost.python, I can import it and publish from python. I want to create the equivalent of the following in python :
class ConnectionDerived
{
public:
    ConnectionDerived();
private:
    virtual void OnEvent(const Overload_b&)
    {
        // application code
    }
};
I don't care to (do not want to) expose the default OnEvent functions, since they're never called from application code - I only want to provide a body for the C++ class to call.
But ... since python isn't typed, and all the boost.python examples I've seen dealing with internals are on the C++ side, I'm a little puzzled as to how to do this. How do I override specific overloads?
A:
Creating C++ virtual functions that can be overridden in Python requires some work - see here. You will need to create a wrapper function in a derived class that calls the Python method. Here is how it can work:
struct ConnectionBaseWrap : ConnectionBase, wrapper<ConnectionBase>
{
    void OnEvent(const Overload_a &obj) {
        if (override f = get_override("OnEventOverloadA"))
            f(obj);
    }
    void OnEvent(const Overload_b &obj) {
        if (override f = get_override("OnEventOverloadB"))
            f(obj);
    }
};

BOOST_PYTHON_MODULE(yourmodule) {
    class_<ConnectionBaseWrap, boost::noncopyable>("ConnectionBase")
        //Your additional definitions here.
        ;
}
Some notes:
Since you won't be calling the base class's OnEvent function from Python, you don't need to define it with .def. If you did want to define it, you'd need to give the two overloaded versions different names, like here.
The two overloads will call different Python methods: OnEventOverloadA and OnEventOverloadB. Another option is to have them both call the same method OnEvent, and then one Python method will override both overloads.
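The dispatch pattern can be sketched in pure Python, with no boost required. This is only an analogy of what the C++ wrapper does: each C++ overload calls a distinctly named Python hook (the names OnEventOverloadA/OnEventOverloadB mirror the answer above), so a subclass overrides only the overload it cares about.

```python
# Hypothetical event types standing in for the C++ Overload_a/Overload_b.
class Overload_a: pass
class Overload_b: pass

class ConnectionBase:
    def _dispatch(self, event):
        # Stands in for the C++ callback choosing which get_override to call.
        name = "OnEventOverloadA" if isinstance(event, Overload_a) else "OnEventOverloadB"
        hook = getattr(self, name, None)
        # Like get_override: do nothing if the subclass supplied no hook.
        return hook(event) if hook is not None else None

class ConnectionDerived(ConnectionBase):
    def OnEventOverloadB(self, event):
        return "handled Overload_b"

c = ConnectionDerived()
print(c._dispatch(Overload_a()))  # None: no override supplied
print(c._dispatch(Overload_b()))  # handled Overload_b
```

As in the C++ version, the base class silently ignores events the subclass declined to handle.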
|
resolving overloads in boost.python
|
I have a C++ class like this:
class ConnectionBase
{
public:
    ConnectionBase();
    template <class T> Publish(const T&);
private:
    virtual void OnEvent(const Overload_a&) {}
    virtual void OnEvent(const Overload_b&) {}
};
My templates & overloads are a known fixed set of types at compile time. The application code derives from ConnectionBase and overrides OnEvent for the events it cares about. I can do this because the set of types is known. OnEvent is private because the user never calls it, the class creates a thread that calls it as a callback. The C++ code works.
I have wrapped this in boost.python, I can import it and publish from python. I want to create the equivalent of the following in python :
class ConnectionDerived
{
public:
    ConnectionDerived();
private:
    virtual void OnEvent(const Overload_b&)
    {
        // application code
    }
};
I don't care to (do not want to) expose the default OnEvent functions, since they're never called from application code - I only want to provide a body for the C++ class to call.
But ... since python isn't typed, and all the boost.python examples I've seen dealing with internals are on the C++ side, I'm a little puzzled as to how to do this. How do I override specific overloads?
|
[
"Creating C++ virtual functions that can be overridden in Python requires some work - see here. You will need to create a wrapper function in a derived class that calls the Python method. Here is how it can work:\nstruct ConnectionBaseWrap : ConnectionBase, wrapper<ConnectionBase>\n{\n void OnEvent(const Overload_a &obj) {\n if (override f = get_override(\"OnEventOverloadA\"))\n f(obj);\n }\n void OnEvent(const Overload_b &obj) {\n if (override f = get_override(\"OnEventOverloadB\"))\n f(obj);\n }\n};\n\nBOOST_PYTHON_MODULE(yourmodule) {\n class_<ConnectionBaseWrap, boost::noncopyable>(\"ConnectionBase\")\n //Your additional definitions here.\n ;\n}\n\nSome notes:\n\nSince you won't be calling the base class's OnEvent function from Python, you don't need to define it with .def. If you did want to define it, you'd need to give the two overloaded versions different names, like here.\nThe two overloads will call different Python methods: OnEventOverloadA and OnEventOverloadB. Another option is to have them both call the same method OnEvent, and then one Python method will override both overloads.\n\n"
] |
[
2
] |
[] |
[] |
[
"boost",
"boost_python",
"c++",
"python"
] |
stackoverflow_0002436067_boost_boost_python_c++_python.txt
|
Q:
Reverse Search Best Practices?
I'm making an app that has a need for reverse searches. By this, I mean that users of the app will enter search parameters and save them; then, when any new objects get entered onto the system, if they match the existing search parameters that a user has saved, a notification will be sent, etc.
I am having a hard time finding solutions for this type of problem.
I am using Django and thinking of building the searches and pickling them using Q objects as outlined here: http://www.djangozen.com/blog/the-power-of-q
The way I see it, when a new object is entered into the database, I will have to load every single saved query from the db and somehow run it against this one new object to see if it would match that search query... This doesn't seem ideal - has anyone tackled such a problem before?
A:
At the database level, many databases offer 'triggers'.
Another approach is to have timed jobs that periodically fetch all items from the database that have a last-modified date since the last run; then these get filtered and alerts issued. You can perhaps put some of the filtering into the query statement in the database. However, this is a bit trickier if notifications need to be sent if items get deleted.
You can also put triggers manually into the code that submits data to the database, which is perhaps more flexible and certainly doesn't rely on specific features of the database.
A nice way for the triggers and the alerts to communicate is through message queues - queues such as RabbitMQ and other AMQP implementations will scale with your site.
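The trigger-to-alert flow above can be sketched with the stdlib queue module; a production setup would use RabbitMQ or another AMQP broker as suggested, and the saved-search predicates here are made up for illustration.

```python
import queue

events = queue.Queue()

def on_save(obj):
    # The "trigger": placed in the code that writes objects to the database.
    events.put(obj)

def drain_and_match(saved_searches):
    # The alert side: pull queued objects and test each saved search.
    matches = []
    while not events.empty():
        obj = events.get()
        for predicate in saved_searches:
            if predicate(obj):
                matches.append(obj)
    return matches

on_save({"title": "red bike"})
on_save({"title": "blue car"})
print(drain_and_match([lambda o: "bike" in o["title"]]))  # [{'title': 'red bike'}]
```

The queue decouples the write path from the (possibly slow) matching and notification work.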
A:
The amount of effort you use to solve this problem is directly related to the number of stored queries you are dealing with.
Over 20 years ago we handled stored queries by treating them as minidocs and indexing them based on all of the must have and may have terms. A new doc's term list was used as a sort of query against this "database of queries" and that built a list of possibly interesting searches to run, and then only those searches were run against the new docs. This may sound convoluted, but when there are more than a few stored queries (say anywhere from 10,000 to 1,000,000 or more) and you have a complex query language that supports a hybrid of Boolean and similarity-based searching, it substantially reduced the number we had to execute as full-on queries -- often no more than 10 or 15 queries.
One thing that helped was that we were in control of the horizontal and the vertical of the whole thing. We used our query parser to build a parse tree and that was used to build the list of must/may have terms we indexed the query under. We warned the customer away from using certain types of wildcards in the stored queries because it could cause an explosion in the number of queries selected.
Update for comment:
Short answer: I don't know for sure.
Longer answer: We were dealing with a custom built text search engine and part of its query syntax allowed slicing the doc collection in certain ways very efficiently, with special emphasis on date_added. We played a lot of games because we were ingesting 4-10,000,000 new docs a day and running them against up to 1,000,000+ stored queries on DEC Alphas with 64MB of main memory. (This was in the late 80's/early 90's.)
I'm guessing that filtering on something equivalent to date_added could be used in combination with the date of the last time you ran your queries, or maybe the highest id at last query run time. If you need to re-run the queries against a modified record you could use its id as part of the query.
For me to get any more specific, you're going to have to get a lot more specific about exactly what problem you are trying to solve and the scale of the solution you are trying to accomplish.
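The "database of queries" idea described above can be sketched in a few lines: index each stored query under its must-have terms, use a new document's term list to select candidate queries, and fully verify only those few. The query contents here are made up for illustration.

```python
from collections import defaultdict

# Hypothetical stored searches, each with a set of must-have terms.
stored_queries = {
    "q1": {"must": {"django", "search"}},
    "q2": {"must": {"rabbitmq"}},
}

# Inverted index: term -> ids of stored queries that mention it.
term_index = defaultdict(set)
for qid, q in stored_queries.items():
    for term in q["must"]:
        term_index[term].add(qid)

def matching_queries(doc_terms):
    doc_terms = set(doc_terms)
    # Candidate selection: any query sharing at least one term with the doc.
    candidates = set()
    for term in doc_terms:
        candidates |= term_index.get(term, set())
    # Full verification, run only against the (usually few) candidates.
    return {qid for qid in candidates
            if stored_queries[qid]["must"] <= doc_terms}

print(matching_queries(["new", "django", "search", "app"]))  # {'q1'}
```

The point is that each new document triggers a handful of verifications instead of one pass over every saved search.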
A:
If you stored the type(s) of object(s) involved in each stored search as a generic relation, you could add a post-save signal to all involved objects. When the signal fires, it looks up only the searches that involve its object type and runs those. That probably will still run into scaling issues if you have a ton of writes to the db and a lot of saved searches, but it would be a straightforward Django approach.
|
Reverse Search Best Practices?
|
I'm making an app that has a need for reverse searches. By this, I mean that users of the app will enter search parameters and save them; then, when any new objects get entered onto the system, if they match the existing search parameters that a user has saved, a notification will be sent, etc.
I am having a hard time finding solutions for this type of problem.
I am using Django and thinking of building the searches and pickling them using Q objects as outlined here: http://www.djangozen.com/blog/the-power-of-q
The way I see it, when a new object is entered into the database, I will have to load every single saved query from the db and somehow run it against this one new object to see if it would match that search query... This doesn't seem ideal - has anyone tackled such a problem before?
|
[
"At the database level, many databases offer 'triggers'.\nAnother approach is to have timed jobs that periodically fetch all items from the database that have a last-modified date since the last run; then these get filtered and alerts issued. You can perhaps put some of the filtering into the query statement in the database. However, this is a bit trickier if notifications need to be sent if items get deleted.\nYou can also put triggers manually into the code that submits data to the database, which is perhaps more flexible and certainly doesn't rely on specific features of the database.\nA nice way for the triggers and the alerts to communicate is through message queues - queues such as RabbitMQ and other AMQP implementations will scale with your site.\n",
"The amount of effort you use to solve this problem is directly related to the number of stored queries you are dealing with.\nOver 20 years ago we handled stored queries by treating them as minidocs and indexing them based on all of the must have and may have terms. A new doc's term list was used as a sort of query against this \"database of queries\" and that built a list of possibly interesting searches to run, and then only those searches were run against the new docs. This may sound convoluted, but when there are more than a few stored queries (say anywhere from 10,000 to 1,000,000 or more) and you have a complex query language that supports a hybrid of Boolean and similarity-based searching, it substantially reduced the number we had to execute as full-on queries -- often no more than 10 or 15 queries.\nOne thing that helped was that we were in control of the horizontal and the vertical of the whole thing. We used our query parser to build a parse tree and that was used to build the list of must/may have terms we indexed the query under. We warned the customer away from using certain types of wildcards in the stored queries because it could cause an explosion in the number of queries selected.\nUpdate for comment:\nShort answer: I don't know for sure.\nLonger answer: We were dealing with a custom built text search engine and part of its query syntax allowed slicing the doc collection in certain ways very efficiently, with special emphasis on date_added. We played a lot of games because we were ingesting 4-10,000,000 new docs a day and running them against up to 1,000,000+ stored queries on DEC Alphas with 64MB of main memory. (This was in the late 80's/early 90's.)\nI'm guessing that filtering on something equivalent to date_added could be used in combination with the date of the last time you ran your queries, or maybe the highest id at last query run time. 
If you need to re-run the queries against a modified record you could use its id as part of the query.\nFor me to get any more specific, you're going to have to get a lot more specific about exactly what problem you are trying to solve and the scale of the solution you are trying to accomplish.\n",
"If you stored the type(s) of object(s) involved in each stored search as a generic relation, you could add a post-save signal to all involved objects. When the signal fires, it looks up only the searches that involve its object type and runs those. That probably will still run into scaling issues if you have a ton of writes to the db and a lot of saved searches, but it would be a straightforward Django approach.\n"
] |
[
4,
4,
1
] |
[] |
[] |
[
"django",
"python",
"reverse",
"search"
] |
stackoverflow_0002431276_django_python_reverse_search.txt
|
Q:
Python decoding issue with hashlib.digest() method
Hello StackOverflow community,
Using Google App Engine, I wrote a keyToSha256() method within a model class (extending db.Model) :
class Car(db.Model):
    def keyToSha256(self):
        keyhash = hashlib.sha256(str(self.key())).digest()
        return keyhash
When displaying the output (ultimately within a Django template), I get garbled text, for example :
�����_ɘ�!`�I�!�;�QeqN��Al�'2
I was expecting something more in line with this :
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
Am I missing something important ? Despite reading several guides on ASCII, Unicode, utf-8 and the like, I think I'm still far from mastering the secrets of string encoding/decoding. After browsing StackOverflow and searching for insights via Google, I figured out I should ask the question here. Any idea ? Thanks !
A:
Use .hexdigest() instead.
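The difference in one place: .digest() returns raw bytes, which render as garbage in a template, while .hexdigest() returns the printable hex form of the same hash. The key string below is a made-up stand-in for str(self.key()), and the sketch is Python 3 (hence the explicit .encode()) while the original is Python 2.

```python
import hashlib

key_str = "Car:12345"  # hypothetical stand-in for str(self.key())

raw = hashlib.sha256(key_str.encode()).digest()      # 32 raw bytes
hexed = hashlib.sha256(key_str.encode()).hexdigest() # 64 hex characters

print(len(raw))              # 32
print(len(hexed))            # 64
print(raw.hex() == hexed)    # True: same hash, printable encoding
```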
|
Python decoding issue with hashlib.digest() method
|
Hello StackOverflow community,
Using Google App Engine, I wrote a keyToSha256() method within a model class (extending db.Model) :
class Car(db.Model):
    def keyToSha256(self):
        keyhash = hashlib.sha256(str(self.key())).digest()
        return keyhash
When displaying the output (ultimately within a Django template), I get garbled text, for example :
�����_ɘ�!`�I�!�;�QeqN��Al�'2
I was expecting something more in line with this :
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
Am I missing something important ? Despite reading several guides on ASCII, Unicode, utf-8 and the like, I think I'm still far from mastering the secrets of string encoding/decoding. After browsing StackOverflow and searching for insights via Google, I figured out I should ask the question here. Any idea ? Thanks !
|
[
"Use .hexdigest() instead.\n"
] |
[
5
] |
[] |
[] |
[
"decode",
"google_app_engine",
"python"
] |
stackoverflow_0002436621_decode_google_app_engine_python.txt
|
Q:
Why do we have callable objects in python?
What is the purpose of a callable object? What problems do they solve?
A:
Many kinds of objects are callable in Python, and they can serve many purposes:
functions are callable, and they may carry along a "closure" from an outer function
classes are callable, and calling a class gets you an instance of that class
methods are callable, for function-like behavior specifically pertaining to an instance
staticmethods and classmethods are callable, for method-like functionality when the functionality pertains to "a whole class" in some sense (staticmethods' usefulness is dubious, since a classmethod could do just as well;-)
generators are callable, and calling a generator gets you an iterator object
finally, and this may be specifically what you were asking about (not realizing that all of the above are objects too...!!!), you can code a class whose instances are callable: this is often the simplest way to have calls that update an instance's state as well as depend on it (though a function with a suitable closure, and a bound method, offer alternatives, a callable instance is the one way to go when you need to perform both calling and some other specific operation on the same object: for example, an object you want to be able to call but also apply indexing to had better be an instance of a class that's both callable and indexable;-).
A great range of examples of the kind of "problems they solve" is offered by Python's standard library, which has many cases of each of the specific types I mention above.
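The last point above, an instance whose calls both depend on and update its state, can be shown with a minimal example:

```python
class Averager:
    """Running average: each call records a value and returns the mean."""
    def __init__(self):
        self.values = []
    def __call__(self, x):
        self.values.append(x)
        return sum(self.values) / len(self.values)

avg = Averager()
print(avg(10))        # 10.0
print(avg(20))        # 15.0
print(callable(avg))  # True
```

A plain function with a closure could do the same, but the instance also exposes its state (avg.values) for inspection or other operations.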
A:
They take parameters and return a result depending on those parameters.
A callable is just an abstract form of a function: an interface that defines that an object acts like a function (i.e. accepts parameters).
As functions are first class objects, it is obvious that functions are callable objects. If you are talking about the __call__ method, this is just one of the many special methods with which you can overload the behavior of custom objects, e.g. for arithmetic operations or also defining what happens if you call an object.
One reason to use one is to have some kind of factory object that itself creates other objects.
A:
There are areas, especially in the 'functions calling functions of function functions' where objects allow less nesting.
Consider making a classic decorator that checks an authorization level before calling a function. Using it is clear:
@check_authorization(level="Manager")
def update_price(Item, new_price):...
You could do this as nested functions:
def check_authorization(level):
    def take_params(function):
        def concrete(*args, **kwargs):
            if user_level_greater_than(level):
                return function(*args, **kwargs)
            return None
        return concrete
    return take_params
Or you could do this as a class, which might be clearer:
class check_authorization(object):
    def __init__(self, level):
        self.level = level
    def __call__(self, function):
        self.function = function
        return self.dec
    def dec(self, *args, **kwargs):
        if user_level_greater_than(self.level):
            return self.function(*args, **kwargs)
        return None
Many would find this flat method more clear. Of course, I believe in cheating, because I like the signatures and metadata correct:
from dectools.dectools import make_call_if
@make_call_if
def check_authorization(function, arg, kwargs, level):
    return user_level_greater_than(level)
The callable object is a tool which is good for some known applications and may also be good for the bizarre problem real life throws at you.
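The class-based variant above becomes runnable once user_level_greater_than is supplied; the answer leaves that function undefined, so the stub below is hypothetical.

```python
def user_level_greater_than(level):
    return level == "Manager"  # stub: pretend the current user is a Manager

class check_authorization(object):
    def __init__(self, level):
        self.level = level
    def __call__(self, function):
        self.function = function
        return self.dec          # the bound method becomes the wrapped callable
    def dec(self, *args, **kwargs):
        if user_level_greater_than(self.level):
            return self.function(*args, **kwargs)
        return None

@check_authorization(level="Manager")
def update_price(item, new_price):
    return (item, new_price)

@check_authorization(level="CEO")
def fire_everyone():
    return "done"

print(update_price("widget", 9.99))  # ('widget', 9.99)
print(fire_everyone())               # None: the stub denies non-Manager levels
```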
|
Why do we have callable objects in python?
|
What is the purpose of a callable object? What problems do they solve?
|
[
"Many kinds of objects are callable in Python, and they can serve many purposes:\n\nfunctions are callable, and they may carry along a \"closure\" from an outer function\nclasses are callable, and calling a class gets you an instance of that class\nmethods are callable, for function-like behavior specifically pertaining to an instance\nstaticmethods and classmethods are callable, for method-like functionality when the\nfunctionality pertains to \"a whole class\" in some sense (staticmethods' usefulness is\ndubious, since a classmethod could do just as well;-)\ngenerators are callable, and calling a generator gets you an iterator object\nfinally, and this may be specifically what you were asking about (not realizing that\nall of the above are objects too...!!!), you can code a class whose instances are\ncallable: this is often the simplest way to have calls that update an instance's\nstate as well as depend on it (though a function with a suitable closure, and a bound\nmethod, offer alternatives, a callable instance is the one way to go when you need to\nperform both calling and some other specific operation on the same object: for\nexample, an object you want to be able to call but also apply indexing to had better\nbe an instance of a class that's both callable and indexable;-).\n\nA great range of examples of the kind of \"problems they solve\" is offered by Python's standard library, which has many cases of each of the specific types I mention above.\n",
"They take parameters and return a result depending on those parameters.\nA callable is just an abstract form of a function: an interface that defines that an object acts like a function (i.e. accepts parameters).\nAs functions are first class objects, it is obvious that functions are callable objects. If you are talking about the __call__ method, this is just one of the many special methods with which you can overload the behavior of custom objects, e.g. for arithmetic operations or also defining what happens if you call an object.\nOne reason to use one is to have some kind of factory object that itself creates other objects. \n",
"There are areas, especially in the 'functions calling functions of function functions' where objects allow less nesting. \nConsider making a classic decorator that checks an authorization level before calling a function. Using it is clear:\n@check_authorization(level=\"Manager\")\ndef update_price(Item, new_price):...\n\nYou could do this as nested functions:\ndef check_authorization(level):\n    def take_params(function):\n        def concrete(*args, **kwargs):\n            if user_level_greater_than(level):\n                return function(*args, **kwargs)\n            return None\n        return concrete\n    return take_params\n\nOr you could do this as a class, which might be clearer:\nclass check_authorization(object):\n    def __init__(self, level):\n        self.level = level\n    def __call__(self, function):\n        self.function = function\n        return self.dec\n    def dec(self, *args, **kwargs):\n        if user_level_greater_than(self.level):\n            return self.function(*args, **kwargs)\n        return None\n\nMany would find this flat method more clear. Of course, I believe in cheating, because I like the signatures and metadata correct:\nfrom dectools.dectools import make_call_if\n\n@make_call_if\ndef check_authorization(function, arg, kwargs, level):\n    return user_level_greater_than(level)\n\nThe callable object is a tool which is good for some known applications and may also be good for the bizarre problem real life throws at you.\n"
] |
[
13,
9,
2
] |
[] |
[] |
[
"callable",
"python"
] |
stackoverflow_0002436578_callable_python.txt
|
Q:
Python API for VirtualBox
I have made a command-line interface for virtualbox such that the virtualbox can be controlled from a remote machine. Now I am trying to implement the command-line interface using python virtualbox api. For that I have downloaded the pyvb package (python api documentation shows functions that can be used for implementing this under pyvb package). but when I give pyvb.vb.VB.startVM(instance of VB class,pyvb.vm.vbVM)
SERVER SIDE CODE IS
from pyvb.constants import *
from pyvb.vm import *
from pyvb.vb import *
import xpcom
import pyvb
import os
import socket
import threading
class ClientThread ( threading.Thread ):
    # Override Thread's __init__ method to accept the parameters needed:
    def __init__ ( self, channel, details ):
        self.channel = channel
        self.details = details
        threading.Thread.__init__ ( self )
    def run ( self ):
        print 'Received connection:', self.details [ 0 ]
        while 1:
            s= self.channel.recv ( 1024 )
            if(s!='end'):
                if(s=='start'):
                    v=VB()
                    pyvb.vb.VB.startVM(v,pyvb.vm.vbVM)
            else:
                self.channel.close()
                break
        print 'Closed connection:', self.details [ 0 ]

server = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
server.bind ( ( '127.0.0.1', 2897 ) )
server.listen ( 5 )
while True:
    channel, details = server.accept()
    ClientThread ( channel, details ).start()
it shows an error
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.5/threading.py", line 486, in __bootstrap_inner
self.run()
File "news.py", line 27, in run
pyvb.vb.VB.startVM(v,pyvb.vm.vbVM.getUUID(m))
File "/usr/lib/python2.5/site-packages/pyvb-0.0.2-py2.5.egg/pyvb/vb.py", line 65, in startVM
cmd='%s %s'%(VB_COMMAND_STARTVM, vm.getUUID())
AttributeError: 'str' object has no attribute 'getUUID'
A:
You might want to check the official Python API from Virtualbox. pyvb seems like a wrapper written by a third party.
The virtualbox sdk contains Python examples and full API documentation.
|
Python API for VirtualBox
|
I have made a command-line interface for VirtualBox such that VirtualBox can be controlled from a remote machine. Now I am trying to implement the command-line interface using the Python VirtualBox API. For that I have downloaded the pyvb package (the Python API documentation shows functions that can be used for implementing this under the pyvb package), but when I call pyvb.vb.VB.startVM(instance of VB class, pyvb.vm.vbVM)
SERVER SIDE CODE IS
from pyvb.constants import *
from pyvb.vm import *
from pyvb.vb import *
import xpcom
import pyvb
import os
import socket
import threading
class ClientThread ( threading.Thread ):
# Override Thread's __init__ method to accept the parameters needed:
def __init__ ( self, channel, details ):
self.channel = channel
self.details = details
threading.Thread.__init__ ( self )
def run ( self ):
print 'Received connection:', self.details [ 0 ]
while 1:
s= self.channel.recv ( 1024 )
if(s!='end'):
if(s=='start'):
v=VB()
pyvb.vb.VB.startVM(v,pyvb.vm.vbVM)
else:
self.channel.close()
break
print 'Closed connection:', self.details [ 0 ]
server = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
server.bind ( ( '127.0.0.1', 2897 ) )
server.listen ( 5 )
while True:
channel, details = server.accept()
ClientThread ( channel, details ).start()
it shows an error
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.5/threading.py", line 486, in __bootstrap_inner
self.run()
File "news.py", line 27, in run
pyvb.vb.VB.startVM(v,pyvb.vm.vbVM.getUUID(m))
File "/usr/lib/python2.5/site-packages/pyvb-0.0.2-py2.5.egg/pyvb/vb.py", line 65, in startVM
cmd='%s %s'%(VB_COMMAND_STARTVM, vm.getUUID())
AttributeError: 'str' object has no attribute 'getUUID'
|
[
"You might want to check the official Python API from Virtualbox. pyvb seems like a wrapper written by a third party.\nThe virtualbox sdk contains Python examples and full API documentation. \n"
] |
[
3
] |
[] |
[] |
[
"api",
"python",
"virtualbox"
] |
stackoverflow_0002301534_api_python_virtualbox.txt
|
Q:
Name some non-trivial sites written using IronPython & Silverlight
Just what the title says. It'd be nice to know a few non-trivial sites out there using Silverlight in Python.
A:
My current job is writing business apps for a German / Swiss media consortium using IronPython and Silverlight. We're gradually moving all our web apps over to IronPython / Silverlight as they are faster to build, look nicer and perform better than the Javascript equivalents.
Definitely not trivial, but not public either I'm afraid (although there our main app may be used by customers - advertisers - when we port that over).
|
Name some non-trivial sites written using IronPython & Silverlight
|
Just what the title says. It'd be nice to know a few non-trivial sites out there using Silverlight in Python.
|
[
"My current job is writing business apps for a German / Swiss media media consortium using IronPython and Silverlight. We're gradually moving all our web apps over to IronPython / Silverlight as they are faster to build, look nicer and perform better than the Javascript equivalents.\nDefinitely not trivial, but not public either I'm afraid (although there our main app may be used by customers - advertisers - when we port that over).\n"
] |
[
2
] |
[] |
[] |
[
"ironpython",
"python",
"silverlight",
"web"
] |
stackoverflow_0002436714_ironpython_python_silverlight_web.txt
|
Q:
What is the best, python or bash for selectively concatenating lots of files?
I have around 20000 files coming from the output of some program, and their names follow the format:
data1.txt
data2.txt
...
data99.txt
data100.txt
...
data999.txt
data1000.txt
...
data20000.txt
I would like to write a script that gets as input argument the number N. Then it makes blocks of N concatenated files, so if N=5, it would make the following new files:
data_new_1.txt: it would contain (concatenated) data1.txt to data5.txt (like cat data1.txt data2.txt ...> data_new_1.txt )
data_new_2.txt: it would contain (concatenated) data6.txt to data10.txt
.....
I wonder what you think would be the best approach to do this, whether bash, python or another one like awk, perl, etc.
The best approach I mean in terms of simplest code.
Thanks
A:
Here's a Python (2.6) version (if you have Python 2.5, add a first line that says
from __future__ import with_statement
and the script will also work)...:
import sys
def main(N):
rN = range(N)
for iout, iin in enumerate(xrange(1, 99999, N)):
with open('data_new_%s.txt' % (iout+1), 'w') as out:
for di in rN:
try: fin = open('data%s.txt' % (iin + di), 'r')
except IOError: return
out.write(fin.read())
fin.close()
if __name__ == '__main__':
if len(sys.argv) > 1:
N = int(sys.argv[1])
else:
N = 5
main(N)
As you see from other answers & comments, opinions on performance differ -- some believe that the Python startup (and imports of modules) will make this slower than bash (but the import part at least is bogus: sys, the only needed module, is a built-in module, requires no "loading" and therefore basically negligible overhead to import it); I suspect avoiding the repeated fork/exec of cat may slow bash down; others think that I/O will dominate anyway, making the two solutions equivalent. You'll have to benchmark with your own files, on your own system, to solve this performance doubt.
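One cheap way to settle such doubts on your own machine is a small timeit sketch. The snippet below (an illustration, not part of the timed methods above; iteration counts are shrunk so it finishes quickly) isolates just the range-rebuilding difference that separates method 0 from method 1:

```python
import timeit

# Rebuild the character list on every pass (as method 0 effectively does)
rebuilt = timeit.timeit("[chr(c) for c in range(33, 127)]", number=10000)

# Iterate over a list hoisted out of the loop (as method 1 does)
hoisted = timeit.timeit("for c in chars: pass",
                        setup="chars = [chr(c) for c in range(33, 127)]",
                        number=10000)

print("rebuilt: %.3fs  hoisted: %.3fs" % (rebuilt, hoisted))
```

The absolute numbers will vary per machine, which is exactly why benchmarking locally is worth the two minutes.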
A:
Best in what sense? Bash can do this quite well, but it may be harder for you to write a good bash script if you are more familiar with another scripting language. Do you want to optimize for something specific?
That said, here's a bash implementation:
declare blocksize=5
declare i=1
declare blockstart=1
declare blockend=$blocksize
declare -a fileset
while [ -f data${i}.txt ] ; do
fileset=("${fileset[@]}" $data${i}.txt)
i=$(($i + 1))
if [ $i -gt $blockend ] ; then
cat "${fileset[@]}" > data_new_${blockstart}.txt
fileset=() # clear
blockstart=$(($blockstart + $blocksize))
blockend=$(($blockend+ $blocksize))
fi
done
EDIT: I see you now say "Best" == "Simplest code", but what's simple depends on you. For me Perl is simpler than Python, for some Awk is simpler than bash. It depends on what you know best.
EDIT again: inspired by dtmilano, I've changed mine to use cat once per blocksize, so now cat will be called 'only' 4000 times.
A:
I like this one which saves on executing processes, only 1 cat per block
#! /bin/bash
N=5 # block size
S=1 # start
E=20000 # end
for n in $(seq $S $N $E)
do
CMD="cat "
i=$n
while [ $i -lt $((n + N)) ]
do
CMD+="data$((i++)).txt "
done
$CMD > data_new_$((n / N + 1)).txt
done
A:
how about a one liner ? :)
ls data[0-9]*txt|sort -nk1.5|awk 'BEGIN{rn=5;i=1}{while((getline _<$0)>0){print _ >"data_new_"i".txt"}close($0)}NR%rn==0{i++}'
A:
Since this can easily be done in any shell I would simply use that.
This should do it:
#!/bin/sh
FILES=$1
FILENO=1
for i in data[0-9]*.txt; do
FILES=`expr $FILES - 1`
if [ $FILES -eq 0 ]; then
FILENO=`expr $FILENO + 1`
FILES=$1
fi
cat $i >> "data_new_${FILENO}.txt"
done
Python version:
#!/usr/bin/env python
import os
import sys
if __name__ == '__main__':
files_per_file = int(sys.argv[1])
i = 0
while True:
i += 1
source_file = 'data%d.txt' % i
if os.path.isfile(source_file):
dest_file = 'data_new_%d.txt' % ((i / files_per_file) + 1)
file(dest_file, 'a').write(file(source_file).read())  # 'a' (append), not 'wa'
else:
break
A:
Let's say you have a simple script that concatenates files and keeps a counter for you, like the following:
#!/usr/bin/bash
COUNT=0
if [ -f counter ]; then
COUNT=`cat counter`
fi
COUNT=$[$COUNT+1]
echo $COUNT > counter
cat $@ > $COUNT.data
Then a command line will do:
find -name "*" -type f -print0 | xargs -0 -n 5 path_to_the_script
A:
Simple enough?
make_cat.py
limit = 1000
n = 5
for i in xrange( 0, (limit+n-1)//n ):
names = [ "data{0}.txt".format(j) for j in range(i*n,i*n+n) ]
print "cat {0} >data_new_{1}.txt".format( " ".join(names), i )
Script
python make_cat.py | sh
|
What is the best, python or bash for selectively concatenating lots of files?
|
I have around 20000 files coming from the output of some program, and their names follow the format:
data1.txt
data2.txt
...
data99.txt
data100.txt
...
data999.txt
data1000.txt
...
data20000.txt
I would like to write a script that gets as input argument the number N. Then it makes blocks of N concatenated files, so if N=5, it would make the following new files:
data_new_1.txt: it would contain (concatenated) data1.txt to data5.txt (like cat data1.txt data2.txt ...> data_new_1.txt )
data_new_2.txt: it would contain (concatenated) data6.txt to data10.txt
.....
I wonder what you think would be the best approach to do this, whether bash, python or another one like awk, perl, etc.
The best approach I mean in terms of simplest code.
Thanks
|
[
"Here's a Python (2.6) version (if you have Python 2.5, add a first line that says\nfrom __future__ import with_statement\n\nand the script will also work)...:\nimport sys\n\ndef main(N):\n rN = range(N)\n for iout, iin in enumerate(xrange(1, 99999, N)):\n with open('data_new_%s.txt' % (iout+1), 'w') as out:\n for di in rN:\n try: fin = open('data%s.txt' % (iin + di), 'r')\n except IOError: return\n out.write(fin.read())\n fin.close()\n\nif __name__ == '__main__':\n if len(sys.argv) > 1:\n N = int(sys.argv[1])\n else:\n N = 5\n main(N)\n\nAs you see from other answers & comments, opinions on performance differ -- some believe that the Python startup (and imports of modules) will make this slower than bash (but the import part at least is bogus: sys, the only needed module, is a built-in module, requires no \"loading\" and therefore basically negligible overhead to import it); I suspect avoiding the repeated fork/exec of cat may slow bash down; others think that I/O will dominate anyway, making the two solutions equivalent. You'll have to benchmark with your own files, on your own system, to solve this performance doubt.\n",
"Best in what sense? Bash can do this quite well, but it may be harder for you to write a good bash script if you are more familiar with another scripting language. Do you want to optimize for something specific?\nThat said, here's a bash implementation:\n declare blocksize=5\n declare i=1\n declare blockstart=1\n declare blockend=$blocksize\n declare -a fileset \n while [ -f data${i}.txt ] ; do\n fileset=(\"${fileset[@]}\" $data${i}.txt)\n i=$(($i + 1))\n if [ $i -gt $blockend ] ; then\n cat \"${fileset[@]}\" > data_new_${blockstart}.txt\n fileset=() # clear\n blockstart=$(($blockstart + $blocksize))\n blockend=$(($blockend+ $blocksize))\n fi\n done\n\nEDIT: I see you now say \"Best\" == \"Simplest code\", but what's simple depends on you. For me Perl is simpler than Python, for some Awk is simpler than bash. It depends on what you know best.\nEDIT again: inspired by dtmilano, I've changed mine to use cat once per blocksize, so now cat will be called 'only' 4000 times.\n",
"I like this one which saves on executing processes, only 1 cat per block\n#! /bin/bash\n\nN=5 # block size\nS=1 # start\nE=20000 # end\n\nfor n in $(seq $S $N $E)\ndo\n CMD=\"cat \"\n i=$n\n while [ $i -lt $((n + N)) ]\n do\n CMD+=\"data$((i++)).txt \"\n done\n $CMD > data_new_$((n / N + 1)).txt\ndone\n\n",
"how about a one liner ? :)\nls data[0-9]*txt|sort -nk1.5|awk 'BEGIN{rn=5;i=1}{while((getline _<$0)>0){print _ >\"data_new_\"i\".txt\"}close($0)}NR%rn==0{i++}'\n\n",
"Since this can easily be done in any shell I would simply use that.\nThis should do it:\n#!/bin/sh\nFILES=$1\nFILENO=1\n\nfor i in data[0-9]*.txt; do\n FILES=`expr $FILES - 1`\n if [ $FILES -eq 0 ]; then\n FILENO=`expr $FILENO + 1`\n FILES=$1\n fi\n\n cat $i >> \"data_new_${FILENO}.txt\"\ndone\n\nPython version:\n#!/usr/bin/env python\n\nimport os\nimport sys\n\nif __name__ == '__main__':\n files_per_file = int(sys.argv[1])\n\n i = 0\n while True:\n i += 1\n source_file = 'data%d.txt' % i\n if os.path.isfile(source_file):\n dest_file = 'data_new_%d.txt' % ((i / files_per_file) + 1)\n file(dest_file, 'wa').write(file(source_file).read())\n else:\n break\n\n",
"Let's say, if you have a simple script that concatenates files and keeps a counter for you, like the following:\n#!/usr/bin/bash\nCOUNT=0\nif [ -f counter ]; then\n COUNT=`cat counter`\nfi\nCOUNT=$[$COUNT+1]\necho $COUNT > counter\ncat $@ > $COUNT.data\n\nThe a command line will do:\nfind -name \"*\" -type f -print0 | xargs -0 -n 5 path_to_the_script\n\n",
"Simple enough?\nmake_cat.py\nlimit = 1000\nn = 5\nfor i in xrange( 0, (limit+n-1)//n ):\n names = [ \"data{0}.txt\".format(j) for j in range(i*n,i*n+n) ]\n print \"cat {0} >data_new_{1}.txt\".format( \" \".join(names), i )\n\nScript\npython make_cat.py | sh\n\n"
] |
[
4,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"bash",
"python"
] |
stackoverflow_0002434689_bash_python.txt
|
Q:
Special Character Meanings Defined
In Python's module named string, there is a line that says whitespace = ' \t\n\r\v\f'.
' ' is a space character.
'\t' is a tab character.
'\n' is a newline character.
'\r' is a carriage-return character.
'\v' maps to '\x0b' (11). What does it mean and how might it be typed on a keyboard (any OS)?
'\f' maps to '\x0c' (12). What does it mean and how might it be typed on a keyboard (any OS)?
A:
\v is a vertical tab
\f is a formfeed
See: Escape Sequences
A:
\v is a vertical tab. It was used in line printers to advance about 6 lines or so. It can be typed in *nix by pressing Ctrl-V Ctrl-K.
\f is a formfeed. It was used in line printers to advance to the next page. It can be typed in *nix by pressing Ctrl-V Ctrl-L.
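If in doubt, the mapping is easy to confirm from Python itself (this quick check works the same on Python 2 and 3):

```python
import string

# Quick check of the code points behind the escapes discussed above
print("%r -> %d" % ('\v', ord('\v')))   # vertical tab, code 11 ('\x0b')
print("%r -> %d" % ('\f', ord('\f')))   # formfeed, code 12 ('\x0c')

# Both are indeed members of string.whitespace
print('\v' in string.whitespace and '\f' in string.whitespace)  # True
```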
A:
Per wikipedia:
12 (form feed, \f, ^L), to cause a
printer to eject paper to the top of
the next page, or a video terminal to
clear the screen.
^L means Control-L on most keyboards and OSes.
\v, code 11 (typeable as ^K) is essentially obsolete, while ^L is still occasionally used (e.g. in vi to "refresh/repaint the screen" rather than just "clearing" it as in the original meaning).
|
Special Character Meanings Defined
|
In Python's module named string, there is a line that says whitespace = ' \t\n\r\v\f'.
' ' is a space character.
'\t' is a tab character.
'\n' is a newline character.
'\r' is a carriage-return character.
'\v' maps to '\x0b' (11). What does it mean and how might it be typed on a keyboard (any OS)?
'\f' maps to '\x0c' (12). What does it mean and how might it be typed on a keyboard (any OS)?
|
[
"\\v is a vertical tab\n\\f is a formfeed\nSee: Escape Sequences\n",
"\\v is a vertical tab. It was used in line printers to advance about 6 lines or so. It can be typed in *nix by pressing Ctrl-V Ctrl-K.\n\\f is a formfeed. It was used in line printers to advance to the next page. It can be typed in *nix by pressing Ctrl-V Ctrl-L.\n",
"Per wikipedia:\n\n12 (form feed, \\f, ^L), to cause a\n printer to eject paper to the top of\n the next page, or a video terminal to\n clear the screen.\n\n^L means Control-L on most keyboards and OSes.\n\\v, code 11 (typeable as ^K) is essentially obsolete, while ^L is still occasionally used (e.g in vi to \"refresh/repaint the screen\" rather than just \"clearing\" it as in the original meaning).\n"
] |
[
2,
2,
2
] |
[] |
[] |
[
"character_codes",
"python"
] |
stackoverflow_0002437196_character_codes_python.txt
|
Q:
How do you position a wx.MessageDialog (wxPython)?
Is there any reason why the position, pos, flag doesn't seem to work in the following example?
dlg = wx.MessageDialog(
parent=self,
message='You must enter a URL',
caption='Error',
style=wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP,
pos=(200,200)
)
dlg.ShowModal()
dlg.Destroy()
The documentation is here: http://www.wxpython.org/docs/api/wx.MessageDialog-class.html
'self' is a reference to the frame. I'm running in Windows Vista, python26, wxpython28. The message dialog always appears to be in the middle of the screen.
If for some reason it's not possible to position the dialog, is there anyway to at least restrict the dialog to be in the frame, rather than just the center of the screen?
A:
It seems to be a bug, and I think you should file one. For the time being you can use your own derived dialog class to center it as you wish. Also, instead of wx.MessageDialog you can use wx.MessageBox; it will save you a few lines.
|
How do you position a wx.MessageDialog (wxPython)?
|
Is there any reason why the position, pos, flag doesn't seem to work in the following example?
dlg = wx.MessageDialog(
parent=self,
message='You must enter a URL',
caption='Error',
style=wx.OK | wx.ICON_ERROR | wx.STAY_ON_TOP,
pos=(200,200)
)
dlg.ShowModal()
dlg.Destroy()
The documentation is here: http://www.wxpython.org/docs/api/wx.MessageDialog-class.html
'self' is a reference to the frame. I'm running in Windows Vista, python26, wxpython28. The message dialog always appears to be in the middle of the screen.
If for some reason it's not possible to position the dialog, is there anyway to at least restrict the dialog to be in the frame, rather than just the center of the screen?
|
[
"It seems to be a bug and i think you should file the same. for time being you can user your own dervied dialog class to center it as you wish. Also instead of wx.MessageDialog you can use wx.MessageBox, it will save you few lines.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"windows",
"wxpython"
] |
stackoverflow_0002419619_python_windows_wxpython.txt
|
Q:
Python | How to send a JSON response with a name assigned to it
How can I return a response (let's say an array) to the client with a name assigned to it from a Python script.
echo '{"jsonValidateReturn":'.json_encode($arrayToJs).'}';
In this scenario it returns an array with the name (jsonValidateReturn) assigned to it; it can then be accessed as jsonValidateReturn[1]. I want to do the same using a Python script.
I tried it once but it didn't go well
array_to_js = [vld_id, vld_error, False]
array_to_js[2] = False
jsonValidateReturn = simplejson.dumps(array_to_js)
return HttpResponse(jsonValidateReturn, mimetype='application/json')
Thanks.
A:
Try this for the last two lines:
jsonValidateReturn = simplejson.dumps({'jsonValidateReturn': array_to_js})
return HttpResponse(jsonValidateReturn, mimetype='application/json')
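For illustration, the wire format this produces matches the PHP echo in the question. The standalone sketch below uses the stdlib json module instead of simplejson (they share the dumps/loads API), and the values are made-up placeholders:

```python
import json

vld_id, vld_error = 1, "invalid URL"   # hypothetical placeholder values
array_to_js = [vld_id, vld_error, False]

payload = json.dumps({'jsonValidateReturn': array_to_js})
print(payload)  # {"jsonValidateReturn": [1, "invalid URL", false]}

# On the client side, jsonValidateReturn[1] then yields the error message:
print(json.loads(payload)['jsonValidateReturn'][1])  # invalid URL
```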
|
Python | How to send a JSON response with a name assigned to it
|
How can I return a response (let's say an array) to the client with a name assigned to it from a Python script.
echo '{"jsonValidateReturn":'.json_encode($arrayToJs).'}';
In this scenario it returns an array with the name (jsonValidateReturn) assigned to it; it can then be accessed as jsonValidateReturn[1]. I want to do the same using a Python script.
I tried it once but it didn't go well
array_to_js = [vld_id, vld_error, False]
array_to_js[2] = False
jsonValidateReturn = simplejson.dumps(array_to_js)
return HttpResponse(jsonValidateReturn, mimetype='application/json')
Thanks.
|
[
"Try this for the last two lines:\njsonValidateReturn = simplejson.dumps({'jsonValidateReturn': array_to_js})\nreturn HttpResponse(jsonValidateReturn, mimetype='application/json') \n\n"
] |
[
1
] |
[] |
[] |
[
"json",
"python"
] |
stackoverflow_0002437473_json_python.txt
|
Q:
Use Google AppEngine datastore outside of AppEngine project
For my little framework Pyxer I would like to be able to use the Google AppEngine datastores outside of AppEngine projects as well, because I'm now used to this ORM pattern and for little quick hacks this is nice. I can not use Google AppEngine for all of my projects because of its limitations in file size and number of files.
A great alternative would be a project that provides an ORM with the same naming as the AppEngine datastore. I also like the GQL approach very much, since this is a nice combination of ORM and SQL patterns.
Any ideas where or how I might find such a solution? Thanks.
A:
Nick Johnson, from the app engine team himself, has a blog posting listing some of the alternatives, including his BDBdatastore.
However, that assumes you want to use exactly the same ORM that you use now in app engine. There are tons of ORM options in general out there, though I am not familiar with the state of the art in Python. This question does seem to address the issue though.
A:
You might also want to look at AppScale, which is "a platform that allows users to deploy and host their own Google App Engine applications".
It's probably overkill for your purposes, but definitely something to look into.
A:
There is also the Remote API which the bulkloader tool uses to upload or download data into/from the Datastore.
Maybe it could be used so that applications which are not hosted on AppEngine can still use the Datastore there.
|
Use Google AppEngine datastore outside of AppEngine project
|
For my little framework Pyxer I would like to be able to use the Google AppEngine datastores outside of AppEngine projects as well, because I'm now used to this ORM pattern and for little quick hacks this is nice. I can not use Google AppEngine for all of my projects because of its limitations in file size and number of files.
A great alternative would be a project that provides an ORM with the same naming as the AppEngine datastore. I also like the GQL approach very much, since this is a nice combination of ORM and SQL patterns.
Any ideas where or how I might find such a solution? Thanks.
|
[
"Nick Johnson, from the app engine team himself, has a blog posting listing some of the alternatives, including his BDBdatastore.\nHowever, that assumes you want to use exactly the same ORM that you use now in app engine. There are tons of ORM options in general out there, though I am not familiar with the state of the art in Python. This question does seem to address the issue though.\n",
"You might also want to look at AppScale, which is \"a platform that allows users to deploy and host their own Google App Engine applications\". \nIt's probably overkill for your purposes, but definitely something to look into.\n",
"There is also the Remote API which the bulkloader tool uses to upload or download data into/from the Datastore.\nMaybe it could be used to have applications which are not hosted on AppEngine to still use the Datastore there.\n"
] |
[
5,
4,
0
] |
[] |
[] |
[
"google_app_engine",
"orm",
"python",
"sql"
] |
stackoverflow_0001149639_google_app_engine_orm_python_sql.txt
|
Q:
Python optimization
f = open('wl4.txt', 'w')
hh = 0
######################################
for n in range(1,5):
for l in range(33,127):
if n==1:
b = chr(l) + '\n'
f.write(b)
hh += 1
elif n==2:
for s0 in range(33, 127):
b = chr(l) + chr(s0) + '\n'
f.write(b)
hh += 1
elif n==3:
for s0 in range(33, 127):
for s1 in range(33, 127):
b = chr(l) + chr(s0) + chr(s1) + '\n'
f.write(b)
hh += 1
elif n==4:
for s0 in range(33, 127):
for s1 in range(33, 127):
for s2 in range(33,127):
b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n'
f.write(b)
hh += 1
######################################
print "We Made %d Words." %(hh)
######################################
f.close()
So, is there any method to make it faster?
A:
Further significant improvements are possible.
The following script file demonstrates these, using (for brevity) only the size 4 loop (which takes up well over 90% of the time).
method 0: the OP's original code
method 1: John Kugleman's solution
method 2: (1) and move some string concatenation out of inner loops
method 3: (2) and put the code inside a function -- accessing local variables is MUCH faster than global variables. Any script can do this. Many scripts should do this.
method 4: (3) and accumulate strings in a list then join them and write them. Note that this uses memory like you may not believe. My code doesn't attempt to do it for the whole file, because (127 - 33) ** 4 is 78M strings. On a 32-bit box, that's 78 * 4 = 312Mb for the list alone (ignoring unused memory at the end of the list), plus 78 * 28 = 2184 Mb for the str objects (sys.getsizeof("1234") produces 28), plus 78 * 5 = 390 Mb for the join result. You just blew your user address space or your ulimit or something else blowable. Or if you have 1 Gb of real memory of which 128Mb has been snarfed by the video driver, but enough swap space, you have time for lunch (if running a particular OS, dinner as well).
method 5: (4) and don't ask the list for the whereabouts of its append attribute 78 million times :-)
Here is the script file:
import time, sys
time_function = time.clock # Windows; time.time may be better on *x
ubound, which = map(int, sys.argv[1:3])
t0 = time_function()
if which == 0:
### original ###
f = open('wl4.txt', 'w')
hh = 0
n = 4
for l in range(33, ubound):
if n == 1:
pass
elif n == 2:
pass
elif n == 3:
pass
elif n == 4:
for s0 in range(33, ubound):
for s1 in range(33, ubound):
for s2 in range(33,ubound):
b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n'
f.write(b)
hh += 1
f.close()
elif which == 1:
### John Kugleman ###
f = open('wl4.txt', 'w')
chars = [chr(c) for c in range(33, ubound)]
hh = 0
for l in chars:
for s0 in chars:
for s1 in chars:
for s2 in chars:
b = l + s0 + s1 + s2 + '\n'
f.write(b)
hh += 1
f.close()
elif which == 2:
### JohnK, saving + ###
f = open('wl4.txt', 'w')
chars = [chr(c) for c in range(33, ubound)]
hh = 0
for L in chars: # "L" as in "Legible" ;-)
for s0 in chars:
b0 = L + s0
for s1 in chars:
b1 = b0 + s1
for s2 in chars:
b = b1 + s2 + '\n'
f.write(b)
hh += 1
f.close()
elif which == 3:
### JohnK, saving +, function ###
def which3func():
f = open('wl4.txt', 'w')
chars = [chr(c) for c in range(33, ubound)]
nwords = 0
for L in chars:
for s0 in chars:
b0 = L + s0
for s1 in chars:
b1 = b0 + s1
for s2 in chars:
b = b1 + s2 + '\n'
f.write(b)
nwords += 1
f.close()
return nwords
hh = which3func()
elif which == 4:
### JohnK, saving +, function, linesep.join() ###
def which4func():
f = open('wl4.txt', 'w')
chars = [chr(c) for c in range(33, ubound)]
nwords = 0
for L in chars:
accum = []
for s0 in chars:
b0 = L + s0
for s1 in chars:
b1 = b0 + s1
for s2 in chars:
accum.append(b1 + s2)
nwords += len(accum)
accum.append("") # so that we get a final newline
f.write('\n'.join(accum))
f.close()
return nwords
hh = which4func()
elif which == 5:
### JohnK, saving +, function, linesep.join(), avoid method lookup in loop ###
def which5func():
f = open('wl4.txt', 'w')
chars = [chr(c) for c in range(33, ubound)]
nwords = 0
for L in chars:
accum = []; accum_append = accum.append
for s0 in chars:
b0 = L + s0
for s1 in chars:
b1 = b0 + s1
for s2 in chars:
accum_append(b1 + s2)
nwords += len(accum)
accum_append("") # so that we get a final newline
f.write('\n'.join(accum))
f.close()
return nwords
hh = which5func()
else:
print "Bzzzzzzt!!!"
t1 = time_function()
print "Method %d made %d words in %.1f seconds" % (which, hh, t1 - t0)
Here are some results:
C:\junk\so>for %w in (0 1 2 3 4 5) do \python26\python wl4.py 127 %w
C:\junk\so>\python26\python wl4.py 127 0
Method 0 made 78074896 words in 352.3 seconds
C:\junk\so>\python26\python wl4.py 127 1
Method 1 made 78074896 words in 183.9 seconds
C:\junk\so>\python26\python wl4.py 127 2
Method 2 made 78074896 words in 157.9 seconds
C:\junk\so>\python26\python wl4.py 127 3
Method 3 made 78074896 words in 126.0 seconds
C:\junk\so>\python26\python wl4.py 127 4
Method 4 made 78074896 words in 68.3 seconds
C:\junk\so>\python26\python wl4.py 127 5
Method 5 made 78074896 words in 60.5 seconds
Update in response to OP's questions
""" When I try to add for loops, i got a memory error for accum_append.. what is the problem ??"""
I don't know what the problem is; I can't read your code at this distance. Guess: If you are trying to do length == 5, you have probably got the accum initialisation and writing bits in the wrong place, and accum is trying to grow beyond the capacity of your system's memory (as I hoped I'd explained earlier).
"""Now Method 5 is the fastest one, but its make a word tell length 4.. how could i make how much i want ?? :)"""
You have two choices: (1) you continue to use nested for loops (2) you look at the answers that don't use nested for loops, with the length specified dynamically.
Methods 4 and 5 got speedups by using accum but the manner of doing that was tailored to the exact knowledge of how much memory would be used.
Below are 3 more methods. 101 is tgray's method with no extra memory use. 201 is Paul Hankin's method (plus some write-to-file code) similarly with no extra memory use. These two methods are of about the same speed and are within sight of method 3 speedwise. They both allow dynamic specification of the desired length.
Method 102 is tgray's method with a fixed 1Mb buffer -- it attempts to save time by reducing the number of calls to f.write() ... you may wish to experiment with the buffer size. You could create an orthogonal 202 method if you wished. Note that tgray's method uses itertools.product for which you'll need Python 2.6, whereas Paul Hankin's method uses generator expressions which have been around for a while.
elif which == 101:
### tgray, memory-lite version
def which101func():
f = open('wl4.txt', 'w')
f_write = f.write
nwords = 0
chars = map(chr, xrange(33, ubound)) # create a list of characters
length = 4 #### length is a variable
for x in product(chars, repeat=length):
f_write(''.join(x) + '\n')
nwords += 1
f.close()
return nwords
hh = which101func()
elif which == 102:
### tgray, memory-lite version, buffered
def which102func():
f = open('wl4.txt', 'w')
f_write = f.write
nwords = 0
chars = map(chr, xrange(33, ubound)) # create a list of characters
length = 4 #### length is a variable
buffer_size_bytes = 1024 * 1024
buffer_size_words = buffer_size_bytes // (length + 1)
words_in_buffer = 0
buffer = []; buffer_append = buffer.append
for x in product(chars, repeat=length):
words_in_buffer += 1
buffer_append(''.join(x) + '\n')
if words_in_buffer >= buffer_size_words:
f_write(''.join(buffer))
nwords += words_in_buffer
words_in_buffer = 0
del buffer[:]
if buffer:
f_write(''.join(buffer))
nwords += words_in_buffer
f.close()
return nwords
hh = which102func()
elif which == 201:
### Paul Hankin (needed output-to-file code added)
def AllWords(n, CHARS=[chr(i) for i in xrange(33, ubound)]):
#### n is the required word length
if n == 1: return CHARS
return (w + c for w in AllWords(n - 1) for c in CHARS)
def which201func():
f = open('wl4.txt', 'w')
f_write = f.write
nwords = 0
for w in AllWords(4):
f_write(w + '\n')
nwords += 1
f.close()
return nwords
hh = which201func()
A:
You can create the range(33, 127) once and save it off. Not having to create it repeatedly cuts the runtime in half on my machine.
chars = [chr(c) for c in range(33, 127)]
...
for s0 in chars:
for s1 in chars:
for s2 in chars:
b = l + s0 + s1 + s2 + '\n'
f.write(b)
hh += 1
A:
The outer loop seems pretty pointless. Why not simply:
for l in range(33,127)
.. your code for the n==1 case
for l in range(33,127)
.. your code for the n==2 case
for l in range(33,127)
.. your code for the n==3 case
for l in range(33,127)
.. your code for the n==4 case
That will be both faster and easier read.
A:
When doing operations that involve iterating a lot, a good place to start is the itertools package.
In this case it looks like you want the product function. Which gives you the:
cartesian product, equivalent to a
nested for-loop
So to get a list of the "words" you are creating:
from itertools import product
chars = map(chr, xrange(33,127)) # create a list of characters
words = [] # this will be the list of words
for length in xrange(1, 5): # length is the length of the words created
words.extend([''.join(x) for x in product(chars, repeat=length)])
# instead of keeping a separate counter, hh, we can use the len function
print "We Made %d Words." % (len(words))
f = open('wl4.txt', 'w')
f.write('\n'.join(words)) # write one word per line
f.close()
As a result we get the same output that your script gives us. And since itertools is implemented in C, it is also faster.
Edit:
Per John Machin's very astute comment about the memory usage, here's updated code that doesn't give me a memory error when I run it on the whole range(33, 127).
from itertools import product
chars = map(chr, xrange(33,127)) # create a list of characters
f_words = open('wl4.txt', 'w')
num_words = 0 # a counter (was hh in OPs code)
for length in xrange(1, 5): # length is the length of the words created
for char_tup in product(chars, repeat=length):
f_words.write(''.join(char_tup) + '\n')
num_words += 1
f_words.close()
print "We Made %d Words." % (num_words)
This runs in about 4 minutes (240 seconds) on my machine.
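As a quick illustrative check of what product gives you (using a tiny alphabet so it runs instantly): product(chars, repeat=n) yields exactly len(chars)**n tuples, i.e. the same words the nested loops produce, in the same order:

```python
from itertools import product

chars = ['a', 'b', 'c']   # tiny stand-in for the 94 printable characters
for length in range(1, 4):
    words = [''.join(t) for t in product(chars, repeat=length)]
    assert len(words) == len(chars) ** length

# Same lexicographic order as the hand-written nested loops:
assert [''.join(t) for t in product(chars, repeat=2)][:4] == ['aa', 'ab', 'ac', 'ba']
```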
A:
How about this? It works with arbitrary word lengths (password generator?):
f = open('wl4.txt', 'w')
hh=0
chars = map(chr,xrange(33, 127))
def func(n, result):
    global hh  # hh is assigned here, so it must be declared global
    if (n == 0):
        f.write(result + "\n")
        hh += 1
    else:
        for c in chars:
            func(n-1, result+c)
for n in range(1, 5):
func(n,"")
######################################
print "We Made %d Words." %(hh)
######################################
f.close()
A:
Do you need all of the words sorted by their length? If you can mingle the lengths together, you can improve slightly on John Kugelman's answer like this:
f = open("wl4.txt", "w")
chars = [chr(c) for c in range(33, 127)]
c = len(chars)
count = c + c*c + c**3 + c**4
for c0 in chars:
print >>f, c0
for c1 in chars:
s1 = c0 + c1
print >>f, s1
for c2 in chars:
s2 = s1 + c2
print >>f, s2
for c3 in chars:
print >>f, s2 + c3
print "We Made %d Words." % count
Directly calculating hh instead of all of the incrementing is also a big win (about 15% on this laptop). There's also an improvement from using print over f.write, though I have no idea why that's the case. This version runs in about 39 seconds for me.
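The closed-form count used here can be sanity-checked against an actual enumeration; this small illustrative snippet uses a 3-character alphabet so it runs instantly:

```python
from itertools import product

chars = ['x', 'y', 'z']
c = len(chars)
closed_form = c + c * c + c ** 3 + c ** 4            # same formula as above
enumerated = sum(1 for n in range(1, 5)
                 for _ in product(chars, repeat=n))  # actually count them
assert closed_form == enumerated == 3 + 9 + 27 + 81  # 120
```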
A:
Here's a short recursive solution.
def AllWords(n, CHARS=[chr(i) for i in xrange(33, 127)]):
if n == 1: return CHARS
return (w + c for w in AllWords(n - 1) for c in CHARS)
for i in xrange(1, 5):
for w in AllWords(i):
print w
PS: is it an error that character 127 is excluded?
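A small illustrative check of the recursion (tiny alphabet, not the original character set): for n > 1 the function returns a lazy generator expression, so no intermediate word list is materialised:

```python
def all_words(n, chars=('a', 'b')):
    # Recursive structure mirrors the answer above: length-n words are
    # length-(n-1) words with one more character appended.
    if n == 1:
        return list(chars)
    return (w + c for w in all_words(n - 1, chars) for c in chars)

assert all_words(1) == ['a', 'b']
assert list(all_words(2)) == ['aa', 'ab', 'ba', 'bb']
assert len(list(all_words(3))) == 2 ** 3
```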
|
Python optimization
|
f = open('wl4.txt', 'w')
hh = 0
######################################
for n in range(1,5):
for l in range(33,127):
if n==1:
b = chr(l) + '\n'
f.write(b)
hh += 1
elif n==2:
for s0 in range(33, 127):
b = chr(l) + chr(s0) + '\n'
f.write(b)
hh += 1
elif n==3:
for s0 in range(33, 127):
for s1 in range(33, 127):
b = chr(l) + chr(s0) + chr(s1) + '\n'
f.write(b)
hh += 1
elif n==4:
for s0 in range(33, 127):
for s1 in range(33, 127):
for s2 in range(33,127):
b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n'
f.write(b)
hh += 1
######################################
print "We Made %d Words." %(hh)
######################################
f.close()
So, is there any method to make it faster?
|
[
"Further significant improvements are possible.\nThe following script file demonstrates these, using (for brevity) only the size 4 loop (which takes up well over 90% of the time).\nmethod 0: the OP's original code\nmethod 1: John Kugleman's solution\nmethod 2: (1) and move some string concatenation out of inner loops\nmethod 3: (2) and put the code inside a function -- accessing local variables is MUCH faster than global variables. Any script can do this. Many scripts should do this.\nmethod 4: (3) and accumulate strings in a list then join them and write them. Note that this uses memory like you may not believe. My code doesn't attempt to do it for the whole file, because (127 - 33) ** 4 is 78M strings. On a 32-bit box, that's 78 * 4 = 312Mb for the list alone (ignoring unused memory at the end of the list), plus 78 * 28 = 2184 Mb for the str objects (sys.getsizeof(\"1234\") produces 28), plus 78 * 5 = 390 Mb for the join result. You just blew your user address space or your ulimit or something else blowable. 
Or if you have 1 Gb of real memory of which 128Mb has been snarfed by the video driver, but enough swap space, you have time for lunch (if running a particular OS, dinner as well).\nmethod 5: (4) and don't ask the list for the whereabouts of its append attribute 78 million times :-)\nHere is the script file:\nimport time, sys\ntime_function = time.clock # Windows; time.time may be better on *x\nubound, which = map(int, sys.argv[1:3])\nt0 = time_function()\nif which == 0:\n ### original ###\n f = open('wl4.txt', 'w')\n hh = 0\n n = 4\n for l in range(33, ubound):\n if n == 1:\n pass\n elif n == 2:\n pass\n elif n == 3:\n pass\n elif n == 4:\n for s0 in range(33, ubound):\n for s1 in range(33, ubound):\n for s2 in range(33,ubound):\n b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\\n'\n f.write(b)\n hh += 1\n f.close()\nelif which == 1:\n ### John Kugleman ###\n f = open('wl4.txt', 'w')\n chars = [chr(c) for c in range(33, ubound)]\n hh = 0\n for l in chars:\n for s0 in chars:\n for s1 in chars:\n for s2 in chars:\n b = l + s0 + s1 + s2 + '\\n'\n f.write(b)\n hh += 1\n f.close()\nelif which == 2:\n ### JohnK, saving + ###\n f = open('wl4.txt', 'w')\n chars = [chr(c) for c in range(33, ubound)]\n hh = 0\n for L in chars: # \"L\" as in \"Legible\" ;-)\n for s0 in chars:\n b0 = L + s0\n for s1 in chars:\n b1 = b0 + s1\n for s2 in chars:\n b = b1 + s2 + '\\n'\n f.write(b)\n hh += 1\n f.close()\nelif which == 3:\n ### JohnK, saving +, function ###\n def which3func():\n f = open('wl4.txt', 'w')\n chars = [chr(c) for c in range(33, ubound)]\n nwords = 0\n for L in chars:\n for s0 in chars:\n b0 = L + s0\n for s1 in chars:\n b1 = b0 + s1\n for s2 in chars:\n b = b1 + s2 + '\\n'\n f.write(b)\n nwords += 1\n f.close()\n return nwords\n hh = which3func()\nelif which == 4:\n ### JohnK, saving +, function, linesep.join() ###\n def which4func():\n f = open('wl4.txt', 'w')\n chars = [chr(c) for c in range(33, ubound)]\n nwords = 0\n for L in chars:\n accum = []\n for s0 in chars:\n b0 
= L + s0\n for s1 in chars:\n b1 = b0 + s1\n for s2 in chars:\n accum.append(b1 + s2)\n nwords += len(accum)\n accum.append(\"\") # so that we get a final newline\n f.write('\\n'.join(accum))\n f.close()\n return nwords\n hh = which4func()\nelif which == 5:\n ### JohnK, saving +, function, linesep.join(), avoid method lookup in loop ###\n def which5func():\n f = open('wl4.txt', 'w')\n chars = [chr(c) for c in range(33, ubound)]\n nwords = 0\n for L in chars:\n accum = []; accum_append = accum.append\n for s0 in chars:\n b0 = L + s0\n for s1 in chars:\n b1 = b0 + s1\n for s2 in chars:\n accum_append(b1 + s2)\n nwords += len(accum)\n accum_append(\"\") # so that we get a final newline\n f.write('\\n'.join(accum))\n f.close()\n return nwords\n hh = which5func()\nelse:\n print \"Bzzzzzzt!!!\"\nt1 = time_function()\nprint \"Method %d made %d words in %.1f seconds\" % (which, hh, t1 - t0)\n\nHere are some results:\nC:\\junk\\so>for %w in (0 1 2 3 4 5) do \\python26\\python wl4.py 127 %w\n\nC:\\junk\\so>\\python26\\python wl4.py 127 0\nMethod 0 made 78074896 words in 352.3 seconds\n\nC:\\junk\\so>\\python26\\python wl4.py 127 1\nMethod 1 made 78074896 words in 183.9 seconds\n\nC:\\junk\\so>\\python26\\python wl4.py 127 2\nMethod 2 made 78074896 words in 157.9 seconds\n\nC:\\junk\\so>\\python26\\python wl4.py 127 3\nMethod 3 made 78074896 words in 126.0 seconds\n\nC:\\junk\\so>\\python26\\python wl4.py 127 4\nMethod 4 made 78074896 words in 68.3 seconds\n\nC:\\junk\\so>\\python26\\python wl4.py 127 5\nMethod 5 made 78074896 words in 60.5 seconds\n\nUpdate in response to OP's questions\n\"\"\" When I try to add for loops, i got a memory error for accum_append.. what is the problem ??\"\"\"\nI don't know what the problem is; I can't read your code at this distance. 
Guess: If you are trying to do length == 5, you have probably got the accum initialisation and writing bits in the wrong place, and accum is trying to grow beyond the capacity of your system's memory (as I hoped I'd explained earlier).\n\"\"\"Now Method 5 is the fastest one, but its make a word tell length 4.. how could i make how much i want ?? :)\"\"\"\nYou have two choices: (1) you continue to use nested for loops (2) you look at the answers that don't use nested for loops, with the length specified dynamically.\nMethods 4 and 5 got speedups by using accum but the manner of doing that was tailored to the exact knowledge of how much memory would be used.\nBelow are 3 more methods. 101 is tgray's method with no extra memory use. 201 is Paul Hankin's method (plus some write-to-file code) similarly with no extra memory use. These two methods are of about the same speed and are within sight of method 3 speedwise. They both allow dynamic specification of the desired length.\nMethod 102 is tgray's method with a fixed 1Mb buffer -- it attempts to save time by reducing the number of calls to f.write() ... you may wish to experiment with the buffer size. You could create an orthogonal 202 method if you wished. 
Note that tgray's method uses itertools.product for which you'll need Python 2.6, whereas Paul Hankin's method uses generator expressions which have been around for a while.\nelif which == 101:\n ### tgray, memory-lite version\n def which101func():\n f = open('wl4.txt', 'w')\n f_write = f.write\n nwords = 0\n chars = map(chr, xrange(33, ubound)) # create a list of characters\n length = 4 #### length is a variable\n for x in product(chars, repeat=length):\n f_write(''.join(x) + '\\n')\n nwords += 1\n f.close()\n return nwords\n hh = which101func()\nelif which == 102:\n ### tgray, memory-lite version, buffered\n def which102func():\n f = open('wl4.txt', 'w')\n f_write = f.write\n nwords = 0\n chars = map(chr, xrange(33, ubound)) # create a list of characters\n length = 4 #### length is a variable\n buffer_size_bytes = 1024 * 1024\n buffer_size_words = buffer_size_bytes // (length + 1)\n words_in_buffer = 0\n buffer = []; buffer_append = buffer.append\n for x in product(chars, repeat=length):\n words_in_buffer += 1\n buffer_append(''.join(x) + '\\n')\n if words_in_buffer >= buffer_size_words:\n f_write(''.join(buffer))\n nwords += words_in_buffer\n words_in_buffer = 0\n del buffer[:]\n if buffer:\n f_write(''.join(buffer))\n nwords += words_in_buffer\n f.close()\n return nwords\n hh = which102func()\nelif which == 201:\n ### Paul Hankin (needed output-to-file code added)\n def AllWords(n, CHARS=[chr(i) for i in xrange(33, ubound)]):\n #### n is the required word length\n if n == 1: return CHARS\n return (w + c for w in AllWords(n - 1) for c in CHARS)\n def which201func():\n f = open('wl4.txt', 'w')\n f_write = f.write\n nwords = 0\n for w in AllWords(4):\n f_write(w + '\\n')\n nwords += 1\n f.close()\n return nwords\n hh = which201func()\n\n",
"You can create the range(33, 127) once and save it off. Not having to create it repeatedly cuts the runtime in half on my machine.\nchars = [chr(c) for c in range(33, 127)]\n\n...\n\nfor s0 in chars:\n for s1 in chars:\n for s2 in chars:\n b = l + s0 + s1 + s2 + '\\n'\n f.write(b)\n hh += 1\n\n",
"The outer loop seems pretty pointless. Why not simply:\nfor l in range(33,127)\n .. your code for the n==1 case\n\nfor l in range(33,127)\n .. your code for the n==2 case\n\nfor l in range(33,127)\n .. your code for the n==3 case\n\nfor l in range(33,127)\n .. your code for the n==4 case\n\nThat will be both faster and easier read.\n",
"When doing operations that involve iterating a lot, a good place to start is the itertools package.\nIn this case it looks like you want the product function. Which gives you the:\n\ncartesian product, equivalent to a\n nested for-loop\n\nSo to get a list of the \"words\" you are creating:\nfrom itertools import product\n\nchars = map(chr, xrange(33,127)) # create a list of characters\nwords = [] # this will be the list of words\n\nfor length in xrange(1, 5): # length is the length of the words created\n words.extend([''.join(x) for x in product(chars, repeat=length)])\n\n# instead of keeping a separate counter, hh, we can use the len function\nprint \"We Made %d Words.\" % (len(words)) \n\nf = open('wl4.txt', 'w')\nf.write('\\n'.join(words)) # write one word per line\nf.close()\n\nAs a result we get the result that your script gives us. And since itertools is implemented in c, it is also faster.\nEdit:\nPer John Machin's very astute comment about the memory usage, here's updated code that doesn't give me a memory error when I run it on the whole range(33, 127).\nfrom itertools import product\n\nchars = map(chr, xrange(33,127)) # create a list of characters\nf_words = open('wl4.txt', 'w')\n\nnum_words = 0 # a counter (was hh in OPs code)\nfor length in xrange(1, 5): # length is the length of the words created\n for char_tup in product(chars, repeat=length):\n f_words.write(''.join(char_tup) + '\\n')\n num_words += 1\n\nf.close()\nprint \"We Made %d Words.\" % (num_words) \n\nThis runs in about 4 minutes (240 seconds) on my machine.\n",
"How about this it workes with arbitary word lengths: (Password generator?)\nf = open('wl4.txt', 'w')\nhh=0\nchars = map(chr,xrange(33, 127))\n\ndef func(n, result):\n if (n == 0):\n f.write(result + \"\\n\")\n hh +=1\n else:\n for c in chars:\n func(n-1, result+c)\n\nfor n in range(1, 5):\n func(n,\"\")\n###################################### \nprint \"We Made %d Words.\" %(hh) \n###################################### \nf.close()\n\n",
"Do you need all of the words sorted by their length? If you can mingle the lengths together, you can improve slightly on John Kugelman's answer like this:\nf = open(\"wl4.txt\", \"w\")\n\nchars = [chr(c) for c in range(33, 127)]\nc = len(chars)\ncount = c + c*c + c**3 + c**4\n\nfor c0 in chars:\n print >>f, c0\n for c1 in chars:\n s1 = c0 + c1\n print >>f, s1\n for c2 in chars:\n s2 = s1 + c2\n print >>f, s2\n for c3 in chars:\n print >>f, s2 + c3\n\nprint \"We Made %d Words.\" % count\n\nDirectly calculating hh instead of all of the incrementing is also a big win (about 15% on this laptop). There's also an improvement from using print over f.write, though i have no idea why that's the case. This version runs in about 39 seconds for me.\n",
"Here's a short recursive solution.\ndef AllWords(n, CHARS=[chr(i) for i in xrange(33, 127)]):\n if n == 1: return CHARS\n return (w + c for w in AllWords(n - 1) for c in CHARS)\n\nfor i in xrange(1, 5):\n for w in AllWords(i):\n print w\n\nPS: is it an error that character 127 is excluded?\n"
] |
[
8,
7,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"optimization",
"python"
] |
stackoverflow_0002433167_optimization_python.txt
|
Q:
Would this hack for per-object permissions in django work?
According to the documentation, a class can have the meta option permissions, described as such:
Options.permissions
Extra permissions to enter into the permissions table when creating this object. Add, delete and change permissions are automatically created for each object that has admin set. This example specifies an extra permission, can_deliver_pizzas:
permissions = (("can_deliver_pizzas", "Can deliver pizzas"),)
This is a list or tuple of 2-tuples in the format (permission_code, human_readable_permission_name).
Would it be possible to define permissions at run time by:
permissions = (("can_access_%s" % self.pk, /
"Has access to object %s of type %s" % (self.pk,self.__name__)),)
?
A:
I think in the context of the Meta class, you don't have access to self.
If you look for a solution for the admin application, read this about row level permissions.
It also says:
For public-facing (i.e., non-admin) views, you are of course free to implement whatever form of permission-checking logic your application requires.
A:
No, this wouldn't work, for a number of reasons. Firstly, as Felix points out, you have no access to self at that point. Secondly, as the documentation you quoted states, this is a list of items to enter into the permissions table - in other words these are actual database rows, which are created by manage.py syncdb.
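To illustrate the "whatever permission-checking logic your application requires" route from the quoted docs, here is a deliberately framework-free sketch (all names made up) of storing per-object grants explicitly and testing membership, rather than trying to generate Meta permissions at runtime:

```python
# Illustrative only: a real app would back this with a database table
# mapping users to object ids (e.g. a UserObjectPermission model).
grants = set()   # {(user_id, obj_id)}

def grant(user_id, obj_id):
    grants.add((user_id, obj_id))

def can_access(user_id, obj_id):
    return (user_id, obj_id) in grants

grant(1, 42)
assert can_access(1, 42)
assert not can_access(2, 42)   # no row, no access
```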
|
Would this hack for per-object permissions in django work?
|
According to the documentation, a class can have the meta option permissions, described as such:
Options.permissions
Extra permissions to enter into the permissions table when creating this object. Add, delete and change permissions are automatically created for each object that has admin set. This example specifies an extra permission, can_deliver_pizzas:
permissions = (("can_deliver_pizzas", "Can deliver pizzas"),)
This is a list or tuple of 2-tuples in the format (permission_code, human_readable_permission_name).
Would it be possible to define permissions at run time by:
permissions = (("can_access_%s" % self.pk, /
"Has access to object %s of type %s" % (self.pk,self.__name__)),)
?
|
[
"I think in the context of the Meta class, you don't have access to self.\nIf you look for a solution for the admin application, read this about row level permissions.\nThere is also says:\n\nFor public-facing (i.e., non-admin) views, you are of course free to implement whatever form of permission-checking logic your application requires. \n\n",
"No, this wouldn't work, for a number of reasons. Firstly, as Felix points out, you have no access to self at that point. Secondly, as the documentation you quoted states, this is a list of items to enter into the permissions table - in other words these are actual database rows, which are created by manage.py syncdb. \n"
] |
[
0,
0
] |
[] |
[] |
[
"database_permissions",
"django",
"permissions",
"python"
] |
stackoverflow_0002437621_database_permissions_django_permissions_python.txt
|
Q:
Django admin site auto populate combo box based on input
hi i have to following model
class Match(models.Model):
Team_one = models.ForeignKey('Team', related_name='Team_one')
Team_two = models.ForeignKey('Team', related_name='Team_two')
Stadium = models.CharField(max_length=255, blank=True)
Start_time = models.DateTimeField(auto_now_add=False, auto_now=False, blank=True, null=True)
Rafree = models.CharField(max_length=255, blank=True)
Judge = models.CharField(max_length=255, blank=True)
Winner = models.ForeignKey('Team', related_name='winner', blank=True)
updated = models.DateTimeField('update date', auto_now=True )
created = models.DateTimeField('creation date', auto_now_add=True )
def save(self, force_insert=False, force_update=False):
pass
@models.permalink
def get_absolute_url(self):
return ('view_or_url_name')
class MatchAdmin(admin.ModelAdmin):
list_display = ('Team_one','Team_two', 'Winner')
search_fields = ['Team_one','Team_tow']
admin.site.register(Match, MatchAdmin)
I was wondering: is there a way to populate the Winner combo box once Team one and Team two are selected in the admin site?
A:
There's no real easy way to do that with the Django admin. It's possible, but it would require you to replace the admin form and subclass the widget with some JavaScript that copies the team into the box. Way more effort than it's worth.
If I were you, I'd just have winner_team and loser_team fields.
also read this: http://www.python.org/dev/peps/pep-0008/
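Whichever fields you end up with, the rule the combo box would be enforcing can also be checked server-side. Here is a framework-free, illustrative sketch (not Django code) of that validation:

```python
def validate_winner(team_one, team_two, winner):
    # The winner must be one of the two teams playing the match.
    if winner not in (team_one, team_two):
        raise ValueError("Winner must be Team_one or Team_two")
    return winner

assert validate_winner("Lions", "Tigers", "Lions") == "Lions"
raised = False
try:
    validate_winner("Lions", "Tigers", "Bears")
except ValueError:
    raised = True
assert raised
```

In real Django this check would typically live in a form's clean() method or the model's clean().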
A:
You need django-smart-selects.
|
Django admin site auto populate combo box based on input
|
hi i have to following model
class Match(models.Model):
Team_one = models.ForeignKey('Team', related_name='Team_one')
Team_two = models.ForeignKey('Team', related_name='Team_two')
Stadium = models.CharField(max_length=255, blank=True)
Start_time = models.DateTimeField(auto_now_add=False, auto_now=False, blank=True, null=True)
Rafree = models.CharField(max_length=255, blank=True)
Judge = models.CharField(max_length=255, blank=True)
Winner = models.ForeignKey('Team', related_name='winner', blank=True)
updated = models.DateTimeField('update date', auto_now=True )
created = models.DateTimeField('creation date', auto_now_add=True )
def save(self, force_insert=False, force_update=False):
pass
@models.permalink
def get_absolute_url(self):
return ('view_or_url_name')
class MatchAdmin(admin.ModelAdmin):
list_display = ('Team_one','Team_two', 'Winner')
search_fields = ['Team_one','Team_tow']
admin.site.register(Match, MatchAdmin)
I was wondering: is there a way to populate the Winner combo box once Team one and Team two are selected in the admin site?
|
[
"Theres no real easy way to do that with the django admin. It's possible, but it would require you to replace the admin form, and subclass the widget with some javascript that copys the Team into the box. Way more effort than it's worth.\nIf I were you, I'd just have winner_team and loser_team fields\nalso read this: http://www.python.org/dev/peps/pep-0008/\n",
"You need django-smart-selects.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_admin",
"django_models",
"django_templates",
"python"
] |
stackoverflow_0002437264_django_django_admin_django_models_django_templates_python.txt
|
Q:
Python Script to backup a directory
#Filename:backup_ver1
import os
import time
#1 Using list to specify the files and directory to be backed up
source = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\Dr Py\Final_Py'
#2 define backup directory
destination = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\PyDevResourse'
#3 Setting the backup name
targetBackup = destination + time.strftime('%Y%m%d%H%M%S') + '.rar'
rar_command = "rar.exe a -ag '%s' %s" % (targetBackup, ''.join(source))
##i am sure i am doing something wrong here - rar command please let me know
if os.system(rar_command) == 0:
print 'Successful backup to', targetBackup
else:
print 'Backup FAILED'
O/P:- Backup FAILED
winrar is added to Path and CLASSPATH under Environment variables as well - anyone else with a suggestion for backing up the directory is most welcome
A:
Maybe instead of writing your own backup script you could use python tool called rdiff-backup, which can create incremental backups?
A:
The source directory contains spaces, but you don't have quotes around it in the command line. This might be a reason for the backup to fail.
To avoid problems like this, use the subprocess module instead of os.system:
subprocess.call(['rar.exe', 'a', '-ag', targetBackup, source])
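For illustration, here is how the full command could be assembled as an argument list (the paths are made up, rar.exe is assumed to be on PATH, and only the list construction is exercised here):

```python
import subprocess  # subprocess.call(cmd) would run the list built below
import time

source = r'C:\Documents and Settings\user\My Files'   # note the spaces
destination = r'C:\Backups'
target = destination + '\\' + time.strftime('%Y%m%d%H%M%S') + '.rar'

# Each argument is its own list element, so no manual quoting is needed.
cmd = ['rar.exe', 'a', '-ag', target, source]
assert target.endswith('.rar')
assert ' ' in cmd[-1]          # spaces in the path are safe in list form
# subprocess.call(cmd) == 0 would then indicate a successful backup
```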
A:
if the compression algorithm can be something else and its just to backup a directory, why not do it with python's own tar and gzip instead? eg
import os
import tarfile
import time
root="c:\\"
source=os.path.join(root,"Documents and Settings","rgolwalkar","Desktop","Desktop","Dr Py","Final_Py")
destination=os.path.join(root,"Documents and Settings","rgolwalkar","Desktop","Desktop","PyDevResourse")
targetBackup = destination + time.strftime('%Y%m%d%H%M%S') + 'tar.gz'
tar = tarfile.open(targetBackup, "w:gz")
tar.add(source)
tar.close()
that way, you are not dependent on rar.exe on the system.
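A self-contained way to convince yourself the tarfile approach works (a modern-Python sketch using a temporary directory rather than the real Desktop paths):

```python
import os
import tarfile
import tempfile
import time

with tempfile.TemporaryDirectory() as tmp:
    # Create a tiny source tree to archive.
    src = os.path.join(tmp, 'data')
    os.mkdir(src)
    with open(os.path.join(src, 'a.txt'), 'w') as f:
        f.write('hello')

    # Timestamped archive name, as in the answer.
    target = os.path.join(tmp, time.strftime('%Y%m%d%H%M%S') + '.tar.gz')
    with tarfile.open(target, 'w:gz') as tar:
        tar.add(src, arcname='data')

    # Read it back and check the member made it in.
    with tarfile.open(target, 'r:gz') as tar:
        names = tar.getnames()
    assert 'data/a.txt' in names
```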
|
Python Script to backup a directory
|
#Filename:backup_ver1
import os
import time
#1 Using list to specify the files and directory to be backed up
source = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\Dr Py\Final_Py'
#2 define backup directory
destination = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\PyDevResourse'
#3 Setting the backup name
targetBackup = destination + time.strftime('%Y%m%d%H%M%S') + '.rar'
rar_command = "rar.exe a -ag '%s' %s" % (targetBackup, ''.join(source))
##i am sure i am doing something wrong here - rar command please let me know
if os.system(rar_command) == 0:
print 'Successful backup to', targetBackup
else:
print 'Backup FAILED'
O/P:- Backup FAILED
winrar is added to Path and CLASSPATH under Environment variables as well - anyone else with a suggestion for backing up the directory is most welcome
|
[
"Maybe instead of writing your own backup script you could use python tool called rdiff-backup, which can create incremental backups?\n",
"The source directory contains spaces, but you don't have quotes around it in the command line. This might be a reason for the backup to fail.\nTo avoid problems like this, use the subprocess module instead of os.system:\nsubprocess.call(['rar.exe', 'a', '-ag', targetBackup, source])\n\n",
"if the compression algorithm can be something else and its just to backup a directory, why not do it with python's own tar and gzip instead? eg\nimport os\nimport tarfile\nimport time\nroot=\"c:\\\\\"\nsource=os.path.join(root,\"Documents and Settings\",\"rgolwalkar\",\"Desktop\",\"Desktop\",\"Dr Py\",\"Final_Py\")\ndestination=os.path.join(root,\"Documents and Settings\",\"rgolwalkar\",\"Desktop\",\"Desktop\",\"PyDevResourse\")\ntargetBackup = destination + time.strftime('%Y%m%d%H%M%S') + 'tar.gz' \ntar = tarfile.open(targetBackup, \"w:gz\")\ntar.add(source)\ntar.close()\n\nthat way, you are not dependent on rar.exe on the system.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"backup",
"python",
"rar",
"windows_xp"
] |
stackoverflow_0002438041_backup_python_rar_windows_xp.txt
|
Q:
Python twisted Reactor class
What is the significance of the decorators
@reactor.callWhenRunning,
@results_deferred.addCallback
@results_deferred.addErrback.
Also what are deferred strings, for example in the
twisted.internet.utils.getProcessOutput()
returns a deferred string what exactly is happening here?
I am new to twisted hence this might be a very simple question but reading twisted documentation did not help me much
A:
In the normal programming practice you'd do
db = Database.connect()
result = db.getResult()
processResult(result)
Now depending on your Database and network, these 3 statements can take anywhere from a millisecond to a few seconds.
We've all been programming this way for decades now, and for the most part we're fine with 'waiting'.
But there comes a time when your program can't just wait for results. You'd start to think: I could do a lot of other things while I wait for the result. Maybe print some output, process another function, or just quickly check a socket, etc.
Enter Twisted and Deferred.
Instead of waiting for the result, in Twisted the non-blocking methods return a Deferred immediately. You add a callback function to this Deferred, which means: call this function when you have the result/answer.
deferredResult = db.nonBlockingGetResult()
deferredResult.addCallback(processOutput)
As soon as the first statement is executed, it returns the 'something' back. And that something is Deferred. There's no blocking there, there no waiting. And to this Deferred you add the callback processOutput which is called when deferred is 'fired' - ie result is ready.
HTH
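To make the idea concrete without installing Twisted, here is a toy, illustrative re-implementation of the core Deferred mechanics (register now, fire later):

```python
class MiniDeferred(object):
    """Toy stand-in for twisted.internet.defer.Deferred (illustrative only)."""
    def __init__(self):
        self.callbacks = []
        self.result = None
        self.fired = False

    def addCallback(self, fn):
        if self.fired:                 # result already here: run at once
            self.result = fn(self.result)
        else:                          # otherwise remember it for later
            self.callbacks.append(fn)
        return self

    def callback(self, result):        # the event loop calls this
        self.fired = True
        self.result = result
        for fn in self.callbacks:      # each callback feeds the next
            self.result = fn(self.result)

d = MiniDeferred()
d.addCallback(lambda r: r.upper())     # registered before the result exists
d.callback("result")                   # later: the "answer" arrives
assert d.result == "RESULT"
```

The real Deferred adds errbacks, chaining, and much more, but the register-now/fire-later shape is the same.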
A:
A deferred is a like a promise to return output in the future. You really should read the documentation on Deferreds here and here. Also, you should read up on Python decorators in general. One introduction is here.
More specifically, what is happening is that when you call getProcessOutput(), the result is not quite ready. It might be ready in an instant or in an hour. But you probably don't care: whenever it is ready, you probably want to take the output and pass it to a function. So instead of returning the output (which is not going to be ready right away), getProcessOutput returns a deferred object. When the output is finally ready, the deferred object will notice and call whatever processing function you supply, passing along the actual process output data. You really should read up on deferreds though.
A:
I am not sure about Python, but this looks like the Active Object pattern and Futures. Futures are going to be standard in the next C++ version. If you read through Active Object and Futures you will get the idea.
|
Python twisted Reactor class
|
What is the significance of the decorators
@reactor.callWhenRunning,
@results_deferred.addCallback
@results_deferred.addErrback.
Also what are deferred strings, for example in the
twisted.internet.utils.getProcessOutput()
returns a deferred string what exactly is happening here?
I am new to twisted hence this might be a very simple question but reading twisted documentation did not help me much
|
[
"In the normal programming practice you'd do\ndb = Database.connect()\nresult = db.getResult()\nprocessResult(result)\n\nNow depending on your Database and network, these 3 statements can take anywhere from a millisecond to a few seconds.\nWe've all been programming this way for decades now, and for the most part we're fine with 'waiting'.\nBut there comes a time when your program cant just wait for results. You'd start to think, gee I could do a lot of other things while I wait for the result. Maybe print an output, or process a function, or just quickly check the socket etc.\nEnter Twisted and Deferred.\nInstead of waiting for result, in Twisted when invoked the special methods you'll get a Deferred. You'll add a callback function to this deferred which means, call this function when you have the result/answer.\n\ndeferredResult = db.nonBlockingGetResult()\ndeferredResult.addCallback(processOutput)\n\nAs soon as the first statement is executed, it returns the 'something' back. And that something is Deferred. There's no blocking there, there no waiting. And to this Deferred you add the callback processOutput which is called when deferred is 'fired' - ie result is ready.\nHTH\n",
"A deferred is a like a promise to return output in the future. You really should read the documentation on Deferreds here and here. Also, you should read up on Python decorators in general. One introduction is here.\nMore specifically, what is happening is that when you call getProcessOutput(), the result is not quite ready. It might be ready in an instant or in an hour. But you probably don't care: whenever it is ready, you probably want to take the output and pass it to a function. So instead of returning the output (which is not going to be ready right away), getProcessOutput returns a deferred object. When the output is finally ready, the deferred object will notice and call whatever processing function you supply, passing along the actual process output data. You really should read up on deferreds though.\n",
"I am not sure about python , but this looks a like Active object pattern, and Futures. Futures is going to be standard in next c++ version. If you read through Active object and Futures you will get an idea\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"python",
"twisted"
] |
stackoverflow_0002433616_python_twisted.txt
|
Q:
Problem with Tk and Ping in Python
I'm not being able to make this line work with Tk
import os
while(1):
ping = os.popen('ping www.google.com -n 1')
result = ping.readlines()
msLine = result[-1].strip()
print msLine.split(' = ')[-1]
I'm trying to create a label and text = msLine.split... but everything freezes
A:
There can be other issues with Tk and popen(). First:
Thou shalt not continuously ping or fetch from google.com.
Add an "import time" at the top and a "time.sleep(2)" at the bottom of
the while loop.
Second:
You probably meant "ping www.google.com -c 1" instead of "-n 1". The "-c 1" asks for one
ping only. The "-n 1" pings 0.0.0.1.
A:
Your example code shows no GUI code. It is impossible to guess why your GUI freezes without seeing the code. Though, your code is pretty buggy so even if there were GUI code in your post it likely wouldn't help.
Is it possible that you're forgetting to call the mainloop() method on your root widget? That would explain the freeze. And if you are calling mainloop(), there's no reason to do while(1) since the main event loop itself is an infinite loop. Why are you calling ping in a loop?
One specific problem you have is that you are calling ping wrong. For one, the option "-n 1" needs to come before the hostname argument (ie: 'ping -n 1 www.google.com' instead of 'ping www.google.com -n 1'). Also, -n is the wrong thing to do. I think you want "-c 1"
Here's a working example of how you can ping periodically and update a label:
import os
from Tkinter import *
class App:
def __init__(self):
self.root = Tk()
self.create_ui()
self.url = "www.google.com"
self.do_ping = False
self.root.mainloop()
def create_ui(self):
self.label = Label(self.root, width=32, text="Ping!")
self.button = Button(text="Start", width=5, command=self.toggle)
self.button.pack(side="top")
self.label.pack(side="top", fill="both", expand=True)
def toggle(self):
if self.do_ping:
self.do_ping = False
self.button.configure(text="Start")
else:
self.do_ping = True
self.button.configure(text="Stop")
self.ping()
def ping(self):
if not self.do_ping:
return
ping = os.popen('ping -c 1 %s' % self.url)
result = ping.readlines()
msLine = result[-1].strip()
data = msLine.split(' = ')[-1]
self.label.configure(text=data)
# re-schedule to run in another half-second
if self.do_ping:
self.root.after(500, self.ping)
app=App()
|
Problem with Tk and Ping in Python
|
I'm not able to make this code work with Tk
import os
while(1):
ping = os.popen('ping www.google.com -n 1')
result = ping.readlines()
msLine = result[-1].strip()
print msLine.split(' = ')[-1]
I'm trying to create a label and text = msLine.split... but everything freezes
|
[
"There can be other issues with Tk and popen(). First:\nThou shalt not continously ping or fetch from google.com.\nAdd a \"import time\" at the top and \"time.sleep(2)\" at the bottom of \nthe while loop.\nSecond:\nYou probably meant \"ping www.google.com -c 1\" instead of \"-n 1\". The \"-c 1\" asks for one\nping only. The \"-n 1\" pings 0.0.0.1. \n",
"Your example code shows no GUI code. It is impossible to guess why your GUI freezes without seeing the code. Though, your code is pretty buggy so even if there were GUI code in your post it likely wouldn't help.\nIs it possible that you're forgetting to call the mainloop() method on your root widget? That would explain the freeze. And if you are calling mainloop(), there's no reason to do while(1) since the main event loop itself is an infinite loop. Why are you calling ping in a loop?\nOne specific problem you have is that you are calling ping wrong. For one, the option \"-n 1\" needs to come before the hostname argument (ie: 'ping -n 1 www.google.com' instead of 'ping www.google.com -n 1'). Also, -n is the wrong thing to do. I think you want \"-c 1\" \nHere's a working example of how you can ping periodically and update a label:\nimport os\nfrom Tkinter import *\nclass App:\n def __init__(self):\n self.root = Tk()\n self.create_ui()\n self.url = \"www.google.com\"\n self.do_ping = False\n self.root.mainloop()\n\n def create_ui(self):\n self.label = Label(self.root, width=32, text=\"Ping!\")\n self.button = Button(text=\"Start\", width=5, command=self.toggle)\n self.button.pack(side=\"top\")\n self.label.pack(side=\"top\", fill=\"both\", expand=True)\n\n def toggle(self):\n if self.do_ping:\n self.do_ping = False\n self.button.configure(text=\"Start\")\n else:\n self.do_ping = True\n self.button.configure(text=\"Stop\")\n self.ping()\n\n def ping(self):\n if not self.do_ping:\n return\n ping = os.popen('ping -c 1 %s' % self.url)\n result = ping.readlines()\n msLine = result[-1].strip()\n data = msLine.split(' = ')[-1] \n self.label.configure(text=data)\n # re-schedule to run in another half-second\n if self.do_ping:\n self.root.after(500, self.ping)\n\napp=App()\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0002430653_python_tkinter.txt
|
Q:
Differentiate gtk.Entry icons
I'm adding two icons to a gtk.Entry in PyGTK. The icons signals are handled by the following method
def entry_icon_event(self, widget, position, event)
I'm trying to differentiate between the two of them:
<enum GTK_ENTRY_ICON_PRIMARY of type GtkEntryIconPosition>
<enum GTK_ENTRY_ICON_SECONDARY of type GtkEntryIconPosition>
How can I do this? I've been digging through the documentation of PyGTK but there's no object GtkEntryIconPosition nor any definition for this enums.
Thanks
A:
Alright, since no one gave an answer, I'll go with what I actually found. A method using these icons would look like this:
def entry_icon_event(self, widget, icon, event):
    if icon.value_name == "GTK_ENTRY_ICON_PRIMARY":
        print "First Button"
        if event.button == 1:  # in GTK the left mouse button is 1
            print "Left Click"
        else:
            print "Right Click"
    elif icon.value_name == "GTK_ENTRY_ICON_SECONDARY":
        print "Second Button"
        if event.button == 1:
            print "Left Click"
        else:
            print "Right Click"
A:
There is a better way to do it:
def entry_icon_event(self, widget, icon, event):
if icon == gtk.ENTRY_ICON_PRIMARY:
...
elif icon == gtk.ENTRY_ICON_SECONDARY:
...
|
Differentiate gtk.Entry icons
|
I'm adding two icons to a gtk.Entry in PyGTK. The icons signals are handled by the following method
def entry_icon_event(self, widget, position, event)
I'm trying to differentiate between the two of them:
<enum GTK_ENTRY_ICON_PRIMARY of type GtkEntryIconPosition>
<enum GTK_ENTRY_ICON_SECONDARY of type GtkEntryIconPosition>
How can I do this? I've been digging through the documentation of PyGTK but there's no object GtkEntryIconPosition nor any definition for this enums.
Thanks
|
[
"Alright, since no one gave an answer, I'll do with what I actually found. A method to use this icons would look like this:\ndef entry_icon_event(self, widget, icon, event):\n if icon.value_name == \"GTK_ENTRY_ICON_PRIMARY\":\n print \"First Button\"\n if event.button == 0:\n print \"Left Click\":\n else:\n print \"Right Click\"\n elif icon.value_name == \"GTK_ENTRY_ICON_SECONDARY\":\n print \"Second Button\"\n if event.button == 0:\n print \"Left Click\":\n else:\n print \"Right Click\"\n\n",
"There is better way to do it:\ndef entry_icon_event(self, widget, icon, event):\n if icon == gtk.ENTRY_ICON_PRIMARY:\n ...\n elif icon == gtk.ENTRY_ICON_SECONDARY:\n ...\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"gtkentry",
"icons",
"pygtk",
"python"
] |
stackoverflow_0002191209_gtkentry_icons_pygtk_python.txt
|
Q:
Batch select with SQLAlchemy
I have a large set of values V, some of which are likely to exist in a table T. I would like to insert into the table those which are not yet inserted. So far I have the code:
for value in values:
s = self.conn.execute(mytable.__table__.select(mytable.value == value)).first()
if not s:
to_insert.append(value)
I feel like this is running slower than it should. I have a few related questions:
Is there a way to construct a select statement such that you provide a list (in this case, 'values') to which sqlalchemy responds with records which match that list?
Is this code overly expensive in constructing select objects? Is there a way to construct a single select statement, then parameterize at execution time?
A:
For the first question, something like this if I understand your question correctly
mytable.__table__.select(mytable.value.in_(values))
For the second question, querying this 1 row at a time is indeed overly expensive, although you might not have a choice in the matter. As far as I know there is no tuple select support in SQLAlchemy, so if there are multiple variables (think polymorphic keys) then SQLAlchemy can't help you.
Either way, if you select all matching rows and insert the difference you should be done :)
Something like this should work:
results = self.conn.execute(mytable.__table__.select(mytable.value.in_(values)))
available_values = set(row.value for row in results)
to_insert = set(values) - available_values
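The same select-the-matches, insert-the-difference pattern can be sketched end-to-end. This illustration uses the stdlib sqlite3 module so it is self-contained (the table and column names are made up); the difference is then inserted in a single executemany batch rather than one row at a time:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE mytable (value TEXT PRIMARY KEY)')
conn.executemany('INSERT INTO mytable (value) VALUES (?)',
                 [('a',), ('b',)])

values = ['a', 'b', 'c', 'd']

# one round trip: fetch every candidate value that already exists
placeholders = ','.join('?' * len(values))
rows = conn.execute(
    'SELECT value FROM mytable WHERE value IN (%s)' % placeholders,
    values)
existing = set(row[0] for row in rows)

# insert only the difference, in a single batch
to_insert = set(values) - existing
conn.executemany('INSERT INTO mytable (value) VALUES (?)',
                 [(v,) for v in to_insert])

count = conn.execute('SELECT COUNT(*) FROM mytable').fetchone()[0]
```

This replaces N round trips (one select per value) with one select and one batched insert.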
|
Batch select with SQLAlchemy
|
I have a large set of values V, some of which are likely to exist in a table T. I would like to insert into the table those which are not yet inserted. So far I have the code:
for value in values:
s = self.conn.execute(mytable.__table__.select(mytable.value == value)).first()
if not s:
to_insert.append(value)
I feel like this is running slower than it should. I have a few related questions:
Is there a way to construct a select statement such that you provide a list (in this case, 'values') to which sqlalchemy responds with records which match that list?
Is this code overly expensive in constructing select objects? Is there a way to construct a single select statement, then parameterize at execution time?
|
[
"For the first question, something like this if I understand your question correctly\nmytable.__table__.select(mytable.value.in_(values)\n\nFor the second question, querying this by 1 row at a time is overly expensive indeed, although you might not have a choice in the matter. As far as I know there is no tuple select support in SQLAlchemy so if there are multiple variables (think polymorhpic keys) than SQLAlchemy can't help you.\nEither way, if you select all matching rows and insert the difference you should be done :)\nSomething like this should work:\nresults = self.conn.execute(mytable.__table__.select(mytable.value.in_(values))\navailable_values = set(row.value for row in results)\nto_insert = set(values) - available_values\n\n"
] |
[
3
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0002438690_python_sqlalchemy.txt
|
Q:
Using Nose & NoseXUnit on a Python package
This is a previous post detailing a CI setup for Python. The asker and answerer detail the use of Nose and NoseXUnit with Hudson for their builds. However, NoseXUnit throws an error when run on any source folder where __init__.py is present:
File "build/bdist.linux-x86_64/egg/nosexunit/tools.py", line 59,
in packages nosexunit.excepts.ToolError: following folder can not contain
__init__.py file: /home/dev/source/web2py/applications
I can't think of a source folder of mine that is not a package also. Is there a step I am missing when dealing with NoseXUnit?
A:
You probably shouldn't use NoseXUnit - it's really out of date, and a similar feature exists in nose >= 0.11.
From nose --help:
--with-xunit Enable plugin Xunit: This plugin provides test results
in the standard XUnit XML format. [NOSE_WITH_XUNIT]
--xunit-file=FILE Path to xml file to store the xunit report in. Default
is nosetests.xml in the working directory
[NOSE_XUNIT_FILE]
if for some reason you need an old version of nose, use http://bitbucket.org/durin42/nose-xml/ - that's the plugin that became the --with-xunit option.
|
Using Nose & NoseXUnit on a Python package
|
This is a previous post detailing a CI setup for Python. The asker and answerer detail the use of Nose and NoseXUnit with Hudson for their builds. However, NoseXUnit throws an error when run on any source folder where __init__.py is present:
File "build/bdist.linux-x86_64/egg/nosexunit/tools.py", line 59,
in packages nosexunit.excepts.ToolError: following folder can not contain
__init__.py file: /home/dev/source/web2py/applications
I can't think of a source folder of mine that is not a package also. Is there a step I am missing when dealing with NoseXUnit?
|
[
"You probably shouldn't use NoseXUnit - it's really out of date, and a similar feature exists in nose >= 0.11.\nFrom nose --help:\n --with-xunit Enable plugin Xunit: This plugin provides test results\n in the standard XUnit XML format. [NOSE_WITH_XUNIT]\n --xunit-file=FILE Path to xml file to store the xunit report in. Default\n is nosetests.xml in the working directory\n [NOSE_XUNIT_FILE]\n\nif for some reason you need an old version of nose, use http://bitbucket.org/durin42/nose-xml/ - that's the plugin that became the --with-xunit option.\n"
] |
[
5
] |
[] |
[] |
[
"nose",
"nosetests",
"package",
"python",
"web2py"
] |
stackoverflow_0002083102_nose_nosetests_package_python_web2py.txt
|
Q:
warnings emitted during 'easy_install'
When I easy_install some python modules, warnings such as:
<some module>: module references __file__
<some module>: module references __path__
<some module>: module MAY be using inspect.trace
<some module>: module MAY be using inspect.getsourcefile
sometimes get emitted.
Where (what package / source file) do these messages come from? Why is referencing __file__ or __path__ considered a bad thing?
A:
easy_install doesn't like use of __file__ and __path__ not so much because they're dangerous, but because packages that use them almost always fail to run out of zipped eggs.
easy_install is warning because it'll install "less efficiently" into an unzipped directory instead of a zipped egg.
In practice, I'm usually glad when the zip_safe check fails, because then if I need to dive into the source of a module it's a ton easier.
A:
I wouldn't worry about it. As durin42 notes, this just means that setuptools won't zip the egg when it puts it into site packages. If you don't want to see these messages, I believe you can just use the -Z flag to easy_install. That will make it always unzip the egg.
I recommend using pip. It gives you a lot less unnecessary output to deal with.
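From the package author's side, the guessing can be avoided entirely: setuptools supports an explicit zip_safe flag in setup.py. A minimal sketch (the project name, version, and package layout here are placeholders):

```python
from setuptools import setup

setup(
    name='mypackage',        # placeholder project name
    version='0.1',           # placeholder version
    packages=['mypackage'],
    zip_safe=False,          # always install unzipped; skips the zip-safety scan
)
```

With zip_safe=False declared, easy_install installs the package as an unzipped directory without emitting these warnings.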
|
warnings emitted during 'easy_install'
|
When I easy_install some python modules, warnings such as:
<some module>: module references __file__
<some module>: module references __path__
<some module>: module MAY be using inspect.trace
<some module>: module MAY be using inspect.getsourcefile
sometimes get emitted.
Where (what package / source file) do these messages come from? Why is referencing __file__ or __path__ considered a bad thing?
|
[
"easy_install doesn't like use of __file__ and __path__ not so much because they're dangerous, but because packages that use them almost always fail to run out of zipped eggs. \neasy_install is warning because it'll install \"less efficiently\" into an unzipped directory instead of a zipped egg. \nIn practice, I'm usually glad when the zip_safe check fails, because then if I need to dive into the source of a module it's a ton easier.\n",
"I wouldn't worry about it. As durin42 notes, this just means that setuptools won't zip the egg when it puts it into site packages. If you don't want to see these messages, I believe you can just use the -Z flag to easy_install. That will make it always unzip the egg.\nI recommend using pip. It gives you a lot less unnecessary output to deal with.\n"
] |
[
7,
2
] |
[] |
[] |
[
"easy_install",
"python",
"warnings"
] |
stackoverflow_0002298403_easy_install_python_warnings.txt
|
Q:
Python: inserting double or single quotes around a string
I'm using Python to access a MySQL database and I'm getting an "unknown column in field" error due to quotes not being around the variable.
code below:
cur = x.cnx.cursor()
cur.execute('insert into tempPDBcode (PDBcode) values (%s);' % (s))
rows = cur.fetchall()
How do I manually insert double or single quotes around the value of s?
I've tried using str() and manually concatenating quotes around s, but it still doesn't work.
The SQL statement works fine; I've double and triple checked my SQL query.
A:
You shouldn't use Python's string functions to build the SQL statement. You run the risk of leaving an SQL injection vulnerability. You should do this instead:
cur.execute('insert into tempPDBcode (PDBcode) values (%s);', s)
Note the comma.
A:
Python will do this for you automatically, if you use the database API:
cur = x.cnx.cursor()
cur.execute('insert into tempPDBcode (PDBcode) values (%s)',s)
Using the DB API means that python will figure out whether to use quotes or not, and also means that you don't have to worry about SQL-injection attacks, in case your s variable happens to contain, say,
value'); drop database; '
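As a self-contained illustration of the parameterized style both answers recommend, here is the same insert using the stdlib sqlite3 module (whose DB-API placeholder is ? rather than MySQLdb's %s; the sample value is made up). The driver quotes and escapes the value itself:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE tempPDBcode (PDBcode TEXT)')

s = '1ABC'  # placeholder PDB code
# pass the value as a parameter -- no manual quoting or string formatting
conn.execute('INSERT INTO tempPDBcode (PDBcode) VALUES (?)', (s,))

row = conn.execute('SELECT PDBcode FROM tempPDBcode').fetchone()
```

The same shape works with MySQLdb by writing %s in the SQL and passing the parameters as a separate argument to execute().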
|
Python: inserting double or single quotes around a string
|
I'm using Python to access a MySQL database and I'm getting an "unknown column in field" error due to quotes not being around the variable.
code below:
cur = x.cnx.cursor()
cur.execute('insert into tempPDBcode (PDBcode) values (%s);' % (s))
rows = cur.fetchall()
How do I manually insert double or single quotes around the value of s?
I've tried using str() and manually concatenating quotes around s, but it still doesn't work.
The SQL statement works fine; I've double and triple checked my SQL query.
|
[
"You shouldn't use Python's string functions to build the SQL statement. You run the risk of leaving an SQL injection vulnerability. You should do this instead:\ncur.execute('insert into tempPDBcode (PDBcode) values (%s);', s) \n\nNote the comma.\n",
"Python will do this for you automatically, if you use the database API:\ncur = x.cnx.cursor()\ncur.execute('insert into tempPDBcode (PDBcode) values (%s)',s) \n\nUsing the DB API means that python will figure out whether to use quotes or not, and also means that you don't have to worry about SQL-injection attacks, in case your s variable happens to contain, say,\nvalue'); drop database; '\n\n"
] |
[
10,
5
] |
[
"If this were purely a string-handling question, the answer would be tojust put them in the string:\ncur.execute('insert into tempPDBcode (PDBcode) values (\"%s\");' % (s)) \n\nThat's the classic use case for why Python supports both kinds of quotes.\nHowever as other answers & comments have pointed out, there are SQL-specific concerns that are relevant in this case.\n"
] |
[
-5
] |
[
"python",
"quotes",
"sql"
] |
stackoverflow_0002439027_python_quotes_sql.txt
|
Q:
Reading Python Documentation for 3rd party modules
I recently downloaded IMDbpy module..
When I do,
import imdb
help(imdb)
I don't get the full documentation; I have to do
im = imdb.IMDb()
help(im)
to see the available methods. I don't like this console interface. Is there a better way of reading the documentation? I mean all the documentation related to the imdb module on one page.
A:
Use pydoc
pydoc -w imdb
This will generate imdb.html in the same directory.
pydoc -p 9090 will start an HTTP server on port 9090, and you will be able to browse all documentation at http://localhost:9090/
A:
in IPython you could do
[1]: import os
[2]: os?
< get the full documentation here >
# or you could do it on specific functions
[3]: os.uname
<built-in function>
[4]: os.uname?
< get the full documentation here >
# Incase of modules written in python, you could inspect source code by doing
[5]: import string
[6]: string??
< shows the source code of the module >
[7]: string.upper??
< shows the source code of the function >
|
Reading Python Documentation for 3rd party modules
|
I recently downloaded the IMDbPy module.
When I do,
import imdb
help(imdb)
I don't get the full documentation; I have to do
im = imdb.IMDb()
help(im)
to see the available methods. I don't like this console interface. Is there a better way of reading the documentation? I mean all the documentation related to the imdb module on one page.
|
[
"Use pydoc \npydoc -w imdb\n\nThis will generate imdb.html in the same directory.\n\npydoc -p 9090 will start a HTTP server on port 9090, and you will be able to browse all documentation at http://localhost:9090/\n",
"in IPython you could do\n[1]: import os\n[2]: os?\n\n< get the full documentation here >\n\n# or you could do it on specific functions \n[3]: os.uname\n<built-in function>\n\n\n\n[4]: os.uname?\n\n< get the full documentation here >\n\n\n# Incase of modules written in python, you could inspect source code by doing\n[5]: import string\n[6]: string??\n\n< hows the source code of the module >\n\n[7]: string.upper??\n\n< shows the source code of the function >\n\n"
] |
[
10,
1
] |
[] |
[] |
[
"documentation",
"imdbpy",
"pydoc",
"python"
] |
stackoverflow_0002437857_documentation_imdbpy_pydoc_python.txt
|
Q:
How to teach beginners reversing a string in Python?
I am teaching a course "Introduction to Computer Programming" to the first year math students. One has to assume that this is the first exposure of students to computer programming. Here are the main goals of my teaching:
Students should learn and understand the basics of Python.
Eventually they need to master sufficiently many Python tools so that they are able to select the right tool for a given problem.
At the same time they have to learn basic skills of problem solving by computer programming.
My method of teaching is to give for each newly introduced concept a series of problems and teasers that motivate students. For instance, when introducing strings and lists a natural question is the task of string or list reversal. If I ask students to write a code that will check whether a string is a palindrome then I better tell them how to reverse it.
For lists, a natural solution myString.reverse() has at least two drawbacks:
It does not carry over to strings.
Students will see it as magic unless told about methods first.
The real question is: How should one introduce the problem of reversing a string in Python?
A:
You could teach them about stride notation (::) first and then slicing and apply both.
s = 'string'
s = s[::-1]
print s # gnirts
References and more information:
Extended Slices
An Informal Introduction to Python
Python string reversed explanation
In response to your comment, you can supply the arguments explicitly.
>>> s[len(s):None:-1]
'gnirts'
>>> s[5:None:-1]
'gnirts'
>>> s[::-1] # and of course
'gnirts'
A:
The two obvious ways are:
''.join(reversed(s))
and
s[::-1]
I think both are non-trivial for a programming newbie, but the concepts involved are not really that difficult.
The second way is easier to understand if you start by showing them what the results of s[::3], s[::2] and s[::1] are. Then s[::-1] will come naturally :)
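That progression can be written as a handful of plain assertions, so students can check each step for themselves:

```python
s = 'string'

assert s[::3] == 'si'       # every third character
assert s[::2] == 'srn'      # every second character
assert s[::1] == 'string'   # every character: the whole string
assert s[::-1] == 'gnirts'  # step -1 walks the string backwards
```

Once the positive strides are familiar, the -1 stride reads as "the whole string, walked backwards".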
A:
Just ask them a riddle like this:
why
>>> 'dammitimmad'[::-1] == 'dammitimmad'
True
but
>>> 'dammit im mad'[::-1] == 'dammit im mad'
False
?
A:
Absolute beginners guide to string reversal in python. ;)
# Tell them that,
# to reverse a string
# we read it backwards
s = 'string' # input string
l = len(s)
rs = '' # reversed string
for i in range(l-1,-1,-1): # range(start,end,step)
rs += s[i]
print rs
But this is not considered Pythonic, and I am in favor of the better methods everyone else has already posted.
A:
Take a look at this discussion.
A:
Introduce them to enough tools (array slicing and perhaps functional-style recursion, in particular) to accomplish the reversal. Then, let them struggle with trying to figure it out for a while. Take a few different answers and compare them, showing the pros and cons of each way.
|
How to teach beginners reversing a string in Python?
|
I am teaching a course "Introduction to Computer Programming" to the first year math students. One has to assume that this is the first exposure of students to computer programming. Here are the main goals of my teaching:
Students should learn and understand the basics of Python.
Eventually they need to master sufficiently many Python tools so that they are able to select the right tool for a given problem.
At the same time they have to learn basic skills of problem solving by computer programming.
My method of teaching is to give for each newly introduced concept a series of problems and teasers that motivate students. For instance, when introducing strings and lists a natural question is the task of string or list reversal. If I ask students to write a code that will check whether a string is a palindrome then I better tell them how to reverse it.
For lists, a natural solution myString.reverse() has at least two drawbacks:
It does not carry over to strings.
Students will see it as magic unless told about methods first.
The real question is: How should one introduce the problem of reversing a string in Python?
|
[
"You could teach them about stride notation (::) first and then slicing and apply both.\ns = 'string'\ns = s[::-1]\nprint s # gnirts\n\nReferences and more information:\n\nExtended Slices\nAn Informal Introduction to Python\nPython string reversed explanation\n\nIn response to your comment, you can supply either arguments.\n>>> s[len(s):None:-1] \n'gnirts'\n>>> s[5:None:-1] \n'gnirts'\n>>> s[::-1] # and of course\n'gnirts'\n\n",
"The two obvious ways are:\n''.join(reversed(s))\n\nand\ns[::-1]\n\nI think both are non-trivial for a programming newbie, but the concepts involved are not really that difficult.\nThe second way is easier to understand if you start by showing them what the results of s[::3], s[::2] and s[::1] are. Then s[::-1] will come naturally :)\n",
"Just ask them a riddle like this:\nwhy\n>>> 'dammitimmad'[::-1] == 'dammitimmad'\nTrue\n\nbut\n>>> 'dammit im mad'[::-1] == 'dammit im mad'\nFalse\n\n?\n",
"Absolute beginners guide to string reversal in python. ;)\n# Tell them that,\n# to reverse a string\n# we read it backwards\n\ns = 'string' # input string\nl = len(s)\nrs = '' # reversed string\n\nfor i in range(l-1,-1,-1): # range(start,end,step)\n rs += s[i]\n\nprint rs\n\nBut this is not considered pythonic and I am in favor of better methods everyone else have already posted.\n",
"Take a look at this discussion.\n",
"Introduce them to enough tools (array slicing and perhaps functional-style recursion, in particular) to accomplish the reversal. Then, let them struggle with trying to figure it out for a while. Take a few different answers and compare them, showing the pros and cons of each way.\n"
] |
[
7,
4,
3,
2,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002439216_python.txt
|
Q:
how to embed a webpage using wx?
I need to show a webpage (a complex page with script and stuff, no static html) in a frame or something. It's for a desktop application; I'm using python 2.6 + wxPython 2.8.10.1. I need to catch some events too (mostly about changing page). I've found some samples using the webview module in a gtk application, but I couldn't get it to work with wx.
A:
You can embed IE, but I think that's about it. wxWebKit is working on a wx add-on to use WebKit as an embedded browser in wx, but I think it's still a work in progress.
A:
There is a commercial solution for this called wxWebConnect that uses Gecko (the Mozilla engine). I've never used it myself because i'm waiting for the wxWebKit project to be ready to use but it looks pretty good although perhaps a little overkill for your needs.
|
how to embed a webpage using wx?
|
I need to show a webpage (a complex page with script and stuff, no static html) in a frame or something. It's for a desktop application; I'm using python 2.6 + wxPython 2.8.10.1. I need to catch some events too (mostly about changing page). I've found some samples using the webview module in a gtk application, but I couldn't get it to work with wx.
|
[
"You can embed IE, but I think that's about it. wxWebKit is working on a wx add-on to use WebKit as an embedded browser in wx, but I think it's still a work in progress.\n",
"There is a commercial solution for this called wxWebConnect that uses Gecko (the Mozilla engine). I've never used it myself because i'm waiting for the wxWebKit project to be ready to use but it looks pretty good although perhaps a little overkill for your needs.\n"
] |
[
1,
1
] |
[] |
[] |
[
"python",
"wxwidgets"
] |
stackoverflow_0002439039_python_wxwidgets.txt
|
Q:
Install TurboGears on windows xp
I've been trying to get TurboGears installed on Windows by following this site.
I've installed virtualenv but when I execute the command "virtualenv --no-site-packages testproj", I get the following message:
New python executable in testproj\Scripts\python.exe
Traceback (most recent call last):
File "C:\Python26\Scripts\virtualenv-script.py", line 8, in
load_entry_point('virtualenv==1.4.5', 'console_scripts', 'virtualenv')()
File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 529, in main
use_distribute=options.use_distribute)
File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 612, in create_environment
site_packages=site_packages, clear=clear))
File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 837, in install_python
stdout=subprocess.PIPE)
File "C:\Python26\lib\subprocess.py", line 621, in __init__
errread, errwrite)
File "C:\Python26\lib\subprocess.py", line 830, in _execute_child
startupinfo)
WindowsError: [Error 14001] This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem
Can someone help me debug this ? If any one knows a better tutorial to install turbogears, please let me know.
A:
I figured out the error. Apparently, virtualenv does not like it if folder names have spaces (eg Documents and Settings). It worked fine when my folder names had no spaces.
|
Install TurboGears on windows xp
|
I've been trying to get TurboGears installed on Windows by following this site.
I've installed virtualenv but when I execute the command "virtualenv --no-site-packages testproj", I get the following message:
New python executable in testproj\Scripts\python.exe
Traceback (most recent call last):
File "C:\Python26\Scripts\virtualenv-script.py", line 8, in
load_entry_point('virtualenv==1.4.5', 'console_scripts', 'virtualenv')()
File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 529, in main
use_distribute=options.use_distribute)
File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 612, in create_environment
site_packages=site_packages, clear=clear))
File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 837, in install_python
stdout=subprocess.PIPE)
File "C:\Python26\lib\subprocess.py", line 621, in __init__
errread, errwrite)
File "C:\Python26\lib\subprocess.py", line 830, in _execute_child
startupinfo)
WindowsError: [Error 14001] This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem
Can someone help me debug this ? If any one knows a better tutorial to install turbogears, please let me know.
|
[
"I figured out the error. Apparently, virtualenv does not like it if folder names have spaces (eg Documents and Settings). It worked fine when my folder names had no spaces.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"turbogears",
"windows_xp"
] |
stackoverflow_0002426262_python_turbogears_windows_xp.txt
|
Q:
Parsing/Tokenizing a String Containing a SQL Command
Are there any open source libraries (any language, python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components?
That is, if I had the following string
SELECT a.foo, b.baz, a.bar
FROM TABLE_A a
LEFT JOIN TABLE_B b
ON a.id = b.id
WHERE baz = 'snafu';
I'd get back a data structure/object something like
//fake PHPish
$results['select-columns'] = Array[a.foo,b.baz,a.bar];
$results['tables'] = Array[TABLE_A,TABLE_B];
$results['table-aliases'] = Array[a=>TABLE_A, b=>TABLE_B];
//etc...
Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL. That's not what I want.
I realize I could glop through an open source database's code to find what I want, but I was hoping for something a little more ready made, (although if you know where in the MySQL, PostgreSQL, SQLite source to look, feel free to pass it along)
Thanks!
A:
The SQLite source has a file named parse.y that contains the grammar for SQL. You can pass that file to the Lemon parser generator to generate C code that implements the grammar.
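In Python, the third-party sqlparse library does this kind of tokenizing for real work. To illustrate the flavor of the task with the stdlib only, here is a regex toy that pulls the requested structure out of the question's query. This is a sketch, not a real SQL parser: it assumes a single simple SELECT where every table has an alias, and toy_parse is a made-up helper name:

```python
import re

def toy_parse(sql):
    """Toy extractor for simple SELECT statements; not a real SQL parser."""
    sql = ' '.join(sql.split())  # collapse whitespace and newlines
    # text between SELECT and FROM is the column list
    cols = re.search(r'SELECT\s+(.*?)\s+FROM', sql, re.I).group(1)
    # each FROM/JOIN is assumed to be followed by "table alias"
    tables = re.findall(r'(?:FROM|JOIN)\s+(\w+)\s+(\w+)', sql, re.I)
    return {
        'select-columns': [c.strip() for c in cols.split(',')],
        'tables': [t for t, _ in tables],
        'table-aliases': dict((a, t) for t, a in tables),
    }

query = """SELECT a.foo, b.baz, a.bar
FROM TABLE_A a
LEFT JOIN TABLE_B b
ON a.id = b.id
WHERE baz = 'snafu';"""

result = toy_parse(query)
```

A real grammar-driven parser (like the one Lemon generates from parse.y) handles nesting, quoting, and keywords that this sketch deliberately ignores.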
|
Parsing/Tokenizing a String Containing a SQL Command
|
Are there any open source libraries (any language, python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components?
That is, if I had the following string
SELECT a.foo, b.baz, a.bar
FROM TABLE_A a
LEFT JOIN TABLE_B b
ON a.id = b.id
WHERE baz = 'snafu';
I'd get back a data structure/object something like
//fake PHPish
$results['select-columns'] = Array[a.foo,b.baz,a.bar];
$results['tables'] = Array[TABLE_A,TABLE_B];
$results['table-aliases'] = Array[a=>TABLE_A, b=>TABLE_B];
//etc...
Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL. That's not what I want.
I realize I could glop through an open source database's code to find what I want, but I was hoping for something a little more ready made, (although if you know where in the MySQL, PostgreSQL, SQLite source to look, feel free to pass it along)
Thanks!
|
[
"SQLite source has a file named parse.y that contains grammar for SQL. You can pass that file to lemon parser generator to generate C code that executes the grammar. \n"
] |
[
2
] |
[] |
[] |
[
"parsing",
"php",
"python",
"sql",
"tokenize"
] |
stackoverflow_0002439618_parsing_php_python_sql_tokenize.txt
|
Q:
In what order should the Python concepts be explained to absolute beginners?
I am teaching Python to undergraduate math majors. I am interested in the optimal order in which students should be introduced to various Python concepts. In my view, at each stage the students should be able to solve a non-trivial programming problem using only the tools available at that time. Each new tool should enable a simpler solution to a familiar problem. A selection of numerous concepts available in Python is essential in order to keep students focused. They should also motivated and should appreciate each newly mastered tool without too much memorization. Here are some specific questions:
For instance, my predecessor introduced lists before strings. I think the opposite is a better solution.
Should function definitions be introduced at the very beginning or after mastering basic structured programming ideas, such as decisions (if) and loops (while)?
Should sets be introduced before dictionaries?
Is it better to introduce reading and writing files early in the course or should one use input and print for most of the course?
Any suggestions with explanations are most welcome.
Edit: In high school the students were introduced to computers. A few of them learned how to program. Prior to this they had a course, covering word, excel, powerpoint, html, latex, a taste of Mathematica, but no programming. 5 years ago I used Mathematica in this course and the follow-up course uses C and later Java. Now I teach introduction to Python and in the follow-up course my colleague teaches object-oriented programming in Python. Later a student may take special courses on data structures, algorithms, optimization and in some elective courses they learn on their own Mathematica, Matlab and R.
A:
After some try / except as a teacher, I chose to stick to something like:
(starting from nothing, adjust to their level)
Shortly, what is Python and what you can do with it. Skip the speech on technical stuff and focus on what they want to do : music, GUI, Web site, renaming files, etc.
Installing Python, running the interpreter. If you can, use iPython.
Variables, basic strings and print().
Int and types (including type errors and casting).
Basic arithmetic. Show them 1 / 0 and 10 / 3, but don't bother them with details.
Putting arithmetic results in variables.
Using variables in arithmetic.
String formatting with %. Show only "%s"; it's enough and always works. Always use a tuple (with a trailing comma), even if it contains only one item.
Lists, indexing, slicing and common errors. Then show tuples as frozen lists (and casting). Show that they can contain each other. Make them work on that until they master it perfectly: this is very, very important.
Dictionaries, with common errors. Nesting with tuples and lists. Insist on the last point.
For loop on strings, then lists, then tuples, then dictionaries.
For loop on nested types. Be nasty. Take your time. Knowing that part well changes everything.
Dictionary items(), values() and keys().
Reading files using for, including IOErrors.
Writing files.
Using methods. Use a string as an example showing strip(), lower(), split(), etc. Don't explain OOP, just how to use a method. Use the word "method" a lot from now on.
Creating a module file and using it. One module only. Everything in it.
Functions (only with return, no print(). Forbid print() in functions).
Function parameters.
Named parameters.
Default value parameters.
Try / Except and exceptions.
Import and creation of your own directory modules. Show all the special cases (it takes way more time to explain it than you think).
Demonstrate some standard modules (but don't spend too much time on it, it's just to show): datetime, string, os and sys. Avoid abstract stuff like itertools; it's a coder's dream but a student's nightmare.
After that you can bring OOP to the table, but it's a bit more complicated. Use strings, lists and files to introduce the notion of an object. When they've got it, start with classes. Then may the force be with you :-)
It is tempting to use print in functions to show how they work, and even more tempting to use raw_input. You should avoid both at all costs. The first makes it very difficult to introduce the concept of a "returned value"; the second hides the real flow of a program, and students have a hard time understanding that you need to chain functions, not ask the user for every value you need.
Generally, choose one method that works for something and stick to it. Don't show alternative ways. E.g.:
Show only string formatting using %, and ignore + and ,. You can always add a little "going further" block in your lecture material for the ones who want to know more. Show only for and not while. You can code almost 90% of Python programs without while. Avoid +=. Don't show that you can multiply strings/lists/dicts by ints. This is not wrong, but will lead them to misconceptions. You need them focused on the main concepts.
Don't show sets. Sets are very useful but rarely used. Encourage them to code at home and to ask you if they can't solve a problem. In that case, show sets if they are the solution. Learning sets takes time and student brain resources that could be used for something more often used. They will have plenty of time to learn new tools later, without you: focus on what is hard or time-consuming to learn alone.
Same goes for enumerate. Students with a C or Java background will use indexes to loop instead of for if you give them enumerate. For similar reasons, keep len, fd.read, fd.readlines and range for one of the last courses, entitled "advanced Python", if you have any time for it.
Don't even think about generators, metaclasses and decorators. These can be apprehended by very few students, even after months of practice. List comprehensions, with and ternary operations can be brought in during some of the last courses if you feel your students are smart arses.
Eventually, enforce good practices arbitrarily: PEP8 formatting, good architecture, naming conventions, no mutable default parameters, etc. They just can't know about it right now. Don't bother explaining; you are the teacher, and you have the right to say "this is just how it is" from time to time.
Oh, and they will be better programmers if they don't start by learning things like bytecode, recursion, assembly, complexity, bubble sort, stacks, implementation details, etc. You waste time teaching these to somebody who can't yet code a decent Python program; they just can't see what it's all about. Practice is your best tool for bringing in theory. And again, they will learn everything else by themselves later if you prepare them correctly, so prioritize, and don't be afraid to skip concepts, even simple/important ones.
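A tiny sketch (not part of the original answer) of the style it recommends: "%s" formatting with a one-element tuple (note the trailing comma), then a for loop over a nested structure.

```python
# "%s" formatting; the one-element tuple always works.
score = 17
print("score: %s" % (score,))

# A nested structure (dict of lists) and the double for loop
# the answer wants students to master.
grades = {"alice": [12, 15], "bob": [9, 14]}
for name in grades:
    for grade in grades[name]:
        print("%s got %s" % (name, grade))
```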
A:
You can see my outline here: http://homepage.mac.com/s_lott/books/nonprog/html/index.html
This presentation order is based on experience teaching C, Ada, C++, PL/SQL (and even a COBOL course once).
There's a great book, that has a sensible pedagogical ordering of concepts.
R. C. Holt, G. S. Graham, E. D. Lazowska, M. A. Scott. Structured Concurrent Programming with Operating Systems Applications. 1978. Addison-Wesley. 0201029375
http://openlibrary.org/b/OL4570626M/Structured_concurrent_programming_with_operating_systems_applications
A:
e-satis's list is pretty good, but since this is for a math class, I'd add the following suggestions:
First of all, either use Python 3.x or tell them to always use
from __future__ import division
Otherwise, they will get bitten by integer division. It's easy enough to remember when you type 1/2 at the interactive prompt, but you'll get bugs in subtle places like:
def mean(seq):
"""Return the arithmetic mean of a list."""
return sum(seq) / len(seq)
When you teach functions, show them the math module and the built-in sum function. Also show the ability to pass a function to another function, which is useful for writing generic derivative/integral approximations:
def derivative(f, x, delta_x=1e-8):
"""Approximate f'(x)."""
return (f(x + delta_x) - f(x - delta_x)) / (2 * delta_x)
print(derivative(math.sin, 0))
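In the same spirit, an integral approximation can take the function as a parameter too (a sketch, not part of the original answer; midpoint rule with an arbitrary default n):

```python
import math

def integral(f, a, b, n=1000):
    """Approximate the definite integral of f over [a, b] (midpoint rule)."""
    h = (b - a) / float(n)  # float() guards against Python 2 integer division
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(integral(math.sin, 0, math.pi))  # close to 2.0
```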
A:
This really depends on how much programming they know, but I've seen R introduced successfully to people with absolutely no knowledge of programming. I'm going to guess that they don't have much knowledge of programming.
This may sound obvious, but teach only as much of the language as they need to solve the problem, don't get in too deep with "proper" and efficient coding styles, you can start working this in slowly once your students have some understanding, e.g. comment on style, but don't be too strict on it.
To solve a problem you have to understand at least some basic part of the language. I'm going to assume that everything you do will likely be contained in a single line and namespacing, modules, performance, etc really won't be top priorities.
Start by getting them set up with a development environment, and create a simple program for them to run. Make sure they have an environment that has everything they need (if they need numpy, walk them through installation), walk them through starting a program from the command line, and of course have an editor that's easy to use (e.g. Eclipse + PyDev is probably too complicated). The most frustrating thing is when you can't get a working environment. Pray you don't have to support Windows or don't have many libraries to contend with.
Once you have that, introduce them to the language in general. Cover types and the subtle problems one may encounter, e.g.:
>>> 1/2
0
>>> 1./2
0.5
I would even instill a habit of making everything floats. Introduce strings for output and how to cast to them if you want output on the same line. Cover operations and logic, then provide an introduction to "functions," making sure to draw a distinction from the mathematical equivalent. I think command flow structures should be kept fairly simple and include the basic ones (if, else, elif, possibly while).
At this point they should be able to create a simple program to solve a simple problem. Start building on this, introducing more complex command flows, more complex data types (lists, sets, dicts), possibly iterators and generators (be careful with these, they can be a pain and you may not need them).
Edit:
I forgot to touch on input and output. I would provide a simple framework your students can use for this, if you want to. The command line should be sufficient unless you want to trace what's happening in which case a file output is much more reasonable. Alternatively, piping output to a file works just as well.
I think sets are much more mathematically relevant (and useful!) than dicts are, and would introduce them first.
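A quick illustration of why sets click with math students (a sketch, not part of the original answer): the operators read almost like blackboard notation.

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A & B)               # intersection: {3, 4}
print(A | B)               # union: {1, 2, 3, 4, 5}
print(A - B)               # difference: {1, 2}
print({x * x for x in A})  # comprehension, like {x^2 : x in A}
```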
A:
I have recently taught a short Python crash course to 1st-3rd year Computer Science students the majority of whom knew only C and C++, and even that not so well. My approach was quite different from what you are suggesting.
Disclaimer: The aim of my course was to introduce the language to people who are already familiar with basic ideas of programming, so this might not be appropriate if you are teaching people who have never been exposed to programming at all.
First, I did a short introduction to the language with its strengths and weaknesses and showing some simple Python programs that even someone who does not know Python can easily get.
Then I did a thorough run through data structures, using the REPL prompt extensively for examples. Sure, at this point they could not write a program, but writing any program (even if just a toy example) without using data structures is really not what Python is about; I would even say that attempting that suggests unpythonic habits to the students. I went in this order:
Numbers (int -> float)
Sequences (list & tuple -> string -> bytearray)
Sets
Dictionaries
Bools, including auto-casting to bools.
Next up was the basic syntax, in the order:
Statements (line breaks, etc.)
Printing
Variables, focusing on the peculiarities of dynamic binding and the major difference between the C concept of variables and its Python counterpart.
Conditionals
Loops
List comprehension
Function/method calls, including function chaining, keyword parameters and argument lists.
Modules, including importing and dealing with namespaces.
Fourth was a deep dive into functions. There's a surprisingly large amount to teach about Python functions, including various ways of defining arguments (keywords, lists), multiple returns, docstrings, scoping (a large subject area by itself), and an important but oft-missed part which is using functions as objects, passing them around, using lambdas, etc.
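The "functions as objects" point deserves a concrete illustration (a sketch, not part of the original answer):

```python
# Functions are objects: pass them to other functions,
# store them in containers, build them with lambda.
def twice(f, x):
    return f(f(x))

inc = lambda n: n + 1
print(twice(inc, 3))  # 5

ops = {"double": lambda n: 2 * n, "square": lambda n: n * n}
print(ops["square"](4))  # 16
```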
Fifth was a more practical overview of files including I/O and encoding issues and exceptions (concept -> catching -> raising).
Finally an overview of OO features in Python, including instance variables, method types (instance/class/static), inheritance, method naming (private, mangled, special), etc.
For your particular questions:
For instance, my predecessor introduced lists before strings. I think the opposite is a better solution.
I disagree. Conceptually, a string is just a list that gets a lot of special treatment, so it makes sense to build upon the simpler list concept. If you start with data structures as I did, you also won't have to deal with not being able to use strings in I/O examples.
Should function definitions be introduced at the very beginning or after mastering basic structured programming ideas, such as decisions (if) and loops (while)?
Definitely after. Calling functions should be taught around the same time as basic structured programming ideas, but defining your own should be postponed.
Should sets be introduced before dictionaries?
Well, dictionaries are certainly much more used in practice, but if you've introduced sequences, explaining sets (especially to math students) shouldn't take long, and it makes sense to progress from simpler to more complex structures.
Is it better to introduce reading and writing files early in the course or should one use input and print for most of the course?
Python's IO capabilities are really simple, so it shouldn't matter much, but I'd say these are unnecessary for basic exercises, so you might as well leave them off for the second half of the course.
In my view, at each stage the students should be able to solve a non-trivial programming problem using only the tools available at that time. Each new tool should enable a simpler solution to a familiar problem.
The incremental approach is obviously very different from my more academic one, but it certainly has its advantages, not least of which is that it keeps people more interested. However, I always disliked the fact that when you're done with learning a subject this way, you are left with the feeling that there might well be an easier solution than what you've learned so far to even the simplest problems, since there always has been throughout the course.
|
In what order should the Python concepts be explained to absolute beginners?
|
I am teaching Python to undergraduate math majors. I am interested in the optimal order in which students should be introduced to various Python concepts. In my view, at each stage the students should be able to solve a non-trivial programming problem using only the tools available at that time. Each new tool should enable a simpler solution to a familiar problem. Selecting carefully among the numerous concepts available in Python is essential in order to keep students focused. They should also be motivated and should appreciate each newly mastered tool without too much memorization. Here are some specific questions:
For instance, my predecessor introduced lists before strings. I think the opposite is a better solution.
Should function definitions be introduced at the very beginning or after mastering basic structured programming ideas, such as decisions (if) and loops (while)?
Should sets be introduced before dictionaries?
Is it better to introduce reading and writing files early in the course or should one use input and print for most of the course?
Any suggestions with explanations are most welcome.
Edit: In high school the students were introduced to computers. A few of them learned how to program. Prior to this they had a course, covering word, excel, powerpoint, html, latex, a taste of Mathematica, but no programming. 5 years ago I used Mathematica in this course and the follow-up course uses C and later Java. Now I teach introduction to Python and in the follow-up course my colleague teaches object-oriented programming in Python. Later a student may take special courses on data structures, algorithms, optimization and in some elective courses they learn on their own Mathematica, Matlab and R.
|
[
"After some try / except as a teacher, I chose to stick to something like:\n(starting from nothing, adjust to their level)\n\nShortly, what is Python and what you can do with it. Skip the speech on technical stuff and focus on what they want to do : music, GUI, Web site, renaming files, etc.\nInstalling Python, running the interpreter. If you can, use iPython.\nVariables, basic strings and print().\nInt and types (including type errors and casting).\nBasic calculus. Show them 1 / 0, 10 / 3 but don't bother them with details.\nPutting calculus results in variables.\nUsing variables in calculus.\nString formating with %. Show only \"%s\", it's enough and always works. Always use a tuple (with an ending coma), even if it contains only one item.\nLists, indexing, slicing and common errors. Then show tuples as frozen lists (and casting). Show that then can contain each others. Make them work on that until they master it perfectly: this is very, very important.\nDictionaries, with common errors. Nesting with tuples and lists. Insist on the last point.\nFor loop on strings, then lists, then tuples, then dictionaries.\nFor loop on nested types. Be nasty. Take your time. Knowing that part well changes everything.\nDictionary items(), values() and keys().\nReading files using for, including IOErrors.\nWriting files.\nUsing methods. Use a string as an example showing strip(), lower(), split(), etc. Don't explain OOP, just how to use a method. Use the world \"method\" a lot from now.\nCreating a module file and using it. One module only. Everything in it.\nFunctions (only with return, no print(). Forbid print() in functions).\nFunction parameters.\nNamed parameters.\nDefault value parameters.\nTry / Except and exceptions.\nImport and creation of your own directory modules. Show all the special cases (it takes way more time to explain it than you think).\nDemonstrate some standard modules (but don't spend too much time on it, it's just to show): datetime, string, os and sys. 
Avoid abstract stuffs like itertools, they are a coder dream but a student nightmare.\n\nAfter that you can bring OOP on the table, but it a bit more complicated. Use strings, lists and files to introduce the notion of object. When they got it, start with classes. Then may the force be with you :-)\nIt is tempting to use print in functions to show how it works, and even more tempting to use raw_input. You should avoid it at all cost. The first one makes it very difficult to bring the concept of a \"returned value\", the second hides the real flow of a program and students have a hard time to understand that you need to chain functions, not ask the user for every value you need.\nGenerally, choose one method that works for something and stick to it. Don't show alternative ways. E.g: \nShow only string formating using %, and ignore + and ,. You can always add a little \"going further\" block in your lecture material for the ones who want to know more. Show only for and not while. You can code almost 90% of Python programs without while. Avoid +=. Don't show that you can multiply strings/lists/dict with ints. This is not wrong, but will lead them to misconception. You need them focused on the main concepts. \nDon't show sets. Sets are very useful but rarely used. Encourage them to code at home and to ask you if they can't solve a problem. In that case show sets if they are the solution. Knowing sets take times and student brain resources that could be used for something more often used. They will have plenty of time to learn new tools later, without you: focus on what is hard or time consuming to learn alone.\nSame goes for enumerate. Students with a C or a Java background will use indexes to loop instead of for if you give them enumerate. For similar reasons, keep len, fd.read, fd.realines and range for one of the last courses entitles \"advanced python\" if you have any time for it. \nDon't even think about generators, metaclasses and decorators. 
These can be apprehended by very few students, even after months of practice. List comprehensions, with and ternary operations can be brought in some of the last courses if you feel your students are smart arses.\nEventually, enforce good practices arbitrarily. PEP8 formating, good architecture, name conventions, no immutable default parameters, etc. They just can't know about it right now. Don't bother, you are the teacher, you have the right to say \"this is just how it is\" from time to times.\nOh, and they will be better programmers if they don't start by learning things like bytecode, recursion, assembly, complexity, bubble sort, stack, implementation details, etc. You waste time teaching this to somebody that can't code a decent Python program, he just can't see what's this is all about. Practice is your best tools to bring theory. And again, they will learn everything else by them-self later if you prepare them correctly, so prioritize and and don't be afraid to skip concepts, even simple/important ones. \n",
"You can see my outline here: http://homepage.mac.com/s_lott/books/nonprog/html/index.html\nThis presentation order is based on experience teaching C, Ada, C++, PL/SQL (and even a COBOL course once).\nThere's a great book, that has a sensible pedagogical ordering of concepts.\nR. C. Holt, G. S. Graham, E. D. Lazowska, M. A. Scott. Structured Concurrent Programming with Operating Systems Applications. 1978. Addison-Wesley. 0201029375 \nhttp://openlibrary.org/b/OL4570626M/Structured_concurrent_programming_with_operating_systems_applications\n",
"e-satis's list is pretty good, but since this is for a math class, I'd add the following suggestions:\nFirst of all, either use Python 3.x or tell them to always use\nfrom __future__ import division\n\nOtherwise, they will get bitten by integer division. It's easy enough to remember when you type 1/2 at the interactive prompt, but you'll get bugs in subtle places like:\ndef mean(seq):\n \"\"\"Return the arithmetic mean of a list.\"\"\"\n return sum(seq) / len(seq)\n\nWhen you teach functions, show them the math module and the built-in sum function. Also show the ability to pass a function to another function, which is useful for writing generic derivative/integral approximations:\ndef derivative(f, x, delta_x=1e-8):\n \"\"\"Approximate f'(x).\"\"\"\n return (f(x + delta_x) - f(x - delta_x)) / (2 * delta_x)\n\nprint(derivative(math.sin, 0))\n\n",
"This really depends on how much programming they know, but I've seen R successfully introduced to people with absolutely no knowledge of programming successfully. I'm going to guess that they don't have much knowledge of programming. \nThis may sound obvious, but teach only as much of the language as they need to solve the problem, don't get in too deep with \"proper\" and efficient coding styles, you can start working this in slowly once your students have some understanding, e.g. comment on style, but don't be too strict on it. \nTo solve a problem you have to understand at least some basic part of the language. I'm going to assume that everything you do will likely be contained in a single line and namespacing, modules, performance, etc really won't be top priorities. \nStart by getting them setup with a development environment, and create a simple program for them to run. Make sure they have an environment that has everything they need (if they need numpy, walk them through installation), walk them through starting a program from the command line, and of course have an editor that's easy to use (e.g. Eclipse + PyDev, probably too complicated). The most frustrating thing is when you can't get an working environment. Pray you don't have to support windows or don't have many libraries to contend with. \nOnce you have that, introduce them to the language in general. Cover types and the subtle problems one may encounter, e.g.:\n>>> 1/2\n0\n>>> 1./2\n0.5\n\nI would even instill a habit of making everything floats. Introduce strings for output and how to cast that output if you want it on the same line. Cover operations and logic, then provide an introduction to \"functions,\" making sure to create a distinction between the mathematical equivalent. I think command flow structures should be fairly simple and include the simple ones (if, else, elif, possibly while).\nAt this point they should be able to create a simple program to solve a simple problem. 
Start building on this, introducing more complex command flows, more complex data types (lists, sets, dicts), possibly iterators and generators (be careful with these, they can be a pain and you may not need them).\nEdit: \nI forgot to touch on input and output. I would provide a simple framework your students can use for this, if you want to. The command line should be sufficient unless you want to trace what's happening in which case a file output is much more reasonable. Alternatively, piping output to a file works just as well. \nI think sets are much more mathematically relevant (and useful!) then dicts are, and would introduce them first.\n",
"I have recently taught a short Python crash course to 1st-3rd year Computer Science students the majority of whom knew only C and C++, and even that not so well. My approach was quite different from what you are suggesting.\nDisclaimer: The aim of my course was to introduce the language to people who are already familiar with basic ideas of programming, so this might not be appropriate if you are teaching people who have never been exposed to programming at all.\n\nFirst, I did a short introduction to the language with its strengths and weaknesses and showing some simple Python programs that even someone who does not know Python can easily get.\nThen I did a thorough run through data structures, using the REPL prompt extensively for examples. Sure, at this point they could not write a program, but writing any program (even if just a toy example) without using data structures is really not what Python is about; I would even say that attempting that suggests unpythonic habits to the students. I went in this order:\n\n\nNumbers (int -> float)\nSequences (list & tuple -> string -> bytearray)\nSets\nDictionaries\nBools, including auto-casting to bools.\n\nNext up was the basic syntax, in the order:\n\n\nStatements (line breaks, etc.)\nPrinting\nVariables, focusing on the peculiarities of dynamic binding and the major difference between the C concept of variables and its Python counterpart.\nConditionals\nLoops\nList comprehension\nFunction/method calls, including function chaining, keyword parameters and argument lists.\nModules, including importing and dealing with namespaces.\n\nForth was a deep dive into functions. 
There's a surprising lot to teach about Python functions, including various ways of defining arguments (keywords, lists), multiple returns, docstrings, scoping (a large subject area by itself), and an important but oft-missed part which is using functions as objects, passing them around, using lambdas, etc.\nFifth was a more practical overview of files including I/O and encoding issues and exceptions (concept -> catching -> raising).\nFinally an overview of OO features in Python, including instance variables, method types (instance/class/static), inheritance, method naming (private, mangled, special), etc.\n\nFor your particular questions:\n\nFor instance, my predecessor introduced lists before strings. I think the opposite is a better solution.\n\nI disagree. Conceptually, a string is just a list that gets a lot of special treatment, so it makes sense to build upon the simpler list concept. If you start with data structures as I did, you also won't have to deal with not being able to use strings in I/O examples.\n\nShould function definitions be introduced at the very beginning or after mastering basic structured programming ideas, such as decisions (if) and loops (while)?\n\nDefinitely after. 
Calling functions should be taught around the same time as basic structured programming ideas, but defining your own should be postponed.\n\nShould sets be introduced before dictionaries?\n\nWell, dictionaries are certainly much more used in practice, but if you've introduced sequences, explaining sets (especially to math students) shouldn't take long, and it makes sense to progress from simpler to more complex structures.\n\nIs it better to introduce reading and writing files early in the course or should one use input and print for most of the course?\n\nPython's IO capabilities are really simple, so it shouldn't matter much, but I'd say these are unnecessary for basic exercises, so you might as well leave them off for the second half of the course.\n\nIn my view, at each stage the students should be able to solve a non-trivial programming problem using only the tools available at that time. Each new tool should enable a simpler solution to a familiar problem.\n\nThe incremental approach is obviously very different from my more academic one, but it certainly has its advantages, not least of which is that it keeps people more interested. However, I always disliked the fact that when you're done with learning a subject this way, you are left with the feeling that there might well be an easier solution than what you've learned so far to even the simplest problems, since there always have been during the span of the course.\n"
] |
[
20,
2,
2,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002439638_python.txt
|
Q:
Passing in **kwargs from Flex over PyAMF
Anyone know if it is easily possible to send **kwargs over PyAMF from NetConnection.call()? I would like it.
I could write a wrapper around the actual function and expose that and perform some parsing manually to determine the kwargs to pass in, but I don't want to do that. I will just use a normal argument list in that case.
A:
Whilst ActionScript has the *args construct (params ...) there is no equivalent to **kwargs, although if you do need to send arbitrary named arguments, then you can always send a dict as a positional argument to the service. E.g.
def some_service_function(kwargs): # <- note the lack of **
foo = kwargs.get('foo')
bar = kwargs.get('bar')
And the calling ActionScript:
nc.call("some_service_function", {foo: "some", bar: "thing"})
|
Passing in **kwargs from Flex over PyAMF
|
Anyone know if it is easily possible to send **kwargs over PyAMF from NetConnection.call()? I would like it.
I could write a wrapper around the actual function and expose that and perform some parsing manually to determine the kwargs to pass in, but I don't want to do that. I will just use a normal argument list in that case.
|
[
"Whilst ActionScript has the *args construct (params ...) there is no equivalent to **kwargs, although if you do need to send arbitrary named arguments, then you can always send a dict as a positional argument to the service. E.g.\ndef some_service_function(kwargs): # <- note the lack of **\n foo = kwargs.get('foo')\n bar = kwargs.get('bar')\n\nAnd the calling ActionScript:\nnc.call(\"some_service_function\", {foo: \"some\", bar: \"thing\"})\n\n"
] |
[
1
] |
[] |
[] |
[
"keyword_argument",
"pyamf",
"python",
"remoting"
] |
stackoverflow_0002438235_keyword_argument_pyamf_python_remoting.txt
|
Q:
As a newbie, where should I go if I want to create a small GUI program?
I'm a newbie with a little experience writing in BASIC, Python and, of all things, a smidgeon of assembler (as part of a videogame ROM hack). I wanted to create a small tool, with a GUI interface, for modifying the hex values at particular points in a particular file.
What I'm looking for is the ability to create a small GUI program that I can distribute as an EXE (or, at least, a standalone directory). I'm not keen on the idea of the .NET languages, because I don't want to force people to download a massive .NET framework package. I currently have Python with IDLE and Boa Constructor set up, and the application runs there. I've tried looking up information on compiling a Python app that relies on wxWidgets, but the search results and the information I've found have been confusing, or just completely incomprehensible.
My questions are:
Is python a good language to use for this sort of project?
If I use Py2Exe, will WxWidgets already be included? Or will my users have to somehow install WxWidgets on their machines? Am I right in thinking at Py2Exe just produces a standalone directory, 'dist', that has the necessary files for the user to just double click and run the application?
If the program just relies upon Tkinter for GUI stuff, will that be included in the EXE Py2Exe produces? If so, are their any 'visual' GUI builders / IDEs for Python with only Tkinter?
Thankyou for your time,
JBMK
A:
You'd be better off thinking/saying/googling wxPython (not wxWidgets), since wxPython is the python wrapper for the wxWidgets C++.
1.) Python is a good language for this. If you are only targeting windows, I'd still do it in .NET/C# though. If you want cross-platform, Python/wxPython all the way.
2.) Yes, the wxPython files should be included in the dist directory. You'll of course have to install wxPython on your development machine. See here for some instructions on how to build. py2exe does produce a single directory with everything you need to run your program. It'll give you an EXE that you can double-click.
3.) I've never used Python's Tkinter with py2exe, but I can't see why it wouldn't work along the lines of wxPython.
You should keep in mind that your final distributable directory will be 10s of megs (py2exe packs the Python interpreter and other libraries needed for your app). Not quite as much as the .NET framework, but doesn't almost everybody have that installed already by now?
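For reference, a minimal py2exe build script might look like the following. This is a hypothetical config fragment (the entry-point name `myapp.py` is an assumption); you'd run it with `python setup.py py2exe` to produce the dist directory mentioned above:

```python
# setup.py -- hypothetical minimal py2exe configuration
from distutils.core import setup
import py2exe  # importing this registers the "py2exe" command

setup(
    # "windows" builds a GUI exe (no console window); use "console" for CLI apps
    windows=["myapp.py"],
)
```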
A:
If you're not afraid to learn a new language, consider Tcl/Tk. The reason I mention this is Tcl's superior-to-almost-everything distribution mechanism which makes it really easy to wrap up a single file exe that includes everything you need -- the Tcl/Tk runtime, your program, icons, sound files, etc. inside an embedded virtual filesystem. And the same technique you use for one platform works for all. You don't have to use different tools for different platforms.
If that intrigues you, google for starpack (single file that has it all), starkit (platform-independent application) and tclkit (platform-specific runtime).
Tcl/Tk isn't everyone's cup of tea, but as a getting-started GUI language it's hard to beat IMO. If it has an Achilles heel, it's that it has no printing support. It's surprising, though, how many GUIs don't need printing support these days.
A:
For a multiplatform GUI project I recommend the Qt libraries and PyQt.
I recently used them for a small application and I loved both; Qt has a great GUI designer, and PyQt's slot\signal model worked for me.
You can deploy your app on OS X and Windows using py2app and py2exe; here's a useful link that shows you how, and the possible size result.
A:
Python would fit your needs.
wxWidgets and Python are completely different things. I think you mean wxPython, which is a GUI toolkit for Python. I am not sure whether Py2Exe would include this, as I have never used Py2Exe - I build the packages and their dependencies manually.
Pretty sure tkinter would be included. I use tkinter a bit and it works well enough.
|
As a newbie, where should I go if I want to create a small GUI program?
|
I'm a newbie with a little experience writing in BASIC, Python and, of all things, a smidgeon of assembler (as part of a videogame ROM hack). I wanted to create a small tool, with a GUI interface, for modifying the hex values at particular points in a particular file.
What I'm looking for is the ability to create a small GUI program that I can distribute as an EXE (or, at least, a standalone directory). I'm not keen on the idea of the .NET languages, because I don't want to force people to download a massive .NET framework package. I currently have Python with IDLE and Boa Constructor set up, and the application runs there. I've tried looking up information on compiling a Python app that relies on wxWidgets, but the search results and the information I've found have been confusing, or just completely incomprehensible.
My questions are:
Is Python a good language to use for this sort of project?
If I use Py2Exe, will wxWidgets already be included? Or will my users have to somehow install wxWidgets on their machines? Am I right in thinking that Py2Exe just produces a standalone directory, 'dist', that has the necessary files for the user to just double-click and run the application?
If the program just relies upon Tkinter for GUI stuff, will that be included in the EXE Py2Exe produces? If so, are there any 'visual' GUI builders / IDEs for Python with only Tkinter?
Thank you for your time,
JBMK
|
[
"You'd be better off thinking/saying/googling wxPython (not wxWidgets), since wxPython is the python wrapper for the wxWidgets C++.\n1.) Python is a good language for this. If you are only targeting windows, I'd still do it in .NET/C# though. If you want cross-platform, Python/wxPython all the way.\n2.) Yes, the wxPython files should be included in the dist directory. You'll have to of course install wxPython to your development machine. See here for some instructions on how to build. py2exe does produce a single directory with everything you need to run you program. It'll give you an EXE that you can double-click.\n3.) I've never used Python's Tkinter with py2exe, but I can't see why it wouldn't work along the lines of wxPython.\nYou should keep in mind that your finally distributable directory will be 10s of megs (py2exe packs the python interpreter and other libraries needed for you app). Not quite as much as the .NET framework, but doesn't almost everybody have that installed already by now?\n",
"If you're not afraid to learn a new language, consider Tcl/Tk. The reason I mention this is Tcl's superior-to-almost-everything distribution mechanism which makes it really easy to wrap up a single file exe that includes everything you need -- the Tcl/Tk runtime, your program, icons, sound files, etc. inside an embedded virtual filesystem. And the same technique you use for one platform works for all. You don't have to use different tools for different platforms. \nIf that intrigues you, google for starpack (single file that has it all), starkit (platform-independent application) and tclkit (platform-specific runtime). \nTcl/Tk isn't everyone's cup of tea, but as a getting-started GUI language it's hard to beat IMO. If it has an Achilles heel is that it has no printing support. It's surprising, though, how many GUIs don't need printing support these days. \n",
"For a multiplatform GUI project i recommend you to use Qt libraries and PyQt.\nI recently used them for a small application and i loved both; Qt has a great Gui designer and PyQt slot\\signal model worked for me.\nYou can deploy your app on Osx and Windows using py2app and py2exe; here a useful link that show you how and the possible size result.\n",
"\nPython would fit your needs.\nwxWidgets and Python are completely different things. I think you mean wxPython, which is a GUI toolkit for Python. I am not sure whether Py2Exe would include this, as I have never used Py2Exe - I build the packages and their dependencies manually.\nPretty sure tkinter would be included. I use tkinter a bit and it works well enough.\n\n"
] |
[
4,
3,
0,
0
] |
[] |
[] |
[
"py2exe",
"python",
"tkinter",
"wxwidgets"
] |
stackoverflow_0002439520_py2exe_python_tkinter_wxwidgets.txt
|
Q:
Save memory in Python. How to iterate over the lines and save them efficiently with a 2million line file?
I have a tab-separated data file with a little over 2 million lines and 19 columns.
You can find it, in US.zip: http://download.geonames.org/export/dump/.
I started to run the following but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient so I'm posting that below. Still, with this small optimization, I'm using 30% of my memory on the process and have only done about 6.5% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?
def run():
from geonames.models import POI
f = file('data/US.txt')
for l in f:
li = l.split('\t')
try:
p = POI()
p.geonameid = li[0]
p.name = li[1]
p.asciiname = li[2]
p.alternatenames = li[3]
p.point = "POINT(%s %s)" % (li[5], li[4])
p.feature_class = li[6]
p.feature_code = li[7]
p.country_code = li[8]
p.ccs2 = li[9]
p.admin1_code = li[10]
p.admin2_code = li[11]
p.admin3_code = li[12]
p.admin4_code = li[13]
p.population = li[14]
p.elevation = li[15]
p.gtopo30 = li[16]
p.timezone = li[17]
p.modification_date = li[18]
p.save()
except IndexError:
pass
if __name__ == "__main__":
run()
EDIT, More details (the apparently important ones):
The memory consumption is going up as the script runs and saves more lines.
The method .save() is an adulterated Django model method with a unique_slug snippet that writes to a PostgreSQL/PostGIS DB.
SOLVED: DEBUG database logging in Django eats memory.
A:
Make sure that Django's DEBUG setting is set to False
A:
This looks perfectly fine to me. Iterating over the file like that or using xreadlines() will read each line as needed (with sane buffering behind the scenes). Memory usage should not grow as you read in more and more data.
As for performance, you should profile your app. Most likely the bottleneck is somewhere in a deeper function, like POI.save().
A:
There's no reason to worry in the data you've given us: is memory consumption going UP as you read more and more lines? Now that would be cause for worry -- but there's no indication that this would happen in the code you've shown, assuming that p.save() saves the object to some database or file and not in memory, of course. There's nothing real to be gained by adding del statements, as the memory is getting recycled at each leg of the loop anyway.
This could be sped up if there's a faster way to populate a POI instance than binding its attributes one by one -- e.g., passing those attributes (maybe as keyword arguments? positional would be faster...) to the POI constructor. But whether that's the case depends on that geonames.models module, of which I know nothing, so I can only offer very generic advice -- e.g., if the module lets you save a bunch of POIs in a single gulp, then making them (say) 100 at a time and saving them in bunches should yield a speedup (at the cost of slightly higher memory consumption).
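The "save them in bunches" idea above can be sketched independently of Django. The hypothetical helper below streams a tab-separated file line by line (so memory stays flat) while grouping parsed rows into batches that could each be saved in one gulp; the demo uses Python 3 syntax and an in-memory file:

```python
import csv
import io

def iter_batches(fileobj, batch_size=100):
    """Yield lists of tab-split rows, so records can be created and
    saved in bunches rather than one .save() call per line (sketch only)."""
    reader = csv.reader(fileobj, delimiter='\t')
    batch = []
    for row in reader:
        batch.append(row)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch

# Tiny demo with an in-memory "file" of five tab-separated lines:
data = io.StringIO("a\t1\nb\t2\nc\t3\nd\t4\ne\t5\n")
batch_sizes = [len(b) for b in iter_batches(data, batch_size=2)]
print(batch_sizes)  # → [2, 2, 1]
```

Each yielded batch is at most `batch_size` rows, so peak memory is bounded by the batch size rather than the file size.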
|
Save memory in Python. How to iterate over the lines and save them efficiently with a 2million line file?
|
I have a tab-separated data file with a little over 2 million lines and 19 columns.
You can find it, in US.zip: http://download.geonames.org/export/dump/.
I started to run the following but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient so I'm posting that below. Still, with this small optimization, I'm using 30% of my memory on the process and have only done about 6.5% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?
def run():
from geonames.models import POI
f = file('data/US.txt')
for l in f:
li = l.split('\t')
try:
p = POI()
p.geonameid = li[0]
p.name = li[1]
p.asciiname = li[2]
p.alternatenames = li[3]
p.point = "POINT(%s %s)" % (li[5], li[4])
p.feature_class = li[6]
p.feature_code = li[7]
p.country_code = li[8]
p.ccs2 = li[9]
p.admin1_code = li[10]
p.admin2_code = li[11]
p.admin3_code = li[12]
p.admin4_code = li[13]
p.population = li[14]
p.elevation = li[15]
p.gtopo30 = li[16]
p.timezone = li[17]
p.modification_date = li[18]
p.save()
except IndexError:
pass
if __name__ == "__main__":
run()
EDIT, More details (the apparently important ones):
The memory consumption is going up as the script runs and saves more lines.
The method .save() is an adulterated Django model method with a unique_slug snippet that writes to a PostgreSQL/PostGIS DB.
SOLVED: DEBUG database logging in Django eats memory.
|
[
"Make sure that Django's DEBUG setting is set to False\n",
"This looks perfectly fine to me. Iterating over the file like that or using xreadlines() will read each line as needed (with sane buffering behind the scenes). Memory usage should not grow as you read in more and more data.\nAs for performance, you should profile your app. Most likely the bottleneck is somewhere in a deeper function, like POI.save().\n",
"There's no reason to worry in the data you've given us: is memory consumption going UP as you read more and more lines? Now that would be cause for worry -- but there's no indication that this would happen in the code you've shown, assuming that p.save() saves the object to some database or file and not in memory, of course. There's nothing real to be gained by adding del statements, as the memory is getting recycled at each leg of the loop anyway.\nThis could be sped up if there's a faster way to populate a POI instance than binding its attributes one by one -- e.g., passing those attributes (maybe as keyword arguments? positional would be faster...) to the POI constructor. But whether that's the case depends on that geonames.models module, of which I know nothing, so I can only offer very generic advice -- e.g., if the module lets you save a bunch of POIs in a single gulp, then making them (say) 100 at a time and saving them in bunches should yield a speedup (at the cost of slightly higher memory consumption).\n"
] |
[
5,
2,
2
] |
[] |
[] |
[
"django",
"file",
"memory_management",
"python"
] |
stackoverflow_0002440495_django_file_memory_management_python.txt
|
Q:
Python 3: Most efficient way to create a [func(i) for i in range(N)] list comprehension
Say I have a function func(i) that creates an object for an integer i, and N is some nonnegative integer. Then what's the fastest way to create a list (not a range) equal to this list
mylist = [func(i) for i in range(N)]
without resorting to advanced methods like creating a function in C? My main concern with the above list comprehension is that I'm not sure if Python knows beforehand the length of range(N) to preallocate mylist, and therefore has to incrementally reallocate the list. Is that the case, or is Python clever enough to allocate mylist to length N first and then compute its elements? If not, what's the best way to create mylist? Maybe this?
mylist = [None]*N
for i in range(N): mylist[i] = func(i)
RE-EDIT: Removed misleading information from a previous edit.
A:
Somebody wrote: """Python is smart enough. As long as the object you're iterating over has a __len__ or __length_hint__ method, Python will call it to determine the size and preallocate the array."""
As far as I can tell, there is no preallocation in a list comprehension. Python has no way of telling from the size of the INPUT what the size of the OUTPUT will be.
Look at this Python 2.6 code:
>>> def foo(func, iterable):
... return [func(i) for i in iterable]
...
>>> import dis; dis.dis(foo)
2 0 BUILD_LIST 0 #### build empty list
3 DUP_TOP
4 STORE_FAST 2 (_[1])
7 LOAD_FAST 1 (iterable)
10 GET_ITER
>> 11 FOR_ITER 19 (to 33)
14 STORE_FAST 3 (i)
17 LOAD_FAST 2 (_[1])
20 LOAD_FAST 0 (func)
23 LOAD_FAST 3 (i)
26 CALL_FUNCTION 1
29 LIST_APPEND #### stack[-2].append(stack[-1]); pop()
30 JUMP_ABSOLUTE 11
>> 33 DELETE_FAST 2 (_[1])
36 RETURN_VALUE
It just builds an empty list, and appends whatever the iteration delivers.
Now look at this code, which has an 'if' in the list comprehension:
>>> def bar(func, iterable):
... return [func(i) for i in iterable if i]
...
>>> import dis; dis.dis(bar)
2 0 BUILD_LIST 0
3 DUP_TOP
4 STORE_FAST 2 (_[1])
7 LOAD_FAST 1 (iterable)
10 GET_ITER
>> 11 FOR_ITER 30 (to 44)
14 STORE_FAST 3 (i)
17 LOAD_FAST 3 (i)
20 JUMP_IF_FALSE 17 (to 40)
23 POP_TOP
24 LOAD_FAST 2 (_[1])
27 LOAD_FAST 0 (func)
30 LOAD_FAST 3 (i)
33 CALL_FUNCTION 1
36 LIST_APPEND
37 JUMP_ABSOLUTE 11
>> 40 POP_TOP
41 JUMP_ABSOLUTE 11
>> 44 DELETE_FAST 2 (_[1])
47 RETURN_VALUE
>>>
The same code, plus some code to avoid the LIST_APPEND.
In Python 3.X, you need to dig a little deeper:
>>> import dis
>>> def comprehension(f, iterable): return [f(i) for i in iterable]
...
>>> dis.dis(comprehension)
1 0 LOAD_CLOSURE 0 (f)
3 BUILD_TUPLE 1
              6 LOAD_CONST               1 (<code object <listcomp> at 0x00C4B8D8, file "<stdin>", line 1>)
9 MAKE_CLOSURE 0
12 LOAD_FAST 1 (iterable)
15 GET_ITER
16 CALL_FUNCTION 1
19 RETURN_VALUE
>>> dis.dis(comprehension.__code__.co_consts[1])
1 0 BUILD_LIST 0
3 LOAD_FAST 0 (.0)
>> 6 FOR_ITER 18 (to 27)
9 STORE_FAST 1 (i)
12 LOAD_DEREF 0 (f)
15 LOAD_FAST 1 (i)
18 CALL_FUNCTION 1
21 LIST_APPEND 2
24 JUMP_ABSOLUTE 6
>> 27 RETURN_VALUE
>>>
It's the same old schtick: start off with building an empty list, then iterate over the iterable, appending to the list as required. I see no preallocation here.
The optimisation that you are thinking about is used inside a single opcode e.g. the implementation of list.extend(iterable) can preallocate if iterable can accurately report its length. list.append(object) is given a single object, not an iterable.
A:
There is no difference in computational complexity between using an autoresizing array and preallocating an array. At worst, it costs about O(2N). See here:
Constant Amortized Time
The cost of the function calls plus whatever happens in your function is going to make this extra n insignificant. This isn't something you should worry about. Just use the list comprehension.
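You can observe the amortized behavior directly in CPython: sys.getsizeof shows the list's allocation jumping only occasionally, so most appends do no resizing at all. (This is an implementation detail, so the exact number of resizes varies by version.)

```python
import sys

# CPython over-allocates on append(), so getsizeof() grows in jumps:
# most appends are O(1), and the occasional resize is amortized away.
lst = []
sizes = []
for i in range(64):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# Count how many appends actually triggered a reallocation:
resizes = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)
print(resizes)  # a handful of resizes for 64 appends
```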
A:
If you use the timeit module, you may come to the opposite conclusion: list comprehension is faster than preallocation:
f=lambda x: x+1
N=1000000
def lc():
return [f(i) for i in range(N)]
def prealloc():
mylist = [None]*N
for i in range(N): mylist[i]=f(i)
return mylist
def xr():
return map(f,xrange(N))
if __name__=='__main__':
lc()
Warning: These are the results on my computer. You should try these tests yourself, as your results may be different depending on your version of Python and your hardware. (See the comments.)
% python -mtimeit -s"import test" "test.prealloc()"
10 loops, best of 3: 370 msec per loop
% python -mtimeit -s"import test" "test.lc()"
10 loops, best of 3: 319 msec per loop
% python -mtimeit -s"import test" "test.xr()"
10 loops, best of 3: 348 msec per loop
Note that unlike Javier's answer, I include mylist = [None]*N as part of the code that timeit times when using the "pre-allocation" method. (It's not just part of the setup, since it is code one would need only if using pre-allocation.)
PS. the time module (and time (unix) command) can give unreliable results. If you wish to time Python code, I'd suggest sticking with the timeit module.
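For reference, the same comparison can also be run inline with the timeit module from Python 3 code. This is a sketch only; no timings are claimed here, since results depend on your machine and interpreter:

```python
import timeit

# Shared setup for both timed statements
setup = "f = lambda x: x + 1\nN = 10000"

# Time the list comprehension
t_lc = timeit.timeit("[f(i) for i in range(N)]", setup=setup, number=10)

# Time the pre-allocation variant (the [None]*N line is part of what's timed)
t_pre = timeit.timeit(
    "mylist = [None] * N\n"
    "for i in range(N): mylist[i] = f(i)",
    setup=setup,
    number=10,
)

print(t_lc, t_pre)  # compare on your own machine; results vary
```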
A:
Going to have to disagree with Javier here...
With the following code:
print '%5s | %6s | %6s' % ('N', 'l.comp', 'manual')
print '------+--------+-------'
for N in 10, 100, 1000, 10000:
num_iter = 1000000 / N
# Time for list comprehension.
t1 = timeit.timeit('[func(i) for i in range(N)]', setup='N=%d;func=lambda x:x' % N, number=num_iter)
# Time to build manually.
t2 = timeit.timeit('''mylist = [None]*N
for i in range(N): mylist[i] = func(i)''', setup='N=%d;func=lambda x:x' % N, number=num_iter)
print '%5d | %2.4f | %2.4f' % (N, t1, t2)
I get the following table:
N | l.comp | manual
------+--------+-------
10 | 0.3330 | 0.3597
100 | 0.2371 | 0.3138
1000 | 0.2223 | 0.2740
10000 | 0.2185 | 0.2771
From this table it appears that the list comprehension is faster than pre-allocation for every one of these lengths.
A:
Interesting question. As the following test suggests, preallocation does not improve performance in the current CPython implementation (Python 2 code, but the result ranking is the same, except that there's no xrange in Python 3):
N = 5000000
def func(x):
return x**2
def timeit(fn):
import time
begin = time.time()
fn()
end = time.time()
print "%-18s: %.5f seconds" % (fn.__name__, end - begin)
def normal():
mylist = [func(i) for i in range(N)]
def normalXrange():
mylist = [func(i) for i in xrange(N)]
def pseudoPreallocated():
mylist = [None] * N
for i in range(N): mylist[i] = func(i)
def preallocated():
mylist = [None for i in range(N)]
for i in range(N): mylist[i] = func(i)
def listFromGenerator():
mylist = list(func(i) for i in range(N))
def lazy():
mylist = (func(i) for i in xrange(N))
timeit(normal)
timeit(normalXrange)
timeit(pseudoPreallocated)
timeit(preallocated)
timeit(listFromGenerator)
timeit(lazy)
Results (ranking in parentheses):
normal : 7.57800 seconds (2)
normalXrange : 7.28200 seconds (1)
pseudoPreallocated: 7.65600 seconds (3)
preallocated : 8.07800 seconds (5)
listFromGenerator : 7.84400 seconds (4)
lazy : 0.00000 seconds
but with psyco.full():
normal : 7.25000 seconds (3)
normalXrange : 7.26500 seconds (4)
pseudoPreallocated: 6.76600 seconds (1)
preallocated : 6.96900 seconds (2)
listFromGenerator : 10.50000 seconds (5)
lazy : 0.00000 seconds
As you can see, pseudo-preallocation is faster with psyco. In any case, there's not much of a difference between the xrange solution (which I'd recommend) and the other solutions. If you don't process all elements of the list later, you could also use the lazy method (shown in the code above) which will create a generator that produces elements by the time you iterate over it.
A:
Using a list comprehension to accomplish what you're trying to do would be the more Pythonic way to do it, despite the performance penalty. :)
|
Python 3: Most efficient way to create a [func(i) for i in range(N)] list comprehension
|
Say I have a function func(i) that creates an object for an integer i, and N is some nonnegative integer. Then what's the fastest way to create a list (not a range) equal to this list
mylist = [func(i) for i in range(N)]
without resorting to advanced methods like creating a function in C? My main concern with the above list comprehension is that I'm not sure if Python knows beforehand the length of range(N) to preallocate mylist, and therefore has to incrementally reallocate the list. Is that the case, or is Python clever enough to allocate mylist to length N first and then compute its elements? If not, what's the best way to create mylist? Maybe this?
mylist = [None]*N
for i in range(N): mylist[i] = func(i)
RE-EDIT: Removed misleading information from a previous edit.
|
[
"Somebody wrote: \"\"\"Python is smart enough. As long as the object you're iterating over has a __len__ or __length_hint__ method, Python will call it to determine the size and preallocate the array.\"\"\"\nAs far as I can tell, there is no preallocation in a list comprehension. Python has no way of telling from the size of the INPUT what the size of the OUTPUT will be.\nLook at this Python 2.6 code:\n>>> def foo(func, iterable):\n... return [func(i) for i in iterable]\n...\n>>> import dis; dis.dis(foo)\n 2 0 BUILD_LIST 0 #### build empty list\n 3 DUP_TOP\n 4 STORE_FAST 2 (_[1])\n 7 LOAD_FAST 1 (iterable)\n 10 GET_ITER\n >> 11 FOR_ITER 19 (to 33)\n 14 STORE_FAST 3 (i)\n 17 LOAD_FAST 2 (_[1])\n 20 LOAD_FAST 0 (func)\n 23 LOAD_FAST 3 (i)\n 26 CALL_FUNCTION 1\n 29 LIST_APPEND #### stack[-2].append(stack[-1]); pop()\n 30 JUMP_ABSOLUTE 11\n >> 33 DELETE_FAST 2 (_[1])\n 36 RETURN_VALUE\n\nIt just builds an empty list, and appends whatever the iteration delivers.\nNow look at this code, which has an 'if' in the list comprehension:\n>>> def bar(func, iterable):\n... 
return [func(i) for i in iterable if i]\n...\n>>> import dis; dis.dis(bar)\n 2 0 BUILD_LIST 0\n 3 DUP_TOP\n 4 STORE_FAST 2 (_[1])\n 7 LOAD_FAST 1 (iterable)\n 10 GET_ITER\n >> 11 FOR_ITER 30 (to 44)\n 14 STORE_FAST 3 (i)\n 17 LOAD_FAST 3 (i)\n 20 JUMP_IF_FALSE 17 (to 40)\n 23 POP_TOP\n 24 LOAD_FAST 2 (_[1])\n 27 LOAD_FAST 0 (func)\n 30 LOAD_FAST 3 (i)\n 33 CALL_FUNCTION 1\n 36 LIST_APPEND\n 37 JUMP_ABSOLUTE 11\n >> 40 POP_TOP\n 41 JUMP_ABSOLUTE 11\n >> 44 DELETE_FAST 2 (_[1])\n 47 RETURN_VALUE\n>>>\n\nThe same code, plus some code to avoid the LIST_APPEND.\nIn Python 3.X, you need to dig a little deeper:\n>>> import dis\n>>> def comprehension(f, iterable): return [f(i) for i in iterable]\n...\n>>> dis.dis(comprehension)\n 1 0 LOAD_CLOSURE 0 (f)\n 3 BUILD_TUPLE 1\n 6 LOAD_CONST 1 (<code object <listcomp> at 0x00C4B8D\n8, file \"<stdin>\", line 1>)\n 9 MAKE_CLOSURE 0\n 12 LOAD_FAST 1 (iterable)\n 15 GET_ITER\n 16 CALL_FUNCTION 1\n 19 RETURN_VALUE\n>>> dis.dis(comprehension.__code__.co_consts[1])\n 1 0 BUILD_LIST 0\n 3 LOAD_FAST 0 (.0)\n >> 6 FOR_ITER 18 (to 27)\n 9 STORE_FAST 1 (i)\n 12 LOAD_DEREF 0 (f)\n 15 LOAD_FAST 1 (i)\n 18 CALL_FUNCTION 1\n 21 LIST_APPEND 2\n 24 JUMP_ABSOLUTE 6\n >> 27 RETURN_VALUE\n>>>\n\nIt's the same old schtick: start off with building an empty list, then iterate over the iterable, appending to the list as required. I see no preallocation here.\nThe optimisation that you are thinking about is used inside a single opcode e.g. the implementation of list.extend(iterable) can preallocate if iterable can accurately report its length. list.append(object) is given a single object, not an iterable.\n",
"There is no difference in computational complexity between using an autoresizing array and preallocating an array. At worst, it costs about O(2N). See here: \nConstant Amortized Time\nThe cost of the function calls plus whatever happens in your function is going to make this extra n insignificant. This isn't something you should worry about. Just use the list comprehension.\n",
"If you use the timeit module, you may come to the opposite conclusion: list comprehension is faster than preallocation:\nf=lambda x: x+1\nN=1000000\ndef lc():\n return [f(i) for i in range(N)]\ndef prealloc():\n mylist = [None]*N\n for i in range(N): mylist[i]=f(i)\n return mylist\ndef xr():\n return map(f,xrange(N))\n\nif __name__=='__main__':\n lc()\n\nWarning: These are the results on my computer. You should try these tests yourself, as your results may be different depending on your version of Python and your hardware. (See the comments.)\n% python -mtimeit -s\"import test\" \"test.prealloc()\"\n10 loops, best of 3: 370 msec per loop\n% python -mtimeit -s\"import test\" \"test.lc()\"\n10 loops, best of 3: 319 msec per loop\n% python -mtimeit -s\"import test\" \"test.xr()\"\n10 loops, best of 3: 348 msec per loop\n\nNote that unlike Javier's answer, I include mylist = [None]*N as part of the code timeit is to time when using the \"pre-allocation\" method. (It's not just part of the setup, since it is code one would need only if using pre-allocation.)\nPS. the time module (and time (unix) command) can give unreliable results. If you wish to time Python code, I'd suggest sticking with the timeit module.\n",
"Going to have to disagree with Javier here...\nWith the following code:\nprint '%5s | %6s | %6s' % ('N', 'l.comp', 'manual')\nprint '------+--------+-------'\nfor N in 10, 100, 1000, 10000:\n num_iter = 1000000 / N\n\n # Time for list comprehension.\n t1 = timeit.timeit('[func(i) for i in range(N)]', setup='N=%d;func=lambda x:x' % N, number=num_iter)\n\n # Time to build manually.\n t2 = timeit.timeit('''mylist = [None]*N\nfor i in range(N): mylist[i] = func(i)''', setup='N=%d;func=lambda x:x' % N, number=num_iter)\n\n print '%5d | %2.4f | %2.4f' % (N, t1, t2)\n\nI get the following table:\n N | l.comp | manual\n------+--------+-------\n 10 | 0.3330 | 0.3597\n 100 | 0.2371 | 0.3138\n 1000 | 0.2223 | 0.2740\n10000 | 0.2185 | 0.2771\n\nFrom this table it appears that the list comprehension faster than pre-allocation in every case of these varying lengths.\n",
"Interesting question. As of the following test, it seems that preallocation does not improve performance in the current CPython implementation (Python 2 code but result ranking is the same, except that there's no xrange in Python 3):\nN = 5000000\n\ndef func(x):\n return x**2\n\ndef timeit(fn):\n import time\n begin = time.time()\n fn()\n end = time.time()\n print \"%-18s: %.5f seconds\" % (fn.__name__, end - begin)\n\ndef normal():\n mylist = [func(i) for i in range(N)]\n\ndef normalXrange():\n mylist = [func(i) for i in xrange(N)]\n\ndef pseudoPreallocated():\n mylist = [None] * N\n for i in range(N): mylist[i] = func(i)\n\ndef preallocated():\n mylist = [None for i in range(N)]\n for i in range(N): mylist[i] = func(i)\n\ndef listFromGenerator():\n mylist = list(func(i) for i in range(N))\n\ndef lazy():\n mylist = (func(i) for i in xrange(N))\n\n\n\ntimeit(normal)\ntimeit(normalXrange)\ntimeit(pseudoPreallocated)\ntimeit(preallocated)\ntimeit(listFromGenerator)\ntimeit(lazy)\n\nResults (ranking in parentheses):\nnormal : 7.57800 seconds (2)\nnormalXrange : 7.28200 seconds (1)\npseudoPreallocated: 7.65600 seconds (3)\npreallocated : 8.07800 seconds (5)\nlistFromGenerator : 7.84400 seconds (4)\nlazy : 0.00000 seconds\n\nbut with psyco.full():\nnormal : 7.25000 seconds (3)\nnormalXrange : 7.26500 seconds (4)\npseudoPreallocated: 6.76600 seconds (1)\npreallocated : 6.96900 seconds (2)\nlistFromGenerator : 10.50000 seconds (5)\nlazy : 0.00000 seconds\n\nAs you can see, pseudo-preallocation is faster with psyco. In any case, there's not much of a difference between the xrange solution (which I'd recommend) and the other solutions. If you don't process all elements of the list later, you could also use the lazy method (shown in the code above) which will create a generator that produces elements by the time you iterate over it.\n",
"Using list comprehension to accomplish what you're trying to do would be more pythonic way to do it. Despite performance penalty:).\n"
] |
[
7,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"list_comprehension",
"python"
] |
stackoverflow_0002439986_list_comprehension_python.txt
|
Q:
How can I use Python with Mechanize for posting multipart/form-data?
I am using http://pypi.python.org/pypi/mechanize/0.1.11 for programmatic web browsing, I want to be able to upload files to servers the same way the browser does (by sending the content as multipart/form-data, defined in RFC2388)
Is this possible with mechanize, can you show me an example?
Thanks!
A:
There are a couple of good answers on this SO question, one with bare mechanize and one with twill on top of it, and I believe they both end up sending multipart/form-data as you want.
|
How can I use Python with Mechanize for posting multipart/form-data?
|
I am using http://pypi.python.org/pypi/mechanize/0.1.11 for programmatic web browsing, I want to be able to upload files to servers the same way the browser does (by sending the content as multipart/form-data, defined in RFC2388)
Is this possible with mechanize, can you show me an example?
Thanks!
|
[
"There's a couple of good answer on this SO question, one with bare mechanize and one with twill on top of it, and I believe they both end up sending multipart/form-data as you want.\n"
] |
[
2
] |
[] |
[] |
[
"mechanize",
"multipartform_data",
"python"
] |
stackoverflow_0002439900_mechanize_multipartform_data_python.txt
|
Q:
How to bind an ip address to telnetlib in Python
The code below binds an ip address to urllib, urllib2, etc.
import socket
true_socket = socket.socket
def bound_socket(*a, **k):
sock = true_socket(*a, **k)
sock.bind((sourceIP, 0))
return sock
socket.socket = bound_socket
Is it also possible to bind an IP address with telnetlib?
A:
telnetlib at least in recent Python releases uses socket.create_connection (see telnetlib's sources here), but that should also be caught by your monkeypatch (sources here -- you'll see it uses a bare identifier socket, but that's exactly the module you're monkeypatching). Of course, monkeypatching is always extremely fragile (the tiniest optimization in some future release, hoisting the global lookup of socket in create_connection, and you're toast...;-) so maybe you'll want to monkeypatch create_connection directly as a modestly-stronger approach.
|
How to bind an ip address to telnetlib in Python
|
The code below binds an ip address to urllib, urllib2, etc.
import socket
true_socket = socket.socket
def bound_socket(*a, **k):
sock = true_socket(*a, **k)
sock.bind((sourceIP, 0))
return sock
socket.socket = bound_socket
Is it also possible to bind an IP address with telnetlib?
|
[
"telnetlib at least in recent Python releases uses socket.create_connection (see telnetlib's sources here) but that should also be caught by your monkeypatch (sources here -- you'll see it uses a bare identifier socket but that's exactly in the module you're monkeypatching). Of course monkeypatching is always extremely fragile (the tiniest optimization in some future release, hoisting the global lookup of socket in create_connection, and you're toast...;-) so maybe you'll want to monkeypath create_connection directly as a modestly-stronger approach.\n"
] |
[
2
] |
[] |
[] |
[
"ip_address",
"python",
"telnetlib"
] |
stackoverflow_0002440781_ip_address_python_telnetlib.txt
|
Q:
How to remove lowercase sentence fragments from text?
I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner.
These are commonly referred to as speech or attribution tags, for example - he said, she said, etc.
This example shows before and after using manual deletion:
Original:
"Ah, that's perfectly true!" exclaimed Alyosha.
"Oh, do leave off playing the fool! Some idiot comes in, and you put us
to shame!" cried the girl by the window, suddenly turning to her father
with a disdainful and contemptuous air.
"Wait a little, Varvara!" cried her father, speaking peremptorily but
looking at them quite approvingly. "That's her character," he said,
addressing Alyosha again.
"Where have you been?" he asked him.
"I think," he said, "I've forgotten something... my handkerchief, I
think.... Well, even if I've not forgotten anything, let me stay a
little."
He sat down. Father stood over him.
"You sit down, too," said he.
All lower case sentence fragments manually removed:
"Ah, that's perfectly true!"
"Oh, do leave off playing the fool! Some idiot comes in, and you put us
to shame!"
"Wait a little, Varvara!" "That's her character,"
"Where have you been?"
"I think," "I've forgotten something... my handkerchief, I
think.... Well, even if I've not forgotten anything, let me stay a
little."
He sat down. Father stood over him.
"You sit down, too,"
I've changed straight quotes " to balanced and tried: ” (...)+[.]
Of course, this removes some fragments but deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression.
I realize that it may be impossible to achieve 100% accuracy but any useful expression, perl, or python script would be deeply appreciated.
Cheers,
Aaron
A:
Here's a Python snippet that should do it:
import re
thetext="""triple quoted paste of your sample text"""
y=thetext.split('\n')
for line in y:
m=re.findall('(".*?")',line)
if m:
print ' '.join(m)
else:
print line
A:
The Text::Balanced module is what you seem to be after if you're looking to use Perl. The following should be able to extract all the quoted speech in your example (not pretty, but gets the job done).
It also works for Dennis' test cases.
The advantage of the code below is that the quotes are grouped by paragraph, which may or may not be useful for later analysis
Script
use strict;
use warnings;
use Text::Balanced qw/extract_quotelike extract_multiple/;
my %quotedSpeech;
{
local $/ = '';
while (my $text = <DATA>) { # one paragraph at a time
while (my $speech = extract_multiple(
$text,
[sub{extract_quotelike($_[0])},],
undef,
1))
{ push @{$quotedSpeech{$.}}, $speech; }
}
}
# Print total number of paragraphs in DATA filehandle
print "Total paragraphs: ", (sort {$a <=> $b} keys %quotedSpeech)[-1];
# Print quotes grouped by paragraph:
foreach my $paraNumber (sort {$a <=> $b} keys %quotedSpeech) {
print "\n\nPara ",$paraNumber;
foreach my $speech (@{$quotedSpeech{$paraNumber}}) {
print "\t",$speech,"\n";
}
}
# How many quotes in paragraph 8?
print "Number of quotes in Paragraph 8: ", scalar @{$quotedSpeech{8}};
__DATA__
"Ah, that's perfectly true!" exclaimed Alyosha.
"Oh, do leave off playing the fool!
Some idiot comes in, and you put us to
shame!" cried the girl by the window,
suddenly turning to her father with a
disdainful and contemptuous air.
"Wait a little, Varvara!" cried her
father, speaking peremptorily but
looking at them quite approvingly.
"That's her character," he said,
addressing Alyosha again.
"Where have you been?" he asked him.
"I think," he said, "I've forgotten
something... my handkerchief, I
think.... Well, even if I've not
forgotten anything, let me stay a
little."
He sat down. Father stood over him.
"You sit down, too," said he.
He said, "It doesn't always work."
"Secondly," I said, "it fails for
three quoted phrases..." He completed
my thought, "with two unquoted ones."
I replied, "That's right." dejectedly.
Output
Total paragraphs: 10
Para 1 "Ah, that's perfectly true!"
Para 2 "Oh, do leave off playing the fool! Some idiot comes in, and you put us
to shame!"
Para 3 "Wait a little, Varvara!"
"That's her character,"
Para 4 "Where have you been?"
Para 5 "I think,"
"I've forgotten something... my handkerchief, I think.... Well, even if
I've not forgotten anything, let me stay a little."
Para 7 "You sit down, too,"
Para 8 "It doesn't always work."
Para 9 "Secondly,"
"it fails for three quoted phrases..."
"with two unquoted ones."
Para 10 "That's right."
A:
I am not entirely sure which editor you are using, but if it supports atomic grouping (e.g. EditPad Pro) you can use the regular expression below to do the search and replace:
Search for
(".+?"|^[A-Z].+\r\n)(.(?!"))*
Note: you should replace \r\n with \n or \r according to your line breaks
Replace with
\1
Here is a bit explanation for the regular expression:
The first capturing group matches characters between quotes and lines starting with capital letters. The second capturing group matches any characters that come after a quote but before another quote.
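The same idea can be expressed in Python, which is easier to verify than an editor search-and-replace. This is a sketch based on that regex, using a negative lookahead instead of atomic grouping and applied per line; note that lines containing no quotes at all are left untouched:

```python
import re

def strip_attributions(line):
    # Keep each quoted span; swallow the unquoted text that follows it,
    # up to the next quote (or to the end of the line).
    return re.sub(r'(".+?")(?:.(?!"))*', r'\1', line)

print(strip_attributions('"Wait a little, Varvara!" cried her father.'))
```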
A:
This works for all cases shown in the question:
sed -n '/"/!{p;b}; s/\(.*\)"[^"]*/\1" /;s/\(.*"\)\([^"]*\)\(".*"\)/\1 \3/;p' textfile
It fails for cases such as these:
He said, "It doesn't always work."
"Secondly," I said, "it fails for three quoted phrases..." He completed my thought, "with two unquoted ones."
I replied, "That's right." dejectedly.
A:
If I understand what you are after... passing each line through a regex like this should work...
You can use the perl debugger to play around with this. Hop into the perl debugger with just a perl -de 42 on the command line in linux/mac. (The "42" is just a valid expression - it could be anything, but why not choose the meaning of life?)
anyways
open FILE, "<", "filename.txt" or die $!;
while (my $line = <FILE>) {
@fixed_text = $line =~ m{(?:(" .+? ")) | (?:\A .* [^"] .* \z)}xmsg;
for my $new_line (@fixed_text) {
print qq($new_line );
}
print qq(\n);
}
NOTE: Sorry I had to edit it - didn't see you wanted lines without any quotes at all...
Yes, regex and Perl are amazing. It should be 100% accurate and get all of your instances, except in the case where a quote extends across paragraphs.
|
How to remove lowercase sentence fragments from text?
|
I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner.
These are commonly referred to as speech or attribution tags, for example - he said, she said, etc.
This example shows before and after using manual deletion:
Original:
"Ah, that's perfectly true!" exclaimed Alyosha.
"Oh, do leave off playing the fool! Some idiot comes in, and you put us
to shame!" cried the girl by the window, suddenly turning to her father
with a disdainful and contemptuous air.
"Wait a little, Varvara!" cried her father, speaking peremptorily but
looking at them quite approvingly. "That's her character," he said,
addressing Alyosha again.
"Where have you been?" he asked him.
"I think," he said, "I've forgotten something... my handkerchief, I
think.... Well, even if I've not forgotten anything, let me stay a
little."
He sat down. Father stood over him.
"You sit down, too," said he.
All lower case sentence fragments manually removed:
"Ah, that's perfectly true!"
"Oh, do leave off playing the fool! Some idiot comes in, and you put us
to shame!"
"Wait a little, Varvara!" "That's her character,"
"Where have you been?"
"I think," "I've forgotten something... my handkerchief, I
think.... Well, even if I've not forgotten anything, let me stay a
little."
He sat down. Father stood over him.
"You sit down, too,"
I've changed straight quotes " to balanced and tried: ” (...)+[.]
Of course, this removes some fragments but deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression.
I realize that it may be impossible to achieve 100% accuracy but any useful expression, perl, or python script would be deeply appreciated.
Cheers,
Aaron
|
[
"Here's a Python snippet that should do:\n thetext=\"\"\"triple quoted paste of your sample text\"\"\"\n y=thetext.split('\\n')\n for line in y:\n m=re.findall('(\".*?\")',line)\n if m:\n print ' '.join(m)\n else:\n print line\n\n",
"The Text::Balanced module is what you seem to be after if you're looking to use Perl. The following should be able to extract all the quoted speech in your example (not pretty, but gets the job done).\nIt also works for Dennis' test cases.\nThe advantage of the code below is that the quotes are grouped by paragraph, which may or may not be useful for later analysis\nScript\nuse strict;\nuse warnings;\nuse Text::Balanced qw/extract_quotelike extract_multiple/;\n\nmy %quotedSpeech;\n\n{\n local $/ = '';\n while (my $text = <DATA>) { # one paragraph at a time\n\n while (my $speech = extract_multiple(\n $text,\n [sub{extract_quotelike($_[0])},],\n undef,\n 1))\n { push @{$quotedSpeech{$.}}, $speech; }\n }\n}\n\n# Print total number of paragraphs in DATA filehandle\n\nprint \"Total paragraphs: \", (sort {$a <=> $b} keys %quotedSpeech)[-1];\n\n# Print quotes grouped by paragraph:\n\nforeach my $paraNumber (sort {$a <=> $b} keys %quotedSpeech) {\n print \"\\n\\nPara \",$paraNumber;\n foreach my $speech (@{$quotedSpeech{$paraNumber}}) {\n print \"\\t\",$speech,\"\\n\";\n }\n}\n# How many quotes in paragraph 8?\nprint \"Number of quotes in Paragraph 8: \", scalar @{$quotedSpeech{8}};\n\n\n__DATA__\n\"Ah, that's perfectly true!\" exclaimed Alyosha.\n\"Oh, do leave off playing the fool!\n Some idiot comes in, and you put us to\n shame!\" cried the girl by the window,\n suddenly turning to her father with a\n disdainful and contemptuous air.\n\"Wait a little, Varvara!\" cried her\n father, speaking peremptorily but\n looking at them quite approvingly.\n \"That's her character,\" he said,\n addressing Alyosha again.\n\"Where have you been?\" he asked him.\n\"I think,\" he said, \"I've forgotten\n something... my handkerchief, I\n think.... Well, even if I've not\n forgotten anything, let me stay a\n little.\"\nHe sat down. 
Father stood over him.\n\"You sit down, too,\" said he.\nHe said, \"It doesn't always work.\"\n\"Secondly,\" I said, \"it fails for\n three quoted phrases...\" He completed\n my thought, \"with two unquoted ones.\"\nI replied, \"That's right.\" dejectedly.\n\nOutput\nTotal paragraphs: 10\n\nPara 1 \"Ah, that's perfectly true!\"\n\n\nPara 2 \"Oh, do leave off playing the fool! Some idiot comes in, and you put us\nto shame!\"\n\n\nPara 3 \"Wait a little, Varvara!\"\n \"That's her character,\"\n\n\nPara 4 \"Where have you been?\"\n\n\nPara 5 \"I think,\"\n \"I've forgotten something... my handkerchief, I think.... Well, even if\nI've not forgotten anything, let me stay a little.\"\n\n\nPara 7 \"You sit down, too,\"\n\n\nPara 8 \"It doesn't always work.\"\n\n\nPara 9 \"Secondly,\"\n \"it fails for three quoted phrases...\"\n \"with two unquoted ones.\"\n\n\nPara 10 \"That's right.\"\n\n",
"I am not entirely sure which editor are you using, if you are using something editor that supports atomic grouping (e.g. EditorPad Pro) You can use the regular expression below to do the search and replace:\nSearch for \n(\".+?\"|^[A-Z].+\\r\\n)(.(?!\"))* \nNote: you should replace \\r\\n with \\n or \\r according to your line breaks\n\nReplace with\n\\1\n\nHere is a bit explanation for the regular expression:\n\nThe first capturing group is for characters between quotes and lines starting with Capital Letters. The second capturing group is for any characters that is after a quote but before another quote.\n\n",
"This works for all cases shown in the question:\nsed -n '/\"/!{p;b}; s/\\(.*\\)\"[^\"]*/\\1\" /;s/\\(.*\"\\)\\([^\"]*\\)\\(\".*\"\\)/\\1 \\3/;p' textfile\n\nIt fails for cases such as these:\nHe said, \"It doesn't always work.\"\n\n\"Secondly,\" I said, \"it fails for three quoted phrases...\" He completed my thought, \"with two unquoted ones.\"\n\nI replied, \"That's right.\" dejectedly.\n\n",
"If I understand what you are after... passing each line through a regex like this should work...\nYou can use the perl debugger to play around with this. Hop into the perl debugger with just a perl -de 42 on the command line in linux/mac. (The \"42\" is just a valid expression - it could be anything, but why not choose the meaning of life?)\nanyways\nopen FILE, \"<\", \"filename.txt\" or die $!;\nwhile (my $line = <FILE>) {\n @fixed_text = $line =~ m{(?:(\" .+? \")) | (?:\\A .* [^\"] .* \\z)}xmsg;\n for my $new_line (@fixed_text) {\n print qq($new_line );\n }\n print qq(\\n);\n}\n\nNOTE: Sorry I had to edit it - didn't see you wanted lines without any quotes at all...\nYes, Regex and Perl is amazing. It should be 100% accurate and get all of your instances, acept in the case where a quote extends across paragraphs\n"
] |
[
3,
0,
0,
0,
0
] |
[] |
[] |
[
"awk",
"perl",
"python",
"regex"
] |
stackoverflow_0002439968_awk_perl_python_regex.txt
|
Q:
Compiler options wrong with python setup.py
I'm trying to install matplotlib on my Mac setup. I find that setup.py has inaccurate flags; in particular the -isysroot flag points to an earlier SDK.
Where does setup.py get its info and how can i fix it?
I'm on MacOS 10.5.8, XCode 3.1.2 and Python 2.6 (default config was 2.5)
A:
I'm guessing you've installed 2.6 on 10.5 using the python.org OS X installer. In that case, the flags are accurate and you should not try to change them. The python.org installers are built using the so-called 10.4u SDK and with a deployment target of 10.3, allowing one installer image to work on Mac OS X systems from 10.3.9 up through 10.6 (and possibly beyond). The most recent releases of Python 2.6 have been fixed to ensure that extension module building on OS X forces the C compiler options to match those of the underlying Python so you'll need to make sure you install the 10.4u SDK (or whatever) if necessary from the Xcode package (on the OS X release CD/DVD or downloaded from the Apple Developer Connection website). It will also make sure you are using gcc-4.0, which is also the default on 10.5.
A:
setup.py gets its info from your installation of Python, specifically the distutils package of the standard library, from which it imports at least some functionality.
distutils.ccompiler provides the abstract base class CCompiler describing your C compiler. For gcc, the typical concrete class is in distutils.unixcompiler and I think that's where you should start checking for the Mac in particular.
If it can help you to see how things are in a perfectly working Mac OS X 10.5 with the next-but-latest XCode (I can't install the latest one as it's 10.6-only) I'll be glad to share info about my installation -- but I think it would be more helpful if you told us about what Mac OS X release, what XCode release, etc etc, you have installed!-)
It's also important to know whether you're using the system-provided Python, a macports one, one installed from python.org (and, which one;-), and so forth -- each may have its own installation problems of course, but they'll tend to be different from each other!-)
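To see exactly which flags distutils will reuse, you can query Python's build-time configuration. In the Python 2.6 of the question that lives in `distutils.sysconfig`; the same data is exposed by the stdlib `sysconfig` module in later releases, shown here as a sketch:

```python
import sysconfig

# distutils reuses the compiler flags recorded when Python itself was
# built; an -isysroot pointing at an old SDK would show up here.
print(sysconfig.get_config_var('CC'))
print(sysconfig.get_config_var('CFLAGS'))
print(sysconfig.get_config_var('LDSHARED'))
```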
|
Compiler options wrong with python setup.py
|
I'm trying to install matplotlib on my Mac setup. I find that setup.py has inaccurate flags; in particular the -isysroot flag points to an earlier SDK.
Where does setup.py get its info and how can i fix it?
I'm on MacOS 10.5.8, XCode 3.1.2 and Python 2.6 (default config was 2.5)
|
[
"I'm guessing you've installed 2.6 on 10.5 using the python.org OS X installer. In that case, the flags are accurate and you should not try to change them. The python.org installers are built using the so-called 10.4u SDK and with a deployment target of 10.3, allowing one installer image to work on Mac OS X systems from 10.3.9 up through 10.6 (and possibly beyond). The most recent releases of Python 2.6 have been fixed to ensure that extension module building on OS X forces the C compiler options to match those of the underlying Python so you'll need to make sure you install the 10.4u SDK (or whatever) if necessary from the Xcode package (on the OS X release CD/DVD or downloaded from the Apple Developer Connection website). It will also make sure you are using gcc-4.0, which is also the default on 10.5.\n",
"setup.py gets its info from your installation of Python, specifically the distutils package of the standard library, from which it imports at least some functionality.\ndistutils.ccompiler provides the abstract base class CCompiler describing your C compiler. For gcc, the typical concrete class is in distutils.unixcompiler and I think that's where you should start checking for the Mac in particular.\nIf it can help you to see how things are in a perfectly working Mac OS X 10.5 with the next-but-latest XCode (I can't install the latest one as it's 10.6-only) I'll be glad to share info about my installation -- but I think it would be more helpful if you told us about what Mac OS X release, what XCode release, etc etc, you have installed!-)\nIt's also important to know whether you're using the system-provided Python, a macports one, one installed from python.org (and, which one;-), and so forth -- each may have its own installation problems of course, but they'll tend to be different from each other!-)\n"
] |
[
3,
1
] |
[] |
[] |
[
"distutils",
"gcc",
"macos",
"python",
"setup.py"
] |
stackoverflow_0002440579_distutils_gcc_macos_python_setup.py.txt
|
Q:
warning in python with MySQLdb
When I use MySQLdb I get this message:
/var/lib/python-support/python2.6/MySQLdb/__init__.py:34: DeprecationWarning: the sets module is deprecated from sets import ImmutableSet
I tried to filter the warning with
import warnings
warnings.filterwarnings("ignore", message="the sets module is deprecated from sets import ImmutableSet")
but nothing changes.
Any suggestions?
Many thanks.
A:
From the Python documentation: you can filter your warning this way, so that if other warnings are caused by another part of your code, they will still be displayed:
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
import MySQLdb
[...]
but as said by Alex Martelli, the best solution would be to update MySQLdb so that it doesn't use deprecated modules.
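As a side note on why the question's filter did not work: the `message` argument is a regular expression matched against the start of the warning text, which here is just "the sets module is deprecated" -- the "from sets import ImmutableSet" part is the offending source line printed alongside it, not part of the message. The filter also has to be installed before the import that triggers the warning. A sketch, with the DeprecationWarning triggered manually instead of by importing MySQLdb:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')  # record everything by default
    # Match only the real warning message, not the source line after it.
    warnings.filterwarnings('ignore',
                            message='the sets module is deprecated',
                            category=DeprecationWarning)
    warnings.warn('the sets module is deprecated', DeprecationWarning)
    warnings.warn('some other warning', UserWarning)

# Only the unrelated warning survives the filter.
print(len(caught))  # prints 1
```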
A:
What release of MySQLdb are you using? I think the current one (1.2.3c1) should have it fixed; see this bug (marked as fixed as of Oct 2008, 1.2 branch).
|
warning in python with MySQLdb
|
When I use MySQLdb I get this message:
/var/lib/python-support/python2.6/MySQLdb/__init__.py:34: DeprecationWarning: the sets module is deprecated from sets import ImmutableSet
I tried to filter the warning with
import warnings
warnings.filterwarnings("ignore", message="the sets module is deprecated from sets import ImmutableSet")
but nothing changes.
Any suggestions?
Many thanks.
|
[
"From python documentation: you could filter your warning this way, so that if other warnings are caused by an other part of your code, there would still be displayed:\nimport warnings\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\", DeprecationWarning)\n import MySQLdb\n[...]\n\nbut as said by Alex Martelli, the best solution would be to update MySQLdb so that it doesn't use deprecated modules.\n",
"What release of MySQLdb are you using? I think the current one (1.2.3c1) should have it fixed see this bug (marked as fixed as of Oct 2008, 1.2 branch).\n"
] |
[
4,
1
] |
[] |
[] |
[
"mysql",
"python",
"warnings"
] |
stackoverflow_0002440799_mysql_python_warnings.txt
|
Q:
pyqt QTreeWidget setItemWidget disappears after drag/drop
I'm trying to keep a widget put into a QTreeWidgetItem after a reparent (drag and drop) using QTreeWidget.setItemWidget()
But if you run the following code, the widget inside the QTreeWidgetItem disappears.
Any idea why? What code would fix this (repopulate the QTreeWidgetItem with the widget I guess?)
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class InlineEditor (QWidget):
_MUTE = 'MUTE'
def __init__ (self, parent):
QWidget.__init__ (self, parent)
self.setAutoFillBackground (True)
lo = QHBoxLayout()
lo.setSpacing(4)
self._cbFoo = QComboBox()
for x in ["ABC", "DEF", "GHI", "JKL"]:
self._cbFoo.addItem(x)
self._leBar = QLineEdit('', self)
lo.addWidget (self._cbFoo, 3)
lo.addSpacing (5)
lo.addWidget (QLabel ( 'Bar:'))
lo.addWidget (self._leBar, 3)
lo.addStretch (5)
self.setLayout (lo)
class Form (QDialog):
def __init__(self,parent=None):
QDialog.__init__(self, parent)
grid = QGridLayout ()
tree = QTreeWidget ()
# Here is the issue?
tree.setDragDropMode(QAbstractItemView.InternalMove)
tree.setColumnCount(3)
for n in range (2):
i = QTreeWidgetItem (tree) # create QTreeWidget the sub i
i.setText (0, "first" + str (n)) # set the text of the first 0
i.setText (1, "second")
for m in range (2):
j = QTreeWidgetItem(i)
j.setText (0, "child first" + str (m))
#b1 = QCheckBox("push me 0", tree) # this wont work w/ drag by itself either
#tree.setItemWidget (tree.topLevelItem(0).child(1), 1, b1)
item = InlineEditor(tree) # deal with a combination of multiple controls
tree.setItemWidget(tree.topLevelItem(0).child(1), 1, item)
grid.addWidget (tree)
self.setLayout (grid)
app = QApplication ([])
form = Form ()
form.show ()
app.exec_ ()
A:
managed to get a relatively "working" fix in by writing my own treeDropEvent... however if someone has a more elegant solution, please feel free to share. the code below will solve anyone else's headaches for drag/drop with setItemWidgets in a tree, cheers.
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class InlineEditor (QWidget):
_MUTE = 'MUTE'
def __init__ (self, parent):
QWidget.__init__ (self, parent)
self.setAutoFillBackground (True)
lo = QHBoxLayout()
lo.setSpacing(4)
self._cbFoo = QComboBox()
for x in ["ABC", "DEF", "GHI", "JKL"]:
self._cbFoo.addItem(x)
self._leBar = QLineEdit('', self)
lo.addWidget (self._cbFoo, 3)
lo.addSpacing (5)
lo.addWidget (QLabel ( 'Bar:'))
lo.addWidget (self._leBar, 3)
lo.addStretch (5)
self.setLayout (lo)
class Tree(QTreeWidget):
def __init__(self, parent=None):
QTreeWidget.__init__(self, parent)
# Here is the issue?
self.setDragDropMode(QAbstractItemView.InternalMove)
self.installEventFilter(self)
self.setColumnCount(3)
self.dropEvent = self.treeDropEvent
for n in range (2):
i = QTreeWidgetItem (self) # create QTreeWidget the sub i
i.setText (0, "first" + str (n)) # set the text of the first 0
i.setText (1, "second")
for m in range (2):
j = QTreeWidgetItem(i)
j.setText (0, "child first" + str (m))
self.item = InlineEditor(self) # deal with a combination of multiple controls
self.setItemWidget(self.topLevelItem(0).child(1), 1, self.item)
def treeDropEvent(self, event):
dragItem = self.currentItem()
QTreeWidget.dropEvent(self, event)
# rebuild widget (grabbing it doesnt seem to work from self.itemWidget?)
self.item = InlineEditor(self)
self.setItemWidget(dragItem, 1, self.item)
class Form (QDialog):
def __init__(self,parent=None):
QDialog.__init__(self, parent)
grid = QGridLayout ()
tree = Tree ()
grid.addWidget (tree)
self.setLayout (grid)
app = QApplication ([])
form = Form ()
form.show ()
app.exec_ ()
A:
I would love to know why as well, it happens to me too.
It seems the "underlying C/C++ object has been deleted" for the itemWidget. That's what I get anyway when I try to setItemWidget() again after the widget disappears, in the hope that that would fix it.
I put in an event to get called when the QTreeWidgetItem is dropped but it seems the object gets deleted as soon as it's dropped
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class InlineEditor (QWidget):
_MUTE = 'MUTE'
def __init__ (self, parent):
QWidget.__init__ (self, parent)
self.setAutoFillBackground (True)
lo = QHBoxLayout()
lo.setSpacing(4)
self._cbFoo = QComboBox()
for x in ["ABC", "DEF", "GHI", "JKL"]:
self._cbFoo.addItem(x)
self._leBar = QLineEdit('', self)
lo.addWidget (self._cbFoo, 3)
lo.addSpacing (5)
lo.addWidget (QLabel ( 'Bar:'))
lo.addWidget (self._leBar, 3)
lo.addStretch (5)
self.setLayout (lo)
class Tree(QTreeWidget):
def __init__(self, parent=None):
QTreeWidget.__init__(self, parent)
# Here is the issue?
self.setDragDropMode(QAbstractItemView.InternalMove)
self.installEventFilter(self)
self.setColumnCount(3)
for n in range (2):
i = QTreeWidgetItem (self) # create QTreeWidget the sub i
i.setText (0, "first" + str (n)) # set the text of the first 0
i.setText (1, "second")
for m in range (2):
j = QTreeWidgetItem(i)
j.setText (0, "child first" + str (m))
#b1 = QCheckBox("push me 0", tree) # this wont work w/ drag by itself either
#tree.setItemWidget (tree.topLevelItem(0).child(1), 1, b1)
self.item = InlineEditor(self) # deal with a combination of multiple controls
self.setItemWidget(self.topLevelItem(0).child(1), 1, self.item)
def eventFilter(self, sender, event):
if event.type() == QEvent.ChildRemoved:
print self.item._cbFoo # looks like this remains
print self.item._cbFoo.currentText() # CRASH! but the data is gone
#self.setItemWidget(self.topLevelItem(0).child(1), 1, self.item)
return False
class Form (QDialog):
def __init__(self,parent=None):
QDialog.__init__(self, parent)
grid = QGridLayout ()
tree = Tree ()
grid.addWidget (tree)
self.setLayout (grid)
app = QApplication ([])
form = Form ()
form.show ()
app.exec_ ()
|
pyqt QTreeWidget setItemWidget disappears after drag/drop
|
I'm trying to keep a widget put into a QTreeWidgetItem after a reparent (drag and drop) using QTreeWidget.setItemWidget()
But if you run the following code, the widget inside the QTreeWidgetItem disappears.
Any idea why? What code would fix this (repopulate the QTreeWidgetItem with the widget I guess?)
from PyQt4.QtCore import *
from PyQt4.QtGui import *
class InlineEditor (QWidget):
_MUTE = 'MUTE'
def __init__ (self, parent):
QWidget.__init__ (self, parent)
self.setAutoFillBackground (True)
lo = QHBoxLayout()
lo.setSpacing(4)
self._cbFoo = QComboBox()
for x in ["ABC", "DEF", "GHI", "JKL"]:
self._cbFoo.addItem(x)
self._leBar = QLineEdit('', self)
lo.addWidget (self._cbFoo, 3)
lo.addSpacing (5)
lo.addWidget (QLabel ( 'Bar:'))
lo.addWidget (self._leBar, 3)
lo.addStretch (5)
self.setLayout (lo)
class Form (QDialog):
def __init__(self,parent=None):
QDialog.__init__(self, parent)
grid = QGridLayout ()
tree = QTreeWidget ()
# Here is the issue?
tree.setDragDropMode(QAbstractItemView.InternalMove)
tree.setColumnCount(3)
for n in range (2):
i = QTreeWidgetItem (tree) # create QTreeWidget the sub i
i.setText (0, "first" + str (n)) # set the text of the first 0
i.setText (1, "second")
for m in range (2):
j = QTreeWidgetItem(i)
j.setText (0, "child first" + str (m))
#b1 = QCheckBox("push me 0", tree) # this wont work w/ drag by itself either
#tree.setItemWidget (tree.topLevelItem(0).child(1), 1, b1)
item = InlineEditor(tree) # deal with a combination of multiple controls
tree.setItemWidget(tree.topLevelItem(0).child(1), 1, item)
grid.addWidget (tree)
self.setLayout (grid)
app = QApplication ([])
form = Form ()
form.show ()
app.exec_ ()
|
[
"managed to get a relatively \"working\" fix in by writing my own treeDropEvent... however if someone has a more elegant solution, please feel free to share. the code below will solve anyone else's headaches for drag/drop with setItemWidgets in a tree, cheers.\nfrom PyQt4.QtCore import *\nfrom PyQt4.QtGui import *\n\nclass InlineEditor (QWidget):\n _MUTE = 'MUTE'\n\n def __init__ (self, parent):\n QWidget.__init__ (self, parent)\n\n self.setAutoFillBackground (True)\n lo = QHBoxLayout()\n lo.setSpacing(4)\n\n self._cbFoo = QComboBox()\n for x in [\"ABC\", \"DEF\", \"GHI\", \"JKL\"]:\n self._cbFoo.addItem(x)\n\n self._leBar = QLineEdit('', self)\n lo.addWidget (self._cbFoo, 3)\n lo.addSpacing (5)\n lo.addWidget (QLabel ( 'Bar:'))\n lo.addWidget (self._leBar, 3)\n lo.addStretch (5)\n self.setLayout (lo)\n\nclass Tree(QTreeWidget):\n def __init__(self, parent=None):\n QTreeWidget.__init__(self, parent)\n\n # Here is the issue?\n self.setDragDropMode(QAbstractItemView.InternalMove)\n self.installEventFilter(self)\n self.setColumnCount(3)\n self.dropEvent = self.treeDropEvent\n\n for n in range (2):\n i = QTreeWidgetItem (self) # create QTreeWidget the sub i\n i.setText (0, \"first\" + str (n)) # set the text of the first 0\n i.setText (1, \"second\")\n for m in range (2):\n j = QTreeWidgetItem(i)\n j.setText (0, \"child first\" + str (m))\n\n self.item = InlineEditor(self) # deal with a combination of multiple controls\n self.setItemWidget(self.topLevelItem(0).child(1), 1, self.item)\n\n def treeDropEvent(self, event):\n dragItem = self.currentItem()\n\n QTreeWidget.dropEvent(self, event)\n # rebuild widget (grabbing it doesnt seem to work from self.itemWidget?)\n self.item = InlineEditor(self) \n self.setItemWidget(dragItem, 1, self.item)\n\nclass Form (QDialog):\n def __init__(self,parent=None):\n QDialog.__init__(self, parent)\n grid = QGridLayout ()\n tree = Tree ()\n grid.addWidget (tree)\n self.setLayout (grid)\n\napp = QApplication ([])\nform = Form 
()\nform.show ()\napp.exec_ ()\n\n",
"I would love to know why as well, it happens to me too.\nIt seems the \"underlying C/C++ object has been deleted\" for the itemWidget. Thats what I get anyway when I try to setItemWidget() again after the widget disappears in the hope that that would fix it.\nI put in an event to get called when the QTreeWidgetItem is dropped but it seems the object gets deleted as soon as it's dropped\nfrom PyQt4.QtCore import *\nfrom PyQt4.QtGui import *\n\n\nclass InlineEditor (QWidget):\n _MUTE = 'MUTE'\n\n def __init__ (self, parent):\n QWidget.__init__ (self, parent)\n\n self.setAutoFillBackground (True)\n lo = QHBoxLayout()\n lo.setSpacing(4)\n\n self._cbFoo = QComboBox()\n for x in [\"ABC\", \"DEF\", \"GHI\", \"JKL\"]:\n self._cbFoo.addItem(x)\n\n self._leBar = QLineEdit('', self)\n lo.addWidget (self._cbFoo, 3)\n lo.addSpacing (5)\n lo.addWidget (QLabel ( 'Bar:'))\n lo.addWidget (self._leBar, 3)\n lo.addStretch (5)\n self.setLayout (lo)\n\nclass Tree(QTreeWidget):\n def __init__(self, parent=None):\n QTreeWidget.__init__(self, parent)\n\n # Here is the issue?\n self.setDragDropMode(QAbstractItemView.InternalMove)\n self.installEventFilter(self)\n self.setColumnCount(3)\n\n for n in range (2):\n i = QTreeWidgetItem (self) # create QTreeWidget the sub i\n i.setText (0, \"first\" + str (n)) # set the text of the first 0\n i.setText (1, \"second\")\n for m in range (2):\n j = QTreeWidgetItem(i)\n j.setText (0, \"child first\" + str (m))\n\n #b1 = QCheckBox(\"push me 0\", tree) # this wont work w/ drag by itself either\n #tree.setItemWidget (tree.topLevelItem(0).child(1), 1, b1)\n\n self.item = InlineEditor(self) # deal with a combination of multiple controls\n self.setItemWidget(self.topLevelItem(0).child(1), 1, self.item)\n\n def eventFilter(self, sender, event):\n if event.type() == QEvent.ChildRemoved:\n print self.item._cbFoo # looks like this remains\n print self.item._cbFoo.currentText() # CRASH! 
but the data is gone \n #self.setItemWidget(self.topLevelItem(0).child(1), 1, self.item)\n return False\n\n\nclass Form (QDialog):\n def __init__(self,parent=None):\n QDialog.__init__(self, parent)\n\n grid = QGridLayout ()\n tree = Tree ()\n\n grid.addWidget (tree)\n self.setLayout (grid)\n\napp = QApplication ([])\nform = Form ()\nform.show ()\napp.exec_ ()\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"pyqt",
"python",
"qt",
"qtreewidget",
"user_interface"
] |
stackoverflow_0002383212_pyqt_python_qt_qtreewidget_user_interface.txt
|
Q:
Embed Python interpreter in a Python application
I'm looking for a way to ship the Python interpreter with my application (also written in Python), so that it doesn't need to have Python installed on the machine.
I searched Google and found a bunch of results about how to embed the Python interpreter in applications written in various languages, but nothing for applications written in Python itself... I don't need to "hide" my code or make a binary like cx_freeze does, I just don't want my users to have to install Python to use my app, that's all.
A:
For distribution on Windows machines, look into py2exe
py2exe is a Python Distutils extension which converts Python scripts
into executable Windows programs, able to run without requiring a
Python installation
For the MacIntosh, there is py2app (but I'm not familiar with it)
And for both Windows and Linux, there's bbfreeze or also pyinstaller
A:
You need some sort of executable in order to start Python. May as well be the one your app has been frozen into.
The alternative is to copy the executable, library, and pieces of the stdlib that you need into a private directory and invoke that against your app.
A:
Making a frozen binary using a utility like cx_freeze or py2exe is probably the easiest way to do this. That way you only need to distribute the executable. I know that you might prefer not to distribute a binary, but if that is a concern you could always give users the option to download the source and run from an interpreter.
A:
For Windows: py2exe
For Linux: Freeze
Full disclosure: I've only read about these, never used them. Perhaps some who has can comment?
A:
Have a look at http://www.python-packager.com, a free web service for building redistributable Python binaries based on PyInstaller. I've used it to build apps for Windows and it is very easy to use and also works with GUI apps.
|
Embed Python interpreter in a Python application
|
I'm looking for a way to ship the Python interpreter with my application (also written in Python), so that it doesn't need to have Python installed on the machine.
I searched Google and found a bunch of results about how to embed the Python interpreter in applications written in various languages, but nothing for applications written in Python itself... I don't need to "hide" my code or make a binary like cx_freeze does, I just don't want my users to have to install Python to use my app, that's all.
|
[
"For distribution on Windows machines, look into py2exe\npy2exe is a Python Distutils extension which converts Python scripts \ninto executable Windows programs, able to run without requiring a \nPython installation\n\nFor the MacIntosh, there is py2app (but I'm not familiar with it)\nAnd for both Windows and Linux, there's bbfreeze or also pyinstaller\n",
"You need some sort of executable in order to start Python. May as well be the one your app has been frozen into.\nThe alternative is to copy the executable, library, and pieces of the stdlib that you need into a private directory and invoke that against your app.\n",
"Making a frozen binary using a utility like cx_freeze or py2exe is probably the easiest way to do this. That way you only need to distribute the executable. I know that you might prefer not to distribute a binary, but if that is a concern you could always give users the option to download the source and run from an interpreter.\n",
"For Windows: py2exe\nFor Linux: Freeze\nFull disclosure: I've only read about these, never used them. Perhaps some who has can comment?\n",
"Have a look at http://www.python-packager.com, it is a free webservice for building redistrutable python binaries based on pyinstaller. I've used it to build apps for Windows and it is very easy to use and also works with GUI apps.\n"
] |
[
9,
2,
2,
0,
0
] |
[] |
[] |
[
"embedding",
"interpreter",
"python"
] |
stackoverflow_0002441172_embedding_interpreter_python.txt
|
Q:
Printing Stdout In Command Line App Without Overwriting Pending User Input
In a basic Unix-shell app, how would you print to stdout without disturbing any pending user input?
e.g. Below is a simple Python app that echoes user input. A thread running in the background prints a counter every 1 second.
import threading, time
class MyThread( threading.Thread ):
running = False
def run(self):
self.running = True
i = 0
while self.running:
i += 1
time.sleep(1)
print i
t = MyThread()
t.daemon = True
t.start()
try:
while 1:
inp = raw_input('command> ')
print inp
finally:
t.running = False
Note how the thread mangles the displayed user input as they type it (e.g. hell1o wo2rld3). How would you work around that, so that the shell writes a new line while preserving the line the user's currently typing on?
A:
You have to port your code to some way of controlling the terminal that is slightly better than a teletype -- e.g. with the curses module in Python's standard library, or other ways to move the cursor away before emitting output, then move it back to where the user is busy inputting stuff.
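A lighter-weight alternative to full curses is ANSI escape codes: erase the partially typed line, print the message, then redraw the prompt and the user's buffer. A sketch of just the string construction (the `interleave` name and the prompt/buffer plumbing are illustrative; a real app would emit this from the background thread and track the typed buffer itself):

```python
def interleave(message, prompt, typed):
    # '\r' returns the cursor to column 0; ESC[K erases from the cursor
    # to the end of the line, wiping the half-typed input.
    ERASE_LINE = '\r\x1b[K'
    # Print the message on its own line, then redraw prompt + buffer.
    return ERASE_LINE + message + '\n' + prompt + typed

out = interleave('5', 'command> ', 'hell')
print(repr(out))  # → '\r\x1b[K5\ncommand> hell'
```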
A:
You could defer writing output until just after you receive some input. For anything more advanced you'll have to use Alex's answer
import threading, time
output=[]
class MyThread( threading.Thread ):
running = False
def run(self):
self.running = True
i = 0
while self.running:
i += 1
time.sleep(1)
output.append(str(i))
t = MyThread()
t.daemon = True
t.start()
try:
while 1:
inp = raw_input('command> ')
while output:
print output.pop(0)
finally:
t.running = False
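A note on the handoff above: appending to and popping from a plain list works here mostly thanks to CPython's GIL; the standard Queue module makes the producer/consumer intent explicit and is thread-safe by design. A minimal sketch (the `worker` function is illustrative, not part of the original app):

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

output = queue.Queue()      # thread-safe FIFO shared with the background thread

def worker(n):
    # Producer side: what the counter thread would do instead of printing.
    for i in range(n):
        output.put(i)

t = threading.Thread(target=worker, args=(3,))
t.start()
t.join()

# Consumer side: drain pending messages, e.g. right after each raw_input().
drained = []
while not output.empty():
    drained.append(output.get())
print(drained)  # → [0, 1, 2]
```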
|
Printing Stdout In Command Line App Without Overwriting Pending User Input
|
In a basic Unix-shell app, how would you print to stdout without disturbing any pending user input?
e.g. Below is a simple Python app that echoes user input. A thread running in the background prints a counter every 1 second.
import threading, time
class MyThread( threading.Thread ):
running = False
def run(self):
self.running = True
i = 0
while self.running:
i += 1
time.sleep(1)
print i
t = MyThread()
t.daemon = True
t.start()
try:
while 1:
inp = raw_input('command> ')
print inp
finally:
t.running = False
Note how the thread mangles the displayed user input as they type it (e.g. hell1o wo2rld3). How would you work around that, so that the shell writes a new line while preserving the line the user's currently typing on?
|
[
"You have to port your code to some way of controlling the terminal as slightly better than a teletype -- e.g. with the curses module in Python's standard library, or other ways to move the cursor away before emitting output, then move it back to where the user's busy inputting stuff.\n",
"You could defer writing output until just after you receive some input. For anything more advanced you'll have to use Alex's answer\nimport threading, time\noutput=[]\nclass MyThread( threading.Thread ):\n running = False\n def run(self):\n self.running = True\n i = 0\n while self.running:\n i += 1\n time.sleep(1)\n output.append(str(i))\n\nt = MyThread()\nt.daemon = True\nt.start()\ntry:\n while 1:\n inp = raw_input('command> ')\n while output:\n print output.pop(0)\nfinally:\n t.running = False\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"bash",
"python",
"scripting",
"shell",
"stdout"
] |
stackoverflow_0002440387_bash_python_scripting_shell_stdout.txt
|
Q:
How to implement Comet server side with Python?
I once tried to implement Comet in PHP. Soon, I found that PHP is not suitable for Comet, since each HTTP request will occupy one process/thread. As a result, it doesn't scale well.
I just installed mod_python in my XAMPP. I thought it would be easy to implement Comet with Python asynchronous programming. But still cannot get a clue how to implement it.
Is there any idea how to implement Comet in mod_python?
A:
First of all, I'm not an async expert at all; I just investigated the topic once.
IMHO if you're using XAMPP then you're losing the possibility of doing long polling, because Apache uses a thread/process (depending on configuration) for each request.
What you need is a non-blocking web server, like Tornado, that allows splitting a request into two parts, of which the second one is fired on some event, while meanwhile the server can accept subsequent inbound requests.
Example from Tornado documentation /license/:
class MainHandler(tornado.web.RequestHandler):
@tornado.web.asynchronous
def get(self):
http = tornado.httpclient.AsyncHTTPClient()
http.fetch("http://friendfeed-api.com/v2/feed/bret",
callback=self.async_callback(self.on_response))
def on_response(self, response):
if response.error: raise tornado.web.HTTPError(500)
json = tornado.escape.json_decode(response.body)
self.write("Fetched " + str(len(json["entries"])) + " entries "
"from the FriendFeed API")
self.finish()
-- as far as I know this is not possible under Apache - in which fetch is regular part of request handler, which of course block until it's complete - so what you end with is frozen thread or process.
Another famous library for doing non-blocking services in Python is Twisted, but I don't know much about it, only that it also is able to help you in handling a lot of connections with only one thread/process.
A:
I'm not sure if you came across this question, but the question asked is pretty similar and there seem to be some good answers there. HTH.
|
How to implement Comet server side with Python?
|
I once tried to implement Comet in PHP. Soon, I found that PHP is not suitable for Comet, since each HTTP request will occupy one process/thread. As a result, it doesn't scale well.
I just installed mod_python in my XAMPP. I thought it would be easy to implement Comet with Python asynchronous programming. But still cannot get a clue how to implement it.
Is there any idea how to implement Comet in mod_python?
|
[
"First of all, I'm not async expert at all, I just investigated the topic once. \nIMHO if you're using XAMPP then you're loosing the posibility of doing long polling because Apache uses thread/processes (depending on configuration) for each request.\nWhat you need, is non-blocking web server, like Tornado, that allows splitting requests into two parts, of which the second one is fired on some event, but meanwhile server can accept subsequent inbound requests.\nExample from Tornado documentation /license/:\nclass MainHandler(tornado.web.RequestHandler):\n @tornado.web.asynchronous\n def get(self):\n http = tornado.httpclient.AsyncHTTPClient()\n http.fetch(\"http://friendfeed-api.com/v2/feed/bret\",\n callback=self.async_callback(self.on_response))\n\n def on_response(self, response):\n if response.error: raise tornado.web.HTTPError(500)\n json = tornado.escape.json_decode(response.body)\n self.write(\"Fetched \" + str(len(json[\"entries\"])) + \" entries \"\n \"from the FriendFeed API\")\n self.finish()\n\n-- as far as I know this is not possible under Apache - in which fetch is regular part of request handler, which of course block until it's complete - so what you end with is frozen thread or process.\nAnother famous library for doing non-blocking services in Python is Twisted, but I don't know much about it, only that it also is able to help you in handling a lot of connections with only one thread/process.\n",
"I'm not sure if you came across this question, but the question asked is pretty similar and there seem to be some good answers there. HTH.\n"
] |
[
8,
0
] |
[] |
[] |
[
"comet",
"python"
] |
stackoverflow_0002441533_comet_python.txt
|
Q:
Django database - how to add this column in raw SQL
Suppose I have my models set up already.
class books(models.Model):
title = models.CharField...
ISBN = models.Integer...
What if I want to add this column to my table?
user = models.ForeignKey(User, unique=True)
How would I write the raw SQL in my database so that this column works?
A:
You should investigate a tool like South, which does all this for you.
However the SQL would be something like (assuming you're using MySQL):
ALTER TABLE `appname_books` ADD COLUMN `user_id` INTEGER NOT NULL UNIQUE;
ALTER TABLE `appname_books` ADD CONSTRAINT `user_id_refs_user` FOREIGN KEY (`user_id`) REFERENCES auth_user (`id`);
|
Django database - how to add this column in raw SQL
|
Suppose I have my models set up already.
class books(models.Model):
title = models.CharField...
ISBN = models.Integer...
What if I want to add this column to my table?
user = models.ForeignKey(User, unique=True)
How would I write the raw SQL in my database so that this column works?
|
[
"You should investigate a tool like South, which does all this for you.\nHowever the SQL would be something like (assuming you're using MySQL):\nALTER TABLE `appname_books` ADD COLUMN `user_id` INTEGER NOT NULL UNIQUE;\nALTER TABLE `appname_books` ADD CONSTRAINT `user_id_refs_user` FOREIGN KEY (`user_id`) REFERENCES auth_user (`id`);\n\n"
] |
[
5
] |
[] |
[] |
[
"database",
"django",
"mysql",
"python"
] |
stackoverflow_0002441771_database_django_mysql_python.txt
|
Q:
Is there a python module compatible with Google App Engine's new "Tasks"
I'm writing a Python application, that I want to later migrate to GAE.
The new "Task Queues" API fulfills a requirement of my app, and I want to simulate it locally until I have the time to migrate the whole thing to GAE.
Does anyone know of a compatible module I can run locally?
A:
Given the explicitly experimental nature of the thing, there's certainly nothing compatible in existence at this time. And obviously even if there were, Google pretty much says "we're going to change the API!" in their warning about it, so anything compatible now would not be compatible when the time comes to migrate.
A:
Since my original question, I've found two projects that aim to remove the vendor lock-in of GAE:
AppScale - http://code.google.com/p/appscale/
TyphoonAE - http://code.google.com/p/typhoonae/
There is also the GAE Testbed for easier testing:
http://code.google.com/p/gae-testbed/
|
Is there a python module compatible with Google App Engine's new "Tasks"
|
I'm writing a Python application, that I want to later migrate to GAE.
The new "Task Queues" API fulfills a requirement of my app, and I want to simulate it locally until I have the time to migrate the whole thing to GAE.
Does anyone know of a compatible module I can run locally?
|
[
"Given the explicitly experimental nature of the thing, there's certainly nothing compatible in existence at this time. And obviously even if there were, Google pretty much says \"we're going to change the API!\" in their warning about it, so anything compatible now would not be compatible when the time comes to migrate.\n",
"Since my original question, I've found two projects that aim to release the vendor-lock-in of GAE:\n\nAppScale - http://code.google.com/p/appscale/\nTyphoonAE - http://code.google.com/p/typhoonae/\n\nThere is also the GAE Testbed for easier testing:\nhttp://code.google.com/p/gae-testbed/\n"
] |
[
1,
0
] |
[] |
[] |
[
"google_app_engine",
"migration",
"python",
"task"
] |
stackoverflow_0001068690_google_app_engine_migration_python_task.txt
|
Q:
How to print a range with decimal points in Python?
I can print a range of numbers easily using range, but is it possible to print a range with 1 decimal place, from -10 to 10?
e.g.
-10.0, -9.9, -9.8, all the way through to +10?
A:
[i/10.0 for i in range(-100,101)]
(The .0 is not needed in Python 3.x)
A:
There's a recipe on ActiveState that implements a floating-point range. In your example, you can use it like
frange(-10, 10.01, 0.1)
Note that this won't generate 1 decimal place on most architectures because of the floating-point representation. If you want to be exact, use the decimal module.
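A sketch of that exact variant using the decimal module (the `drange` name and the `str()` round-trip — which turns float literals like 0.1 into exact decimals — are choices of this sketch, not part of the recipe):

```python
from decimal import Decimal

def drange(start, stop, step):
    # str() first, so 0.1 becomes the exact Decimal('0.1') rather than
    # the nearest binary float.
    cur = Decimal(str(start))
    stop = Decimal(str(stop))
    step = Decimal(str(step))
    while cur < stop:
        yield cur
        cur += step

print([str(x) for x in drange(-10, -9.7, 0.1)])  # → ['-10', '-9.9', '-9.8']
```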
A:
numpy.arange might serve your purpose well.
A:
print(', '.join('%.1f' % (x/10.0) for x in range(-100, 101)))
should do exactly what you ask (the printing part, that is!) in any level of Python (a pretty long line of course -- a few hundred characters!). You could omit the outer parentheses in Python 2, omit the .0 bit in Python 3, etc., but coding as shown will work across Python releases.
A:
Define a simple function, like:
def frange(fmin, fmax, fleap):
cur=fmin
while cur<fmax:
yield cur
cur+=fleap
Call it using generators, e.g.:
my_float_range=[i for i in frange(-10,10,.1)]
Some sanity checks (for example, that fmin<fmax) can be added to the function body.
|
How to print a range with decimal points in Python?
|
I can print a range of numbers easily using range, but is it possible to print a range with 1 decimal place, from -10 to 10?
e.g.
-10.0, -9.9, -9.8, all the way through to +10?
|
[
"[i/10.0 for i in range(-100,101)]\n\n(The .0 is not needed in Python 3.x)\n",
"There's a recipe on ActiveState that implements a floating-point range. In your example, you can use it like\nfrange(-10, 10.01, 0.1)\n\nNote that this won't generate 1 decimal place on most architectures because of the floating-point representation. If you want to be exact, use the decimal module.\n",
"numpy.arange might serve your purpose well.\n",
"print(', '.join('%.1f' % x/10.0 for x in range(-100, 101)))\n\nshould do exactly what you ask (the printing part, that is!) in any level of Python (a pretty long line of course -- a few hundred characters!-). You could omit the outer parenthese in Python 2, could omit the .0 bit in Python 3, etc, but coding as shown will work across Python releases.\n",
"Define a simple function, like:\ndef frange(fmin, fmax, fleap):\n cur=fmin\n while cur<fmax:\n yield cur\n cur+=fleap\n\nCall it using generators, e.g.: \nmy_float_range=[i for i in frange(-10,10,.1)]\n\nSome sanity checks (for example, that fmin<fmax) can be added to the function body.\n"
] |
[
7,
2,
0,
0,
0
] |
[] |
[] |
[
"python",
"range"
] |
stackoverflow_0002439837_python_range.txt
|
Q:
Accessing Class Variables from a List in a nice way in Python
Suppose I have a list X = [a, b, c] where a, b, c are instances of the same class C.
Now, all these instances a,b,c, have a variable called v, a.v, b.v, c.v ...
I simply want a list Y = [a.v, b.v, c.v]
Is there a nice command to do this?
The best way I can think of is:
Y = []
for i in X:
Y.append(i.v)
But it doesn't seem very elegant ~ since this needs to be repeated for any given "v"
Any suggestions? I couldn't figure out a way to use "map" to do this.
A:
That should work:
Y = [x.v for x in X]
A:
The list comprehension is the way to go.
But you also said you don't know how to use map to do it. Now, I would not recommend to use map for this at all, but it can be done:
map( lambda x: x.v, X)
that is, you create an anonymous function (a lambda) to return that attribute.
If you prefer to use the python library methods (know thy tools...) then something like:
map(operator.attrgetter("v"),X)
should also work.
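For reference, all three spellings can be checked side by side with a tiny stand-in class (the class C below mirrors the one described in the question):

```python
import operator

class C(object):
    def __init__(self, v):
        self.v = v

X = [C(1), C(2), C(3)]

# The three equivalent ways of pulling out the .v attribute:
comprehension = [x.v for x in X]
via_lambda = list(map(lambda x: x.v, X))
via_attrgetter = list(map(operator.attrgetter('v'), X))

print(comprehension, via_lambda, via_attrgetter)  # each is [1, 2, 3]
```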
A:
I would do a list comprehension:
Y = [ i.v for i in X ]
It is shorter and more convenient.
|
Accessing Class Variables from a List in a nice way in Python
|
Suppose I have a list X = [a, b, c] where a, b, c are instances of the same class C.
Now, all these instances a,b,c, have a variable called v, a.v, b.v, c.v ...
I simply want a list Y = [a.v, b.v, c.v]
Is there a nice command to do this?
The best way I can think of is:
Y = []
for i in X:
Y.append(i.v)
But it doesn't seem very elegant ~ since this needs to be repeated for any given "v"
Any suggestions? I couldn't figure out a way to use "map" to do this.
|
[
"That should work:\nY = [x.v for x in X]\n\n",
"The list comprehension is the way to go. \nBut you also said you don't know how to use map to do it. Now, I would not recommend to use map for this at all, but it can be done:\nmap( lambda x: x.v, X)\n\nthat is, you create an anonymous function (a lambda) to return that attribute. \nIf you prefer to use the python library methods (know thy tools...) then something like:\nmap(operator.attrgetter(\"v\"),X)\n\nshould also work.\n",
"I would do a list comprehension:\nY = [ i.v for i in X ]\n\nIt is shorter and more conveniant.\n"
] |
[
11,
5,
2
] |
[] |
[] |
[
"class",
"list",
"map",
"methods",
"python"
] |
stackoverflow_0002442000_class_list_map_methods_python.txt
|
Q:
Weird characters in exported csv files when converting
I came across a problem I cannot solve on my own concerning the downloadable csv formatted trends data files from Google Insights for Search.
I'm too lazy to reformat the files I4S gives me manually, which means: extracting the section with the actual trends data and reformatting the columns so that I can use it with a modelling program I'm doing for school.
So I wrote a tiny script that should do the work for me: take a file, do some magic and give me a new file in the proper format.
What it's supposed to do is read the file contents, extract the trends section, split it by newlines, split each line and then reorder the columns and maybe reformat them.
When looking at an untouched I4S csv file it looks normal, containing CR LF characters at line breaks (maybe that's only because I'm using Windows).
When just reading the contents and then writing them to a new file using the script, weird Asian characters appear between CR and LF. I tried the script with a manually written similar-looking file and even tried a csv file from Google Trends, and it works fine.
I use Python and the script (snippet) I used for the following example
looks like this:
# Read from an input file
file = open(file,"r")
contents = file.read()
file.close()
cfile = open("m.log","w+")
cfile.write(contents)
cfile.close()
Does anybody have an idea why those characters appear? Thank you for your help!
I'll give you an example:
First few lines of I4S csv file:
Web Search Interest: foobar
Worldwide; 2004 - present
Interest over time
Week foobar
2004-01-04 - 2004-01-10 44
2004-01-11 - 2004-01-17 44
2004-01-18 - 2004-01-24 37
2004-01-25 - 2004-01-31 40
2004-02-01 - 2004-02-07 49
2004-02-08 - 2004-02-14 51
2004-02-15 - 2004-02-21 45
2004-02-22 - 2004-02-28 61
2004-02-29 - 2004-03-06 51
2004-03-07 - 2004-03-13 48
2004-03-14 - 2004-03-20 50
2004-03-21 - 2004-03-27 56
2004-03-28 - 2004-04-03 59
Output file when reading and writing contents:
Web Search Interest: foobar
圀漀爀氀搀眀椀搀攀㬀 ㈀ 㐀 ⴀ 瀀爀攀猀攀渀琀ഀഀ
䤀渀琀攀爀攀猀琀 漀瘀攀爀 琀椀洀攀ഀഀ
Week foobar
㈀ 㐀ⴀ ⴀ 㐀 ⴀ ㈀ 㐀ⴀ ⴀ ऀ㐀㐀ഀഀ
2004-01-11 - 2004-01-17 44
㈀ 㐀ⴀ ⴀ㠀 ⴀ ㈀ 㐀ⴀ ⴀ㈀㐀ऀ㌀㜀ഀഀ
2004-01-25 - 2004-01-31 40
㈀ 㐀ⴀ ㈀ⴀ ⴀ ㈀ 㐀ⴀ ㈀ⴀ 㜀ऀ㐀㤀ഀഀ
2004-02-08 - 2004-02-14 51
㈀ 㐀ⴀ ㈀ⴀ㔀 ⴀ ㈀ 㐀ⴀ ㈀ⴀ㈀ऀ㐀㔀ഀഀ
2004-02-22 - 2004-02-28 61
㈀ 㐀ⴀ ㈀ⴀ㈀㤀 ⴀ ㈀ 㐀ⴀ ㌀ⴀ 㘀ऀ㔀ഀഀ
2004-03-07 - 2004-03-13 48
㈀ 㐀ⴀ ㌀ⴀ㐀 ⴀ ㈀ 㐀ⴀ ㌀ⴀ㈀ ऀ㔀 ഀഀ
2004-03-21 - 2004-03-27 56
㈀ 㐀ⴀ ㌀ⴀ㈀㠀 ⴀ ㈀ 㐀ⴀ 㐀ⴀ ㌀ऀ㔀㤀ഀഀ
2004-04-04 - 2004-04-10 69
㈀ 㐀ⴀ 㐀ⴀ ⴀ ㈀ 㐀ⴀ 㐀ⴀ㜀ऀ㘀㔀ഀഀ
2004-04-18 - 2004-04-24 51
㈀ 㐀ⴀ 㐀ⴀ㈀㔀 ⴀ ㈀ 㐀ⴀ 㔀ⴀ ऀ㔀ഀഀ
2004-05-02 - 2004-05-08 56
㈀ 㐀ⴀ 㔀ⴀ 㤀 ⴀ ㈀ 㐀ⴀ 㔀ⴀ㔀ऀ㔀㈀ഀഀ
2004-05-16 - 2004-05-22 54
㈀ 㐀ⴀ 㔀ⴀ㈀㌀ ⴀ ㈀ 㐀ⴀ 㔀ⴀ㈀㤀ऀ㔀㔀ഀഀ
2004-05-30 - 2004-06-05 74
㈀ 㐀ⴀ 㘀ⴀ 㘀 ⴀ ㈀ 㐀ⴀ 㘀ⴀ㈀ऀ㔀㜀ഀഀ
2004-06-13 - 2004-06-19 50
㈀ 㐀ⴀ 㘀ⴀ㈀ ⴀ ㈀ 㐀ⴀ 㘀ⴀ㈀㘀ऀ㔀㐀ഀഀ
2004-06-27 - 2004-07-03 58
㈀ 㐀ⴀ 㜀ⴀ 㐀 ⴀ ㈀ 㐀ⴀ 㜀ⴀ ऀ㔀㤀ഀഀ
2004-07-11 - 2004-07-17 59
㈀ 㐀ⴀ 㜀ⴀ㠀 ⴀ ㈀ 㐀ⴀ 㜀ⴀ㈀㐀ऀ㘀㈀ഀഀ
A:
repr() is your friend (except on Python 3.X; use ascii() instead).
prompt>\python26\python -c "print repr(open('report.csv','rb').read()[:300])"
'\xff\xfeW\x00e\x00b\x00 \x00S\x00e\x00a\x00r\x00c\x00h\x00 \x00I\x00n\x00t\x00e
\x00r\x00e\x00s\x00t\x00:\x00 \x00f\x00o\x00o\x00b\x00a\x00r\x00\r\x00\n\x00W\x0
[snip]
x001\x007\x00\t\x004\x004\x00\r\x00\n\x002\x000\x00'
Sure looks like a UTF-16LE BOM (U+FEFF) in the 1st two bytes to me.
Notepad.* are NOT your friends. UTF-16 should not be referred to as "UCS-2" or "Unicode".
The following should help with what to do next:
>>> import codecs
>>> lines = list(codecs.open('report.csv', 'r', encoding='UTF-16'))
>>> import pprint
>>> pprint.pprint(lines[:8])
[u'Web Search Interest: foobar\r\n',
u'Worldwide; 2004 - present\r\n',
u'\r\n',
u'Interest over time\r\n',
u'Week\tfoobar\r\n',
u'2004-01-04 - 2004-01-10\t44\r\n',
u'2004-01-11 - 2004-01-17\t44\r\n',
u'2004-01-18 - 2004-01-24\t37\r\n']
>>>
Update: Why your output file looks like gobbledegook.
Firstly, you are looking at the files with something (Notepad.* maybe) that knows that the files are allegedly encoded in UTF-16LE, and displays them accordingly. So your input file looks fine.
However, your script is reading the input file as raw bytes. It then writes the output file as raw bytes in text mode ('w') (as opposed to binary mode ('wb')). Because you are on Windows, every \n will be replaced by \r\n. This is adding one byte (HALF of a UTF-16 character) to every line. So every SECOND line will be bassackwards aka UTF-16BE ... the letter A which is \x41\x00 in UTF-16LE will lose its trailing \x00 and pick up a leading byte (probably \x00) from the character to the left. \x00\x41 is the UTF-16LE for a CJK ("Asian") character.
Suggested reading: the Python Unicode HOWTO and this piece by Joel.
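The BOM behaviour is easy to reproduce on a fabricated sample (the byte string below is made up to look like the start of the downloaded file; it is not read from a real report.csv):

```python
# A UTF-16LE BOM (the \xff\xfe seen in the repr() dump) followed by one
# line of UTF-16LE-encoded text:
raw = b'\xff\xfe' + 'Web Search Interest: foobar\r\n'.encode('utf-16-le')

# The generic 'utf-16' codec reads the BOM, picks the right endianness,
# and strips the BOM from the decoded result:
text = raw.decode('utf-16')
print(repr(text))  # → 'Web Search Interest: foobar\r\n'
```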
A:
The problem is character encoding, possibly in combination with universal line ending support of Python. As you mentioned, the source file is in UCS-2 LE, with a Byte Order Mark (BOM). You need to do something like:
import codecs
input_file = codecs.open("Downloads/report.csv", "r", encoding="utf_16")
contents = input_file.read()
input_file.close()
cfile = codecs.open("m.log", "w+", encoding="utf_8")
cfile.write(contents)
cfile.close()
This will read the input file, decode it properly, and write it to the new file as UTF-8. You'll need to delete your existing m.log.
A:
Found the solution:
It was a character encoding problem. Depending on the editor you use, a different character set encoding is reported:
Notepad++: ucs-2 little endian
PSPad: utf-16le
Decoding the contents with ucs-2 didn't work, so I tried utf-16le and it went well. extraneon's answer was wrong, but it led me to the site where I learned that using 'U' in the file-opening mode causes "\r\n" to be recognized as a line break, too. So now the relevant snippet of my script looks like this:
file = open(file,'rU')
contents = file.read()
file.close()
contents = contents.decode("utf-16le").encode("utf-8")
Then I encode the contents with utf-8 and remove all empty lines with
lines = contents.split("\n")
contents = ""
for line in lines:
if not line.strip():
continue
else:
contents += line+"\n"
Now I can proceed splitting and reformatting the file. Thanks to Nick Bastin, you gave me the hint I needed!
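As an aside, the blank-line loop rebuilds the string with repeated concatenation; a join over a filtered split does the same thing in one pass. A sketch on stand-in text (the sample contents value is illustrative):

```python
contents = "line one\n\n   \nline two\n"

# Keep only lines with non-whitespace content, one '\n' after each:
cleaned = "\n".join(line for line in contents.split("\n") if line.strip()) + "\n"
print(repr(cleaned))  # → 'line one\nline two\n'
```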
|
Weird characters in exported csv files when converting
|
I came across a problem I cannot solve on my own concerning the downloadable csv formatted trends data files from Google Insights for Search.
I'm too lazy to reformat the files I4S gives me manually, which means: extracting the section with the actual trends data and reformatting the columns so that I can use it with a modelling program I'm doing for school.
So I wrote a tiny script that should do the work for me: take a file, do some magic and give me a new file in the proper format.
What it's supposed to do is read the file contents, extract the trends section, split it by newlines, split each line and then reorder the columns and maybe reformat them.
When looking at an untouched I4S csv file it looks normal, containing CR LF characters at line breaks (maybe that's only because I'm using Windows).
When just reading the contents and then writing them to a new file using the script, weird Asian characters appear between CR and LF. I tried the script with a manually written similar-looking file and even tried a csv file from Google Trends, and it works fine.
I use Python and the script (snippet) I used for the following example
looks like this:
# Read from an input file
file = open(file,"r")
contents = file.read()
file.close()
cfile = open("m.log","w+")
cfile.write(contents)
cfile.close()
Does anybody have an idea why those characters appear? Thank you for your help!
I'll give you an example:
First few lines of I4S csv file:
Web Search Interest: foobar
Worldwide; 2004 - present
Interest over time
Week foobar
2004-01-04 - 2004-01-10 44
2004-01-11 - 2004-01-17 44
2004-01-18 - 2004-01-24 37
2004-01-25 - 2004-01-31 40
2004-02-01 - 2004-02-07 49
2004-02-08 - 2004-02-14 51
2004-02-15 - 2004-02-21 45
2004-02-22 - 2004-02-28 61
2004-02-29 - 2004-03-06 51
2004-03-07 - 2004-03-13 48
2004-03-14 - 2004-03-20 50
2004-03-21 - 2004-03-27 56
2004-03-28 - 2004-04-03 59
Output file when reading and writing contents:
Web Search Interest: foobar
圀漀爀氀搀眀椀搀攀㬀 ㈀ 㐀 ⴀ 瀀爀攀猀攀渀琀ഀഀ
䤀渀琀攀爀攀猀琀 漀瘀攀爀 琀椀洀攀ഀഀ
Week foobar
㈀ 㐀ⴀ ⴀ 㐀 ⴀ ㈀ 㐀ⴀ ⴀ ऀ㐀㐀ഀഀ
2004-01-11 - 2004-01-17 44
㈀ 㐀ⴀ ⴀ㠀 ⴀ ㈀ 㐀ⴀ ⴀ㈀㐀ऀ㌀㜀ഀഀ
2004-01-25 - 2004-01-31 40
㈀ 㐀ⴀ ㈀ⴀ ⴀ ㈀ 㐀ⴀ ㈀ⴀ 㜀ऀ㐀㤀ഀഀ
2004-02-08 - 2004-02-14 51
㈀ 㐀ⴀ ㈀ⴀ㔀 ⴀ ㈀ 㐀ⴀ ㈀ⴀ㈀ऀ㐀㔀ഀഀ
2004-02-22 - 2004-02-28 61
㈀ 㐀ⴀ ㈀ⴀ㈀㤀 ⴀ ㈀ 㐀ⴀ ㌀ⴀ 㘀ऀ㔀ഀഀ
2004-03-07 - 2004-03-13 48
㈀ 㐀ⴀ ㌀ⴀ㐀 ⴀ ㈀ 㐀ⴀ ㌀ⴀ㈀ ऀ㔀 ഀഀ
2004-03-21 - 2004-03-27 56
㈀ 㐀ⴀ ㌀ⴀ㈀㠀 ⴀ ㈀ 㐀ⴀ 㐀ⴀ ㌀ऀ㔀㤀ഀഀ
2004-04-04 - 2004-04-10 69
㈀ 㐀ⴀ 㐀ⴀ ⴀ ㈀ 㐀ⴀ 㐀ⴀ㜀ऀ㘀㔀ഀഀ
2004-04-18 - 2004-04-24 51
㈀ 㐀ⴀ 㐀ⴀ㈀㔀 ⴀ ㈀ 㐀ⴀ 㔀ⴀ ऀ㔀ഀഀ
2004-05-02 - 2004-05-08 56
㈀ 㐀ⴀ 㔀ⴀ 㤀 ⴀ ㈀ 㐀ⴀ 㔀ⴀ㔀ऀ㔀㈀ഀഀ
2004-05-16 - 2004-05-22 54
㈀ 㐀ⴀ 㔀ⴀ㈀㌀ ⴀ ㈀ 㐀ⴀ 㔀ⴀ㈀㤀ऀ㔀㔀ഀഀ
2004-05-30 - 2004-06-05 74
㈀ 㐀ⴀ 㘀ⴀ 㘀 ⴀ ㈀ 㐀ⴀ 㘀ⴀ㈀ऀ㔀㜀ഀഀ
2004-06-13 - 2004-06-19 50
㈀ 㐀ⴀ 㘀ⴀ㈀ ⴀ ㈀ 㐀ⴀ 㘀ⴀ㈀㘀ऀ㔀㐀ഀഀ
2004-06-27 - 2004-07-03 58
㈀ 㐀ⴀ 㜀ⴀ 㐀 ⴀ ㈀ 㐀ⴀ 㜀ⴀ ऀ㔀㤀ഀഀ
2004-07-11 - 2004-07-17 59
㈀ 㐀ⴀ 㜀ⴀ㠀 ⴀ ㈀ 㐀ⴀ 㜀ⴀ㈀㐀ऀ㘀㈀ഀഀ
|
[
"repr() is your friend (except on Python 3.X; use ascii() instead).\nprompt>\\python26\\python -c \"print repr(open('report.csv','rb').read()[:300])\"\n'\\xff\\xfeW\\x00e\\x00b\\x00 \\x00S\\x00e\\x00a\\x00r\\x00c\\x00h\\x00 \\x00I\\x00n\\x00t\\x00e\n\\x00r\\x00e\\x00s\\x00t\\x00:\\x00 \\x00f\\x00o\\x00o\\x00b\\x00a\\x00r\\x00\\r\\x00\\n\\x00W\\x0\n[snip]\nx001\\x007\\x00\\t\\x004\\x004\\x00\\r\\x00\\n\\x002\\x000\\x00'\n\nSure looks like a UTF-16LE BOM (U+FEFF) in the 1st two bytes to me.\nNotepad.* are NOT your friends. UTF-16 should not be referred to as \"UCS-2\" or \"Unicode\".\nThe following should help with what to do next:\n>>> import codecs\n>>> lines = list(codecs.open('report.csv', 'r', encoding='UTF-16'))\n>>> import pprint\n>>> pprint.pprint(lines[:8])\n[u'Web Search Interest: foobar\\r\\n',\n u'Worldwide; 2004 - present\\r\\n',\n u'\\r\\n',\n u'Interest over time\\r\\n',\n u'Week\\tfoobar\\r\\n',\n u'2004-01-04 - 2004-01-10\\t44\\r\\n',\n u'2004-01-11 - 2004-01-17\\t44\\r\\n',\n u'2004-01-18 - 2004-01-24\\t37\\r\\n']\n>>>\n\nUpdate: Why your output file looks like gobbledegook.\nFirstly, you are looking at the files with something (Notepad.* maybe) that knows that the files are allegedly encoded in UTF-16LE, and displays them accordingly. So your input file looks fine.\nHowever, your script is reading the input file as raw bytes. It then writes the output file as raw bytes in text mode ('w') (as opposed to binary mode ('wb')). Because you are on Windows, every \\n will be replaced by \\r\\n. This is adding one byte (HALF of a UTF-16 character) to every line. So every SECOND line will be bassackwards aka UTF-16BE ... the letter A which is \\x41\\x00 in UTF-16LE will lose its trailing \\x00 and pick up a leading byte (probably \\x00) from the character to the left. \\x00\\x41 is the UTF-16LE for a CJK (\"Asian\") character.\nSuggested reading: the Python Unicode HOWTO and this piece by Joel.\n",
"The problem is character encoding, possibly in combination with universal line ending support of Python. As you mentioned, the source file is in UCS-2 LE, with a Byte Order Mark (BOM). You need to do something like:\nimport codecs\n\ninput_file = codecs.open(\"Downloads/report.csv\", \"r\", encoding=\"utf_16\")\ncontents = input_file.read() \ninput_file.close() \n\ncfile = codecs.open(\"m.log\", \"w+\", encoding=\"utf_8\")\ncfile.write(contents) \ncfile.close()\n\nThis will read the input file, decode it properly, and write it to the new file as UTF-8. You'll need to delete your existing m.log.\n",
"Found the solution:\nIt was a character encoding problem. Depending on the editor you use other character set encodings are shown: \nNotepad++: ucs-2 little endian\nPSPad: utf-16le\nDecoding the contents with ucs-2 didn't work so I tried utf-16le and it went well. extraneons answer was wrong, but it lead me to the site where I learned that using 'U' in the file opening method causes recognizing \"\\r\\n\" as line breaks, too. So now the relevant snippet of my script looks like this:\nfile = open(file,'rU')\ncontents = file.read()\nfile.close()\n\ncontents = contents.decode(\"utf-16le\").encode(\"utf-8\")\n\nThen I encode the contents with utf-8 and remove all empty lines with\nlines = contents.split(\"\\n\")\ncontents = \"\"\nfor line in lines:\n if not line.strip():\n continue\n else:\n contents += line+\"\\n\"\n\nNow I can proceed splitting and reformatting the file. Thanks to Nick Bastin, you gave me the hint I needed!\n"
] |
[
4,
3,
2
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0002441682_csv_python.txt
|
Q:
Python, SWIG and other strange things
I have firmware for a USB module that I can already control via Visual C. Now I want to port this to Python. For this I need the octopus library, which is written in C. I found a file called octopus_wrap which was created by SWIG!
then I found a makefile which says:
python2.5:
swig -python -outdir ./ ../octopus.i
gcc -fPIC -c ../../liboctopus/src/octopus.c
gcc -fPIC -c ../octopus_wrap.c -I /usr/include/python2.5
gcc -fPIC -shared octopus_wrap.o octopus.o /usr/lib/libusb.so -o _octopus.so
python2.4:
swig -python -outdir ./ ../octopus.i
gcc -fPIC -c ../../liboctopus/src/octopus.c
gcc -fPIC -c ../octopus_wrap.c -I /usr/include/python2.4
gcc -fPIC -shared octopus_wrap.o octopus.o /usr/lib/libusb.so -o _octopus.so
win:
gcc -fPIC -c ../../liboctopus/src/octopus.c -I /c/Programme/libusb-win32-device-bin-0.1.10.1/include
gcc -fPIC -c octopus_wrap.c -I /c/Python25/libs -lpython25 -I/c/Python25/include -I /c/Programme/libusb-win32-device-bin-0.1.10.1/include
gcc -fPIC -shared *.o -o _octopus.pyd -L/c/Python25/libs -lpython25 -lusb -L/c/Programme/libusb-win32-device-bin-0.1.10.1/lib/gcc
clean:
rm -f octopus* _octopus*
install_python2.4:
cp _octopus.so /usr/local/lib/python2.4/site-packages/
cp octopus.py /usr/local/lib/python2.4/site-packages/
install_python2.5:
cp _octopus.so /usr/local/lib/python2.5/site-packages/
cp octopus.py /usr/local/lib/python2.5/site-packages/
I don't know how to handle this, but as far as I can see octopus.py and _octopus.so are the resulting output files which are relevant to Python, right?
Luckily someone already did that, so I put those two files into my "python26/lib" folder (I hope it doesn't matter whether it's Python 2.5 or 2.6?!)
So when working with the USB device, octopus.py is the library to work with!
Importing this file causes several problems:
>>>
Traceback (most recent call last):
File "C:\Users\ameise\My Dropbox\µC\AVR\OCTOPUS\octopususb-0.5\demos\python \blink_status.py", line 8, in <module>
from octopus import *
File "C:\Python26\lib\octopus.py", line 7, in <module>
import _octopus
ImportError: DLL load failed: module not found.
and here's the related line 7:
import _octopus
So there's a problem with the .so file!
What could be my next step?
I know that's a lot of confusing stuff, but I hope one of you can shed some light on it!
Thanks in advance
A:
You should link and compile for Python 2.6: -lpython26.
Also, the file extension on Windows is .pyd, not .so.
|
Python, SWIG and other strange things
|
I have firmware for a USB module that I can already control via Visual C. Now I want to port this to Python. For this I need the octopus library, which is written in C. I found a file called octopus_wrap which was created by SWIG!
then I found a makefile which says:
python2.5:
swig -python -outdir ./ ../octopus.i
gcc -fPIC -c ../../liboctopus/src/octopus.c
gcc -fPIC -c ../octopus_wrap.c -I /usr/include/python2.5
gcc -fPIC -shared octopus_wrap.o octopus.o /usr/lib/libusb.so -o _octopus.so
python2.4:
swig -python -outdir ./ ../octopus.i
gcc -fPIC -c ../../liboctopus/src/octopus.c
gcc -fPIC -c ../octopus_wrap.c -I /usr/include/python2.4
gcc -fPIC -shared octopus_wrap.o octopus.o /usr/lib/libusb.so -o _octopus.so
win:
gcc -fPIC -c ../../liboctopus/src/octopus.c -I /c/Programme/libusb-win32-device-bin-0.1.10.1/include
gcc -fPIC -c octopus_wrap.c -I /c/Python25/libs -lpython25 -I/c/Python25/include -I /c/Programme/libusb-win32-device-bin-0.1.10.1/include
gcc -fPIC -shared *.o -o _octopus.pyd -L/c/Python25/libs -lpython25 -lusb -L/c/Programme/libusb-win32-device-bin-0.1.10.1/lib/gcc
clean:
rm -f octopus* _octopus*
install_python2.4:
cp _octopus.so /usr/local/lib/python2.4/site-packages/
cp octopus.py /usr/local/lib/python2.4/site-packages/
install_python2.5:
cp _octopus.so /usr/local/lib/python2.5/site-packages/
cp octopus.py /usr/local/lib/python2.5/site-packages/
I don't know how to handle this, but as far as I can see octopus.py and _octopus.so are the resulting output files which are relevant to Python, right?
Luckily someone already did that, so I put those two files into my "python26/lib" folder (I hope it doesn't matter whether it's Python 2.5 or 2.6?!)
So when working with the USB device, octopus.py is the library to work with!
Importing this file causes several problems:
>>>
Traceback (most recent call last):
File "C:\Users\ameise\My Dropbox\µC\AVR\OCTOPUS\octopususb-0.5\demos\python \blink_status.py", line 8, in <module>
from octopus import *
File "C:\Python26\lib\octopus.py", line 7, in <module>
import _octopus
ImportError: DLL load failed: module not found.
and here's the related line 7:
import _octopus
So there's a problem with the .so file!
What could be my next step?
I know that's a lot of confusing stuff, but I hope one of you can shed some light on it!
Thanks in advance
|
[
"You should link and compile for Python 2.6: -lpython26.\nAlso, the file extension on Windows is .pyd, not .so.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"swig",
"usb",
"wrapper"
] |
stackoverflow_0002442042_python_swig_usb_wrapper.txt
|
Q:
Python: puzzling behaviour inside httplib
I have added one line ( import pdb; pdb.set_trace() ) to httplib's HTTPConnection.putheader, so I can see what's going on inside.
Python26\Lib\httplib.py, line 489:
def putheader(self, header, value):
"""Send a request header line to the server.
For example: h.putheader('Accept', 'text/html')
"""
import pdb; pdb.set_trace()
if self.__state != _CS_REQ_STARTED:
raise CannotSendHeader()
str = '%s: %s' % (header, value)
self._output(str)
then ran this from the interpreter
import urllib2
urllib2.urlopen('http://www.ioerror.us/ip/headers')
... and as expected PDB kicks in:
> c:\python26\lib\httplib.py(858)putheader()
-> if self.__state != _CS_REQ_STARTED:
(Pdb)
in PDB I have the luxury of evaluating expressions on the fly, so I have tried to enter self.__state:
(Pdb) self.__state
*** AttributeError: HTTPConnection instance has no attribute '__state'
Alas, there is no __state on this instance. However, when I enter step, the debugger gets past the
if self.__state != _CS_REQ_STARTED:
line without a problem. Why is this happening? If self.__state didn't exist, Python would have to raise an exception, as it did when I entered the expression.
Python version: 2.6.4 on win32
A:
Answering my own question:
http://en.wikipedia.org/wiki/Name_mangling#Name_mangling_in_Python
__state is a private name inside the object, it gets mangled as _HTTPConnection__state, so when I want to access it in PDB I have to name it as self._HTTPConnection__state. Only the object can refer to it as __state.
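A tiny illustration of the mangling; the class name below is a made-up stand-in for httplib.HTTPConnection:

```python
class HTTPConnectionLike(object):
    def __init__(self):
        self.__state = 'Idle'     # actually stored as _HTTPConnectionLike__state

    def state(self):
        return self.__state       # inside the class body, mangling is automatic

c = HTTPConnectionLike()
print(c.state())                      # Idle
print(c._HTTPConnectionLike__state)   # Idle -- the mangled name, usable from outside
print(hasattr(c, '__state'))          # False -- the unmangled name does not exist
```

This is exactly why PDB (which evaluates the expression outside the class body) raised AttributeError while the method itself ran fine.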
A:
If the self.__state doesn't exist python would have to raise an exception as it did when I entered the expression.
In Python, you don't have to declare variables explicitly.
They are "born" when you assign to them.
Some code validators like pylint warn about these situations.
In your case you could have something like self.__state = None in HTTPConnection.__init__()
but this is not very important.
|
Python: puzzling behaviour inside httplib
|
I have added one line ( import pdb; pdb.set_trace() ) to httplib's HTTPConnection.putheader, so I can see what's going on inside.
Python26\Lib\httplib.py, line 489:
def putheader(self, header, value):
"""Send a request header line to the server.
For example: h.putheader('Accept', 'text/html')
"""
import pdb; pdb.set_trace()
if self.__state != _CS_REQ_STARTED:
raise CannotSendHeader()
str = '%s: %s' % (header, value)
self._output(str)
then ran this from the interpreter
import urllib2
urllib2.urlopen('http://www.ioerror.us/ip/headers')
... and as expected PDB kicks in:
> c:\python26\lib\httplib.py(858)putheader()
-> if self.__state != _CS_REQ_STARTED:
(Pdb)
in PDB I have the luxury of evaluating expressions on the fly, so I have tried to enter self.__state:
(Pdb) self.__state
*** AttributeError: HTTPConnection instance has no attribute '__state'
Alas, there is no __state on this instance. However, when I enter step, the debugger gets past the
if self.__state != _CS_REQ_STARTED:
line without a problem. Why is this happening? If self.__state didn't exist, Python would have to raise an exception, as it did when I entered the expression.
Python version: 2.6.4 on win32
|
[
"Answering my own question:\nhttp://en.wikipedia.org/wiki/Name_mangling#Name_mangling_in_Python\n__state is a private name inside the object, it gets mangled as _HTTPConnection__state, so when I want to access it in PDB I have to name it as self._HTTPConnection__state. Only the object can refer to it as __state.\n",
"\nIf the self.__state doesn't exist python would have to raise an exception as it did when I entered the expression.\n\nIn Python, you don't have to declare variables explicitly.\nThey are \"born\" when you assign to them.\nSome code validators like pylint warn about these situations.\nIn your case you could have something like self.__state = None in HTTPConnection.__init__()\nbut this is not very important.\n"
] |
[
1,
0
] |
[] |
[] |
[
"debugging",
"httplib",
"python"
] |
stackoverflow_0002441798_debugging_httplib_python.txt
|
Q:
How to search for files that have a known file extension like .py?
How do I search for files that have a known file extension, like .py?
fext = raw_input("Put file extension to search: ")
dir = raw_input("Dir to search in: ")
##Search for the file and get the right one's
A:
I believe you want to do something like similar to this: /dir/to/search/*.extension?
This is called glob and here is how to use it:
import glob
files = glob.glob('/path/*.extension')
Edit: and here is the documentation: http://docs.python.org/library/glob.html
A:
import os
root="/home"
ext = raw_input("Put file extension to search: ")
path = raw_input("Dir to search in: ")
for r,d,f in os.walk(path):
for files in f:
if files.endswith(ext):
print "found: ",os.path.join(r,files)
A:
Non-recursive:
for x in os.listdir(dir):
if x.endswith(fext):
filename = os.path.join(dir, x)
# do your stuff here
|
How to search for files that have a known file extension like .py?
|
How do I search for files that have a known file extension, like .py?
fext = raw_input("Put file extension to search: ")
dir = raw_input("Dir to search in: ")
##Search for the file and get the right one's
|
[
"I believe you want to do something like similar to this: /dir/to/search/*.extension?\nThis is called glob and here is how to use it:\nimport glob\nfiles = glob.glob('/path/*.extension')\n\nEdit: and here is the documentation: http://docs.python.org/library/glob.html\n",
"import os\nroot=\"/home\"\next = raw_input(\"Put file extension to search: \")\npath = raw_input(\"Dir to search in: \")\nfor r,d,f in os.walk(path):\n for files in f:\n if files.endswith(ext):\n print \"found: \",os.path.join(r,files)\n\n",
"Non-recursive:\nfor x in os.listdir(dir):\n if x.endswith(fext):\n filename = os.path.join(dir, x)\n # do your stuff here\n\n"
] |
[
4,
1,
0
] |
[
"You can write is as simple as:\nimport os\next = raw_input(\"Put file extension to search: \")\npath = raw_input(\"Dir to search in: \")\nmatching_files = [os.path.join(path, x) for x in os.listdir(path) if x.endswith(ext)]\n\n"
] |
[
-1
] |
[
"file",
"python",
"search"
] |
stackoverflow_0002442243_file_python_search.txt
|
Q:
How to delete an element from a list while iterating over it in Python?
Given a list of numbers:
L = [1, 2, 3, 4, 5]
How do I delete an element, let's say 3, from the list while I iterate over it?
I tried the following code but it didn't do it:
for el in L:
if el == 3:
del el
A:
Best is usually to proceed constructively -- build the new list of the items you want instead of removing those you don't. E.g.:
L[:] = [el for el in L if el != 3]
the list comprehension builds the desired list and the assignment to the "whole-list slice", L[:], ensure you're not just rebinding a name, but fully replacing the contents, so the effects are identically equal to the "removals" you wanted to perform. This is also fast.
If you absolutely, at any cost, must do deletions instead, a subtle approach might work:
>>> ndel = 0
>>> for i, el in enumerate(list(L)):
... if el==3:
... del L[i-ndel]
... ndel += 1
nowhere as elegant, clean, simple, or well-performing as the listcomp approach, but it does do the job (though its correctness is not obvious at first glance and in fact I had it wrong before an edit!-). "at any cost" applies here;-).
Looping on indices in lieu of items is another inferior but workable approach for the "must do deletions" case -- but remember to reverse the indices in this case...:
for i in reversed(range(len(L))):
if L[i] == 3: del L[i]
indeed this was a primary use case for reversed back when we were debating on whether to add that built-in -- reversed(range(... isn't trivial to obtain without reversed, and looping on the list in reversed order is sometimes useful. The alternative
for i in range(len(L) - 1, -1, -1):
is really easy to get wrong;-).
Still, the listcomp I recommended at the start of this answer looks better and better as alternatives are examined, doesn't it?-).
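A quick check of the slice-assignment point above: L[:] = ... replaces the list's contents in place, so every alias of the same list sees the removals:

```python
# Slice assignment mutates the existing list object rather than rebinding
# the name, so aliases observe the change too.
L = [1, 2, 3, 4, 3, 5]
alias = L
L[:] = [el for el in L if el != 3]
print(L)            # [1, 2, 4, 5]
print(alias is L)   # True -- still the very same list object
```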
|
How to delete an element from a list while iterating over it in Python?
|
Given a list of numbers:
L = [1, 2, 3, 4, 5]
How do I delete an element, let's say 3, from the list while I iterate over it?
I tried the following code but it didn't do it:
for el in L:
if el == 3:
del el
|
[
"Best is usually to proceed constructively -- build the new list of the items you want instead of removing those you don't. E.g.:\nL[:] = [el for el in L if el != 3]\n\nthe list comprehension builds the desired list and the assignment to the \"whole-list slice\", L[:], ensure you're not just rebinding a name, but fully replacing the contents, so the effects are identically equal to the \"removals\" you wanted to perform. This is also fast.\nIf you absolutely, at any cost, must do deletions instead, a subtle approach might work:\n>>> ndel = 0\n>>> for i, el in enumerate(list(L)):\n... if el==3:\n... del L[i-ndel]\n... ndel += 1\n\nnowhere as elegant, clean, simple, or well-performing as the listcomp approach, but it does do the job (though its correctness is not obvious at first glance and in fact I had it wrong before an edit!-). \"at any cost\" applies here;-).\nLooping on indices in lieu of items is another inferior but workable approach for the \"must do deletions\" case -- but remember to reverse the indices in this case...:\nfor i in reversed(range(len(L))):\n if L[i] == 3: del L[i]\n\nindeed this was a primary use case for reversed back when we were debating on whether to add that built-in -- reversed(range(... isn't trivial to obtain without reversed, and looping on the list in reversed order is sometimes useful. The alternative\nfor i in range(len(L) - 1, -1, -1):\n\nis really easy to get wrong;-).\nStill, the listcomp I recommended at the start of this answer looks better and better as alternatives are examined, doesn't it?-).\n"
] |
[
13
] |
[
"for el in L:\n if el == 2:\n del L[el]\n\n"
] |
[
-4
] |
[
"python"
] |
stackoverflow_0002442651_python.txt
|
Q:
python send/receive hex data via TCP socket
I have an Ethernet access control device that is said to be able to communicate via TCP.
How can I send a packet by entering the hex data? The following is what I have from their manual (a standard format for the communication packets sent and received after each command).
Can you please show some example code or links to get started....
standard return packet from the terminal
Size (bytes)
BS (0x08) : ASCII Character 1
STX (0x02) : ASCII Character 1
LENGTH : length from BS to ETX 4
TID : system unique I.D. 1
RESULT 1
DATA : returned parameter N
CHECKSUM : byte sum from BS to DATA 1
ETX (0x03) : ASCII Character 1
Standard command packet to the terminal
Size (bytes)
ACK (0x06) : ASCII Character 1
STX (0x02) : ASCII Character 1
LENGTH : length from ACK to ETX 4
TID : system unique I.D. (ex: 1) 1
COMMAND 1
Access Key(Optional) 6
DATA : command parameter N
CHECKSUM : byte sum from ACK to DATA 1
ETX (0x03) : ASCII Character 1
This packet starts from ACK.
In this packet, multiple-byte values must start from the MSB.
For example, if length was 10, LENGTH is 0x00 0x00 0x00 0x0a.
A:
Just encode the hex data in a string:
'\x34\x82\xf6'
A:
I'd use struct.pack to prepare the string of bytes to send, from the data you want to send. Be sure to start the packing format with > to mean you want big-endian ordering and standard sizes, since they document that so clearly!
So (I don't know what the "optional" means for the access key, I'll assume it means that the field can be all-zero bytes if you have no access key), if "data" is already a string of bytes and "command" a small unsigned integer for example, something like...:
def stringfor(command, data, accesskey='\0'*6, tid=1):
length = 16 + len(data)
prefix = struct.pack('>BBIBB6s', 6, 2, length, tid, command, accesskey)
checksum = sum(ord(c) for c in prefix) &0xFF
return prefix + chr(checksum) + chr(3)
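A hedged Python 3 sketch of the same idea (the function name and the 0x21 command code are made up; this variant also appends the DATA bytes and folds them into the byte-sum checksum, since the manual says CHECKSUM covers ACK through DATA):

```python
import struct

def command_packet(command, data, access_key=b'\x00' * 6, tid=1):
    # LENGTH counts every byte from ACK through ETX:
    # 14 header bytes + len(data) + checksum byte + ETX byte
    length = 16 + len(data)
    # '>' = big-endian ("started from MSB"), standard sizes
    prefix = struct.pack('>BBIBB6s', 0x06, 0x02, length, tid, command, access_key) + data
    checksum = sum(prefix) & 0xFF          # byte sum from ACK to DATA, one byte
    return prefix + bytes([checksum, 0x03])

pkt = command_packet(0x21, b'\x01\x02')    # 0x21: hypothetical command code
print(pkt[2:6])                            # b'\x00\x00\x00\x12' -- LENGTH 18, MSB first
```

The resulting bytes can then be sent over a plain socket with sock.sendall(pkt).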
|
python send/receive hex data via TCP socket
|
I have an Ethernet access control device that is said to be able to communicate via TCP.
How can I send a packet by entering the hex data? The following is what I have from their manual (a standard format for the communication packets sent and received after each command).
Can you please show some example code or links to get started....
standard return packet from the terminal
Size (bytes)
BS (0x08) : ASCII Character 1
STX (0x02) : ASCII Character 1
LENGTH : length from BS to ETX 4
TID : system unique I.D. 1
RESULT 1
DATA : returned parameter N
CHECKSUM : byte sum from BS to DATA 1
ETX (0x03) : ASCII Character 1
Standard command packet to the terminal
Size (bytes)
ACK (0x06) : ASCII Character 1
STX (0x02) : ASCII Character 1
LENGTH : length from ACK to ETX 4
TID : system unique I.D. (ex: 1) 1
COMMAND 1
Access Key(Optional) 6
DATA : command parameter N
CHECKSUM : byte sum from ACK to DATA 1
ETX (0x03) : ASCII Character 1
This packet starts from ACK.
In this packet, multiple-byte values must start from the MSB.
For example, if length was 10, LENGTH is 0x00 0x00 0x00 0x0a.
|
[
"Just encode the hex data in a string:\n'\\x34\\x82\\xf6'\n\n",
"I'd use struct.pack to prepare the string of bytes to send, from the data you want to send. Be sure to start the packing format with > to mean you want big-endian ordering and standard sizes, since they document that so clearly!\nSo (I don't know what the \"optional\" means for the access key, I'll assume it means that the field can be all-zero bytes if you have no access key), if \"data\" is already a string of bytes and \"command\" a small unsigned integer for example, something like...:\ndef stringfor(command, data, accesskey='\\0'*6, tid=1):\n length = 16 + len(data)\n prefix = struct.pack('>BBIBB6s', 6, 2, length, tid, command, accesskey)\n checksum = sum(ord(c) for c in prefix) &0xFF\n return prefix + chr(checksum) + chr(3)\n\n"
] |
[
6,
4
] |
[] |
[] |
[
"access_control",
"python",
"sockets",
"tcp"
] |
stackoverflow_0002442704_access_control_python_sockets_tcp.txt
|
Q:
Setting package-wide variables during python setup.py install
Is there a way that when a user types
python setup.py install
to install a Python package, setup.py can be made to set specific variables at the base of the package? A common example would be to set mypackage.__revision__ to the SVN revision of the checkout, if one is working from SVN. Another example would be letting the user choose a global option, so that mypackage.__option__ is set according to a flag passed to setup.py, e.g.
python setup.py install --set-flag=10
Then when using the package, mypackage.__option__ would equal 10.
A:
The SVN version can be set by SVN. You don't need to use setup.py to mess with that.
Simply include the $Revision$ flag in the text somewhere and tell SVN to do replacements.
Global options are usually handled by configuration files. Why mess with it at install time? It's much easier (and more flexible) to create and read a configuration file.
You could, for example, install a configuration file with instructions on how to edit it. If the configuration file is in Python, then it's simply a variable in the configuration.
THEOPTION = 10
That would be enough. Very simple. Very standardized. Very easy to manage.
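A minimal sketch of that approach (the module name mypackage_conf and the temp directory are stand-ins for whatever location setup.py installs the file to): the package simply imports a plain-Python configuration module and reads the variable.

```python
# Minimal sketch: a plain-Python config module read at import time.
# 'mypackage_conf' is a hypothetical name; the temp dir stands in for
# wherever the installed configuration file actually lives.
import importlib
import pathlib
import sys
import tempfile

cfg_dir = tempfile.mkdtemp()
pathlib.Path(cfg_dir, 'mypackage_conf.py').write_text('THEOPTION = 10\n')

sys.path.insert(0, cfg_dir)
conf = importlib.import_module('mypackage_conf')
print(conf.THEOPTION)              # 10
```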
|
Setting package-wide variables during python setup.py install
|
Is there a way that when a user types
python setup.py install
to install a Python package, setup.py can be made to set specific variables at the base of the package? A common example would be to set mypackage.__revision__ to the SVN revision of the checkout, if one is working from SVN. Another example would be letting the user choose a global option, so that mypackage.__option__ is set according to a flag passed to setup.py, e.g.
python setup.py install --set-flag=10
Then when using the package, mypackage.__option__ would equal 10.
|
[
"The SVN version can be set by SVN. You don't need to use setup.py to mess with that.\nSimply include the $Revision$ flag in the text somewhere and tell SVN to do replacements.\nGlobal options are usually handled by configuration files. Why mess with it at install time? It's much easier (and more flexible) to create and read a configuration file. \nYou could, for example, install a configuration file with instructions on how to edit it. If the configuration file is in Python, then it's simply a variable in the configuration.\nTHEOPTION = 10\n\nThat would be enough. Very simple. Very standardized. Very easy to manage.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0002442821_python.txt
|
Q:
Calculating Hebrew date in Python
I'd like to calculate Hebrew dates (primarily the current Hebrew date) in Python. Which library is mature, easy to use, and documented? I note these. There may be others.
Python Date Utilities Library as discussed here
Calendrical
libhdate Python bindings
This informal code listing.
A:
The Python Date Utilities library (available on SourceForge) seems to be fine for what you want. For more specific usage with Hebrew dates you could have a look here; there are a lot of examples with many code snippets that should fit your needs, I think.
|
Calculating Hebrew date in Python
|
I'd like to calculate Hebrew dates (primarily the current Hebrew date) in Python. Which library is mature, easy to use, and documented? I note these. There may be others.
Python Date Utilities Library as discussed here
Calendrical
libhdate Python bindings
This informal code listing.
|
[
"The Python Date Utilities library (available on SourceForge) seems to be fine for what you want. For more specific usage with Hebrew dates you could have a look here; there are a lot of examples with many code snippets that should fit your needs, I think.\n"
] |
[
5
] |
[] |
[] |
[
"calendar",
"date",
"datetime",
"hebrew",
"python"
] |
stackoverflow_0002442674_calendar_date_datetime_hebrew_python.txt
|
Q:
PyYAML parse into arbitrary object
I have the following Python 2.6 program and YAML definition (using PyYAML):
import yaml
x = yaml.load(
"""
product:
name : 'Product X'
sku : 123
features :
- size : '10x30cm'
weight : '10kg'
"""
)
print type(x)
print x
Which results in the following output:
<type 'dict'>
{'product': {'sku': 123, 'name': 'Product X', 'features': [{'weight': '10kg', 'size': '10x30cm'}]}}
Is it possible to create an object with fields from x?
I would like to do the following:
print x.features[0].size
I am aware that it is possible to create an instance from an existing class, but that is not what I want for this particular scenario.
Edit:
Updated the confusing part about a 'strongly typed object'.
Changed access to features to an indexer, as suggested by Alex Martelli.
A:
So you have a dictionary with string keys and values that can be numbers, nested dictionaries, lists, and you'd like to wrap that into an instance which lets you use attribute access in lieu of dict indexing, and "call with an index" in lieu of list indexing -- not sure what "strongly typed" has to do with this, or why you think .features(0) is better than .features[0] (such a more natural way to index a list!), but, sure, it's feasible. For example, a simple approach might be:
def wrap(datum):
# don't wrap strings
if isinstance(datum, basestring):
return datum
# don't wrap numbers, either
try: return datum + 0
except TypeError: pass
return Fourie(datum)
class Fourie(object):
def __init__(self, data):
self._data = data
def __getattr__(self, n):
return wrap(self._data[n])
def __call__(self, n):
return wrap(self._data[n])
So x = wrap(x['product']) should give you your wish (why you want to skip that level when your overall logic would obviously require x.product.features(0).size, I have no idea, but clearly that skipping's better applied at the point of call rather than hard-coded in the wrapper class or the wrapper factory function I've just shown).
Edit: as the OP says he does want features[0] rather than features(0), just change the last two lines to
def __getitem__(self, n):
return wrap(self._data[n])
i.e., define __getitem__ (the magic method underlying indexing) instead of __call__ (the magic method underlying instance-call).
The alternative to "an existing class" (here, Fourie) would be to create a new class on the fly based on introspecting the wrapped dict -- feasible, too, but seriously dark-gray, if not actually black, magic, and without any real operational advantage that I can think of.
If the OP can clarify exactly why he may be hankering after the meta-programming peaks of creating classes on the fly, what advantage he believes he might be getting that way, etc, I'll show how to do it (and, probably, I'll also show why the craved-for advantage will not in fact be there;-). But simplicity is an important quality in any programming endeavor, and using "deep dark magic" when plain, straightforward code like the above works just fine, is generally not the best of ideas!-)
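A quick usage sketch of the __getitem__ variant above; a plain dict stands in for yaml.load()'s output so the example doesn't depend on PyYAML being installed (and it uses str rather than Python 2's basestring):

```python
# Wrapper giving attribute access (x.name) and index access (x[0]) over
# nested dicts/lists, leaving strings and numbers unwrapped.
class Fourie(object):
    def __init__(self, data):
        self._data = data
    def __getattr__(self, n):
        return wrap(self._data[n])
    def __getitem__(self, n):
        return wrap(self._data[n])

def wrap(datum):
    if isinstance(datum, str):   # don't wrap strings (basestring on Python 2)
        return datum
    try:
        return datum + 0         # don't wrap numbers either
    except TypeError:
        pass
    return Fourie(datum)

parsed = {'product': {'name': 'Product X', 'sku': 123,
                      'features': [{'size': '10x30cm', 'weight': '10kg'}]}}
x = wrap(parsed['product'])
print(x.features[0].size)        # 10x30cm
print(x.sku)                     # 123
```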
|
PyYAML parse into arbitrary object
|
I have the following Python 2.6 program and YAML definition (using PyYAML):
import yaml
x = yaml.load(
"""
product:
name : 'Product X'
sku : 123
features :
- size : '10x30cm'
weight : '10kg'
"""
)
print type(x)
print x
Which results in the following output:
<type 'dict'>
{'product': {'sku': 123, 'name': 'Product X', 'features': [{'weight': '10kg', 'size': '10x30cm'}]}}
Is it possible to create an object with fields from x?
I would like to do the following:
print x.features[0].size
I am aware that it is possible to create an instance from an existing class, but that is not what I want for this particular scenario.
Edit:
Updated the confusing part about a 'strongly typed object'.
Changed access to features to an indexer, as suggested by Alex Martelli.
|
[
"So you have a dictionary with string keys and values that can be numbers, nested dictionaries, lists, and you'd like to wrap that into an instance which lets you use attribute access in lieu of dict indexing, and \"call with an index\" in lieu of list indexing -- not sure what \"strongly typed\" has to do with this, or why you think .features(0) is better than .features[0] (such a more natural way to index a list!), but, sure, it's feasible. For example, a simple approach might be:\ndef wrap(datum):\n # don't wrap strings\n if isinstance(datum, basestring):\n return datum\n # don't wrap numbers, either\n try: return datum + 0\n except TypeError: pass\n return Fourie(datum)\n\nclass Fourie(object):\n def __init__(self, data):\n self._data = data\n def __getattr__(self, n):\n return wrap(self._data[n])\n def __call__(self, n):\n return wrap(self._data[n])\n\nSo x = wrap(x['product']) should give you your wish (why you want to skip that level when your overall logic would obviously require x.product.features(0).size, I have no idea, but clearly that skipping's better applied at the point of call rather than hard-coded in the wrapper class or the wrapper factory function I've just shown).\nEdit: as the OP says he does want features[0] rather than features(0), just change the last two lines to\n def __getitem__(self, n):\n return wrap(self._data[n])\n\ni.e., define __getitem__ (the magic method underlying indexing) instead of __call__ (the magic method underlying instance-call).\nThe alternative to \"an existing class\" (here, Fourie) would be to create a new class on the fly based on introspecting the wrapped dict -- feasible, too, but seriously dark-gray, if not actually black, magic, and without any real operational advantage that I can think of.\nIf the OP can clarify exactly why he may be hankering after the meta-programming peaks of creating classes on the fly, what advantage he believes he might be getting that way, etc, I'll show how to do it (and, probably, 
I'll also show why the craved-for advantage will not in fact be there;-). But simplicity is an important quality in any programming endeavor, and using \"deep dark magic\" when plain, straightforward code like the above works just fine, is generally not the best of ideas!-)\n"
] |
[
8
] |
[] |
[] |
[
"python",
"pyyaml",
"yaml"
] |
stackoverflow_0002442933_python_pyyaml_yaml.txt
|
Q:
numpy.equal with string values
The numpy.equal function does not work if a list or array contains strings:
>>> import numpy
>>> index = numpy.equal([1,2,'a'],None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: function not supported for these types, and can't coerce safely to supported types
What is the easiest way to work around this without looping through each element? In the end, I need index to contain a boolean array indicating which elements are None.
A:
If you really need to use numpy, be more careful about what you pass in and it can work:
>>> import numpy
>>> a = numpy.array([1, 2, 'a'], dtype=object) # makes type of array what you need
>>> numpy.equal(a, None)
array([False, False, False], dtype=bool)
Since you start with a list, there's a chance what you really want is just a list comprehension like [item is None for item in [1, 2, 'a']] or the similar generator expression.
To have an a heterogeneous list like this is odd. Lists (and numpy arrays) are typically used for homogeneous data.
A:
What's wrong with a stock list comprehension?
index = [x is None for x in L]
|
numpy.equal with string values
|
The numpy.equal function does not work if a list or array contains strings:
>>> import numpy
>>> index = numpy.equal([1,2,'a'],None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: function not supported for these types, and can't coerce safely to supported types
What is the easiest way to work around this without looping through each element? In the end, I need index to contain a boolean array indicating which elements are None.
|
[
"If you really need to use numpy, be more careful about what you pass in and it can work:\n>>> import numpy\n>>> a = numpy.array([1, 2, 'a'], dtype=object) # makes type of array what you need\n>>> numpy.equal(a, None)\narray([False, False, False], dtype=bool)\n\nSince you start with a list, there's a chance what you really want is just a list comprehension like [item is None for item in [1, 2, 'a']] or the similar generator expression. \nTo have an a heterogeneous list like this is odd. Lists (and numpy arrays) are typically used for homogeneous data.\n",
"What's wrong with a stock list comprehension?\nindex = [x is None for x in L]\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0002442799_numpy_python.txt
|
Q:
Python - Check if numbers in list are factors of a number
I have a list of numbers (integers) (say, from 1 to 10).
They're not necessarily consecutive, but they are in ascending order.
I've prompted the user multiple times to enter a choice of the available numbers. When that number is entered, it is removed from the list along with any of its factors that may be there.
I've prevented the user from selecting prime numbers. However, at some point in time, there may be non-prime numbers there, which have no factors remaining.
I'm relatively new to Python, so I'm having trouble implementing:
Checking if the number selected has no factors remaining (even if it is not prime).
Checking if only prime numbers remain, or numbers without
factors.
I'm thinking of using for statements, but I'm not sure exactly how to implement them. Can anyone offer advice, or code? Thanks in advance...
A:
To check if there are any factors of the number guess remaining you can use any():
hasfactors = any(guess % n == 0 for n in numbers)
To check if all the remaining numbers are prime, all() can be used. (Since you say you already prevented the user from inputting prime numbers I assume you have some kind of isprime() function):
onlyprimes = all(isprime(n) for n in numbers)
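A runnable sketch putting both checks together. The trial-division `isprime` below is an assumption, since the original poster's own primality helper isn't shown:

```python
# Hypothetical isprime helper -- the original post's version isn't shown.
def isprime(n):
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

numbers = [4, 5, 7, 9]   # remaining choices (1 excluded to avoid trivial factors)
guess = 8

hasfactors = any(guess % n == 0 for n in numbers)   # 8 % 4 == 0 -> True
onlyprimes = all(isprime(n) for n in numbers)       # 4 and 9 are composite -> False

print(hasfactors, onlyprimes)
```

Both `any()` and `all()` short-circuit, so they stop scanning the list as soon as the answer is decided.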
A:
For the first problem, you could use list comprehensions to build a new list where each element is not the number selected and not a factor of the number selected (see code). Compare this with your original list.
$ python
>>> selected_number = 6
>>> [x for x in range(1,11) if selected_number % x]
[4, 5, 7, 8, 9, 10]
For the second problem, check if each element is prime. If not, check for numbers without factors; for each element, you might mod over the original list and check if it's a list of zeros. I'm sure there's a faster way, though.
A:
If L is a list of non-zero numbers, the list of those which are factors of a number N is:
factors = [x for x in L if N % x == 0]
The list will simply be empty if N has no factors in L, of course.
I'm not sure what you mean by "numbers without factors", unless you mean "primes" (?) -- there have been several SO questions and answers on checking primality in Python, I'd use gmpy.is_prime (from my extension gmpy) but then of course I'm biased;-).
If you mean, "all numbers that have no factors in L", well, there's infinitely many of them, so it's kind of hard to make a list of them all. An unbounded generator for them:
import itertools
def nofactorsinlist(L):
    for i in itertools.count():
        if any(x for x in L if i % x == 0):
            continue
        yield i
Some optimizations would be possible, but this one is really simple and I'm loath to add complicated optimizations without understanding exactly what it is that you're after!-)
|
Python - Check if numbers in list are factors of a number
|
I have a list of numbers (integers) (say, from 1 to 10).
They're not necessarily consecutive, but they are in ascending order.
I've prompted the user multiple times to enter a choice of the available numbers. When that number is entered, it is removed from the list along with any of its factors that may be there.
I've prevented the user from selecting prime numbers. However, at some point in time, there may be non-prime numbers there, which have no factors remaining.
I'm relatively new to Python, so I'm having trouble implementing:
Checking if the number selected has no factors remaining (even if it is not prime).
Checking if only prime numbers remain, or numbers without
factors.
I'm thinking of using for statements, but I'm not sure exactly how to implement them. Can anyone offer advice, or code? Thanks in advance...
|
[
"To check if there are any factors of the number guess remaining you can use any():\nhasfactors = any(guess % n == 0 for n in numbers)\n\nTo check if all the remaining numbers are prime, all() can be used. (Since you say you already prevented the user from inputting prime numbers I assume you have some kind of isprime() function):\nonlyprimes = all(isprime(n) for n in numbers)\n\n",
"For the first problem, you could use list comprehensions to build a new list where each element is not the number selected and not a factor of the number selected (see code). Compare this with your original list.\n$ python\n>>> selected_number = 6\n>>> [x for x in range(1,11) if selected_number % x]\n[4, 5, 7, 8, 9, 10]\n\nFor the second problem, check if each element is prime. If not, check for numbers without factors; for each element, you might mod over the original list and check if it's a list of zeros. I'm sure there's a faster way, though.\n",
"If L is a list of non-zero numbers, the list of those which are factors of a number N is:\nfactors = [x for x in L if N % x == 0]\n\nThe list will simply be empty if N has no factors in L, of course.\nI'm not sure what you mean by \"numbers without factors\", unless you mean \"primes\" (?) -- there have been several SO questions and answers on checking primality in Python, I'd use gmpy.is_prime (from my extension gmpy) but then of course I'm biased;-).\nIf you mean, \"all numbers that have no factors in L\", well, there's infinitely many of them, so it's kind of hard to make a list of them all. An unbounded generator for them:\nimport itertools\n\ndef nofactorsinlist(L):\n for i in itertools.count():\n if any(x for x in L if i % x == 0):\n continue\n yield i\n\nSome optimizations would be possible, but this one is really simple and I'm loath to add complicated optimizations without understanding exactly what it is that you're after!-)\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"factors",
"list",
"python",
"python_3.x"
] |
stackoverflow_0002442972_factors_list_python_python_3.x.txt
|
Q:
Concatenate generator and item
I have a generator (numbers) and a value (number). I would like to iterate over these as if they were one sequence:
i for i in tuple(my_generator) + (my_value,)
The problem is, as far as I understand, this creates 3 tuples only to immediately discard them and also copies items in "my_generator" once.
A better approach would be:
def con(seq, item):
    for i in seq:
        yield i
    yield item
i for i in con(my_generator, my_value)
But I was wondering whether it is possible to do it without that function definition
A:
itertools.chain treats several sequences as a single sequence.
So you could use it as:
import itertools
def my_generator():
    yield 1
    yield 2

for i in itertools.chain(my_generator(), [5]):
    print i
which would output:
1
2
5
A:
itertools.chain()
A:
Try itertools.chain(*iterables). Docs here: http://docs.python.org/library/itertools.html#itertools.chain
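Put together, the chain approach from these answers needs no helper function at all. Note that for this use case you pass the iterables directly rather than unpacking with `*`, and you wrap the single value in a one-item iterable:

```python
import itertools

# The generator is consumed lazily; wrapping the single value in a
# one-item tuple avoids building any intermediate tuples at all.
my_generator = (n * n for n in range(3))
my_value = 99

combined = list(itertools.chain(my_generator, (my_value,)))
print(combined)
```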
|
Concatenate generator and item
|
I have a generator (numbers) and a value (number). I would like to iterate over these as if they were one sequence:
i for i in tuple(my_generator) + (my_value,)
The problem is, as far as I understand, this creates 3 tuples only to immediately discard them and also copies items in "my_generator" once.
A better approach would be:
def con(seq, item):
    for i in seq:
        yield i
    yield item
i for i in con(my_generator, my_value)
But I was wondering whether it is possible to do it without that function definition
|
[
"itertools.chain treats several sequences as a single sequence.\nSo you could use it as:\nimport itertools\n\ndef my_generator():\n yield 1\n yield 2\n\nfor i in itertools.chain(my_generator(), [5]):\n print i\n\nwhich would output:\n1\n2\n5\n\n",
"itertools.chain()\n",
"Try itertools.chain(*iterables). Docs here: http://docs.python.org/library/itertools.html#itertools.chain\n"
] |
[
46,
5,
5
] |
[] |
[] |
[
"generator",
"iterator",
"list_comprehension",
"python"
] |
stackoverflow_0002443252_generator_iterator_list_comprehension_python.txt
|
Q:
Make Python Socket Server More Efficient
I have very little experience working with sockets and multithreaded programming so to learn more I decided to see if I could hack together a little python socket server to power a chat room. I ended up getting it working pretty well but then I noticed my server's CPU usage spiked up over 100% when I had it running in the background.
Here is my code in full: http://gist.github.com/332132
I know this is a pretty open ended question so besides just helping with my code are there any good articles I could read that could help me learn more about this?
My full code:
import select
import socket
import sys
import threading
from daemon import Daemon

class Server:
    def __init__(self):
        self.host = ''
        self.port = 9998
        self.backlog = 5
        self.size = 1024
        self.server = None
        self.threads = []
        self.send_count = 0

    def open_socket(self):
        try:
            self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            self.server.bind((self.host,self.port))
            self.server.listen(5)
            print "Server Started..."
        except socket.error, (value,message):
            if self.server:
                self.server.close()
            print "Could not open socket: " + message
            sys.exit(1)

    def remove_thread(self, t):
        t.join()

    def send_to_children(self, msg):
        self.send_count = 0
        for t in self.threads:
            t.send_msg(msg)
        print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads))

    def run(self):
        self.open_socket()
        input = [self.server,sys.stdin]
        running = 1
        while running:
            inputready,outputready,exceptready = select.select(input,[],[])
            for s in inputready:
                if s == self.server:
                    # handle the server socket
                    c = Client(self.server.accept(), self)
                    c.start()
                    self.threads.append(c)
                    print "Num of clients: "+str(len(self.threads))
        self.server.close()
        for c in self.threads:
            c.join()

class Client(threading.Thread):
    def __init__(self,(client,address), server):
        threading.Thread.__init__(self)
        self.client = client
        self.address = address
        self.size = 1024
        self.server = server
        self.running = True

    def send_msg(self, msg):
        if self.running:
            self.client.send(msg)
            self.server.send_count += 1

    def run(self):
        while self.running:
            data = self.client.recv(self.size)
            if data:
                print data
                self.server.send_to_children(data)
            else:
                self.running = False
                self.server.threads.remove(self)
                self.client.close()

"""
Run Server
"""
class DaemonServer(Daemon):
    def run(self):
        s = Server()
        s.run()

if __name__ == "__main__":
    d = DaemonServer('/var/servers/fserver.pid')
    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            d.start()
        elif 'stop' == sys.argv[1]:
            d.stop()
        elif 'restart' == sys.argv[1]:
            d.restart()
        else:
            print "Unknown command"
            sys.exit(2)
        sys.exit(0)
    else:
        print "usage: %s start|stop|restart" % sys.argv[0]
        sys.exit(2)
A:
There are several possible race conditions in your code, but they would threaten correctness rather than performance: fixing them e.g. by locking would definitely not improve performance.
Rather, I'd focus on what good you think those threads are doing, at all -- since the core of your code is a select.select call, why not focus on that... and a totally asynchronous, thus more effective... server, instead of bouncing some tasks off to threads which basically are just overhead. Read when some input is ready (as you're doing), write when some socket is ready for output, &c -- it's simpler and faster than the current mix of threads and async.
Programming async servers directly on top of select.select is quite a low-level approach, and while instructive it's not really suitable for production. Consider using the asyncore and asynchat modules of the Python standard library for a modestly higher abstraction level, or the twisted third-party package for a much higher boost (including the ability to implement the underlying "Reactor" design pattern by more effective means than old select -- there's poll, kqueues, etc, depending on what OS you're on, and Twisted can let you choose the implementation depending on your platform, while keeping the same Reactor interface).
I think I cover these various possibilities decently, if concisely, in the "server-side sockets" chapter of Python in a Nutshell 2nd Ed -- which you can get for free online by getting a trial subscription to O'Reilly's "Safari Online" site, or (illegally;-) by finding and using one of the many pirate sites hosting pirate copies of books (assuming of course you don't want to spend money for it by getting it "all legal and proper";-). I think you can freely download a zipfile with all example code from O'Reilly's website, anyway.
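The single-threaded event-loop design this answer advocates is easiest to see concretely. The sketch below uses the stdlib `selectors` module, the modern successor to this style of code (asyncore itself was removed in Python 3.12); it is an illustration of the pattern, not the original poster's bot:

```python
import selectors
import socket
import threading
import time

# One selector replaces both the select.select call and the per-client
# threads: accept and read are plain callbacks dispatched by the loop.
sel = selectors.DefaultSelector()
clients = []

def accept(server):
    conn, _addr = server.accept()
    conn.setblocking(False)
    clients.append(conn)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(1024)
    if data:
        for c in clients:               # broadcast to every connected client
            c.sendall(data)
    else:                               # empty read means the peer closed
        sel.unregister(conn)
        clients.remove(conn)
        conn.close()

def serve(server, rounds):
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    for _ in range(rounds):             # bounded instead of `while 1`, for demo
        for key, _mask in sel.select(timeout=1.0):
            key.data(key.fileobj)       # dispatch to accept() or read()

# quick demonstration: run the loop in a background thread and chat with it
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))              # port 0: the OS picks a free port
srv.listen(5)
port = srv.getsockname()[1]
threading.Thread(target=serve, args=(srv, 50), daemon=True).start()

a = socket.create_connection(("127.0.0.1", port))
b = socket.create_connection(("127.0.0.1", port))
time.sleep(0.3)                         # let the loop accept both clients
a.sendall(b"hello")
msg = b.recv(1024)                      # the other client sees the broadcast
print(msg)
```

Because nothing blocks except the selector itself, the loop sits idle at 0% CPU between events, which is exactly the behavior the original threaded version failed to achieve.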
|
Make Python Socket Server More Efficient
|
I have very little experience working with sockets and multithreaded programming so to learn more I decided to see if I could hack together a little python socket server to power a chat room. I ended up getting it working pretty well but then I noticed my server's CPU usage spiked up over 100% when I had it running in the background.
Here is my code in full: http://gist.github.com/332132
I know this is a pretty open ended question so besides just helping with my code are there any good articles I could read that could help me learn more about this?
My full code:
import select
import socket
import sys
import threading
from daemon import Daemon
class Server:
def __init__(self):
self.host = ''
self.port = 9998
self.backlog = 5
self.size = 1024
self.server = None
self.threads = []
self.send_count = 0
def open_socket(self):
try:
self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.server.bind((self.host,self.port))
self.server.listen(5)
print "Server Started..."
except socket.error, (value,message):
if self.server:
self.server.close()
print "Could not open socket: " + message
sys.exit(1)
def remove_thread(self, t):
t.join()
def send_to_children(self, msg):
self.send_count = 0
for t in self.threads:
t.send_msg(msg)
print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads))
def run(self):
self.open_socket()
input = [self.server,sys.stdin]
running = 1
while running:
inputready,outputready,exceptready = select.select(input,[],[])
for s in inputready:
if s == self.server:
# handle the server socket
c = Client(self.server.accept(), self)
c.start()
self.threads.append(c)
print "Num of clients: "+str(len(self.threads))
self.server.close()
for c in self.threads:
c.join()
class Client(threading.Thread):
def __init__(self,(client,address), server):
threading.Thread.__init__(self)
self.client = client
self.address = address
self.size = 1024
self.server = server
self.running = True
def send_msg(self, msg):
if self.running:
self.client.send(msg)
self.server.send_count += 1
def run(self):
while self.running:
data = self.client.recv(self.size)
if data:
print data
self.server.send_to_children(data)
else:
self.running = False
self.server.threads.remove(self)
self.client.close()
"""
Run Server
"""
class DaemonServer(Daemon):
def run(self):
s = Server()
s.run()
if __name__ == "__main__":
d = DaemonServer('/var/servers/fserver.pid')
if len(sys.argv) == 2:
if 'start' == sys.argv[1]:
d.start()
elif 'stop' == sys.argv[1]:
d.stop()
elif 'restart' == sys.argv[1]:
d.restart()
else:
print "Unknown command"
sys.exit(2)
sys.exit(0)
else:
print "usage: %s start|stop|restart" % sys.argv[0]
sys.exit(2)
|
[
"There are several possible race conditions in your code, but they would threaten correctness rather than performance: fixing them e.g. by locking would definitely not improve performance.\nRather, I'd focus on what good you think those threads are doing, at all -- since the core of your code is a select.select call, why not focus on that... and a totally asynchronous, thus more effective... server, instead of bouncing some tasks off to threads which basically are just overhead. Read when some input is ready (as you're doing), write when some socket is ready for output, &c -- it's simpler and faster than the current mix of threads and async.\nProgramming async servers directly on top of select.select is quite a low-level approach, and while instructive it's not really suitable for production. Consider using the asyncore and asynchat modules of the Python standard library for a modestly higher abstraction level, or the twisted third-party package for a much higher boost (including the ability to implement the underlying \"Reactor\" design pattern by more effective means than old select -- there's poll, kqueues, etc, depending on what OS you're on, and Twisted can let you choose the implementation depending on your platform, while keeping the same Reactor interface).\nI think I cover these various possibilities decently, if concisely, in the \"server-side sockets\" chapter of Python in a Nutshell 2nd Ed -- which you can get for free online by getting a trial subscription to O'Reilly's \"Safari Online\" site, or (illegally;-) by finding and using one of the many pirate sites hosting pirate copies of books (assuming of course you don't want to spend money for it by getting it \"all legal and proper\";-). I think you can freely download a zipfile with all example code from O'Reilly's website, anyway.\n"
] |
[
6
] |
[] |
[] |
[
"multithreading",
"python",
"sockets"
] |
stackoverflow_0002443226_multithreading_python_sockets.txt
|
Q:
passing self data into a recursive function
I'm trying to set a function to do something like this
def __binaryTreeInsert(self, toInsert, currentNode=getRoot(), parentNode=None):
where current node starts as root, and then we change it to a different node in the method and recursively call it again.
However, I cannot get the 'currentNode=getRoot()' to work. If I try calling the function getRoot() (as above) it says I'm not giving it all the required variables, but if I try to call self.getRoot() it complains that self is an undefined variable. Is there a way I can do this without having to specify the root while calling this method?
EDIT: The base case of this method is already
if currentNode == None:
so using that to set the root wouldn't work
A:
While arg=None is the idiomatic Python sentinel value for an non-supplied argument, it doesn't have to be None. In Lua, for instance, the idiomatic non-supplied argument is an empty table. We can actually apply that to this case:
class Foo:
    sentinel = {}
    def bar(self, arg=sentinel):
        if arg is self.sentinel:
            print "You didn't supply an argument!"
        else:
            print "The argument was", arg

f = Foo()
f.bar(123)
f.bar()
f.bar(None)
f.bar({})
Output:
The argument was 123
You didn't supply an argument!
The argument was None
The argument was {}
This works for any case except explicitly passing Foo.sentinel, because Foo.sentinel is guaranteed to have a unique address -- meaning, x is Foo.sentinel is only true when x is Foo.sentinel :) Thus, due to the closure we've created around Foo.sentinel, there is only one object that can create an ambiguous situation, and it will never be used by accident.
A:
You can do
def __binaryTreeInsert(self, toInsert, currentNode=None, parentNode=None):
    if currentNode is None:
        currentNode = self.getRoot()
...
A:
When a function or method is defined, the def line is evaluated immediately, including any keyword arguments. For this reason, things like function calls and mutable objects are usually not appropriate for default arguments.
The solution is instead to use a sentinel value. None is most common, but for the cases that None would be a valid value, you can use another sentinel, for example:
not_provided = object()
def _binaryTreeInsert(self, toInsert, currentNode=not_provided, parentNode=None):
    if currentNode is not_provided:
        currentNode = self.getRoot()
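A runnable Python 3 version of this sentinel pattern, with a minimal stand-in `Tree` class (hypothetical, since the original poster's class isn't shown):

```python
_not_provided = object()   # unique object; `is` checks can never collide with it

class Tree:
    def __init__(self, root):
        self._root = root

    def get_root(self):
        return self._root

    def insert(self, to_insert, current=_not_provided):
        if current is _not_provided:   # the caller genuinely omitted the argument
            current = self.get_root()
        return (to_insert, current)    # stand-in for the real insertion logic

t = Tree("root")
print(t.insert("x"))         # defaults to the root
print(t.insert("x", None))   # an explicit None passes through untouched
```

Unlike the `currentNode=None` variant, this version still defaults correctly when `None` is itself a meaningful value for a node.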
|
passing self data into a recursive function
|
I'm trying to set a function to do something like this
def __binaryTreeInsert(self, toInsert, currentNode=getRoot(), parentNode=None):
where current node starts as root, and then we change it to a different node in the method and recursively call it again.
However, I cannot get the 'currentNode=getRoot()' to work. If I try calling the function getRoot() (as above) it says I'm not giving it all the required variables, but if I try to call self.getRoot() it complains that self is an undefined variable. Is there a way I can do this without having to specify the root while calling this method?
EDIT: The base case of this method is already
if currentNode == None:
so using that to set the root wouldn't work
|
[
"While arg=None is the idiomatic Python sentinel value for an non-supplied argument, it doesn't have to be None. In Lua, for instance, the idiomatic non-supplied argument is an empty table. We can actually apply that to this case:\nclass Foo:\n sentinel = {}\n def bar(self, arg=sentinel):\n if arg is self.sentinel:\n print \"You didn't supply an argument!\"\n else:\n print \"The argument was\", arg\n\nf = Foo()\nf.bar(123)\nf.bar()\nf.bar(None)\nf.bar({})\n\nOutput:\n\nThe argument was 123\nYou didn't supply an argument!\nThe argument was None\nThe argument was {}\n\nThis works for any case except explicitly passing Foo.sentinel, because Foo.sentinel is guaranteed to have a unique address -- meaning, x is Foo.sentinel is only true when x is Foo.sentinel :) Thus, due to the closure we've created around Foo.sentinel, there is only one object that can create an ambiguous situation, and it will never be used by accident.\n",
"You can do\ndef __binaryTreeInsert(self, toInsert, currentNode=None, parentNode=None):\n if currentNode is None:\n currentNode = self.getRoot()\n\n...\n\n",
"When a function or method is defined, the def line is evaluated immediately, including any keyword arguments. For this reason, things like function calls and mutable objects are usually not appropriate for default arguments.\nThe solution is instead to use a sentinel value. None is most common, but for the cases that None would be a valid value, you can use another sentinel, for example:\nnot_provided = object()\ndef _binaryTreeInsert(self, toInsert, currentNode=not_provided, parentNode=None):\n if currentNode is not_provided:\n currentNode = self.getRoot()\n\n"
] |
[
2,
0,
0
] |
[
"def __binaryTreeInsert(self, toInsert, currentNode=0, parentNode=None):\n if not currentNode: \n currentNode = self.getRoot()\n\n"
] |
[
-1
] |
[
"python",
"recursion",
"self"
] |
stackoverflow_0002443264_python_recursion_self.txt
|
Q:
Python Least-Squares Natural Splines
I am trying to find a numerical package which will fit a natural spline which minimizes weighted least squares.
There is a package in scipy which does what I want for unnatural splines.
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate, randn
x = np.arange(0,5,1.0/6)
xs = np.arange(0,5,1.0/500)
y = np.sin(x+1) + .2*np.random.rand(len(x)) -.1
knots = np.array([1,2,3,4])
tck = interpolate.splrep(x,y,s=0,k=3,t=knots,task=-1)
ys = interpolate.splev(xs,tck,der=0)
plt.figure()
plt.plot(xs,ys,x,y,'x')
A:
The spline.py file inside of this tar file from this page does a natural spline fit by default. There is also some code on this page that claims to do mostly what you want. The pyD3D package also has a natural spline function in its pyDataUtils module. This last one looks the most promising to me. However, it doesn't appear to have the option of setting your own knots. Maybe if you look at the source you can find a way to rectify that.
Also, I found this message on the Scipy mailing list which says that using s=0.0 (as in your given code) makes splines fitted using your above procedure natural according the writer of the message. I did find this splmake function that has an option to do a natural spline fit, but upon looking at the source I found that it isn't implemented yet.
|
Python Least-Squares Natural Splines
|
I am trying to find a numerical package which will fit a natural spline which minimizes weighted least squares.
There is a package in scipy which does what I want for unnatural splines.
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate, randn
x = np.arange(0,5,1.0/6)
xs = np.arange(0,5,1.0/500)
y = np.sin(x+1) + .2*np.random.rand(len(x)) -.1
knots = np.array([1,2,3,4])
tck = interpolate.splrep(x,y,s=0,k=3,t=knots,task=-1)
ys = interpolate.splev(xs,tck,der=0)
plt.figure()
plt.plot(xs,ys,x,y,'x')
|
[
"The spline.py file inside of this tar file from this page does a natural spline fit by default. There is also some code on this page that claims to mostly what you want. The pyD3D package also has a natural spline function in its pyDataUtils module. This last one looks the most promising to me. However, it doesn't appear to have the option of setting your own knots. Maybe if you look at the source you can find a way to rectify that.\nAlso, I found this message on the Scipy mailing list which says that using s=0.0 (as in your given code) makes splines fitted using your above procedure natural according the writer of the message. I did find this splmake function that has an option to do a natural spline fit, but upon looking at the source I found that it isn't implemented yet. \n"
] |
[
6
] |
[] |
[] |
[
"python",
"scipy",
"spline"
] |
stackoverflow_0002441058_python_scipy_spline.txt
|
Q:
How to make socket.recv(500) not stop a while loop
I made an IRC bot which uses a while true loop to receive whatever is said.
To receive I use recv(500), but that stops the loop if there isn't anything to receive, and I need the loop to continue even when nothing arrives.
I need a makeshift timer to continue running.
Example code:
/A lot of stuff/
timer=0
while 1:
    timer=timer+1
    line=s.recv(500) #If there is nothing to receive, the loop and thus the timer stop.
/A lot of stuff/
So either I need a way to stop it stopping the loop, or I need a better timer.
A:
You can settimeout on the socket so that the call returns promptly (with a suitable exception, so you'll need a try/except around it) if nothing's there -- a timeout of 0.1 seconds actually works better than non-blocking sockets in most conditions.
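A minimal illustration of the settimeout approach, using `socket.socketpair` as a stand-in for the real IRC connection:

```python
import socket

# settimeout makes recv raise socket.timeout after 0.1s instead of
# blocking forever, so the surrounding loop (and its timer) keeps running.
a, b = socket.socketpair()         # connected pair, stand-in for the IRC server
a.settimeout(0.1)

timer = 0
lines = []
for _ in range(3):                 # bounded loop in place of `while 1`
    timer = timer + 1
    try:
        lines.append(a.recv(500))
    except socket.timeout:
        pass                       # nothing to read yet; keep looping
    if timer == 2:
        b.sendall(b"PING :server\r\n")   # simulate the server finally talking

print(timer, lines)
```

The timer advances on every iteration regardless of whether data arrived, which is exactly the behavior the original blocking recv prevented.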
A:
This is going to prove a bad way to design a network application. I recommend looking into twisted, a networking library with an excellent implementation of the IRC protocol for making a client (like your bot) in twisted.words.protocols.irc.
http://www.habnabit.org/twistedex.html is an example of a very basic IRC bot written using twisted. With very little code, you are able to access a whole, correct, efficient, reconnecting implementation of IRC.
If you are intent on writing this from a socket level yourself, I still recommend studying a networking library like twisted to learn about how to effectively implement network apps. Your current technique will prove less effective than desired.
A:
I usually use irclib which takes care of this sort of detail for you.
A:
If you want to do this with low-level python, consider using the ready_sockets = select.select([s.fileno()], [], [], 0.1) -- this will test the socket s for readability. If your socket's file number is not returned in ready_sockets, then there is no data to read.
Be careful not to use the timeout of "0" if you are going to call select repeatedly in a loop that does not otherwise yield the CPU -- that would consume 100% of the CPU as the loop executes. I gave 0.1 seconds timeout as an example; in this case, your timer variable would be counting tenths of a second.
Here's an example:
timer=0
sockets_to_check = [s.fileno()]

while 1:
    ready_sockets = select.select(sockets_to_check, [], sockets_to_check, 0.1)
    if (len(ready_sockets[2]) > 0):
        # Handle socket error or closed connection here -- our socket appeared
        # in the 'exceptional sockets' return value so something has happened to
        # it.
        pass
    elif (len(ready_sockets[0]) > 0):
        line = s.recv(500)
    else:
        timer=timer+1 # Note that timer is not incremented if the select did not
                      # incur a full 0.1 second delay. Although we may have just
                      # waited for 0.09999 seconds without accounting for that. If
                      # your timer must be perfect, you will need to implement it
                      # differently. If it is used only for time-out testing, this
                      # is fine.
Note that the above code takes advantage of the fact that your input lists contain only one socket. If you were to use this approach with multiple sockets, which select.select does support, the len(ready_sockets[x]) > 0 test would not reveal which socket is ready for reading or has an exception.
|
How to make socket.recv(500) not stop a while loop
|
I made an IRC bot which uses a while true loop to receive whatever is said.
To receive I use recv(500), but that stops the loop if there isn't anything to receive, and I need the loop to continue even when nothing arrives.
I need a makeshift timer to continue running.
Example code:
/A lot of stuff/
timer=0
while 1:
    timer=timer+1
    line=s.recv(500) #If there is nothing to receive, the loop and thus the timer stop.
/A lot of stuff/
So either I need a way to stop it stopping the loop, or I need a better timer.
|
[
"You can settimeout on the socket so that the call returns promptly (with a suitable exception, so you'll need a try/except around it) if nothing's there -- a timeout of 0.1 seconds actually works better than non-blocking sockets in most conditions.\n",
"This is going to prove a bad way to design a network application. I recommend looking into twisted, a networking library with an excellent implementation of the IRC protocol for making a client (like your bot) in twisted.words.protocols.irc.\nhttp://www.habnabit.org/twistedex.html is an example of a very basic IRC bot written using twisted. With very little code, you are able to access a whole, correct, efficient, reconnecting implementation of IRC.\nIf you are intent on writing this from a socket level yourself, I still recommend studying a networking library like twisted to learn about how to effectively implement network apps. Your current technique will prove less effective than desired.\n",
"I usually use irclib which takes care of this sort of detail for you.\n",
"If you want to do this with low-level python, consider using the ready_sockets = select.select([s.fileno()], [], [], 0.1) -- this will test the socket s for readability. If your socket's file number is not returned in ready_sockets, then there is no data to read.\nBe careful not to use the timout of \"0\" if you are going to call select repeatedly in a loop that does not otherwise yield the CPU -- that would consume 100% of the CPU as the loop executes. I gave 0.1 seconds timeout as an example; in this case, your timer variable would be counting tenths of a second.\nHere's an example:\ntimer=0 \nsockets_to_check = [s.fileno()]\n\nwhile 1:\n ready_sockets = select.select(sockets_to_check, [], sockets_to_check, 0.1)\n if (len(ready_sockets[2]) > 0):\n # Handle socket error or closed connection here -- our socket appeared\n # in the 'exceptional sockets' return value so something has happened to \n # it.\n elif (len(ready_sockets[0]) > 0):\n line = s.recv(500)\n else:\n timer=timer+1 # Note that timer is not incremented if the select did not\n # incur a full 0.1 second delay. Although we may have just\n # waited for 0.09999 seconds without accounting for that. If\n # your timer must be perfect, you will need to implement it\n # differently. If it is used only for time-out testing, this \n # is fine.\n\nNote that the above code takes advantage of the fact that your input lists contain only one socket. If you were to use this approach with multiple sockets, which select.select does support, the len(ready_sockets[x]) > 0 test would not reveal which socket is ready for reading or has an exception.\n"
] |
[
2,
1,
1,
1
] |
[] |
[] |
[
"python",
"sockets",
"while_loop"
] |
stackoverflow_0002443383_python_sockets_while_loop.txt
|
Q:
Why can't I pass self as a named argument to an instance method in Python?
This works:
>>> def bar(x, y):
...     print x, y
...
>>> bar(y=3, x=1)
1 3
And this works:
>>> class Foo(object):
...     def bar(self, x, y):
...         print x, y
...
>>> z = Foo()
>>> z.bar(y=3, x=1)
1 3
And even this works:
>>> Foo.bar(z, y=3, x=1)
1 3
But why doesn't this work in Python 2.x?
>>> Foo.bar(self=z, y=3, x=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method bar() must be called with Foo instance as first argument (got nothing instead)
This makes metaprogramming more difficult, because it requires special case handling. I'm curious if it's somehow necessary by Python's semantics or just an artifact of implementation.
A:
z.bar is a bound method -- it already has an im_self attribute that becomes the first argument (conventionally named self) to the underlying function object, the bound method's im_func attribute. To override that you obviously need to re-bind im_self (edit: or call the im_func instead) -- whatever you do in terms of argument passing is not going to have any effect on it, of course. Yep, that's the documented way bound methods object work in Python (not just an implementation detail: every correct Python implementation has to do it exactly this way). So it's "necessary" in the sense that it's part of what makes Python exactly the language it is, as opposed to being a slighty or strongly different language. Of course you could design a different language that chooses to play by completely different rules, but -- it wouldn't be Python, of course.
Edit: the OP's edits clarified he's calling an unbound method, not a bound one. This still doesn't work, and the reason is clear from the error message the attempt gets:
TypeError: unbound method bar() must
be called with Foo instance as first
argument (got nothing instead)
The rule underlying this very clear error message is that the instance must be the first argument (so of course a positional one: named arguments have no ordering). The unbound method doesn't "know" (nor care) what that parameter's name may be (and the use of the name self for it is only a convention, not a rule of the Python language): it only cares about the unambiguous condition of "first argument" (among the positional ones, of course).
This obscure corner case could certainly be altered (with a Python 3.2 patch, if and when the language-changes "freeze" ends;-) by making unbound methods seriously more complicated: they'd have to introspect and save the first-argument's name at creation time, and check keyword arguments on each call just in case somebody's passing self by name instead of by position. I don't think this would break any existing, working code, it would only slow down just about every existing Python program. If you write and propose a patch implementing this complication, and get active on python-dev to advocate for it against the sure-to-come firestorm of opposition, you do no doubt stand a > 0 chance to ram it through -- good luck.
The rest of us, meanwhile, will keep getting the im_func attribute instead, as one absurdly-tiny extra step in what must indeed be a pretty complicated edifice of metaprogramming to warrant such a change -- it isn't a "special case" at all, compared with the horrid difficulties of adapting named-argument passing to builtins that don't take named arguments (and don't expose their "argument names" to easily allow the transformation of named arguments into positional ones -- now that would be a windmill worth attacking, IMHO: of all callables, builtins are the worst to metaprogram about, because of that!).
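To make the im_func workaround concrete: the plain function stored in the class `__dict__` (which is what im_func ultimately refers to) accepts self by keyword just fine, because at that level self is an ordinary named parameter. A minimal sketch (the Foo/bar names mirror the question; this works in both Python 2 and 3):

```python
class Foo(object):
    def bar(self, x, y):
        return (x, y)

z = Foo()

# Fetching the raw function from the class __dict__ bypasses the
# (un)bound-method wrapper entirely, so 'self' is just another
# keyword argument again.
raw_bar = Foo.__dict__['bar']
print(raw_bar(self=z, y=3, x=1))  # (1, 3)
```

In Python 2 this is equivalent to calling `Foo.bar.im_func(...)`; in Python 3 unbound methods are gone and `Foo.bar` is already the plain function, so the original call in the question simply works there.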
|
Why can't I pass self as a named argument to an instance method in Python?
|
This works:
>>> def bar(x, y):
... print x, y
...
>>> bar(y=3, x=1)
1 3
And this works:
>>> class Foo(object):
... def bar(self, x, y):
... print x, y
...
>>> z = Foo()
>>> z.bar(y=3, x=1)
1 3
And even this works:
>>> Foo.bar(z, y=3, x=1)
1 3
But why doesn't this work in Python 2.x?
>>> Foo.bar(self=z, y=3, x=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method bar() must be called with Foo instance as first argument (got nothing instead)
This makes metaprogramming more difficult, because it requires special case handling. I'm curious if it's somehow necessary by Python's semantics or just an artifact of implementation.
|
[
"z.bar is a bound method -- it already has an im_self attribute that becomes the first argument (conventionally named self) to the underlying function object, the bound method's im_func attribute. To override that you obviously need to re-bind im_self (edit: or call the im_func instead) -- whatever you do in terms of argument passing is not going to have any effect on it, of course. Yep, that's the documented way bound methods object work in Python (not just an implementation detail: every correct Python implementation has to do it exactly this way). So it's \"necessary\" in the sense that it's part of what makes Python exactly the language it is, as opposed to being a slighty or strongly different language. Of course you could design a different language that chooses to play by completely different rules, but -- it wouldn't be Python, of course.\nEdit: the OP's edits clarified he's calling an unbound method, not a bound one. This still doesn't work, and the reason is clear from the error message the attempt gets:\n\nTypeError: unbound method bar() must\n be called with Foo instance as first\n argument (got nothing instead)\n\nThe rule underlying this very clear error message is that the instance must be the first argument (so of course a positional one: named arguments have no ordering). The unbound method doesn't \"know\" (nor care) what that parameter's name may be (and the use of name self for it is only a convention, not a rule of the Python language): it only care about the unambiguous condition of \"first argument\" (among the positional ones, of course).\nThis obscure corner case could certainly be altered (with a Python 3.2 patch, if and when the language-changes \"freeze\" ends;-) by making unbound methods seriously more complicated: they'd have to introspect and save the first-argument's name at creation time, and check keyword arguments on each call just in case somebody's passing self by name instead of by position. 
I don't think this would break any existing, working code, it would only slow down just about every existing Python program. If you write and propose a patch implementing this complication, and get active on python-dev to advocate for it against the sure-to-come firestorm of opposition, you do no doubt stand a > 0 chance to ram it through -- good luck.\nThe rest of us, meanwhile, will keep getting the im_func attribute instead, as one absurdly-tiny extra step in what has to be a pretty complicated inded edifice of metaprogramming to warrant such a change -- it isn't a \"special case\" at all, compared with the horrid difficulties of adapting named-argument passing to builtins that don't take named arguments (and don't expose their \"argument names\" to easily allow the transformation of named arguments into positional ones (now that would be a windmill worth attacking, IMHO: of all callables, builtins are the worst to metaprogram about, because of that!-).\n"
] |
[
6
] |
[] |
[] |
[
"language_lawyer",
"metaprogramming",
"methods",
"python",
"python_2.x"
] |
stackoverflow_0002443673_language_lawyer_metaprogramming_methods_python_python_2.x.txt
|
Q:
Piping EOF problems with stdio and C++/Python
I got some problems with EOF and stdio in a communication pipeline between a python process and a C++ program. I have no idea what I am doing wrong. When I see an EOF in my program I clear the stdin and next round I try to read in a new line. The problem is: for some reason the getline function immediately returns an EOF (from the second run on; the first works just as intended) instead of waiting for new input from the python process... Any idea?
Alright, here is the code:
#include <string>
#include <iostream>
#include <iomanip>
#include <limits>
using namespace std;
int main(int argc, char **argv) {
for (;;) {
string buf;
if (getline(cin,buf)) {
if (buf=="q") break;
/*****///do some stuff with input //my actual filter program
cout<<buf;
/*****/
} else {
if ((cin.rdstate() & istream::eofbit)!=0)cout<<"eofbit"<<endl;
if ((cin.rdstate() & istream::failbit)!=0)cout<<"failbit"<<endl;
if ((cin.rdstate() & istream::badbit)!=0)cout<<"badbit"<<endl;
if ((cin.rdstate() & istream::goodbit)!=0)cout<<"goodbit"<<endl;
cin.clear();
cin.ignore(numeric_limits<streamsize>::max());
//break;//I am not using break, because I
//want more input when the parent
//process puts data into stdin;
}
}
return 0;
}
and in python:
from subprocess import Popen, PIPE
import os
from time import sleep
proc=Popen(os.getcwd()+"/Pipingtest",stdout=PIPE,stdin=PIPE,stderr=PIPE);
while(1):
sleep(0.5)
print proc.communicate("1 1 1")
print "running"
A:
communicate in Python is a one-shot function. It sends the given input to a process, closes the input stream, and reads the output streams, waiting for the process to terminate.
There is no way you can 'restart' the pipe with the same process after "communicating".
Conversely, on the other side of the pipe, when you read EOF there is no more data to read. Any attempt to read will immediately return EOF; python has closed the pipe.
If you want to carry on communicating with the same pipe you need to use the subprocess' stdin and stdout members and not communicate (but be careful of the potential of deadlocks) and use something other than the end of stream to signal that the C++ side should do another "batch" of processing.
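A minimal sketch of that streaming approach, with a small Python line-echo child standing in for the asker's C++ filter (the child script here is a made-up stand-in): write one newline-terminated request at a time, flush, read one reply per request, and end the session with a sentinel line such as "q" instead of closing the pipe.

```python
import subprocess
import sys

# Hypothetical line-echo child in place of the C++ program: it reads
# lines until "q" and echoes each one back, flushing after every line.
child_src = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    line = line.strip()\n"
    "    if line == 'q':\n"
    "        break\n"
    "    sys.stdout.write(line + '\\n')\n"
    "    sys.stdout.flush()\n"
)
proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)

replies = []
for msg in ["1 1 1", "2 2 2"]:
    proc.stdin.write(msg + "\n")   # a newline delimits one request
    proc.stdin.flush()             # flush, or the child may block forever
    replies.append(proc.stdout.readline().strip())

proc.stdin.write("q\n")            # sentinel line instead of EOF
proc.stdin.flush()
proc.stdin.close()
proc.wait()
print(replies)  # ['1 1 1', '2 2 2']
```

Reading exactly one reply line per request is what keeps this sketch deadlock-free; if the child produced an unbounded amount of output per request, you would need threads or select to drain the pipe.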
|
Piping EOF problems with stdio and C++/Python
|
I got some problems with EOF and stdio in a communication pipeline between a python process and a C++ program. I have no idea what I am doing wrong. When I see an EOF in my program I clear the stdin and next round I try to read in a new line. The problem is: for some reason the getline function immediately returns an EOF (from the second run on; the first works just as intended) instead of waiting for new input from the python process... Any idea?
Alright, here is the code:
#include <string>
#include <iostream>
#include <iomanip>
#include <limits>
using namespace std;
int main(int argc, char **argv) {
for (;;) {
string buf;
if (getline(cin,buf)) {
if (buf=="q") break;
/*****///do some stuff with input //my actual filter program
cout<<buf;
/*****/
} else {
if ((cin.rdstate() & istream::eofbit)!=0)cout<<"eofbit"<<endl;
if ((cin.rdstate() & istream::failbit)!=0)cout<<"failbit"<<endl;
if ((cin.rdstate() & istream::badbit)!=0)cout<<"badbit"<<endl;
if ((cin.rdstate() & istream::goodbit)!=0)cout<<"goodbit"<<endl;
cin.clear();
cin.ignore(numeric_limits<streamsize>::max());
//break;//I am not using break, because I
//want more input when the parent
//process puts data into stdin;
}
}
return 0;
}
and in python:
from subprocess import Popen, PIPE
import os
from time import sleep
proc=Popen(os.getcwd()+"/Pipingtest",stdout=PIPE,stdin=PIPE,stderr=PIPE);
while(1):
sleep(0.5)
print proc.communicate("1 1 1")
print "running"
|
[
"communicate in python is a one shot function. It sends the given input to a process, closes the input stream, and reads the output streams, waiting for the process to terminate.\nThere is no way you can 'restart' the pipe with the same process after \"communicating\".\nConversely, on the other side of the pipe, when you read EOF there is no more data to read. Any attempt to read will immediately return EOF; python has closed the pipe.\nIf you want to carry on communicating with the same pipe you need to use the subprocess' stdin and stdout members and not communicate (but be careful of the potential of deadlocks) and use something other than the end of stream to signal that the C++ side should do another \"batch\" of processing.\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"iostream",
"pipe",
"python",
"stdin"
] |
stackoverflow_0002443701_c++_iostream_pipe_python_stdin.txt
|
Q:
Python 2.6 and 3.1.1, earlier version compatibility
I ordered three books to start teaching myself Python - a beginning programming book, a computer science book that uses Python for all of its code references, and a book on Python network programming. Unfortunately, I was a little too quick on ordering them, because I hadn't noticed the version differences. The beginner book is for python 3.1, the CS book is Python 2.3, and the last is Python 2.6. The CS book is also oriented towards beginners.
My question is, will the different versions be too different at this level for me to effectively use all three, or will I likely be able to get by learning from the 3.1 beginners book and then sort of teach myself from the 2.3 CS book, and be able to comprehend 2.6 code?
That probably didn't make sense. I hope it did.
A:
99.8% of 2.3 code is valid in 2.6. The 3.x book should be able to backfill the 2.x knowledge from 2.4 on, assuming it actually touches upon the relevant subjects. See the various "What's New" documentation to see, well, what's new.
|
Python 2.6 and 3.1.1, earlier version compatibility
|
I ordered three books to start teaching myself Python - a beginning programming book, a computer science book that uses Python for all of its code references, and a book on Python network programming. Unfortunately, I was a little too quick on ordering them, because I hadn't noticed the version differences. The beginner book is for python 3.1, the CS book is Python 2.3, and the last is Python 2.6. The CS book is also oriented towards beginners.
My question is, will the different versions be too different at this level for me to effectively use all three, or will I likely be able to get by learning from the 3.1 beginners book and then sort of teach myself from the 2.3 CS book, and be able to comprehend 2.6 code?
That probably didn't make sense. I hope it did.
|
[
"99.8% of 2.3 code is valid in 2.6. The 3.x book should be able to backfill the 2.x knowledge from 2.4 on, assuming it actually touches upon the relevant subjects. See the various \"What's New\" documentation to see, well, what's new.\n"
] |
[
4
] |
[] |
[] |
[
"compatibility",
"python",
"version"
] |
stackoverflow_0002443746_compatibility_python_version.txt
|
Q:
Implementing the factory design pattern using metaclasses
I found a lot of links on metaclasses, and most of them mention that they are useful for implementing factory methods. Can you show me an example of using metaclasses to implement the design pattern?
A:
I'd love to hear people's comments on this, but I think this is an example of what you want to do
class FactoryMetaclassObject(type):
def __init__(cls, name, bases, attrs):
"""__init__ will happen when the metaclass is constructed:
the class object itself (not the instance of the class)"""
pass
def __call__(*args, **kw):
"""
__call__ will happen when an instance of the class (NOT metaclass)
is instantiated. For example, We can add instance methods here and they will
be added to the instance of our class and NOT as a class method
(aka: a method applied to our instance of object).
Or, if this metaclass is used as a factory, we can return a whole different
classes' instance
"""
return "hello world!"
class FactorWorker(object):
__metaclass__ = FactoryMetaclassObject
f = FactorWorker()
print f.__class__
The result you will see is: type 'str'
A:
You can find some helpful examples at wikibooks/python, here and here
A:
There's no need. You can override a class's __new__() method in order to return a completely different object type.
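A sketch of that __new__-based alternative (the Shape/Circle/Square names are made up for illustration): the base class's __new__ inspects its argument and returns an instance of whichever concrete subclass fits, so no metaclass is needed for a simple factory.

```python
class Shape(object):
    """Factory base class: Shape("circle") yields a Circle instance."""
    def __new__(cls, kind):
        if cls is Shape:
            # Dispatch to a concrete subclass instead of building a Shape.
            cls = {"circle": Circle, "square": Square}[kind]
        return super(Shape, cls).__new__(cls)

    def __init__(self, kind):
        self.kind = kind

class Circle(Shape):
    pass

class Square(Shape):
    pass

s = Shape("circle")
print(type(s).__name__)  # Circle
```

Because the object returned by __new__ is still an instance of Shape, Python goes on to call __init__ on it as usual, which is the main convenience of this approach over a free-standing factory function.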
|
Implementing the factory design pattern using metaclasses
|
I found a lot of links on metaclasses, and most of them mention that they are useful for implementing factory methods. Can you show me an example of using metaclasses to implement the design pattern?
|
[
"I'd love to hear people's comments on this, but I think this is an example of what you want to do\nclass FactoryMetaclassObject(type):\n def __init__(cls, name, bases, attrs):\n \"\"\"__init__ will happen when the metaclass is constructed: \n the class object itself (not the instance of the class)\"\"\"\n pass\n\n def __call__(*args, **kw):\n \"\"\"\n __call__ will happen when an instance of the class (NOT metaclass)\n is instantiated. For example, We can add instance methods here and they will\n be added to the instance of our class and NOT as a class method\n (aka: a method applied to our instance of object).\n\n Or, if this metaclass is used as a factory, we can return a whole different\n classes' instance\n\n \"\"\"\n return \"hello world!\"\n\nclass FactorWorker(object):\n __metaclass__ = FactoryMetaclassObject\n\nf = FactorWorker()\nprint f.__class__\n\nThe result you will see is: type 'str'\n",
"You can find some helpful examples at wikibooks/python, here and here\n",
"There's no need. You can override a class's __new__() method in order to return a completely different object type.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"factory",
"metaclass",
"python"
] |
stackoverflow_0002443648_factory_metaclass_python.txt
|
Q:
Recognising tone of the audio
I have a guitar and I need my PC to be able to tell what note is being played, recognizing the tone. Is it possible to do this in Python, and is it possible with pygame? Being able to do it in pygame would be very helpful.
A:
To recognize the frequency of an audio signal, you would use the FFT (fast Fourier transform) algorithm. As far as I can tell, PyGame has no means to record audio, nor does it support the FFT transform.
First, you need to capture the raw sampled data from the sound card; this kind of data is called PCM (Pulse Code Modulation). The simplest way to capture audio in Python is using the PyAudio library (Python bindings to PortAudio). GStreamer can also do it, but it's probably overkill for your purposes. Capturing 16-bit samples at a rate of 48000 Hz is pretty typical and probably the best a normal sound card will give you.
Once you have raw PCM audio data, you can use the fftpack module from the scipy library to run the samples through the FFT transform. This will give you a frequency distribution of the analysed audio signal, i.e., how strong is the signal in certain frequency bands. Then, it's a matter of finding the frequency that has the strongest signal.
You might also need some additional filtering to suppress harmonic frequencies, though I am not sure.
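To make the "strongest frequency" idea concrete, here is a dependency-free sketch that synthesizes a 440 Hz tone and locates its spectral peak with a naive DFT. All numbers are illustrative: in real code the samples would come from PyAudio rather than math.sin, and you would use scipy.fftpack (or numpy.fft) instead of this deliberately slow loop.

```python
import math

SAMPLE_RATE = 8000
N = 512
freq = 440.0  # synthesize an A4 tone in place of microphone input
samples = [math.sin(2 * math.pi * freq * n / SAMPLE_RATE) for n in range(N)]

def dft_magnitude(samples, k):
    """Magnitude of DFT bin k -- naive O(N) per bin, purely to show
    the idea; scipy.fftpack.fft is the fast path in practice."""
    n_total = len(samples)
    re = im = 0.0
    for n, s in enumerate(samples):
        angle = 2 * math.pi * k * n / n_total
        re += s * math.cos(angle)
        im -= s * math.sin(angle)
    return math.hypot(re, im)

# Find the strongest bin (skipping DC) and map it back to Hz.
peak_bin = max(range(1, N // 2), key=lambda k: dft_magnitude(samples, k))
estimate = peak_bin * SAMPLE_RATE / N
print(estimate)  # close to 440, rounded to the nearest DFT bin
```

The bin width here is SAMPLE_RATE / N ≈ 15.6 Hz, which shows the speed/accuracy tradeoff directly: more samples give finer frequency resolution but a longer detection delay.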
A:
I once wrote a utility that does exactly that - it analyses what sounds are being played.
You can look at the code here (or you can download the whole project; it's integrated with Frets On Fire, an open-source Guitar Hero clone, to create a real guitar hero). It was tested using a guitar, a harmonica and whistles :) The code is ugly, but it works :)
I used pymedia to record, and scipy for the FFT.
Except for the basics that others already noted, I can give you some tips:
If you record from mic, there is a lot of noise. You'll have to use a lot of trial-and-error to set thresholds and sound clean up methods to get it working. One possible solution is to use an electric guitar, and plug its output to the audio-in. This worked best for me.
Specifically, there is a lot of noise around 50Hz. That's not so bad, but its overtones (see below) are at 100 Hz and 150 Hz, and that's close to guitar's G2 and D3.... As I said my solution was to switch to an electric guitar.
There is a tradeoff between speed of detection, and accuracy. The more samples you take, the longer it will take you to detect sounds, but you'll be more accurate detecting the exact pitch. If you really want to make a project out of this, you probably need to use several time scales.
When a tone is played, it has overtones. Sometimes, after a few seconds, the overtones might even be more powerful than the base tone. If you don't deal with this, your program will think it heard E2 for a few seconds, and then E3. To overcome this, I used a list of currently playing sounds, and then as long as this note, or one of its overtones, had energy in it, I assumed it was the same note being played....
It is specifically hard to detect when someone plays the same note 2 (or more) times in a row, because it's hard to distinguish between that and random fluctuations of sound level. You'll see in my code that I had to use a constant that had to be configured to match the guitar used (apparently every guitar has its own pattern of power fluctuations).
A:
You will need to use an audio library such as the built-in audioop.
Analyzing the specific note being played is not trivial, but can be done using those APIs.
Also could be of use: http://wiki.python.org/moin/PythonInMusic
A:
Very similar questions:
Audio Processing - Tone Recognition
Real time pitch detection
Real-time pitch detection using FFT
Turning sound into a sequence of notes is not an easy thing to do, especially with multiple notes at once. Read through Google results for "frequency estimation" and "note recognition".
I have some Python frequency estimation examples, but this is only a portion of what you need to solve to get notes from guitar recordings.
A:
This link shows someone doing it in VB.NET, but the basics of what needs to be done to achieve your goal are captured in the links below.
STFT
Cooley-Tukey
FFT
|
Recognising tone of the audio
|
I have a guitar and I need my PC to be able to tell what note is being played, recognizing the tone. Is it possible to do this in Python, and is it possible with pygame? Being able to do it in pygame would be very helpful.
|
[
"To recognize the frequency of an audio signal, you would use the FFT (fast Fourier transform) algorithm. As far as I can tell, PyGame has no means to record audio, nor does it support the FFT transform.\nFirst, you need to capture the raw sampled data from the sound card; this kind of data is called PCM (Pulse Code Modulation). The simplest way to capture audio in Python is using the PyAudio library (Python bindings to PortAudio). GStreamer can also do it, it's probably an overkill for your purposes. Capturing 16-bit samples at a rate of 48000 Hz is pretty typical and probably the best a normal sound card will give you.\nOnce you have raw PCM audio data, you can use the fftpack module from the scipy library to run the samples through the FFT transform. This will give you a frequency distribution of the analysed audio signal, i.e., how strong is the signal in certain frequency bands. Then, it's a matter of finding the frequency that has the strongest signal.\nYou might need some additional filtering to avoid harmonic frequencies I am not sure.\n",
"I once wrote a utility that does exactly that - it analyses what sounds are being played. \nYou can look at the code here (or you can download the whole project. its integrated with Frets On Fire, a guitar hero open source clone to create a real guitar hero). It was tested using a guitar, an harmonica and whistles :) The code is ugly, but it works :)\nI used pymedia to record, and scipy for the FFT.\nExcept for the basics that others already noted, I can give you some tips:\n\nIf you record from mic, there is a lot of noise. You'll have to use a lot of trial-and-error to set thresholds and sound clean up methods to get it working. One possible solution is to use an electric guitar, and plug its output to the audio-in. This worked best for me. \nSpecifically, there is a lot of noise around 50Hz. That's not so bad, but its overtones (see below) are at 100 Hz and 150 Hz, and that's close to guitar's G2 and D3.... As I said my solution was to switch to an electric guitar.\nThere is a tradeoff between speed of detection, and accuracy. The more samples you take, the longer it will take you to detect sounds, but you'll be more accurate detecting the exact pitch. If you really want to make a project out of this, you probably need to use several time scales.\nWhen a tones is played, it has overtones. Sometimes, after a few seconds, the overtones might even be more powerful than the base tone. If you don't deal with this, your program with think it heard E2 for a few seconds, and then E3. To overcome this, I used a list of currently playing sounds, and then as long as this note, or one of its overtones had energy in it, I assumed its the same note being played....\nIt is specifically hard to detect when someone plays the same note 2 (or more) times in a row, because it's hard to distinguish between that, and random fluctuations of sound level. 
You'll see in my code that I had to use a constant that had to be configured to match the guitar used (apparently every guitar has its own pattern of power fluctuations). \n\n",
"You will need to use an audio library such as the built-in audioop.\nAnalyzing the specific note being played is not trivial, but can be done using those APIs.\nAlso could be of use: http://wiki.python.org/moin/PythonInMusic\n",
"Very similar questions:\n\nAudio Processing - Tone Recognition\nReal time pitch detection\nReal-time pitch detection using FFT\n\nTurning sound into a sequence of notes is not an easy thing to do, especially with multiple notes at once. Read through Google results for \"frequency estimation\" and \"note recognition\".\nI have some Python frequency estimation examples, but this is only a portion of what you need to solve to get notes from guitar recordings.\n",
"This link shows some one doing it in VB.NET but the basics of what need to be done to achieve your goal is captured in these links below. \n\nSTFT\nColley Tukey\nFFT\n\n"
] |
[
21,
19,
1,
1,
0
] |
[] |
[] |
[
"audio",
"python"
] |
stackoverflow_0001797631_audio_python.txt
|