Dataset columns (name: dtype, min to max):
Title: stringlengths, 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: stringlengths, 6 to 105
Answer: stringlengths, 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: stringlengths, 41 to 29k
Reading files in GAE using python
3,994,580
0
2
613
0
python,django,google-app-engine
If you dig into the dev_appserver.py source code and related files, you see that the server does some intricate checking to ensure that you open only files below your application's root (in fact the rules seem even more complex). For my file access troubles I instrumented that "path permission checking" code in the development server and found that I was using absolute paths. We probably should submit a patch to App Engine to provide better error reporting on that: IIRC the app server does not display the offending path but a mangled version of it, which makes debugging difficult.
0
1
0
0
2010-10-17T21:22:00.000
2
0
false
3,955,361
0
0
1
2
I created a simple python project that serves up a couple of pages. I'm using the 'webapp' framework and django. What I'm trying to do is use one template file, and load 'content files' that contain the actual page text. When I try to read the content files using os.open, I get the following error: pageContent = os.open(pageUrl, 'r').read() OSError: [Errno 1] Operation not permitted: 'content_includes/home.inc' error If I let the django templating system read the same file for me, everything works fine! So the question is: what am I doing wrong that django isn't? The same 'pageUrl' is used. The code below will give me the error, while if I comment out the first pageContent assignment, everything works fine. Code: pageName = "home"; pageUrl = os.path.join(os.path.normpath('content_includes'), pageName + '.inc') pageContent = os.open(pageUrl, 'r').read() pageContent=template.render(pageUrl, template_values, debug=True); template_values = { 'page': pageContent, 'test': "testing my app" } Error: Traceback (most recent call last): File "/opt/apis/google_appengine/google/appengine/ext/webapp/__init__.py", line 511, in __call__ handler.get(*groups) File "/home/odessit/Development/Python/Alpha/main.py", line 19, in get pageContent = os.open(pageUrl, 'r').read() File "/opt/apis/google_appengine/google/appengine/tools/dev_appserver.py", line 805, in FakeOpen raise OSError(errno.EPERM, "Operation not permitted", filename) OSError: [Errno 1] Operation not permitted: 'content_includes/home.inc' app.yaml: handlers: - url: /javascript static_dir: javascript - url: /images static_dir: images - url: /portfolio static_dir: portfolio - url: /.* script: main.py
How to create an executable file + launcher in Ubuntu from a python script?
3,958,070
3
2
7,715
0
python,linux,ubuntu,debian
Closed-source? Meh. Well, you can compile python IIRC, or simply use an obfuscator. But I recommend open-sourcing it ;-) The stuff you can double-click are .desktop files; for samples, see find /usr | grep desktop.
0
1
0
0
2010-10-18T09:53:00.000
3
0.197375
false
3,958,044
0
0
0
1
I have created a simple program in python. Now I want to transform this script into an executable program (with hidden source code if possible), so that when I double-click it, the program installs itself on Ubuntu (in /usr/lib or /usr/bin I think) and creates a new launcher in the Application -> Game menu. How can I do that?
Uploading images from a django application to a Google AppEngine application
3,959,150
1
1
269
0
python,google-app-engine
AFAIK you cannot store files in App Engine programmatically; you can only store them when uploading your app. You can, however, store information in its data store. So you would need to deploy an app that authenticates your user and then writes the image to GAE's data store.
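A minimal sketch of that data-store approach (model and field names are illustrative, not from the original answer):

from google.appengine.ext import db

class StoredImage(db.Model):
    source_url = db.StringProperty()
    data = db.BlobProperty()  # raw image bytes

def save_image(url, image_bytes):
    # store the fetched bytes under the source URL for later lookup
    img = StoredImage(source_url=url, data=db.Blob(image_bytes))
    img.put()
    return img.key()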
0
1
0
0
2010-10-18T10:24:00.000
2
0.099668
false
3,958,224
0
0
1
1
I am not running Django on AppEngine. I just want to use AppEngine as a content delivery network, basically a place I can host and serve images for free. It's for a personal side project. The situation is this: I have the URL of an image hosted on another server/provider. Instead of hotlinking to that image its better to save it on AppEngine and serve it from there - this applies mostly for thumbnails. My questions are these: How do I authenticate from my Django app (let's say A) to my AppEngine app (B), so that only I can upload images How do I make a request from A to B saying "Fetch the image on this URL, create a thumbnail and save it." How do I tell B that "For this url return that image" How can I handle errors or timeouts? And how can this be done asynchronously?
How can I move my Python2.6 site-packages into Python2.7?
3,968,450
0
4
2,613
0
python,module
Not a complete answer: it is not as simple as a mv. The files are byte-compiled into .pyc files which are specific to Python versions, so at the very least you'd have to regenerate the .pyc files. (Removing them should be sufficient, too.) Regenerating can be done using compileall.py. Most distributions offer a saner way to upgrade Python modules than manual fiddling like this, so maybe someone else can give the Arch-specific part of the answer?
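For instance, a hedged sketch of the regeneration step (the path assumes an Arch-style layout):

import compileall

# recompile everything under the new interpreter's site-packages
compileall.compile_dir('/usr/lib/python2.7/site-packages', force=True)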
0
1
0
0
2010-10-19T12:34:00.000
5
0
false
3,968,339
1
0
0
3
I just ran an update on ArchLinux which gave me Python3 and Python2.7. Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6. How can I move my Python2.6 modules into Python2.7? Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages ?
How can I move my Python2.6 site-packages into Python2.7?
4,016,456
0
4
2,613
0
python,module
You might want to 'easy_install yolk', which can be invoked as 'yolk -l' to give you an easy-to-read list of all the installed packages.
0
1
0
0
2010-10-19T12:34:00.000
5
0
false
3,968,339
1
0
0
3
I just ran an update on ArchLinux which gave me Python3 and Python2.7. Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6. How can I move my Python2.6 modules into Python2.7? Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages ?
How can I move my Python2.6 site-packages into Python2.7?
3,968,454
0
4
2,613
0
python,module
The clean way would be re-installing. However, for many if not most pure Python packages the mv approach would work.
0
1
0
0
2010-10-19T12:34:00.000
5
0
false
3,968,339
1
0
0
3
I just ran an update on ArchLinux which gave me Python3 and Python2.7. Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6. How can I move my Python2.6 modules into Python2.7? Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages ?
How to handle multiple forms in google app engine?
3,976,759
7
2
1,373
0
python,google-app-engine,web-applications
The framework you use is irrelevant to how you handle forms. You have a couple of options: you can distinguish the forms by changing the URL they submit to - in which case, you can use the same handler or a different handler for each form - or you can distinguish them based on the contents of the form. The easiest way to do the latter is to give your submit buttons distinct names or values, and check for them in the POST data.
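A hedged sketch of the second option under the old GAE webapp framework (button and handler names are made up):

from google.appengine.ext import webapp

class FormHandler(webapp.RequestHandler):
    def post(self):
        # each submit button has a distinct name in the HTML form,
        # e.g. <input type="submit" name="save" value="Save">
        if self.request.get('save'):
            self.handle_save()
        elif self.request.get('delete'):
            self.handle_delete()

    def handle_save(self):
        self.response.out.write('saved')

    def handle_delete(self):
        self.response.out.write('deleted')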
0
1
0
0
2010-10-20T09:23:00.000
2
1.2
true
3,976,368
0
0
1
1
Say I have multiple forms with multiple submit buttons on a single page - can I somehow make all of these buttons work using webapp as the backend handler? If not, what are the alternatives?
App Engine Version, Memcache
3,976,845
12
6
371
0
python,google-app-engine
The os.environ variable contains a key called CURRENT_VERSION_ID that you can use. Its value is composed of the version from app.yaml concatenated with a period and what I suspect is the api_version. If I set version to 42 it gives me the value 42.1. You should have no problems extracting the version number alone, but it might not be such a bad idea to keep the api_version as well. EDIT: @Nick Johnson has pointed out that the number to the right of the period is the minor version, a number which is incremented each time you deploy your code. On the development server this number is always 1.
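A small sketch of the key-prefixing idea from the question (the key scheme is illustrative):

import os
from google.appengine.api import memcache

major_version = os.environ['CURRENT_VERSION_ID'].split('.')[0]

def versioned_key(key):
    # keep each deployed version's cache entries separate
    return '%s:%s' % (major_version, key)

memcache.set(versioned_key('greeting'), 'hello')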
0
1
0
0
2010-10-20T10:17:00.000
1
1.2
true
3,976,772
0
0
1
1
I am developing an App Engine app that uses memcache. Since there is only a single memcache shared among all versions of your app, I am potentially sending bad data from a new version to the production version's memcache. To prevent this, I think I may append the app version to the memcache key string to allow various versions of the app to keep their data separate. I could do this manually, but I'd like to pull in the version from the app.yaml. How can I access the app version from within the python code?
How can my desktop application be notified of a state change on a remote server?
3,978,891
0
0
116
0
python,authentication,authorization,polling,web.py
Does the remote end block while it does the authentication? If so, you can use a simple select to block till it returns. Another way I can think of is to pass a callback URL to the authentication server, asking it to call that URL when it's done so that your client app can proceed. Something like a webhook.
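Since the question mentions web.py, a hedged sketch of what such a callback receiver could look like (URL and handling are illustrative, not a definitive design):

import web

urls = ('/auth-callback', 'AuthCallback')

class AuthCallback:
    def POST(self):
        data = web.input()
        # mark the pending authorization request as approved here
        return 'ok'

if __name__ == '__main__':
    web.application(urls, globals()).run()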
0
1
1
0
2010-10-20T14:08:00.000
2
0
false
3,978,739
0
0
0
1
I'm creating a desktop application that requires authorization from a remote server before performing certain actions locally. What's the best way to have my desktop application notified when the server approves the request for authorization? Authorization takes 20 seconds on average, 5 seconds minimum, with a 120 second timeout. I considered polling the server every 3 seconds or so, but this would be hard to scale when I deploy the application more widely, and seems inelegant. I have full control over the design of the server and client API. The server is using web.py on Ubuntu 10.10, Python 2.6.
cron-like recurring task scheduler design
3,980,935
7
9
3,985
0
python,cron,scheduling
There are 2 designs, basically. One runs regularly and compares the current time to the scheduling spec (i.e. "Does this run now?"), and executes those that qualify. The other technique takes the current scheduling spec and finds the NEXT time that the item should fire. Then it compares the current time to all of those items whose "next time" is less than "current time", and fires those. Then, when an item is complete, it is rescheduled for the new "next time". The first technique cannot handle "missed" items; the second technique can only handle those items that were previously scheduled. Specifically, consider that you have a schedule that runs once every hour, at the top of the hour. So, say, 1pm, 2pm, 3pm, 4pm. At 1:30pm, the run task is down and not executing any processes. It does not start again until 3:20pm. Using the first technique, the scheduler will have fired the 1pm task, but not the 2pm and 3pm tasks, as it was not running when those times passed. The next job to run will be the 4pm job, at, well, 4pm. Using the second technique, the scheduler will have fired the 1pm task, and scheduled the next task at 2pm. Since the system was down, the 2pm task did not run, nor did the 3pm task. But when the system restarted at 3:20, it saw that it "missed" the 2pm task, fired it off at 3:20, and then scheduled it again for 4pm. Each technique has its ups and downs. With the first technique, you miss jobs. With the second technique you can still miss jobs, but it can "catch up" (to a point); it may also run a job "at the wrong time" (maybe it's supposed to run at the top of the hour for a reason). A benefit of the second technique is that if you reschedule at the END of the executing job, you don't have to worry about a cascading job problem. Consider a job that runs every minute. With the first technique, the job gets fired each minute. However, if the job does not FINISH within its minute, then you can potentially have 2 jobs running (one late in the process, the other starting up). This can be a problem if the job is not designed to run more than once simultaneously, and it can exacerbate things (if there's a real problem, after 10 minutes you have 10 jobs all fighting each other). With the second technique, if you schedule at the end of the job, then if a job happens to run just over a minute, you'll "skip" a minute and start up the following minute rather than run on top of yourself. So, a job scheduled for every minute may actually run at 1:01pm, 1:03pm, 1:05pm, etc. Depending on your job design, either of these can be "good" or "bad". There's no right answer here. Finally, implementing the first technique is really quite trivial compared to implementing the second. The code to determine whether a cron string (say) matches a given time is simple compared to deriving what time a cron string will be valid NEXT. I know, and I have a couple hundred lines of code to prove it. It's not pretty.
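A minimal sketch of the second ("next time") technique with catch-up (the item layout and helper names are illustrative, not the answerer's code):

import time

def run_due(items, compute_next_run):
    """Fire every item whose next_run has passed, then reschedule it."""
    now = time.time()
    for item in items:
        if item['next_run'] <= now:
            item['job']()  # a late firing here is the "catch-up" case
            # reschedule AFTER the job finishes to avoid cascading runs
            item['next_run'] = compute_next_run(item, time.time())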
0
1
0
1
2010-10-20T17:50:00.000
3
1.2
true
3,980,782
0
0
0
1
Say you want to schedule recurring tasks, such as: send email every wednesday at 10am; create a summary on the first day of every month. And you want to do this for a reasonable number of users in a web app - i.e. 100k users - where each user can decide what they want scheduled and when. And you want to ensure that the scheduled items run even if they were missed originally - e.g. if for some reason the email didn't get sent on wednesday at 10am, it should get sent out at the next checking interval, say wednesday at 11am. How would you design that? If you use cron to trigger your scheduling app every x minutes, what's a good way to implement the part that decides what should run at each point in time? The cron-like implementations I've seen compare the current time to the trigger time for all specified items, but I'd like to deal with missed items as well. I have a feeling there's a more clever design than the one I'm cooking up, so please enlighten me.
C# bindings for MEEP (Photonic Simulation Package)
3,984,591
0
0
361
0
c#,c++,python,mono,meep
The straightforward and portable solution is to write a C++ wrapper for libmeep that exposes a C ABI (via extern "C" { ... }), then write a C# wrapper around this API using P/Invoke. This would be roughly equivalent to the Python Meep wrapper, AFAICT. Of course, mapping C++ classes to C# classes via a flat C API is nontrivial - you're going to have to keep IntPtr handles for the C++ classes in your C# classes, properly implement the Dispose pattern, use GCHandles or a dictionary of IntPtrs to allow referential integrity when resurfacing C++ objects (if needed), etc. Subclassing C++ objects in C# and being able to override virtual methods gets really quite complicated. There is a tool called SWIG that can do this automatically, but the results will not be anywhere near as good as a hand-written wrapper. If you restrict yourself to Windows/.NET, Microsoft has a superset of C++ called C++/CLI, which would enable you to write a wrapper in C++ that exports a .NET API directly.
0
1
0
0
2010-10-20T22:03:00.000
1
1.2
true
3,982,717
0
0
0
1
Does anyone know of a way to call MIT's Meep simulation package from C# (probably Mono, god help me). We're stuck with the #$@%#$^ CTL front-end, which is a productivity killer. Some other apps that we're integrating into our sim pipeline are in C# (.NET). I've seen a Python interface to Meep (light years ahead of CTL), but I'd like to keep the code we're developing as homogeneous as possible. And, no, writing the rest of the tools in Python isn't an option. Why? Because we hates it. Stupid Bagginses. We hates it forever! (In reality, the various app targets don't lend themselves to a Python implementation, and the talent pool I have available is far more productive with C#.) Or, in a more SO-friendly question form: Is there a convenient/possible way to link GNU C++ libraries into C# on Windows or Mono on Linux?
python on xp: errno 13 permission denied - limits to number of files in folder?
3,982,908
2
0
1,009
0
python,windows-xp
Are you using FAT32? The maximum number of directory entries in a FAT32 folder is 65,534. If a filename is longer than 8.3, it will take more than one directory entry. If you are conking out at 13,106, this indicates that each filename is long enough to require five directory entries. Solution: use an NTFS volume; it does not have per-folder limits and supports long filenames natively (that is, instead of using multiple 8.3 entries). The total number of files on an NTFS volume is limited to around 4.3 billion, but they can be put in folders in any combination.
0
1
0
0
2010-10-20T22:36:00.000
3
0.132549
false
3,982,881
0
0
0
3
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive. I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file. I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13. I'd be grateful for suggestions on whether and why this problem is occurring. many thanks, nick
python on xp: errno 13 permission denied - limits to number of files in folder?
3,982,927
0
0
1,009
0
python,windows-xp
I wouldn't have that many files in a single folder; it is a maintenance nightmare. BUT if you need to, don't do this on FAT: you have a maximum of 64k files in a FAT folder. Also, read the error message: your specific problem could be that you are, as the error message suggests, hitting a file which you can't access. And there's no reason to believe that the count of files until this happens should change - it is a computer after all, and you are repeating the same operation.
0
1
0
0
2010-10-20T22:36:00.000
3
0
false
3,982,881
0
0
0
3
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive. I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file. I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13. I'd be grateful for suggestions on whether and why this problem is occurring. many thanks, nick
python on xp: errno 13 permission denied - limits to number of files in folder?
3,982,931
0
0
1,009
0
python,windows-xp
I predict that your external drive is formatted FAT32 and that the filenames you're writing to it are somewhere around 45 characters long. FAT32 can only have 65,536 directory entries in a directory. Long file names use multiple directory entries each. And "." always takes up one entry. That you are able to write 65536/5 - 1 = 13106 entries strongly suggests that your filenames take up 5 entries each and that you have a FAT32 filesystem. (This is because there exists code using 16-bit numbers as directory entry offsets.) Additionally, you do not want to search through multi-1000-entry directories in FAT - the search is linear. I.e. fopen(some_file) will make the OS march linearly through the list of files, from the beginning every time, until it finds some_file or marches off the end of the list. Short answer: directories are a good thing.
0
1
0
0
2010-10-20T22:36:00.000
3
0
false
3,982,881
0
0
0
3
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive. I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file. I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13. I'd be grateful for suggestions on whether and why this problem is occurring. many thanks, nick
How to implement "autoincrement" on Google AppEngine
3,986,265
25
31
11,536
0
python,database,google-app-engine
If you absolutely have to have sequentially increasing numbers with no gaps, you'll need to use a single entity, which you update in a transaction to 'consume' each new number. You'll be limited, in practice, to about 1-5 numbers generated per second - which sounds like it'll be fine for your requirements.
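A hedged sketch of that single-entity transactional counter (entity and key names are illustrative):

from google.appengine.ext import db

class InvoiceCounter(db.Model):
    value = db.IntegerProperty(default=0)

def next_number():
    def txn():
        counter = InvoiceCounter.get_by_key_name('invoice')
        if counter is None:
            counter = InvoiceCounter(key_name='invoice')
        counter.value += 1
        counter.put()  # the transaction retries on contention
        return counter.value
    return db.run_in_transaction(txn)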
0
1
0
0
2010-10-21T09:02:00.000
9
1.2
true
3,985,812
0
0
1
4
I have to label something in a "strong monotone increasing" fashion, be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1,2,3,4 ... The number space I have available is typically 100,000 numbers and I need perhaps 1000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case for legal reasons I need "traditional numbering". Can this be implemented on Google AppEngine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
29,419,735
0
31
11,536
0
python,database,google-app-engine
I'm thinking of using the following solution: use CloudSQL (MySQL) to insert the records and assign the sequential ID (maybe with a Task Queue), then later (using a Cron Task) move the records from CloudSQL back to the Datastore. The entities can also have a UUID, so we can map the entities from the Datastore in CloudSQL, and also have the sequential ID (for legal reasons).
0
1
0
0
2010-10-21T09:02:00.000
9
0
false
3,985,812
0
0
1
4
I have to label something in a "strong monotone increasing" fashion, be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1,2,3,4 ... The number space I have available is typically 100,000 numbers and I need perhaps 1000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case for legal reasons I need "traditional numbering". Can this be implemented on Google AppEngine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
15,731,054
0
31
11,536
0
python,database,google-app-engine
Remember: sharding increases the probability that you will get a unique, auto-incremented value, but does not guarantee it. Please take Nick's advice if you MUST have a unique auto-increment.
0
1
0
0
2010-10-21T09:02:00.000
9
0
false
3,985,812
0
0
1
4
I have to label something in a "strong monotone increasing" fashion, be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1,2,3,4 ... The number space I have available is typically 100,000 numbers and I need perhaps 1000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case for legal reasons I need "traditional numbering". Can this be implemented on Google AppEngine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
4,056,817
7
31
11,536
0
python,database,google-app-engine
If you drop the requirement that IDs must be strictly sequential, you can use a hierarchical allocation scheme. The basic idea/limitation is that transactions must not affect multiple storage groups. For example, assuming you have the notion of "users", you can allocate a storage group for each user (creating some global object per user). Each user has a list of reserved IDs. When allocating an ID for a user, pick a reserved one (in a transaction). If no IDs are left, make a new transaction allocating 100 IDs (say) from the global pool, then make a new transaction to add them to the user and simultaneously withdraw one. Assuming each user interacts with the application only sequentially, there will be no concurrency on the user objects.
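An illustrative sketch of that block-allocation idea, with plain data structures standing in for the datastore transactions (names are made up):

def allocate_id(user, global_pool, block_size=100):
    # refill the user's reserved block from the global pool when empty;
    # on App Engine each of these steps would be its own transaction
    if not user['reserved_ids']:
        start = global_pool['next']
        global_pool['next'] += block_size
        user['reserved_ids'] = list(range(start, start + block_size))
    return user['reserved_ids'].pop(0)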
0
1
0
0
2010-10-21T09:02:00.000
9
1
false
3,985,812
0
0
1
4
I have to label something in a "strong monotone increasing" fashion, be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1,2,3,4 ... The number space I have available is typically 100,000 numbers and I need perhaps 1000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case for legal reasons I need "traditional numbering". Can this be implemented on Google AppEngine (preferably in Python)?
What exactly does distutils do?
3,991,421
2
5
1,042
0
python,distutils
You don't have to use distutils to get your own modules working on your own machine; saving them in your python path is sufficient. When you decide to publish your modules for other people to use, distutils provides a standard way for them to install your modules on their machines. (The "dist" in "distutils" means distribution, as in distributing your software to others.)
0
1
0
0
2010-10-21T19:51:00.000
3
0.132549
false
3,991,335
1
0
0
2
I have read the documentation but I don't understand. Why do I have to use distutils to install python modules? Why can't I just save the modules in the python path?
What exactly does distutils do?
3,991,451
5
5
1,042
0
python,distutils
You don't have to use distutils. You can install modules manually, just like you can compile a C++ library manually (compile every implementation file, then link the .obj files) or install an application manually (compile, put into its own directory, add a shortcut for launching). It just gets tedious and error-prone, like every repetitive task done manually. Moreover, the manual steps I listed for the examples are pretty optimistic - often, you want to do more. For example, PyQt adds the .ui-to-.py compiler to the path so you can invoke it via the command line. So you end up with a stack of work that could be automated. This alone is a good argument. Also, the devs would have to write installation instructions. With distutils etc., you only have to specify what your project consists of (and fancy extras if and only if you need them) - for example, you don't need to tell it to put everything in a new folder in site-packages, because it already knows this. So in the end, it's easier for developers and for users.
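For reference, a minimal setup.py sketch (the package name is made up); running python setup.py install is what replaces the manual copying:

from distutils.core import setup

setup(
    name='mypackage',
    version='0.1',
    packages=['mypackage'],  # directories containing __init__.py
)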
0
1
0
0
2010-10-21T19:51:00.000
3
0.321513
false
3,991,335
1
0
0
2
I have read the documentation but I don't understand. Why do I have to use distutils to install python modules? Why can't I just save the modules in the python path?
Python GeoModel alternative
3,993,903
2
6
1,355
0
python,google-app-engine,google-cloud-datastore,gis,geohashing
I can't point you to an existing library that has better performance, but as I recall, GeoModel is open source and the code isn't difficult to understand. We found that we could make some speed improvements by adjusting the code to fit our scenario. For example, if you don't need nearest-n, you just need X results from within a particular bounding box or radius, you can probably improve GeoModel's speed, as GeoModel has to currently get every record in the appropriate geohash and then sorts for closest in memory. (Details of that implementation left as an exercise for the reader.) You might also consider tuning how many levels of geohash you're using. If you have a lot of dense data and are querying over small areas, you might considerably increase performance by keeping 16 levels instead of 8 or 12. (I'm not looking at the GeoModel source right now but recalling when I last used it several months ago, so take this with a grain of salt and dive into the source code yourself.)
0
1
0
0
2010-10-22T04:22:00.000
3
0.132549
false
3,993,862
0
0
1
1
I'm looking for an alternative library for the app engine datastore that will do nearest-n or boxed geo-queries. Currently I'm using GeoModel 0.2 and it runs quite slowly (> 1.5s in some cases). Does anyone have any suggestions? Thanks!
How to limit file size when writing one?
3,999,039
3
11
17,308
0
python,file-io
See the tell() method on the stream object.
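A hedged sketch of using tell() to roll over to a new file at ~1 GiB (filenames and limit are illustrative):

import io

LIMIT = 1024 ** 3  # 1 GiB
part = 0
out = io.open('output-0.bin', 'wb')

def write(data):
    global out, part
    # roll to a fresh file before this write would cross the limit
    if out.tell() + len(data) > LIMIT:
        out.close()
        part += 1
        out = io.open('output-%d.bin' % part, 'wb')
    out.write(data)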
0
1
0
0
2010-10-22T16:41:00.000
6
0.099668
false
3,999,007
0
0
0
2
I am using the output streams from the io module and writing to files. I want to be able to detect when I have written 1G of data to a file and then start writing to a second file. I can't seem to figure out how to determine how much data I have written to the file. Is there something easy built in to io? Or might I have to count the bytes before each write manually?
How to limit file size when writing one?
4,766,092
1
11
17,308
0
python,file-io
I noticed an ambiguity in your question. Do you want the file to be (a) over (b) under (c) exactly 1GiB large, before switching? It's easy to tell if you've gone over. tell() is sufficient for that kind of thing; just check if tell() > 1024*1024*1024: and you'll know. Checking if you're under 1GiB, but will go over 1GiB on your next write, is a similar technique. if len(data_to_write) + tell() > 1024*1024*1024: will suffice. The trickiest thing to do is to get the file to exactly 1GiB. You will need to tell() the length of the file, and then partition your data appropriately in order to hit the mark precisely. Regardless of exactly which semantics you want, tell() is always going to be at least as slow as doing the counting yourself, and possibly slower. This doesn't mean that it's the wrong thing to do; if you're writing the file from a thread, then you almost certainly will want to tell() rather than hope that you've correctly preempted other threads writing to the same file. (And do your locks, etc., but that's another question.) By the way, I noticed a definite direction in your last couple questions. Are you aware of the #twisted and #python IRC channels on Freenode (irc.freenode.net)? You will get timelier, more useful answers. ~ C.
0
1
0
0
2010-10-22T16:41:00.000
6
0.033321
false
3,999,007
0
0
0
2
I am using the output streams from the io module and writing to files. I want to be able to detect when I have written 1G of data to a file and then start writing to a second file. I can't seem to figure out how to determine how much data I have written to the file. Is there something easy built in to io? Or might I have to count the bytes before each write manually?
Profile python program that forks itself as a daemon
4,000,230
0
2
751
0
python,profiling,daemon,cprofile
Well, you can always profile it as a single process or single thread and optimize, after which you can make it multi-threaded. Am I missing something here?
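For the single-process step, a hedged cProfile sketch (the worker function is a stand-in for whatever the daemon would normally run):

import cProfile

def worker():
    pass  # the daemon's main work goes here

# run once in the foreground and dump stats to a file
cProfile.run('worker()', 'worker.prof')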
0
1
0
0
2010-10-22T18:47:00.000
1
0
false
3,999,938
1
0
0
1
Is it possible to run cProfile on a multi-threaded python program that forks itself into a daemon process? I know you can make it work with multiple threads, but I haven't seen anything on profiling a daemon.
Getting the Tasks in a Google App Engine TaskQueue
4,059,302
1
6
2,402
0
python,google-app-engine,task,dashboard,task-queue
A workaround, since they don't seem to support this yet, would be to model a Task datastore object. Create one on task queue add, update it when running, and delete it when your task fires. This can also be a nice way to get around the payload limits of the task queue api.
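A sketch of that shadow-entity workaround (model and field names are made up):

from google.appengine.ext import db

class TrackedTask(db.Model):
    name = db.StringProperty()
    state = db.StringProperty(default='queued')  # queued / running

def list_pending_tasks():
    # the dashboard-style listing the task queue API doesn't offer
    return TrackedTask.all().filter('state =', 'queued').fetch(100)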
0
1
0
0
2010-10-23T04:42:00.000
3
0.066568
false
4,002,514
0
0
1
1
I know you can view the currently queued and running tasks in the Dashboard or development server console. However, is there any way to get that list programmatically? The docs only describe how to add tasks to the queue, but not how to list and/or cancel them. In python please.
Do all Mac OS X versions (above 10.4) have python preinstalled?
4,007,816
4
2
687
0
python,macos
Yes, they do all have python preinstalled.
0
1
0
0
2010-10-24T10:06:00.000
3
1.2
true
4,007,801
1
0
0
3
Do all Mac OS X versions (above 10.4) have python preinstalled?
Do all Mac OS X versions (above 10.4) have python preinstalled?
4,007,870
-1
2
687
0
python,macos
Yes they do. Use it from Terminal.
0
1
0
0
2010-10-24T10:06:00.000
3
-0.066568
false
4,007,801
1
0
0
3
Do all Mac OS X versions (above 10.4) have python preinstalled?
Do all Mac OS X versions (above 10.4) have python preinstalled?
4,008,331
5
2
687
0
python,macos
Yes, but the Python version may be different. OS X 10.5 shipped with Python 2.5, OS X 10.6 with Python 2.6.
0
1
0
0
2010-10-24T10:06:00.000
3
0.321513
false
4,007,801
1
0
0
3
Do all Mac OS X versions (above 10.4) have python preinstalled?
process closing while saving a file - Python - Windows XP
4,009,744
0
0
124
0
python,windows
An easy cross-platform/cross-language way of handling partial file saving: save to a temporary filename like "file.ext.part"; after you're done saving, rename it to "file.ext".
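A minimal sketch of that pattern (note that on Windows os.rename fails if the target already exists, hence the remove):

import os

def safe_save(path, data):
    tmp = path + '.part'
    with open(tmp, 'wb') as f:
        f.write(data)  # an interrupted save leaves only the .part file
    if os.path.exists(path):
        os.remove(path)
    os.rename(tmp, path)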
0
1
0
0
2010-10-24T16:28:00.000
3
0
false
4,009,130
1
0
0
1
I'm working on a project for school where e-mails will be pulled from an inbox and downloaded to different locations depending on how things are parsed. The language I'm writing in is Python, and the environment it will be run on is Windows XP. The idea is that the program will run in the background with no interaction from the user until they basically shutdown their computer. A concern I had is what this will mean if they shut it down while a file is in the process of being saved, and what I can do to handle it. Will it just be a file.part thing? Will the shutdown throw the "Waiting to close X application" message and finish saving before terminating on its own?
Registry Entries for all users in Python
4,010,127
0
0
1,512
0
python,windows,permissions,registry,winreg
You'll either need admin permissions to write to HKLM, or settle for non-global reg keys. Behavior is going to vary somewhat between different versions of windows.
0
1
0
0
2010-10-24T19:52:00.000
4
0
false
4,010,108
0
0
0
4
I wrote an application that stores several things in the registry. When I first started, I added them to HKEY_LOCAL_MACHINE, but kept getting permission errors writing to them. So, it was suggested that I use HKEY_CURRENT_USER; that worked until I realized that I am not able to access them from another account. How can I write to the registry but allow all accounts to read and write it? I used the Python module _winreg.
Registry Entries for all users in Python
4,010,134
0
0
1,512
0
python,windows,permissions,registry,winreg
If you want to write to the registry so that all users can read it, you will need to run your program with administrator privileges. You might be happier storing your information in a file instead, which will be easier to manage.
0
1
0
0
2010-10-24T19:52:00.000
4
0
false
4,010,108
0
0
0
4
I wrote an application that stores several things in the registry. When I first started, I added them to HKEY_LOCAL_MACHINE, but kept getting permission errors writing to them. So, it was suggested that I use HKEY_CURRENT_USER; that worked until I realized that I am not able to access them from another account. How can I write to the registry but allow all accounts to read and write it? I used the Python module _winreg.
Registry Entries for all users in Python
4,010,153
2
0
1,512
0
python,windows,permissions,registry,winreg
HKEY_LOCAL_MACHINE\Software\YourSoftware needs Admin permissions and is for install-time data; HKEY_CURRENT_USER\Software\YourSoftware is for data pertinent to this environment only (this user, this profile, etc.). EDIT: An alternative would be storing a config file and setting the right permissions at install time. 2nd EDIT: I've read in another comment that you want to be sure only your application modified some file, so you store the modification times. Workarounds: encrypt the file-not-to-be-modified, best with a user-generated key; or make a service, installed with a special user under which it runs, and set the permissions so that only this service can access the file. My gut feeling says your requirement (modify a file only by your app, but under any account) is very wrong, but the more or less correct solutions have to impose additional complexity. Your decision: review your requirements and possibly your design, or add a layer of complexity and possibly cruft. 3rd EDIT: Split your app: have an admin application, which can write to HKLM and set the settings with admin rights. Normal user rights should suffice to read HKLM.
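A hedged sketch of that split (key path and value name are illustrative): the elevated admin tool writes, the normal app only reads.

import _winreg

KEY_PATH = r'Software\YourSoftware'

def write_setting(name, value):
    # run this from the elevated admin application
    key = _winreg.CreateKey(_winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
    _winreg.SetValueEx(key, name, 0, _winreg.REG_SZ, value)
    _winreg.CloseKey(key)

def read_setting(name):
    # normal user rights suffice for reading
    key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
    try:
        return _winreg.QueryValueEx(key, name)[0]
    finally:
        _winreg.CloseKey(key)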
0
1
0
0
2010-10-24T19:52:00.000
4
1.2
true
4,010,108
0
0
0
4
I wrote an application that stores several things in the registry. When I first started, I added them to HKEY_LOCAL_MACHINE, but kept getting permission errors writing to them. So, it was suggested that I use HKEY_CURRENT_USER; that worked until I realized that I am not able to access them from another account. How can I write to the registry but allow all accounts to read and write it? I used the Python module _winreg.
Registry Entries for all users in Python
5,973,100
0
0
1,512
0
python,windows,permissions,registry,winreg
One other possibility would be changing the security on your HKLM keys to R/W for anyone; although the idea that this is somehow security against modification seems a bit of a stretch. Regedt32 has the ability to set the keys, so the underlying API must have it too. All that said, this is a screwed-up way to run an application and shows a severe lack of understanding of security and security models. (In other words, typical windows development.) How did I get so cynical.....
0
1
0
0
2010-10-24T19:52:00.000
4
0
false
4,010,108
0
0
0
4
I wrote an application that stores several things in the registry. When I first started, I added them to HKEY_LOCAL_MACHINE, but kept getting permission errors writing to them. So, it was suggested that I use HKEY_CURRENT_USER; that worked until I realized that I am not able to access them from another account. How can I write to the registry but allow all accounts to read and write it? I used the Python module _winreg.
Django development version vs stable release
4,010,690
0
2
1,216
0
python,django,google-app-engine
Another thing to consider is how you install. I'd be sure to install django from SVN, because it makes updating it MUCH easier. I have been using the dev version for a while on my site, and haven't encountered a single bug yet, aside from one that affected the admin site in a minor way (which a svn up fixed). I don't have a feel for whether people are using 1.2 or dev, but in my experience, dev is perfectly suitable. Any major errors that you may have in the code will get fixed very quickly, and svn up will get you to the latest code on the off chance that you get a revision with a major bug.
0
1
0
0
2010-10-24T20:40:00.000
3
0
false
4,010,349
0
0
1
1
I am about to start ramping up on Django and develop a web app that I want to deploy on Google App Engine. I learned that Google has Django 0.96 already installed on the App Engine, but the latest "official" version of Django I see is 1.2.3 and it's a bit of an effort to install it there. I am curious which version of Django is most widely used. So, can you please guide me on which Django version I should ramp up on and deploy, based on the following criteria: Stability and suitability for production release. Availability of applications (or plugins) and which version is most supported by the community. Thanks a lot!
Python: Which encoding is used for processing sys.argv?
41,064,642
0
24
4,228
0
python,encoding,argv,sys
sys.getfilesystemencoding() works for me, at least on Windows. On Windows it is actually 'mbcs', and 'utf-8' on *nix.
0
1
0
0
2010-10-25T07:23:00.000
7
0
false
4,012,571
1
0
0
2
In what encoding are the elements of sys.argv, in Python? are they encoded with the sys.getdefaultencoding() encoding? sys.getdefaultencoding(): Return the name of the current default string encoding used by the Unicode implementation. PS: As pointed out in some of the answers, sys.stdin.encoding would indeed be a better guess. I would love to see a definitive answer to this question, though, with pointers to solid sources! PPS: As Wim pointed out, Python 3 solves this issue by putting str objects in sys.argv (if I understand correctly). The question remains open for Python 2.x, though. Under Unix, the LC_CTYPE environment variable seems to be the correct thing to check, no? What should be done with Windows (so that sys.argv elements are correctly interpreted whatever the console)?
Python: Which encoding is used for processing sys.argv?
4,013,349
6
24
4,228
0
python,encoding,argv,sys
A few observations: (1) It's certainly not sys.getdefaultencoding. (2) sys.stdin.encoding appears to be a much better bet. (3) On Windows, the actual value of sys.stdin.encoding will vary, depending on what software is providing the stdio. IDLE will use the system "ANSI" code page, e.g. cp1252 in most of Western Europe and America and former colonies thereof. However in the Command Prompt window, which emulates MS-DOS more or less, the corresponding old DOS code page (e.g. cp850) will be used by default. This can be changed by using the CHCP (change code page) command. (4) The documentation for the subprocess module doesn't provide any suggestions on what encoding to use for args and stdout. (5) One trusts that assert sys.stdin.encoding == sys.stdout.encoding never fails.
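For Python 2, a hedged sketch along the lines of observation (2); the fallback chain is an assumption, not a definitive rule:

import sys

# sys.stdin.encoding can be None when stdin is piped
enc = sys.stdin.encoding or sys.getfilesystemencoding() or 'ascii'
args = [arg.decode(enc) for arg in sys.argv[1:]]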
0
1
0
0
2010-10-25T07:23:00.000
7
1
false
4,012,571
1
0
0
2
In what encoding are the elements of sys.argv, in Python? are they encoded with the sys.getdefaultencoding() encoding? sys.getdefaultencoding(): Return the name of the current default string encoding used by the Unicode implementation. PS: As pointed out in some of the answers, sys.stdin.encoding would indeed be a better guess. I would love to see a definitive answer to this question, though, with pointers to solid sources! PPS: As Wim pointed out, Python 3 solves this issue by putting str objects in sys.argv (if I understand correctly). The question remains open for Python 2.x, though. Under Unix, the LC_CTYPE environment variable seems to be the correct thing to check, no? What should be done with Windows (so that sys.argv elements are correctly interpreted whatever the console)?
Send commands between two computers over the internet
4,014,696
1
3
1,907
0
java,php,javascript,python
You can write a WEB APPLICATION. The encryption part is solved by simple HTTPS usage. On the server side (your home computer with USB devices attached to it) you should use Python (since you're quite experienced with it) and whatever Python web framework you want (e.g. Django).
0
1
1
1
2010-10-25T12:48:00.000
6
0.033321
false
4,014,670
0
0
0
3
I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details. Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions? So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP. I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives? Thank you
Send commands between two computers over the internet
4,014,765
0
3
1,907
0
java,php,javascript,python
Well, I think that Java can work well; in fact you have to deal with system calls to manage USB devices and things like that (and as far as I know, PHP is not the best language for this). Also, it shouldn't be so hard to create a basic server/client program; just use a good encryption mechanism so commands aren't exposed around the web.
0
1
1
1
2010-10-25T12:48:00.000
6
0
false
4,014,670
0
0
0
3
I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details. Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions? So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP. I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives? Thank you
Send commands between two computers over the internet
4,015,151
0
3
1,907
0
java,php,javascript,python
If you are looking for a solution you could use from any computer anywhere in the world without the need to install any software on the client pc, try logmein.com (http://secure.logmein.com). It is free, reliable, works in any modern browser, and you don't have to remember IPs and hope they won't change... Or, if this is a "for fun" project, why not write a php script, open port 80 in your router so you can access your script from outside, and possibly dynamically link some domain to your IP (http://www.dyndns.com/). In the script you would just log in and then, for example, type the orders in a text field in some form in your script. Let's just say you want to do some command prompt stuff, so you will basically remotely construct a *.bat file, for example. Then the script stores this as fromtheinternets.bat in a folder on your desktop that is being constantly monitored for changes. And when such a change is found, you just activate the bat file. Insecure? Yes (it could be made secureER). Fun to write? Definitely. PS: I am new here, hope it's not "illegal" to post links to actual services instead of wiki lists. This is by no means an advertisement, I am just a happy user. :)
0
1
1
1
2010-10-25T12:48:00.000
6
0
false
4,014,670
0
0
0
3
I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details. Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions? So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP. I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives? Thank you
How to use Python's "easy_install" on Windows ... it's not so easy
4,016,275
1
63
152,491
0
python,windows,easy-install
For one thing, it says you already have that module installed. If you need to upgrade it, you should do something like this: easy_install -U packageName Of course, easy_install doesn't work very well if the package has some C headers that need to be compiled and you don't have the right version of Visual Studio installed. You might try using pip or distribute instead of easy_install and see if they work better.
0
1
0
1
2010-10-25T15:31:00.000
5
0.039979
false
4,016,151
0
0
0
1
After installing Python 2.7 on Windows XP, then manually setting the %PATH% to python.exe (why won't the python installer do this?), then installing setuptools 0.6c11 (why doesn't the python installer do this?), then manually setting the %PATH% to easy_install.exe (why doesn't the installer do this?), I finally tried to install a python package with easy_install, but easy_install failed when it couldn't install the pywin32 package, which is a dependency. How can I make easy_install work properly on Windows XP? The failure follows: C:\>easy_install winpexpect Searching for winpexpect Best match: winpexpect 1.4 Processing winpexpect-1.4-py2.7.egg winpexpect 1.4 is already the active version in easy-install.pth Using c:\python27\lib\site-packages\winpexpect-1.4-py2.7.egg Processing dependencies for winpexpect Searching for pywin32>=214 Reading http://pypi.python.org/simple/pywin32/ Reading http://sf.net/projects/pywin32 Reading http://sourceforge.net/project/showfiles.php?group_id=78018 No local packages or download links found for pywin32>=214 Best match: None Traceback (most recent call last): File "C:\python27\scripts\easy_install-script.py", line 8, in load_entry_point('setuptools==0.6c11', 'console_scripts', 'easy_install')() File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 1712, in main with_ei_usage(lambda: File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 1700, in with_ei_usage return f() File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 1716, in distclass=DistributionWithoutHelpCommands, **kw File "C:\python27\lib\distutils\core.py", line 152, in setup dist.run_commands() File "C:\python27\lib\distutils\dist.py", line 953, in run_commands self.run_command(cmd) File "C:\python27\lib\distutils\dist.py", line 972, in run_command cmd_obj.run() File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 211, in run self.easy_install(spec, not self.no_deps) File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 446, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 481, in install_item self.process_distribution(spec, dists[0], deps, "Using") File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 519, in process_distribution [requirement], self.local_index, self.easy_install File "C:\python27\lib\site-packages\pkg_resources.py", line 563, in resolve dist = best[req.key] = env.best_match(req, self, installer) File "C:\python27\lib\site-packages\pkg_resources.py", line 799, in best_match return self.obtain(req, installer) # try and download/install File "C:\python27\lib\site-packages\pkg_resources.py", line 811, in obtain return installer(requirement) File "C:\python27\lib\site-packages\setuptools\command\easy_install.py", line 434, in easy_install self.local_index File "C:\python27\lib\site-packages\setuptools\package_index.py", line 475, in fetch_distribution return dist.clone(location=self.download(dist.location, tmpdir)) AttributeError: 'NoneType' object has no attribute 'clone'
Specify a custom PYTHON_EGG_CACHE dir with zc.buildout?
4,026,496
0
1
1,434
0
buildout,python-egg-cache
I'm not sure what you mean. Three options that you normally have: Buildout, by default, stores the eggs in a directory called eggs/ inside your buildout directory. You can set the eggs-directory variable inside your buildout.cfg's [buildout] section to some directory: just tell it where to place them. You can also set that very same option in .buildout/default.cfg inside your home directory. That way you can set a default for all your projects - handy for storing all your eggs in one place: that can save a lot of download time, for instance. Does one of those (especially the last one) accomplish what you want? And: don't muck around with eggs in the generated bin/* files. Let buildout pick the eggs, that's its purpose.
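For instance, the shared default could look like this in ~/.buildout/default.cfg (the path is illustrative):

[buildout]
eggs-directory = /home/you/.buildout/eggs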
0
1
0
1
2010-10-26T15:49:00.000
2
1.2
true
4,025,412
0
0
0
1
We're having problems when trying to deploy a number of projects which use zc.buildout - specifically we're finding that they want to put their PYTHON_EGG_CACHE directories all over the show. We'd like to somehow set this directory to one at the same level as the built-out project, where eggs can be found. There is some mention online that this can be done for Plone projects, but is it possible to do this without Plone? Are there some recipes that can set up an environment variable, so we can set the PYTHON_EGG_CACHE for the executable files in ./bin?
One python multiprocess errors
4,385,534
1
0
1,864
0
python,multiprocess
The main process waits for all child processes to terminate before it exits itself, so there's a blocking call (i.e. wait4) registered as an atexit handler. The signal you sent interrupts that blocking call, thus the stack trace. The thing I'm not clear about is whether the signal sent to the child would be redirected to the parent process, which then interrupted that wait4 call. This is something related to Unix process group behavior.
0
1
0
0
2010-10-27T05:40:00.000
1
0.197375
false
4,030,277
1
0
1
1
I have a multiprocessing demo here, and I have run into some problems with it. I researched for a night but cannot find the reason. Can anyone help me? I want one parent process that acts as a producer: when tasks come in, the parent can fork some children to consume them. The parent monitors the children, and if any one exits with an exception, it is restarted by the parent.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Queue
from Queue import Empty
import sys, signal, os, random, time
import traceback

child_process = []
child_process_num = 4
queue = Queue(0)

def work(queue):
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    signal.signal(signal.SIGCHLD, signal.SIG_DFL)
    time.sleep(10)  # demo sleep

def kill_child_processes(signum, frame):
    # terminate all children
    pass

def restart_child_process(signum, frame):
    global child_process
    for i in xrange(len(child_process)):
        child = child_process[i]
        try:
            if child.is_alive():
                continue
        except OSError, e:
            pass
        child.join()  # join this process to make sure there is no zombie process
        new_child = Process(target=work, args=(queue,))
        new_child.start()
        child_process[i] = new_child  # restart one new process
        child = None
    return

if __name__ == '__main__':
    reload(sys)
    sys.setdefaultencoding("utf-8")
    for i in xrange(child_process_num):
        child = Process(target=work, args=(queue,))
        child.start()
        child_process.append(child)
    signal.signal(signal.SIGINT, kill_child_processes)
    signal.signal(signal.SIGTERM, kill_child_processes)  # hook the SIGTERM
    signal.signal(signal.SIGCHLD, restart_child_process)
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

When this program runs, it produces errors like the ones below:

Error in atexit._run_exitfuncs:
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/local/python/lib/python2.6/atexit.py", line 30, in _run_exitfuncs
    traceback.print_exc()
  File "/usr/local/python/lib/python2.6/traceback.py", line 227, in print_exc
    print_exception(etype, value, tb, limit, file)
  File "/usr/local/python/lib/python2.6/traceback.py", line 124, in print_exception
    _print(file, 'Traceback (most recent call last):')
  File "/usr/local/python/lib/python2.6/traceback.py", line 12, in _print
    def _print(file, str='', terminator='\n'):
  File "test.py", line 42, in restart_child_process
    new_child.start()
  File "/usr/local/python/lib/python2.6/multiprocessing/process.py", line 99, in start
    _cleanup()
  File "/usr/local/python/lib/python2.6/multiprocessing/process.py", line 53, in _cleanup
    if p._popen.poll() is not None:
  File "/usr/local/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 10] No child processes

If I send a signal to one child (kill -SIGINT {child_pid}) I get:

[root@mail1 mail]# kill -SIGINT 32545
[root@mail1 mail]# Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/python/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/local/python/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
    p.join()
  File "/usr/local/python/lib/python2.6/multiprocessing/process.py", line 119, in join
    res = self._popen.wait(timeout)
  File "/usr/local/python/lib/python2.6/multiprocessing/forking.py", line 117, in wait
    return self.poll(0)
  File "/usr/local/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 4] Interrupted system call
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/local/python/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/local/python/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
    p.join()
  File "/usr/local/python/lib/python2.6/multiprocessing/process.py", line 119, in join
    res = self._popen.wait(timeout)
  File "/usr/local/python/lib/python2.6/multiprocessing/forking.py", line 117, in wait
    return self.poll(0)
  File "/usr/local/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 4] Interrupted system call
Why is there a need for Twisted?
4,033,221
13
12
3,010
0
python,asynchronous,twisted
In a comment on another answer, you say "Every library is supposed to have ...". "Supposed" by whom? Having use-cases is certainly a nice way to nail down your requirements, but it's not the only way. It also doesn't make sense to talk about the use-cases for all of Twisted at once. There is no use case that justifies every single API in Twisted. There are hundreds or thousands of different use cases, each of which justifies a lesser or greater subdivision of Twisted. These came and went over the years of Twisted's development, and no attempt has been made to keep a list of them. I can say that I worked on part of Twisted Names so that I would have a topic for a paper I was presenting at the time. I implemented the vt102 parser in Twisted Conch because I am obsessed with terminals and wanted a fun project involving them. And I implemented the IMAP4 support in Twisted Mail because I worked at a company developing a mail server which required tighter control over the mail store than any other IMAP4 server at the time offered. So, as you can see, different parts of Twisted were written for widely differing reasons (and I've only given examples of my own reasons, not the reasons of any other developers). The initial reason for a program being written often doesn't matter much in the long run though. Now the code is written: Twisted Names now runs the DNS for many domain names on the internet, the vt102 parser helped me get a job, and the company that drove the IMAP4 development is out of business. What really matters is what useful things you can do with the code now. As MattH points out, the resulting plethora of functionality has resulted in a library that (perhaps uniquely) addresses a wide array of interesting problems.
0
1
0
0
2010-10-27T08:52:00.000
4
1.2
true
4,031,402
0
0
0
1
I have been playing around with the Twisted framework for about a week now (more out of curiosity than from having to use it) and it's been a lot of fun doing event-driven asynchronous network programming. However, there is something I fail to understand. The Twisted documentation starts off with: Twisted is a framework designed to be very flexible and let you write powerful servers. My question is: why do we need such an event-driven library to write powerful servers when there are already very efficient implementations of various servers out there? Surely there must have been more than a couple of concrete implementations which the Twisted developers had in mind while writing this event-driven I/O library. What are those? Why exactly was Twisted made?
Fastest DNS library for python
4,036,149
7
1
769
0
python
Twisted's DNS library is cross platform. Whether or not it's the "fastest" is debatable, but Twisted performs very well on the whole. I'd be surprised if it couldn't saturate your I/O link. One point of note though: Twisted uses asynchronous I/O rather than multi-tasking to achieve concurrency. Async I/O is a very good mechanism for handling concurrent queries but it requires a different programming style from the typical threaded approach. The learning curve can be steep but it's fairly short and, in my opinion, it's well worth the effort.
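For a feel of the asynchronous style this implies, here is a minimal sketch with twisted.names (the hostnames are just examples): issue all lookups at once, then collect the results when every Deferred has fired.

from twisted.internet import reactor, defer
from twisted.names import client

def lookup_all(hostnames):
    ds = []
    for name in hostnames:
        d = client.getHostByName(name)                # returns a Deferred immediately
        d.addCallback(lambda ip, n=name: (n, ip))     # pair each result with its hostname
        d.addErrback(lambda err, n=name: (n, None))   # per-name failures become (name, None)
        ds.append(d)
    return defer.DeferredList(ds)                     # fires once all lookups finish

def report(results):
    for success, value in results:
        print value
    reactor.stop()

lookup_all(['example.com', 'python.org']).addCallback(report)
reactor.run()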
0
1
0
0
2010-10-27T14:18:00.000
1
1.2
true
4,034,251
0
0
0
1
Which library is the fastest for making hundreds of DNS queries concurrently? I've googled around DNS libraries for Python. I found that adns is said to be the fastest, but it's not Windows-compatible. Are there any cross-platform DNS libraries for Python?
Java: updated class files not used
4,048,850
1
0
88
0
java,python
I would bet dollars for donuts that under some conditions you are not restarting the JVM between tests. The other obvious thought is that the class is not being copied to the target system as expected, or not to the correct location. Or, of course, the program is not being run from where you expect (i.e. there is another copy of the class files, perhaps in a JAR, which is actually being run). Explicitly recheck all your assumptions.
0
1
0
1
2010-10-29T03:42:00.000
1
0.197375
false
4,048,821
0
0
0
1
I'm developing a Java program through Eclipse locally, and debugging on a remote machine. Whenever I make a change to my program, I copy the corresponding class file to the bin directory on the remote machine. I run my program (a simulator) through a Python script via the os.system command. The problem is that my program sometimes does not use the updated class files after they have been moved over. The problem persists even if I log out and back into the remote machine. What's really strange is that, as a test, I deleted the bin directory entirely on the remote machine, and was still able to run my program. Can anyone explain this?
Performing unbiased program/script performance comparison
4,053,156
1
0
183
0
java,c++,python,jython,performance
These are difficult to do well. In many cases the operating system will cache files, so the second time a program is executed it suddenly performs much better. The other problem is that you're comparing interpreted languages against compiled ones. The interpreted languages require an interpreter loaded into memory somewhere or they can't run. To be scrupulously fair, you really should consider whether the memory usage and load time of the interpreter should be part of the test. If you're looking for performance in an environment where you can assume the interpreter is always preloaded, then you can ignore that; many setups for web servers keep an interpreter preloaded. If you're doing ad hoc client applications on a desktop, then startup can be very slow while the interpreter is loaded.
0
1
0
1
2010-10-29T14:19:00.000
4
0.049958
false
4,052,691
0
0
0
3
I want to perform a comparison of multiple implementations of basically the same algorithm, written in Java, C++ and Python, the latter executed using PyPy, Jython and CPython on a Mac OS X 10.6.4 Macbook Pro with a normal (non-SSD) HDD. It's a "decode a stream of data from a file" type of algorithm, where the relevant measurement is total execution time, and I want to prevent bias through e.g. OS and HDD caches, other programs running simultaneously, a too large/small sample file, etc. What do I need to pay attention to in order to create a fair comparison?
Performing unbiased program/script performance comparison
4,053,208
0
0
183
0
java,c++,python,jython,performance
I would recommend that you simply run each program many times (like 20 or so) and take the lowest measurement of each set. This makes it highly likely that each program is using the HD cache and the like on its measured runs; if they all do, then the comparison isn't biased.
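A rough sketch of that procedure in Python (the three commands are placeholders for your real Java/C++/Python invocations):

import os, subprocess, time

def timed_run(cmd):
    devnull = open(os.devnull, 'w')
    start = time.time()
    subprocess.call(cmd, stdout=devnull, stderr=devnull)  # discard the program's output
    elapsed = time.time() - start
    devnull.close()
    return elapsed

commands = [
    ['java', 'Decoder', 'sample.dat'],
    ['./decoder_cpp', 'sample.dat'],
    ['python', 'decoder.py', 'sample.dat'],
]
for cmd in commands:
    best = min(timed_run(cmd) for _ in range(20))  # lowest of 20 runs
    print ' '.join(cmd), '-> %.3f s' % best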
0
1
0
1
2010-10-29T14:19:00.000
4
1.2
true
4,052,691
0
0
0
3
I want to perform a comparison of multiple implementations of basically the same algorithm, written in Java, C++ and Python, the latter executed using Pypy, Jython and CPython on a Mac OS X 10.6.4 Macbook Pro with normal (non-SSD) HDD. It's a "decode a stream of data from a file" type of algorithm, where the relevant measurement is total execution time, and I want to prevent bias through e.g. OS an HDD caches, other programs running simultaneously, too large/small sample file etc. What do I need to pay attention to to create a fair comparison?
Performing unbiased program/script performance comparison
4,053,230
0
0
183
0
java,c++,python,jython,performance
Getting totally unbiased results is impossible. You can do various things, like running a minimum of other processes, but IMO the best way is to run the scripts in random order over a long period of time, across different days, and take the average, which will be as near to unbiased as possible. Ultimately the code will run in such an environment, in random order, and you are interested in average behavior, not a single number.
0
1
0
1
2010-10-29T14:19:00.000
4
0
false
4,052,691
0
0
0
3
I want to perform a comparison of multiple implementations of basically the same algorithm, written in Java, C++ and Python, the latter executed using Pypy, Jython and CPython on a Mac OS X 10.6.4 Macbook Pro with normal (non-SSD) HDD. It's a "decode a stream of data from a file" type of algorithm, where the relevant measurement is total execution time, and I want to prevent bias through e.g. OS an HDD caches, other programs running simultaneously, too large/small sample file etc. What do I need to pay attention to to create a fair comparison?
is it possible to read the text written in a sticky note using a script in linux?
4,060,827
0
0
536
0
python,ruby,perl
Tomboy notes are saved as XML files, so you could write an XML parser.
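A rough sketch of such a parser (the note directory is an assumption -- older Tomboy versions use ~/.tomboy, newer ones ~/.local/share/tomboy -- and the tag layout is handled generically by just collecting all text):

import glob, os
import xml.etree.ElementTree as ET

def all_text(elem):
    # collect the text of an element and all of its children
    parts = [elem.text or '']
    for child in elem:
        parts.append(all_text(child))
        parts.append(child.tail or '')
    return ''.join(parts)

note_dir = os.path.expanduser('~/.tomboy')  # adjust to your install
for path in glob.glob(os.path.join(note_dir, '*.note')):
    root = ET.parse(path).getroot()
    print path, '->', all_text(root)[:80]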
0
1
0
1
2010-10-30T17:35:00.000
3
0
false
4,059,851
0
0
0
1
I am using sticky notes in Ubuntu, and was wondering whether it would be possible to read the text written in them using any scripting language.
Remote debugging with WingIDE
4,300,007
2
1
879
0
python,remote-debugging
Under Windows, I've experienced the same behavior you mention, i.e., remote debugging sometimes works, but often 'gets stuck'. I've found a few things helpful in resolving this situation:

1. Make sure your firewall isn't blocking traffic to/from the ports being used by WingIDE and the process being debugged. (In my case, I had to unblock both wing.exe and the program I was attempting to debug in Windows Firewall.)
2. Make sure you haven't accumulated any zombie python processes after failed debug sessions. These can hold open a connection to the IDE, making it impossible for a newly-spawned instance to connect. (Under Windows, you can use the tasklist command to check for running python instances, and netstat -anp tcp will show any sockets stuck in the TIME_WAIT state.)
3. Insert a time.sleep(10) call immediately after your import wingdbstub statement. Start the program from a console, make sure it connects in the IDE (debug icon will turn green), then hit the 'Pause' button in the IDE, followed by 'Step Out'. (I can't begin to explain why, but this appeared to right the ship for me a couple of times after the debug connection had gone wonky.)

The above advice probably applies to Linux as well, but I've only experienced this problem under Windows so far...
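The pattern from step 3 looks like this at the top of the program being debugged:

import wingdbstub   # initiates the connection back to the IDE
import time
time.sleep(10)      # give WingIDE time to accept the connection
                    # before any interesting code runs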
0
1
0
0
2010-10-31T14:28:00.000
1
1.2
true
4,063,482
0
0
0
1
Using WingIDE to debug a web application, I have set a breakpoint in some Python code which runs when a web-form is submitted. Just before the breakpoint I have inserted 'import wingdbstub' to activate remote debugging. However, execution does not stop at the breakpoint. I know the code is running because if I insert 'raise exception(sys.modules)' just before the breakpoint, execution stops and a traceback appears in my browser, showing wingdbstub is loaded. If I hover over the bug icon in the status-bar, a dialog says "No debug process / listening for connections on TCP/IP 50005. Allowed hosts 127.0.0.1". I know I have 'lost' debug mode when a) the bug icon changes from green to white, and b) the debugging toolbar buttons (step into, over, out, etc.) disappear. I tried deleting compiled .pyc files so that they recompile when the module next runs, but the problem remains. How can I check whether Wing is listening on the correct port? The strange thing is that remote debugging has worked sometimes, but most of the time it doesn't. Any help would be appreciated. For the record, I am using Python 3.1, CherryPy 3.20 and WingIDE Personal 3.2.11. Alan
Python on Windows - subprocess.check_output with full-path works not?
5,056,711
0
1
490
0
python,python-3.x
Is there a problem in making the current directory e:\\lynx from within Python? That seems like a good solution to me.
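As a sketch (some_url stands in for the URL from the question), either change directory up front or pass cwd per call -- subprocess forwards it to Popen:

import os, subprocess

some_url = 'http://example.com/'  # placeholder

# option 1: make e:\lynx the current directory first
os.chdir('e:\\lynx')
out = subprocess.check_output(['e:\\lynx\\lynx.exe', '-dump', some_url])

# option 2: scope the directory change to this one call
out = subprocess.check_output(['e:\\lynx\\lynx.exe', '-dump', some_url],
                              cwd='e:\\lynx')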
0
1
0
0
2010-11-02T07:16:00.000
1
0
false
4,075,627
0
0
0
1
With args=['e:\\lynx\\lynx.exe','-dump',some_url], subprocess.check_output works fine if the current-directory is e:\lynx. Elsewhere, it fails with a CalledProcessException and retcode of -1. For now, I do not wish to add e:\lynx to PATH. Any thoughts?
regression test dealing with hard coded path
4,078,021
0
3
291
0
python,linux,unit-testing,regression
You could use a helper application that is setuid root to run the chroot; that would avoid the need to run the tests as root. Of course, that would probably still open up a local root exploit, so should only be done with appropriate precautions (e.g. in a VM image). At any rate, any solution with chroot is inherently platform-dependent, so it's rather awkward. I actually like the idea of Dave Webb (override open) better, I must admit...
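To illustrate the open-override idea (Python 2 style to match the codebase; FAKE_ROOT is whatever fake tree your tests build):

import __builtin__
import os

FAKE_ROOT = '/tmp/fake_root'      # your fake directory tree for tests
_real_open = __builtin__.open

def fake_open(path, *args, **kwargs):
    # redirect absolute paths like /etc/resolv.conf into the fake tree
    if os.path.isabs(path):
        path = os.path.join(FAKE_ROOT, path.lstrip('/'))
    return _real_open(path, *args, **kwargs)

__builtin__.open = fake_open      # install for the test run; restore with
                                  # __builtin__.open = _real_open afterwards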
0
1
0
1
2010-11-02T11:52:00.000
2
0
false
4,077,338
0
0
0
1
I need to extend some Python code which has plenty of hard-coded paths. In order not to mess everything up, I want to create unit tests for the code before making my modifications: they will serve as non-regression tests against my new code (which will not have hard-coded paths). But because of the hard-coded system paths, I have to run my tests inside a chroot tree (I don't want to pollute my system directories). My problem is that I want to set up the chroot only for the tests, and this can be done with os.chroot only with root privileges (and I don't want to run the test scripts as root). In fact, I just need a fake directory tree so that when the code calls open('/etc/resolv.conf') it retrieves a fake resolv.conf and not my system one. I obviously don't want to replace the hard-coded paths in the code myself, because then it would not be a real regression test. Do you have any idea how to achieve this? Thanks. Note that all the paths accessed are readable with a user account.
Alternatives to ApacheBench for profiling my code speed
4,083,570
1
8
6,473
0
python,profiling,benchmarking,latency,apachebench
I have done this in two ways. With "LoadRunner", which is a wonderful but pretty expensive product (from HP these days, I think). And with a combination of Perl/PHP and the curl package. I found the curl API slightly easier to use from PHP. It's pretty easy to roll your own GET and PUT requests. I would also recommend manually running through some sample requests with Firefox and the LiveHttpHeaders add-on to capture the exact format of the HTTP requests you need.
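If you would rather stay in Python, pycurl (a binding to the same libcurl) keeps the transfer and timing in C; a minimal sketch (the URL is a placeholder):

import pycurl
from StringIO import StringIO

buf = StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://example.com/')
c.setopt(pycurl.WRITEFUNCTION, buf.write)   # capture the body off stdout
c.perform()
print '%.1f ms' % (c.getinfo(pycurl.TOTAL_TIME) * 1000.0)
c.close()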
0
1
0
1
2010-11-03T01:25:00.000
6
1.2
true
4,083,523
0
0
0
2
I've done some experiments using Apache Bench to profile my code response times, and it doesn't quite generate the right kind of data for me. I hope the good people here have ideas. Specifically, I need a tool that Does HTTP requests over the network (it doesn't need to do anything very fancy) Records response times as accurately as possible (at least to a few milliseconds) Writes the response time data to a file without further processing (or provides it to my code, if a library) I know about ab -e, which prints data to a file. The problem is that this prints only the quantile data, which is useful, but not what I need. The ab -g option would work, except that it doesn't print sub-second data, meaning I don't have the resolution I need. I wrote a few lines of Python to do it, but the httplib is horribly inefficient and so the results were useless. In general, I need better precision than pure Python is likely to provide. If anyone has suggestions for a library usable from Python, I'm all ears. I need something that is high performance, repeatable, and reliable. I know that half my responses are going to be along the lines of "internet latency makes that kind of detailed measurements meaningless." In my particular use case, this is not true. I need high resolution timing details. Something that actually used my HPET hardware would be awesome. Throwing a bounty on here because of the low number of answers and views.
Alternatives to ApacheBench for profiling my code speed
4,162,131
0
8
6,473
0
python,profiling,benchmarking,latency,apachebench
I've used a script to drive 10 boxes on the same switch to generate load by "replaying" requests to 1 server. I had my web app logging response time (server only) to the granularity I needed, but I didn't care about the response time to the client. I'm not sure you care to include the trip to and from the client in your calculations, but if you did it shouldn't be too difficult to code up. I then processed my log with a script which extracted the times per URL and produced scatter plot graphs, and trend graphs based on load. This satisfied my requirements, which were: a real-world distribution of calls to different URLs; trending performance based on load; and not influencing the web app by running other intensive ops on the same box. I wrote the controller as a shell script that, for each server, started a background process looping over all the URLs in a file, calling curl on each one. I wrote the log processor in Perl since I was doing more Perl at that time.
0
1
0
1
2010-11-03T01:25:00.000
6
0
false
4,083,523
0
0
0
2
I've done some experiments using Apache Bench to profile my code response times, and it doesn't quite generate the right kind of data for me. I hope the good people here have ideas. Specifically, I need a tool that Does HTTP requests over the network (it doesn't need to do anything very fancy) Records response times as accurately as possible (at least to a few milliseconds) Writes the response time data to a file without further processing (or provides it to my code, if a library) I know about ab -e, which prints data to a file. The problem is that this prints only the quantile data, which is useful, but not what I need. The ab -g option would work, except that it doesn't print sub-second data, meaning I don't have the resolution I need. I wrote a few lines of Python to do it, but the httplib is horribly inefficient and so the results were useless. In general, I need better precision than pure Python is likely to provide. If anyone has suggestions for a library usable from Python, I'm all ears. I need something that is high performance, repeatable, and reliable. I know that half my responses are going to be along the lines of "internet latency makes that kind of detailed measurements meaningless." In my particular use case, this is not true. I need high resolution timing details. Something that actually used my HPET hardware would be awesome. Throwing a bounty on here because of the low number of answers and views.
python queue concurrency process management
4,086,600
0
2
727
0
python,concurrency,process,queue
If I understand correctly what you are doing, I might suggest a slightly different approach. Try establishing a single unit of work as a function and then layer on the parallel processing after that. For example: Wrap the current functionality (calling subprocess and capturing output) into a single function. Have the function create a result object that can be returned; alternatively, the function could write out to files as you see fit. Create an iterable (list, etc.) that contains an input for each chunk of data for step 1. Create a multiprocessing Pool and then capitalize on its map() functionality to execute your function from step 1 for each of the items in step 2. See the python multiprocessing docs for details. You could also use a worker/Queue model. The key, I think, is to encapsulate the current subprocess/output capture stuff into a function that does the work for a single chunk of data (whatever that is). Layering on the parallel processing piece is then quite straightforward using any of several techniques, only a couple of which were described here.
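A minimal sketch of that layering ('reduce-tool' and the chunk names are placeholders for your real executable and data set):

from multiprocessing import Pool
import subprocess

def work_one(chunk_path):
    # one unit of work: run the external tool on one chunk, capture output
    p = subprocess.Popen(['reduce-tool', chunk_path],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    return chunk_path, p.returncode, out, err

if __name__ == '__main__':
    chunks = ['chunk0.dat', 'chunk1.dat', 'chunk2.dat', 'chunk3.dat']
    pool = Pool(processes=8)   # one worker per core
    for path, rc, out, err in pool.map(work_one, chunks):
        print path, 'exit code', rc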
0
1
0
0
2010-11-03T10:55:00.000
3
0
false
4,086,311
1
0
0
1
The use case is as follows: I have a script that runs a series of non-Python executables to reduce (pulsar) data. I currently use subprocess.Popen(..., shell=True) and then the communicate function of subprocess to capture the standard out and standard error from the non-Python executables, and I log the captured output using the Python logging module. The problem is: just one core of the possible 8 gets used most of the time. I want to spawn multiple processes, each doing a part of the data set in parallel, and I want to keep track of progress. It is a script / program to analyze data from a low-frequency radio telescope (LOFAR). The easier to install / manage and test, the better. I was about to build code to manage all this, but I'm sure it must already exist in some easy library form.
Portable Python (Mac -> Windows)
4,086,715
3
1
1,743
0
python,portability,pygame
The Python scripts are reasonably portable, as long as the interpreter and relevant libraries are installed. Generated .exe and .app files are not.
0
1
0
0
2010-11-03T11:44:00.000
4
0.148885
false
4,086,675
0
0
0
3
I have a lovely Macbook now, and I'm enjoying coding on the move. I'm also enjoying coding in Python. However, I'd like to distribute the end result to friends using Windows, as an executable. I know that Py2Exe does this, but I don't know how portable Python is across operating systems. Can anyone offer any advice? I'm using PyGame too. Many thanks
Portable Python (Mac -> Windows)
4,087,087
0
1
1,743
0
python,portability,pygame
If you are planning to include Linux in your portability criteria, it's worth remembering that many distributions still package 2.6 (or even 2.5), and will probably be a version behind in the 3.x series as well (I'm assuming you're using 2.x, given the PyGame requirement). Versions of PyGame seem to vary quite heavily between distros as well.
0
1
0
0
2010-11-03T11:44:00.000
4
0
false
4,086,675
0
0
0
3
I have a lovely Macbook now, and I'm enjoying coding on the move. I'm also enjoying coding in Python. However, I'd like to distribute the end result to friends using Windows, as an executable. I know that Py2Exe does this, but I don't know how portable Python is across operating systems. Can anyone offer any advice? I'm using PyGame too. Many thanks
Portable Python (Mac -> Windows)
4,153,679
0
1
1,743
0
python,portability,pygame
Personally, I experienced huge difficulties with all the exe builders: py2exe, cx_freeze, etc. Bugs and errors all the time, including a recurring issue with the atexit module. I find just including the Python distro way more convenient. There is one more advantage besides ease of use. Each time you build an EXE for a Python app, what you essentially do is include the core of the Python installation, but only with the modules your app is using. Even in that case, your app may grow from the mere few KBs that a Python module takes to more than 15 MB because of the inclusion of the Python installation. Of course, installing the whole of Python will take more space, but each time you send your Python apps they will be only a few KBs long. Plus you won't have to go through the hassle of bundling the exe each time you change even a comma in your Python app. (Or I think you will; I don't know whether just replacing the .py module can help you avoid this.) In any case, installing Python and PyGame is as easy as installing any other application on Windows. On Linux, via Synaptic, it is also extremely easy. Mac OS is a bit tricky though. Mac OS already comes with Python pre-installed; Snow Leopard has Python 2.6.1 installed. However, if your app uses a later Python than that and you include a Python installer with your app, you will have to instruct the user to set, via "Get Info -> Open with", the Python Launcher app (which is responsible for launching Python apps) to use your version of Python and not the onboard default 2.6.1 version. It's not difficult and only takes a few seconds; even a clueless user can do this. Python is extremely portable: Python/PyGame apps can run unchanged on the three major platforms (Windows, Mac OS, Linux). They can even run on mobile and portable devices as well. If you need to build an app that runs across platforms, Python is dead easy and highly recommended.
0
1
0
0
2010-11-03T11:44:00.000
4
1.2
true
4,086,675
0
0
0
3
I have a lovely Macbook now, and I'm enjoying coding on the move. I'm also enjoying coding in Python. However, I'd like to distribute the end result to friends using Windows, as an executable. I know that Py2Exe does this, but I don't know how portable Python is across operating systems. Can anyone offer any advice? I'm using PyGame too. Many thanks
Interactive debugging with nosetests in PyDev
4,087,763
0
8
4,516
0
python,debugging,pylons,pydev,nose
Try import pydevd; pydevd.settrace() wherever you would like a breakpoint.
0
1
0
1
2010-11-03T13:38:00.000
3
0
false
4,087,582
0
0
0
1
I'm using PyDev ( with Aptana ) to write and debug a Python Pylons app, and I'd like to step through the tests in the debugger. Is it possible to launch nosetests through PyDev and stop at breakpoints?
Workaround for Pythonbrew failing because test_socket can't resolve?
4,161,287
0
0
2,411
0
python,macos,installation,osx-snow-leopard
The solution was to --force pythonbrew to install in spite of the errors. I tested the socket responses using the built-in Python, Perl and Ruby, and they had the same problem resolving the localhost name. I tested using a current version of Ruby and Python on one of my Linux boxes, and the calls worked, so I was pretty sure it was something outside of that particular Mac's configuration. After forcing the install I tested the socket calls to other hosts and got the expected results and haven't had any problems doing other networking tasks so I think everything is fine.
0
1
1
0
2010-11-03T19:15:00.000
2
1.2
true
4,090,753
0
0
0
1
I'm using pythonbrew to install Python 2.6.6 on Snow Leopard. It failed with a readline error, then a socket error. I installed readline from source, which made the installer happy on the next attempt, but the socket error remains:

test_socket
test test_socket failed -- Traceback (most recent call last):
  File "/Users/gferguson/python/pythonbrew/build/Python-2.6.6/Lib/test/test_socket.py", line 483, in testSockName
    my_ip_addr = socket.gethostbyname(socket.gethostname())
gaierror: [Errno 8] nodename nor servname provided, or not known

Digging around with the system Python shows:

>>> import socket
>>> my_ip_addr = socket.gethostbyname(socket.gethostname())
Traceback (most recent call last):
  File "", line 1, in
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
>>> socket.gethostname()
'S1WSMA-JHAMI'
>>> socket.gethostbyname('S1WSMA-JHAMI')
Traceback (most recent call last):
  File "", line 1, in
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
>>> socket.gethostbyname('google.com')
'74.125.227.20'

I triangulated the problem with Ruby's IRB:

IPSocket.getaddress(Socket.gethostname)
SocketError: getaddrinfo: nodename nor servname provided, or not known

So, I'm not sure whether this is a bug in the resolver not understanding the hostname, something weird in the machine's configuration, or something weird in our network's DNS lookup, but whatever it is, the installer isn't happy. I think it's a benign failure in the installer though, so I feel safe forcing the test to succeed, but I'm not sure how to tell pythonbrew to ignore that test result or specifically pass test_socket. I'm also seeing the following statuses but haven't figured out whether they're significant yet:

33 tests skipped: test_al test_bsddb test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dl test_epoll test_gdbm test_gl test_imageop test_imgfile test_largefile test_linuxaudiodev test_normalization test_ossaudiodev test_pep277 test_py3kwarn test_smtpnet test_socketserver test_startfile test_sunaudiodev test_timeout test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64
1 skip unexpected on darwin: test_dl

Anyone have experience getting Python 2.6.6 installed with pythonbrew on Snow Leopard?

Update: I just tried the socket.gethostbyname(socket.gethostname()) command from Python installed on my MacBook Pro with Snow Leopard, and it successfully reported my IP back, so it appears the problem is in the system config at work. I am going to ask at SO's sibling "Apple" site and see if anyone knows what it might be.
What is the difference between Mac OS X pre-installed Python and the one from Python.org?
4,093,046
0
0
2,569
0
python,osx-snow-leopard
It's impossible to tell without knowing which version(s) you're comparing. You're best off doing python --version on your default OS X install, and then checking release notes from that version to subsequent versions. My guess (I don't have OS X) is that you're likely running 2.4.x or 2.5.x. There'll be very few regressive differences from 2.4.x forward on the 2.x version tree. Python 3.x introduces syntax changes, which will break some existing code. Perhaps the most visible change is print becomes a function in 3.0, while still a statement in 2.x. In general, amongst 2.x, syntax is only enhanced, not broken. The changes are going to be more in the libraries (i.e. the md5 module is deprecated at 2.6 in favor of the hashlib module).
0
1
0
0
2010-11-04T00:50:00.000
2
0
false
4,093,015
1
0
0
2
I am a new Mac (Snow Leopard) user and I found that Python is pre-installed in Mac OS X. What is the difference between Mac OS X pre-installed Python and the one from Python.org? If I install the one from Python.org, will it break anything? Will it be redundant? EDIT What would be a good reason to prefer the Python.org version, comparing identical version numbers head-to-head?
What is the difference between Mac OS X pre-installed Python and the one from Python.org?
4,094,163
3
0
2,569
0
python,osx-snow-leopard
The Apple-supplied Python 2.6 in Mac OS X 10.6 (Snow Leopard) is currently 2.6.1 (and, based on previous OS X releases, it is unlikely Apple will update it to a newer version in a 10.6.x maintenance release). The most recent (and likely final) release of Python 2.6 is 2.6.6. So if you install the most recent python.org release, you will get the benefit of a large number of bug fixes that have been made over the lifetime of Python 2.6. There are some other differences. The python.org 2.6.x versions are built as 32-bit-only. The Apple-supplied version is built as a 32-bit/64-bit universal and will, by default, prefer to run in 64-bit mode when possible. Either one can lead to some issues when installing third-party packages with C extension modules that depend on other 3rd-party libraries. There needs to be at least one common architecture (be it 32-bit, i386, or 64-bit, x86_64) among all the components. Another difference is that the Apple-supplied 2.6 is linked with a new version of Tk 8.5; there are reported problems with the IDLE that comes with 10.6 and possibly with other applications using Tkinter. If you plan to use either, you may be better off with the python.org 2.6 which is linked with Tk 8.4. On OS X, it is particularly easy and common to install multiple Python versions, even of the same major version. If you do install the python.org version, by default the installer will modify your shell search PATH so that the python.org version is found first. It will also be available via the absolute path /usr/local/bin/python2.6. The Apple-supplied version will remain available as /usr/bin/python2.6. FYI: Be aware that Python 2.7 has already been released and there are OS X installers for it available from python.org. A new, not upwards-compatible version of Python, Python 3, is also available (currently 3.1.2 with 3.2 coming in a few months) and is expected to gradually replace Python 2 in popularity as new features are only being added to Python 3.
0
1
0
0
2010-11-04T00:50:00.000
2
0.291313
false
4,093,015
1
0
0
2
I am a new Mac (Snow Leopard) user and I found that Python is pre-installed in Mac OS X. What is the difference between Mac OS X pre-installed Python and the one from Python.org? If I install the one from Python.org, will it break anything? Will it be redundant? EDIT What would be a good reason to prefer the Python.org version, comparing identical version numbers head-to-head?
Google App Engine gives spurious content at beginning of page after quiescent period
4,098,417
6
1
108
0
python,google-app-engine
Somewhere in your top level module code is something that uses Python print statements. Print outputs to standard out, which is what is returned as the response body; if it outputs a pair of newlines, the content before that is treated by the browser as the response header. The 'junk' you're seeing is the real response headers being produced by your webapp. It's only happening on startup requests, because that's the only time the code in question gets executed.
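In other words, look for something like this at module level and remove it (the handler is hypothetical, shown only for context):

from google.appengine.ext import webapp

print "debug: module loaded"   # <-- the culprit: writes to stdout before
                               #     the real headers on a cold start

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('hello')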
0
1
0
0
2010-11-04T15:15:00.000
1
1.2
true
4,098,119
0
0
1
1
I'm developing an app in Python for Google App Engine. When I run the deployed app from appspot, it works fine unless I'm accessing it for the first time in over, say, 5 minutes. The problem is that if I haven't accessed the app for a while, the page renders with the message Status: 200 OK Content-Type: text/html; charset=utf-8 Cache-Control: no-cache Expires: Fri, 01 Jan 1990 00:00:00 GMT Content-Length: 15493 prepended at the top. Usually that text is displayed for a second or two before the rest of the page is displayed. If I check the server Logs, I see the info message This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. The problem is easily corrected by refreshing the page. In this case, the page is delivered correctly, and works for subsequent refreshes. But if I wait 5 minutes, the problem comes back. Any explanations, or suggestions on how to troubleshoot this? I've got a vague notion that when GAE "wakes up" after being inactive, there is an incorrect initialization going on. Or perhaps a header from a previous bout of activity is lingering in a buffer somewhere. But self.response.out seems to be empty when the request handler is invoked.
urllib2 freezes GUI
4,100,722
6
1
310
0
python,pygtk
Calling urllib2 from the main thread blocks the Gtk event loop and consequently freezes the user interface. This is not specific to urllib2, but happens with any longer running function (e.g. subprocess.call). Either use the asynchronous IO facilities from glib or call urllib2 in a separate thread to avoid this issue.
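A sketch of the separate-thread route (in a real app, call gobject.threads_init() once at startup before mixing threads with PyGTK):

import threading
import urllib2
import gobject

def fetch_async(url, on_done):
    # run the blocking fetch off the main thread, then hand the result
    # back to the GTK main loop via idle_add
    def worker():
        data = urllib2.urlopen(url).read()
        gobject.idle_add(on_done, data)
    threading.Thread(target=worker).start()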
0
1
0
0
2010-11-04T19:45:00.000
2
1.2
true
4,100,628
0
0
0
2
I'm using PyGTK for a small app I've been developing. The use of urllib2 through a proxy freezes my GUI. Is there any way to prevent that? My code that actually does the work is separate from the GUI, so I was thinking maybe of using subprocess to call the Python file. However, how would that work if I were to convert the app to an exe file? Thanks
urllib2 freezes GUI
4,101,507
0
1
310
0
python,pygtk
I'd consider using the multiprocessing module, creating a pair of Queue objects: one for the GUI controller or other components to send requests to the urllib2 process, the other for returning the results. A pair of Queue objects is sufficient for a simple design (just two processes). The urllib2 process simply consumes requests from its request queue and posts responses to the results queue. The process on the other side can operate asynchronously, posting requests and, from anywhere in the event loop (or from a separate thread), pulling responses out and posting them back to a dictionary or dispatching a callback function (probably also maintained in a dictionary). For example, I might have the request model create a callback-handling object, store it in a dictionary using the object's ID as the key, and post a tuple of that ID and the URL to the request queue; then have the response processing pull IDs and response text off the results queue, so that the event-handling loop can dispatch the response to the .callback() method of the object that was stored in the dictionary to begin with. The responses could be URL text results, but handling for Exception objects could also be implemented (perhaps dispatched to an .errback() method in our hypothetical callback object's interface). Naturally, if our main GUI is multi-threaded we have to ensure coherent access to this dictionary, but there should be relatively low contention on it; all access to it is non-blocking. More complex designs are possible. A pool of urllib2-handling processes could all share one pair of Queue objects (the beauty of these queues is that they handle all the locking and coherency details for us; multiple producers/consumers are supported). If the GUI needed to be fanned out into multiple processes that share the same urllib2 process or pool, then it would be time to look for a message bus (Spread or AMQP, for example). Shared memory and the multiprocessing locking primitives could also be used, but that would involve quite a bit more effort.
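Stripped to its bones, the simple two-queue design might look like this (the URL is a placeholder; error handling kept minimal):

from multiprocessing import Process, Queue
import urllib2

def url_worker(requests, results):
    while True:
        req_id, url = requests.get()   # blocks until a request arrives
        try:
            results.put((req_id, urllib2.urlopen(url).read()))
        except Exception, e:
            results.put((req_id, e))   # errors travel back for errback handling

requests, results = Queue(), Queue()
Process(target=url_worker, args=(requests, results)).start()
requests.put((1, 'http://example.com/'))
# elsewhere, e.g. in an idle handler: poll results.get_nowait()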
0
1
0
0
2010-11-04T19:45:00.000
2
0
false
4,100,628
0
0
0
2
I'm using PyGTK for a small app I've developing. The usage of URLlib2 through a proxy will freeze my GUI. Is there anyway to prevent that? My code that actually does the work is seperate from the GUI, so I was thinking may be using subprocess to call the python file. However, how would that work if I was to convert the app to an exe file? Thanks
How to write Python bindings for command line applications
4,101,176
1
2
510
0
c++,python,c,binding
One way would be to re-factor your command-line utility so that the command-line handling is separated and the actual functionality is exposed as a shared library. Then you could expose those functions using Cython, and write your complete command-line utility in Python exploiting them. This makes distribution harder, though. What you are doing is still the best way.
0
1
0
0
2010-11-04T20:36:00.000
2
0.099668
false
4,101,130
0
0
0
2
I'm interested in writing a python binding or wrapper for an existing command line utility that I use on Linux, so that I can access its features in my python programs. Is there a standard approach to doing this that someone could point me to? At the moment, I have wrapped the command line executable in a subprocess.Popen call, which works but feels quite brittle, and I'd like to make the integration between the two sides much more stable so that it works in places other than my own computer!
How to write Python bindings for command line applications
4,101,158
5
2
510
0
c++,python,c,binding
If you must use a command line interface, then subprocess.Popen is your best bet. Remember that you can use shell=True to let it pick up the path variables, and you can use os.path.join to get OS-dependent path separators, etc. If, however, your command line utility has shared libraries, look at ctypes, which allows you to connect directly to those libraries and expose their functionality directly.
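A tiny ctypes sketch of the shared-library route (the library name and function signature here are invented purely for illustration):

import ctypes

lib = ctypes.CDLL('libmytool.so')               # hypothetical shared library
lib.process_file.argtypes = [ctypes.c_char_p]   # declare the C signature
lib.process_file.restype = ctypes.c_int

status = lib.process_file('/tmp/input.dat')     # call straight into C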
0
1
0
0
2010-11-04T20:36:00.000
2
1.2
true
4,101,130
0
0
0
2
I'm interested in writing a python binding or wrapper for an existing command line utility that I use on Linux, so that I can access its features in my python programs. Is there a standard approach to doing this that someone could point me to? At the moment, I have wrapped the command line executable in a subprocess.Popen call, which works but feels quite brittle, and I'd like to make the integration between the two sides much more stable so that it works in places other than my own computer!
Hadoop/Elastic Map Reduce with binary executable?
4,101,917
0
1
1,138
0
python,matlab,amazon-web-services,hadoop,mapreduce
The following is not exactly an answer to your Hadoop question, but I couldn't resist asking: why don't you execute your processing jobs on Grid resources? There are proven solutions for executing compute-intensive workflows on the Grid, and as far as I know the MATLAB runtime environment is usually available on those resources. You may want to consider using the Grid, especially if you are in academia. Good luck.
0
1
0
0
2010-11-04T22:01:00.000
2
0
false
4,101,815
0
1
0
1
I am writing a distributed image processing application using Hadoop streaming, Python, MATLAB, and Elastic MapReduce. I have compiled a binary executable of my MATLAB code using the MATLAB compiler. I am wondering how I can incorporate this into my workflow so the binary is part of the processing on Amazon's Elastic MapReduce. It looks like I have to use the Hadoop Distributed Cache? The code is very complicated (and not written by me) so porting it to another language is not possible right now. Thanks
Getting NppExec to understand path of the current file in Notepad++ (for Python scripts)
4,106,339
15
6
4,494
0
python,notepad++,nppexec
In Notepad++: Plugins > NppExec > Follow $(CURRENT_DIRECTORY)
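If you would rather fix it from the script side, independent of editor settings, this forces the working directory to wherever the script file lives:

import os, sys
os.chdir(os.path.dirname(os.path.abspath(sys.argv[0])))
print os.getcwd()   # now the directory the script was saved in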
0
1
0
1
2010-11-05T02:15:00.000
2
1.2
true
4,103,085
0
0
0
1
Using Windows for the first time in quite a while, I have picked up Notepad++ and am using the NppExec plugin to run Python scripts. However, I noticed that Notepad++ doesn't pick up the directory my script is saved in. For example, I place "script.py" in 'My Documents'; however, os.getcwd() prints "Program Files \ Notepad++". Does anyone know how to change this behavior? I'm not exactly used to this, coming from a Mac.
Best Practice for dealing with app engine cold start problem
4,105,029
6
3
364
0
python,google-app-engine
- Reduce the set of libraries you require in order to serve requests as much as you can.
- For expensive libraries that are only used in some places, put the import statement inside the function that uses them. This way, the library is only imported the first time it's needed.
- If your framework supports it, do just-in-time importing of handlers, so you don't have to import them all when your app starts up.
- Look forward to reserved instances / warmup requests, coming soon!
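The deferred-import suggestion, sketched out (heavylib is a stand-in for whatever expensive module only some requests need):

def rare_handler(request):
    import heavylib                    # import cost is paid on the first call,
    return heavylib.process(request)   # not on every cold start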
0
1
0
0
2010-11-05T09:36:00.000
1
1.2
true
4,104,751
0
0
1
1
After a period of inactivity, the first request takes about 5 to 10 seconds to come through. Are there any best-practice solutions to overcome this problem? I'm using the Python version of App Engine.
Strange urllib2.urlopen() behavior on Ubuntu 10.10
4,112,300
3
1
427
0
python,ubuntu,urllib2,ubuntu-10.10
5 seconds sounds suspiciously like the DNS resolving timeout. A hunch: it's possible that it's cycling through the DNS servers in your /etc/resolv.conf, and if one of them is broken, the default timeout is 5 seconds on Linux, after which it tries the next one, looping back to the top once it has tried them all. If you have multiple DNS servers listed in resolv.conf, try removing all but one. If this fixes it, then look into why you're being assigned broken resolving servers.
0
1
1
0
2010-11-05T23:37:00.000
2
1.2
true
4,110,992
0
0
0
1
I am experiencing strange behavior with urllib2.urlopen() on Ubuntu 10.10. The first request to a URL goes fast, but the second takes a long time to connect, I think between 5 and 10 seconds. On Windows this works normally. Does anybody have an idea what could cause this issue? Thanks, Onno
Better ways to handle AppEngine requests that time out?
4,112,279
1
1
264
0
python,google-app-engine
I have been handling something similar by building a custom automatic retry dispatcher on the client. Whenever an ajax call to the server fails, the client retries it. This works very well if your page is ajaxy. If your app serves entire HTML pages, then you can use a two-pass process: first send an empty page containing only an ajax request. Then, when App Engine receives that ajax request, it outputs the same HTML you had before. If the ajax call succeeds, it fills the DOM with the result. If it fails, it retries once.
0
1
0
0
2010-11-06T06:46:00.000
2
0.099668
false
4,112,235
0
0
1
2
Sometimes, with requests that do a lot, Google AppEngine returns an error. I have been handling this by some trickery: memcaching intermediate processed data and just requesting the page again. This often works because the memcached data does not have to be recalculated and the request finishes in time. However... this hack requires seeing an error, going back, and clicking again. Obviously less than ideal. Any suggestions? inb4: "optimize your process better", "split your page into sub-processes", and "use taskqueue". Thanks for any thoughts. Edit - To clarify: Long wait for requests is ok because the function is administrative. I'm basically looking to run a data-mining function. I'm searching over my datastore and modifying a bunch of objects. I think the correct answer is that AppEngine may not be the right tool for this. I should be exporting the data to a computer where I can run functions like this on my own. It seems AppEngine is really intended for serving with lighter processing demands. Maybe the quota/pricing model should offer the option to increase processing timeouts and charge extra.
Better ways to handle AppEngine requests that time out?
4,117,235
1
1
264
0
python,google-app-engine
If interactive user requests are hitting the 30 second deadline, you have bigger problems: your user has almost certainly given up and left anyway. What you can do depends on what your code is doing. There's a lot to be optimized by batching datastore operations, or reducing them by changing how you model your data; you can offload work to the Task Queue; for URLFetches, you can execute them in parallel. Tell us more about what you're doing and we may be able to provide more concrete suggestions.
0
1
0
0
2010-11-06T06:46:00.000
2
1.2
true
4,112,235
0
0
1
2
Sometimes, with requests that do a lot, Google AppEngine returns an error. I have been handling this by some trickery: memcaching intermediate processed data and just requesting the page again. This often works because the memcached data does not have to be recalculated and the request finishes in time. However... this hack requires seeing an error, going back, and clicking again. Obviously less than ideal. Any suggestions? inb4: "optimize your process better", "split your page into sub-processes", and "use taskqueue". Thanks for any thoughts. Edit - To clarify: Long wait for requests is ok because the function is administrative. I'm basically looking to run a data-mining function. I'm searching over my datastore and modifying a bunch of objects. I think the correct answer is that AppEngine may not be the right tool for this. I should be exporting the data to a computer where I can run functions like this on my own. It seems AppEngine is really intended for serving with lighter processing demands. Maybe the quota/pricing model should offer the option to increase processing timeouts and charge extra.
Scheduling Tasks
4,126,085
4
2
1,544
0
python,scheduled-tasks
The OS provides a tool called 'cron' that's for exactly this purpose. You shouldn't need to modify your script at all to make use of it. At a terminal command prompt, type man cron for more info.
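For example, a crontab entry along these lines (the path is illustrative; edit your table with crontab -e) runs the script at ten past every hour, close to the 20-odd runs a day you want:

10 * * * * /usr/bin/python /Users/you/parse_and_upload.py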
0
1
0
0
2010-11-08T16:55:00.000
2
0.379949
false
4,126,041
0
0
0
1
I run Mac OS X. I have completed a Python script that essentially parses a few sites online and uploads a particular file to an online server. Essentially, I wish to run this script automatically from my computer about 20 times a day. Is there a solution to schedule this script to run at fixed time points every day? Does this require compiling the Python code into a .exe file? Thanks a lot!
How should I do full-text searching on App Engine?
5,072,790
0
7
1,246
0
python,google-app-engine,search,full-text-search,full-text-indexing
GAE has announced plans to offer full-text searching natively in the Datastore soon.
0
1
0
1
2010-11-09T05:36:00.000
2
1.2
true
4,130,813
0
0
1
1
What should I do for fast, full-text searching on App Engine with as little work as possible (and as little Java — I’m doing Python.)?
When a faster python?
4,132,499
3
2
312
0
python,performance
Sure. Use one of the variants that uses a JITer, such as IronPython, Jython, or PyPy.
0
1
0
1
2010-11-09T10:09:00.000
4
0.148885
false
4,132,493
1
0
0
2
There was the Unladen Swallow project that aimed to produce a faster Python, but it seems to have stopped. Is there a way to get a faster Python, I mean faster than CPython, without the use of Psyco?
When a faster python?
4,132,523
1
2
312
0
python,performance
I have seen PyPy be very fast on some tests; have a look.
0
1
0
1
2010-11-09T10:09:00.000
4
0.049958
false
4,132,493
1
0
0
2
There was the Unladen Swallow project that aimed to produce a faster Python, but it seems to have stopped. Is there a way to get a faster Python, I mean faster than CPython, without the use of Psyco?
Trouble installing MySQLdb for second version of Python
4,139,191
0
3
429
1
python,mysql,permissions,configuration-files
Are you sure that file isn't hardcoded in some other portion of the build process? Why not just add it to your $PATH for the duration of the build? Does the script need to write that file for some reason? Does the build script use su or sudo to attempt to become some other user? Are you absolutely sure about both the permissions and the fact that you ran the script as root? It's a really weird thing if you still can't get to it. Are you using a chroot or a virtualenv?
0
1
0
0
2010-11-09T20:52:00.000
2
0
false
4,138,504
0
0
0
2
The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually. The Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states: sh: /etc/mysql/my.cnf: Permission denied along with a Traceback that says setup.py couldn't find the file. Note that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is /etc/mysql/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error. Any suggestions?
Trouble installing MySQLdb for second version of Python
4,139,563
2
3
429
1
python,mysql,permissions,configuration-files
As far as I'm aware, there is a very significant difference between "mysql_config" and "my.cnf". "mysql_config" is usually located in the "bin" folder of your MySQL install and, when executed, spits out various filesystem location information about your install. "my.cnf" is a configuration file used by MySQL itself. In short, when the script asks for "mysql_config", it literally means the executable file named "mysql_config", not the textual configuration file you're feeding it. MySQLdb needs the "mysql_config" file so that it knows which libraries to use. That's it. It does not read your MySQL configuration directly. The errors you are experiencing come down to two things: it's trying to open the wrong file and running into permission trouble, and even after it has tried to open that file, it still can't find the "mysql_config" file. From here, you need to locate your MySQL installation's "bin" folder and check that it contains "mysql_config". Then you can edit the path into the "site.cfg" file and you should be good to go.
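Concretely, the line you want in MySQL-python's site.cfg looks something like this (locate your real path with: which mysql_config):

mysql_config = /usr/bin/mysql_config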
0
1
0
0
2010-11-09T20:52:00.000
2
0.197375
false
4,138,504
0
0
0
2
The context: I'm working on some Python scripts on an Ubuntu server. I need to use some code written in Python 2.7 but our server has Python 2.5. We installed 2.7 as a second instance of Python so we wouldn't break anything reliant on 2.5. Now I need to install the MySQLdb package. I assume I can't do this the easy way by running apt-get install python-mysqldb because it will likely just reinstall to python 2.5, so I am just trying to install it manually. The Problem: In the MySQL-python-1.2.3 directory I try to run python2.7 setup.py build and get an error that states: sh: /etc/mysql/my.cnf: Permission denied along with a Traceback that says setup.py couldn't find the file. Note that the setup.py script looks for a mysql_config file in the $PATH directories by default, but the mysql config file for our server is /etc/mysql/my.cnf, so I changed the package's site.cfg file to match. I checked the permissions for the file, which are -rw-r--r--. I tried running the script as root and got the same error. Any suggestions?
How to create Unix and Linux binaries from Python code
4,139,691
4
2
3,222
0
python,linux,unix,cross-compiling
The standard freeze tool (from Tools/freeze) can be used to make fully-standalone binaries on Unix, including all extension modules and builtins (and omitting anything that is not directly or indirectly imported).
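Typical usage looks roughly like this (the path depends on where your Python source tree lives); freeze.py emits a Makefile, and make then produces the standalone binary:

python /path/to/Python-2.6.6/Tools/freeze/freeze.py myscript.py
make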
0
1
0
0
2010-11-09T21:35:00.000
1
1.2
true
4,138,886
0
0
0
1
Anybody know how this can be done? I took a look at cx_Freeze, but it seems that it doesn't compile everything necessary into one binary (i.e., the python builtins aren't present).
Bulk renaming of files based on lookup
4,140,643
1
7
5,950
0
python,perl,bash
Read in the text file, create a hash with the current file name, so files['1500000704'] = 'SH103239' and so on. Then go through the files in the current directory, grab the new filename from the hash, and rename it.
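In Python that recipe comes out to just a few lines (assuming the mapping file is named lookup.txt and the script runs inside the image folder):

import os

mapping = {}
for line in open('lookup.txt'):
    new_name, old_id = line.split()   # e.g. "SH103239 1500000704"
    mapping[old_id] = new_name

suffix = '_full.jpg'
for filename in os.listdir('.'):
    if filename.endswith(suffix):
        old_id = filename[:-len(suffix)]
        if old_id in mapping:
            os.rename(filename, mapping[old_id] + suffix)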
0
1
0
0
2010-11-10T02:05:00.000
9
0.022219
false
4,140,619
0
0
0
1
I have a folder full of image files such as

1500000704_full.jpg
1500000705_full.jpg
1500000711_full.jpg
1500000712_full.jpg
1500000714_full.jpg
1500000744_full.jpg
1500000745_full.jpg
1500000802_full.jpg
1500000803_full.jpg

I need to rename the files based on a lookup from a text file which has entries such as

SH103239 1500000704
SH103240 1500000705
SH103241 1500000711
SH103242 1500000712
SH103243 1500000714
SH103244 1500000744
SH103245 1500000745
SH103252 1500000802
SH103253 1500000803
SH103254 1500000804

So, I want the image files to be renamed

SH103239_full.jpg
SH103240_full.jpg
SH103241_full.jpg
SH103242_full.jpg
SH103243_full.jpg
SH103244_full.jpg
SH103245_full.jpg
SH103252_full.jpg
SH103253_full.jpg
SH103254_full.jpg

What is the easiest way to do this job? Can anyone write me a quick command or script that does this? I have a lot of these image files and manual changes aren't feasible. I am on Ubuntu, but depending on the tool I can switch to Windows if need be. Ideally I would love to have it in a bash script so that I can learn more, or in simple Perl or Python. Thanks. EDIT: Had to change the file names
Why shouldn't I use async (evented) IO
4,140,840
1
9
1,703
0
python,asynchronous,libevent,gevent
The biggest issue is that without threads, a block for one client will cause a block for all clients. For example, if one client requests a resource (a file on disk, paged-out memory, etc.) that requires the OS to block the requesting process, then all clients will have to wait. A multithreaded server can block just the one client and continue to serve the others. That said, if the above scenario is unlikely (that is, all clients will request the same resources), then event-driven is the way to go.
0
1
0
1
2010-11-10T02:13:00.000
2
1.2
true
4,140,656
1
0
0
2
I am now writing some evented code (in Python, using gevent) and I use nginx as a web server, and I feel both are great. I was told that there is a trade-off with events but was unable to see it. Can someone please shed some light? James
Why shouldn't I use async (evented) IO
4,291,204
9
9
1,703
0
python,asynchronous,libevent,gevent
The only difficulty of evented programming is that you mustn't block, ever. This can be hard to achieve if you use some libraries that were designed with threads in mind. If you don't control these libraries, a fork() + message ipc is the way to go.
0
1
0
1
2010-11-10T02:13:00.000
2
1
false
4,140,656
1
0
0
2
I am now writing some evented code (in Python, using gevent) and I use nginx as a web server, and I feel both are great. I was told that there is a trade-off with events but was unable to see it. Can someone please shed some light? James
What are the SCons alternatives?
7,201,456
4
6
2,318
0
java,c++,python,scons,gyp
I tried to do a Java / C++ / C++-to-Java SWIG (+ Protocol Buffers) project in CMake and it was horrible! In such a case, the problem with CMake is that the scripting language is extremely limited. I switched to SCons and everything got much easier.
1
1
0
1
2010-11-10T05:32:00.000
5
0.158649
false
4,141,511
0
0
0
4
I have projects in C++, Java and Python. Projects in C++ export SWIG interfaces so they can be used by Java and Python projects. My question is: what build mechanism can I use to manage dependencies and build these projects? I have used SCons and GYP. They are fairly easy to use and allow plugins (code generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python. I develop on Linux, but I'd like to be able to build on Mac and Windows platforms as well.
What are the SCons alternatives?
4,142,509
1
6
2,318
0
java,c++,python,scons,gyp
For Java and C++ projects you can take a look at Maven plus the maven-nar-plugin, but for Python I really don't know the best option. Maybe other tools like CMake would fit better.
1
1
0
1
2010-11-10T05:32:00.000
5
0.039979
false
4,141,511
0
0
0
4
I have projects in C++, Java and Python. The projects in C++ export SWIG interfaces so they can be used by the Java and Python projects. My question is: what build mechanism can I use to manage dependencies and build these projects? I have used SCons and GYP. They are fairly easy to use and allow plugins (code generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python. I develop on the Linux platform, but I'd like to be able to build on Mac and Windows platforms as well.
What are the SCons alternatives?
4,141,589
9
6
2,318
0
java,c++,python,scons,gyp
CMake is what I use and prefer for my projects. There's also Rake (it comes with Ruby, but can be used for anything), which I regard rather highly.
1
1
0
1
2010-11-10T05:32:00.000
5
1
false
4,141,511
0
0
0
4
I have projects in C++, Java and Python. The projects in C++ export SWIG interfaces so they can be used by the Java and Python projects. My question is: what build mechanism can I use to manage dependencies and build these projects? I have used SCons and GYP. They are fairly easy to use and allow plugins (code generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python. I develop on the Linux platform, but I'd like to be able to build on Mac and Windows platforms as well.
What are the SCons alternatives?
4,143,403
1
6
2,318
0
java,c++,python,scons,gyp
In the Java world, Ant is the "lingua franca" of build systems. Ant supports a C++ task via ant-contrib, so you can compile your C++ code. With Ant's exec task you can still run SWIG on the C++ code in order to generate the wrappers. Then standard tasks such as javac/jar can be used for the Java application build.
1
1
0
1
2010-11-10T05:32:00.000
5
0.039979
false
4,141,511
0
0
0
4
I have projects in C++, Java and Python. The projects in C++ export SWIG interfaces so they can be used by the Java and Python projects. My question is: what build mechanism can I use to manage dependencies and build these projects? I have used SCons and GYP. They are fairly easy to use and allow plugins (code generators, compilers, packers). I'd like to know whether there are alternatives, in particular with native support for C++, Java and Python. I develop on the Linux platform, but I'd like to be able to build on Mac and Windows platforms as well.
/etc/init.d sh script
4,145,649
1
1
2,766
0
python,linux
Pardus initialization (http://www.pardus.org.tr/eng/projects/comar/SpeedingUpLinuxWithPardus.html) is based on Python, and in theory you can even start the system with a Windows executable (through Wine, of course). You can see a sample initialization script there doing almost the same thing as a shell script, but in a Pythonic way.
0
1
0
0
2010-11-10T14:04:00.000
3
0.066568
false
4,145,282
0
0
0
1
I'm new to Python. I want to create a controlled script executed from /etc/init.d, with commands like /etc/init.d something start/stop/restart. Any advice appreciated.
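A minimal sketch of what such a Python control script can look like; the daemon command and pidfile path are placeholders:

    #!/usr/bin/env python
    import os, signal, sys

    PIDFILE = '/var/run/mydaemon.pid'

    def start():
        pid = os.fork()
        if pid == 0:
            os.execvp('mydaemon', ['mydaemon'])  # replace with your daemon
        open(PIDFILE, 'w').write(str(pid))       # parent records the child pid

    def stop():
        if os.path.exists(PIDFILE):
            os.kill(int(open(PIDFILE).read()), signal.SIGTERM)
            os.remove(PIDFILE)

    if __name__ == '__main__':
        action = sys.argv[1] if len(sys.argv) > 1 else ''
        if action == 'start':
            start()
        elif action == 'stop':
            stop()
        elif action == 'restart':
            stop()
            start()
        else:
            sys.exit('usage: %s start|stop|restart' % sys.argv[0])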
How can I capture all of the python log records generated during the execution of a series of Celery tasks?
4,440,220
0
5
1,306
0
python,logging,message-queue,task,celery
It sounds like some kind of 'watcher' would be ideal. If you can watch and consume the logs as a stream, you could slurp the results as they come in. Since the watcher would be running separately, and therefore have no dependencies with respect to what it is watching, I believe this would satisfy your requirements for a non-invasive solution.
0
1
0
0
2010-11-11T20:03:00.000
3
0
false
4,158,758
0
0
1
1
I want to convert my homegrown task queue system into a Celery-based task queue, but one feature I currently have is causing me some distress. Right now, my task queue operates very coarsely; I run the job (which generates data and uploads it to another server), collect the logging using a variant on Nose's log capture library, and then I store the logging for the task as a detailed result record in the application database. I would like to break this down into three tasks: collect data; upload data; report results (including all logging from the preceding two tasks). The real kicker here is the logging collection. Right now, using the log capture, I have a series of log records for each log call made during the data generation and upload process. These are required for diagnostic purposes. Given that the tasks are not even guaranteed to run in the same process, it's not clear how I would accomplish this in a Celery task queue. My ideal solution to this problem would be a trivial and ideally minimally invasive method of capturing all logging during the predecessor tasks (1, 2) and making it available to the reporter task (3). Am I best off remaining fairly coarse-grained with my task definition and putting all of this work in one task? Or is there a way to pass the existing captured logging around in order to collect it at the end?
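One way to prototype the capture half of this, independent of Celery: attach a buffering handler to the root logger so tasks 1 and 2 log normally while the records pile up for task 3. The class and names are illustrative, not an established recipe:

    import logging

    class CollectingHandler(logging.Handler):
        def __init__(self):
            logging.Handler.__init__(self)
            self.records = []
        def emit(self, record):
            self.records.append(self.format(record))

    collector = CollectingHandler()
    collector.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logging.getLogger().addHandler(collector)

    logging.warning('upload retried')         # what tasks 1 and 2 would emit
    report_payload = list(collector.records)  # hand this to the reporter task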
Bittorrent up-to-date library written in python
4,159,871
2
2
484
0
python,client,bittorrent
The main BitTorrent client, from bittorrent.com, is all Python-based, I believe. I have hacked on it in the past, and it's very clean code that is easy to modify.
0
1
0
0
2010-11-11T22:21:00.000
1
1.2
true
4,159,847
0
0
0
1
Is there any up-to-date BitTorrent library which is written in Python and can be used on Windows to write a client? Some time ago BitComet was written in Python and it was OK. Any alternatives? And a second question: does the BitTorrent protocol change? For example, what may happen if I use the old BitComet library?
How to create a well-formatted word document(.DOC) in python on Linux?
4,171,572
1
1
4,741
0
python,linux,ms-word
OpenOffice has some Python scripting ability. I have only heard about it; I haven't studied it or used it.
0
1
0
0
2010-11-13T08:01:00.000
2
0.099668
false
4,171,555
1
0
0
1
I want to create a new Word document from scratch with Python on the Linux platform, but I do not know how to do that. I am not willing to use RTF or python-docx to create a docx document. Is there any other way to do so? Remember, I need the document to keep its formatting. Thanks for everyone's help!
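Building on the OpenOffice pointer in the answer above, a rough, untested PyUNO outline; it assumes OpenOffice was started in listening mode (soffice -accept="socket,host=localhost,port=2002;urp;") and that the stock "MS Word 97" filter is what produces .doc output:

    import uno

    local = uno.getComponentContext()
    resolver = local.ServiceManager.createInstanceWithContext(
        'com.sun.star.bridge.UnoUrlResolver', local)
    ctx = resolver.resolve(
        'uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext')
    desktop = ctx.ServiceManager.createInstanceWithContext(
        'com.sun.star.frame.Desktop', ctx)

    # open a blank Writer document and insert some text
    doc = desktop.loadComponentFromURL('private:factory/swriter', '_blank', 0, ())
    doc.Text.insertString(doc.Text.createTextCursor(), 'Hello, Word', 0)

    # save as .doc via the Word filter
    prop = uno.createUnoStruct('com.sun.star.beans.PropertyValue')
    prop.Name, prop.Value = 'FilterName', 'MS Word 97'
    doc.storeToURL('file:///tmp/out.doc', (prop,))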
How to install python2.5.5 in Ubuntu10.10 with all libs bundled
4,173,876
1
0
626
0
python,ubuntu-10.04
You are probably missing some libraries that are not bundled by default with Python on Ubuntu (I have no idea why they decided to split "core" Python this way). You can try running apt-get build-dep python python-dev and building again (you might need to add other packages as well). The rule of thumb is that if Python complains about not having the sqlite3 module, you need to install libsqlite3-dev, then rebuild.
0
1
0
0
2010-11-13T17:05:00.000
4
0.049958
false
4,173,674
1
0
0
2
I want to install Python 2.5.5 on Ubuntu 10.10. Since Ubuntu 10.10 now only supports Python >= 2.6, I downloaded the source from the Python website and tried to install it using ./configure && make && sudo make install. It seems that Python 2.5.5 was installed successfully, but when I try to use it, it sometimes says "no module named ...", even though the module should be bundled; I have used it on my Win7 machine. So I wonder how I can install all the libs.
How to install python2.5.5 in Ubuntu10.10 with all libs bundled
4,174,467
1
0
626
0
python,ubuntu-10.04
You can add 10.04 to your apt sources; then you can install in the usual way after an apt-get update.
0
1
0
0
2010-11-13T17:05:00.000
4
0.049958
false
4,173,674
1
0
0
2
I want to install Python 2.5.5 on Ubuntu 10.10. Since Ubuntu 10.10 now only supports Python >= 2.6, I downloaded the source from the Python website and tried to install it using ./configure && make && sudo make install. It seems that Python 2.5.5 was installed successfully, but when I try to use it, it sometimes says "no module named ...", even though the module should be bundled; I have used it on my Win7 machine. So I wonder how I can install all the libs.
How to redirect python runtime errors?
4,175,711
2
3
2,714
0
python,error-logging
Look at the traceback module. If you catch a RuntimeError, you can write it to the log (look at the logging module for that).
0
1
0
0
2010-11-14T01:34:00.000
2
0.197375
false
4,175,697
1
0
0
1
I am writing a daemon server in Python. Sometimes there are Python runtime errors, for example when some variable's type is not correct. Such an error will not cause the process to exit. Is it possible for me to redirect such runtime errors to a log file?
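A small sketch of the traceback-plus-logging combination the answer above points to, applied at the daemon's top-level loop; the handler and file name are placeholders:

    import logging

    logging.basicConfig(filename='daemon.log', level=logging.ERROR)

    def handle_request(data):
        return data['key']           # may raise TypeError, KeyError, ...

    for item in [{'key': 1}, None]:  # the second item triggers a TypeError
        try:
            handle_request(item)
        except Exception:
            # logging.exception records the full traceback at ERROR level
            logging.exception('unhandled error in request handler')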
What is the best way to control Twisted's reactor so that it is nonblocking?
4,176,590
11
6
1,443
0
python,twisted,nonblocking
Yes. The best practice is that this is a bad idea, and that you never really need to do it. It doesn't work with all reactors, and you certainly can't have two different libraries which want to do this. Why do you need to maintain your own main loop? Chances are, it's something like "I want to work with PyGame" or "I am writing a GUI program and I want to use GTK's mainloop" or "I'm using Twisted from within Blender and it has its own event-handling". If this is the case, you should ask that specific question, because each one of those has its own answer. If you absolutely need to do this (and, again: you don't) the way to do it is to call reactor.iterate() periodically. This will be slow, break signal handling, and have wonky semantics with respect to reactor.stop(). It will introduce lots of bugs into your program that wouldn't otherwise be there, and when you need help diagnosing them, if you ask someone on the Twisted dev team, the first thing they will tell you is "stop doing that, you don't need to do it".
0
1
0
0
2010-11-14T06:10:00.000
1
1.2
true
4,176,405
0
0
0
1
Instead of running reactor.run(), I'd like to call something else (I dunno, like reactor.runOnce() or something) occasionally while maintaining my own main loop. Is there a best practice for this with Twisted?
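For completeness, the discouraged pattern the answer describes looks roughly like this; treat it as a sketch of what not to rely on, with all the caveats above:

    from twisted.internet import reactor

    for _ in range(1000):      # your own main loop
        reactor.iterate(0.01)  # let Twisted briefly process pending events
        # ... do your own per-iteration work here ...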
Can I use Django's mail API in Google App Engine?
4,180,124
3
2
670
0
python,django,google-app-engine,django-nonrel
Yes, djangoappengine has a mail backend for GAE and it's enabled by default in your settings.py via "from djangoappengine.settings_base import *". You can take a look at the settings_base module to see all backends and default settings.
0
1
0
0
2010-11-14T14:27:00.000
1
1.2
true
4,177,907
0
0
1
1
I'm using Django-nonrel for Google App Engine and I was wondering if it's possible to use Django's built-in mail API instead of GAE's mail API for sending mail. If it is, how do I do it? Sorry if this seems like a noob question; I just started learning Django and GAE recently and I can't work this problem out by myself.
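With the djangoappengine backend active (the default, as the answer notes), Django's standard mail API works unchanged; a minimal sketch with placeholder addresses (the sender must be an authorized sender on App Engine):

    from django.core.mail import send_mail

    send_mail(
        subject='Hello from GAE',
        message='Sent through the App Engine mail backend.',
        from_email='admin@example.com',
        recipient_list=['user@example.com'],
    )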
is it possible to use PyMongo in Google App Engine?
4,179,091
1
4
1,355
1
python,google-app-engine,mongodb,pymongo
It's not possible, because you don't have access to network sockets in App Engine. Unless you can access the database via HTTP, it's impossible.
0
1
0
0
2010-11-14T17:42:00.000
3
0.066568
false
4,178,742
0
0
1
1
I'm trying to use a MongoDB database from a Google App Engine service; is that possible? How do I install the PyMongo driver on Google App Engine? Thanks
Suggestions for developing a threaded tcp based admin interface
4,179,129
1
1
178
0
python,multithreading,sockets,admin-interface
Python includes some multi-threading-capable servers in the standard library (SocketServer, BaseHTTPServer, xmlrpclib). You might want to look at Twisted as well; it is a powerful framework for networking.
0
1
0
0
2010-11-14T18:57:00.000
3
0.066568
false
4,179,077
0
0
0
2
I've built a very simple TCP server (in Python) that, when queried, returns various system-level statistics of the host OS running said script. As part of my experimentation, and my goal of gaining knowledge of Python and its available libraries, I would like to build an administration interface that a) binds to a separate TCP socket, b) accepts remote connections from the LAN and c) allows the connected user to issue various commands. The Varnish application is an example of a tool that offers similar administrative functionality. My knowledge of threads is limited, and I am looking for pointers on how to accomplish something similar to the following: the user connects to the admin port (telnet remote.host 12111) and issues "SET LOGGING DEBUG" or "STOP SERVICE". My confusion relates to how I would go about sharing data between threads. If the service is started on, for example, thread-1, how can I access data from that thread? Alternatively, a list of Python applications that offer such a feature would be a great help. I'd gladly poke through code in order to reuse their ideas.
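A minimal sketch of the SocketServer route from the answer above, using the Python 2 module names of the era; the command handling is illustrative:

    import SocketServer

    class AdminHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            for line in self.rfile:
                cmd = line.strip().upper()
                if cmd == 'STOP SERVICE':
                    self.wfile.write('stopping\n')
                    break
                self.wfile.write('ok: %s\n' % cmd)

    server = SocketServer.ThreadingTCPServer(('0.0.0.0', 12111), AdminHandler)
    server.serve_forever()  # each telnet connection gets its own thread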
Suggestions for developing a threaded tcp based admin interface
4,179,107
0
1
178
0
python,multithreading,sockets,admin-interface
Probably the easiest starting point would involve Python's xmlrpclib. Regarding threading, all threads can read all data in a Python program; only one thread at a time can modify any given object, so primitives such as lists and dicts will always be in a consistent state. Data structures (i.e. class objects) involving multiple primitives will require a little more care. The safest way to coordinate between threads is to pass messages/commands between threads via something like Queue.Queue; this isn't always the most efficient way but it's far less prone to problems.
0
1
0
0
2010-11-14T18:57:00.000
3
0
false
4,179,077
0
0
0
2
I've built a very simple TCP server (in Python) that, when queried, returns various system-level statistics of the host OS running said script. As part of my experimentation, and my goal of gaining knowledge of Python and its available libraries, I would like to build an administration interface that a) binds to a separate TCP socket, b) accepts remote connections from the LAN and c) allows the connected user to issue various commands. The Varnish application is an example of a tool that offers similar administrative functionality. My knowledge of threads is limited, and I am looking for pointers on how to accomplish something similar to the following: the user connects to the admin port (telnet remote.host 12111) and issues "SET LOGGING DEBUG" or "STOP SERVICE". My confusion relates to how I would go about sharing data between threads. If the service is started on, for example, thread-1, how can I access data from that thread? Alternatively, a list of Python applications that offer such a feature would be a great help. I'd gladly poke through code in order to reuse their ideas.
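The Queue.Queue coordination the answer above recommends, as a toy sketch; the admin thread only enqueues commands and the service thread applies them (command names mirror the question):

    import Queue, threading

    commands = Queue.Queue()

    def service_loop():
        level = 'INFO'
        while True:
            try:
                cmd = commands.get(timeout=0.1)
            except Queue.Empty:
                continue
            if cmd == 'STOP SERVICE':
                break
            if cmd.startswith('SET LOGGING '):
                level = cmd.split()[-1]
                print('log level now %s' % level)

    t = threading.Thread(target=service_loop)
    t.start()
    commands.put('SET LOGGING DEBUG')  # what the admin handler would do
    commands.put('STOP SERVICE')
    t.join()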
Execute remote python script via SSH
4,180,771
4
23
51,995
0
python,ssh
On Linux machines, you can run the script with 'at'. echo "python scriptname.py" | at now
0
1
0
1
2010-11-14T23:40:00.000
4
0.197375
false
4,180,390
0
0
0
1
I want to execute a Python script on several (15+) remote machines using SSH. After invoking the script/command, I need to disconnect the SSH session and keep the processes running in the background for as long as they are required. I have used Paramiko and PySSH in the past, so I have no problems using them again. The only thing I need to know is how to disconnect an SSH session in Python (since normally the local script would wait for each remote machine to finish processing before moving on).
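A Paramiko sketch of the fire-and-forget pattern: start the remote job detached with nohup, then close the session immediately; host names, credentials and paths are placeholders:

    import paramiko

    for host in ['node01', 'node02']:  # your 15+ machines
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username='deploy')
        client.exec_command(
            'nohup python /opt/jobs/process.py > /tmp/process.log 2>&1 &')
        client.close()  # the remote process keeps running after disconnect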