Dataset columns: Title (string, 15..150 chars) | A_Id (int64, 2.98k..72.4M) | Users Score (int64, -17..470) | Q_Score (int64, 0..5.69k) | ViewCount (int64, 18..4.06M) | Database and SQL (int64, 0..1) | Tags (string, 6..105 chars) | Answer (string, 11..6.38k chars) | GUI and Desktop Applications (int64, 0..1) | System Administration and DevOps (int64, 1..1) | Networking and APIs (int64, 0..1) | Other (int64, 0..1) | CreationDate (string, 23 chars) | AnswerCount (int64, 1..64) | Score (float64, -1..1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 1.85k..44.1M) | Python Basics and Environment (int64, 0..1) | Data Science and Machine Learning (int64, 0..1) | Web Development (int64, 0..1) | Available Count (int64, 1..17) | Question (string, 41..29k chars)
Asynchronous URLfetch when we don't care about the result? [Python]
| 5,412,695
| 7
| 6
| 3,248
| 0
|
python,google-app-engine,asynchronous,urlfetch
|
A task queue task is your best option here. The message you're seeing in the log indicates that the request is waiting for your URLFetch to complete before returning, so the asynchronous fetch isn't actually saving you anything. You say a task is 'overkill', but really, tasks are very lightweight, and definitely the best way to do this. The deferred library will even allow you to defer the fetch call directly, rather than having to write a function to call.
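For illustration, a minimal sketch of deferring the fetch call directly, assuming the Python App Engine runtime with the deferred library enabled (the URL is hypothetical):

```python
from google.appengine.api import urlfetch
from google.appengine.ext import deferred

# Defer the fetch itself to a task: the original request returns
# immediately, and the task queue handles retries or drops on failure.
deferred.defer(urlfetch.fetch, 'http://example.com/ping', deadline=10)
```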
| 0
| 1
| 0
| 0
|
2011-03-23T20:34:00.000
| 2
| 1.2
| true
| 5,411,291
| 0
| 0
| 1
| 1
|
In some code I'm writing for GAE I need to periodically perform a GET on a URL on another system, in essence 'pinging' it and I'm not terribly concerned if the request fails, times out or succeeds.
As I basically want to 'fire and forget' and not slow down my own code by waiting for the request, I'm using an asynchronous urlfetch, and not calling get_result().
In my log I get a warning:
Found 1 RPC request(s) without matching response (presumably due to timeouts or other errors)
Am I missing an obviously better way to do this? A Task Queue or Deferred Task seems (to me) like overkill in this instance.
Any input would be appreciated.
|
Optimizing join query performance in google app engine
| 5,415,555
| 0
| 1
| 372
| 1
|
python,google-app-engine
|
The standard solution to this problem is denormalization. Try storing a copy of price and profit in Entity1 and then you can answer your question with a single, simple query on Entity1.
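For illustration, a sketch of the denormalized model under the question's naming (the qualifies flag is a hypothetical addition; the datastore allows inequality filters on only one property per query, so precomputing a boolean at write time sidesteps that limit):

```python
from google.appengine.ext import db

class Entity1(db.Model):
    itmname = db.StringProperty()
    price = db.IntegerProperty()    # denormalized copy from Entity2
    profit = db.IntegerProperty()   # denormalized copy from Entity3
    # Precomputed at write time: price > 500 and profit > 10.
    qualifies = db.BooleanProperty(default=False)

# A single equality query answers the count question:
count = Entity1.all().filter('qualifies =', True).count()
```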
| 0
| 1
| 0
| 0
|
2011-03-24T05:58:00.000
| 2
| 0
| false
| 5,415,342
| 0
| 0
| 1
| 1
|
Scenario
Entity1 (id,itmname)
Entity2 (id,itmname,price)
Entity3 (id,itmname,profit)
profit and price are both IntegerProperty
I want to count all the items with price more than 500 and profit more than 10.
I know this is a join operation, and joins are not supported by Google App Engine. I tried my best to find a way other than executing the queries separately and counting, but I didn't find anything.
The reason for not executing the queries separately is query execution time. Each query returns more than 50,000 records, so it takes nearly 20 seconds just to fetch the records from the first query.
|
Feedback on different backends for GWT
| 5,421,810
| 1
| 4
| 1,559
| 1
|
java,python,gwt,architecture,web-frameworks
|
We had the same dilemma in the past.
I was involved in designing and building a system that had a GWT frontend and Java (Spring, Hibernate) backend. Some of our other (related) systems were built in Python and Ruby, so the expertise was there, and a question just like yours came up.
We decided on Java mainly so we could use a single language for the entire stack. Since the same people worked on both the client and server side, working in a single language reduced the need to context-switch when moving from client to server code (e.g. when debugging). In hindsight I feel that we were proven right and that that was a good decision.
We used RPC, which as you mentioned yourself definitely eased the implementation of c/s communication. I can't say that I liked it much though. REST + JSON feels more right, and at the very least creates better decoupling between server and client. I guess you'll have to decide based on whether you expect you might need to re-implement either client or server independently in the future. If that's unlikely, I'd go with the KISS principle and thus with RPC which keeps it simple in this specific case.
Regarding the disadvantages for Java that you mention, I tend to agree on the principle (I prefer RoR myself), but not on the details. The multitier and configuration architecture isn't really a problem IMO - Spring and Hibernate are simple enough nowadays. IMO the advantage of using Java across client and server in this project trumps the relative ease of using python, plus you'll be introducing complexities in the interface (i.e. by doing REST vs the native RPC).
I can't comment on Numpy/Scipy and any Java alternatives. I've no experience there.
| 0
| 1
| 0
| 0
|
2011-03-24T09:56:00.000
| 1
| 1.2
| true
| 5,417,372
| 0
| 0
| 1
| 1
|
I have to re-design an existing application which uses Pylons (Python) on the backend and GWT on the frontend.
In the course of this re-design I can also change the backend system.
I tried to read up on the advantages and disadvantages of various backend systems (Java, Python, etc) but I would be thankful for some feedback from the community.
Existing application:
The existing application was developed with GWT 1.5 (runs now on 2.1) and is a multi-host-page setup.
The Pylons MVC framework defines a set of controllers/host pages in which GWT widgets are embedded ("classical website").
Data is stored in a MySQL database and accessed by the backend with SQLAlchemy/Elixir. Server/client communication is done with RequestBuilder (JSON).
The application is not a typical business-like application with complex CRUD functionality (transactions, locking, etc) or a sophisticated permission system (though a simple ACL is required).
The application is used for visualization (charts, tables) of scientific data. The client interface is primarily used to display data in read-only mode. There might be some CRUD functionality but it's not the main aspect of the app.
Only a subset of the scientific data is going to be transfered to the client interface but this subset is generated out of large datasets.
The existing backend uses numpy/scipy to read data from db/files, create matrices and filter them.
The numbers of users accessing or using the app is relatively small, but the burden on the backend for each user/request is pretty high because it has to read and filter large datasets.
Requirements for the new system:
I want to move away from the multi-host-page setup to the MVP architecture (one single host page).
So the backend only serves one host page and acts as data source for AJAX calls.
Data will be still stored in a relational database (PostgreSQL instead of MySQL).
There will be a simple ACL (defines who can see what kind of data) and maybe some CRUD functionality (but it's not a priority).
The size of the datasets is going to increase, so the burden on the backend is probably going to be higher. There won't be many concurrent requests but the few ones have to be handled by the backend quickly. Hardware (RAM and CPU) for the backend server is not an issue.
Possible backend solutions:
Python (SQLAlchemy, Pylons or Django):
Advantages:
Rapid prototyping.
Re-Use of parts of the existing application
Numpy/Scipy for handling large datasets.
Disadvantages:
Dynamically typed language -> debugging can be painful
Server/Client communication (JSON parsing or using 3rd party libraries).
Python GIL -> scaling with concurrent requests ?
Server language (python) <> client language (java)
Java (Hibernate/JPA, Spring, etc)
Advantages:
One language for both client and server (Java)
"Easier" to debug.
Server/Client communication (RequestFactory, RPC) easier to implement.
Performance, multi-threading, etc
Object graph can be transfered (RequestFactory).
CRUD "easy" to implement
Multitier architecture (features)
Disadvantages:
Multitier architecture (complexity, requires a lot of configuration)
Handling of arrays/matrices (not sure if there is a counterpart to numpy/scipy in Java).
Not all features of the Java web application layers/frameworks used (overkill?).
I didn't mention any other backend systems (RoR, etc) because I think these two systems are the most viable ones for my use case.
To be honest I am not new to Java but relatively new to Java web application frameworks. I know my way around Pylons though in the new setup not much of the Pylons features (MVC, templates) will be used because it probably only serves as AJAX backend.
If I go with a Java backend I have to decide whether to do a RESTful service (and clearly separate client from server) or use RequestFactory (tighter coupling). There is no specific requirement for "RESTfulness". In case of a Python backend I would probably go with a RESTful backend (as I have to take care of client/server communication anyways).
Although mainly scientific data is going to be displayed (not part of any Domain Object Graph) also related metadata is going to be displayed on the client (this would favor RequestFactory).
In case of python I can re-use code which was used for loading and filtering of the scientific data.
In case of Java I would have to re-implement this part.
Both backend-systems have its advantages and disadvantages.
I would be thankful for any further feedback.
Maybe somebody has experience with both backend and/or with that use case.
Thanks in advance
|
reducing I/O on application and database
| 5,426,527
| 0
| 2
| 202
| 1
|
python,mysql,amazon-ec2,mysql-management
|
You didn't really specify whether it was writes or reads. My guess is that you can do it all in a MySQL instance in a ramdisk (tmpfs under Linux).
Operations such as ALTER TABLE and copying big data around end up creating a lot of IO requests because they move a lot of data. This is not the same as if you've just got a lot of random (or more predictable) queries.
If it's a batch operation, maybe you can do it entirely in a tmpfs instance.
It is possible to run more than one mysql instance on the machine, it's pretty easy to start up an instance on a tmpfs - just use mysql_install_db with datadir in a tmpfs, then run mysqld with appropriate params. Stick that in some shell scripts and you'll get it to start up. As it's in a ramfs, it won't need to use much memory for its buffers - just set them fairly small.
| 0
| 1
| 0
| 1
|
2011-03-24T20:44:00.000
| 2
| 0
| false
| 5,425,289
| 0
| 0
| 0
| 1
|
Is there a way to reduce the I/O associated with either MySQL or a Python script? I am thinking of using EC2 and the costs seem okay, except I can't really predict my I/O usage and I am worried it might blindside me with costs.
I basically develop a Python script to parse data and upload it into MySQL. Once it's in MySQL, I do some fairly heavy analytics on it (creating new columns, tables... basically a lot of math and finance-based analysis on a large dataset). So are there any design best practices to avoid heavy I/O? I think memcached stores everything in memory and accesses it from there; is there a way to get MySQL or other scripts to do the same?
I am running the scripts fine right now on another host with 2 GB of RAM, but the EC2 instance I was looking at had about 8 GB, so I was wondering if I could use the extra memory to save me some money.
|
How do I make my website on Google App Engine accessible to visitors in China?
| 5,436,314
| 1
| 2
| 534
| 0
|
python,google-app-engine
|
Assuming Google has, and routes to, Datacenters in Asia, the latency should be reasonable.
The reverse proxy to avoid the firewall should be in a country that does not censor and is as near as possible to the target area.
In those conditions, Google would choose a datacenter near your reverse proxy, and the latency is rtt(google<->proxy) + rtt(user<->proxy).
But you really have to try this out.
| 0
| 1
| 0
| 0
|
2011-03-25T17:54:00.000
| 2
| 0.099668
| false
| 5,436,249
| 0
| 0
| 1
| 1
|
China blocks appspot -- How do I get around this?
Assuming the censorship was not an issue, how bad are the latency issues?
|
Is there a difference between developing a web2py app on Windows or Linux?
| 5,444,972
| 2
| 2
| 1,990
| 0
|
python,web2py
|
No, there is a Windows installer.
| 0
| 1
| 0
| 0
|
2011-03-26T19:35:00.000
| 3
| 0.132549
| false
| 5,444,798
| 0
| 0
| 1
| 1
|
I recall that setting up other frameworks in a Windows environment was extremely painful :)
|
Record audio in Google App Engine using rtmplite?
| 5,450,822
| 5
| 3
| 981
| 0
|
python,google-app-engine,rtmp
|
Google App Engine is tricky for RTMP because it does not support sockets. You would have to use something like RTMPT which is tunneled over HTTP, however, this tunneling incurs latency so if you are looking to do anything realtime this could become an issue.
Currently rtmplite does not support RTMPT so this would not be possible at the moment. I am involved in a project, RTMPy (http://rtmpy.org), that is planning support for RTMPT and AppEngine. Unfortunately AppEngine support is probably a few months out.
| 0
| 1
| 0
| 0
|
2011-03-27T06:30:00.000
| 2
| 0.462117
| false
| 5,447,631
| 0
| 0
| 1
| 2
|
I am in the process of building a Google App Engine application which requires audio to be recorded and saved in our database. I have found no alternative to using some form of RTMP server for recording audio through Flash, so [rtmplite](http://code.google.com/p/rtmplite/) appeared on our horizon.
Since I have no experience with rtmplite, is it the right choice for our project? Or is there any other Python-based RTMP solution that allows audio recording? Any flash client you can recommend?
Thanks!
|
Record audio in Google App Engine using rtmplite?
| 6,433,502
| 0
| 3
| 981
| 0
|
python,google-app-engine,rtmp
|
Try App Engine backends. They currently don't whitelist a lot of the things required for such streaming, but they might soon. Once sockets are enabled, rtmplite or RTMPy could easily be ported to run there. Backends already support unlimited request length, which is required for streaming.
| 0
| 1
| 0
| 0
|
2011-03-27T06:30:00.000
| 2
| 0
| false
| 5,447,631
| 0
| 0
| 1
| 2
|
I am in the process of building a Google App Engine application which requires audio to be recorded and saved in our database. I have found no alternative to using some form of RTMP server for recording audio through Flash, so [rtmplite](http://code.google.com/p/rtmplite/) appeared on our horizon.
Since I have no experience with rtmplite, is it the right choice for our project? Or is there any other Python-based RTMP solution that allows audio recording? Any flash client you can recommend?
Thanks!
|
how to have google apps engine send mail- not send a copy of the mail to the sender
| 5,450,093
| 6
| 3
| 288
| 0
|
python,google-app-engine,sendmail
|
You can't. Sending email from someone without their knowledge isn't permitted by App Engine.
You can send email from any administrator address; you could add a "donotreply@yourapp.com" type address as an administrator and send email from that address.
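A minimal sketch, assuming the no-reply address has been added as an application administrator (all addresses here are hypothetical):

```python
from google.appengine.api import mail

mail.send_mail(
    sender='donotreply@yourapp.com',   # must be an app administrator
    to='customer@example.com',
    subject='Status update',
    body='Sent from the app address, so the user gets no copy.')
```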
| 0
| 1
| 0
| 0
|
2011-03-27T10:38:00.000
| 1
| 1.2
| true
| 5,448,698
| 0
| 0
| 1
| 1
|
I'm using GAE send mail, but I don't want the sender of the mail to get a copy of the mail.
As of now, when a user sends mail, he gets a mail saying that he sent a mail to someone, along with the body of the sent mail. How do I disable that?
|
vimrc mapping problem; execute python script mapping not working from vimrc
| 5,449,239
| 2
| 0
| 764
| 0
|
python,map,vim
|
I suspect that you have something before map: the <buffer> argument means that the mapping is defined for the current buffer only, so adding it to your vimrc without something like autocmd FileType python before it is odd. Maybe that is the reason why it does not work: you somehow switch to another buffer before testing this mapping.
Some additional things to concern:
Never use map where you can use noremap instead.
You probably don't want this mapping to be defined for visual (at least without <C-u> before w) and select modes, and definitely don't want it to be defined for operator-pending modes, so use nnoremap.
<S-e> and E are equivalent.
You can combine w and !... in one command using pipe symbol: :w | !/usr/bin/env python %<CR>.
You forgot slash before usr.
| 0
| 1
| 0
| 0
|
2011-03-27T12:28:00.000
| 2
| 1.2
| true
| 5,449,207
| 0
| 0
| 0
| 2
|
grr. I'm struggling with Vim's learning curve.
And trying to get a simple mapping in my vimrc to execute the current buffer's python script.
The mapping is well-formed and works after I enter it into the command line in Vim. This is the mapping:
map <buffer> <S-e> :w<CR>:!usr/bin/env python % <CR>
But it won't load from my vimrc :( I'm using the basic .vimrc_sample with only this mapping appended. What's weird is that I could get a different mapping working from the vimrc:
map <S-t> itest<Esc>
This one works, but not the script executor? What gives?
Ubuntu 10.10 Python 2.6 Vim 7.2
Help is very appreciated!
|
vimrc mapping problem; execute python script mapping not working from vimrc
| 5,449,247
| 0
| 0
| 764
| 0
|
python,map,vim
|
Jesus, Murphy's Law.
After searching for an answer for an hour, one minute after posting this question I solved it. The problem was <buffer> in the mapping.
Removing it made the mapping work, thus:
nnoremap E :w<CR>:!python % <CR>
| 0
| 1
| 0
| 0
|
2011-03-27T12:28:00.000
| 2
| 0
| false
| 5,449,207
| 0
| 0
| 0
| 2
|
grr. I'm struggling with Vim's learning curve.
And trying to get a simple mapping in my vimrc to execute the current buffer's python script.
The mapping is well-formed and works after I enter it into the command line in Vim. This is the mapping:
map <buffer> <S-e> :w<CR>:!usr/bin/env python % <CR>
But it won't load from my vimrc :( I'm using the basic .vimrc_sample with only this mapping appended. What's weird is that I could get a different mapping working from the vimrc:
map <S-t> itest<Esc>
This one works, but not the script executor? What gives?
Ubuntu 10.10 Python 2.6 Vim 7.2
Help is very appreciated!
|
Update Two Related Files On Disk In a "Secure" Way?
| 5,453,629
| 2
| 3
| 90
| 0
|
python,linux,binary,atomic
|
Do what the RDBMS engines do.
Write an "update sequence number" in each file.
You cannot ever guarantee that both files are written.
However, you can compare the update sequence numbers to see if the files have the same sequence number.
If the sequence numbers disagree, it's logically equivalent to no file having been written. Delete the files and use the backup copies.
If the sequence numbers agree, it's logically equivalent to both having been written.
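A minimal sketch of the sequence-number check in Python (the fixed 8-byte header layout is an assumption):

```python
import struct

SEQ = struct.Struct('<Q')  # 8-byte update sequence number at offset 0

def write_file(path, seq, payload):
    with open(path, 'wb') as f:
        f.write(SEQ.pack(seq))
        f.write(payload)

def read_seq(path):
    with open(path, 'rb') as f:
        return SEQ.unpack(f.read(SEQ.size))[0]

# After an update pass writes both files with the same number:
if read_seq('first.bin') != read_seq('second.bin'):
    pass  # logically "neither written": restore both from backups
```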
| 0
| 1
| 0
| 0
|
2011-03-27T23:05:00.000
| 3
| 0.132549
| false
| 5,453,101
| 0
| 0
| 0
| 1
|
I have two binary files that are related to one another (meaning, when one file's records are updated, the other file's matching records should be updated as well). Both files are binary files stored on disk.
The update will look something like this:
UpdateFirstFile() -- first file is updated.....
UpdateSecondFile() -- second file is updated...
What methods should I use to make sure that either BOTH files are updated or NEITHER file is updated?
Both files are flat files (of size 20[MB] each). I know a database would have solved this problem, yet I am not using one due to overhead reasons (every table would require much more than 20[MB] to be stored, and I am short on space and have 1000s of such files...).
Any ideas?
|
Python Twisted does not work on Eclipse
| 5,466,280
| 2
| 2
| 991
| 0
|
python,eclipse,twisted,pydev
|
Make sure you:
Have PyDev installed
Have twisted / zope.interface installed and in your PYTHONPATH.
Have configured your eclipse project as a python/pydev project.
Have configured the interpreter in the Eclipse environment (Pydev settings).
| 0
| 1
| 0
| 0
|
2011-03-28T19:04:00.000
| 1
| 1.2
| true
| 5,463,782
| 1
| 0
| 1
| 1
|
I installed Twisted for Python and I am trying to build a simple server on Eclipse and I am getting the following error:
ImportError: No module named zope.interface
I'm not sure how to correct this. Doesn't Twisted install all of the dependencies first?
|
Environment for quickly developing routing protocol prototype
| 5,473,481
| 1
| 2
| 1,573
| 0
|
python,routing,network-programming,erlang,protocols
|
Erlang is well suited both for a purely logical prototype with no concrete implementation and for building real-world-capable implementations of the protocols.
You don't need any other framework; Erlang and the OTP that comes with it are enough.
Even if you have to work down at the packet level, Erlang helps you with its binary pattern matching, which is great for working with protocol packets.
And if you need high performance, you can move the most time-critical parts into what are called "ports" in Erlang, implementing them in C or another low-level language.
| 0
| 1
| 0
| 1
|
2011-03-28T21:00:00.000
| 3
| 0.066568
| false
| 5,465,010
| 0
| 0
| 0
| 1
|
I am doing research on routing protocols. Currently I perform simulations written in Python of a new protocol. The next step would be to build a real prototype which can really run on top of a Linux-based operating system (as a routing daemon such as ospfd).
What would be a well-suited programming environment/language to quickly build a prototype of a routing protocol? Anyone having experience with building distributed protocol prototypes?
I would like to focus as much as possible on high-level protocol logic instead of on low-level machine-related instructions. I am willing to learn new languages (such as Erlang or Haskell), in case they are better adapted for such a task. Alternatively, I have read about the Twisted framework available in Python (which would probably allow me to re-use some code), but it is unclear to me whether it would only help if I write client/server-based protocols.
Does anyone know about an elegant tutorial or example implementation of a (distributed) protocol implementation?
|
Python IDLE equivalent of CTRL-R in R
| 5,475,771
| 6
| 8
| 6,260
| 0
|
python,r
|
No
In the shortcut key list in IDLE, in Options > Configure IDLE > Keys, in the Action - Key(s) list, one does not find any shortcut key for executing selected code.
| 0
| 1
| 0
| 0
|
2011-03-29T16:14:00.000
| 3
| 1
| false
| 5,475,649
| 0
| 0
| 0
| 1
|
If you have a script open in the Windows version of R, you can run a line (or a section of highlighted code) in the shell by hitting CTRL-R (I believe it's Command-Enter in the Apple version). Is there similar functionality for IDLE? Many thanks
|
Need CGI (or another solution compatible with IIS 7) to handle *massive* uploads
| 5,479,573
| -1
| 1
| 300
| 0
|
python,perl,iis,iis-7,cgi
|
The Windows TCP stack is limited to 4GB file uploads. Any more than that is not possible.
| 0
| 1
| 0
| 0
|
2011-03-29T21:52:00.000
| 3
| -0.066568
| false
| 5,479,387
| 0
| 0
| 1
| 2
|
We need to handle massive file uploads without spending resources on an IIS 7 server. To emphasize how light-weight this needs to be, let's say that we need to handle file uploads of sizes that are completely insane, like 100GB uploads, or something that can continue running for an extremely long time without consuming additional resources. Basically we need something that gives us control over the reception of the file from the moment it starts to the moment it ends.
A bit of background:
We're using ColdFusion as the server-side processor, but it has failed us when handling uploads beyond about 1GB and we've exhausted our configuration options. There's a long story behind that, but essentially, if a .cfm page (ColdFusion) is the destination of the file upload and it goes over about 1GB, it gives a 503 error... even if the target file doesn't exist. So clearly too much is going on merely by telling the server that we intend to process the file with a .cfm page.
We suspect that this is due to Java limitations because the server (or really, the workstation in this case) does not show any signs of load on CPU or memory. Since we have limited memory and this website is intended for a lot of concurrent uploads, we can't trust simply raising the virtual machine memory usage, especially because that simply doesn't work currently, even for a single connection... let alone the hundreds of concurrent connections we expect when we go live.
So we're down to writing a specialized solution using CGI that will handle file uploads only. Basically, we need control on the server-side that we don't get with ColdFusion or ASP.NET because those technologies do so many things on their own, behind the scenes, without giving us the control we need. They always end up spending up too many resources one way or the other for an arguably obvious reason; what we're trying to do is completely insane and not the intended function of those technologies. That's why we want a specialized uploader through CGI that bypasses all that ColdFusion/ASP.NET magic that keeps getting in the way, hoping it gives us the control we need.
But before we spent countless hours on this, I figured I'd ask around and see if anyone knows of a proper solution to this problem that might be viable in our case.
The only real restriction here is that it has to be CGI, and it has to run on IIS 7, therefore a Windows "Server" environment. We're fine with it being written in Python, Perl, you name it... provided it can run as a CGI, but it has to run as a CGI... unless of course someone has better ideas on how to do this.
So the magic question is: are there CGI solutions out there that already do this, or are we stuck with writing it on our own, hoping that the reason no one else has done it already is something other than it being impossible?
Thanks in advance.
|
Need CGI (or another solution compatible with IIS 7) to handle *massive* uploads
| 5,483,933
| 3
| 1
| 300
| 0
|
python,perl,iis,iis-7,cgi
|
You want WebDAV, not CGI. It provides all the nice bits that make file transfers not suck, like resuming and pausing.
| 0
| 1
| 0
| 0
|
2011-03-29T21:52:00.000
| 3
| 0.197375
| false
| 5,479,387
| 0
| 0
| 1
| 2
|
We need to handle massive file uploads without spending resources on an IIS 7 server. To emphasize how light-weight this needs to be, let's say that we need to handle file uploads of sizes that are completely insane, like 100GB uploads, or something that can continue running for an extremely long time without consuming additional resources. Basically we need something that gives us control over the reception of the file from the moment it starts to the moment it ends.
A bit of background:
We're using ColdFusion as the server-side processor, but it has failed us when handling uploads beyond about 1GB and we've exhausted our configuration options. There's a long story behind that, but essentially, if a .cfm page (ColdFusion) is the destination of the file upload and it goes over about 1GB, it gives a 503 error... even if the target file doesn't exist. So clearly too much is going on merely by telling the server that we intend to process the file with a .cfm page.
We suspect that this is due to Java limitations because the server (or really, the workstation in this case) does not show any signs of load on CPU or memory. Since we have limited memory and this website is intended for a lot of concurrent uploads, we can't trust simply raising the virtual machine memory usage, especially because that simply doesn't work currently, even for a single connection... let alone the hundreds of concurrent connections we expect when we go live.
So we're down to writing a specialized solution using CGI that will handle file uploads only. Basically, we need control on the server-side that we don't get with ColdFusion or ASP.NET because those technologies do so many things on their own, behind the scenes, without giving us the control we need. They always end up spending up too many resources one way or the other for an arguably obvious reason; what we're trying to do is completely insane and not the intended function of those technologies. That's why we want a specialized uploader through CGI that bypasses all that ColdFusion/ASP.NET magic that keeps getting in the way, hoping it gives us the control we need.
But before we spent countless hours on this, I figured I'd ask around and see if anyone knows of a proper solution to this problem that might be viable in our case.
The only real restriction here is that it has to be CGI, and it has to run on IIS 7, therefore a Windows "Server" environment. We're fine with it being written in Python, Perl, you name it... provided it can run as a CGI, but it has to run as a CGI... unless of course someone has better ideas on how to do this.
So the magic question is: are there CGI solutions out there that already do this, or are we stuck with writing it on our own, hoping that the reason no one else has done it already is something other than it being impossible?
Thanks in advance.
|
Does local GAE read and write to a local datastore file on the hard drive while it's running?
| 5,486,121
| 3
| 0
| 196
| 0
|
python,google-app-engine,local-storage
|
How the datastore reads and writes its underlying files varies - the standard datastore is read on startup, and written progressively, journal-style, as the app modifies data. The SQLite backend uses a SQLite database.
You shouldn't have to care, though - neither backend is designed for robustness in the face of failure, as they're development backends. You shouldn't be modifying or deleting the underlying files, either.
| 0
| 1
| 0
| 0
|
2011-03-30T10:09:00.000
| 2
| 1.2
| true
| 5,484,900
| 0
| 0
| 1
| 1
|
I have just noticed that when I have a running instance of my GAE application, nothing happens to the datastore file when I add or remove entries using Python code or in the admin console. I can even remove the file and still have all the data safe and sound in the admin area and accessible from code. But when I restart my application, all the data obviously goes away and I have a blank datastore. So, the question: does GAE read all data from the file only when it starts and then deal with it in memory, saving the data after I stop the application? Does it make any requests to the datastore file while the application is running? If it doesn't save anything to the file while it's running, then, possibly, data may be lost if the application unexpectedly stops? Please make this clear for me if you know how it works in this respect.
|
python file was renamed, how to get 'rename time'
| 5,631,968
| 0
| 0
| 239
| 0
|
python,file-management
|
It is impossible; the OS does not store such info about files. (Answered by 'atzz' in a comment.)
| 0
| 1
| 0
| 0
|
2011-03-31T10:54:00.000
| 2
| 1.2
| true
| 5,498,704
| 1
| 0
| 0
| 1
|
Let's say that some files were renamed by a Python script; is it possible to get this 'rename time' using Python? (It can be seen in Far Manager.) This is on Windows.
It seems it is not possible through stat, etc.
Any ideas?
|
Event correlation and filtering - How to, where-to start?
| 5,516,971
| 1
| 9
| 1,981
| 0
|
python,erlang,machine-learning,classification,correlation
|
Your problem is a tactical problem as opposed to a procedural problem. Both types have their own set of tools, and you will be in a world of pain if you try to solve a tactical problem with procedural tools.
Just to clarify terms, when I say procedural, I am talking about use cases where you can say do X, then Y, then Z. With tactical problems, X, Y, and Z can occur at any time, and you must be able to handle each event.
You are on the right track with CEP. You might also look into using a rules engine. You didn't mention what your dev environment is, but if it's Java, you might take a look at Jess. If you really want a nice and robust rules engine, take a look at Tibco Business Events. It is very powerful and fault-tolerant, but definitely not free.
| 0
| 1
| 0
| 0
|
2011-03-31T17:27:00.000
| 2
| 0.099668
| false
| 5,503,820
| 0
| 0
| 0
| 1
|
Got an asynchronous stream of events, where each event has information like -
Agency (one of many Agencies possible to be served by my solution)
Agent (one of many Agents in an Agency)
Served-Entity (a person/organization served by 1 or more agencies)
Date+Time
Class-Data (tags from a fixed but large set of tags)
What I need to do is to --
Correlate an event based on Served-Entity, Date+Time and Class-Data, and create a consolidated new Event. Example:
Event #0021: { Agency='XYZ', Agent='ABC', Served-Entity='MMN', Date+Time='12-03-2011/11:03:37', Class-Data='missed-delivery,no-repeat,untracable,orphan' }
Event #0193: { Agency='KLM', Agent='DAY', Served-Entity='MMN', Date+Time='12-03-2011/12:32:21', Class-Data='missed-delivery,orphan,lost' }
Event #1217: { Agency='KLM', Agent='CARE', Served-Entity='MMN', Date+Time='12-03-2011/18:50:45', Class-Data='escalated' }
Here I find 3 events which are spaced out in time (more than 7hr separation), which are for the same Served-Entity (MMN), occur within a certain time window (say 24-hours), have matching or related Class-Data.
Finally create a consolidated (new) event which could represent an inference drawn.
Be able to create reports on a per-Agency, per-Agent, per-Served-Entity basis, based on things like specific Class-Data tags (e.g. missed-delivery) over a certain period of time. This could be done using the original/input events, or the synthesized (inference) events.
While this is not a requirement today, but quite likely to appear in future, that the "tags" that appear in Class-Data could grow, without any human intervention. So not sure if this should then be treated as unstructured data.
Also not an immediate requirement, but in future there may be a need to identify trends / patterns of event occurrences (i.e. Event1 led to Event2 led to Event3).
The event arrival rate could be quite high... possibly thousands of events per minute. Maybe more. And, I need to archive the original/synthesized events for a period of time (a month or so).
My solution needs to be based on FOSS components (preferably). Some research done so far, points in the direction of CEP (Complex Event Processing), Bayesian-Networks/Classification, Predictive-Analytics.
Looking for some suggestions regarding approach to take. I'd prefer to take the path which meets most of my goals, with minimum difficulty/time, or to put another way, "learning AI" or "formal statistical methods" isn't my short-term goal :-)
|
Python append multiple files in given order to one big file
| 57,903,899
| 0
| 21
| 67,484
| 0
|
python,file,append
|
There's also the fileinput module in the standard library, which is perfect for this sort of situation.
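A minimal sketch with fileinput (file names are hypothetical):

```python
import fileinput

parts = ['part1.tmp', 'part2.tmp', 'part3.tmp']  # in the desired order
with open('combined.txt', 'w') as out:
    # fileinput chains the inputs transparently, in the order given
    for line in fileinput.input(parts):
        out.write(line)
```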
| 0
| 1
| 0
| 0
|
2011-04-01T06:26:00.000
| 12
| 0
| false
| 5,509,872
| 1
| 0
| 0
| 1
|
I have up to 8 separate Python processes creating temp files in a shared folder. Then I'd like the controlling process to append all the temp files, in a certain order, into one big file. What's the quickest way of doing this at an OS-agnostic shell level?
|
I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?
| 5,525,008
| 5
| 21
| 10,042
| 0
|
c++,python,linux,multithreading,event-driven
|
I've solved this exact problem using what you mention: pipe() and libevent (which wraps epoll). The worker thread writes a byte to its pipe FD when its output queue goes from empty to non-empty. That wakes up the main IO thread, which can then grab the worker thread's output. This works great and is actually very simple to code.
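A minimal sketch of the same self-pipe idea in Python 3, using select directly instead of libevent (the worker's payload is hypothetical):

```python
import os
import queue
import select
import threading

wake_r, wake_w = os.pipe()
results = queue.Queue()

def worker():
    results.put('search result')   # hypothetical work product
    os.write(wake_w, b'\x00')      # one byte wakes the IO loop

threading.Thread(target=worker).start()

# In the IO loop, the pipe FD sits alongside the socket FDs:
readable, _, _ = select.select([wake_r], [], [])
if wake_r in readable:
    os.read(wake_r, 1)             # drain the wakeup byte
    print(results.get())           # safe: the queue is non-empty now
```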
| 0
| 1
| 0
| 0
|
2011-04-02T17:41:00.000
| 8
| 0.124353
| false
| 5,524,780
| 0
| 0
| 0
| 3
|
I would like to spawn off threads to perform certain tasks, and use a thread-safe queue to communicate with them. I would also like to be doing IO to a variety of file descriptors while I'm waiting.
What's the recommended way to accomplish this? Do I have to create an inter-thread pipe and write to it when the queue goes from no elements to some elements? Isn't there a better way?
And if I have to create the inter-thread pipe, why don't more libraries that implement shared queues allow you to create the shared queue and inter-thread pipe as a single entity?
Does the fact I want to do this at all imply a fundamental design flaw?
I'm asking this about both C++ and Python. And I'm mildly interested in a cross-platform solution, but primarily interested in Linux.
For a more concrete example...
I have some code which will be searching for stuff in a filesystem tree. I have several communications channels open to the outside world through sockets. Requests that may (or may not) result in a need to search for stuff in the filesystem tree will be arriving.
I'm going to isolate the code that searches for stuff in the filesystem tree in one or more threads. I would like to take requests that result in a need to search the tree and put them in a thread-safe queue of things to be done by the searcher threads. The results will be put into a queue of completed searches.
I would like to be able to service all the non-search requests quickly while the searches are going on. I would like to be able to act on the search results in a timely fashion.
Servicing the incoming requests would generally imply some kind of event-driven architecture that uses epoll. The queue of disk-search requests and the return queue of results would imply a thread-safe queue that uses mutexes or semaphores to implement the thread safety.
The standard way to wait on an empty queue is to use a condition variable. But that won't work if I need to service other requests while I'm waiting. Either I end up polling the results queue all the time (and delaying the results by half the poll interval, on average), or I end up blocking and not servicing requests.
|
I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?
| 36,855,042
| 3
| 21
| 10,042
| 0
|
c++,python,linux,multithreading,event-driven
|
It seems nobody has mentioned this option yet:
Don't run select/poll/etc. in your "main thread". Start a dedicated secondary thread which does the I/O and pushes notifications into your thread-safe queue (the same queue which your other threads use to communicate with the main thread) when I/O operations complete.
Then your main thread just needs to wait on the notification queue.
| 0
| 1
| 0
| 0
|
2011-04-02T17:41:00.000
| 8
| 0.07486
| false
| 5,524,780
| 0
| 0
| 0
| 3
|
I would like to spawn off threads to perform certain tasks, and use a thread-safe queue to communicate with them. I would also like to be doing IO to a variety of file descriptors while I'm waiting.
What's the recommended way to accomplish this? Do I have to create an inter-thread pipe and write to it when the queue goes from no elements to some elements? Isn't there a better way?
And if I have to create the inter-thread pipe, why don't more libraries that implement shared queues allow you to create the shared queue and inter-thread pipe as a single entity?
Does the fact I want to do this at all imply a fundamental design flaw?
I'm asking this about both C++ and Python. And I'm mildly interested in a cross-platform solution, but primarily interested in Linux.
For a more concrete example...
I have some code which will be searching for stuff in a filesystem tree. I have several communications channels open to the outside world through sockets. Requests that may (or may not) result in a need to search for stuff in the filesystem tree will be arriving.
I'm going to isolate the code that searches for stuff in the filesystem tree in one or more threads. I would like to take requests that result in a need to search the tree and put them in a thread-safe queue of things to be done by the searcher threads. The results will be put into a queue of completed searches.
I would like to be able to service all the non-search requests quickly while the searches are going on. I would like to be able to act on the search results in a timely fashion.
Servicing the incoming requests would generally imply some kind of event-driven architecture that uses epoll. The queue of disk-search requests and the return queue of results would imply a thread-safe queue that uses mutexes or semaphores to implement the thread safety.
The standard way to wait on an empty queue is to use a condition variable. But that won't work if I need to service other requests while I'm waiting. Either I end up polling the results queue all the time (and delaying the results by half the poll interval, on average), or I end up blocking and not servicing requests.
|
I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?
| 5,524,930
| 1
| 21
| 10,042
| 0
|
c++,python,linux,multithreading,event-driven
|
C++11 has std::mutex and std::condition_variable. The two can be used to have one thread signal another when a certain condition is met. It sounds to me like you will need to build your solution out of these primitives. If your environment does not yet support these C++11 library features, you can find very similar ones in Boost. Sorry, I can't say much about Python.
| 0
| 1
| 0
| 0
|
2011-04-02T17:41:00.000
| 8
| 0.024995
| false
| 5,524,780
| 0
| 0
| 0
| 3
|
I would like to spawn off threads to perform certain tasks, and use a thread-safe queue to communicate with them. I would also like to be doing IO to a variety of file descriptors while I'm waiting.
What's the recommended way to accomplish this? Do I have to create an inter-thread pipe and write to it when the queue goes from no elements to some elements? Isn't there a better way?
And if I have to create the inter-thread pipe, why don't more libraries that implement shared queues allow you to create the shared queue and inter-thread pipe as a single entity?
Does the fact I want to do this at all imply a fundamental design flaw?
I'm asking this about both C++ and Python. And I'm mildly interested in a cross-platform solution, but primarily interested in Linux.
For a more concrete example...
I have some code which will be searching for stuff in a filesystem tree. I have several communications channels open to the outside world through sockets. Requests that may (or may not) result in a need to search for stuff in the filesystem tree will be arriving.
I'm going to isolate the code that searches for stuff in the filesystem tree in one or more threads. I would like to take requests that result in a need to search the tree and put them in a thread-safe queue of things to be done by the searcher threads. The results will be put into a queue of completed searches.
I would like to be able to service all the non-search requests quickly while the searches are going on. I would like to be able to act on the search results in a timely fashion.
Servicing the incoming requests would generally imply some kind of event-driven architecture that uses epoll. The queue of disk-search requests and the return queue of results would imply a thread-safe queue that uses mutexes or semaphores to implement the thread safety.
The standard way to wait on an empty queue is to use a condition variable. But that won't work if I need to service other requests while I'm waiting. Either I end up polling the results queue all the time (and delaying the results by half the poll interval, on average), or I end up blocking and not servicing requests.
|
file I/O with google app engine
| 5,525,121
| 1
| 1
| 278
| 0
|
python,html,xml,google-app-engine,datastore
|
Use a StringIO when you need a file-like object for use with libraries that act on files. (Although I believe most XML parsers will happily accept a string instead of requiring a file-like object.)
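A minimal sketch of the handler side on the Python 2 runtime (the form field name is hypothetical):

```python
import StringIO
import xml.etree.ElementTree as ET

from google.appengine.ext import webapp

class ImportHandler(webapp.RequestHandler):
    def post(self):
        raw = self.request.get('xmlfile')        # <input name="xmlfile">
        tree = ET.parse(StringIO.StringIO(raw))  # file-like, nothing stored
        # ... walk the tree, create datastore entities, discard the rest ...
```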
| 0
| 1
| 0
| 0
|
2011-04-02T18:17:00.000
| 1
| 1.2
| true
| 5,524,991
| 0
| 0
| 1
| 1
|
I want to provide a field in my HTML file so that people can upload their XML files to be imported into the datastore. How can I read and process this file inside App Engine once it is uploaded? (I don't want to store the file with the blobstore; I just want to read it, process it and throw it away.) Thanks
|
Is there a way to tell where your python file is?
| 5,525,559
| 1
| 1
| 226
| 0
|
python,windows,python-3.x,python-2.7
|
You can read the variable __file__ to find out the path to the current Python file being interpreted.
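For example:

```python
import os

print(__file__)                                    # path as invoked
print(os.path.abspath(__file__))                   # absolute path
print(os.path.dirname(os.path.abspath(__file__)))  # containing directory
```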
| 0
| 1
| 0
| 0
|
2011-04-02T20:04:00.000
| 5
| 0.039979
| false
| 5,525,544
| 1
| 0
| 0
| 1
|
(Python2.7-3.2 on WindowsXP)
I used to use sys.argv[0], but now I'm executing it from a batch script (to determine whether the user has Python).
Thanks ^^
PS: Suggestions are welcome if you know a better way to find out whether the user has Python.
|
python sys.argv limitations?
| 5,533,719
| 3
| 13
| 11,527
| 0
|
python
|
Python itself doesn't impose any limitations on the length or content of sys.argv. However, your operating system and/or command shell definitely will. This question cannot be completely answered without detailed consideration of your operating environment.
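On Linux you can ask the OS directly; a small sketch (the value varies by kernel and stack limits):

```python
import os

# Combined byte limit for argv + environment on this system:
print(os.sysconf('SC_ARG_MAX'))  # e.g. 2097152 on many Linux systems
```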
| 0
| 1
| 0
| 1
|
2011-04-04T01:18:00.000
| 2
| 0.291313
| false
| 5,533,704
| 1
| 0
| 0
| 1
|
Suppose I'd like to run a python script like this: python my_script.py MY_INPUT.
In this case, MY_INPUT will be transmitted to sys.argv[1].
Is there a limit to the number of characters MY_INPUT can contain?
Is there a limit to the type of characters MY_INPUT can contain?
Any other limitations with regards to MY_INPUT?
UPDATE: I am using Ubuntu Linux 10.04
|
python: Can I run a python script without actually installing python?
| 29,426,136
| 0
| 32
| 72,897
| 0
|
python,executable,py2exe,cx-freeze
|
This is an old question, but one alternative is creating a virtual environment for Python, which can be as simple as python -m venv myenvname (Python 3.4). You can "install" packages into it the normal way (e.g. pip) without needing anything else. You'll end up with a folder you can move/delete at your leisure.
| 0
| 1
| 0
| 0
|
2011-04-04T14:13:00.000
| 6
| 0
| false
| 5,539,736
| 1
| 0
| 0
| 1
|
I have some .py files I wrote that I want to run on a different machine. The target machine does not have python installed, and I can't 'install' it by policy. What I can do is copy files over, run my stuff, and then remove them.
What I tried was to just take my development Python folder over to the target machine, cd to the Python folder and run python.exe /path/to/.py/file. It gave me an error saying that python.dll was not registered. If I registered the DLL, that would probably move me too far across the 'violating policy' line.
Is there any way I can accomplish running Python files on a machine that does not have Python actually installed? (I'm trying to get py2exe to work now, but it is painful.)
|
Fastest Way to Write Data To Individual Machines?
| 5,541,728
| 3
| 0
| 179
| 0
|
python,linux
|
You have two (classes of) choices:
You could build some distribution mechanism yourself.
You could use an existing tool to handle the distribution and storage.
In the simplest case, you write a program on each machine in your network that simply listens, processes and writes. You distribute from X to each machine in your pool round-robin. But, you might want to address higher-level concerns like handling node failures or dealing with requests that take longer to process than others, adding new nodes to the system, etc.
As you want more functionality, you'll probably want to find some existing tool to help you. It sounds like you might want to investigate some combinations of AMQP (for reliable messaging), Hadoop (for distributed data processing) or more complete NoSQL solutions like Cassandra or Riak. By leveraging these tools, your system will be significantly more robust than what you could probably build out yourself.
| 0
| 1
| 0
| 0
|
2011-04-04T16:43:00.000
| 3
| 1.2
| true
| 5,541,615
| 0
| 0
| 0
| 1
|
I have a network of 100 machines, all running Ubuntu Linux.
On a continuous (streaming) basis, machine X is 'fed' with some real-time data. I need to write a python script that would get the data as input, load it in-memory, process it, and then save it to disk.
It's a lot of data; hence, I would ideally want to split the data in memory (using some logic) and just send pieces of it to each individual computer in the fastest possible way. Each individual computer will accept its piece of data, handle it and write it to its local disk.
Suppose I have a container of data in Python (be it a list, a dictionary etc), already processed and split to pieces. What is the fastest way to send each 'piece' of data to each individual machine?
|
Retrieve list of tasks in a queue in Celery
| 50,170,855
| 2
| 188
| 188,607
| 0
|
python,celery
|
As far as I know, Celery does not give an API for examining tasks that are waiting in the queue. This is broker-specific. If you use Redis as a broker, for example, then examining the tasks that are waiting in the celery (default) queue is as simple as:
connect to the broker
list items in the celery list (the LRANGE command, for example)
Keep in mind that these are tasks WAITING to be picked up by available workers. Your cluster may have some tasks running - those will not be in this list, as they have already been picked up.
The process of retrieving tasks in a particular queue is broker-specific.
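A minimal sketch with redis-py, assuming the default broker settings (localhost, db 0, queue named 'celery'):

```python
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
# Each waiting task is one serialized message in the 'celery' list:
for raw in r.lrange('celery', 0, -1):
    print(raw)
```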
| 0
| 1
| 0
| 0
|
2011-04-04T21:35:00.000
| 14
| 0.028564
| false
| 5,544,629
| 0
| 0
| 1
| 1
|
How can I retrieve a list of tasks in a queue that are yet to be processed?
|
How to install PIL in system library using homebrew?
| 36,866,806
| 1
| 12
| 19,221
| 0
|
python-imaging-library,homebrew
|
As @BarnabasSzabolcs mentioned, newer versions are named Pillow.
An alternative to brew install Homebrew/python/pillow is pip install pillow. You may need to add sudo, depending on your Python environment permissions.
P.S.
This answer would fit better as a comment; 14 reputation points to go...
| 0
| 1
| 0
| 0
|
2011-04-05T03:26:00.000
| 4
| 0.049958
| false
| 5,546,860
| 1
| 0
| 0
| 1
|
In a new SnowLeopard install, I'd like to use homebrew to install PIL. However the recipe installs PIL under cellar instead of in /Library/Python/2.6/site-packages. Is there a way to change the install directory?
|
getting etag from 'google reader bundle' feed using universal feed parser in python
| 5,554,213
| 0
| 0
| 203
| 0
|
python,atom-feed,google-reader
|
The Google Reader API does not support ETags or If-Modified-Since. However, it does support an ot=<timestamp in seconds since the epoch> parameter which you can use to restrict fetched data to items since you last attempted a fetch.
| 0
| 1
| 1
| 0
|
2011-04-05T12:46:00.000
| 1
| 1.2
| true
| 5,552,049
| 0
| 0
| 0
| 1
|
I am using the universal feed parser library in python to get an atom feed. This atom feed has been generated using google reader after bundling several subscriptions.
I am able to receive the latest feeds; however, feedparser.parse(url) returns a FeedParserDict which does not have the etag or modified values. I am unable to check for only the latest feeds because of this.
Does google reader send an etag value? if yes why isn't the feedparser returning it?
~Vijay
|
Differences between node.js and Tornado
| 5,563,767
| 10
| 81
| 30,121
| 0
|
javascript,python,comparison,node.js,tornado
|
node.js uses V8, which compiles JavaScript into machine code; Tornado doesn't do that yet.
Other than that (which doesn't actually seem to make much difference to the speed), it's the ecosystem. Do you prefer the event model of JS, or the way Python works? Are you happier using Python or JS libraries?
| 0
| 1
| 0
| 0
|
2011-04-06T05:06:00.000
| 5
| 1
| false
| 5,561,701
| 1
| 0
| 0
| 3
|
Besides the fact that node.js is written in JS and Tornado in Python, what are some of the differences between the two? They're both non-blocking asynchronous web servers, right? Why choose one over the other besides the language?
|
Differences between node.js and Tornado
| 9,171,990
| 3
| 81
| 30,121
| 0
|
javascript,python,comparison,node.js,tornado
|
Node.js also has a seamless WebSocket integration called Socket.IO. It handles browsers that support WebSockets and their events, and falls back to polling for older browsers. It makes development quite quick when you need a notification framework or similar event-based programming.
| 0
| 1
| 0
| 0
|
2011-04-06T05:06:00.000
| 5
| 0.119427
| false
| 5,561,701
| 1
| 0
| 0
| 3
|
Besides the fact that node.js is written in JS and Tornado in Python, what are some of the differences between the two? They're both non-blocking asynchronous web servers, right? Why choose one over the other besides the language?
|
Differences between node.js and Tornado
| 29,455,334
| 3
| 81
| 30,121
| 0
|
javascript,python,comparison,node.js,tornado
|
I would suggest you go with Node.js if there is no personal preference for Python. I like Python a lot, but for async I chose Tornado over Node, and later had to struggle to find ways to do things, or libraries with async support (e.g. Cassandra has async in its tests, but nowhere could I find a way to use cqlengine with async; I had to choose Mongo since I had already passed the deadline).
In terms of performance and async, Node is far better than Tornado.
| 0
| 1
| 0
| 0
|
2011-04-06T05:06:00.000
| 5
| 0.119427
| false
| 5,561,701
| 1
| 0
| 0
| 3
|
Besides the fact that node.js is written in JS and Tornado in Python, what are some of the differences between the two? They're both non-blocking asynchronous web servers, right? Why choose one over the other besides the language?
|
Changing versions of Python in command line
| 5,568,374
| 5
| 6
| 3,806
| 0
|
python,macos
|
You can simply type python3.2 instead of just python to use python 3.2.
| 0
| 1
| 0
| 0
|
2011-04-06T14:48:00.000
| 1
| 1.2
| true
| 5,568,330
| 1
| 0
| 0
| 1
|
Python 2.5 came installed on my Mac. I downloaded Python 3.2 and have it running in my IDE. When I open the terminal in Mac and type in Python, it tells me I'm working with 2.5.
1) What do I enter in the command line to change from 2.5 to 3.2?
2) Once I get to 3.2 (using your answer), how do I get back to 2.5 if I want to?
Thanks for your help.
|
Saving the state of a program to allow it to be resumed
| 48,079,154
| 1
| 11
| 20,357
| 0
|
python,state
|
If you are OK with OOP, consider creating a method for each class that writes a serialised version (using pickle) to a file. Then add a second method that loads the data back into the instance; if the pickled file is there, you call the load method instead of the processing one.
I use this approach for ML and it really speeds up my workflow.
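A minimal sketch of that pattern (the class and checkpoint file name are hypothetical):

```python
import os
import pickle

class LongJob(object):
    CHECKPOINT = 'longjob.pkl'

    def save(self):
        # Serialise the whole instance state to disk.
        with open(self.CHECKPOINT, 'wb') as f:
            pickle.dump(self.__dict__, f)

    def load(self):
        # Restore state if a checkpoint exists; report whether we did.
        if not os.path.exists(self.CHECKPOINT):
            return False
        with open(self.CHECKPOINT, 'rb') as f:
            self.__dict__.update(pickle.load(f))
        return True
```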
| 0
| 1
| 0
| 0
|
2011-04-06T15:27:00.000
| 4
| 0.049958
| false
| 5,568,904
| 1
| 0
| 0
| 2
|
I occasionally have Python programs that take a long time to run, and that I want to be able to save the state of and resume later. Does anyone have a clever way of saving the state either every x seconds, or when the program is exiting?
|
Saving the state of a program to allow it to be resumed
| 5,569,817
| 4
| 11
| 20,357
| 0
|
python,state
|
If you want to save everything, including the entire namespace and the line of code currently executing to be restarted at any time, there is not a standard library module to do that.
As another poster said, the pickle module can save pretty much everything into a file and then load it again, but you would have to specifically design your program around the pickle module (i.e. saving your "state" -- including variables, etc -- in a class).
| 0
| 1
| 0
| 0
|
2011-04-06T15:27:00.000
| 4
| 0.197375
| false
| 5,568,904
| 1
| 0
| 0
| 2
|
I occasionally have Python programs that take a long time to run, and that I want to be able to save the state of and resume later. Does anyone have a clever way of saving the state either every x seconds, or when the program is exiting?
|
Effectively reading a large, active Python log file
| 5,574,829
| 0
| 1
| 517
| 0
|
python,windows,logging,text-files
|
As ʇsәɹoɈ commented, the standard FileHandler logger does not lock the file, so it should work. However, if for some reason you cannot keep your lock on the file, then I'd recommend having your other app open the file periodically, record the position it has read to, and then seek back to that point later. I know the Linux DenyHosts program uses this approach when dealing with log files that it has to monitor for a long period of time. In those situations, simply holding a lock isn't feasible, since directories may move, the file may get rotated out, etc. Though it does complicate things in that you then have to store the filename + read position in persistent state somewhere.
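A minimal sketch of the offset-tracking approach in Python (paths are hypothetical; a shrinking file is treated as rotation):

```python
import os

def read_new_lines(log_path, state_path='reader.offset'):
    offset = 0
    if os.path.exists(state_path):
        with open(state_path) as f:
            offset = int(f.read() or 0)
    with open(log_path) as f:
        f.seek(0, os.SEEK_END)
        if f.tell() < offset:
            offset = 0           # file rotated or truncated: start over
        f.seek(offset)
        lines = f.readlines()    # only the newly appended lines
        offset = f.tell()
    with open(state_path, 'w') as f:
        f.write(str(offset))
    return lines
```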
| 0
| 1
| 0
| 0
|
2011-04-06T18:13:00.000
| 2
| 1.2
| true
| 5,571,035
| 1
| 0
| 0
| 1
|
When my Python script is writing a large amount of logs to a text file line by line using the Python built-in logging library, in my Delphi-powered Windows program I want to effectively read all newly added logs (lines).
When the Python script is logging to the file, my Windows program will keep a read-only file handle to that log file;
I'll use the Windows API to get informed when the log file is changed; once the file is changed, it'll read the newly appended lines.
I'm new to Python, do you see any possible problem with this approach? Does the Python logging lib lock the entire log? Thanks!
|
OS independent printing with Python
| 5,572,506
| 1
| 2
| 2,696
| 0
|
python,pdf,printing,pygtk
|
Have you considered generating HTML?
| 1
| 1
| 0
| 0
|
2011-04-06T20:22:00.000
| 3
| 0.066568
| false
| 5,572,489
| 0
| 0
| 0
| 1
|
I am developing an application which has to be able to print a couple of pages with Python. Now I am searching for a method to create these pages and print them. It should work on Linux and Windows. The pages contain tables, images and text.
I developed the GUI with PyGtk, but I think it's convenient to create an image or PDF and print it. I have no idea how to do this. Anyone knows a good way for this?
Note: The problem isn't the generation. It is the printing of that file.
|
Help choosing between reload or subprocess
| 5,592,323
| 1
| 3
| 146
| 0
|
python,python-3.x
|
Reloading a module is rarely a good idea in a production environment; it's a mechanism intended for debugging. When you reload a module, the module's contents (classes, functions, data) get replaced, but existing references to these items from other modules are not affected. This is particularly important for classes: existing objects in memory still refer to the old class, whereas objects generated after the reload refer to the new class.
There is another alternative you might want to consider: load Python code from a file and exec it. Less overhead than a complete subprocess, and less tightly coupled to the rest of a program than a module. In principle the same caveats apply to re-exec-ing as to reloading a module, but you are much less tempted to have references to exec'd code because it's more work.
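A small demonstration of the reload pitfall described above (mymodule and Thing are hypothetical; importlib.reload is the modern spelling, imp.reload on older Python 3):

import importlib
import mymodule                          # hypothetical module defining class Thing

old = mymodule.Thing()                   # instance created before the reload
importlib.reload(mymodule)               # replaces mymodule's contents in place
new = mymodule.Thing()

print(type(old) is mymodule.Thing)       # False: 'old' still uses the old class
print(type(new) is mymodule.Thing)       # True: new objects use the new class
print(isinstance(old, mymodule.Thing))   # False too, breaking isinstance checks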
| 0
| 1
| 0
| 0
|
2011-04-08T04:22:00.000
| 1
| 1.2
| true
| 5,590,396
| 0
| 0
| 0
| 1
|
Hello, I want to know the best way to re-import or re-execute a module. I have a web server with just one Apache session for all my domains and applications, so if I need to make changes to one application, restarting the server will affect the others; hence I'm looking for the best way to reload a module. If I choose subprocess I will need to print the response, but I don't know whether that is the most secure way of communicating. Please tell me, from your experience, which is the best way?
Thanks in advance!
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 7,762,436
| 3
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
I too had the error: stdlib sources not found.
My fix was to install XCode 4.2 and then retry Eclipse's PyDev "Auto Config" method.
No error. PyDev running OK!
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 0.046121
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 18,840,306
| 1
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
In Preferences > PyDev > Interpreter - Python
Choose New...
Name it "Python2.7"
set the path to /usr/bin/python
It then auto-configures some paths; select them and it proceeds.
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 0.015383
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 12,368,091
| 2
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
@labjunky, it also works if the .py files from the lib folder in the source tarball are dropped into the user's site-packages folder, ~/Library/Python/2.7/lib/python/site-packages (provided it is listed in the locations by PyDev and selected). This can be useful if the user does not have permission to modify the location in /System/Library/Frameworks/....
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 0.03076
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 5,697,623
| 2
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
I found a solution that avoids touching the Python version delivered with the Mac: download and install a new one (currently 3.something).
When setting up the interpreter, point to /usr/local/bin/python3.
(To find the exact path, open a terminal and type sudo -s, enter your password, then cd /usr/local/bin and ls.)
-> This shows you the contents of that folder; you should find the Python interpreter in there.
WARNING!!!!
Do not touch or change any other Python files/folders delivered with your Mac.
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 0.03076
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 70,013,286
| 0
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
I got this error because I downloaded the embedded zip file version of Python and extracted it to a folder. I then downloaded the actual installer and ran it. That gave me the stuff that I was missing.
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 0
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 13,389,249
| 7
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
When I upgraded to Mountain Lion (10.8.2) I had this problem. The solution was to install XCode 4.5.2, then in XCode > Preferences > Components, there is an option to install the Command Line Tools. I installed them and then I was able install Interpreter.
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 1
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
PyDev Eclipse Python interpreters Error: stdlib not found
| 5,940,607
| 28
| 25
| 26,549
| 0
|
python,eclipse,pydev
|
Had the same problem. Eclipse wouldn't find all the required paths using the default installed Python (2.6). I downloaded Python 2.7 and went through the install. My new "which python" path became:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python.
When I set up the interpreter this time, I specified this path and it went right through.
Note:
Browse to the /Library/Frameworks/Python.framework/Versions/2.7/bin directory.
Select the Python interpreter that's installed there. Sometimes a 'python' link to the current interpreter (say, python3) doesn't exist.
| 0
| 1
| 0
| 0
|
2011-04-08T12:50:00.000
| 13
| 1
| false
| 5,595,276
| 1
| 0
| 0
| 7
|
I have been trying to use Eclipse 3.6 as a Python editor.
I install the latest version of PyDev, and then try to set the Interpreter - Python field of the preferences, on my mac.
My python version is 2.6 and the path is "/usr/bin/python". When I enter this, and I select the items to add to the system PYTHONPATH I get the following error message:
Error: Python stdlib not found
It seems that the Python /Lib folder (which contains the standard
library) was not found /selected during the instal process.
This folder (which contains files such as threading.py and
traceback.py) is required for PyDev to function properly (and it must
contain the actual source files, not only .pyc files) ...
So I can't tell eclipse the interpreter path!
Any help would be great!
(I tried reinstalling PyDev already, no luck)
Thanks!
Following Praveen's answer, My python library is in /library/python/2.6/site-packages. When I enter /usr/bin/python into the interpreter field, eclipse asks me which paths I would like to add to my System PYTHONPATH. One of the checkbox items is exactly that path. So I check it, along with the other boxes. Click ok, and I get the same error.
|
distutils does not recompile C extension modules
| 5,599,557
| 0
| 0
| 938
| 0
|
python,distutils
|
Looking into the source for distutils and seeing how it enforces rebuilds, it appears to check file timestamps to determine whether a file is out of date.
Can you make sure the timestamp changes when WinSCP uploads the file? Otherwise, the build command has a "force" option that forces a rebuild no matter what.
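For example, forcing a full rebuild from the command line:
python setup.py build --force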
| 0
| 1
| 0
| 1
|
2011-04-08T18:39:00.000
| 1
| 1.2
| true
| 5,599,414
| 0
| 0
| 0
| 1
|
I'm trying to use distutils with a Python module that contains extensions written in C. The program code is housed on a Linux server, but I sometimes upload changes from a Windows machine using the file transfer program WinSCP (editing is done in Notepad++). I've noticed that distutils often does not notice these changes in the C code (i.e. python setup.py build does not trigger gcc if the code was previously compiled). A check of the C source code on the server shows that it really has been updated correctly. On the other hand, changing the code directly on the server using a text editor like vim always causes python setup.py build to recompile the changed files. Any idea why uploading changed files might not cause distutils to recompile them?
Thanks.
EDIT:
After investigating this further I am noticing the same problem if I just create a plain C program with a Makefile. Thus this problem does not look like it is a distutils problem.
|
Google-App Engine logging problem
| 5,612,643
| 0
| 1
| 252
| 0
|
python,django,google-app-engine
|
Thanks Abdul, you made me realize what the problem was. I had changed a URL in my application to point at the application I had deployed to Google App Engine, when it should have been pointing at my local application: I had myapp.appspot.com/move instead of localhost/move.
| 0
| 1
| 0
| 0
|
2011-04-10T14:24:00.000
| 2
| 1.2
| true
| 5,612,390
| 0
| 0
| 1
| 1
|
I'm wondering if anyone has experienced problems with Google-App Engine's logging facility. Everything was working fine for me until this morning, I ran my local server and no logging messages were being displayed (well, none of my logging messages, the server GET messages etc.. are being displayed). Even errors are not being reported. I have no idea what is going on.
If this has happened to anyone, can you please advise on how to fix it?
|
Uninstall python 3.2 on mac os x 10.6.7
| 23,726,032
| 0
| 10
| 19,603
| 0
|
macos,uninstallation,python-3.2
|
Just uninstall the 3.x version of Python if you have already installed it. Eclipse has that option when you click "see what's already installed".
Then install the 2.7 version. It works for me on my OS X 10.9.2 with Eclipse Juno.
| 0
| 1
| 0
| 0
|
2011-04-11T13:20:00.000
| 3
| 0
| false
| 5,621,952
| 1
| 0
| 0
| 2
|
According to the documentation from python.org, python 3.2 install on mac os requires an upgrade to tcl/tk 8.5.9 (for use of IDLE). In my haste, I have done both. Now my friend told me that python 3 is not recommended yet because only the built-ins and a few modules have been released for 3. The stable one so far is 2.7 (especially if one wants to make extensive use of a variety of modules). My machine has both 2.6.1 and 3.2 (because some OS services make use of 2.6.1 that comes as default with the OS).
1. How do i remove 3.2 completely to avoid any compatibility issues?
tcl/tk 8.5.9 was also installed and this is not the default. There was no verbose mode during installation, so I don't know whether it replaced the default one. If it did how bad can it be for the OS? and hence
2. If the above is really bad, how do i downgrade to the old version of tcl/tk?
In short, how do i bring my machine back to its original state? If anyone knows all the paths to the directories and files I can do it manually.
Thanks
|
Uninstall python 3.2 on mac os x 10.6.7
| 5,627,279
| 4
| 10
| 19,603
| 0
|
macos,uninstallation,python-3.2
|
I did the same (3.2 on a Mac running 10.6) and:
-Moved both the Python 3.2 folder and the ActiveState ActiveTcl folder from the Applications Folder to the Trash.
-Moved the Python.framework folder from the Library/Frameworks folder to the Trash.
Running System profiler shows only the 2.6 version of Python.
Marcos
| 0
| 1
| 0
| 0
|
2011-04-11T13:20:00.000
| 3
| 0.26052
| false
| 5,621,952
| 1
| 0
| 0
| 2
|
According to the documentation from python.org, python 3.2 install on mac os requires an upgrade to tcl/tk 8.5.9 (for use of IDLE). In my haste, I have done both. Now my friend told me that python 3 is not recommended yet because only the built-ins and a few modules have been released for 3. The stable one so far is 2.7 (especially if one wants to make extensive use of a variety of modules). My machine has both 2.6.1 and 3.2 (because some OS services make use of 2.6.1 that comes as default with the OS).
1. How do i remove 3.2 completely to avoid any compatibility issues?
tcl/tk 8.5.9 was also installed and this is not the default. There was no verbose mode during installation, so I don't know whether it replaced the default one. If it did how bad can it be for the OS? and hence
2. If the above is really bad, how do i downgrade to the old version of tcl/tk?
In short, how do i bring my machine back to its original state? If anyone knows all the paths to the directories and files I can do it manually.
Thanks
|
Celery - collision of task_ids
| 5,647,566
| 0
| 1
| 304
| 0
|
python,django,celery
|
It shouldn't be possible, and even if it were, it should be very rare. My guess would be that the same task is executed a second time after your exception. Maybe there is a problem with your routing keys so the worker doesn't get the task? Or the broker has a problem; I've seen odd issues with RabbitMQ. Deleting its database (RABBITMQ_MNESIA_BASE) helped in my case.
| 0
| 1
| 0
| 0
|
2011-04-13T08:54:00.000
| 1
| 0
| false
| 5,646,679
| 0
| 0
| 1
| 1
|
I'm getting a HardTimeLimit exception for my tasks. After examining the logs I found:
the task is not being received by celery (no "Got task from broker:" message for the task id)
a task with the same id was executed a couple of days ago.
Task ids are assigned automatically by the @task decorator, tasks are started by django, and there are ~2k tasks per day (and ~30 collisions per day).
How is an ID collision possible? How do I prevent it?
|
Tracing windows API calls
| 5,653,820
| 3
| 3
| 1,707
| 0
|
.net,python,winapi,trace,etw
|
Generally speaking, there are two approaches to intercepting system API calls; either user mode or kernel mode interception. For user mode API interception, you will have to hook every process to accurately capture/redirect every call to your desired API function. Kernel mode interception circumvents the need to hook every process, but also requires advanced low-level knowledge (and a cross-signed code signing certificate to run your code in kernel mode).
There are a number of libraries available that will provide API hooking functionality, but I believe the ones I know of all work primarily in user mode, i.e. requiring system-wide DLL injection into processes.
| 0
| 1
| 0
| 0
|
2011-04-13T17:04:00.000
| 1
| 1.2
| true
| 5,652,908
| 1
| 0
| 0
| 1
|
I am currently working on a tool in .NET/Python that monitors certain events on a system, like writing specific registry keys or creating files with a special name.
I evaluated many possibilities, and as I don't have to care about WinXP support, I am using Event Tracing for Windows to get a real-time stream of all file and registry activities, and this works fine (by consuming events from the NT kernel logger).
Now, I have to extend my tool to monitor all calls to some Windows API functions like WriteProcessMemory, NtUnmapViewOfSection or VirtualAllocEx. I found many tools that allows me to trace all API calls from a single process, but hooking all processes isn't a good idea, is it?
Now I wonder if there is a possibility to use ETW for this. Is there any kernel provider that notifies me of API calls? If not, what else can I do?
Summary: If I want to catch API calls, do I have to hook every single process?
|
Python distribution for Scientists on Linux
| 5,660,288
| 5
| 1
| 10,929
| 0
|
python,linux,distribution,pythonxy
|
For Linux, you don't need something like PythonXY, because it's already very easy to install packages with your package manager. Things are actually a lot better integrated under Linux than under Windows.
What you need to do is pick a good Linux distribution and install the packages you like with the package manager (apt, dnf, pacman...).
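For example, on a Debian/Ubuntu system (exact package names vary by distribution and release):
sudo apt-get install python-numpy python-scipy python-matplotlib ipython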
| 0
| 1
| 0
| 0
|
2011-04-14T07:26:00.000
| 5
| 0.197375
| false
| 5,659,931
| 0
| 0
| 0
| 1
|
I would like to find out if there is a Pythonxy.com equivalent for Linux/Mac OS X yet.
Any kind of pointers would be helpful.
Thanks and best regards,
Vishal Sapre
|
Packaging and shipping a python library and scripts, the professional way
| 5,733,166
| 3
| 37
| 8,155
| 0
|
python
|
This will vary depending on your target market. In specialized niche industries there is more variety in how software is distributed. In heavily commoditized areas I would expect a native OS package (at least if I were a customer). I tend to take the quality of the deployment package as indicative of the quality of the software in general, and I associate native OS packages with higher quality than other formats, largely because their dependency information can be complete. This makes it easier to do compliance testing and change management.
Native OS Packages
For Unices consider creating native OS packages. They provide better integration and visibility with processes like compliance, change management, dependency management, etc.
For OSX others have already suggested py2app. You may also be able to leverage MacPorts package format or the Fink package format.
For Windows others have already suggested py2exe.
Relocation and Config Requirements
Put your Python executable under .../libexec. This prevents it from accidentally being called.
Change the name of the Python executable to prevent confusion, i.e. /usr/local/libexec/<pkg>_python
Distribute the .py for the bins to make them easily relocatable. You can change the Magic Cookie at install time to whatever the location your Python was installed in via an install script. The only code you need in the bin is a line that calls into your lib which is a pyc.
Install your libs in the correct location under /usr/local/lib/app_python/site_package/... and you won't need to use PYTHONPATH.
Shared Libraries
If I remember correctly you'll want to make sure you strip any rpath entries from the libs as this can mess with their ability to be relocated.
The native OS packaging should help with any dependencies the shared libs require.
| 0
| 1
| 0
| 0
|
2011-04-14T09:49:00.000
| 9
| 0.066568
| false
| 5,661,385
| 1
| 0
| 0
| 2
|
I have the task of packaging and shipping a commercial application bundle, which will include:
a python library (developed by us)
some python programs depending on the library above
additional libraries not developed by us, but which are dependencies of our library.
a complete python installation (python 2.6)
additional stuff, libs and programs in other languages. Not a concern here, as they are not hooked into the above machinery, and the current shipping process works already.
The bundle is shipped to Linux, OSX and Windows. On Linux, it's distributed as a simple tar.gz. The user just unpacks the tar.gz and source a provided bash script in .bashrc, so that the environment is correctly set. On mac, it's a dmg. On windows, I have no idea. The windows guy is not here today, but what I see is that an exe is created somehow.
I will now explain in more detail the above points.
Our Python Library
We don't want to give out sources, so we want to provide only compiled python files. A better strategy to make them even more tamper-proof is welcome, even if it involves some deep hacking (e.g. I once saw magic done importing stuff from a .zip which was "corrupted" ad-hoc). The library at the moment does not have C level code or similar platform dependent code, but this is going to change soon. We will therefore have to provide platform-specific compiled .so together with the pyc.
Clearly, this library will be shipped in the package, together with the rest of our application. It will therefore be installed on the downloaded bundle. For this reason, it must be fully relocatable, and the user must in some way (either manually or via our env script) add the location of the untarred package to PYTHONPATH, so that the interpreter can find it.
Our Python Programs
We will ship applications in our bundle, and these applications will depend on our library. The code of these applications must be either visible by the user (so that he can learn how to use the library interface), or not visible (for those utilities we want to keep closed-source), so a double approach is called for.
Additional Libraries
Our library depends on 3rd party libraries we will have to ship, so that the user is up and running without any dependency hunting. Clearly, these libraries will be installed by us in the bundle, but we must hope these don't store the install path somewhere during the build, because that would make them non relocatable.
Our python
We will ship our version of python, which we assume the user will run in order to access our script. This is because we want to be sure of the python version running. Also, we may tinker a bit the executable or the standard library. We may have a concern about the interaction of this python with the standard python, and if the user wants a specific library on our python it will have to install it within our bundled package, and not on the standard place for libraries.
Request
I need to make my mind around this task. I've seen it done, but never done it personally, so I need your point of view. What I presented above is how I think things should work, according to how things are working right now, but it may be wrong. Any hint, quirk, suggestion, or strategy for a successful deployment is welcome. Given the complexity of the question, I already announce a high bounty on it, according to the best answer I can get.
|
Packaging and shipping a python library and scripts, the professional way
| 5,731,719
| 15
| 37
| 8,155
| 0
|
python
|
This is not a complete answer but just a bunch of ideas. I wrote an installer for a client that incorporated some ideas that might be useful to you.
It was Linux only so I focussed on just that. We needed to ship specific custom versions of mySQL, lighttpd, python, memcached, a few 3rd party Python modules and some custom scripts. We needed to launch all these services without any problems and let the user control them using regular initscripts. It should work fine on a bunch of popular distros and therefore shouldn't rely on distro specific stuff.
What I did was as follows.
Created a 500MB (I don't recall the exact size) file and formatted it as an ext3 file system.
Mounted it at a point using a loopback device.
Ran debootstrap on the mountpoint to create a custom Debian install.
Chrooted inside the partition and then ran a bunch of scripts which did an apt-get install on all our dependencies, installed all the eggs and other packages which were necessary for the app, installed the app itself in /opt (inside the chroot), installed supervisord (to do process management) and set things up. Now, this partition was a completely self contained Linux filesystem that contained the application and everything needed to run it. You could dump it anywhere, chroot inside it and launch the app. The only dependency it had with the outside world were the ports it would use for its services and the supervisord control socket. This was the main point. We were able to include exactly what we needed (compiled files, .pycs only etc.) for a few of the applications and didn't have to bother with any limitations in standard installation tools.
After this, we packaged a few extra scripts that would go into the external operating system. These were custom made for each distro that we would have to support. This part was distro specific. There were scripts that would go into /etc/init.d and some scripts that would setup the database and stuff at the beginning.
We then created an archive of the entire filesystem using makeself. It would checksum stuff and all that and provide a self extracting archive which if run would untar the whole thing into /opt on the host machine, chroot inside the directory and run a setup script that would ask the user a few questions like db username/password etc. and set things up. After that, it would fetch the scripts I mentioned in step 5 and put them on the host OS.
The initscripts would simply chroot into the partition and start supervisord. It would then take care of launching all the services we cared about. Shutting down the application was simply a matter of connecting to running supervisord and running a command. We wrapped this in the initscript so that the user experience was UNIX like.
Now, we'd give clients the self extracting .run file. They'd run it, get asked a few questions and it would create a directory under /opt which contained our app and all it's dependencies. The init scripts would be modified to start our app on bootup and things would work as expected.
I think step 4 gives you the freedom to install whatever you want, however you want so that things would work fine.
| 0
| 1
| 0
| 0
|
2011-04-14T09:49:00.000
| 9
| 1.2
| true
| 5,661,385
| 1
| 0
| 0
| 2
|
I have the task of packaging and shipping a commercial application bundle, which will include:
a python library (developed by us)
some python programs depending on the library above
additional libraries not developed by us, but which are dependencies of our library.
a complete python installation (python 2.6)
additional stuff, libs and programs in other languages. Not a concern here, as they are not hooked into the above machinery, and the current shipping process works already.
The bundle is shipped to Linux, OSX and Windows. On Linux, it's distributed as a simple tar.gz. The user just unpacks the tar.gz and source a provided bash script in .bashrc, so that the environment is correctly set. On mac, it's a dmg. On windows, I have no idea. The windows guy is not here today, but what I see is that an exe is created somehow.
I will now explain in more detail the above points.
Our Python Library
We don't want to give out sources, so we want to provide only compiled python files. A better strategy to make them even more tamper-proof is welcome, even if it involves some deep hacking (e.g. I once saw magic done importing stuff from a .zip which was "corrupted" ad-hoc). The library at the moment does not have C level code or similar platform dependent code, but this is going to change soon. We will therefore have to provide platform-specific compiled .so together with the pyc.
Clearly, this library will be shipped in the package, together with the rest of our application. It will therefore be installed on the downloaded bundle. For this reason, it must be fully relocatable, and the user must in some way (either manually or via our env script) add the location of the untarred package to PYTHONPATH, so that the interpreter can find it.
Our Python Programs
We will ship applications in our bundle, and these applications will depend on our library. The code of these applications must be either visible by the user (so that he can learn how to use the library interface), or not visible (for those utilities we want to keep closed-source), so a double approach is called for.
Additional Libraries
Our library depends on 3rd party libraries we will have to ship, so that the user is up and running without any dependency hunting. Clearly, these libraries will be installed by us in the bundle, but we must hope these don't store the install path somewhere during the build, because that would make them non relocatable.
Our python
We will ship our version of python, which we assume the user will run in order to access our script. This is because we want to be sure of the python version running. Also, we may tinker a bit the executable or the standard library. We may have a concern about the interaction of this python with the standard python, and if the user wants a specific library on our python it will have to install it within our bundled package, and not on the standard place for libraries.
Request
I need to make my mind around this task. I've seen it done, but never done it personally, so I need your point of view. What I presented above is how I think things should work, according to how things are working right now, but it may be wrong. Any hint, quirk, suggestion, or strategy for a successful deployment is welcome. Given the complexity of the question, I already announce a high bounty on it, according to the best answer I can get.
|
IronPython IDE for Ubuntu and Linux
| 5,667,841
| 1
| 2
| 2,128
| 0
|
linux,ide,ironpython,monodevelop,pydev
|
I don't know about Eclipse, but there isn't currently an IronPython addin for MonoDevelop. If anyone's interested in developing one, please contact the MonoDevelop mailing list for advice on getting started.
| 0
| 1
| 0
| 1
|
2011-04-14T14:08:00.000
| 3
| 0.066568
| false
| 5,664,513
| 0
| 0
| 0
| 2
|
I've been searching for an IDE with code completion (intellisence) for IronPython on Linux systems (typically Ubuntu).
I've found references to MonoDevelop and Eclipse (PyDev) supporting IronPython, but I can't get any of them to work.
Is this because MonoDevelop and PyDev only support IronPython code completion on Windows? Are there any installation guides that I could follow to make these IDEs work on Ubuntu / Linux.
Many thanks for your help,
Chris
|
IronPython IDE for Ubuntu and Linux
| 5,883,279
| 2
| 2
| 2,128
| 0
|
linux,ide,ironpython,monodevelop,pydev
|
Try JetBrains PyCharm
| 0
| 1
| 0
| 1
|
2011-04-14T14:08:00.000
| 3
| 0.132549
| false
| 5,664,513
| 0
| 0
| 0
| 2
|
I've been searching for an IDE with code completion (intellisence) for IronPython on Linux systems (typically Ubuntu).
I've found references to MonoDevelop and Eclipse (PyDev) supporting IronPython, but I can't get any of them to work.
Is this because MonoDevelop and PyDev only support IronPython code completion on Windows? Are there any installation guides that I could follow to make these IDEs work on Ubuntu / Linux.
Many thanks for your help,
Chris
|
Can beagleboard run python or Ruby programs?
| 6,601,437
| 1
| 3
| 3,513
| 0
|
python,ruby,beagleboard
|
The interpreters do not need to be compiled from source, as the Ubuntu arm distribution has python in its repository as a deb. I was able to write my python scripts on my Ubuntu box and transfer them to the beagleboard without any changes. Performance so far has been surprisingly good, as I'm using the python script as a bridge between the real-time sound processing/synthesis language supercollider and a motor control board that communicates over USB-serial.
| 0
| 1
| 0
| 1
|
2011-04-14T21:14:00.000
| 4
| 0.049958
| false
| 5,669,785
| 0
| 0
| 0
| 3
|
Hi, I have just ordered a couple of BeagleBoards for experimenting. I know they can run Ubuntu and many other flavors of Linux.
Does that mean they can run all the trivial software that runs on Ubuntu?
Will the Python and Ruby interpreters work just the way they work on a PC?
|
Can beagleboard run python or Ruby programs?
| 5,669,819
| 4
| 3
| 3,513
| 0
|
python,ruby,beagleboard
|
The Beagleboard can run both of them, but you may have to compile the interpreters from source. And don't expect the performance of a desktop.
| 0
| 1
| 0
| 1
|
2011-04-14T21:14:00.000
| 4
| 1.2
| true
| 5,669,785
| 0
| 0
| 0
| 3
|
Hi, I have just ordered a couple of BeagleBoards for experimenting. I know they can run Ubuntu and many other flavors of Linux.
Does that mean they can run all the trivial software that runs on Ubuntu?
Will the Python and Ruby interpreters work just the way they work on a PC?
|
Can beagleboard run python or Ruby programs?
| 8,717,516
| 1
| 3
| 3,513
| 0
|
python,ruby,beagleboard
|
The Angstrom Linux distribution (which runs on the Beagle Board) has binary packages for both Python and Ruby. I've worked on an application that uses Python and PyGTK. Never had any problems.
| 0
| 1
| 0
| 1
|
2011-04-14T21:14:00.000
| 4
| 0.049958
| false
| 5,669,785
| 0
| 0
| 0
| 3
|
Hi, I have just ordered a couple of BeagleBoards for experimenting. I know they can run Ubuntu and many other flavors of Linux.
Does that mean they can run all the trivial software that runs on Ubuntu?
Will the Python and Ruby interpreters work just the way they work on a PC?
|
Task Scheduling Across a Network?
| 5,672,196
| 2
| 3
| 1,351
| 0
|
python,scheduled-tasks,distributed-computing
|
Fabric (http://docs.fabfile.org/en/1.0.1/index.html) is a pretty good toolkit for various sysadmin and deployment tasks. It comes with a few predefined tasks but also gives you the flexibility to add what you need.
I highly recommend it.
| 0
| 1
| 0
| 0
|
2011-04-15T01:30:00.000
| 3
| 0.132549
| false
| 5,671,527
| 0
| 0
| 0
| 1
|
Can you recommend on a python tool / module that allows scheduling tasks on remote machine in a network?
Note that the solution must be able to not only run certain jobs/commands on remote machines, but also verify that jobs etc are still running (for example, consider the case where a machine dies after a task has been assigned to it?)
|
Why re-implement shell commands line by line in a Fabric script?
| 8,855,653
| 2
| 4
| 1,058
| 0
|
python,shell,deployment,fabric
|
Also, I think the path you'd choose would depend on what you're trying to do. Some things are easier in python (write it in your fabfile), while others are easier in shell-land (take one of the shell approaches mentioned).
Either way, fabric is geared towards centralization and portability, and it doesn't really matter what's actually doing the lifting.
| 0
| 1
| 0
| 1
|
2011-04-15T06:31:00.000
| 3
| 0.132549
| false
| 5,673,154
| 0
| 0
| 0
| 3
|
Fabric is a tool for "executing local or remote shell commands."
Why would you re-implement a remote shell script line by line in a long Fabric script?
That is, why not just write a brief Fabric script that runs a long remote shell script instead?
|
Why re-implement shell commands line by line in a Fabric script?
| 5,674,298
| 4
| 4
| 1,058
| 0
|
python,shell,deployment,fabric
|
It won't be a good idea if I have to run the same script on, let's say, 10 servers. That means I not only have to put the same long script on 10 servers, but also make sure that if I change it on one server, the change is applied to all of them. I know this can be averted by keeping the script in a shared location, but it's much more organized to have the script in the fabfile, which can not only be version controlled but kept uniform across all the roles.
| 0
| 1
| 0
| 1
|
2011-04-15T06:31:00.000
| 3
| 0.26052
| false
| 5,673,154
| 0
| 0
| 0
| 3
|
Fabric is a tool for "executing local or remote shell commands."
Why would you re-implement a remote shell script line by line in a long Fabric script?
That is, why not just write a brief Fabric script that runs a long remote shell script instead?
|
Why re-implement shell commands line by line in a Fabric script?
| 5,676,599
| 4
| 4
| 1,058
| 0
|
python,shell,deployment,fabric
|
lobster1234 raises a good point that you don't want to have to manually stick a long, remote shell script on 10 servers. However, if you still want to avoid rewriting the long, remote shell script as a long Fabric script, you could write a Fabric script that copies that remote shell script to the designated server, executes that script, and then removes the script. This way you can revision control both the fabfile and shell script together but avoid rewriting the shell script into a Fabric script.
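A minimal sketch of that pattern using the Fabric 1.x API (the script name and remote path are hypothetical):

from fabric.api import put, run

def deploy_and_run():
    # Copy the long shell script up, execute it, then clean up after ourselves
    put('scripts/long_task.sh', '/tmp/long_task.sh')
    run('bash /tmp/long_task.sh')
    run('rm -f /tmp/long_task.sh')

Running fab -H server1,server2 deploy_and_run then applies the same script across hosts while it stays version controlled next to the fabfile.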
| 0
| 1
| 0
| 1
|
2011-04-15T06:31:00.000
| 3
| 1.2
| true
| 5,673,154
| 0
| 0
| 0
| 3
|
Fabric is a tool for "executing local or remote shell commands."
Why would you re-implement a remote shell script line by line in a long Fabric script?
That is, why not just write a brief Fabric script that runs a long remote shell script instead?
|
Example of using Chef with LibCloud
| 10,241,015
| 2
| 0
| 739
| 0
|
python,deployment,chef-infra
|
I use Chef to bootstrap EC2 instances. I also use boto for further modifications of EC2 instances, such as creating tags. I will now be using libcloud more often since I will have a mix of Rackspace and EC2.
As an aside, when bootstrapping an EC2 or Rackspace instance I do not use knife; I use libcloud to boot a machine, then ssh into the machine and install the chef client, since I found that to be more reliable and even faster than knife by 3-5 minutes.
Net net, both are used together. It's a happy marriage.
| 0
| 1
| 0
| 1
|
2011-04-15T07:22:00.000
| 2
| 0.197375
| false
| 5,673,599
| 0
| 0
| 0
| 2
|
Chef is commonly used for provisioning servers, right? So is LibCloud, right?
What's an example use case of why someone would use both tools together?
|
Example of using Chef with LibCloud
| 5,919,135
| 0
| 0
| 739
| 0
|
python,deployment,chef-infra
|
Chef works with a variety of cloud computing providers:
Amazon AWS EC2
Rackspace Cloud
Terremark vCloud
Bluebox Group
Openstack
Slicehost
It does this through the Ruby library, fog.
| 0
| 1
| 0
| 1
|
2011-04-15T07:22:00.000
| 2
| 0
| false
| 5,673,599
| 0
| 0
| 0
| 2
|
Chef is commonly used for provisioning servers, right? So is LibCloud, right?
What's an example use case of why someone would use both tools together?
|
python how to force subprocess.call to not wait for the called command to complete
| 5,677,732
| 9
| 4
| 1,923
| 0
|
python,subprocess
|
subprocess.Popen is what you are looking for!
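A minimal sketch (the batch file name is made up); Popen starts the process and returns immediately instead of blocking the way call does:

import subprocess

# Launch the batch file and continue without waiting for it to finish
proc = subprocess.Popen(['cmd', '/c', 'mytask.bat'])
print('batch file started, pid', proc.pid)
# ...the rest of the script runs concurrently; proc.wait() would block if needed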
| 0
| 1
| 0
| 0
|
2011-04-15T13:18:00.000
| 1
| 1.2
| true
| 5,677,391
| 0
| 0
| 0
| 1
|
I'm using subprocess.call to execute a bat file. subprocess.call waits for the bat file to complete before it continues. I want it to start the bat file and then carry on. Looking at the documentation for subprocess, it didn't look like it had an option to not wait for the command to complete.
Is there a way to do this, or another option besides subprocess.call?
|
Python scripting in linux
| 5,680,308
| 0
| 1
| 304
| 0
|
python,django,linux,fabric
|
You could also try any of the distributed computing packages. Pyro is one of them that might interest you.
| 0
| 1
| 0
| 1
|
2011-04-15T15:44:00.000
| 2
| 0
| false
| 5,679,203
| 0
| 0
| 0
| 1
|
We have around 250 identical Linux servers which run a business-critical web application for a bank. Basically we do a lot of scripting work, but now I want to centralize that in one location; that means running from one server and deploying to many. I know you must be thinking this is an easy task that can be done with a shell script, but again, we need to create many different scripts to do our work.
I know Python has a big library, so this should be possible, but I don't know how. To cut it short, I need all the scripts in one file, and based on the argument passed it will execute accordingly.
For example, in a Python program we have functions that we can mix to produce different results.
So please let me know how to go about it.
|
Hadoop Vs. Disco Vs. Condor?
| 5,682,297
| 3
| 4
| 1,606
| 0
|
python,distributed-computing
|
I'm unfamiliar with Disco and Condor, but I can answer regarding Hadoop:
Hadoop pros:
Robust and proven - probably more than anything else out there. Used by many organizations (including the one I work for) to run clusters of 100s of nodes and more.
Large ecosystem = support + many subprojects to make life easier (e.g. Pig, Hive)
Python support should be possible through the streaming MR feature, or maybe Jython?
Hadoop cons:
Neither simple nor elegant (imho). You'll have to spend time learning.
| 0
| 1
| 0
| 0
|
2011-04-15T20:43:00.000
| 2
| 1.2
| true
| 5,682,175
| 0
| 1
| 0
| 1
|
I am trying to find a tool that will manage a bunch of jobs on 100 machines in a cluster (submit the jobs to the machines; make sure that jobs are run etc).
Which tool would be more simple to install / manage:
(1) Hadoop?
(2) Disco?
(3) Condor?
Ideally, I am searching for a solution that would be as simple as possible, yet be robust.
Python integration is also a plus.
|
Detect socket hangup without sending or receiving?
| 8,434,845
| 18
| 27
| 35,496
| 0
|
python,c,linux,sockets,tcp
|
I've had a recurring problem communicating with equipment that had separate TCP links for send and receive. The basic problem is that the TCP stack doesn't generally tell you a socket is closed when you're just trying to read - you have to try and write to get told the other end of the link was dropped. Partly, that is just how TCP was designed (reading is passive).
I'm guessing Blair's answer works in the cases where the socket has been shut down nicely at the other end (i.e. they have sent the proper disconnection messages), but not in the case where the other end has impolitely just stopped listening.
Is there a fairly fixed-format header at the start of your message, that you can begin by sending, before the whole response is ready? e.g. an XML doctype? Also are you able to get away with sending some extra spaces at some points in the message - just some null data that you can output to be sure the socket is still open?
| 0
| 1
| 1
| 0
|
2011-04-16T12:32:00.000
| 7
| 1.2
| true
| 5,686,490
| 0
| 0
| 0
| 2
|
I'm writing a TCP server that can take 15 seconds or more to begin generating the body of a response to certain requests. Some clients like to close the connection at their end if the response takes more than a few seconds to complete.
Since generating the response is very CPU-intensive, I'd prefer to halt the task the instant the client closes the connection. At present, I don't find this out until I send the first payload and receive various hang-up errors.
How can I detect that the peer has closed the connection without sending or receiving any data? That means for recv that all data remains in the kernel, or for send that no data is actually transmitted.
|
Detect socket hangup without sending or receiving?
| 8,386,973
| -1
| 27
| 35,496
| 0
|
python,c,linux,sockets,tcp
|
You can select with a timeout of zero, and read with the MSG_PEEK flag.
I think you really should explain what you precisely mean by "not reading", and why the other answers are not satisfying.
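A sketch of that check in Python (assuming sock is a connected TCP socket; note, per the accepted answer, this only detects an orderly shutdown, not a peer that silently stopped listening):

import select
import socket

def peer_has_hung_up(sock):
    # A readable socket that yields zero bytes on a peek means EOF (peer closed)
    readable, _, _ = select.select([sock], [], [], 0)  # zero timeout: poll only
    if sock in readable:
        return sock.recv(1, socket.MSG_PEEK) == b''
    return False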
| 0
| 1
| 1
| 0
|
2011-04-16T12:32:00.000
| 7
| -0.028564
| false
| 5,686,490
| 0
| 0
| 0
| 2
|
I'm writing a TCP server that can take 15 seconds or more to begin generating the body of a response to certain requests. Some clients like to close the connection at their end if the response takes more than a few seconds to complete.
Since generating the response is very CPU-intensive, I'd prefer to halt the task the instant the client closes the connection. At present, I don't find this out until I send the first payload and receive various hang-up errors.
How can I detect that the peer has closed the connection without sending or receiving any data? That means for recv that all data remains in the kernel, or for send that no data is actually transmitted.
|
Google App Engine: How to get the offset of an object
| 5,691,077
| 2
| 0
| 157
| 0
|
python,database,google-app-engine
|
Why not query the data twice? Once for users with a higher score, in descending order. Once for users with a lower score. Fetch five records in each query.
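A sketch of those two queries with the old google.appengine.ext.db API (the Score model and its points property are made up):

from google.appengine.ext import db

class Score(db.Model):              # hypothetical model
    points = db.IntegerProperty()

def neighbours(me):
    # Ascending order gives the five nearest higher scores first
    higher = Score.all().filter('points >', me.points).order('points').fetch(5)
    # Descending order gives the five nearest lower scores first
    lower = Score.all().filter('points <', me.points).order('-points').fetch(5)
    return list(reversed(higher)) + [me] + lower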
| 0
| 1
| 0
| 0
|
2011-04-16T16:54:00.000
| 2
| 0.197375
| false
| 5,688,079
| 0
| 0
| 1
| 1
|
I am using Google App Engine in Python.
In my database I have entities that contain user scores, and I want to make a page showing the ranking of a particular user. To do that, I need the 5 users with a higher score than that user and the 5 users with a lower score, plus the position of that user in the database. I could use the cursor() method to get the encoded cursor, but I cannot get the 5 entities before the user's entity, and even so, I cannot get the position of the user in the database. Maybe I can use an offset in the query, but how do I get the offset of an entity?
|
how to read command output from serial device using python
| 23,190,516
| 0
| 0
| 4,870
| 0
|
python,serial-port
|
First you need to log in to the device; then you can run the specified command on it.
Note: the command you are going to run must be supported by the device.
After opening the serial port with open(), find the login prompt using read(), then write the username with write(); repeat the same for the password.
Once you have logged in, you can run the commands you need to execute.
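A rough sketch of that flow with pyserial (the device path comes from the question, but the prompts and credentials are made up; real devices differ):

import serial

ser = serial.Serial('/dev/ttyUSB-17', 115200, timeout=1)

def wait_for(prompt):
    # Keep reading until the expected prompt shows up in the output
    buf = b''
    while prompt not in buf:
        buf += ser.read(64)
    return buf

wait_for(b'login:')
ser.write(b'root\n')
wait_for(b'Password:')
ser.write(b'secret\n')
wait_for(b'#')                       # shell prompt: logged in
ser.write(b'tail -f /var/log/messages\n')
while True:
    line = ser.readline()            # returns b'' on timeout; loop keeps polling
    if line:
        print(line.decode(errors='replace'), end='')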
| 0
| 1
| 0
| 1
|
2011-04-17T00:22:00.000
| 2
| 0
| false
| 5,690,599
| 0
| 0
| 0
| 1
|
I have an embedded linux device and here's what I would like to do using python:
Get the device console over serial port. I can do it like this:
>>> ser = serial.Serial('/dev/ttyUSB-17', 115200, timeout=1)
Now I want to run a tail command on the embedded device command line, like this:
# tail -f /var/log/messages
and capture the o/p and display on my python >>> console.
How do I do that ?
|
Analyze logfile from GAE
| 5,699,117
| 1
| 0
| 220
| 0
|
python,google-app-engine
|
I suggest you use Google Analytics on your web app. If you want to do some sort of server-side visitor analytics (instead of the client-side JavaScript that Google Analytics uses), you'd have to store something in a database (BigTable on GAE) and run your own analytics.
| 0
| 1
| 0
| 0
|
2011-04-18T06:02:00.000
| 2
| 1.2
| true
| 5,699,092
| 0
| 0
| 1
| 1
|
Hi, my app has had visitors and I'd like to analyze the log file. Can I run a log analyzer program on the log file that Google App Engine allows us to download? Are third-party programs such as webalizer and visitors compatible?
Thank you
|
How to Detect in Sub Process When Parent Process Has Died?
| 5,705,904
| 5
| 6
| 1,656
| 0
|
python,subprocess,orphan
|
You can use socketpair() to create a pair of unix domain sockets before creating the subprocess. Have the parent have one end open, and the child the other end open. When the parent exits, it's end of the socket will shut down. Then the child will know it exited because it can select()/poll() for read events from its socket and receive end of file at that time.
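A condensed sketch of that technique on a POSIX system:

import os
import select
import socket

parent_sock, child_sock = socket.socketpair()

if os.fork() == 0:
    # Child: close the inherited parent end and watch ours for EOF
    parent_sock.close()
    while True:
        readable, _, _ = select.select([child_sock], [], [], 1.0)
        if child_sock in readable and child_sock.recv(1) == b'':
            print('parent went away, cleaning up')
            os._exit(0)
        # ...do the child's real work between polls...
else:
    # Parent: close the child end and just keep ours open for life
    child_sock.close()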
| 0
| 1
| 0
| 0
|
2011-04-18T16:02:00.000
| 2
| 1.2
| true
| 5,705,659
| 0
| 0
| 0
| 1
|
In python, I have a parent process that spawns a handful of child processes. I've run into a situation where, due to an unhandled exception, the parent process was dying and the child processes were left orphaned. How do I get the child processes to recognize that they've lost their parent?
I tried some code that hooks the child process up to every available signal, and none of them fired. I could theoretically put a giant try/except around the parent process to ensure that it at least fires a SIGTERM to the children, but this is inelegant and not foolproof. How can I prevent orphaned processes?
|
Changing password, python, linux
| 5,706,671
| 2
| 1
| 4,855
| 0
|
python,linux,change-password
|
You can modify /etc/passwd (or /etc/shadow) with a Python script, which will need root permissions: sudo python modify.py /etc/passwd (where modify.py is your script that changes the password).
| 0
| 1
| 0
| 1
|
2011-04-18T17:31:00.000
| 3
| 0.132549
| false
| 5,706,597
| 0
| 0
| 0
| 1
|
How can I change the password of the Ubuntu root user from a Python script? Thanks.
|
How do I set remote server TimeZone via Fabric?
| 5,712,086
| 1
| 4
| 733
| 0
|
python,bash,fabric
|
This only works for the current shell. Close the shell, start a new one and type date; you will see that TZ has reset to the default timezone. The same applies to Fabric: if you captured the output you'd see that the timezone does get set correctly, but as the script ends, so does the shell, and hence the TZ variable is no longer available.
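One workaround is to put both commands in the same run() call so they share a single shell, e.g.:

run('export TZ=":Pacific/Auckland" && date')

Each Fabric run() gets a fresh shell, so state exported in one call is gone by the next.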
| 0
| 1
| 0
| 1
|
2011-04-19T05:39:00.000
| 3
| 0.066568
| false
| 5,712,062
| 0
| 0
| 0
| 1
|
I'm trying to change my remote server's timezone via Fabric like so:
run("export TZ=\":Pacific/Auckland\"")
run("date")
This doesn't seem to work. run("date") gives me:
Tue Apr 19 00:19:58 CDT 2011 which is not the timezone I just set.
If I just log into the server and run the same bash commands, everything's just as expected:
[lazo@lazoweb]$ date
Tue Apr 19 00:20:00 CDT 2011
[lazo@lazoweb]$ export TZ=":Pacific/Auckland"
[lazo@lazoweb]$ date
Tue Apr 19 17:20:20 NZST 2011
Can anyone shed some light on this? What am I missing?
|
Install script for C project with Python API
| 5,727,575
| 0
| 4
| 146
| 0
|
python,c,linux,unix,makefile
|
I would separate the project out into two parts. Your C part can use make as usual. Your Python module can use the Python setup tools (distutils/setuptools), which are capable of building extensions.
(You can also write install targets, so you don't have to copy things manually.)
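A minimal setup.py along those lines (the module and file names are illustrative):

from distutils.core import setup, Extension

setup(
    name='myproject',
    version='1.0',
    ext_modules=[
        # distutils compiles the C sources and installs the resulting .so
        # into the correct site-packages/dist-packages directory itself
        Extension('_mymodule', sources=['src/mymodule.c']),
    ],
)

With this, python setup.py install puts the shared library where the interpreter can find it, so a Makefile never needs to guess the path.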
| 0
| 1
| 0
| 1
|
2011-04-19T16:19:00.000
| 2
| 0
| false
| 5,719,506
| 0
| 0
| 0
| 1
|
I have a project which is mostly written in C, but it also has a Python API which uses Python extension modules written in C.
What is the best way to write installation/deployment scripts for a Linux/UNIX environment? Usually, I use the make utility to compile and install projects written in C. Most of the time, I just have the make utility compile all the source code into executables, and then copy the executables to /usr/local/bin.
However, my Python API requires the compilation/installation of shared library (.so) files for use with Python. This basically involves compiling the necessary C files, and then copying the shared libraries to some directory that is part of the Python sys.path, such as /usr/local/lib/pythonX.X/dist-packages/.
But how can the appropriate directory for Python extension modules be detected by the Make utility? Is there an environment variable or something that lists the directories in Python's sys.path?
|
Setting up a Python development environment on Windows
| 5,722,080
| 2
| 1
| 694
| 0
|
python,windows,development-environment
|
To make python executable on your command line, you need to add it to your PATH environment variable, which it sounds like you have done on the command line. It is quite simple to add directories to the PATH in Windows if you know where to look. Essentially, you need to get to the Environment Variables dialog box, which is slightly different for each version of Windows.
For Windows XP: Start -> Control Panel -> System -> Advanced -> Environment Variables
For Windows Vista, 7: Click the Start Orb, right-click Computer and select Properties -> Advanced -> Environment Variables
Then, in the lower of the two boxes, find Path and click Edit. Change it so that C:\Python27 (or whichever version of Python you have) is at one end of the list, separated from the other entries by a semicolon (e.g. C:\Python27;C:\Program Files ...)
Once you've done this, python will work at the command line whenever you open a command window.
Regarding your second issue, however, there isn't much you can do. You must either specify the complete path to your script or already be in the same directory as the script. That is, if the script is in C:\X\X\X you will either need to invoke it as C:\X\X\X\test.py or first cd C:\X\X\X.
| 0
| 1
| 0
| 1
|
2011-04-19T19:58:00.000
| 2
| 0.197375
| false
| 5,721,948
| 1
| 0
| 0
| 1
|
Yes, I've searched. So after spending about 4-5 hours struggling just to get Python files running, I recently stumbled upon the solution of running it through the environment variables, like this: cmd -> python -> Python starts, yay yay
Since it didn't work to do it through the command line and similar means, I had to do it manually through the Windows interface. Now that it's working, however, I cannot open .py files without typing out the full path, like this: python C:\X\X\X\test.py, which is obviously also starting to get annoying.
So now I'm trying to find out which variable I have to change (yet again) to be able to type just 'python test.py' and have it run. Sorry if I come off vague, but setting up a new programming language is always a major pain for me and it kills my mood.
Thanks for help, it'll be really appreciated.
|
Is There a Way to Autogenerate Dependencies Tree in Makefile?
| 5,723,765
| -3
| 2
| 2,038
| 0
|
python,c,makefile,compilation
|
One way to simplify the build is to skip dependencies entirely: just recompile everything every time. Only if the build has to be done many times, takes a "long time" (the definition of which depends on the use), or the dependencies change a lot does it make sense to do a detailed dependency build.
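A minimal sketch of that approach (target and file names hypothetical; recipe lines must start with a tab): $(wildcard) picks up every source automatically, and because the binary depends on all of them, any change triggers a full rebuild — no dependency list to maintain.

SRCS := $(wildcard *.cpp)

app: $(SRCS)
	$(CXX) -o $@ $(SRCS)

.PHONY: clean
clean:
	rm -f app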
| 0
| 1
| 0
| 0
|
2011-04-19T22:55:00.000
| 2
| -0.291313
| false
| 5,723,719
| 0
| 0
| 0
| 1
|
Abstract question:
I'm programming a mid-size C++/C program that's highly modularized. It has a common interface, which allows you to drop in a number of different sources with the same function declarations, but different implementations and get executables with different functionalities.
I'm working out a make system that can handle the building responsibilities. Currently it's able to grab specialized sources based on contents of a configuration file (for the make process) and dump them in a temporary folder with the proper generic names. Now, I only have to compile the project.
The problem is I have a variable number of sources and the headers that the sources depend on can change with the individual implementations. In other words a static makefile won't do the trick.
'1.'
Using the Makefile system alone, is there some way to autogenerate the list of object (.o) files that Main.cpp needs in order to compile?
I know I could do this by writing a little python script that my makefile calls, which subsequently makes a custom makefile by parsing the c-files examining their dependencies, starting with the base Main.cpp file.
But I didn't want to turn to this hackish solution if there was a more standardized solution or some way to do this within make.
'2.'
If the makefile system is incapable of this, should I go ahead with my custom python script, or is there a more elegant solution?
...............
To be perfectly clear, again I do NOT have a constant list of dependencies/sources/headers/objects and I do NOT want to force my end user to maintain such a list.
I need some way of autogenerate this tree, based on the contents of my C-files.
Apologies if this is a "dumb" question, I'm relatively new to the world of make -- and like most am self-taught.
Thank you!
Feel free to ask any questions.
FYI, my project has too many sources to just post them all, and I cannot do so for proprietary/research reasons anyway.
|
Automate WordPress Install from python
| 5,732,608
| 0
| 3
| 1,206
| 0
|
python,wordpress
|
Urllib is an option, but because your script is running on the local machine anyway, I would probably use os.system. That way you can execute the PHP script as if from a shell. You will have to look at the PHP file to see how to pass the parameters.
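A sketch of that route; install_helper.php here is a hypothetical wrapper you would write that loads wp-admin/includes/upgrade.php and hands its command-line arguments on to wp_install():

import os

# arguments: weblog title, admin user name, admin email (all placeholders)
os.system('php install_helper.php "My Blog" admin admin@example.com')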
| 0
| 1
| 0
| 1
|
2011-04-20T14:56:00.000
| 2
| 0
| false
| 5,732,384
| 0
| 0
| 1
| 2
|
I have a python program that sets up a wordpress site on my server. It downloads the zip and unzips it into a directory, sets up the database and user, configures the config file. Now I would like to call the the wp_install function in wp-admin/include/upgrade.php and pass it the parameters it needs $weblog_title, $user_name, $admin_email ...
My question is how can I call this function from python? Can I do a urllib.urlopen and if so how do I call the wp_install function with the right parameters?
|
Automate WordPress Install from python
| 5,732,698
| 1
| 3
| 1,206
| 0
|
python,wordpress
|
It looks like wp_install() gets called inside of /wp-admin/install.php during step 1, and after form data has been validated. If you submit ?step=1& ... (all of the other required form fields) it should result in calling wp_install. So yes, you should be able to use urllib(2) for this.
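A sketch of that request (Python 2 urllib/urllib2; the field names below are the usual WordPress install-form fields, but verify them — and the step number, which is 2 in some WordPress versions — against your install.php):

import urllib
import urllib2

params = urllib.urlencode({
    'weblog_title': 'My Blog',
    'user_name': 'admin',
    'admin_email': 'admin@example.com',
    'blog_public': '1',
})
response = urllib2.urlopen('http://localhost/wp-admin/install.php?step=1', params)
print response.read()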
| 0
| 1
| 0
| 1
|
2011-04-20T14:56:00.000
| 2
| 0.099668
| false
| 5,732,384
| 0
| 0
| 1
| 2
|
I have a python program that sets up a wordpress site on my server. It downloads the zip and unzips it into a directory, sets up the database and user, configures the config file. Now I would like to call the the wp_install function in wp-admin/include/upgrade.php and pass it the parameters it needs $weblog_title, $user_name, $admin_email ...
My question is how can I call this function from python? Can I do a urllib.urlopen and if so how do I call the wp_install function with the right parameters?
|
Best practice for bundling third party libraries for distribution in Python 3
| 5,747,306
| 1
| 8
| 2,662
| 0
|
python-3.x,dependencies,software-distribution
|
There are no best practices, but there are a few different tracks people follow. With regard to commercial product distribution there are the following:
Manage Your Own Package Server
With regard to your development process, it is typical to have your dev boxes update from a local package server. That allows you to "freeze" the dependency list (i.e. just stop getting upstream updates) so that everyone is on the same version. You can update at particular times and have the developers update as well, keeping everyone in lockstep.
For customer installs you usually write an install script. You can collect all the packages and install your libs, as well as the others, at the same time. There can be issues with trying to install a new Python, or even any standard library, because the customer may already depend on a different version. Usually you can install in a sandbox to separate your packages from the system's packages. This is more of a problem on Linux than Windows.
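A sketch of what the freeze can look like in practice — a pinned list consumed by both the dev boxes and the customer install script (package names, versions and server URL are all hypothetical):

# requirements.txt
somelib==1.2.3
otherlib==0.9.1

# install, pulling only from the local package server
pip install --index-url http://pkgserver.internal/simple -r requirements.txt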
Toolchain
The other option is to create a toolchain for each supported OS. A toolchain is all the dependencies (up to, but not including base OS libs like glibc). This toolchain gets packaged up and distributed for both developers AND customers. Best practice for a toolchain is:
change the executable to prevent confusion. (ie. python -> pkg_python)
don't install in .../bin directories to prevent accidental usage. (ie. on Linux you can install under .../libexec. /opt is also used although personally I detest it.)
install your libs in the correct location under lib/python/site-packages so you don't have to use PYTHONPATH.
Distribute the source .py files for the executables so the install script can relocate them appropriately.
The package format should be an OS native package (RedHat -> RPM, Debian -> DEB, Win -> MSI)
| 0
| 1
| 0
| 0
|
2011-04-21T15:24:00.000
| 3
| 1.2
| true
| 5,746,231
| 1
| 0
| 0
| 1
|
I'm developing an application using Python 3. What is the best practice to use third party libraries for development process and end-user distribution? Note that I'm working within these constraints:
Developers in the team should have the exact same version of the libraries.
An ideal solution would work on both Windows and Linux.
I would like to avoid making the user install software before using our own; that is, they shouldn't have to install product A and product B before using ours.
|
How do I change the default Python version in my Mac Snow Leopard?
| 5,752,844
| 1
| 3
| 15,524
| 0
|
python,macos
|
Try the following: defaults write com.apple.versioner.python Version 3.2 in a terminal. Assuming you have 3.2 installed of course.
EDIT: As Neil Deily points out in his comment this only works with Python distributions shipped by Apple.
| 0
| 1
| 0
| 0
|
2011-04-22T06:12:00.000
| 5
| 0.039979
| false
| 5,752,753
| 1
| 0
| 0
| 2
|
How do I change the default Python version used in my Mac Snow Leopard? I'm trying to switch from v2.5 to v3.0
|
How do I change the default Python version in my Mac Snow Leopard?
| 5,752,849
| 0
| 3
| 15,524
| 0
|
python,macos
|
I would first install Xcode on my machine (it comes on the installation disc that came with your computer). Then run Software Update to bring it up to date (at least to the most-current free version).
Then, download the Python 3.x source code and extract it. Do "./configure", "make" and "sudo make install" in that directory. These will install the new Python in /usr/local/bin (and other nearby places).
If all goes well, /usr/local/bin/python will be a Python 3 interpreter you can use. I would hesitate to overwrite the installed version of Python, since that might make trouble for python scripts shipped with the operating system. I never install anything in /usr; I let Software Update take care of that. For all the rest of my software needs, the "./configure ... make ... sudo make install" technique works very well on Snow Leopard once Xcode is installed.
| 0
| 1
| 0
| 0
|
2011-04-22T06:12:00.000
| 5
| 0
| false
| 5,752,753
| 1
| 0
| 0
| 2
|
How do I change the default Python version used in my Mac Snow Leopard? I'm trying to switch from v2.5 to v3.0
|
Capture google app engine logging output
| 5,758,949
| 5
| 3
| 1,164
| 0
|
python,google-app-engine,logging
|
The default logger sends logging output to stderr. Use your shell's method of redirecting stderr to a file (in tcsh, (dev_appserver.py > /dev/tty) >& your_logfile.txt, your shell may vary.)
You can also use the logging module in python to change the logger to send directly to a file if you detect it's running locally (os.environ['SERVER_SOFTWARE'].startswith('Dev'))
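A minimal sketch of the latter (the log file path is hypothetical; file writes only work on the dev server, not in production):

import logging
import os

if os.environ.get('SERVER_SOFTWARE', '').startswith('Dev'):
    # only on dev_appserver: mirror log records into a file you can tail/grep/less
    logging.getLogger().addHandler(logging.FileHandler('/tmp/gae-dev.log'))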
| 0
| 1
| 0
| 0
|
2011-04-22T17:07:00.000
| 2
| 1.2
| true
| 5,757,945
| 0
| 0
| 1
| 1
|
How can one view the Google App Engine logs outside the Admin console?
I'm developing, so using dev_appserver.py/the Admin Console and would like to see the logs as the records are emitted.
I'd like to monitor the logging output in a console with standard Unix tools e.g. less/grep/etc, but there doesn't seem to be an option to direct the logging from the dev_appserver.py command, and I can't open a new file in GAE (e.g. a FileHandler), so file handlers won't work, and I think using a socket/udp handler would be a bit of overkill (if it's even possible).
I'm hopeful there are other options to view the log.
Thanks for reading.
|
Recommended tools for web development with python under linux?
| 5,758,465
| 0
| 0
| 1,598
| 0
|
python,linux
|
You're going to have a lot more problems than your choice of framework if this is your first project on a UNIX. I'd recommend you find a clued in person at your workplace who knows the platform fairly well and use her as a teacher rather than rely completely on the web.
| 0
| 1
| 0
| 0
|
2011-04-22T17:53:00.000
| 5
| 0
| false
| 5,758,354
| 1
| 0
| 0
| 1
|
I'm planning to work on a project using Tornado / nginx / MySQL / jQuery and will be using Linux (I'm new to Linux too; I barely know what vim/emacs are). Which tools for web development with this stack would you recommend?
|
using emacs CEDET completion for python
| 5,770,424
| 8
| 10
| 2,830
| 0
|
python,emacs,code-completion,cedet
|
CEDET support for each language is slightly different. In the case of Python, the 1.0 release of CEDET hadn't been configured to convert a Python import into a file name. In addition, 'self' is similar to 'this' in C++, and needs to be added by the completion logic since it isn't declared. These two features were added to the bzr repository in January of this year. I am not a Python programmer, but I recall reports that this fixed a range of the most basic smart-completion features, so that symbols from imported libraries work. There was also new code in bzr for Python system paths.
Thus, I recommend downloading CEDET from bzr to get these features to see if it now does what you would expect for smart completion.
| 0
| 1
| 0
| 1
|
2011-04-23T20:46:00.000
| 1
| 1
| false
| 5,766,832
| 0
| 0
| 0
| 1
|
In the default installation of cedet-1.0, completion can only track global-scope symbols in the current file. This does not differ much from the built-in completion functions (dabbrev-expand or hippie-expand).
It can complete symbols neither from imported modules nor from class properties.
Not to mention that it cannot handle 'self'.
Is it possible to tweak Semantic to do these things?
P.S.
The ECB code browser successfully sees all imports/base classes and such.
It is the symbol completion that works incorrectly, or is not properly set up.
|
Read/Write files in Python
| 5,780,048
| 0
| 11
| 4,572
| 0
|
python,file,file-io,permissions
|
Make sure you have permissions to change the file. Who is the owner of the file? Is it the one who runs the Python script? All these have to be taken into account.
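A quick way to check this from the script itself; note that on Windows, os.access with W_OK reflects the read-only attribute:

import os

if not os.access(projectPath, os.W_OK):
    print 'no write permission for', projectPath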
| 0
| 1
| 0
| 0
|
2011-04-25T15:30:00.000
| 2
| 0
| false
| 5,779,989
| 0
| 0
| 0
| 1
|
I need to make a file readable and writable in python. Currently the file is read-only. I am running on a Windows machine. I run the following code:
os.chmod(projectPath, stat.S_IWRITE | stat.S_IREAD)
on a file that needs to be read/write. But when I try to execute the file that needs to be read write, I get the following:
ISDEV : fatal error -2200: Could not overwrite file C:\WINDOWS\Temp\STixInstaller\STixInstallShield.ism
So obviously, it is not making the file read/write. I then check the file permissions and it is still read-only.
Any ideas why this fails or if there is an easier way to do this I am missing?
|
Terminal display broken after killing python curses program
| 30,829,405
| 1
| 3
| 4,010
| 0
|
python,curses
|
I think you should use curses.endwin(); it restores the terminal window to its previous state.
In fact, if you don't call it before the program exits, the terminal will keep behaving as if it were still inside the curses window.
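The easiest way to guarantee this is curses.wrapper, which sets up the screen and calls endwin for you even if your code raises an exception — a minimal sketch:

import curses

def main(stdscr):
    stdscr.addstr(0, 0, 'hello from curses')
    stdscr.getch()

# wrapper initializes curses and restores the terminal (endwin) on exit or error
curses.wrapper(main)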
| 0
| 1
| 0
| 0
|
2011-04-26T03:45:00.000
| 5
| 0.039979
| false
| 5,785,669
| 0
| 0
| 0
| 2
|
I wrote a small program in python and outputted some screen display using the curses library. For my simple output this seems to work. I run my python program from the command line.
My problem is that if I kill the python program the terminal doesn't properly display. For example:
'ls -al' displays properly before I run my python curses program
'ls -al' does not display properly after I kill the python curses program.
What can I do to make my terminal display output properly after I kill my python curses program?
|
Terminal display broken after killing python curses program
| 5,785,673
| 7
| 3
| 4,010
| 0
|
python,curses
|
Usually the reset command will reset your terminal settings to default values.
| 0
| 1
| 0
| 0
|
2011-04-26T03:45:00.000
| 5
| 1.2
| true
| 5,785,669
| 0
| 0
| 0
| 2
|
I wrote a small program in python and outputted some screen display using the curses library. For my simple output this seems to work. I run my python program from the command line.
My problem is that if I kill the python program the terminal doesn't properly display. For example:
'ls -al' displays properly before I run my python curses program
'ls -al' does not display properly after I kill the python curses program.
What can I do to make my terminal display output properly after I kill my python curses program?
|
Execute a file with arguments in Python shell
| 33,261,131
| 1
| 74
| 186,617
| 0
|
python,shell
|
Besides subprocess.call, you can also use subprocess.Popen, like the following (note that the arguments must be strings):
import subprocess
subprocess.Popen(['./script', arg1, arg2])
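If you specifically want the execfile route from the question (Python 2 only; execfile was removed in Python 3), one trick is to set sys.argv yourself before the call, since that is what the script will read:

import sys

sys.argv = ['abc.py', 'arg1', 'arg2']  # what abc.py will see as its arguments
execfile('abc.py')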
| 0
| 1
| 0
| 0
|
2011-04-26T10:18:00.000
| 13
| 0.015383
| false
| 5,788,891
| 1
| 0
| 0
| 1
|
I would like to run a command in Python Shell to execute a file with an argument.
For example: execfile("abc.py") works, but how do I add 2 arguments?
|
desktop development language compiled binary or scripting language (windows)?
| 5,789,728
| 0
| 2
| 525
| 0
|
python,windows,delphi,lua,desktop-application
|
Does anyone use anything else for shrink wrap apps on windows? e.g. Java, python etc.
Yes. I assume you're not really asking about Java, since that is so wide-spread. I can count quite a few Java applications that I use, and I don't operate in the "Enterprise" environment.
There are tools that allow you to ship Python code without shipping the actual .py files and without needing the user to have Python installed, so there are solutions for that as well. Since such tools exist, I assume people do ship Python applications.
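For example, py2exe on Windows — a minimal setup script sketch (the script name is hypothetical); run it with python setup.py py2exe and the bundled executable lands in dist\:

from distutils.core import setup
import py2exe  # importing py2exe registers the py2exe command with distutils

setup(console=['myapp.py'])  # produces dist\myapp.exe with Python bundled in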
If so how do you distribute your app and does using a scripting language cause any problems with the installation?
What scripting language?
| 0
| 1
| 0
| 0
|
2011-04-26T11:12:00.000
| 2
| 0
| false
| 5,789,455
| 0
| 0
| 0
| 1
|
Does anyone use a scripting language only solution to produce a binary (.exe) to produce a commercial desktop application for windows or mac? e.g. Java, python etc. If so how do you distribute your app and does using a scripting language cause any problems with the installation?
I'm asking about users that can download an application and install it; they don't know about setting path variables or changing their JAVA_HOME. Assume consumer PCs running Windows (XP/Vista/7), not power users. (Alternatively, a Mac-type solution would be interesting to hear about too.)
|
Python IDLE not accepting quotes
| 5,797,614
| 3
| 1
| 4,436
| 0
|
python,keyboard,python-idle
|
IDLE uses Tkinter from the Python standard library to supply GUI functionality. Tkinter is an interface to the multi-platform Tk graphical interface, part of Tcl/Tk. Unfortunately, Aqua Tk on OS X does not currently support all of the standard text processing features on OS X.
This particular problem appears to be a variation of a known bug in the Cocoa Aqua Tk. On a normal Apple U.S. keyboard, you use option U + <vowel> to form a diaeresis (for example, ä). It looks like on the US International keyboard, the ' key is used instead of the missing option u. There is a patch in the most recent ActiveState Tk 8.5 versions (including the one you have installed) that prevents Tk from crashing in this case. You can see how it used to "work" by launching the Apple-supplied IDLE 2.6 included with Mac OS X 10.6 (/usr/bin/idle2.6). Try typing ' (with the International keyboard setting) or option U (with the US keyboard setting) there! The patched version simply ignores this case, rather than crashing.
Your best bet is to stick to a US or US Extended keyboard input method.
| 0
| 1
| 0
| 0
|
2011-04-26T22:07:00.000
| 2
| 1.2
| true
| 5,797,046
| 0
| 0
| 0
| 1
|
Disclaimer: I'm a noob.
I've installed Python 3.2 (r32:88452) and ActiveTcl 8.5.9.2 (build 294317) on my OSX 10.6. Both installed without any errors, and I've already managed to run a .PY through Terminal. It runs okay.
I run IDLE and it doesn't show any errors. But whenever I press the quote key ('), nothing happens. The same with [shift] pressed ("). The characters just don't register. The same happens with the 'backtick'/tilde key, [shift] pressed or otherwise.
I'm using a U.S. International keyboard layout.
I've tried opening the Keyboard Viewer. Both keys, single quote and 'backtick', are orange-colored. (I had never noticed that until now.) In any other application, whenever they're clicked, the corresponding character is input -- but nothing happens from within the Python IDLE.
Any ideas on what might be happening?
Additional information: Python interpreter in interactive mode (running it from Terminal) registers both keys just fine. Also, if I try and change the keyboard layout to plain "U.S.", even IDLE registers the keys; but this tastes more like a workaround and I'd like to hear your opinions.
|
wingide feature
| 5,817,554
| 1
| 1
| 62
| 0
|
python,eclipse,wing-ide
|
Found the solution: ctrl+shift+o.
| 0
| 1
| 0
| 0
|
2011-04-28T11:00:00.000
| 1
| 0.197375
| false
| 5,817,519
| 0
| 0
| 0
| 1
|
Does wingide have an eclipse-like open resource feature (ctrl+shift+r)?
|
Python data passing
| 5,832,407
| 0
| 0
| 568
| 0
|
php,python
|
If you want things to be synchronous, use a named (Unix domain) socket — an amazing feature on Unix systems.
If you want things to be asynchronous, use pickle (there is a PHP implementation of it too).
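A minimal sketch of the Python daemon side (the socket path is hypothetical); the PHP side can connect with stream_socket_client('unix:///tmp/pydaemon.sock', ...):

import os
import socket

ADDR = '/tmp/pydaemon.sock'
if os.path.exists(ADDR):
    os.unlink(ADDR)  # remove a stale socket from a previous run

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(ADDR)
server.listen(1)

conn, _ = server.accept()   # blocks until the PHP script connects
data = conn.recv(4096)      # whatever PHP wrote to the socket
conn.close()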
| 0
| 1
| 0
| 1
|
2011-04-29T12:58:00.000
| 2
| 0
| false
| 5,832,346
| 0
| 0
| 0
| 1
|
I need to exchange data between a Python daemon (cluster nodes send data to this daemon) and a PHP script (Apache) which is accessed by web browsers. What do you recommend as a technology to establish a connection between them? Both the Python daemon and Apache/PHP are on the same machine.
Thank you.
|
IO within the Program Files Directory
| 5,838,035
| 0
| 3
| 1,184
| 0
|
python,windows-7,io,uac
|
You need to execute this program as an administrator, or as an account you have granted permissions to, if you want to write to the Program Files folder. If you run the executable as a normal user, you won't have access.
| 0
| 1
| 0
| 0
|
2011-04-29T22:28:00.000
| 4
| 0
| false
| 5,838,025
| 0
| 0
| 0
| 3
|
I made a program that both gathers data from a .txt file by reading it, and writes data to a different .txt file. However, there is a problem. When I run the program in a normal directory it runs perfectly fine. A problem arises when I place it in the C:\Program Files directory. When I run it I get IOERROR: [Errno 13] Permission denied: 'my subdirectory'. I believe this is probably due to this directory having some extra protections when it comes to editing files within it.
This is in Windows 7, if it wasn't already apparent.
Also if it makes a difference the program was written in Python then converted to an .exe with py2exe.
|
IO within the Program Files Directory
| 5,838,045
| 2
| 3
| 1,184
| 0
|
python,windows-7,io,uac
|
The most likely cause of this is that the "Program Files" directories in Windows 7 require administrative privileges to create sub directories.
You could run python as an administrator (hold shift, right click python.exe, run as administrator), or write to a directory that is not Program Files.
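If the data is per-user, the usual fix is to write somewhere the user owns instead — a sketch using %APPDATA% (the application folder name is hypothetical):

import os

# per-user writable location; falls back to the home directory if APPDATA is unset
appdata = os.environ.get('APPDATA', os.path.expanduser('~'))
target = os.path.join(appdata, 'MyApp')
if not os.path.isdir(target):
    os.makedirs(target)

with open(os.path.join(target, 'data.txt'), 'w') as f:
    f.write('hello')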
| 0
| 1
| 0
| 0
|
2011-04-29T22:28:00.000
| 4
| 0.099668
| false
| 5,838,025
| 0
| 0
| 0
| 3
|
I made a program that both gathers data from a .txt file by reading it, and writes data to a different .txt file. However, there is a problem. When I run the program in a normal directory it runs perfectly fine. A problem arises when I place it in the C:\Program Files directory. When I run it I get IOERROR: [Errno 13] Permission denied: 'my subdirectory'. I believe this is probably due to this directory having some extra protections when it comes to editing files within it.
This is in Windows 7, if it wasn't already apparent.
Also if it makes a difference the program was written in Python then converted to an .exe with py2exe.
|