Dataset columns (each record below follows this order, separated by | ): Title (string) | A_Id (int64) | Users Score (int64) | Q_Score (int64) | ViewCount (int64) | Database and SQL (int64 0/1 flag) | Tags (string) | Answer (string) | GUI and Desktop Applications (int64 0/1 flag) | System Administration and DevOps (int64 0/1 flag) | Networking and APIs (int64 0/1 flag) | Other (int64 0/1 flag) | CreationDate (string) | AnswerCount (int64) | Score (float64) | is_accepted (bool) | Q_Id (int64) | Python Basics and Environment (int64 0/1 flag) | Data Science and Machine Learning (int64 0/1 flag) | Web Development (int64 0/1 flag) | Available Count (int64) | Question (string)
Preserve POST variables during login redirect in GAE?
| 6,084,633
| 1
| 3
| 375
| 0
|
python,google-app-engine,http-post
|
The general problem with capturing a POST and turning it into a GET is first that the query string on a GET has a browser-dependent limited size, and second that a POST may be form/multi-part (what to do with the uploaded file becomes an issue).
An approach that might work for you is to accept the POST and save the data, then redirect to a page that requires login, passing the Key(s) (or enough information to reconstruct them) in the query string. The handler for that URL then assumes successful login, and fixes up the saved data (say, to associate it with the logged-in user) as appropriate.
People who decide not to login will leave orphaned records, which you can clean up via a cron job.
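A minimal sketch of that flow with the old webapp framework (the entity kind, URL paths, and field names here are hypothetical, chosen just for illustration):

```python
from google.appengine.api import users
from google.appengine.ext import db, webapp

class PendingPost(db.Model):  # hypothetical holding entity
    payload = db.TextProperty()

class SubmitHandler(webapp.RequestHandler):
    def post(self):
        # Save the raw POST body before bouncing through login.
        pending = PendingPost(payload=self.request.body)
        pending.put()
        # Resume at a URL that carries only the entity's key.
        resume_url = '/resume?key=%s' % pending.key()
        self.redirect(users.create_login_url(resume_url))

class ResumeHandler(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()  # login is required for this URL
        pending = PendingPost.get(db.Key(self.request.get('key')))
        # ... associate the saved data with `user` and process it ...
```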
| 0
| 1
| 0
| 0
|
2011-05-21T20:01:00.000
| 2
| 0.099668
| false
| 6,084,123
| 0
| 0
| 1
| 1
|
In a form, I submit data to a Python webapp handler (all Google App Engine based) using an HTTP POST request. In this script, I first check if the user is logged in and, if not, I use users.create_login_url(...) to redirect the user to the login page first.
How can I ensure that after login the user is not just forwarded to my Python script again, but that the POST variables are also preserved? The only way I found was turning all POST variables into URL parameters and adding them to the URL.
Is that possible at all?
|
How to set a file's ctime with Python?
| 6,085,044
| 9
| 7
| 4,201
| 0
|
python,unix,stat,ctime
|
It is relatively trivial to set a file's ctime to the current time. Just modify its mtime, flip a permission bit, or even make a hardlink to it. It is, AFAIK, impossible to set a file's ctime to an arbitrary value using the system call API in any direct sort of way.
If you had root access you could set the system time, do something to the file to set the ctime to the current time, then set the system time back. Also you could bit-twiddle the inode data structure on disk. But both of these are really bad ideas for a whole host of reasons that I don't expect I should have to explain in detail.
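For completeness, here is the "flip a permission bit" trick for bumping ctime to the current time (a sketch; assumes example.txt already exists):

```python
import os
import stat

path = 'example.txt'  # assumed to exist
before = os.stat(path)
print('ctime before: %s' % before.st_ctime)

mode = stat.S_IMODE(before.st_mode)  # permission bits only
os.chmod(path, mode ^ stat.S_IXUSR)  # any inode change bumps ctime
os.chmod(path, mode)                 # restore the original permissions

print('ctime after:  %s' % os.stat(path).st_ctime)
```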
| 0
| 1
| 0
| 0
|
2011-05-21T23:06:00.000
| 1
| 1.2
| true
| 6,084,985
| 1
| 0
| 0
| 1
|
How can I set a Unix file's ctime?
(I'd much prefer an answer in terms of Python. If there's no way to do it with standard Python, then I suppose C is OK too.)
(Note: I know that one can use os.utime to set a file's atime and mtime. I am interested in setting ctime.)
(Note2: I hope that there's an answer that works for any POSIXoid Unix, but if not, I'm interested in Darwin and Ubuntu.)
|
Keeping the downloaded torrent in memory rather than file libtorrent
| 6,090,293
| 4
| 3
| 1,739
| 0
|
c++,python,bittorrent
|
If you're on Linux, you could torrent into a tmpfs mount; this will avoid writing to disk. That said, this obviously means you're storing large files in RAM; make sure you have enough memory to deal with this.
Note also that most Linux distributions have a tmpfs mount at /dev/shm, so you could simply point libtorrent to a file there.
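A rough sketch with the Python binding (the .torrent filename and directory are placeholders; make sure the tmpfs mount has room for the whole payload):

```python
import time
import libtorrent as lt

ses = lt.session()
ses.listen_on(6881, 6891)

info = lt.torrent_info('example.torrent')  # placeholder torrent file
# Pointing save_path at a tmpfs mount keeps the payload in RAM.
handle = ses.add_torrent({'ti': info, 'save_path': '/dev/shm/torrents'})

while not handle.is_seed():
    time.sleep(1)  # poll handle.status() here in a real program
```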
| 0
| 1
| 0
| 1
|
2011-05-22T18:11:00.000
| 4
| 0.197375
| false
| 6,089,806
| 0
| 0
| 0
| 1
|
Working with Rasterbar libtorrent, I don't want the downloaded data to sit on my hard drive; I'd rather have a pipe or variable or something soft, so I can redirect it somewhere else (MySQL, or even the trash if it's not what I want). Is there any way of doing this, preferably in the Python binding, or if not, in C++ using libtorrent?
EDIT: I'd like to point out that this is a libtorrent question, not a Linux or Python file-handling question. I need to tell libtorrent to save the file to my Python pipe or variable instead of saving it traditionally to a normal file.
|
How to use backends in google app engine without wasting cpu resources?
| 6,098,645
| 1
| 1
| 1,080
| 0
|
python,google-app-engine,google-cloud-datastore,backend
|
The general advice for optimizing CPU usage is to minimize RPCs, understand how to use the datastore efficiently and use appstats to find your bottlenecks. For specific optimization advice, we would need to see some code.
While backends can be configured to handle public requests, they aren't intended to replace normal instances. Backends are designed for resource-intensive offline processing. Normal instances are created and destroyed automatically in response to request volume; backends have to be configured and instantiated explicitly by an administrator, thus they are not well-suited to handling traffic spikes.
They're also more expensive: keeping a backend instance online for 24 hours will cost you $3.84, whether the instance is handling requests or not.
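For the appstats suggestion above, the standard hookup is a couple of lines in appengine_config.py (a sketch of the documented setup; the recorded data shows up in the appstats console once its handler is enabled):

```python
# appengine_config.py
def webapp_add_wsgi_middleware(app):
    # Wrap every request so appstats records RPC counts and timings.
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)
```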
| 0
| 1
| 0
| 0
|
2011-05-23T11:55:00.000
| 1
| 1.2
| true
| 6,096,806
| 0
| 0
| 1
| 1
|
While processing data in the datastore using backends, App Engine is using my CPU resources completely.
How do I process my data without wasting CPU resources?
Can I have the entire app on a backend without wasting CPU resources?
Am I missing something?
If the question is too vague, ask me to clarify. Thanks!
|
Does reportlab's renderPM work on Google appengine?
| 6,166,841
| 0
| 2
| 234
| 0
|
python,image,google-app-engine,reportlab
|
Just to close the question - as indicated by Wooble, reportlab itself works fine on Appengine (being pure python) but the RenderPM library doesn't.
| 0
| 1
| 0
| 0
|
2011-05-23T15:45:00.000
| 1
| 1.2
| true
| 6,099,592
| 0
| 0
| 1
| 1
|
I wanted to use ReportLab's RenderPM to generate images on Google App-Engine but it looks like it depends on c libraries. Does anyone know if it's possible to get it working?
Thanks,
Richard
|
Multiple installs of Python on MacOSX for Eclipse
| 6,103,695
| 0
| 2
| 1,353
| 0
|
python,macos,pydev,virtualenv,python-2.x
|
From the README text file of Python:

Installing multiple versions

On Unix and Mac systems if you intend to install multiple versions of Python using the same installation prefix (--prefix argument to the configure script) you must take care that your primary python executable is not overwritten by the installation of a different version. All files and directories installed using "make altinstall" contain the major and minor version and can thus live side-by-side. "make install" also creates ${prefix}/bin/python which refers to ${prefix}/bin/pythonX.Y. If you intend to install multiple versions using the same prefix you must decide which version (if any) is your "primary" version. Install that version using "make install". Install all other versions using "make altinstall".

For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6 being the primary version, you would execute "make install" in your 2.6 build directory and "make altinstall" in the others.

Virtualenv is an option, but you could use the above-mentioned approach instead, which seems much simpler.
| 0
| 1
| 0
| 0
|
2011-05-23T22:07:00.000
| 4
| 0
| false
| 6,103,592
| 1
| 0
| 0
| 1
|
I want to have multiple installs of Python: 2.1, 2.4, 2.7, 3.x
My IDE is Eclipse (Helios)/Pydev on MacOSX, which works great. I have a couple of Python codebases that are/will be running on different versions of Python. Also, I like Eclipse PyDev's crosslinking from source-code to documentation.
The standard recommendation seems to be: use virtualenv, and keep the installs totally separate from the builtin MacPython (2.6.1). Eclipse should never be pointing to the MacPython install. (Should PYTHONPATH even be set in such an environment?)
Before I get on with virtualenv, is there anything else I should know about this?
Am I right that virtualenv doesn't impose any overhead, and that I shouldn't be worried by occasional comments about breakage to nose, coverage, etc.?
I'm interested in hearing from Eclipse or Pydev users on MacOS.
Also if anyone has other tips on migrating a Python codebase from 2.1 -> 2.7.
|
Specific IDE, for specific things
| 6,105,339
| 5
| 3
| 343
| 0
|
java,python,ide,editor
|
Notepad++ has an FTP plugin that works very nicely and uses very few resources. It will syntax-highlight most languages and has some compiler support.
| 0
| 1
| 0
| 0
|
2011-05-24T03:00:00.000
| 6
| 0.16514
| false
| 6,105,273
| 1
| 0
| 0
| 2
|
I need an IDE that can do the following:
Run on an oldish laptop (2GB RAM, 1.9 GHz Intel Celeron M)
Run well on an oldish laptop (with a browser open)
Be able to run on windows
Be able to run smoothly on windows
Is able to do Java (or, if you really can't find anything, C# will be okay)
An extension or something for Python would be nice
Django support would be awesome
It would be great to have SFTP/FTP editing support that actually works
I don't care about lots of extensibility or commercial support or a kitchen sink or any of that, I just need it to be stable and all of the above.
And, Vim or EMACS aren't answers since they (in my mind, without excessive configuration) don't qualify as IDEs.
And, if this doesn't belong here, please tell me.
EDIT: Code completion is also important.
|
Specific IDE, for specific things
| 6,105,523
| 0
| 3
| 343
| 0
|
java,python,ide,editor
|
Use the IDE and tools that makes you the most productive.
Use the IDE and tools that you want to use.
Use the IDE and tools that you will enjoy using.
If your computer is preventing you from doing that, then get a new computer.
Long gone are the days when labor was cheap, and hardware expensive.
These days hardware is cheap, labor is expensive.
Your time is worth money. Don't waste it.
You are more important than that.
Whether it's from not being able to use the best tool for the job, or losing time on slow compiles, it all adds up.
In a completely contrived example:
If you're losing 1 hour of productivity a day, 5 days a week, for 48 weeks out of the year, that adds up to 240 hours. Even if you were working minimum wage, at say $10/hour, that's $2400 in lost productivity.
Take whatever you are losing in productivity, and re-invest it into solutions to those problems.
| 0
| 1
| 0
| 0
|
2011-05-24T03:00:00.000
| 6
| 0
| false
| 6,105,273
| 1
| 0
| 0
| 2
|
I need an IDE that can do the following:
Run on an oldish laptop (2GB RAM, 1.9 GHz Intel Celeron M)
Run well on an oldish laptop (with a browser open)
Be able to run on windows
Be able to run smoothly on windows
Is able to do Java (or, if you really can't find anything, C# will be okay)
An extension or something for Python would be nice
Django support would be awesome
It would be great to have SFTP/FTP editing support that actually works
I don't care about lots of extensibility or commercial support or a kitchen sink or any of that, I just need it to be stable and all of the above.
And, Vim or EMACS aren't answers since they (in my mind, without excessive configuration) don't qualify as IDEs.
And, if this doesn't belong here, please tell me.
EDIT: Code completion is also important.
|
Multi-domain authentication options for google app engine
| 6,110,157
| 0
| 1
| 796
| 0
|
python,google-app-engine,authentication
|
Authentication does not imply authorisation. All that the federated ID system does for your application is give you a username/userid that you can trust. So you can set up your user accounts tied to this information and rely on the fact that whenever you see that userid you are talking to the same user. Or, in the case of domain-wide applications, whenever you see someone with that domain in their userid.
It is completely up to your application to decide if that userid has any meaning on your application. If I login to your app now with my google account, it should say "Oh I haven't seen you before, would you like to join?" ... it should (depending on your app) not just assume I'm authorised to use your application, just because I told you my name.
I'm not sure where you got the "domain login model" from? The only two choices are Google Account and Open/FederatedID neither of those attempt to restrict user access.
In your final example, users spanning multiple Google accounts will see different results depending on whether they have enabled multiple sign-in or not. Most users will be presented with a screen to choose which Google account they mean before continuing.
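To make the authorisation point concrete, the "previously authorized domains" check the asker wants can live entirely in application code; a minimal sketch (the whitelist is hypothetical):

```python
from google.appengine.api import users

ALLOWED_DOMAINS = set(['domainX.com', 'domainY.com'])  # hypothetical whitelist

def user_is_authorized():
    user = users.get_current_user()
    if user is None:
        return False  # not even authenticated
    # Authentication gave us a trustworthy email; authorization is ours.
    domain = user.email().rsplit('@', 1)[-1].lower()
    return domain in ALLOWED_DOMAINS
```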
| 0
| 1
| 0
| 0
|
2011-05-24T07:53:00.000
| 1
| 0
| false
| 6,107,318
| 0
| 0
| 1
| 1
|
I am looking for some suggestions to implement authentication (and authorization) in our GAE app. Assuming that our app is called someapp, our requirement is as follows:
someapp is primarily for google apps users of the domain its installed for but can also authenticate users from other google apps domains.
For example, let's say Google Apps is configured on domainX.com and domainY.com. Additionally, the admin for domainX.com has added someapp to their domain from the Apps Marketplace. The admin for domainX.com invites userA@domainX.com and userB@domainY.com to log on to the application. Both Google Apps domain users should be able to use their SSO (single sign-on) functionality.
As far as we know, current authentication options in the app engine allow either domain login, which allows only the users of one domain to log in to the app or federated/openid login which would allow the users of any domain to log in to the app. There is no in-between option which would allow only the users of previously authorized domains to log on to the app. Does that mean our only option is to leave aside google apps authentication and implement our own custom authentication?
Also, in our sample scenario above, what if domainX.com and domainY.com have both added someapp? If userA@domainX.com navigates to someapp.appspot.com, which installation of the app will be used, the one installed on domainX.com or the one installed on domainY.com?
|
Google App Engine Locking
| 6,112,067
| 2
| 5
| 467
| 0
|
python,google-app-engine
|
Instantiating an email object certainly does not count against your "recipients emailed" quota. Like other App Engine services, you consume quota when you trigger an RPC, i.e. call send().
If you intended to email 1500 recipients and App Engine says you emailed 45,000, your code has a bug.
| 0
| 1
| 0
| 0
|
2011-05-24T11:21:00.000
| 2
| 0.197375
| false
| 6,109,602
| 0
| 0
| 1
| 2
|
Just wondering if any of you have come across this. I'm playing around with the Python mail API on Google App Engine and I created an app that accepts a message body and address via POST, creates an entity in the datastore, then a cron job runs every minute, grabs 200 entities, sends out the emails, and deletes the entities.
I ran an experiment with 1500 emails, had 1500 entities created in the datastore and 1500 emails were sent out. I then look at my stats and see that approx. 45,000 recipients were used from the quota, how is that possible?
So my question is at which point does the "Recipients Emailed" quota actually count? At the point where I create a mail object or when I actually send() it? I was hoping for the second, but the quotas seem to show something different. I do pass the mail object around between crons and tasks, etc. Anybody has any info on this?
Thanks.
Update: Turns out I actually was sending out 45k emails with a queue of only 1500. It seems that a new cron job starts before the previous one has finished and works on the same entities. So the question changes to "how do I lock the entities and make sure nobody selects them before sending the emails"?
Thanks again!
|
Google App Engine Locking
| 6,141,535
| 3
| 5
| 467
| 0
|
python,google-app-engine
|
Use tasks to send the email.
Create a task that takes a key as an argument, retrieves the stored entity for that key, then sends the email.
When your handler receives the body and address, store that as you do now but then enqueue a task to do the send and pass the key of your datastore object to the task so it knows which object to send an email for.
You may find that the body and address are small enough that you can simply pass them as arguments to a task and have the task send the email without having to store anything directly in the datastore.
This also has the advantage that if you want to impose a limit on the number of emails sent within a given amount of time (quota) you can set up a task queue with that rate.
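A sketch of that handler/task split using db and taskqueue (the model, task URL, and sender address are placeholders):

```python
from google.appengine.api import mail, taskqueue
from google.appengine.ext import db

class QueuedMessage(db.Model):  # hypothetical holding entity
    address = db.StringProperty()
    body = db.TextProperty()

def enqueue_message(address, body):
    # Called from the POST handler: store once, enqueue a task by key.
    msg = QueuedMessage(address=address, body=body)
    msg.put()
    taskqueue.add(url='/tasks/send', params={'key': str(msg.key())})

def handle_send_task(key_str):
    # Called from the /tasks/send handler: load, send once, delete.
    msg = QueuedMessage.get(db.Key(key_str))
    if msg is not None:
        mail.send_mail(sender='app@example.com',  # placeholder sender
                       to=msg.address, subject='Your message', body=msg.body)
        msg.delete()
```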
| 0
| 1
| 0
| 0
|
2011-05-24T11:21:00.000
| 2
| 0.291313
| false
| 6,109,602
| 0
| 0
| 1
| 2
|
Just wondering if any of you have come across this. I'm playing around with the Python mail API on Google App Engine and I created an app that accepts a message body and address via POST, creates an entity in the datastore, then a cron job runs every minute, grabs 200 entities, sends out the emails, and deletes the entities.
I ran an experiment with 1500 emails, had 1500 entities created in the datastore and 1500 emails were sent out. I then look at my stats and see that approx. 45,000 recipients were used from the quota, how is that possible?
So my question is at which point does the "Recipients Emailed" quota actually count? At the point where I create a mail object or when I actually send() it? I was hoping for the second, but the quotas seem to show something different. I do pass the mail object around between crons and tasks, etc. Anybody has any info on this?
Thanks.
Update: Turns out I actually was sending out 45k emails with a queue of only 1500. It seems that a new cron job starts before the previous one has finished and works on the same entities. So the question changes to "how do I lock the entities and make sure nobody selects them before sending the emails"?
Thanks again!
|
Problem with reading pasted text in terminal
| 6,115,447
| 3
| 0
| 121
| 0
|
python,terminal
|
Make sure that your pasted text doesn't contain any embedded control characters (such as a newline), which could end the input.
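If the pasted text legitimately contains newlines, one workaround is to read until end-of-file instead of until the first newline:

```python
import sys

print('Paste the description, then press Ctrl+D to finish:')
# raw_input() stops at the first newline; sys.stdin.read() keeps going
# until EOF, so newlines inside the pasted text survive.
description = sys.stdin.read().strip()
```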
| 0
| 1
| 0
| 0
|
2011-05-24T18:50:00.000
| 2
| 0.291313
| false
| 6,115,347
| 0
| 0
| 0
| 1
|
I'm reading text in terminal with
description = raw_input()
It works if I write the text and press enter. The problem is when I paste the text from somewhere with Ctrl+Shift+V or with right click + paste. My program immediately ends, and description contains only part of the text (I can see it in the database).
Do you know how to do this so paste works? I'm using xfce4-terminal in Ubuntu.
thank you
|
Help me decide what to use with Google App Engine for this practical work
| 6,116,489
| 4
| 0
| 346
| 0
|
java,python,google-app-engine
|
I would recommend using Python + Django framework. I love Java, but for the Google App Engine there is much more documentation online for Python.
| 0
| 1
| 0
| 0
|
2011-05-24T20:09:00.000
| 3
| 0.26052
| false
| 6,116,236
| 0
| 0
| 1
| 1
|
I'm working on a practical work for college, and I have to develop a web-app that could be used by all the teachers from my province.
The application should let the users (teachers) manage some information related to their daily duties. One of the requirements is that I must use
the Google App Engine platform for developing and hosting the web application.
I have 2 months to finish the work.
I have some intermediate knowledge of C++, so what language (Python or Java) and web framework do you think would be the best to
develop the application in less time?
I know this is not a strictly programming question, but please don't delete this post, at least until I get a
few answers in order to have an idea of how to proceed.
Many thanks in advance!
|
How to write a bash script that enters text into programs
| 6,119,079
| 2
| 2
| 500
| 0
|
python,bash
|
Have you tried echo "Something for input" | python myPythonScript.py ?
| 0
| 1
| 0
| 0
|
2011-05-25T02:55:00.000
| 4
| 0.099668
| false
| 6,119,038
| 0
| 0
| 0
| 1
|
I'm writing a bash script that fires up python and then enters some simple commands before exiting. I've got it firing up python ok, but how do I make the script simulate keyboard input in the python shell, as though a person were doing it?
|
Interprocess messaging between two Python programs
| 6,121,275
| 3
| 4
| 1,949
| 0
|
python,linux,python-3.x
|
This really depends on the kind of messaging you want and the roles of the two processes. If it's proper "client/server", I would probably create a SimpleHTTPServer and then use HTTP to communicate between the two. You can also use XMLRPCLib and the client to talk between them. Manually creating a TCP server with your own custom protocol sounds like a bad idea to me. You might also consider using a message queue system to communicate between them.
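For instance, a minimal XML-RPC pair using only the standard library (Python 3 module names, since the question is tagged python-3.x; the host name and port are placeholders):

```python
# server.py -- run on the first machine
from xmlrpc.server import SimpleXMLRPCServer

def echo(message):
    return 'got: ' + message

server = SimpleXMLRPCServer(('0.0.0.0', 8000))
server.register_function(echo)
server.serve_forever()
```

```python
# client.py -- run on the second machine
from xmlrpc.client import ServerProxy

proxy = ServerProxy('http://server-a.example.com:8000/')
print(proxy.echo('hello'))
```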
| 0
| 1
| 1
| 0
|
2011-05-25T07:49:00.000
| 3
| 0.197375
| false
| 6,121,180
| 0
| 0
| 0
| 1
|
We have two Python programs running on two Linux servers. Now we want to send messages between these Python programs. The best idea so far is to create a TCP/IP server and client architecture, but this seems like a very complicated way to do it. Is this really best practice for doing such a thing?
|
Python time.time() -> IOError
| 6,129,347
| 3
| 2
| 757
| 0
|
python,linux,gentoo
|
Did it work before your custom kernel? Boot into a rescue CD, chroot into your gentoo env, and run your script. If it works, it's your kernel. That's about as specific as I can be.
| 0
| 1
| 0
| 1
|
2011-05-25T18:26:00.000
| 2
| 0.291313
| false
| 6,129,054
| 0
| 0
| 0
| 1
|
I've just installed a base Gentoo stage 3 and I get the following error when I try to call time.time():
sbx / # python
Python 2.7.1 (r271:86832, May 22 2011, 14:53:09)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.time()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 0] Error
I found this because when I try to run emerge I get:
sbx / # emerge
Traceback (most recent call last):
  File "/usr/bin/emerge", line 32, in <module>
    from _emerge.main import emerge_main
  File "/usr/lib/portage/pym/_emerge/main.py", line 6, in <module>
    import logging
  File "/usr/lib/python2.7/logging/__init__.py", line 94, in <module>
    _startTime = time.time()
IOError: [Errno 11] Resource temporarily unavailable
This is a custom kernel and I just made sure I compiled in RTC support, but still no luck. Any ideas on why this is happening?
|
Memory Leaks Comet Server in PHP
| 6,132,278
| 4
| 3
| 254
| 0
|
php,python,comet,tornado,long-polling
|
The gist of it is that PHP was originally written with the intent of having a brand new process for every request that you could just throw away once said request ended, at a time when things like Comet and long polling weren't really on the table.
As such there are quite a few areas - notably the garbage collector - where PHP originally just wasn't made for running over a long period of time, and it didn't need to care much, because every HTTP request got a brand new PHP instance.
It has clearly gotten better in recent years, but I still wouldn't use it for creating that sort of long-lifetime application.
| 0
| 1
| 0
| 1
|
2011-05-25T22:13:00.000
| 1
| 1.2
| true
| 6,131,620
| 0
| 0
| 0
| 1
|
Why would a Comet Server like Tornado be especially prone to memory leaks if written in PHP?
Are there genuine weaknesses particular to PHP for implementing a long polling framework/service like Tornado?
Thanks
|
Installing JDK and Python and setting the PATH variable on Windows 7
| 6,147,430
| 2
| 1
| 2,880
| 0
|
python,installation,java,windows-7-x64
|
I'm confused. Are you switching out the paths between the JDK and Python? If that's the case, you can have both paths set in your system's PATH variable. Example: C:\jdk-install\;C:\python-install
| 0
| 1
| 0
| 0
|
2011-05-27T02:03:00.000
| 2
| 1.2
| true
| 6,147,101
| 1
| 0
| 0
| 2
|
I recently installed Java's JDK and Python on my Windows 7 system. I wanted to access both programs from the command line (whether it be cmd or cygwin), so I used the PATH global variable and entered the path to my JDK. What can I do so that both Python and the JDK are accessible via PATH? What I am doing now is changing the PATH variable every time.
Thanks
|
Installing JDK and Python and setting the PATH variable on Windows 7
| 6,147,460
| 1
| 1
| 2,880
| 0
|
python,installation,java,windows-7-x64
|
Right click on "My Computer"
Choose "Properties"
Click on "Advanced System Settings"
Click on "Environment Variables"
Find PATH and set it appropriately
Note that only processes started after you change the path will see the change.
| 0
| 1
| 0
| 0
|
2011-05-27T02:03:00.000
| 2
| 0.099668
| false
| 6,147,101
| 1
| 0
| 0
| 2
|
I recently installed Java's JDK and Python on my Windows 7 system. I wanted to access both programs from the command line (whether it be cmd or cygwin), so I used the PATH global variable and entered the path to my JDK. What can I do so that both Python and the JDK are accessible via PATH? What I am doing now is changing the PATH variable every time.
Thanks
|
Storing Python data on a Linux system
| 6,147,249
| 0
| 2
| 170
| 0
|
python,linux,pickle,data-storage
|
I'd use a database. A real one. This is why they exist (well, one of the reasons). Don't reinvent the wheel if you don't have to.
| 0
| 1
| 0
| 0
|
2011-05-27T02:28:00.000
| 5
| 0
| false
| 6,147,225
| 0
| 0
| 0
| 3
|
I have the need to create a system to store python data structures on a linux system but have concurrent read and write access to the data from multiple programs/daemons/scripts. My first thought is I would create a unix socket that would listen for connections and serve up requested data as pickled python data structures. Any writes by the clients would get synced to disk (maybe in batch, though I don't expect it to be high throughput so just Linux vfs caching would likely be fine). This ensures only a single process reads and writes to the data.
The other idea is to just keep the pickled data structure on disk and only allow a single process access through a lockfile or token... This requires all accessing clients to respect the locking mechanism / use the access module.
What am I overlooking? SQLite is available, but I'd like to keep this as simple as possible.
What would you do?
|
Storing Python data on a Linux system
| 6,319,111
| 0
| 2
| 170
| 0
|
python,linux,pickle,data-storage
|
You could serialize the data structures and store them as values using ConfigParser. If you created your own access lib/module to access the data, you could do the serialization in the lib so the client code would just send and receive Python objects. You could also handle concurrency in the lib.
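A sketch of what such an access module might look like, pickling values into a ConfigParser file behind an advisory flock (section and option names are arbitrary):

```python
import base64
import fcntl
import pickle
import ConfigParser  # configparser on Python 3

def save(path, obj, section='data', option='value'):
    config = ConfigParser.ConfigParser()
    config.add_section(section)
    # base64 keeps the pickled bytes safe inside the INI format.
    config.set(section, option, base64.b64encode(pickle.dumps(obj)))
    with open(path, 'w') as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # exclusive advisory lock for writers
        config.write(f)

def load(path, section='data', option='value'):
    config = ConfigParser.ConfigParser()
    with open(path) as f:
        fcntl.flock(f, fcntl.LOCK_SH)  # shared advisory lock for readers
        config.readfp(f)
    return pickle.loads(base64.b64decode(config.get(section, option)))
```

Note that every client has to go through these functions for the advisory locking to mean anything, which matches the "access module" idea above.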
| 0
| 1
| 0
| 0
|
2011-05-27T02:28:00.000
| 5
| 1.2
| true
| 6,147,225
| 0
| 0
| 0
| 3
|
I have the need to create a system to store python data structures on a linux system but have concurrent read and write access to the data from multiple programs/daemons/scripts. My first thought is I would create a unix socket that would listen for connections and serve up requested data as pickled python data structures. Any writes by the clients would get synced to disk (maybe in batch, though I don't expect it to be high throughput so just Linux vfs caching would likely be fine). This ensures only a single process reads and writes to the data.
The other idea is to just keep the pickled data structure on disk and only allow a single process access through a lockfile or token... This requires all accessing clients to respect the locking mechanism / use the access module.
What am I overlooking? SQLite is available, but I'd like to keep this as simple as possible.
What would you do?
|
Storing Python data on a Linux system
| 6,147,346
| 1
| 2
| 170
| 0
|
python,linux,pickle,data-storage
|
If you want to just store name/value pairs (e.g. filename to pickled data) you can always use Berkeley DB (http://code.activestate.com/recipes/189060-using-berkeley-db-database/). If your data is numbers-oriented, you might want to check out PyTables (http://www.pytables.org/moin). If you really want to use sockets (I would generally try to avoid that, since there are a lot of minutiae you have to worry about) you may want to look at Twisted Python (good for handling multiple connections via Python with no threading required).
| 0
| 1
| 0
| 0
|
2011-05-27T02:28:00.000
| 5
| 0.039979
| false
| 6,147,225
| 0
| 0
| 0
| 3
|
I have the need to create a system to store python data structures on a linux system but have concurrent read and write access to the data from multiple programs/daemons/scripts. My first thought is I would create a unix socket that would listen for connections and serve up requested data as pickled python data structures. Any writes by the clients would get synced to disk (maybe in batch, though I don't expect it to be high throughput so just Linux vfs caching would likely be fine). This ensures only a single process reads and writes to the data.
The other idea is to just keep the pickled data structure on disk and only allow a single process access through a lockfile or token... This requires all accessing clients to respect the locking mechanism / use the access module.
What am I overlooking? SQLite is available, but I'd like to keep this as simple as possible.
What would you do?
|
How to upload current date/time into App Engine with Bulkloader tool?
| 6,153,856
| 1
| 1
| 389
| 0
|
python,google-app-engine,bulkloader
|
I would do this: add a property to your bulkloader.yaml for the modified time and use an import transform to get the date.
| 0
| 1
| 0
| 0
|
2011-05-27T07:41:00.000
| 1
| 1.2
| true
| 6,149,263
| 0
| 0
| 1
| 1
|
How can I add a last modified time property to my entity kind that gets updated during a bulk upload?
I'm currently using appcfg upload_data to upload a csv up to my high replication datastore. I plan to have this as a cron job to do a one-way sync from our internal database to datastore. In order to account for stale records, I'd like to have it update a last modified time property and then do a map reduce to delete old records (older than a week). Records will be updated using key property.
What would be the best way to create the last modified time, considering I want to reserve the ability to use the Datastore Admin to delete the entire entity kind if I need to?
Create an object model to "initialize" the datastore entity with all the necessary fields?
Add a property to my bulkloader.yaml for the modified time and use an import transform to get the date?
Other...
Thanks in advance!
|
Appengine - Storing a Pickled in Datastore
| 6,157,563
| 1
| 2
| 853
| 0
|
python,google-app-engine,pickle
|
Never mind. I just ran tests with both. It appears that you cannot use TextProperty with pickle. It will cause errors. Using it with BlobProperty, on the other hand, works perfectly.
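For reference, the working combination looks roughly like this:

```python
import pickle
from google.appengine.ext import db

class Wrapper(db.Model):
    payload = db.BlobProperty()  # unindexed binary data

entity = Wrapper(payload=db.Blob(pickle.dumps({'answer': 42})))
entity.put()

restored = pickle.loads(entity.payload)  # round-trips cleanly
```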
| 0
| 1
| 0
| 0
|
2011-05-27T20:27:00.000
| 3
| 1.2
| true
| 6,157,367
| 0
| 0
| 1
| 2
|
In Google Appengine, I'm interested in pickling an object and storing it in the datastore. I don't need to index it.
Is there any difference if I store it as a BlobProperty or TextProperty? Which one is better?
|
Appengine - Storing a Pickled in Datastore
| 6,157,457
| 4
| 2
| 853
| 0
|
python,google-app-engine,pickle
|
BlobProperty can store binary data while TextProperty can store only strings.
You can use BlobProperty, as TextProperty is basically a BlobProperty with encoding.
| 0
| 1
| 0
| 0
|
2011-05-27T20:27:00.000
| 3
| 0.26052
| false
| 6,157,367
| 0
| 0
| 1
| 2
|
In Google Appengine, I'm interested in pickling an object and storing it in the datastore. I don't need to index it.
Is there any difference if I store it as a BlobProperty or TextProperty? Which one is better?
|
Key generation in Google App Engine
| 6,164,669
| 1
| 3
| 421
| 0
|
python,google-app-engine
|
Keys in App Engine are based on:
The keys of the ancestor entities of the entity, if any.
The kind name of the entity.
Either an auto-generated integer id or a user-assigned key_name. The integer IDs are allocated in generally-increasing blocks to various instances of the application, so that they can be guaranteed to be unique but are not guaranteed to actually get assigned to entities in a monotonically increasing fashion.
The keys do not use anything like a universally unique ID.
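For contrast, the random-string scheme from the question is easy to sketch; note that this is not what App Engine does, just the idea being asked about:

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits  # a-zA-Z0-9

def random_key(length=50):
    rng = random.SystemRandom()  # OS entropy rather than the default PRNG
    return ''.join(rng.choice(ALPHABET) for _ in range(length))
```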
| 0
| 1
| 0
| 0
|
2011-05-28T04:35:00.000
| 4
| 0.049958
| false
| 6,159,666
| 0
| 0
| 1
| 1
|
If you guys have ever used Google App Engine, you know it generates a key for every single instance of a model created. It's pretty neat.
I'm looking into building something like that. Do they do it so that the key is based on the content? Or do they just make around 50 random choices from a-zA-Z0-9 and build a string out of them? That sounds reasonable, because the chance that 2 keys would be the same would be lower than 1/10^89.
|
fabric: how to double tunnel
| 6,161,754
| 0
| 18
| 6,837
| 0
|
python,ssh,fabric
|
I'm just going to answer the SSH part: Yes, you can set up a double tunnel -- one SSH from local to A that tunnels a secondary local port (like 2121) to port 22 on B (e.g. ssh -L 2121:B:22 user@A), and then you can SSH to localhost:2121 and log in on B. I've done stuff like that with PuTTY.
Implementing that in fabric is left as an exercise.
| 0
| 1
| 0
| 0
|
2011-05-28T12:09:00.000
| 6
| 0
| false
| 6,161,548
| 0
| 0
| 0
| 1
|
Situation:
A and B are remote hosts.
Local machine can SSH into A, but not B.
B ONLY accepts SSH connections from A.
Question:
Is it possible to use fabric on the local machine to execute commands on Host B, preferably without having to install fabric on A?
|
Turn an application or script into a shell command
| 6,163,126
| 2
| 6
| 3,763
| 0
|
python,shell,command-line
|
Add a shebang as the top line of the file: #!/usr/bin/python or #!/usr/bin/python3 (you can pass python the -B option to prevent generation of .pyc files, which is why I don't use /usr/bin/env)
Make it executable: You will need to do chmod +x app.py
(optional) Add directory to path, so can call it anywhere: Add a directory with your executable to your $PATH environment variable. How you do so depends on your shell, but is either export PATH=$PATH:/home/you/some/path/to/myscripts (e.g. Linux distros which use bash) or setenv PATH $PATH:/home/you/some/path/to/myscripts (e.g. tcsh like in Mac OS X). You will want to put this, for example, in your .bashrc or whatever startup script you have, or else you will have to repeat this step every time you log in.
app.py will need to be in the myscripts (or whatever you name it) folder. You don't even need to call it app.py, but you can just rename it app.
If you wish to skip step #3, you can still do ./app to run it if you are in the same directory.
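Putting the pieces together, the file itself ends up looking something like this (the contents are a placeholder):

```python
#!/usr/bin/python
# app -- lives in a directory on $PATH and is marked executable (chmod +x)
import sys

def main(argv):
    print('hello from app')
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv))
```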
| 0
| 1
| 0
| 0
|
2011-05-28T17:04:00.000
| 6
| 0.066568
| false
| 6,163,087
| 0
| 0
| 0
| 2
|
When I want to run my Python applications from the command line (under Ubuntu), I have to be in the directory where the source code app.py is and run the application with the command
python app.py
How can I make it (how is it conventionally done) so that I can run the application from an arbitrary directory with the command app, similarly to how you type ls, mkdir, and other commands?
thank you
|
Turn an application or script into a shell command
| 6,163,117
| 0
| 6
| 3,763
| 0
|
python,shell,command-line
|
I'm pretty sure you have to make the script executable via chmod +x and put it in the PATH variable of your system.
| 0
| 1
| 0
| 0
|
2011-05-28T17:04:00.000
| 6
| 0
| false
| 6,163,087
| 0
| 0
| 0
| 2
|
When I want to run my Python applications from the command line (under Ubuntu), I have to be in the directory where the source code app.py is and run the application with the command
python app.py
How can I make it (how is it conventionally done) so that I can run the application from an arbitrary directory with the command app, similarly to how you type ls, mkdir, and other commands?
thank you
|
Python setup.py install uses wrong Python installation path
| 6,168,163
| 5
| 3
| 9,083
| 0
|
python,pythonpath
|
You need to execute setup.py by specifying which Python interpreter to use on the command line, like this:
/path/to/python setup.py install
UPDATE:
The error message indicates that you don't have the python-dev package
installed on your system.
| 0
| 1
| 0
| 0
|
2011-05-29T13:41:00.000
| 2
| 0.462117
| false
| 6,168,035
| 1
| 0
| 0
| 2
|
I'm on a bluehost-server which has a "rudimental" installation of python2.6.
I installed python2.6 in my user-directory which works fine so far, but when I try to install python packages with "setup.py install", "easy_install" or "pip install" I get:
error: invalid Python installation: unable to open /usr/lib/python2.6/config/Makefile (No such file or directory)
So, it tries to use the system-wide installation which does not have this Makefile. Also using the --prefix or --user argument doesn't help.
How can I tell pip or easy_install to use the python-installation in my user-directory?
|
Python setup.py install uses wrong Python installation path
| 6,169,494
| 0
| 3
| 9,083
| 0
|
python,pythonpath
|
I just solved the problem by installing the needed packages manually, meaning copying the source files into my local Python folder.
Thanks for helping anyway.
Best
Jacques
| 0
| 1
| 0
| 0
|
2011-05-29T13:41:00.000
| 2
| 0
| false
| 6,168,035
| 1
| 0
| 0
| 2
|
I'm on a bluehost-server which has a "rudimental" installation of python2.6.
I installed python2.6 in my user-directory which works fine so far, but when I try to install python packages with "setup.py install", "easy_install" or "pip install" I get:
error: invalid Python installation: unable to open /usr/lib/python2.6/config/Makefile (No such file or directory)
So, it tries to use the system-wide installation which does not have this Makefile. Also using the --prefix or --user argument doesn't help.
How can I tell pip or easy_install to use the python-installation in my user-directory?
|
python: complexity of os.path.exists with a ext4 filesystem?
| 6,176,569
| 0
| 4
| 810
| 0
|
python,linux,complexity-theory,ext4
|
Chances are good that the complexity is O(n) with n being the depth in the filesystem (e.g. / would have n=1, /something n=2, ...)
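One way to sanity-check that guess empirically (assumes these paths exist on your machine; note the kernel's dentry cache will make repeated lookups cheap):

```python
import timeit

# Per the guess above, deeper paths should cost roughly linearly more.
for path in ('/', '/usr', '/usr/lib', '/usr/lib/python2.7'):
    t = timeit.timeit('os.path.exists(%r)' % path,
                      setup='import os', number=10000)
    print('%-22s %.4fs' % (path, t))
```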
| 0
| 1
| 0
| 1
|
2011-05-30T12:55:00.000
| 2
| 0
| false
| 6,176,547
| 1
| 0
| 0
| 1
|
Does anyone know what the complexity of the os.path.exists function is in python with a ext4 filesystem?
|
easy_install fails on error "Couldn't find setup script" after binary upload?
| 39,579,939
| 1
| 11
| 23,071
| 0
|
python,binary,easy-install,python-c-extension
|
Sometimes you don't actually really intend to easy_install the 'directory', which will look for a setup.py file.
In simple words, you may be doing easy_install xyz/
while what you really want to do is easy_install xyz
| 0
| 1
| 0
| 1
|
2011-05-30T16:29:00.000
| 3
| 0.066568
| false
| 6,178,664
| 0
| 0
| 0
| 1
|
After uploading a binary distribution of my Python C extension with python setup.py bdist upload, easy_install [my-package-name] fails on "error: Couldn't find a setup script in /tmp/easy_install/package-name-etc-etc".
What am I doing wrong?
|
pydev - can someone please explain these errors
| 6,187,949
| 3
| 1
| 254
| 0
|
python,pydev
|
Neil speaks the truth, except for telling you to turn it off -- spell check in comments is quite helpful -- those who come after will thank you for ensuring they can read your comments without trying to decode random spelling errors.
The lines are simply pointing out words your IDE thinks are spelled wrong. I don't know why it doesn't understand "registry", but you have, in fact, misspelled "following" (as "follwing"). Fix the latter, ignore the former (or add it to the dictionary; there is a convenient mechanism for that, see Macke's helpful comment below).
| 0
| 1
| 0
| 0
|
2011-05-31T12:54:00.000
| 2
| 1.2
| true
| 6,187,874
| 1
| 0
| 0
| 2
|
I am developing using PyDev in Eclipse. I have put in some comments in my code. I get wavy red lines underneath certain words. The program runs fine and there are no warnings mentioned, so what is the meaning of these wavy lines?
e.g.
#!/usr/bin/python - I get the line under usr and python
# generated by accessing registry using another script written in VBScript.
# The scripts can do the follwing things.
- I get wavy lines under the words registry and following.
I need these comments as I may run the module on its own later.
Thanks for the help.
|
pydev - can someone please explain these errors
| 6,187,969
| 0
| 1
| 254
| 0
|
python,pydev
|
It might be that this is just a spell checker. You have a typo: "follwing" instead of "following".
| 0
| 1
| 0
| 0
|
2011-05-31T12:54:00.000
| 2
| 0
| false
| 6,187,874
| 1
| 0
| 0
| 2
|
I am developing using PyDev in Eclipse. I have put in some comments in my code. I get wavy red lines underneath certain words. The program runs fine and there are no warnings mentioned, so what is the meaning of these wavy lines?
e.g.
#!/usr/bin/python - I get the line under usr and python
# generated by accessing registry using another script written in VBScript.
# The scripts can do the follwing things.
- I get wavy lines under the words registry and following.
I need these comments as I may run the module on its own later.
Thanks for the help.
|
Notification as a cron job
| 6,192,123
| 1
| 3
| 728
| 0
|
python,cron
|
If the cron job runs as "you", and if you set the DISPLAY var (export DISPLAY=:0) you should have no issues.
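Alternatively, the script can set the variable itself before the GUI libraries are imported; a sketch assuming your session runs on display :0:

```python
import os

# cron provides no X session environment, so point at the display
# ourselves before pynotify/gtk are imported.
os.environ.setdefault('DISPLAY', ':0')

import pynotify

pynotify.init('cron-notifier')
pynotify.Notification('Job finished', 'The cron task completed.').show()
```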
| 0
| 1
| 0
| 1
|
2011-05-31T18:06:00.000
| 4
| 0.049958
| false
| 6,191,624
| 0
| 0
| 0
| 2
|
I coded a python application which was running OK as a cron job. Later I added some libraries (e.g. pynotify and other *) because I wanted to be notified with the message describing what is happening, but it seems that cron can't run such an application.
Do you know some alternative how to run this application every five minutes? I'm using Xubuntu.
import gtk, pygtk, os, os.path, pynotify
I can run the application without cron without problems.
Cron seems to run the application but it won't show the notification message. In /var/log/cron.log there are no errors. The application executed every minute without problems.
my crontab:
*/1 * * * * /home/xralf/pythonsrc/app
thank you
|
Notification as a cron job
| 6,191,715
| 0
| 3
| 728
| 0
|
python,cron
|
I don't see any problem with pynotify in a cron job. What is the error you are getting?
Can you run your Python code separately to check whether it really works on its own and only fails with cron?
Celery is a distributed job queue & task manager written in Python, but it may be too much for your needs.
Supervisord can also do some sort of cron task if you know that your program will close in 5 minutes, so you can configure supervisord to start the task again soon after. Neither of them is as easy as a cron job.
| 0
| 1
| 0
| 1
|
2011-05-31T18:06:00.000
| 4
| 0
| false
| 6,191,624
| 0
| 0
| 0
| 2
|
I coded a python application which was running OK as a cron job. Later I added some libraries (e.g. pynotify and other *) because I wanted to be notified with the message describing what is happening, but it seems that cron can't run such an application.
Do you know some alternative how to run this application every five minutes? I'm using Xubuntu.
import gtk, pygtk, os, os.path, pynotify
I can run the application without cron without problems.
Cron seems to run the application but it won't show the notification message. In /var/log/cron.log there are no errors. The application executed every minute without problems.
my crontab:
*/1 * * * * /home/xralf/pythonsrc/app
thank you
|
Develop in python on Linux, test on Windows
| 6,194,672
| 1
| 0
| 447
| 0
|
python,windows,testing,virtualization
|
Whatever happens, if you're developing for multiple platforms you're going to have to copy from whichever you're developing on to the secondary platform(s), build it and instigate your tests.
Your best bet is to automate as much of it as possible (for both environments) and make a build bot which watches for new code on a central repository (you are using version control aren't you?).
If the Windows version is giving you most hassle then why not develop on Windows and set up the build bot on your linux machine?
| 0
| 1
| 0
| 0
|
2011-05-31T23:15:00.000
| 3
| 0.066568
| false
| 6,194,618
| 0
| 0
| 0
| 2
|
I'm trying to develop a cross-platform library and want to be able to develop code and then quickly test it on both Windows and Linux. I'm not sure if it's even an option or worthwhile testing under Wine (it uses the multiprocessing module, and COM on Windows) but I do have a VM I've been running it under. It's just been cumbersome getting the code copied over to the Windows machine running on a remote server using the Windows GUI (which is slow enough over the network as it is) after every change, then bringing up the command prompt and running the tests, then troubleshooting on Windows and getting the fixes back to the development environment.
Is there any way I can remove some steps from the testing process?
|
Develop in python on Linux, test on Windows
| 6,194,653
| 2
| 0
| 447
| 0
|
python,windows,testing,virtualization
|
Using a DVCS to push the local changes to the server will take you far. You will need a SSH server on the Windows machine, but there are several of those around. You can also use a makefile to direct the pushing and testing, possibly even running the tests remotely depending on what they consist of.
| 0
| 1
| 0
| 0
|
2011-05-31T23:15:00.000
| 3
| 1.2
| true
| 6,194,618
| 0
| 0
| 0
| 2
|
I'm trying to develop a cross-platform library and want to be able to develop code and then quickly test it on both Windows and Linux. I'm not sure if it's even an option or worthwhile testing under Wine (it uses the multiprocessing module, and COM on Windows) but I do have a VM I've been running it under. It's just been cumbersome getting the code copied over to the Windows machine running on a remote server using the Windows GUI (which is slow enough over the network as it is) after every change, then bringing up the command prompt and running the tests, then troubleshooting on Windows and getting the fixes back to the development environment.
Is there any way I can remove some steps from the testing process?
|
Where do scheduled python programs "print"?
| 6,208,294
| 2
| 0
| 109
| 0
|
python,crontab
|
They will be sent to the email address defined at the top of the crontab, or to the crontab's owner by default. See the crontab(5) man page for more details.
| 0
| 1
| 0
| 1
|
2011-06-01T22:02:00.000
| 1
| 1.2
| true
| 6,208,274
| 0
| 0
| 0
| 1
|
If I schedule print "Hello World!"; to run every hour with crontab, where will Hello World! be printed? Is there a log file?
If I do it with Java or C instead of Python, will it make any difference?
Thanks!
|
How can I automate antivirus/WSUS patch testing of my Windows driver and binary?
| 6,208,551
| 1
| 5
| 1,381
| 0
|
python,testing,automation,functional-testing,cots
|
Interesting problem. One thing to avoid is using the antivirus APIs to check to see if your application triggers them. You want a real live deployment of your application, on the expected operating system, with a real live AV install monitoring it. That way you'll trigger the heuristics monitoring as well as the simple "does this code match that checksum" that the API works with.
You haven't told us what your application is written in, but if your test suite for your application actually exercises portions of the application, rather than testing single code paths, that may be a good start. Ideally, your integration test suite is the same test suite you use to check for problems on your deploy targets. Your integration testing should verify the input AND the output for each test in a live environment, which SHOULD catch crashes and the like. Also, don't forget to check for things that take much longer than they should, that's an unfortunately common failure mode. Most importantly, your test suite needs to be easy enough to write, change, and improve that it actually stays in sync with the product. Tests that don't test everything are useless, and tests that aren't run are even worse. If we had more information about how your program works, we could give better advice about how to automate that.
You'll probably want a suite of VM images across your intended deploy targets, in various states of patch (and unpatch). For some applications, you'll need a separate VM for each variant of IE, since that changes other aspects of the system. Be very careful about which combination of things you have in each VM. Don't test more than one AV at a time. Update the AVs in your snapshots before running your tests. If you have a large enough combination software in your images, you might need to automate image creation - get a base system build, update to the latest patch level, then script the installation of AV and other application combinations.
Yes, maintaining this farm of VMs will be a pain, but if you script the deploy of your application, and have good snapshots and a plan for patching and updating the snapshots, the actual test suite itself shouldn't take all that long to run given appropriate hardware. You'll need to investigate the VM solutions, but I'd probably start with VMWare.
| 0
| 1
| 0
| 1
|
2011-06-01T22:14:00.000
| 1
| 1.2
| true
| 6,208,385
| 0
| 0
| 0
| 1
|
My (rather small) company develops a popular Windows application, but one thing we've always struggled with is testing - it frequently is only tested by the developers on a system similar to the one they developed it on, and when an update is pushed out to customers, there is a segment of our base that experiences issues due to some weird functionality with a Windows patch, or in the case of certain paranoid antivirus applications (I'm looking at you, Comodo and Kaspersky!), they will false-positive on our app.
We do manual testing on what 70% of our users use, but it's slow and painful, and sometimes isn't as complete as it should be. Management keeps insisting that we need to do better, but they keep punting on the issue when it comes time to release (testing will take HOW LONG? Just push it out and we'll issue a patch to customers who experience issues!).
I'd like to design a better system of automated testing using VMs, but could use some ideas on how to implement it, or if there's a COTS product out there, any suggestions would be great. I'm hacking a Python script together that "runs" every feature of our product, but I'm not sure how to go about testing if we get a Windows crash (besides just checking to see if it's still in the process list), or worse yet, if Comodo has flagged it for some stupid reason.
To best simulate the test environment, I'm trying to keep the VM as "pure" as possible and not load a lot of crap on it outside of the OS and the antivirus, and some common apps (Acrobat Reader, Firefox etc).
Any ideas would be most appreciated!
|
SVN having credential problems with Jenkins running as SYSTEM
| 6,209,784
| 2
| 4
| 1,833
| 0
|
python,svn,windows-7,credentials,jenkins
|
Create a new Jenkins job, and use Subversion as the revision control system. Put in the URL of the Subversion repository you want to manipulate in your Python script. Under the URL will appear a link to let you set the login. Click the link and log in.
Once you're done, you can delete the job. The whole purpose was to allow Jenkins to set up Subversion to allow that user to login in for that repository URL.
| 0
| 1
| 0
| 1
|
2011-06-01T23:34:00.000
| 1
| 1.2
| true
| 6,208,922
| 0
| 0
| 0
| 1
|
I have Jenkins running a python script that makes some SVN calls, my problem is that Jenkins tries to run this script as SYSTEM user which doesn't seem to have permission to access the SVN. It prompts me for a password for 'SYSTEM' upon my svn call.
If I run the python script by itself, I have no problems accessing the repository. Is there a way to have Jenkins run its Windows batch command as a non-SYSTEM user? I would rather not hardcode the SVN username and password in my script.
Edit: I found a way to change the user Jenkins runs under, it is accessed through:
Start > Control Panel > Administrative Tools > Services > Right Click, Properties for jenkins > Log On.
|
Will Python open a file before it's finished writing?
| 6,215,475
| 7
| 4
| 2,987
| 0
|
python,linux,file
|
Typically on Linux, unless you're using locking of some kind, two processes can quite happily have the same file open at once, even for writing. There are three ways of avoiding problems with this:
Locking
By having the writer apply a lock to the file, it is possible to prevent the reader from reading the file partially. However, most locks are advisory, so it is still entirely possible to see partial results anyway. (Mandatory locks exist, but are strongly not recommended on the grounds that they're far too fragile.) It's relatively difficult to write correct locking code, and it is normal to delegate such tasks to a specialist library (i.e., to a database engine!) In particular, you don't want to use locking on networked filesystems; it's a source of colossal trouble when it works and can often go thoroughly wrong.
Convention
A file can instead be created in the same directory with another name that you don't automatically look for on the reading side (e.g., .foobar.txt.tmp) and then renamed atomically to the right name (e.g., foobar.txt) once the writing is done. This can work quite well, so long as you take care to deal with the possibility of previous runs failing to correctly write the file. If there should only ever be one writer at a time, this is fairly simple to implement.
Not Worrying About It
The most common type of file that is frequently written is a log file. These can be easily written in such a way that information is strictly only ever appended to the file, so any reader can safely look at the beginning of the file without having to worry about anything changing under its feet. This works very well in practice.
There's nothing special about Python in any of this. All programs running on Linux have the same issues.
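As a sketch of the "convention" approach on the writer's side (the function name is arbitrary):

```python
import os
import tempfile

def write_atomically(path, data):
    # Stage the file next to its destination; rename() is atomic as long
    # as source and destination are on the same filesystem.
    directory = os.path.dirname(path) or '.'
    fd, tmp_path = tempfile.mkstemp(dir=directory, prefix='.tmp-')
    try:
        os.write(fd, data)
        os.fsync(fd)  # push the bytes to disk before the rename
    finally:
        os.close(fd)
    os.rename(tmp_path, path)
```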
| 0
| 1
| 0
| 0
|
2011-06-02T13:41:00.000
| 4
| 1
| false
| 6,215,324
| 0
| 0
| 0
| 4
|
I am writing a script that will be polling a directory looking for new files.
In this scenario, is it necessary to do some sort of error checking to make sure the files are completely written prior to accessing them?
I don't want to work with a file before it has been written completely to disk, but because the info I want from the file is near the beginning, it seems like it could be possible to pull the data I need without realizing the file isn't done being written.
Is that something I should worry about, or will the file be locked because the OS is writing to the hard drive?
This is on a Linux system.
|
Will Python open a file before it's finished writing?
| 6,229,831
| 0
| 4
| 2,987
| 0
|
python,linux,file
|
Yes it will.
I prefer the "file naming convention" and renaming solution described by Donal.
| 0
| 1
| 0
| 0
|
2011-06-02T13:41:00.000
| 4
| 0
| false
| 6,215,324
| 0
| 0
| 0
| 4
|
I am writing a script that will be polling a directory looking for new files.
In this scenario, is it necessary to do some sort of error checking to make sure the files are completely written prior to accessing them?
I don't want to work with a file before it has been written completely to disk, but because the info I want from the file is near the beginning, it seems like it could be possible to pull the data I need without realizing the file isn't done being written.
Is that something I should worry about, or will the file be locked because the OS is writing to the hard drive?
This is on a Linux system.
|
Will Python open a file before it's finished writing?
| 6,215,361
| 4
| 4
| 2,987
| 0
|
python,linux,file
|
On Unix, unless the writing application goes out of its way, the file won't be locked and you'll be able to read from it.
The reader will, of course, have to be prepared to deal with an incomplete file (bearing in mind that there may be I/O buffering happening on the writer's side).
If that's a non-starter, you'll have to think of some scheme to synchronize the writer and the reader, for example:
explicitly lock the file;
write the data to a temporary location and only move it into its final place when the file is complete (the move operation can be done atomically, provided both the source and the destination reside on the same file system).
| 0
| 1
| 0
| 0
|
2011-06-02T13:41:00.000
| 4
| 1.2
| true
| 6,215,324
| 0
| 0
| 0
| 4
|
I am writing a script that will be polling a directory looking for new files.
In this scenario, is it necessary to do some sort of error checking to make sure the files are completely written prior to accessing them?
I don't want to work with a file before it has been written completely to disk, but because the info I want from the file is near the beginning, it seems like it could be possible to pull the data I need without realizing the file isn't done being written.
Is that something I should worry about, or will the file be locked because the OS is writing to the hard drive?
This is on a Linux system.
|
Will Python open a file before it's finished writing?
| 6,215,461
| 0
| 4
| 2,987
| 0
|
python,linux,file
|
If you have some control over the writing program, have it write the file somewhere else (like the /tmp directory) and then when it's done move it to the directory being watched.
If you don't have control of the program doing the writing (and by 'control' I mean 'edit the source code'), you probably won't be able to make it do file locking either, so that's probably out. In which case you'll likely need to know something about the file format to know when the writer is done. For instance, if the writer always writes "DONE" as the last four characters in the file, you could open the file, seek to the end, and read the last four characters.
| 0
| 1
| 0
| 0
|
2011-06-02T13:41:00.000
| 4
| 0
| false
| 6,215,324
| 0
| 0
| 0
| 4
|
I am writing a script that will be polling a directory looking for new files.
In this scenario, is it necessary to do some sort of error checking to make sure the files are completely written prior to accessing them?
I don't want to work with a file before it has been written completely to disk, but because the info I want from the file is near the beginning, it seems like it could be possible to pull the data I need without realizing the file isn't done being written.
Is that something I should worry about, or will the file be locked because the OS is writing to the hard drive?
This is on a Linux system.
|
How to detect in Iron Python what the script is being called from?
| 6,218,295
| 1
| 0
| 112
| 0
|
.net,mono,ironpython
|
You could use System.AppDomain.CurrentDomain.GetAssemblies() (assuming you don't use AppDomain isolation, of course) and see if that contains an assembly that would only be preset when your application is running.
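For illustration, a sketch of that check in IronPython; the host assembly name here is hypothetical and would be whatever your larger application is called:

import System

def running_embedded(host_assembly_name='MyHostApplication'):
    # The host's assemblies are only loaded when the script runs inside the
    # embedded engine, not when it is launched directly by ipy.exe.
    names = [a.GetName().Name
             for a in System.AppDomain.CurrentDomain.GetAssemblies()]
    return host_assembly_name in names

print(running_embedded())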
| 0
| 1
| 0
| 1
|
2011-06-02T16:49:00.000
| 2
| 1.2
| true
| 6,217,545
| 0
| 0
| 0
| 1
|
I have a ipy script that can be called either from an embedded console in a larger application or directly from the command line and I'm looking for a quick way to determine at run time which has occurred without having to pass an argument to differentiate the events.
Additionally the script has to run on both mono/linux and .net/windows.
Thanks in advance for any assistance.
|
Install python module to non default version of python on Mac
| 6,220,717
| 3
| 8
| 4,874
| 0
|
python,module
|
If you're installing through setuptools (ie python setup.py), it will install to the lib directory for the python executable you use (unless it's a broken package).
| 0
| 1
| 0
| 0
|
2011-06-02T20:54:00.000
| 3
| 0.197375
| false
| 6,220,274
| 1
| 0
| 0
| 1
|
I have a couple different versions of Python installed on my Mac. The default version is 2.5, so when I install a module it gets installed to 2.5. I need to be able to install some modules to a different version of Python because I am working on projects that use different versions. Any one know how to accomplish this? Thanks for your help.
|
How to backup python dependencies or modules already installed
| 6,223,955
| 7
| 5
| 7,121
| 0
|
python,dependencies,backup
|
If you have installed the packages with pip (an improved easy_install), you can just do pip freeze > my-reqs.txt to get a list of the installed packages and their versions. You can later install from that file with pip install -r my-reqs.txt.
Pip is meant to companion virtualenv, which can be used to handle per project requirements of dependent packages.
| 0
| 1
| 0
| 0
|
2011-06-03T05:46:00.000
| 2
| 1
| false
| 6,223,426
| 1
| 0
| 0
| 1
|
I have installed many Python modules, plugins and libraries on my CentOS system.
Now I don't want to install each thing again on separate computers.
Is there any way I can make a package (like an rpm or something similar) so that when I install it in a new location all the modules and dependencies also get installed, and every time I install something new I can update the install package?
|
How to install Python 2.7 devel if I have Python 2.7 in a different directory
| 6,224,291
| 7
| 5
| 3,612
| 0
|
python,dependencies,development-environment
|
Installing Python from source installs the development files in the same prefix.
| 0
| 1
| 0
| 0
|
2011-06-03T07:29:00.000
| 1
| 1.2
| true
| 6,224,228
| 1
| 0
| 0
| 1
|
I have python 2.7 installed in /opt/python2.7.
Now I want to install the devel packages for it but could not find them.
How can I install them so that they go with Python 2.7, not the default Python 2.4?
|
hide scripts in msi installer
| 6,226,348
| 0
| 0
| 188
| 0
|
python,installation,windows-installer
|
A solution would be temporary files. You can store them in Binary table and use two custom actions:
one which extracts them before the installation starts
another one which removes them when installation is finished
You can extract them in a temporary location, for example the user Temp folder.
| 0
| 1
| 0
| 0
|
2011-06-03T10:04:00.000
| 1
| 0
| false
| 6,225,631
| 1
| 0
| 0
| 1
|
I am creating a Python-based MSI installer,
by which I am executing some Python scripts while the installer is running.
But I don't want to deliver/install these scripts with the package; I just want to hide them in the MSI and use them while it's running. I tried using the Binary table in the MSI for this, but it didn't work. How should I do it?
|
How to continuously run a Python script on an EC2 server?
| 6,232,612
| 40
| 23
| 15,157
| 0
|
python,amazon-ec2
|
You have a few options.
You can add your script to cron to be run regularly.
You can run your script manually, and detach+background it using nohup.
You can run a tool such as GNU Screen, and detach your terminal and log out, only to continue where you left off later. I use this a lot.
For example:
Log in to your machine, run: screen.
Start your script and either just close your terminal or properly detach your session with: Ctrl+A, D, D.
Disconnect from your terminal.
Reconnect at some later time, and run screen -rD. You should see your stuff just as you left it.
You can also add your script to /etc/rc.d/ to be invoked on boot and always be running.
| 0
| 1
| 1
| 1
|
2011-06-03T20:53:00.000
| 3
| 1.2
| true
| 6,232,564
| 0
| 0
| 0
| 1
|
I've set up an Amazon EC2 server. I have a Python script that is supposed to download large amounts of data from the web onto the server. I can run the script from the terminal through ssh, however very often I lose the ssh connection. When I lose the connection, the script stops.
Is there a method where I tell the script to run from terminal and when I disconnect, the script is still running on the server?
|
Battling with wxpython
| 6,232,781
| 11
| 5
| 662
| 0
|
macos,wxpython,32bit-64bit
|
I don't have a Mac, but I read almost all the messages on the wxPython mailing list. As I understand it, you don't want to use the Python that came with your Mac. It has been modified for the Mac specifically somehow, so you should download a normal version of Python and install it.
As for the 32-bit question, with wxPython 2.8, you are correct. You are limited to 32-bit because of the Carbon API. However, if you scroll down the download page (http://wxpython.org/download.php) you will see that wxPython 2.9 has been released and it has a Cocoa build which (and I quote) "requires at least OSX 10.5, and supports either 32-bit or 64-bit architectures" and Python 2.7.
I highly recommend that you go and seek help on the wxPython mailing list. The author of wxPython is there and he uses a Mac and there are several other Mac addicts on the list too that answer these sorts of questions.
| 1
| 1
| 0
| 0
|
2011-06-03T20:58:00.000
| 1
| 1.2
| true
| 6,232,629
| 0
| 0
| 0
| 1
|
I've spent a very frustrating evening trying to get wxpython to work on my MacBook Pro (running Snow Leopard 10.6.6). From reading the various threads on this topic both here and on other websites this is my understanding so far:
If you are running python 2.6 or greater you can only work with wxpython if you access the 32-bit version
Typing python at the command line prompt reveals that I am using python 2.6.1.
Typing which python returns /usr/bin/python so I'm using the default version installed with my OS. This means that typing the following at the command line prompt
defaults write com.apple.versioner.python Prefer-32-Bit -bool yes should change the version I'm using to the 32 bit version.
With the above in place, I can now simply type the name of my python file (with the wx module imported) and my file will run successfully.
As you can no doubt guess, however, my file doesn't run successfully. I can't figure out what's going on, but maybe someone else can. Here are some other observations that might help...
typing help() and then modules yields the following message and then prints out the modules, including wx and wxpython
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pkgutil.py:110: DeprecationWarning: The wxPython compatibility package is no longer automatically generated or actively maintained. Please switch to the wx package as soon as possible.
__import__(name)
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pkgutil.py:110: DeprecationWarning: twisted.flow is unmaintained.
__import__(name)
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/python/filepath.py:12: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
import sha
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/words/im/__init__.py:8: UserWarning: twisted.im will be undergoing a rewrite at some point in the future.
warnings.warn("twisted.im will be undergoing a rewrite at some point in the future.")
Fri Jun 3 22:23:48 Paul-Pattersons-MacBook-Pro.local python[3208] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
_RegisterApplication(), FAILED TO establish the default connection to the WindowServer, _CGSDefaultConnection() is NULL.
Then examining the wx module specifially yields...
NAME
wx
FILE
/usr/local/lib/wxPython-unicode-2.8.12.0/lib/python2.6/site-packages/wx-2.8-mac-unicode/wx/__init__.py
Can anyone help?
|
Appengine ACL with Google Authentication
| 6,246,968
| 5
| 3
| 871
| 0
|
python,django,google-app-engine,authentication,authorization
|
You'll need to do this yourself: Implement the ACL with a datastore model keyed by the user's user_id, and fetch and check it on each request. The Users API doesn't provide anything like this built-in.
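A rough sketch of what that might look like with the Python db API; the model name and group string are made up:

from google.appengine.api import users
from google.appengine.ext import db

class AclEntry(db.Model):                 # one entity per user, keyed by user_id
    groups = db.StringListProperty()

def current_user_in_group(group):
    user = users.get_current_user()
    if user is None:
        return False
    entry = AclEntry.get_by_key_name(user.user_id())
    return entry is not None and group in entry.groups

# In a handler: if not current_user_in_group('Y'): self.error(403)

A moderator approving a registration would simply create or update the AclEntry for that user's user_id.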
| 0
| 1
| 0
| 0
|
2011-06-04T21:49:00.000
| 2
| 1.2
| true
| 6,239,612
| 0
| 0
| 1
| 1
|
I would like to implement ACL with Google Authentication. Need some pointer regarding the possibility of the same.
Use case:
Page X accessible only to myadmin@gmail.com
Page Y accessible for all belong to a group Y. After registration a moderator will add/reject the user to the group Y.
Pages are not accessible if user does not belong to any one of the above two. Unauthorized view is prohibited even though the user is authenticated successfully.
I am planning to use Django for my project, any support provided by Django would be useful.
Thanks in advance.
|
Ancestors in App Engine data store
| 6,243,119
| 5
| 0
| 300
| 0
|
python,google-app-engine,datastore,ancestor
|
When you create an entity with a parent, the entities are placed in the same Entity Group. Transactions in App Engine can only work within a single entity group, so if you need transactions, you need entity groups. If you don't need transaction, you don't need entity groups (in particular, to build relationships between entities that don't need transactional capabilities, you should use ReferenceProperties, not parent-child relationships.)
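A small sketch showing why the parent relationship matters; Account and Receipt are made-up models that must share an entity group to be updated in one transaction:

from google.appengine.ext import db

class Account(db.Model):
    balance = db.IntegerProperty(default=0)

class Receipt(db.Model):                  # child of an Account, same entity group
    amount = db.IntegerProperty()

def charge(account_key, amount):
    account = db.get(account_key)
    account.balance -= amount
    Receipt(parent=account, amount=amount).put()
    account.put()

account = Account()
account.put()
db.run_in_transaction(charge, account.key(), 10)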
| 0
| 1
| 0
| 0
|
2011-06-04T22:19:00.000
| 1
| 1.2
| true
| 6,239,737
| 0
| 0
| 1
| 1
|
I've been developing for Google App Engine for a while. One of the features I've noticed but haven't had an opportunity to use yet is "ancestors" in the data store.
What would be an example of a situation where this is useful?
|
remotely start Python program in background
| 6,258,988
| 2
| 3
| 2,313
| 0
|
python,fork,shelve,python-daemon
|
For those who come across this post in the future: python-daemon can still work. Just be sure to open the shelve dicts within the same process. Previously the shelve dicts were loaded in the parent process, so when python-daemon spawned a child process the dict handles were not passed on correctly. Once we fixed this, everything worked again.
Thanks to those who offered valuable comments on this thread!
| 0
| 1
| 0
| 1
|
2011-06-05T15:42:00.000
| 2
| 0.197375
| false
| 6,243,933
| 0
| 0
| 0
| 1
|
I need to use fabfile to remotely start a program on remote boxes from time to time, and get the results. Since the program takes a long while to finish, I wish to make it run in the background so I don't need to wait. So I tried os.fork() to make it work. The problem is that when I ssh to the remote box and run the program with os.fork() there, the program works fine in the background, but when I tried to use fabfile's run or sudo to start the program remotely, os.fork() did not work and the program just died silently. So I switched to python-daemon to daemonize the program. For a good while, it worked perfectly. But when I started making my program read some Python shelve dicts, python-daemon no longer worked. It seems that if you use python-daemon, the shelve dicts cannot be loaded correctly, and I don't know why. Besides os.fork() and python-daemon, does anyone have an idea what else I can try to solve my problem?
|
Binary Files on 32bit / 64bit systems?
| 6,244,838
| 2
| 3
| 1,348
| 0
|
python,serialization,32bit-64bit
|
As long as your struct format string uses "standard size and alignment" (< or >) rather than "native size and alignment" (@), your files can be used cross-platform.
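For example, this record layout is 9 bytes on any platform because '<' selects standard sizes with no padding:

import struct

record = struct.Struct('<IBf')       # 4-byte int, 1-byte unsigned char, 4-byte float
packed = record.pack(1234, 7, 2.5)

print(record.size)                   # 9, on both 32-bit and 64-bit machines
print(record.unpack(packed))         # (1234, 7, 2.5)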
| 0
| 1
| 0
| 1
|
2011-06-05T18:06:00.000
| 3
| 1.2
| true
| 6,244,799
| 0
| 0
| 0
| 1
|
I am using python struct module to create custom binary files.
The file itself has the following format:
4 bytes (integer)
1 byte (unsigned char)
4 bytes (float)
4 bytes (integer)
1 byte (unsigned char)
4 bytes (float)
.......................... (100000 such lines)
4 bytes (integer)
1 byte (unsigned char)
4 bytes (float)
Currently, I am using a 32bit machine to create these custom binary files. I am soon planning on switching to a 64bit machine.
Will I be able to read/write the same files using both {32bit / 64bit} machines? or should I expect compatibility issues?
(I will be using Ubuntu Linux for both)
|
Invoking shell script from a python script using root privileges
| 6,254,043
| 1
| 1
| 2,685
| 0
|
python,linux,shell,root,sudo
|
I'd put logic in the python_script.py to check its UID and fail if is not executed as root. if os.getuid() != 0:. That will ensure it only runs as root, ether by a root login, or sudo.
If you're getting permission denied when trying to execute the python_script.py, you need to set the execute bit on it. chmod +x python_script.py
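The UID check is just a couple of lines; a sketch:

import os
import sys

if os.getuid() != 0:
    sys.exit('python_script.py must be run as root, e.g.: sudo python python_script.py')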
| 0
| 1
| 0
| 0
|
2011-06-06T14:34:00.000
| 1
| 0.197375
| false
| 6,253,613
| 0
| 0
| 0
| 1
|
I'm trying to invoke a shell script shell_script.sh from a python script (python_script.py) using the call command. The shell_script.sh invokes a executable that requires root access to execute.
The python_script.py invokes shell_script.sh using subprocess.call().
See below:
subprocess.call(['/complete_path/shell_script.sh', 'param1', 'param2',
'param3'], shell=True)
When I try to execute the python script python_script.py it gives me permission denied.
I've tried different ways.
a) Invoke python with sudo - sudo python python_script.py
b) Invoke sudo into inside the call method - subprocess.call(['sudo' '/complete_path/shell_script.sh', 'param1', 'param2',
'param3'], shell=True)
What's the best way to resolve this.
Thanks.
|
Why does Python's queue.Queue.get() permit returning early from timeouts?
| 12,251,479
| 0
| 4
| 2,227
| 0
|
python,real-time
|
There is definitely a bug in Queue.get, at least in python 2.6.6.
On posix a queue.get(timeout=1) seems to exit (raising the Empty exception) almost immediately, whereas queue.get(timeout=2) is working fine.
I was using a single queue with concurrent threads calling get() on it...
| 0
| 1
| 0
| 0
|
2011-06-07T06:15:00.000
| 3
| 0
| false
| 6,261,244
| 0
| 0
| 0
| 2
|
UPDATE: This question is based on a faulty mental model of how Queue.get() was actually behaving, which was caused by some slightly ambiguous documentation but mainly by a buggy, hand-rolled implementation of timedelta.total_seconds(). I discovered this bug when trying to prove that the original answers were incorrect. Now that timedelta.total_seconds() is provided by Python (since 2.7), I will move to use that.
Sorry for the confusion.
This isn't a "Why does my code not run?" question, but a "What is the motivation behind this design decision?"
Since 2.3, Python's queue module contains a Queue class with a get method, that takes a timeout parameter. Here's the section from the manual:
Queue.get([block[, timeout]])
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. [...]
(Emphasis mine)
Note that it may raise an Empty exception even if it hasn't reached the timeout. In fact, I am seeing that behaviour on Ubuntu (but not Windows). It is bailing just a little early and it has had minor consequences on my code - I can code around it though.
Most blocking timeouts take a minimum timeout, which makes sense on a non-real-time OS, including Windows and Linux. There is no guarantee that the OS will context switch to your process or thread by any given deadline.
However, this one takes a maximum timeout. Can anyone explain how this design decision might make sense?
|
Why does Python's queue.Queue.get() permit returning early from timeouts?
| 6,261,388
| 5
| 4
| 2,227
| 0
|
python,real-time
|
I think you are misinterpreting the documentation. It isn't saying it might raise the Empty exception after less than timeout seconds, it is saying it will block for at most timeout seconds. It might block for less than that, if it can satisfy the get.
I realize you are saying you see it raising Empty early, but honestly, that sounds like either a bug, or that you are relying on more accuracy than the system can provide. (It does seem as though, to obey the exact wording of the specification, an implementation should round timeout down to the resolution of its timer rather than up as you seem to desire.)
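In other words, the intended usage is simply this (Python 2 module name shown); the call blocks for at most roughly the timeout and raises Empty if nothing arrived:

import Queue

q = Queue.Queue()
try:
    item = q.get(block=True, timeout=2)   # waits up to about 2 seconds
except Queue.Empty:
    item = None                           # nothing was available in time
print(item)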
| 0
| 1
| 0
| 0
|
2011-06-07T06:15:00.000
| 3
| 1.2
| true
| 6,261,244
| 0
| 0
| 0
| 2
|
UPDATE: This question is based on a faulty mental model of how Queue.get() was actually behaving, which was caused by some slightly ambiguous documentation but mainly by a buggy, hand-rolled implementation of timedelta.total_seconds(). I discovered this bug when trying to prove that the original answers were incorrect. Now that timedelta.total_seconds() is provided by Python (since 2.7), I will move to use that.
Sorry for the confusion.
This isn't a "Why does my code not run?" question, but a "What is the motivation behind this design decision?"
Since 2.3, Python's queue module contains a Queue class with a get method, that takes a timeout parameter. Here's the section from the manual:
Queue.get([block[, timeout]])
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. [...]
(Emphasis mine)
Note that it may raise an Empty exception even if it hasn't reached the timeout. In fact, I am seeing that behaviour on Ubuntu (but not Windows). It is bailing just a little early and it has had minor consequences on my code - I can code around it though.
Most blocking timeouts take a minimum timeout, which makes sense on a non-real-time OS, including Windows and Linux. There is no guarantee that the OS will context switch to your process or thread by any given deadline.
However, this one takes a maximum timeout. Can anyone explain how this design decision might make sense?
|
How to detect online users in a web application in Google App Engine
| 6,264,539
| 2
| 2
| 548
| 0
|
python,google-app-engine,session
|
HTTP is stateless, so there's no inherent definition of "online user". You could count the number of non-destroyed sessions you've created, but unless you've got a cron job that destroys old sessions, this won't give an accurate picture.
You basically need to decide how much time without a new page request counts as "online" and query for the sessions that have been updated in that range of time.
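A sketch of that query with the db API, assuming your session library keeps (or can be made to keep) a last_seen timestamp per session; the model here is hypothetical:

import datetime
from google.appengine.ext import db

class Session(db.Model):                       # whatever your session library stores
    user_id = db.StringProperty()
    last_seen = db.DateTimeProperty(auto_now=True)

def online_user_count(minutes=5):
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=minutes)
    return Session.all().filter('last_seen >', cutoff).count()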
| 0
| 1
| 0
| 0
|
2011-06-07T08:35:00.000
| 2
| 0.197375
| false
| 6,262,601
| 0
| 0
| 1
| 1
|
I am currently working on an application for Google App Engine, and I need some advice to detect the number of online users in the application. How can I do this?
I am using a session library. Do I need to overwrite the session methods (create_session, destroy_session increment/and decrement a value in datastore) or is there another method that I can use?
|
Installed Tornado and Python but Apache is still handling .py files
| 6,289,549
| 0
| 0
| 847
| 0
|
python,apache,webserver,tornado
|
So you have Apache as the web head and Tornado running behind it? Why not just use ProxyPass from port 80 to whatever port Tornado is running on.
You can't get Tornado to serve the .py files like PHP can do with .php files.
| 0
| 1
| 0
| 1
|
2011-06-08T09:39:00.000
| 3
| 0
| false
| 6,276,805
| 0
| 0
| 0
| 1
|
How do I get Tornado (or in general another server) to handle the .py files on my host, while Apache still handles the php files?
|
Build Cython and gevent on OSX
| 6,315,275
| 2
| 3
| 1,248
| 0
|
python,cython,gevent
|
Recompiling gevent-1.0dev and greenlet with the flags CFLAGS="-arch i386 -arch x86_64" solved my problem.
| 0
| 1
| 0
| 0
|
2011-06-08T10:16:00.000
| 2
| 1.2
| true
| 6,277,257
| 1
| 0
| 0
| 1
|
When I build gevent, I get an error
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.1/bin/cython", line 7, in
from Cython.Compiler.Main import main
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/site-packages/Cython-0.14.1-py2.7-macosx-10.4-i386.egg/Cython/Compiler/Main.py", line 19, in
import Code
ImportError: dlopen(/usr/local/Cellar/python/2.7.1/lib/python2.7/site-packages/Cython-0.14.1-py2.7-macosx-10.4-i386.egg/Cython/Compiler/Code.so, 2): no suitable image found. Did find:
/usr/local/Cellar/python/2.7.1/lib/python2.7/site-packages/Cython-0.14.1-py2.7-macosx-10.4-i386.egg/Cython/Compiler/Code.so: mach-o, but wrong architecture
I tried to specify architecture with CFLAGS="-arch x86_64", but it does not work.
|
Using PHP/Python to access data from a remote telnet server over SSH
| 6,283,332
| 0
| 0
| 899
| 0
|
php,python,ssh,telnet
|
Why don't you redirect your stdout to a file? Then use your PHP website framework to read the file and display the results.
| 0
| 1
| 0
| 1
|
2011-06-08T17:37:00.000
| 1
| 0
| false
| 6,282,891
| 0
| 0
| 0
| 1
|
I have a CentOS 5.5 server running a local telnet daemon (which is only accessible from localhost) which prints out a list of active users on our accounting system when sent the correct commands through telnet. I need to make this information available on our intranet website which uses PHP.
I've written a Python script using the telnetlib modules to run locally on the CentOS server which simply captures the output of the telnet command and prints them to stdout. I've setup key based ssh between the web server and the CentOS server to run the python script remotely from the web server. I can execute the script successfully from the web server using ssh and it prints the list of users to stdout on the webserver. I was hoping to be able to execute this SSH command and capture the results into a variable to display on the website.
However, I can't get exec(), shell_exec(), passthru() or any other PHP exec function to display the data. I'm well aware that I may be approaching this from the totally wrong angle! What would be the best way to capture the output from ssh?
|
Implement time-based quotas in python
| 6,286,944
| 1
| 2
| 232
| 0
|
python,quota
|
I'm not aware of any ready-made component, but it should be fairly simple to do this.
I would probably use a database table, containing two columns: user ID and timestamp. Each time a user (IP address?) wants a connection, you find all the entries with that user ID with a timestamp between now and 60 seconds ago. If it's under the limit, you add an entry and allow the connection; otherwise, you reject the connection.
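Instead of a database table, a purely in-memory sketch of the same idea (fine for a single Twisted process; the window and limit values are examples):

import time
from collections import defaultdict, deque

WINDOW = 60.0   # seconds
LIMIT = 10      # connections per client per window
_history = defaultdict(deque)

def allow_connection(client_id):
    now = time.time()
    timestamps = _history[client_id]
    while timestamps and timestamps[0] <= now - WINDOW:
        timestamps.popleft()          # forget entries older than the window
    if len(timestamps) >= LIMIT:
        return False                  # over quota: reject the connection
    timestamps.append(now)
    return True

print(allow_connection('client-42'))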
| 0
| 1
| 0
| 0
|
2011-06-09T00:31:00.000
| 1
| 1.2
| true
| 6,286,838
| 0
| 0
| 1
| 1
|
I need to implement a time-based quota in my python (twisted) application.
Is there an existing module, or other implementation that I should use as a reference?
Specifically, my application needs to ratelimit connections from clients, using rules like '10 connections per minute'.
There is a Google App Engine module name 'taskqueue' that seems to fit my needs, but I am not using GAE.
Thank you.
EDIT:
platform is linux
re: iptables; it needs to be in the application b/.c the quotas will not be based on source IP address, rather some application-specific data ('clientid', for example).
|
programs paralyzing each other on the server (c++ with openMP and python)
| 6,289,692
| 5
| 1
| 290
| 0
|
c++,python,openmp
|
There can be a number of reasons for this, for example:
Increased failure rate in the branch prediction
Exhausted CPU cache
Filled up the memory bus
Too much context switching (this have an effect on many things, including all the previous points)
| 0
| 1
| 0
| 1
|
2011-06-09T07:50:00.000
| 2
| 0.462117
| false
| 6,289,668
| 0
| 0
| 0
| 1
|
I have an urgent problem because my time is running out: I let my calculations process on a server with 8 cores therefore I'm using openMP in my c++ code and it works fine. Of course I'm not the only one who is using the server, so my capacity is not always 800%CPU.
But it happened now several times that someone who started his python prog on the machine paralyzed mine and his prog completely: Although I was still using around 500%CPU the code was running approx. 100x slower - for me and the other guy. Do you have an idea what the reason could be, how to prevent it?
|
What is the difference between an 'sdist' .tar.gz distribution and an python egg?
| 6,292,860
| 71
| 53
| 21,997
| 0
|
python,egg,sdist
|
setup.py sdist creates a source distribution: it contains setup.py, the source files of your module/script (.py files or .c/.cpp for binary modules), your data files, etc. The result is an archive that can then be used to recompile everything on any platform.
setup.py bdist (and bdist_*) creates a built distribution: it includes .pyc files, .so/.dll/.dylib for binary modules, .exe if using py2exe on Windows, your data files... but no setup.py. The result is an archive that is specific to a platform (for example linux-x86_64) and to a version of Python, and that can be installed simply by extracting it into the root of your filesystem (executables are in /usr/bin (or equivalent), data files in /usr/share, modules in /usr/lib/pythonX.X/site-packages/...). You can even build rpm archives that can be directly installed using your package manager.
| 0
| 1
| 0
| 0
|
2011-06-09T12:17:00.000
| 2
| 1.2
| true
| 6,292,652
| 1
| 0
| 0
| 1
|
I am a bit confused. There seem to be two different kind of Python packages, source distributions (setup.py sdist) and egg distributions (setup.py bdist_egg).
Both seem to be just archives with the same data, the python source files. One difference is that pip, the most recommended package manager, is not able to install eggs.
What is the difference between the two and what is 'the' way to do distribute my packages?
(Note, I am not wanting to distribute my packages through PyPI, but I want to use a package manager that fetches my dependencies from PyPI)
|
Python distributed application architecture on constrained corporate environment
| 6,331,843
| 0
| 0
| 381
| 0
|
python,architecture,distributed,environment
|
Answering myself: I found one possible solution.
I'm lucky because the console.py script is actually invoking many slave python scripts, each of them performing one single system check via standard third-party command-line tools which can be fired to check features on remote hosts.
Then, what I did was to modify the gui.py and console.py so that users can parametrically specify on which Windows host the checks must be carried out.
In this way, I can obtain a distributed application... but I've been lucky; what if one or more of the third-party command-line tools did not support checking features on remote hosts?
| 0
| 1
| 0
| 0
|
2011-06-10T14:34:00.000
| 1
| 0
| false
| 6,307,829
| 0
| 0
| 0
| 1
|
This is my scenario: I developed a Python desktop application which I use to probe the status of services/DBs on the very same machine it is running on.
My need is to monitor, using my application, two "brother" Window Server 2003 hosts (Python version is 2.5 for both). One of the hosts lies in my own LAN, the other one lies in another LAN which is reachable via VPN.
The application is composed by:
A Graphical User Interface (gui.py), which provides widgets to collect user inputs and launches the...
...business-logic script (console.py), which in turn invokes slave Python scripts that check the system's services and DB usage/account status/etc. The textual output of those checks is then returned back to the GUI.
I used to execute the application directly on each the two machines, but it would be great to turn it into a client/server application, so that:
users will just be supposed to run the gui.py locally
the gui.py will be supposed to communicate parameters to server versions of console.py which will be running on both of the Windows hosts
the servers will then execute system checks and report back the results to the client GUIs which will display them.
I thought about two possible solutions:
Create a Windows service on each of the Windows hosts, basically executing console.py's code and waiting for incoming requests from the clients
Open SSH connections from any LAN host to the designated Windows host and directly run console.py on it.
I am working on a corporate environment, which has some network and host constraints: many network protocols (like SSH) are filtered by our corporate firewall. Furthermore, I don't have Administration privileges onto the Windows hosts, so I can't install system services on them...this is frustrating!
I just wanted to ask if there is any other way to make gui.py and console.py communicate over the network and which I did not take into account. Does anyone have any suggestion? Please note that - if possible - I'm not going to ask ICT department to give me Administration privileges on the Windows hosts!
Thanks in advance!
|
Deploying Django project on EC2 with BitNami image
| 6,315,809
| 3
| 2
| 488
| 0
|
python,django,deployment,amazon-ec2,amazon-web-services
|
The answer is pretty obvious. If you start with the Bitnami stack you'll save yourself the hassle of installing and configuring the various components (web server, gateway, python and the required libs, DB, etc.).
So if your app is pretty straightforward (a typical web app) then sure, start with the BitNami stack. At most you'll reconfigure certain parts later on, as needed.
There's no particular joy in installing and configuring it all yourself, imo.
| 0
| 1
| 0
| 0
|
2011-06-11T11:35:00.000
| 1
| 0.53705
| false
| 6,315,654
| 0
| 0
| 1
| 1
|
I have a complex university project that requires building some specific libraries and the use of threads (AppEngine out of the question), and I want to deploy in on EC2 (Free tier deal).
I was wondering what would be best, to start with a bare linux distribution or the BitNami Django stack ?
I've seen similar questions here, but I'm looking for Pro's and Con's mainly.
|
limits of bulk email in GAE
| 6,337,957
| 2
| 1
| 196
| 0
|
python,google-app-engine,email,backend,task-queue
|
Enqueue a single task which sends emails sequentially, checking the wallclock time after each email. When the time approaches 10 minutes, chain another task to continue where the current task left off. If you want to send emails faster, parallelize this, and enqueue several tasks that each send emails to a subset of users.
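A rough sketch of the chained-task approach; the Voter model, sender address and task URL are all made up, and a real handler would take the offset from the request:

import time
from google.appengine.api import mail, taskqueue
from google.appengine.ext import db

class Voter(db.Model):
    email = db.StringProperty()

BATCH = 50
SAFETY_MARGIN = 9 * 60                # stop well before the 10-minute task limit

def send_batch(offset=0):             # invoked from the handler mapped to /tasks/send
    start = time.time()
    voters = Voter.all().fetch(BATCH, offset=offset)
    sent = 0
    for voter in voters:
        mail.send_mail(sender='elections@example.com', to=voter.email,
                       subject='Election notice', body='Please vote.')
        sent += 1
        if time.time() - start > SAFETY_MARGIN:
            break
    if sent < len(voters) or len(voters) == BATCH:
        # Either we ran out of time or there may be more voters left:
        # chain another task to carry on where this one stopped.
        taskqueue.add(url='/tasks/send', params={'offset': offset + sent})

(Offset-based fetching is fine for modest lists; query cursors scale better for very large ones.)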
| 0
| 1
| 0
| 0
|
2011-06-13T15:30:00.000
| 1
| 1.2
| true
| 6,332,562
| 0
| 0
| 1
| 1
|
I'm working on a voting app where I need to send an email to each voter to inform him or her about the election. I see three methods for doing this and I'm curious what the approximate limits on the number of emails I could send with each method:
In a user request, add a task to a task queue where each task sends one email. The limit here is how many tasks I can queue up in 30 seconds. Is there a way to estimate this reliably?
In a user request, add one task to a task queue where that one task adds tasks to a second task queue where each task in the second queue sends a single email. Since the limit here is 10 minutes, is it a reasonable estimate that I can send 20 times as many emails as with method 1?
Use a backend which doesn't have a time limit so I could presumably send as many emails as I need to.
If methods 1 or 2 could send a sufficient number of emails I would prefer to stick with them to avoid the extra complexity of using a backend. If it matters, I'm using the Python API.
|
GAE expanded log view
| 6,337,910
| 1
| 0
| 96
| 0
|
python,google-app-engine,logging
|
App Engine stores logging information in a set of circular buffers. When it runs out of space, it overwrites older log entries with the new data. What you're seeing is requests for which the detailed logs have been overwritten by newer requests.
| 0
| 1
| 0
| 0
|
2011-06-13T19:24:00.000
| 1
| 1.2
| true
| 6,335,248
| 0
| 0
| 1
| 1
|
This might not be a bug, but a feature. I'm having a problem viewing expanded logs when searching logs in the dashboard on App Engine.
Search results show the first couple of log entries in full detail, but the rest of the entries are obscured. Every new entry in the log is shown in full detail, but older ones get obscured over time.
The same behavior is reflected if I try to download logs from App Engine, only more log entries are not obscured.
The point is that I can't get the full log of my app, and I would like to be able to run some tasks over the data.
|
How to call a python script from Perl?
| 70,644,949
| 0
| 17
| 31,279
| 0
|
python,perl
|
If you want to see output in "real time" rather than only when the script has finished running, add -u after python. For example:
my $ret = system("python -u pdf2txt.py arg1 arg2");
| 0
| 1
| 0
| 1
|
2011-06-14T07:37:00.000
| 4
| 0
| false
| 6,340,479
| 0
| 0
| 0
| 1
|
I need to call "/usr/bin/pdf2txt.py" with few arguments from my Perl script. How should i do this ?
|
Python equivalent for curl -b (--cookie)
| 6,353,057
| 0
| 0
| 262
| 0
|
python
|
You can use CookieJar.add_cookie_header to add your cookie to a http request header.
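If you just need the -b behaviour (send a fixed cookie string with a request) rather than full cookie-jar handling, a simpler sketch sets the Cookie header directly with urllib2; the URL and cookie value are made up:

import urllib2

# Roughly the equivalent of: curl -b 'sessionid=abc123' http://example.com/
req = urllib2.Request('http://example.com/')
req.add_header('Cookie', 'sessionid=abc123')
print(urllib2.urlopen(req).read())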
| 0
| 1
| 0
| 1
|
2011-06-15T03:15:00.000
| 2
| 0
| false
| 6,352,644
| 0
| 0
| 0
| 1
|
I wish to convert a bash script that's currently using "curl -b 'cookie'" into a Python script. I've looked at Pycurl, but I couldn't find a -b equivalent. There are also urllib and urllib2, but I couldn't see an easy way to replicate the line.
Any help would be great.
|
Shell script replacement?
| 6,354,008
| 2
| 1
| 509
| 0
|
python,shell
|
Have you thought about using "set -e" if you can depend on the exit status of the programs you're running?
| 0
| 1
| 0
| 0
|
2011-06-15T06:37:00.000
| 5
| 0.07983
| false
| 6,353,957
| 0
| 0
| 0
| 2
|
I'm starting to get sick of using shell scripts to perform automation and glue code between things. I love using them for quick and dirty data processing, but even for a simple 3-line script that spawns a process and remembers its process id, it takes me a very long time to get it right.
For every command, if I don't explicitly check the return code, the script might terminate with exit code 0 even when I don't want it to, so each shell command gets followed by an if statement to see whether the program terminated correctly or not.
Passing variables around and writing a robust command-line argument parser is hard (something like optparse in Python).
It's very hard to debug.
I use Python for most of my work, and yet it feels a bit verbose when I'm trying to use it for shell-scripting purposes, once I start using the subprocess module.
I was wondering whether there's a good middle ground: either writing robust shell scripts without being so verbose, or writing less verbose automation scripts in a higher-level language such as Python.
|
Shell script replacement?
| 6,354,023
| 1
| 1
| 509
| 0
|
python,shell
|
What is the question? I don't think many would consider Python 'verbose'. It is brought up often to show how a language can NOT be verbose compared to, say, Java.
By the way, Perl, syntactically and historically, can be placed between shell-scripting and Python, I think.
| 0
| 1
| 0
| 0
|
2011-06-15T06:37:00.000
| 5
| 0.039979
| false
| 6,353,957
| 0
| 0
| 0
| 2
|
I'm starting to get sick of using shell scripts to perform automation and glue code between things. I love using them for quick and dirty data processing, but even for a simple 3-line script that spawns a process and remembers its process id, it takes me a very long time to get it right.
For every command, if I don't explicitly check the return code, the script might terminate with exit code 0 even when I don't want it to, so each shell command gets followed by an if statement to see whether the program terminated correctly or not.
Passing variables around and writing a robust command-line argument parser is hard (something like optparse in Python).
It's very hard to debug.
I use Python for most of my work, and yet it feels a bit verbose when I'm trying to use it for shell-scripting purposes, once I start using the subprocess module.
I was wondering whether there's a good middle ground: either writing robust shell scripts without being so verbose, or writing less verbose automation scripts in a higher-level language such as Python.
|
PyGTK Glade File Manager
| 6,441,716
| 0
| 0
| 622
| 0
|
python,pygtk,glade
|
I searched GTK API and I found that GTK TreeView does what I want.
| 1
| 1
| 0
| 0
|
2011-06-15T10:15:00.000
| 1
| 1.2
| true
| 6,356,190
| 0
| 0
| 0
| 1
|
I am trying to make a file manager in python that looks like GNOME default file manager (nautilus) Because I am developing a FilePane plugin for a python written text editor. I don't know if there is a widget to give me the same look of Nautilus. If not, what widgets can I use to get a nice looking file manager?
|
Twisted or Celery? Which is right for my application with lots of SOAP calls?
| 6,359,524
| 17
| 29
| 5,662
| 0
|
python,soap,concurrency,twisted,celery
|
Is either Celery or Twisted a more generally appropriate framework here?
Depends on what you mean by "generally appropriate".
If they'll both solve the problem adequately, are there pros/cons to using one vs the other?
Not an exhaustive list.
Celery Pros:
Ready-made distributed task queue, with rate-limiting, re-tries, remote workers
Rapid development
Comparatively shallow learning curve
Celery Cons:
Heavyweight: multiple processes, external dependencies
Have to run a message passing service
Application "processes" will need to fit Celery's design
Twisted Pros:
Lightweight: single process and not dependent on a message passing service
Rapid development (for those familiar with it)
Flexible
Probably faster, no "internal" message passing required.
Twisted Cons:
Steep learning curve
Not necessarily as easy to add processing capacity later.
I'm familiar with both, and from what you've said, if it were me I'd pick Twisted.
I'd say you'll get it done quicker using Celery, but you'd learn more while doing it by using Twisted. If you have the time and inclination to follow the steep learning curve, I'd recommend you do this in Twisted.
| 0
| 1
| 0
| 0
|
2011-06-15T12:34:00.000
| 2
| 1.2
| true
| 6,357,737
| 0
| 0
| 0
| 1
|
I'm writing a Python application that needs both concurrency and asynchronicity. I've had a few recommendations each for Twisted and Celery, but I'm having trouble determining which is the better choice for this application (I have no experience with either).
The application (which is not a web app) primarily centers around making SOAP calls out to various third party APIs. To process a given piece of data, I'll need to call several APIs sequentially. And I'd like to be able to have a pool of "workers" for each of these APIs so I can make more than 1 call at a time to each API. Nothing about this should be very cpu-intensive.
More specifically, an external process will add a new "Message" to this application's database. I will need a job that watches for new messages, and then pushes them through the Process. The process will contain 4-5 steps that need to happen in order, but can happen completely asynchronously. Each step will take the message and act upon it in some way, typically adding details to the message. Each subsequent step will require the output from the step that precedes it. For most of these Steps, the work involved centers around calling out to a third-party API typically with a SOAP client, parsing the response, and updating the message. A few cases will involve the creation of a binary file (harder to pickle, if that's a factor). Ultimately, once the last step has completed, I'll need to update a flag in the database to indicate the entire process is done for this message.
Also, since each step will involve waiting for a network response, I'd like to increase overall throughput by making multiple simultaneous requests at each step.
Is either Celery or Twisted a more generally appropriate framework here? If they'll both solve the problem adequately, are there pros/cons to using one vs the other? Is there something else I should consider instead?
|
App Engine TimeOut Error: Serving a third-party API with an image stored on App Engine
| 6,373,752
| 1
| 1
| 494
| 0
|
python,image,api,google-app-engine,timeout
|
If you directly use the App Engine URLfetch API, you can adjust the timeout for your request. The default is 5 seconds, and it can be increased to 10 seconds for normal handlers, or to 10 minutes for fetches within task queue tasks or cron jobs.
If the external API is going to take more than 10 seconds to respond, probably your best bet would be to have your email handler fire off a task that calls the API with a very high timeout set (although almost certainly it would be better to fix your "pretty bad encoding problems"; how bad can encoding binary data to POST be?)
To answer your first question: if you're using dev_appserver, no, you can't handle any requests at all while you've got an external request pending; dev_appserver is single-threaded and handles 1 request at a time. The production environment should be able to scale to do this; however, if you have handlers that are waiting 10 seconds for a urlfetch, the scheduler might not scale your application well since the latency of incoming requests is one of the factors in auto-scaling.
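A sketch of the direct urlfetch call with a raised deadline; the endpoint is hypothetical and image_data would come from the incoming mail attachment:

from google.appengine.api import urlfetch

image_data = 'raw image bytes here'            # taken from the mail message in practice

result = urlfetch.fetch(
    url='http://api.example.com/upload',
    payload=image_data,
    method=urlfetch.POST,
    deadline=10)                               # seconds; the default is 5
print(result.status_code)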
| 0
| 1
| 0
| 0
|
2011-06-16T10:21:00.000
| 1
| 1.2
| true
| 6,370,335
| 0
| 0
| 1
| 1
|
I'm building an application in Python on App Engine. My application receives images as email attachments. When an email comes in, I grab the image and need to send it to a third party API.
The first thing I did was:
1) make a POST request to the third party API with the image data
I stopped this method because I had some pretty bad encoding problems with urllib2 and a MultipartPostHandler.
The second thing I'm doing right now is
1) Put the image in the incoming email in the Datastore
2) Put it in the memcache
3) Send to the API an URL that serves the image (using the memcache or, if not found in the memcache, the Datastore)
The problem I read on my logs is: DeadlineExceededError: ApplicationError: 5
More precisely, I see two requests in my logs:
- first, the incoming email
- then, the third party API HTTP call to my image on the URL I gave him
The incoming email ends up with the DeadlineExceededError.
The third party API call to my application ends up fine, serving correctly the image.
My interpretation:
It looks like App Engine waits for a response from the third party API, then closes because of a timeout, and then serves the request made by the third party API for the image. Unfortunately, as the connection is closed, I cannot get the useful information provided by the third party API once it has received my image.
My questions:
1) Can App Engine handle a request from a host it supposes to get a response of?
2) If not, how can I bypass this problem?
|
Python KeyboardInterrupt button
| 6,371,096
| 7
| 1
| 796
| 0
|
python,linux,keyboardinterrupt
|
It's still CtrlC. Or you could send a SIGINT to the process.
| 0
| 1
| 0
| 0
|
2011-06-16T11:27:00.000
| 1
| 1.2
| true
| 6,371,060
| 1
| 0
| 0
| 1
|
I'm using red hat 5 linux, and I would like to know what key combination raises a KeyboardInterrupt exception in python 2.6. I know that it is Ctrl+ c under windows.
Regards,
|
Installing Mechanize using easy_install
| 6,395,374
| 0
| 0
| 248
| 0
|
python,mechanize,easy-install
|
What's your Python installation location?
How did you run easy_install?
easy_install probably uses the default user account permissions and you need to manually change file permissions / ownership so that non-admins can see the files.
Generally, easy_install is designed for single-user installations (development use) only; this applies on other platforms (Linux, UNIX) as well. If you wish to distribute an application/package that works well on your operating system, you need to repackage it in a format friendly to that operating system.
Python packages such as lxml come with an .EXE installer. Perhaps you could check how the installers of those packages were built and apply the same installer-creation approach to mechanize.
| 0
| 1
| 0
| 0
|
2011-06-18T06:26:00.000
| 1
| 1.2
| true
| 6,394,305
| 0
| 0
| 1
| 1
|
I've installed mechanize using easy_install on Windows 7 Admin account. However, when I try to setup/run another program that needs mechanize on a different account, it doesn't find it.
Any solutions?
|
What is the best way to give people your python program
| 6,396,113
| 0
| 2
| 176
| 0
|
python,linux,packaging
|
If you know a little C/C++, you could make a tiny C/C++ program which glues all your Python scripts together and packs the Python interpreter with it, making a pretty presentable executable. Google "Embedding Python scripts into a C/C++ application" and look for the CPython API reference in your Python docs.
| 0
| 1
| 0
| 0
|
2011-06-18T11:28:00.000
| 4
| 0
| false
| 6,395,657
| 1
| 0
| 0
| 1
|
I want to give my python program to some people, and they will run this in Linux. What is the best way to do this ? Is it better to give them every script - I have 5 of them, or make it into an installer like *.deb
Thank you.
|
What's the pythonic way to deal with worker processes that must coordinate their tasks?
| 6,404,965
| 2
| 13
| 526
| 0
|
python,concurrency,multiprocessing
|
Delegate the retrieval to a separate process which queues the requests until it is their turn.
| 0
| 1
| 0
| 0
|
2011-06-19T20:13:00.000
| 3
| 1.2
| true
| 6,404,872
| 1
| 0
| 0
| 2
|
I'm currently learning Python (from a Java background), and I have a question about something I would have used threads for in Java.
My program will use workers to read from some web-service some data periodically. Each worker will call on the web-service at various times periodically.
From what I have read, it's preferable to use the multiprocessing module and set up the workers as independent processes that get on with their data-gathering tasks. On Java I would have done something conceptually similar, but using threads. While it appears I can use threads in Python, I'll lose out on multi-cpu utilisation.
Here's the guts of my question: The web-service is throttled, viz., the workers must not call on it more than x times per second. What is the best way for the workers to check on whether they may request data?
I'm confused as to whether this should be achieved using:
Pipes as a way to communicate to some other 'managing object', which monitors the total calls per second.
Something along the lines of mmap, to share some data/value between the processes that describes whether they may call the web-service.
A Manager() object that monitors the calls per seconds and informs workers if they have permission to make their calls.
Of course, I guess this may come down to how I keep track of the calls per second. I suppose one option would be for the workers to call a function on some other object, which makes the call to the web-service and records the current number of calls/sec. Another option would be for the function that calls the web-service to live within each worker, and for them to message a managing object every time they make a call to the web-service.
Thoughts welcome!
|
What's the pythonic way to deal with worker processes that must coordinate their tasks?
| 6,405,185
| 2
| 13
| 526
| 0
|
python,concurrency,multiprocessing
|
I think that you'll find that the multiprocessing module will provide you with some fairly familiar constructs.
You might find that multiprocessing.Queue is useful for connecting your worker threads back to a managing thread that could provide monitoring or throttling.
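One possible shape for this, a token-bucket style throttler where a single managing process paces permissions through a Queue (the rates and worker counts are just examples):

import time
from multiprocessing import Process, Queue

def token_source(tokens, per_second):
    # Managing process: emit at most `per_second` permissions a second.
    while True:
        tokens.put(True)
        time.sleep(1.0 / per_second)

def worker(tokens, name):
    for _ in range(3):
        tokens.get()                          # blocks until a call is allowed
        print('%s may call the web service now' % name)

if __name__ == '__main__':
    tokens = Queue(maxsize=1)                 # no stored bursts
    source = Process(target=token_source, args=(tokens, 5))
    source.daemon = True
    source.start()
    workers = [Process(target=worker, args=(tokens, 'worker-%d' % i)) for i in range(3)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()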
| 0
| 1
| 0
| 0
|
2011-06-19T20:13:00.000
| 3
| 0.132549
| false
| 6,404,872
| 1
| 0
| 0
| 2
|
I'm currently learning Python (from a Java background), and I have a question about something I would have used threads for in Java.
My program will use workers to read from some web-service some data periodically. Each worker will call on the web-service at various times periodically.
From what I have read, it's preferable to use the multiprocessing module and set up the workers as independent processes that get on with their data-gathering tasks. On Java I would have done something conceptually similar, but using threads. While it appears I can use threads in Python, I'll lose out on multi-cpu utilisation.
Here's the guts of my question: The web-service is throttled, viz., the workers must not call on it more than x times per second. What is the best way for the workers to check on whether they may request data?
I'm confused as to whether this should be achieved using:
Pipes as a way to communicate to some other 'managing object', which monitors the total calls per second.
Something along the lines of mmap, to share some data/value between the processes that describes whether they may call the web-service.
A Manager() object that monitors the calls per seconds and informs workers if they have permission to make their calls.
Of course, I guess this may come down to how I keep track of the calls per second. I suppose one option would be for the workers to call a function on some other object, which makes the call to the web-service and records the current number of calls/sec. Another option would be for the function that calls the web-service to live within each worker, and for them to message a managing object every time they make a call to the web-service.
Thoughts welcome!
|
Prevent a file descriptor's closure on POSIX systems
| 6,407,237
| 1
| 3
| 244
| 0
|
python,c,unix,posix,system-calls
|
dup()'d file descriptors are not affected by close() calls of other instances; however, it's possible libvte may be calling some other shutdown methods which do change its state. Use strace to investigate in more detail.
Apart from that, there are a few things you can do, but none of them are very pretty. One option would be to replace the file descriptor from under libvte. That is:
First, use dup() to get your own copy of the fd, and stash it somewhere
Use dup2() to overwrite libvte's fd with one of your own choosing. This should be a new pty with a configuration similar to that of the one you're stealing, to avoid confusing libvte. Since you'll never write anything to the other end, reads will block (you'll need to do something with any data libvte may write down there!)
If libvte may be in a blocking read() at that very moment, send a signal to its thread (with a no-op handler, not SIG_IGN) to interrupt the read() call.
Do your work with the fd you duplicated at the start
To return to normal, use dup2() to put the fd back, then copy any pty state changes libvte may have made to the original descriptor.
Alternately, you can do as caf suggests, and simply have a proxy pty in there from the start.
| 0
| 1
| 0
| 0
|
2011-06-20T01:10:00.000
| 2
| 0.099668
| false
| 6,406,081
| 0
| 0
| 0
| 1
|
There is a library (libvte, a terminal emulation library) that uses a pair of file descriptors for a pty master/slave pair. I need to be able to "steal" the master fd from the library for my own use (in order to implement support for ZMODEM for the very rare occasion when the only link I have to the 'net is via a terminal). However, there is a problem.
You can tell libvte that you want to change the file descriptor to a new one, but then it attempts to close the master that it is using, and start using the new one instead. This won't work, because when the master is closed the slave goes away. Originally, I thought that it would be possible to use dup() on the pty master, such that when libvte did close() on the PTY master, I'd still have a functioning fd to use. That is apparently wrong.
I need to find a way to either:
Block libvte's read() operations on the fd.
Steal the fd away from libvte until I'm done using it (e.g., until the rz process that I am connecting it to exits)
Is it possible on a POSIX system to do either of these things? Or would there be some other way to accomplish the same thing without patching libvte itself? The reason that I ask is that the solution has to work on a fair number of existing systems.
If it is at all relevant, I'm interfacing with libvte (and GTK+ itself) via Python. However, I'd not be averse to writing a Python extension in C that I could then call from a Python program, because you don't have to be privileged on any system to load a Python extension.
If none of it is possible, I may be forced to fork libvte to do what I want it to do and distribute that with my program, but I don't want to do that --- I do not want to be stuck maintaining a fork!
|
Google App Engine __main__ module
| 6,418,275
| 2
| 1
| 332
| 0
|
python,google-app-engine
|
each script entry in your app.yaml will be executed as a __main__ module. If you only want a single __main__ then you need to run everything through a single entry-point and map everything via a single WSGIApplication instance.
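A minimal single entry-point module for the Python 2.5 runtime, assuming app.yaml routes every URL (url: /.*) to this one script so only it ever runs as __main__; the handler is just a placeholder:

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainHandler(webapp.RequestHandler):
    def get(self):
        self.response.out.write('served from module %s' % __name__)

application = webapp.WSGIApplication([('/.*', MainHandler)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()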
| 0
| 1
| 0
| 0
|
2011-06-20T17:53:00.000
| 1
| 1.2
| true
| 6,415,316
| 0
| 0
| 1
| 1
|
I'm building a data seeder module that looks for all models using introspection and the inspect module. I index the models I found by a string looking like module.model_name because there might be more modules with the same name in different modules.
The problem is that module sometimes is indeed the right module name, but sometimes it's __main__, probably because that specific module was the first one that was called to handle a URL after an instance was started. Is there anyway I can avoid this, perhaps by forcing a specific module to always be __main__?
This problems gets worse when I have multiple instances running at once because I also get inconsistent data between instances, each having a different __main__ module.
Thanks
|
Python (Portable 2.5) subprocess report problem "WindowsError: [Error 3] The system cannot find the path specified"
| 6,419,758
| 0
| 0
| 1,217
| 0
|
python,subprocess,portability,popen
|
Can 'cmd /c cmdstr' run correctly on windows?
| 0
| 1
| 0
| 0
|
2011-06-20T18:20:00.000
| 3
| 0
| false
| 6,415,651
| 0
| 0
| 0
| 3
|
I am using Python and the code all worked well with the non-portable version. However, I need to run the program on a computer that does not belong to me, which does not have Python installed and where installing it is not an option.
I am using Portable Python instead. However, the code that previously worked well now reports the error "WindowsError: [Error 3] The system cannot find the path specified". I checked it on my computer and it works smoothly without the above error. Can anybody give a clue?
The cmd I am using is :
p = subprocess.Popen(self.cmdStr, shell=False, stdout=subprocess.PIPE, stderr=file)
I am redirecting the stderr to a file I specified.
I also googled online. There seems to be an issue with "subprocess PATH semantics and portability". I am not sure whether this is the reason. Please help. Thank you.
|
Python (Portable 2.5) subprocess report problem "WindowsError: [Error 3] The system cannot find the path specified"
| 6,415,696
| 1
| 0
| 1,217
| 0
|
python,subprocess,portability,popen
|
Ah, the problem is in the cmdStr variable. You must use absolute paths, or else make sure the user the process is running under has an appropriately set-up PATH environment variable. On top of that you have shell=False, so the command string is not resolved by the shell, which can cause path problems in the subprocess module. Check the documentation for issues concerning paths etc.
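As a rough illustration (the executable and file paths here are made up; substitute your own):

import subprocess

# Give Popen the full path to the executable instead of relying on PATH lookup.
cmd = [r"C:\Tools\mytool.exe", "--input", r"C:\data\input.txt"]
p = subprocess.Popen(cmd, shell=False,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()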
| 0
| 1
| 0
| 0
|
2011-06-20T18:20:00.000
| 3
| 0.066568
| false
| 6,415,651
| 0
| 0
| 0
| 3
|
I am using Python and the code all worked well with the non-portable version. However, I need to run the program on a computer that does not belong to me, which does not have Python installed and where installing it is not an option.
I am using Portable Python instead. However, the code that previously worked well now reports the error "WindowsError: [Error 3] The system cannot find the path specified". I checked it on my computer and it works smoothly without the above error. Can anybody give a clue?
The cmd I am using is :
p = subprocess.Popen(self.cmdStr, shell=False, stdout=subprocess.PIPE, stderr=file)
I am redirecting the stderr to a file I specified.
I also googled online. There seems to be an issue with "subprocess PATH semantics and portability". I am not sure whether this is the reason. Please help. Thank you.
|
Python (Portable 2.5) subprocess report problem "WindowsError: [Error 3] The system cannot find the path specified"
| 6,419,798
| 0
| 0
| 1,217
| 0
|
python,subprocess,portability,popen
|
For example, subprocess.Popen(r"C:\Python27\python.exe", shell=True) works correctly, because the executable is given by its full path.
| 0
| 1
| 0
| 0
|
2011-06-20T18:20:00.000
| 3
| 0
| false
| 6,415,651
| 0
| 0
| 0
| 3
|
I am using Python and the code all worked well with the non-portable version. However, I need to run the program on a computer that does not belong to me, which does not have Python installed and where installing it is not an option.
I am using Portable Python instead. However, the code that previously worked well now reports the error "WindowsError: [Error 3] The system cannot find the path specified". I checked it on my computer and it works smoothly without the above error. Can anybody give a clue?
The cmd I am using is :
p = subprocess.Popen(self.cmdStr, shell=False, stdout=subprocess.PIPE, stderr=file)
I am redirecting the stderr to a file I specified.
I also googled online. There seems to be an issue with "subprocess PATH semantics and portability". I am not sure whether this is the reason. Please help. Thank you.
|
What is NamedTemporaryFile useful for on Windows?
| 6,416,978
| 9
| 3
| 2,825
| 0
|
python,windows
|
It states that you cannot open it a second time while it is still open. You can still use the name otherwise; just be sure to pass delete=False when creating the NamedTemporaryFile so that it persists after it is closed.
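A minimal sketch of that pattern (assuming Python 2.6+, where the delete argument exists):

import os
import tempfile

f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"some data")
f.close()                       # close it first; Windows blocks a second open while it is open

with open(f.name, "rb") as g:   # the name is now usable on Windows as well
    data = g.read()

os.unlink(f.name)               # with delete=False, cleanup is your responsibility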
| 0
| 1
| 0
| 0
|
2011-06-20T20:03:00.000
| 4
| 1.2
| true
| 6,416,782
| 1
| 0
| 0
| 3
|
The Python module tempfile contains both NamedTemporaryFile and TemporaryFile. The documentation for the former says
Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later)
What is the point of the file having a name if I can't use that name? If I want the useful (for me) behaviour of Unix on Windows, I've got to make a copy of the code and rip out all the bits that say if _os.name == 'nt' and the like.
What gives? Surely this is useful for something, since it was deliberately coded this way, but what is that something?
|
What is NamedTemporaryFile useful for on Windows?
| 6,416,972
| 1
| 3
| 2,825
| 0
|
python,windows
|
You don't want to "rip out all the bits...". It's coded like that for a reason. It says you can't open it a SECOND time while it's still open. Don't. Just use it once, and throw it away (after all, it is a temporary file). If you want a permanent file, create your own.
"Surely this is useful for something, since it was deliberately coded this way, but what is that something". Well, I've used it to write emails to (in a binary format) before copying them to a location where our Exchange Server picks them up & sends them. I'm sure there are lots of other use cases.
| 0
| 1
| 0
| 0
|
2011-06-20T20:03:00.000
| 4
| 0.049958
| false
| 6,416,782
| 1
| 0
| 0
| 3
|
The Python module tempfile contains both NamedTemporaryFile and TemporaryFile. The documentation for the former says
Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later)
What is the point of the file having a name if I can't use that name? If I want the useful (for me) behaviour of Unix on Windows, I've got to make a copy of the code and rip out all the bits that say if _os.name == 'nt' and the like.
What gives? Surely this is useful for something, since it was deliberately coded this way, but what is that something?
|
What is NamedTemporaryFile useful for on Windows?
| 6,421,756
| 0
| 3
| 2,825
| 0
|
python,windows
|
I'm pretty sure the Python library writers didn't just decide to make NamedTemporaryFile behave differently on Windows for laughs. All those _os.name == 'nt' tests will be there because of platform differences between Windows and Unix. So my inference from that documentation is that on Windows a file opened the way NamedTemporaryFile opens it cannot be opened again while NamedTemporaryFile still has it open, and that this is due to the way Windows works.
| 0
| 1
| 0
| 0
|
2011-06-20T20:03:00.000
| 4
| 0
| false
| 6,416,782
| 1
| 0
| 0
| 3
|
The Python module tempfile contains both NamedTemporaryFile and TemporaryFile. The documentation for the former says
Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later)
What is the point of the file having a name if I can't use that name? If I want the useful (for me) behaviour of Unix on Windows, I've got to make a copy of the code and rip out all the bits that say if _os.name == 'nt' and the like.
What gives? Surely this is useful for something, since it was deliberately coded this way, but what is that something?
|
How does python process a signal?
| 6,420,224
| 6
| 6
| 1,742
| 0
|
python,signals,extend
|
If you set a Python code signal handler using the signal module the interpreter will only run it when it re-enters the byte-code interpreter. The handler is not run right away. It is placed in a queue when the signal occurs. If the code path is currently in C code, built-in or extension module, the handler is deferred until the C code returns control to the Python byte code interpreter. This can be a long time, and you can't really predict how long.
Most notably, if you are using interactive mode with readline enabled, your signal handler won't run until you give it some input to interpret. This is because the input code is in the readline library (C code) and doesn't return to the interpreter until it has a complete line.
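A small example of the deferral (nothing library-specific here; it just shows where the handler actually runs):

import signal
import time

def handler(signum, frame):
    # This only executes once the interpreter is back in the byte-code loop.
    print("caught signal %d" % signum)

signal.signal(signal.SIGINT, handler)

while True:
    # If this loop were instead a single long-running C call, the handler
    # would be queued and run only after that call returned to Python.
    time.sleep(1)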
| 0
| 1
| 0
| 0
|
2011-06-21T03:40:00.000
| 2
| 1.2
| true
| 6,420,088
| 0
| 0
| 0
| 1
|
What is the workflow of processing a signal in Python? I set a signal handler; when the signal occurs, how does Python invoke my function? Does the OS invoke it just as in a C program?
If I am in a C extension of Python, is it interrupted immediately?
Now it's clear to me how the Python process handles a signal. When you set a signal via the signal module, the module registers a function signal_handler (see $src/Modules/signalmodule.c), which sets your handler and flags it as tripped (Handlers[sig_num].tripped = 1;), then calls Py_AddPendingCall to notify the Python interpreter. The interpreter will invoke Py_MakePendingCalls, which calls PyErr_CheckSignals, which calls your function in the main loop (see $src/Python/ceval.c).
Contact me if you want to talk about this: renenglish@gmail.com
|
Why is a thread slower than a subprocess? When should I use subprocess in place of thread and vice versa
| 6,422,313
| 7
| 4
| 363
| 0
|
python,multithreading,subprocess
|
Python (or rather CPython, the C-based implementation that is commonly used) has a Global Interpreter Lock (a.k.a. the GIL).
Some kind of locking is necessary to synchronize memory access when several threads are accessing the same memory, which is what happens inside a process. Memory is not shared between processes (unless you specifically allocate such memory), so no lock is needed there.
The globalness of the lock prevents several threads from running Python code in the same process. When running multiple processes, the GIL does not interfere.
So, Python code does not scale on threads, you need processes for that.
Now, had your Python code mostly been calling C-APIs (NumPy/OpenGL/etc), there would be scaling since the GIL is usually released when native code is executing, so it's alright (and actually a good idea) to use Python to manage several threads that mostly execute native code.
(There are other Python interpreter implementations out there that do scale across threads (like Jython, IronPython, etc) but these aren't really mainstream.. yet, and usually a bit slower than CPython in single-thread scenarios.)
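A quick way to see this for yourself (a rough benchmark sketch, nothing more):

import multiprocessing
import threading
import time

def burn(n):
    # CPU-bound pure-Python work: threads serialize on the GIL here,
    # separate processes do not.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == '__main__':
    # Threads: concurrent, but not parallel for this workload.
    start = time.time()
    threads = [threading.Thread(target=burn, args=(10**7,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("threads:   %.2fs" % (time.time() - start))

    # Processes: each has its own interpreter and GIL, so the work spreads across cores.
    start = time.time()
    pool = multiprocessing.Pool(4)
    pool.map(burn, [10**7] * 4)
    pool.close()
    pool.join()
    print("processes: %.2fs" % (time.time() - start))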
| 0
| 1
| 0
| 0
|
2011-06-21T08:09:00.000
| 1
| 1.2
| true
| 6,422,187
| 1
| 0
| 0
| 1
|
In my application, I have tried Python's threading and subprocess modules to open Firefox, and I have noticed that subprocess is faster than threading. What could be the reason behind this?
When should each be used in place of the other?
|
Program not running correctly from task scheduler
| 29,738,627
| 1
| 1
| 566
| 0
|
python,windows-7
|
In the Task Scheduler, click the Actions tab and then click Edit. By adding the directory of your program in the "Start in (optional):" field, you get rid of the message '.. is not recognized as an internal or external command, operable program or batch file.'
| 0
| 1
| 0
| 0
|
2011-06-21T19:38:00.000
| 2
| 0.099668
| false
| 6,431,022
| 0
| 0
| 0
| 2
|
We have phone recording software running on a Windows 7 box. We run a Python script that tells an audio converter program, sox, to convert the calls to mp3 format.
When we run the script by double clicking it, it works fine. But when we run it through the windows task scheduler, we get the error message.
'Sox is not recognized as an internal or external command, operable program or batch file.'
Anyone have any insight on this issue?
|
Program not running correctly from task scheduler
| 6,431,263
| 1
| 1
| 566
| 0
|
python,windows-7
|
You should add the program's full path or add the proper directory to your environment's PATH variable.
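In the Python script itself the same idea looks roughly like this; the sox install path below is an assumption, so adjust it to match your machine:

import subprocess

SOX = r"C:\Program Files (x86)\sox-14-4-0\sox.exe"   # hypothetical install location
# Calling sox by its absolute path means the scheduled task no longer depends on PATH.
subprocess.call([SOX, r"C:\calls\recording.wav", r"C:\calls\recording.mp3"])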
| 0
| 1
| 0
| 0
|
2011-06-21T19:38:00.000
| 2
| 0.099668
| false
| 6,431,022
| 0
| 0
| 0
| 2
|
We have phone recording software running on a Windows 7 box. We run a Python script that tells an audio converter program, sox, to convert the calls to mp3 format.
When we run the script by double clicking it, it works fine. But when we run it through the windows task scheduler, we get the error message.
'Sox is not recognized as an internal or external command, operable program or batch file.'
Anyone have any insight on this issue?
|
How to get around the need for multiple reactors in twisted
| 6,436,989
| 0
| 4
| 600
| 0
|
python,twisted,reactor
|
Call the Qt message checking/handling functions in the idle event of the Win32 reactor.
| 0
| 1
| 0
| 0
|
2011-06-22T08:23:00.000
| 2
| 0
| false
| 6,436,935
| 0
| 0
| 0
| 1
|
I am running a Qt application on Linux using the qt4reactor
The application sends and receives bytes on the serial port.
This works very well on Linux with the QtReactor
However when I port the application to windows then I have a problem.
On windows I use the SerialPort class from _win32SerialPort.
The doc string in _win32SerialPort is quite clear:
Requires PySerial and win32all, and needs to be used with win32eventreactor.
I assume the need to use win32eventreactor is because the addReader, addWriter methods are written for windows.
When the QtReactor is used, as soon as loseConnection is called on the transport, this calls loseConnection in twisted.internet.abstract which eventually calls the qt4reactor addWriter method (to flush the output).
This then creates a qt4reactor.TwistedSocketNotifier which tries to get a file descriptor number for select(). The abstract.fileno method is not overwritten by _win32SerialPort, so -1 is always returned and I get a
QSocketNotifier: Invalid Socket specified
I've seen many posts about multiple reactors not allowed in twisted, however I think I am correct here to assume that I need QtReactor for the Qt application and the win32eventreactor for the windows serial port.
Or is there some other workaround I can use ?
NOTE 1: when using QtReactor on windows, the serial ports work fine i.e. they can send and receive data. It is only when I close the application that I get "Invalid Socket specified"
Note 2: Now I found a workaround. I use the QtReactor, but when closing my application I do
serial.connectionLost(failure.Failure(Exception))
where serial is an instance of _win32serialport.SerialPort
This way abstract.loseConnection is never called which means that QtReactor addWriter is never called to flush the output. I suspect though that the best solution involves calling loseConnection and getting the output flushed properly.
|
Python multiprocessing
| 6,440,883
| 0
| 5
| 2,456
| 0
|
python,multicore,multiprocessing
|
Sounds like a good strategy, but you don't need the multiprocessing module for it; you need the subprocess module. subprocess is for running child processes from a Python program and interacting with them (stdin, stdout, pipes, etc.), while multiprocessing is more about distributing Python code to run in multiple processes to gain performance through parallelism.
Depending on the responsiveness strategy, you may also want to look at threading for launching subprocesses from a thread. This will allow you to wait on one subprocess while still being responsive on the queue to accept other jobs.
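A rough sketch of that shape; handle_result and the job format are placeholders for your own callback and queue payload:

import subprocess
import threading

def worker(job):
    # job is assumed to be an argv list, e.g. ["/opt/bin/cpp_program", "--param", "value"]
    p = subprocess.Popen(job, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    handle_result(job, p.returncode, out, err)   # hypothetical callback with the results

def dispatch(job):
    # One thread per running job keeps the manager free to keep reading from rabbitmq.
    t = threading.Thread(target=worker, args=(job,))
    t.daemon = True
    t.start()
    return t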
| 0
| 1
| 0
| 0
|
2011-06-22T13:15:00.000
| 3
| 0
| false
| 6,440,474
| 1
| 0
| 0
| 1
|
This question is more fact finding and thought process than code oriented.
I have many compiled C++ programs that I need to run at different times and with different parameters. I'm looking at using Python multiprocessing to read a job from a job queue (rabbitmq) and then feed that job to a C++ program to run (maybe via subprocess). I was looking at the multiprocessing module because this will all run on a dual Xeon server, so I want to take full advantage of the multiprocessor ability of my server.
The Python program would be the central manager and would simply read jobs from the queue, spawn a process (or subprocess?) with the appropriate C++ program to run the job, get the results (subprocess stdout & stderr), feed that to a callback and put the process back in a queue of processes waiting for the next job to run.
First, does this sound like a valid strategy?
Second, are there any type of examples of something similar to this?
Thank you in advance.
|
Making only one task run at a time in celerybeat
| 6,455,760
| 0
| 4
| 3,061
| 0
|
python,celery,celerybeat
|
You can try adding a class field to the object that holds the function you're running, and use that field as a "someone else is already working" flag.
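Another common approach, sketched below, is to take a short-lived lock in a store shared by all workers. This assumes a Django cache backend (e.g. memcached) is available to every worker; the key name and the work function are hypothetical:

from django.core.cache import cache

LOCK_EXPIRE = 60 * 5   # seconds; keep it longer than the task's worst-case run time

def run_exclusively():
    lock_id = "my-minutely-task-lock"            # hypothetical key name
    # cache.add only succeeds if the key does not already exist, so it acts as a lock
    if not cache.add(lock_id, "locked", LOCK_EXPIRE):
        return                                   # a previous run is still in progress
    try:
        do_the_actual_work()                     # hypothetical: your existing task body
    finally:
        cache.delete(lock_id)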
| 0
| 1
| 0
| 0
|
2011-06-23T13:45:00.000
| 4
| 0
| false
| 6,455,046
| 0
| 0
| 1
| 1
|
I have a task which I execute once a minute using celerybeat. It works fine. Sometimes though, the task takes a few seconds more than a minute to run because of which two instances of the task run. This leads to some race conditions that mess things up.
I can (and probably should) fix my task to work properly but I wanted to know if celery has any builtin ways to ensure this. My cursory Google searches and RTFMs yielded no results.
|
Where does Python look for library binaries?
| 6,465,119
| 5
| 4
| 917
| 0
|
python
|
Assuming you're on Linux, the OS looks for shared objects in the directories listed in /etc/ld.so.conf, /etc/ld.so.conf.d/* and $LD_LIBRARY_PATH.
| 0
| 1
| 0
| 0
|
2011-06-24T08:11:00.000
| 1
| 1.2
| true
| 6,465,053
| 1
| 0
| 0
| 1
|
I'm trying to bundle a Python library (fontforge) together so that my script runs on a machine without that library installed (but with Python installed). So far I tried copying ".so" files corresponding to "Missing library" errors to current directory, and while it worked for some, it didn't work for others, I'm getting "Missing library: libgunicode" even though I have libgunicode.so in current directory. Is there some setting I can adjust to get it to find it?
Edit: I'm on Ubuntu
Update: I got it to work by setting LD_LIBRARY_PATH=., then copying ".so" files into current directory until I got no more "library not found" messages
|
Handle server error python
| 6,467,479
| 1
| 0
| 207
| 0
|
python,api,error-handling
|
The easiest solution will probably be to validate the user input before you use it. A simple regular expression that checks the last parts of the domain the user has entered might be enough.
If you want to support arbitrary domains without a google\.[a-z]+ or appspot.com suffix, you will need another way to figure out whether the site matches your requirements. Unfortunately there is no "is-powered-by-google-or-has-a-google-like-login-page" header, so you will probably need to look at the content and use some heuristics to decide whether the page is likely to be such a page or not.
The kind of server error (500 internal server error) you are now encountering might mean a lot of things. This error indicates that there is something wrong with your application or server configuration. For example, if you deploy a script with a syntax error, the web server will respond with "server error" when someone tries to access it. Also, if you divide by 0 or try to access a non-existent element, this kind of error will be shown. So, server errors are just a very general name for programming errors, which should be avoided (and fixed!).
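A very rough sketch of the suffix check; the exact pattern depends on which Google-hosted domains you actually want to accept:

import re

GOOGLE_LIKE = re.compile(r"^([\w-]+\.)*(google\.[a-z]+|appspot\.com)$", re.IGNORECASE)

def looks_like_google_domain(domain):
    # Heuristic only: matches e.g. google.com, mail.google.com, myapp.appspot.com,
    # but not country TLDs like google.co.uk; widen the pattern if you need those.
    return bool(GOOGLE_LIKE.match(domain.strip()))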
| 0
| 1
| 0
| 0
|
2011-06-24T10:54:00.000
| 2
| 0.099668
| false
| 6,466,790
| 0
| 0
| 1
| 1
|
I'm working on an API where the user enters a domain and I need to redirect to the login page of that domain.
This works only for Google domains, and I need to handle the error created when the user enters a non-Google domain. I'm working on Google App Engine.
I'm new to this error handling, so kindly explain how it works along with the solution.
The error I received is
//Error: Server Error
The server encountered an error and could not complete your request.//
Thanks in advance
|