Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to debug python scripts that fork
| 23,942,909
| 0
| 5
| 2,992
| 0
|
python,debugging,fork
|
One possible way to debug a fork is to use pdb on the main process and winpdb on the fork. You put a software break early in the fork process and attach the winpdb app once the break has been hit.
It might be possible to run the program under winpdb and attach another instance after the fork - I haven't tried this. You certainly can't attach two winpdb instances at the same time, I've tried and it fails. If it works, this would be preferable - pdb really sucks.
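For the "software break early in the fork" part, winpdb's embedded debugger is one way to do it. A hedged sketch (rpdb2 ships with winpdb; the password is arbitrary):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: start winpdb's embedded debugger and block here until
    # a winpdb GUI attaches with the same password.
    import rpdb2
    rpdb2.start_embedded_debugger('mypassword')
    # ... child logic continues here under winpdb ...
else:
    # Parent: debug normally with pdb.
    import pdb
    pdb.set_trace()
    # ... parent logic continues here under pdb ...
```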
| 0
| 1
| 0
| 1
|
2011-09-01T09:39:00.000
| 5
| 0
| false
| 7,268,563
| 0
| 0
| 0
| 4
|
In perl debugger I can use DB::get_fork_TTY() to debug both parent and child process in different terminals. Is there anything similar in python debugger?
Or, is there any good way to debug fork in python?
|
How to debug python scripts that fork
| 7,268,624
| 3
| 5
| 2,992
| 0
|
python,debugging,fork
|
You can emulate the forked process by replacing the fork call so that its condition (pid == 0) is always true; the debugger will then work, because everything runs in the main process.
For debugging the interaction between multiple processes, detailed logs work better, in my experience.
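A minimal sketch of that fake-fork idea; the flag name is made up for illustration:

```python
import os

DEBUG_AS_CHILD = True  # hypothetical debug switch

def fork_or_fake():
    # Under the debugger, skip the real fork and pretend we are in
    # the child, where fork() would have returned 0.
    if DEBUG_AS_CHILD:
        return 0
    return os.fork()

pid = fork_or_fake()
if pid == 0:
    pass  # child logic: now steppable in plain pdb in the main process
else:
    pass  # parent logic
```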
| 0
| 1
| 0
| 1
|
2011-09-01T09:39:00.000
| 5
| 1.2
| true
| 7,268,563
| 0
| 0
| 0
| 4
|
In perl debugger I can use DB::get_fork_TTY() to debug both parent and child process in different terminals. Is there anything similar in python debugger?
Or, is there any good way to debug fork in python?
|
Logger Entity in App engine
| 7,342,091
| 1
| 2
| 157
| 0
|
python,google-app-engine,nosql,google-cloud-datastore
|
There are a few ways to do this:
Accumulate logs and write them in a single datastore put at the end of the request. This is the highest latency option, but only slightly - datastore puts are fairly fast. This solution also consumes the least resources of all the options. (A sketch of this option follows the list.)
Accumulate logs and enqueue a task queue task with them, which writes them to the datastore (or does whatever else you want with them). This is slightly faster (task queue enqueues tend to be quick), but it's slightly more complicated, and limited to 100kb of data (which hopefully shouldn't be a limitation).
Enqueue a pull task with the data, and have a regular push task or a backend consume the queue and batch-and-insert into the datastore. This is more complicated than option 2, but also more efficient.
Run a backend that accumulates and writes logs, and make URLFetch calls to it to store logs. The urlfetch handler can write the data to the backend's memory and return asynchronously, making this the fastest in terms of added user latency (less than 1ms for a urlfetch call)! This will require waiting for Python 2.7, though, since you'll need multi-threading to process the log entries asynchronously.
You might also want to take a look at the Prospective Search API, which may allow you to do some filtering and pre-processing on the log data.
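A minimal sketch of the first option above, using the old google.appengine.ext.db API; the entity model is hypothetical:

```python
from google.appengine.ext import db

class RequestLog(db.Model):  # hypothetical log entity
    created = db.DateTimeProperty(auto_now_add=True)
    lines = db.TextProperty()

class LogBuffer(object):
    def __init__(self):
        self.lines = []

    def log(self, msg):
        self.lines.append(msg)  # cheap in-memory append per request

    def flush(self):
        # One datastore put at the very end of the request.
        RequestLog(lines='\n'.join(self.lines)).put()
```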
| 0
| 1
| 0
| 0
|
2011-09-01T17:24:00.000
| 2
| 1.2
| true
| 7,274,049
| 0
| 0
| 1
| 2
|
Is it viable to have a logger entity in app engine for writing logs? I'll have an app with ~1500req/sec and am thinking about doing it with a taskqueue. Whenever I receive a request, I would create a task and put it in a queue to write something to a log entity (with a date and string properties).
I need this because I have to put statistics in the site that I think that doing it this way and reading the logs with a backend later would solve the problem. Would rock if I had programmatic access to the app engine logs (from logging), but since that's unavailable, I dont see any other way to do it..
Feedback is much welcome
|
Logger Entity in App engine
| 7,332,700
| 0
| 2
| 157
| 0
|
python,google-app-engine,nosql,google-cloud-datastore
|
How about keeping a memcache data structure of request info (recorded as requests arrive) and then running a cron job every 5 minutes (or faster) that crunches the stats on the last 5 minutes of requests from the memcache and records those stats in the datastore for that 5-minute interval. The same (or a different) cron job could then clear the memcache too, so that it doesn't get too big.
Then you can run big-picture analysis based on the aggregate of 5 minute interval stats, which might be more manageable than analyzing hours of 1500req/s data.
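A hedged sketch of the counter side of this (the memcache calls are the real GAE API; the key and the persistence step are placeholders):

```python
from google.appengine.api import memcache

def record_request(path):
    # Called on every request; incr is atomic and initial_value
    # creates the counter on first use.
    memcache.incr('hits:' + path, initial_value=0)

def cron_crunch():
    # Hypothetical 5-minute cron handler: read, persist, reset.
    count = memcache.get('hits:/some/path') or 0
    memcache.set('hits:/some/path', 0)
    # ... write a 5-minute stats entity to the datastore here ...
```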
| 0
| 1
| 0
| 0
|
2011-09-01T17:24:00.000
| 2
| 0
| false
| 7,274,049
| 0
| 0
| 1
| 2
|
Is it viable to have a logger entity in app engine for writing logs? I'll have an app with ~1500req/sec and am thinking about doing it with a taskqueue. Whenever I receive a request, I would create a task and put it in a queue to write something to a log entity (with a date and string properties).
I need this because I have to put statistics in the site that I think that doing it this way and reading the logs with a backend later would solve the problem. Would rock if I had programmatic access to the app engine logs (from logging), but since that's unavailable, I dont see any other way to do it..
Feedback is much welcome
|
Make running python script more responsive to ctrl c?
| 7,285,945
| 2
| 1
| 200
| 0
|
python
|
Rather than using blocking calls with long timeouts, use event-driven networking. This will allow you never to have long periods of time doing uninterruptable operations.
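Even without a full event framework, replacing one long blocking call with a short-timeout select loop gets most of the benefit. A sketch (host and port are placeholders):

```python
import select
import socket

sock = socket.create_connection(('example.com', 80))
sock.setblocking(False)

try:
    while True:
        # A short timeout means control returns to Python twice a
        # second, so Ctrl+C raises KeyboardInterrupt promptly instead
        # of waiting out a long blocking recv().
        readable, _, _ = select.select([sock], [], [], 0.5)
        if readable:
            data = sock.recv(4096)
            if not data:
                break  # peer closed the connection
except KeyboardInterrupt:
    sock.close()
    raise
```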
| 0
| 1
| 0
| 1
|
2011-09-02T15:39:00.000
| 2
| 1.2
| true
| 7,285,874
| 0
| 0
| 0
| 1
|
We are running a very large framework of python scripts for test automation and I do really miss the opportunity to kill a running python script with ctrl + c in some situations on Windows.
When the script might be doing some socket communications with long time-outs the only options sometimes is to kill the DOS window.. Is there any options I have missed?
|
How to tell whether sys.stdout has been flushed in Python
| 7,310,072
| 1
| 5
| 431
| 0
|
python,io,stdout
|
The answer is: you cannot tell (not without serious ugliness, an external C module, or similar).
The reason is that Python's file implementation is based on the C (stdio) implementation for FILE *. So an underlying Python file object basically just holds a reference to the opened FILE. When writing data, the C implementation writes the data, and when you call flush(), Python also just forwards the flush call. So Python does not hold that information. Even for the underlying C layer, a quick search suggests there is no documented API exposing this information; it is probably somewhere in the FILE object, so it could in theory be read out if it is that desperately needed.
| 0
| 1
| 0
| 0
|
2011-09-04T22:12:00.000
| 1
| 1.2
| true
| 7,302,450
| 1
| 0
| 0
| 1
|
I'm trying to debug some code I wrote, which involves a lot of parallel processes. And have some unwanted behaviour involving output to sys.stdout and certain messages being printed twice. For debugging purposes it would be very useful to know whether at a certain point sys.stdout has been flushed or not. I wonder if this is possible and if so, how?
Ps. I don't know if it matters but I'm using OS X (at least some sys commands depend on the operating system).
|
Redirecting one directory to another
| 7,305,485
| 0
| 1
| 106
| 0
|
python,windows,directory
|
Why don't you keep the folder(s) zipped and unzip the chosen one into a temp folder whenever the game loads? From there things would be simpler: since the data is in temp, you can delete it when the program exits, or let Windows clean it up.
This suggestion works well if the folder is relatively small (a few MBs).
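A minimal sketch of that approach (the archive name is hypothetical):

```python
import shutil
import tempfile
import zipfile

def load_folder(zip_path):
    # Unzip into a fresh temp dir; we (or the OS) remove it later.
    tmp = tempfile.mkdtemp(prefix='game_ai_')
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(tmp)
    return tmp

folder = load_folder('AI_1.zip')  # hypothetical archive name
try:
    pass  # point the game at `folder` and run it
finally:
    shutil.rmtree(folder, ignore_errors=True)
```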
| 0
| 1
| 0
| 0
|
2011-09-05T08:16:00.000
| 2
| 0
| false
| 7,305,455
| 1
| 0
| 0
| 2
|
I am working on a mod script in Python for a legacy game. This game looks for the folder "AI" in its installation directory. Now, everytime before the game runs, a certain folder is chosen (say, AI_1 or AI_2), which should behave as if it is the AI folder (the actual AI folder doesn't exist).
I thought of a few solutions:
Temporary rename AI_1 to AI, run the game and rename back.
Create a symbolic link pointing to AI_1 with the name AI.
Now both options are not looking optimal to me, because 1 is "dirty", and if the script exits unexpectedly it leaves behind trash, and 2 is hard to do on Windows. I have looked at NTFS junctions, but some users of this game run it from a FAT usb-stick and I don't want to leave them in the cold.
What is the best way to do this?
|
Redirecting one directory to another
| 7,305,874
| 0
| 1
| 106
| 0
|
python,windows,directory
|
I think the renaming option is fine. To work around the case where the script is terminated unexpectedly, put an additional file holding the original folder name into each of the AI_x folders. Then, on startup, just check for this file in the AI folder and rename the folder back to its original name.
Another variant is to add a single file to the game folder that stores the original name of whichever folder is currently renamed to AI.
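A minimal sketch of the marker-file recovery (file and folder names are made up for illustration):

```python
import os

MARKER = os.path.join('AI', '.original_name')  # hypothetical marker file

def restore_if_needed():
    # Run this first on every startup: if a previous run crashed
    # while a folder was renamed to AI, rename it back.
    if os.path.isfile(MARKER):
        original = open(MARKER).read().strip()
        os.remove(MARKER)
        os.rename('AI', original)

def activate(folder):  # e.g. activate('AI_1')
    os.rename(folder, 'AI')
    with open(MARKER, 'w') as f:
        f.write(folder)
```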
| 0
| 1
| 0
| 0
|
2011-09-05T08:16:00.000
| 2
| 1.2
| true
| 7,305,455
| 1
| 0
| 0
| 2
|
I am working on a mod script in Python for a legacy game. This game looks for the folder "AI" in its installation directory. Now, everytime before the game runs, a certain folder is chosen (say, AI_1 or AI_2), which should behave as if it is the AI folder (the actual AI folder doesn't exist).
I thought of a few solutions:
Temporary rename AI_1 to AI, run the game and rename back.
Create a symbolic pointing to AI_1 with the name AI.
Now both options are not looking optimal to me, because 1 is "dirty", and if the script exits unexpectedly it leaves behind trash, and 2 is hard to do on Windows. I have looked at NTFS junctions, but some users of this game run it from a FAT usb-stick and I don't want to leave them in the cold.
What is the best way to do this?
|
Python communicate with a subprocess
| 7,316,014
| 2
| 2
| 1,058
| 0
|
python,streaming,subprocess,producer-consumer
|
Well, I am new to Python, but it seems proc.communicate and proc.stdout.readline/readlines wait until the process has completed.
As far as I know, you can set up rotating logging and redirect the output to a file; then, using subprocess, you can fire tailf -n XX logfile in a loop until the program ends, and print the output whenever there is a request from the user end.
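A rough sketch of that idea, following the log with tail -f rather than blocking on the producer's stdout (the log file name is a placeholder):

```python
import subprocess

# Follow the producer's log; each line arrives as it is written
# instead of only when the producer exits.
tail = subprocess.Popen(['tail', '-f', '-n', '10', 'producer.log'],
                        stdout=subprocess.PIPE)
for line in iter(tail.stdout.readline, b''):
    print(line.rstrip())  # hand each line to whoever requested a sample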
| 0
| 1
| 0
| 0
|
2011-09-06T05:13:00.000
| 3
| 0.132549
| false
| 7,315,212
| 0
| 0
| 0
| 1
|
I have a subprocess which is constantly producing data, but most of the data I'm not interested in. However occasionally, at random times, I need to grab a sample of the output - the thing is I need to read it at well defined boundaries. For example, let's assume the process produces a constant 100 bytes per second and useful information comes in chunks of 100 bytes. After it has been running for 4 seconds, I ask to see 100 bytes of output, then I would be interested in bytes 400-499 inclusive. But if I ask at 4.1 seconds, I don't want to intercept and get bytes 410-509, I need to wait and see bytes 500-599. Otherwise, the process should be happily streaming its output to /dev/null and I don't want to ever block the output stream. My friend fred might also ask for 100 bytes at, say, 4.6 seconds, so I also need to tee off that stuff and make the data available to multiple consumers for reading.
Is there an existing design pattern for this kind of thing? How can I implement it with python subprocess, and ensure that communication with the subprocess is non-blocking?
|
Referencing an external library in a Python appengine project, using Pydev/Eclipse
| 7,329,481
| 4
| 5
| 979
| 0
|
python,google-app-engine,pydev
|
The dev_appserver and the production environment don't have any concept of projects or libraries, so you need to structure your app so that all the necessary libraries are under the application's root. The easiest way to do this, usually, is to symlink them in as subdirectories, or worst-case, to copy them (or, using version control, make them sub-repositories).
How that maps to operations in your IDE depends on the IDE, but in general, it's probably easiest to get the app structured as you need it on disk, and work backwards from that to get your IDE set up how you like it.
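As a concrete, purely illustrative example of the symlink option on a unix dev machine (the paths are hypothetical):

```python
import os

# Link the shared library into the app root so dev_appserver and
# deployment both see it as a plain subdirectory.
if not os.path.exists('myapp/common'):
    os.symlink('../shared/common', 'myapp/common')
```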
| 0
| 1
| 0
| 0
|
2011-09-07T04:46:00.000
| 1
| 1.2
| true
| 7,328,959
| 0
| 0
| 1
| 1
|
It's been a couple of months since I started developing in Python, having a C# and Java background myself.
I'm currently working on 2 different python/appengine applications, and as often happens in those cases, both application share common code - so I would like to refactor and move the common/generic code into a shared place.
In either Java or C# I'd just create a new library project, move the code into the new project and add a reference to the library from the main projects.
I tried the same in Python, but I am unable to make it work.
I am using Eclipse with Pydev plugin.
I've created a new Pydev project, moved the code, and attempted to:
reference the library project from the main projects (using Project Properties -> Project References)
add the library src folder into the main projects (in this case I get an error - I presume it's not possible to leave the project boundaries when adding an existing source folder)
add as external library (pretty much the same as google libraries are defined, using Properties -> External libraries)
Import as link (from Import -> File System and enabling "Create links in workspace")
In all cases I am able to reference the library code while developing, but when I start debugging, the appengine development server throws an exception because it can't find what I have moved into a separate library project.
Of course I've searched for a solution a lot, but it looks like nobody has experienced the same problem - or maybe nobody else needs to do the same :)
The closest solution I've been able to find is to add an ant script to zip the library sources and copy in the target project - but this way debugging is a pain, as I am unable to step into the library code.
Any suggestion?
Needless to say, the proposed solution must take into account that the library code has to be included in the upload process to appengine...
Thanks
|
running really long scripts - how to keep them running and start them again if they fail?
| 7,334,651
| 3
| 1
| 137
| 0
|
php,python,process,centos,process-management
|
is there any way to keep these processes running in such a way that all the variables are saved and i can restart the script from where it stopped?
Yes. It's called creating a "checkpoint" or "memento".
i know i can program this
Good. Get started. Each problem is unique, so you have to create, save, and reload the mementos.
but would prefer a generalised utility which could just keep these things running so that the script completed even if there were trivial errors.
It doesn't generalize well. Not all variables can be saved. Only you know what's required to restart your process in a meaningful way.
perhaps i need some sort of process-management tool?
Not really.
trivial errors eg. string encoding issues
Usually, we find these by unit testing. That saves a lot of programming to work around the error. An ounce of prevention is worth a pound of silly work-arounds.
sometimes because the process seems to get killed by the server.
What? You'd better find out why. An ounce of prevention is worth a pound of silly work-arounds.
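That said, for the checkpoint/memento idea at the top, a minimal sketch (what goes into `state` is entirely problem-specific, as noted above; the file name is a placeholder):

```python
import os
import pickle

STATE_FILE = 'checkpoint.pkl'  # hypothetical path

def save_checkpoint(state):
    # Write to a temp file and rename, so a crash mid-write never
    # leaves a corrupt checkpoint behind.
    tmp = STATE_FILE + '.tmp'
    with open(tmp, 'wb') as f:
        pickle.dump(state, f)
    os.rename(tmp, STATE_FILE)

def load_checkpoint():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, 'rb') as f:
            return pickle.load(f)
    return None  # no checkpoint yet: start from scratch
```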
| 0
| 1
| 0
| 1
|
2011-09-07T13:23:00.000
| 1
| 0.53705
| false
| 7,334,587
| 0
| 0
| 0
| 1
|
I need to run a bunch of long running processes on a CENTOS server.
If I leave the processes (python/php scripts) to run sometimes the processes will stop running because of trivial errors eg. string encoding issues or sometimes because the process seems to get killed by the server.
I try to use nohup and fire the jobs from the crontab
Is there any way to keep these processes running in such a way that all the variables are saved and I can restart the script from where it stopped?
I know I can program this into the code but would prefer a generalised utility which could just keep these things running so that the script completed even if there were trivial errors.
Perhaps I need some sort of process-management tool?
Many thanks for any suggestions
|
Installing and including envisage plugin
| 7,345,262
| 1
| 0
| 147
| 0
|
python,plugins
|
I don't have access to the Envisage plugins documentation, so I'm not sure how these are installed. In general what you need to do is open Window -> Preferences -> PyDev -> Interpreter - Python and check that the package directory is present in the System PYTHONPATH window. If it isn't, add it and press Apply.
If your plugins are installed in a standard location, i.e. site-packages, another option is to remove your current interpreter from the upper window and press Auto Config.
| 0
| 1
| 0
| 0
|
2011-09-08T07:53:00.000
| 1
| 1.2
| true
| 7,344,835
| 1
| 0
| 0
| 1
|
I just installed the envisagecore and envisageplugin in Ubuntu 10.04. I'm using Eclipse SDK and PyDev plugin. How can I import this plugin in Eclipse?
|
Python: how to launch scp with pexpect without OpenSSH GUI Password Prompt on Ubuntu?
| 7,353,518
| 1
| 1
| 905
| 0
|
python,pexpect
|
See the DISPLAY and SSH_ASKPASS section of man ssh-add.
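In short: when DISPLAY and SSH_ASKPASS are set, OpenSSH can fall back to the GUI password dialog; clearing them before spawning scp should force a terminal prompt that pexpect can drive. A hedged sketch (host, file, and password are placeholders):

```python
import os
import pexpect

env = dict(os.environ)
# Without DISPLAY/SSH_ASKPASS, OpenSSH prompts on the tty instead of
# popping up the GUI dialog.
env.pop('DISPLAY', None)
env.pop('SSH_ASKPASS', None)

child = pexpect.spawn('scp file.txt user@host:/tmp/', env=env)
child.expect('password:')
child.sendline('secret')
child.expect(pexpect.EOF)
```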
| 0
| 1
| 0
| 1
|
2011-09-08T17:19:00.000
| 1
| 1.2
| true
| 7,352,021
| 0
| 0
| 0
| 1
|
I'm attempting to automate scp commands with pexpect on Ubuntu. However, I keep getting a password GUI prompt with title "OpenSSH". How can I disable this behavior and use command line prompts instead?
|
Open a file as superuser in python
| 7,354,922
| 4
| 5
| 11,540
| 0
|
python,file-io
|
What you're looking for is called privilege escalation, and it very much depends on the platform you're running on. In general, what your program would have to do is run a portion as the superuser. On unix systems, for instance, you might be able to use sudo to read the contents of the file.
But as mentioned, this really depends on what system you're running on.
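A unix-only sketch of the sudo route (the path is a placeholder); the privileged read happens in the sudo child while the rest of the program stays unprivileged:

```python
import subprocess

def read_privileged(path):
    # sudo performs the root-only read; only its output crosses back.
    return subprocess.check_output(['sudo', 'cat', path])

contents = read_privileged('/etc/some_root_only_file')  # hypothetical path
```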
| 0
| 1
| 0
| 0
|
2011-09-08T21:37:00.000
| 4
| 0.197375
| false
| 7,354,834
| 1
| 0
| 0
| 2
|
I have to open a system file and read from it. This file is usually only readable by root (the super user). I have a way to ask the user for the superuser password. I would like to use this credentials to open the file and read from it without having my entire program running as a superuser process. Is there a way to achieve this in a multiplatform way?
|
Open a file as superuser in python
| 7,355,410
| 3
| 5
| 11,540
| 0
|
python,file-io
|
I would split the program in two:
1. A part that handles opening the file and accessing the contents. It can assume it's started with the privileges it needs.
2. Everything else, which doesn't require special privileges.
Put a config entry which describes how to exec or subprocess the command that requires extra privileges, i.e.:
access_special_file: sudo access_special_file
or
access_special_file: runas /user:AccountWithPrivs access_special_file
This offloads some of the system specifics for privilege escalation to the system shell where there may be more convenient ways of gaining the permissions you need.
| 0
| 1
| 0
| 0
|
2011-09-08T21:37:00.000
| 4
| 0.148885
| false
| 7,354,834
| 1
| 0
| 0
| 2
|
I have to open a system file and read from it. This file is usually only readable by root (the super user). I have a way to ask the user for the superuser password. I would like to use this credentials to open the file and read from it without having my entire program running as a superuser process. Is there a way to achieve this in a multiplatform way?
|
Python raw_input() limit with Mac OS X Terminal?
| 53,871,077
| 11
| 7
| 2,834
| 0
|
python,macos,terminal
|
I had this same experience, and found python limits the length of input to raw_input if you do not import the readline module. Once I imported the readline module, it lifted the limit (or at least raised it significantly enough to where the text I was using worked just fine).
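A minimal illustration of the fix (Python 2, to match the question's raw_input):

```python
# Importing readline swaps in GNU readline for interactive input;
# in my experience this is what lifts the raw_input() length limit.
import readline  # the import alone is enough; no calls are needed

text = raw_input('Paste a long line: ')
print len(text)
```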
| 0
| 1
| 0
| 0
|
2011-09-09T03:55:00.000
| 2
| 1
| false
| 7,357,007
| 0
| 0
| 0
| 2
|
I wrote a python script and have been running it in terminal on Mac OS X snow leopard using python2.6. I used raw_input() to import text in several places, but I seem to reach a limit where it will no longer accept any more characters.
Is this a limit in python raw_input() or is this something to do with Terminal or Mac OSX?
Is there a better way to have the user input larger amounts of text in python?
|
Python raw_input() limit with Mac OS X Terminal?
| 7,357,025
| 4
| 7
| 2,834
| 0
|
python,macos,terminal
|
I'd say it's a limitation/bug with the OSX Terminal - try running the script with input via IDLE and see whether you still hit the same problem.
As for better ways of dealing with large input - it totally depends on your requirements but some ways could be:
Import text from a file
Create some kind of GUI/frontend to handle text input via more user friendly controls
| 0
| 1
| 0
| 0
|
2011-09-09T03:55:00.000
| 2
| 1.2
| true
| 7,357,007
| 0
| 0
| 0
| 2
|
I wrote a python script and have been running it in terminal on Mac OS X snow leopard using python2.6. I used raw_input() to import text in several places, but I seem to reach a limit where it will no longer accept any more characters.
Is this a limit in python raw_input() or is this something to do with Terminal or Mac OSX?
Is there a better way to have the user input larger amounts of text in python?
|
error when running "Check for updates"
| 7,419,393
| 0
| 0
| 93
| 0
|
python,aptana
|
Looks like that filepath is set up as an update site in your preferences. I'd just remove it, since it looks invalid (maybe you installed a pydev zip from here?). Go to Preferences > Install/Update > Available Software Sites and then remove the entry for it.
| 0
| 1
| 0
| 1
|
2011-09-09T22:56:00.000
| 1
| 0
| false
| 7,368,288
| 0
| 0
| 0
| 1
|
I am running Aptana Studio 3, build: 3.0.4.201108101506.
When I run "Check for updates" I get the following error
"A Problem occurred"
No repository found at file:/C:/Users/Keith/AppData/Local/Aptana%20Studio%203/plugins/com.python.pydev_2.2.1.2011073123/.
Any help would be appreciated
|
os.system('exit') in python
| 7,368,546
| 1
| 2
| 10,910
| 0
|
python,process,terminal,exit,terminate
|
Remember that system first spawns/forks a sub-shell to execute its commands. In effect, you are asking only the sub-shell to exit.
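You can see this from the exit status (a small unix-flavoured illustration): only the child shell exits, and your script keeps running.

```python
import os

# 'exit 7' terminates the child shell that os.system() spawned,
# not this script; we just get the child's exit status back.
status = os.system('exit 7')
print(os.WEXITSTATUS(status))  # prints 7; the script carries on
```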
| 0
| 1
| 0
| 0
|
2011-09-09T23:34:00.000
| 3
| 0.066568
| false
| 7,368,523
| 0
| 0
| 0
| 2
|
My friend is in a macOS environment and he wanted to call os.system('exit') at the end of his python script to make the terminal close. It doesn't. This doesn't surprise me but I would like to know what exactly is going on between the python script and the terminal when this call is made.
In my mental simulation the terminal should have to tell you that there are still running jobs, but that doesn't happen either.
As a side question : will some less common terminals close when a process calls this?
|
os.system('exit') in python
| 7,368,553
| 2
| 2
| 10,910
| 0
|
python,process,terminal,exit,terminate
|
The system function starts another shell to execute a command. So in this case your Python script starts a shell and runs the "exit" command there, which makes that shell process exit. However, the Python script itself, including the terminal where it is running, continues to run. If the intent is to kill the terminal, you have to get the parent process ID and send it a signal requesting it to stop. That will kill both the Python script and the terminal.
| 0
| 1
| 0
| 0
|
2011-09-09T23:34:00.000
| 3
| 0.132549
| false
| 7,368,523
| 0
| 0
| 0
| 2
|
My friend is in a macOS environment and he wanted to call os.system('exit') at the end of his python script to make the terminal close. It doesn't. This doesn't surprise me but I would like to know what exactly is going on between the python script and the terminal when this call is made.
In my mental simulation the terminal should have to tell you that there are still running jobs, but that doesn't happen either.
As a side question : will some less common terminals close when a process calls this?
|
Installing packages from source into EPD Python on Mac OS X
| 7,372,654
| 0
| 0
| 791
| 0
|
python,macos,import
|
Workaround: Decided to simply add a .pth file to my site-packages directory which points to /usr/local/lib/python2.7/dist-packages
That is: Place in /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages
a file shogun.pth which simply contains: /usr/local/lib/python2.7/dist-packages/
| 0
| 1
| 0
| 0
|
2011-09-10T02:42:00.000
| 2
| 0
| false
| 7,369,271
| 1
| 0
| 0
| 1
|
I recently compiled the shogun library from source, but I'm not sure where I need to place the python files created. make install placed them in '/usr/local/lib/python2.7/dist-packages' which I assume is valid on linux systems.
sys.path in python doesn't have a dist-packages in its path, only a site-packages
|
Effective way to find total number of files in a directory
| 7,372,533
| 1
| 3
| 362
| 0
|
python,linux,algorithm,archlinux
|
@shadyabhi: if you have many subdirectories, maybe you can speed up the process by using os.listdir and multiprocessing.Process to recurse into each folder.
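For the plain single-process version, os.walk is the usual tool; a sketch with a placeholder root path:

```python
import os

def count_mp3s(root):
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        total += sum(1 for f in filenames if f.lower().endswith('.mp3'))
    return total

print(count_mp3s('/home/user/Music'))  # hypothetical path
```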
| 0
| 1
| 0
| 1
|
2011-09-10T13:05:00.000
| 2
| 0.099668
| false
| 7,371,878
| 0
| 0
| 0
| 2
|
I am building a Music file organizer(in python2) in which I read the metadata of all files & then put those file in the required folder.
Now, I am already ready with the command line interface but this script shows feedback in a way that it shows "Which file is it working on right now?".
If the directory contains say 5000 mp3 files, there should be some kind of feedback.
So, I would like to know the most efficient way to find the total
number of mp3s available in a directory (scanning recursively in all
subsequent directories too).
My idea is to keep track of the total files processed and show a progress bar according to that. Is there a better way (performance wise), please feel free to guide.
I want my app to not have any kind of platform dependent code. If there is serious performance penalty sticking to this idea, please suggest for linux.
|
Effective way to find total number of files in a directory
| 7,371,922
| 2
| 3
| 362
| 0
|
python,linux,algorithm,archlinux
|
I'm sorry to say it, but no: there isn't any way to do it more efficiently than recursively finding the files (at least not in a platform- or filesystem-independent way).
If the filesystem can help you it will, and you can't do anything to help it.
The reason it's not possible to do it without recursive scanning is how the filesystem is designed.
A directory can be seen as a file, and it contains a list of all files it contains. To find something in a subdirectory you have to first open the directory, then open the subdirectory and search that.
| 0
| 1
| 0
| 1
|
2011-09-10T13:05:00.000
| 2
| 1.2
| true
| 7,371,878
| 0
| 0
| 0
| 2
|
I am building a Music file organizer(in python2) in which I read the metadata of all files & then put those file in the required folder.
Now, I am already ready with the command line interface but this script shows feedback in a way that it shows "Which file is it working on right now?".
If the directory contains say 5000 mp3 files, there should be some kind of feedback.
So, I would like to know the most efficient way to find the total
number of mp3s available in a directory (scanning recursively in all
subsequent directories too).
My idea is to keep track of the total files processed and show a progress bar according to that. Is there a better way (performance wise), please feel free to guide.
I want my app to not have any kind of platform dependent code. If there is serious performance penalty sticking to this idea, please suggest for linux.
|
PHP exec() command won't launch python script using sendkeys
| 7,380,933
| 0
| 2
| 1,377
| 0
|
php,python,windows,apache,exec
|
Figured it out thanks to the excellent help from Winston Ewert and Gringo Suave.
I set Apache's service to the Local System Account and gave it access to interact with the desktop. This should help if you have Windows XP or Server 2003, but on Vista and newer there's an Interactive Services Detection prompt that pops up when you try to launch GUI applications from PHP. Every command was executing correctly, but was doing so in Session 0. This is because Apache was installed as a service. For most people I would think that reinstalling without setting up Apache as a service would work, but I was considering moving to XAMPP anyway, so having to uninstall Apache helped push my decision.
Ultimately all of the codes I wrote in my original post now work as a result, and my project can move forward. I hope someone else stumbles across this and gets as much help from Winston Ewert and Gringo Suave as I did! Thank you both very much!
| 0
| 1
| 0
| 1
|
2011-09-11T02:20:00.000
| 2
| 0
| false
| 7,375,924
| 0
| 0
| 0
| 1
|
First Note: Sorry this is long. Wanted to be thorough.
I really hate to ask a question when there's so much out there online but its been a week of searching and I have nothing to show for it. I'd really appreciate some help. I am a noob but I learn very fast and am more than willing to try alternate languages or whatever else it might take.
The goal:
What I'm trying to do is build a Netflix remote (personal use only) that controls Netflix on the server (Windows 7 PC 32-bit) via keyboard shortcuts (example: spacebar to pause) after a button is pressed in a php page on my ipod touch or android phone. Currently the remote uses USBUIRT to control the TV and IR devices without issue. If you have any alternate methods (that I can build, not buy) to suggest or other languages I could learn that can achieve this, I'm happy to learn.
The issue:
PHP's exec() and system() commands will not launch the python script (nor an exe compiled with py2exe) that simply presses the Windows key (intended to press the key on the server, not the machine loading the php page). I can use USBUIRT's UUTX.exe passing arguments with exec() to control IR devices without issue. But my exe, py, nor pyw files work. I've even tried calling a batch file that then launches the python script and that batch will not launch. The page refreshes and no errors are displayed.
Attempted:
Here's a code that works
$exec = exec("c:\\USBUIRT\\UUTX.exe -r3 -fC:\\USBUIRT\\Pronto.txt LED_Off", $results);
Here's a few attempts that don't work
$exec = exec("c:\\USBUIRT\\test.py", $results);
$exec = exec("python c:\\USBUIRT\\test.py", $results);
$exec = exec("C:\\python25\\python.exe c:\\USBUIRT\\test.py", $results);
All of those I've tried without the dual backslashes and with forward slashes and dual forward slashes. I've left off passing it to variable $exec and that makes no difference. $result outputs
Arraystring(9) "
Copying everything in the exec() into command line works correctly. I've tried moving the file to the htdocs folder, changed folder permissions, and made sure I'm not in safemode in php. Var_dump returns: Array" Using a foreach loop gives no info from the array.
My logs for Apache show only
[Sat Sep 10 19:54:09 2011] [error] [client 127.0.0.1] File does not exist: C:/Program Files/Apache Software Foundation/Apache2.2/htdocs/announce
Setup: Apache 2.2, python 2.5, and php 5.3. Running this on Windows 7 and only connect on the local network, no vpn or the like. Given every associated folder (python, htdocs, the cmd.exe file, usbuirt folder) IUSR, admins, users, and everyone with full control just for initial testing (later I'll of course tighten security up). Safe mode is off on php as well.
Notes: This code I saw on another similar issue doesn't work:
exec("ping google.com -n 1");
No errors in error.log nor event viewer. Putting it inside ob_start(); and getting the results with ob_get_clean(); gives me absolutely nothing. No text or anything at all. I've tried a lot more but I've already written a novel on here so I'll just have to answer the rest as we go. I'll post the full php source or the python script if that is needed but all it does is import sendkeys and press the windows key to pop open the start menu as a basic visual test. I don't know if its permissions, the way I have my setup running, my coding... I just don't know anymore. And again I apologize this is so long and if you do answer, I really appreciate you taking the time to read all this to help out a total stranger.
|
Does Python's read() always return requested size except at EOF?
| 7,383,480
| 1
| 2
| 974
| 0
|
python,file-io
|
On CPython, it will always return the number of bytes requested, unless EOF is reached.
| 0
| 1
| 0
| 0
|
2011-09-12T04:59:00.000
| 2
| 1.2
| true
| 7,383,464
| 1
| 0
| 0
| 2
|
Does the Python read() method behave like C's read? Might it return less than the requested number of bytes before the last chunk of the file is reached? Or does it guarantee to always return the full amount of bytes when those bytes exist to be read?
|
Does Python's read() always return requested size except at EOF?
| 7,384,163
| 3
| 2
| 974
| 0
|
python,file-io
|
Well, the Python Standard library says this about file.read([size]):
Read at most size bytes from the file (less if the read hits EOF before obtaining size bytes). If the size argument is negative or omitted, read all data until EOF is reached. ... An empty string is returned when EOF is encountered immediately. ... Also note that when in non-blocking mode, less data than was requested may be returned, even if no size parameter was given.
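So portable code that needs exactly n bytes should loop; a small sketch:

```python
def read_exact(f, n):
    # Keep reading until n bytes arrive or EOF: safe even for streams
    # (pipes, sockets, non-blocking files) where a single read() may
    # legally return fewer bytes than requested.
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = f.read(remaining)
        if not chunk:  # EOF
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b''.join(chunks)
```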
| 0
| 1
| 0
| 0
|
2011-09-12T04:59:00.000
| 2
| 0.291313
| false
| 7,383,464
| 1
| 0
| 0
| 2
|
Does the Python read() method behave like C's read? Might it return less than the requested number of bytes before the last chunk of the file is reached? Or does it guarantee to always return the full amount of bytes when those bytes exist to be read?
|
Fastest/most efficient in App Engine, local file read or memcache hit?
| 7,400,621
| 6
| 5
| 478
| 0
|
python,performance,google-app-engine
|
If they are just a few kilobytes I would load them into instance memory; amongst the storage choices (memcache, datastore, blobstore and so on) on Google App Engine, the instance memory option should be the fastest.
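A minimal sketch of caching in instance memory (module-level state survives across requests on the same instance; the template path is a placeholder):

```python
_TEMPLATE_CACHE = {}  # module-level: lives as long as the instance

def get_template(name):
    # The first request on this instance reads the file; every later
    # request is served straight from memory.
    if name not in _TEMPLATE_CACHE:
        _TEMPLATE_CACHE[name] = open('templates/' + name).read()
    return _TEMPLATE_CACHE[name]
```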
| 0
| 1
| 0
| 0
|
2011-09-13T09:58:00.000
| 1
| 1.2
| true
| 7,399,965
| 0
| 0
| 1
| 1
|
I have a couple of smaller asset files (text templates typically 100 - a few K bytes) in my app that I'm considering caching using memcached. But does anyone here know if loading a local file or requesting it from memcache is the fastest/most resource efficient?
(I'll be using the Python version of App Engine)
|
Python command to create no-arch rpm's
| 7,531,272
| 1
| 0
| 116
| 0
|
python,rpm,distutils
|
If your software does not contain extension modules (modules written in C/C++), distutils will make the RPM noarch. I don’t think there’s a way to explicitly control it.
| 0
| 1
| 0
| 1
|
2011-09-13T10:08:00.000
| 1
| 0.197375
| false
| 7,400,099
| 0
| 0
| 0
| 1
|
I am creating rpm's for my project which is in pure python. I am running the command
python setup.py bdist_rpm
to build the rpm. This is creating architecture-specific rpm's (x86 or x86-64). What I would like is a no-arch rpm. Can any Python gurus help me with creating a no-arch rpm? Any help would be appreciated. Thanks in advance.
|
What's the best pattern to design an asynchronous RPC application using Python, Pika and AMQP?
| 10,063,487
| 0
| 9
| 5,996
| 0
|
python,design-patterns,rabbitmq,amqp,pika
|
Your setup sounds good to me. And you are right, you can simply set the callback to start a thread and chain that to a separate callback when the thread finishes to queue the response back over Channel B.
Basically, your consumers should have a queue of their own (of size N, the amount of parallelism they support). When a request comes in via Channel A, it should be placed on the queue shared between the main thread running Pika and the worker threads in the thread pool. As soon as it is queued, Pika should respond back with an ACK, and a worker thread would wake up and start processing.
Once the worker is done with its work, it would queue the result back on a separate result queue and issue a callback to the main thread to send it back to the consumer.
You should take care and make sure that the worker threads are not interfering with each other if they are using any shared resources, but that's a separate topic.
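A rough sketch of that hand-off (Python 2 era, matching the Pika of the time; process() and the queue sizes are placeholders, and the exact ack/reply plumbing depends on your Pika version):

```python
import threading
import Queue  # 'queue' on Python 3

N = 4  # matches prefetch_count
tasks = Queue.Queue(maxsize=N)
results = Queue.Queue()

def process(body):
    return body  # placeholder for the real computation

def worker():
    while True:
        body, reply_to = tasks.get()
        results.put((process(body), reply_to))
        tasks.task_done()

for _ in range(N):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

def on_request(channel, method, properties, body):
    # Runs on Pika's ioloop thread: ack and hand off immediately so
    # the callback never blocks on the actual work. A separate ioloop
    # callback would drain `results` and publish replies on Channel B.
    channel.basic_ack(delivery_tag=method.delivery_tag)
    tasks.put((body, properties.reply_to))
```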
| 0
| 1
| 0
| 0
|
2011-09-13T14:32:00.000
| 3
| 0
| false
| 7,403,585
| 0
| 0
| 0
| 2
|
The producer module of my application is run by users who want to submit work to be done on a small cluster. It sends the subscriptions in JSON form through the RabbitMQ message broker.
I have tried several strategies, and the best so far is the following, which is still not fully working:
Each cluster machine runs a consumer module, which subscribes itself to the AMQP queue and issues a prefetch_count to tell the broker how many tasks it can run at once.
I was able to make it work using SelectConnection from the Pika AMQP library. Both consumer and producer start two channels, one connected to each queue. The producer sends requests on channel [A] and waits for responses in channel [B], and the consumer waits for requests on channel [A] and send responses on channel [B]. It seems, however, that when the consumer runs the callback that calculates the response, it blocks, so I have only one task executed at each consumer at each time.
What I need in the end:
the consumer [A] subscribes his tasks (around 5k each time) to the cluster
the broker dispatches N messages/requests for each consumer, where N is the number of concurrent tasks it can handle
when a single task is finished, the consumer replies to the broker/producer with the result
the producer receives the replies, update the computation status and, in the end, prints some reports
Restrictions:
If another user submits work, all of his tasks will be queued after the previous user (I guess this is automatically true from the queue system, but I haven't thought about the implications on a threaded environment)
Tasks have an order to be submitted, but the order they are replied is not important
UPDATE
I have studied a bit further and my actual problem seems to be that I use a simple function as callback to the pika's SelectConnection.channel.basic_consume() function. My last (unimplemented) idea is to pass a threading function, instead of a regular one, so the callback would not block and the consumer can keep listening.
|
What's the best pattern to design an asynchronous RPC application using Python, Pika and AMQP?
| 15,245,534
| 0
| 9
| 5,996
| 0
|
python,design-patterns,rabbitmq,amqp,pika
|
Being inexperienced in threading, my setup would run multiple consumer processes (the number of which basically being your prefetch count). Each would connect to the two queues and process jobs happily, unaware of each other's existence.
| 0
| 1
| 0
| 0
|
2011-09-13T14:32:00.000
| 3
| 0
| false
| 7,403,585
| 0
| 0
| 0
| 2
|
The producer module of my application is run by users who want to submit work to be done on a small cluster. It sends the subscriptions in JSON form through the RabbitMQ message broker.
I have tried several strategies, and the best so far is the following, which is still not fully working:
Each cluster machine runs a consumer module, which subscribes itself to the AMQP queue and issues a prefetch_count to tell the broker how many tasks it can run at once.
I was able to make it work using SelectConnection from the Pika AMQP library. Both consumer and producer start two channels, one connected to each queue. The producer sends requests on channel [A] and waits for responses in channel [B], and the consumer waits for requests on channel [A] and send responses on channel [B]. It seems, however, that when the consumer runs the callback that calculates the response, it blocks, so I have only one task executed at each consumer at each time.
What I need in the end:
the consumer [A] subscribes his tasks (around 5k each time) to the cluster
the broker dispatches N messages/requests for each consumer, where N is the number of concurrent tasks it can handle
when a single task is finished, the consumer replies to the broker/producer with the result
the producer receives the replies, update the computation status and, in the end, prints some reports
Restrictions:
If another user submits work, all of his tasks will be queued after the previous user (I guess this is automatically true from the queue system, but I haven't thought about the implications on a threaded environment)
Tasks have an order to be submitted, but the order they are replied is not important
UPDATE
I have studied a bit further and my actual problem seems to be that I use a simple function as callback to the pika's SelectConnection.channel.basic_consume() function. My last (unimplemented) idea is to pass a threading function, instead of a regular one, so the callback would not block and the consumer can keep listening.
|
How to test server behavior under network loss at every possible packet
| 7,424,772
| 2
| 3
| 435
| 0
|
python,http,tcp,network-programming,scapy
|
Unless your application is actually responding to and generating its own IP packets (which would be incredibly silly), you probably don't need to do testing at that layer. Simply testing at the TCP layer (e.g, connect(), send(), recv(), shutdown()) will probably be sufficient, as those events are the only ones which your server will be aware of.
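One way to exercise those TCP-layer events from a test client is to send part of a request and then force an abortive close: SO_LINGER with a zero timeout makes close() send an RST instead of a clean FIN. A hedged sketch with placeholder host/port:

```python
import socket
import struct

s = socket.create_connection(('server.example', 8080))
s.sendall(b'POST /pay HTTP/1.1\r\nHost: server.example\r\n')  # partial request

# Linger on, timeout 0: close() now aborts the connection with RST,
# simulating the client vanishing mid-request.
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
             struct.pack('ii', 1, 0))
s.close()
```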
| 0
| 1
| 1
| 0
|
2011-09-14T23:54:00.000
| 1
| 1.2
| true
| 7,424,349
| 0
| 0
| 0
| 1
|
I'm working with mobile, so I expect network loss to be common. I'm doing payments, so each request matters.
I would like to be able to test my server to see precisely how it will behave with client network loss at different points in the request cycle -- specifically between any given packet send/receive during the entire network communication.
I suspect that the server will behave slightly differently if the communication is lost while sending the response vs. while waiting for a FIN-ACK, and I want to know which timings of disconnections I can distinguish.
I tried simulating an http request using scapy, and stopping communication between each TCP packet. (I.e.: first send SYN then disappear; then send SYN and receive SYN-ACK and then disappear; then send SYN and receive SYN-ACK and send ACK and then disappear; etc.) However, I quickly got bogged down in the details of trying to reproduce a functional TCP stack.
Is there a good existing tool to automate/enable this kind of testing?
|
How to install Python module on Ubuntu
| 7,429,157
| 24
| 14
| 44,972
| 0
|
python,python-module
|
Most installations require:
sudo python setup.py install
Otherwise, you won't be able to write to the installation directories.
I'm pretty sure that (unless you were root), you got an error when you did
python2.7 setup.py install
| 0
| 1
| 0
| 1
|
2011-09-15T06:23:00.000
| 2
| 1.2
| true
| 7,426,677
| 1
| 0
| 0
| 1
|
I just wrote a function in Python. Then I wanted to make it a module and install it on my Ubuntu 11.04. Here is what I did.
Created setup.py along with the function.py file.
Built the distribution file using $ python2.7 setup.py sdist
Then installed it with $ python2.7 setup.py install
All was going fine. But, later I wanted to use the module importing it on my code.
I got import error: ImportError: No module named '-------'
PS. I searched over google and didn't find particular answer. Detailed answer will be much appreciated.
|
emacs switch from Shell to gdb or pdb mode
| 7,438,162
| 0
| 3
| 453
| 0
|
c++,python,emacs,gdb
|
If all you're looking for is source code line tracking, my emacs does pdb source line tracking in M-x shell buffers just fine. No need to enable any other mode.
| 0
| 1
| 0
| 0
|
2011-09-15T17:59:00.000
| 1
| 0
| false
| 7,435,366
| 0
| 0
| 0
| 1
|
Let's say I have an open gdb or pdb session in an emacs shell. So the major mode is "Shell:run" Now I want to convert that buffer to gdb or pdb (python debugger) major mode.
I tried M-x gud-mode and that switched the mode to "Debugger:run". But it does not actually work; for example, when I type "up", "down", or "n" it does not pop another window up showing the code (trying it under pdb).
So how can I kick emacs into its debugger mode for a session that is already open?
|
Get input file name in streaming hadoop program
| 24,434,211
| 0
| 7
| 9,150
| 0
|
python,input,streaming,hadoop,filesplitting
|
The new environment variable for Hadoop 2.x is MAPREDUCE_MAP_INPUT_FILE
| 0
| 1
| 0
| 0
|
2011-09-16T19:59:00.000
| 3
| 0
| false
| 7,449,756
| 0
| 0
| 0
| 2
|
I am able to find the name of the input file in a mapper class using FileSplit when writing the program in Java.
Is there a corresponding way to do this when I write a program in Python (using streaming?)
I found the following in the hadoop streaming document on apache:
See Configured Parameters. During the execution of a streaming job,
the names of the "mapred" parameters are transformed. The dots ( . )
become underscores ( _ ). For example, mapred.job.id becomes
mapred_job_id and mapred.jar becomes mapred_jar. In your code, use the
parameter names with the underscores.
But I still can't understand how to make use of this inside my mapper.
Any help is highly appreciated.
Thanks
|
Get input file name in streaming hadoop program
| 24,906,345
| 6
| 7
| 9,150
| 0
|
python,input,streaming,hadoop,filesplitting
|
By parsing the mapreduce_map_input_file (new) or map_input_file (deprecated) environment variable, you will get the map input file name.
Note:
The two environment variables are case-sensitive; all letters are lower-case.
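So inside a Python streaming mapper it's just an environment lookup; a sketch that also tries the deprecated name as a fallback:

```python
import os
import sys

# Set by the streaming framework; dots in the config name become
# underscores, all lower-case.
input_file = (os.environ.get('mapreduce_map_input_file')
              or os.environ.get('map_input_file'))

for line in sys.stdin:
    sys.stdout.write('%s\t%s\n' % (input_file, line.rstrip('\n')))
```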
| 0
| 1
| 0
| 0
|
2011-09-16T19:59:00.000
| 3
| 1
| false
| 7,449,756
| 0
| 0
| 0
| 2
|
I am able to find the name of the input file in a mapper class using FileSplit when writing the program in Java.
Is there a corresponding way to do this when I write a program in Python (using streaming?)
I found the following in the hadoop streaming document on apache:
See Configured Parameters. During the execution of a streaming job,
the names of the "mapred" parameters are transformed. The dots ( . )
become underscores ( _ ). For example, mapred.job.id becomes
mapred_job_id and mapred.jar becomes mapred_jar. In your code, use the
parameter names with the underscores.
But I still can't understand how to make use of this inside my mapper.
Any help is highly appreciated.
Thanks
|
virtual environment from Mac to Linux
| 7,450,904
| 3
| 2
| 2,982
| 0
|
python,django,linux,macos,virtualenv
|
You can just recreate the virtual environment on Ubuntu. The virtual env will have the python binary which will be different on a different system.
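The usual recipe, assuming the old environment's packages were installed with pip: run pip freeze > requirements.txt in the old environment (or keep that file in version control), then on Ubuntu run virtualenv env followed by env/bin/pip install -r requirements.txt to rebuild an equivalent environment.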
| 0
| 1
| 0
| 0
|
2011-09-16T22:01:00.000
| 2
| 1.2
| true
| 7,450,835
| 1
| 0
| 1
| 1
|
I recently made a django project using virtualenv on my mac. That mac broke, but I saved the files and now I want to work on my project using my linux computer. I am now having some difficulty running the virtual environment in Ubuntu.
Does it even make sense to try and use a virtual env made in Mac OS on Ubuntu?
Thanks
|
GAE Lookup Table Incompatible with Transactions?
| 7,466,485
| 1
| 0
| 143
| 1
|
python,google-app-engine,transactions,google-cloud-datastore,entity-groups
|
If you can, try and fit the data into instance memory. If it won't fit in instance memory, you have a few options available to you.
You can store the data in a resource file that you upload with the app, if it only changes infrequently, and access it off disk. This assumes you can build a data structure that permits easy disk lookups - effectively, you're implementing your own read-only disk based table.
Likewise, if it's too big to fit as a static resource, you could take the same approach as above, but store the data in blobstore.
If your data absolutely must be in the datastore, you may need to emulate your own read-modify-write transactions. Add a 'revision' property to your records. To modify it, fetch the record (outside a transaction), perform the required changes, then inside a transaction, fetch it again to check the revision value. If it hasn't changed, increment the revision on your own record and store it to the datastore.
Note that the underlying RPC layer does theoretically support multiple independent transactions (and non-transactional operations), but the APIs don't currently expose any way to access this from within a transaction, short of horrible (and I mean really horrible) hacks, unfortunately.
One final option: You could run a backend provisioned with more memory, exposing a 'SpellCheckService', and make URLFetch calls to it from your frontends. Remember, in-memory is always going to be much, much faster than any disk-based option.
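A sketch of the read-modify-write emulation described above, using the old db API (the model and helper are hypothetical):

```python
from google.appengine.ext import db

class Record(db.Model):  # hypothetical
    value = db.TextProperty()
    revision = db.IntegerProperty(default=0)

def guarded_update(key_name, new_value, seen_revision):
    def txn():
        rec = Record.get_by_key_name(key_name)
        if rec.revision != seen_revision:
            return False  # someone else changed it; caller retries
        rec.value = new_value
        rec.revision += 1
        rec.put()
        return True
    return db.run_in_transaction(txn)
```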
| 0
| 1
| 0
| 0
|
2011-09-16T22:55:00.000
| 2
| 1.2
| true
| 7,451,163
| 0
| 0
| 1
| 2
|
My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise.
My current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data.
The problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions.
Note that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example.
One solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution.
Ideally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however.
Storing the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction.
Does anyone know of a suitable implementation for large global lookup tables?
Please understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.
|
GAE Lookup Table Incompatible with Transactions?
| 7,452,303
| 1
| 0
| 143
| 1
|
python,google-app-engine,transactions,google-cloud-datastore,entity-groups
|
First, if you're under the belief that a namespace is going to help avoid key collisions, it's time to take a step back. A key consists of an entity kind, a namespace, a name or id, and any parents that the entity might have. It's perfectly valid for two different entity kinds to have the same name or id. So if you have, say, a LookupThingy that you're matching against, and have created each member by specifying a unique name, the key isn't going to collide with anything else.
As for the challenge of doing the equivalent of a spell-check against an unparented lookup table within a transaction, is it possible to keep the lookup table in code?
Or can you think of an analogy that's closer to what you need? One that motivates the need to do the lookup within a transaction?
| 0
| 1
| 0
| 0
|
2011-09-16T22:55:00.000
| 2
| 0.099668
| false
| 7,451,163
| 0
| 0
| 1
| 2
|
My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise.
My current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data.
The problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions.
Note that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example.
One solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution.
Ideally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however.
Storing the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction.
Does anyone know of a suitable implementation for large global lookup tables?
Please understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.
|
Lock down a program so it has no access to outside files, like a virus scanner does
| 7,466,568
| 0
| 0
| 189
| 0
|
python,windows,linux,macos,permissions
|
You can run untrusted apps in a chroot and block them from using the network with an iptables rule (for example, an owner --uid-owner match).
But really, a virtual machine is more reliable, and on modern hardware the performance impact is negligible.
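For example (an illustration only; the user name is hypothetical): iptables -A OUTPUT -m owner --uid-owner sandboxuser -j DROP blocks all outgoing traffic for processes running as that user.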
| 0
| 1
| 0
| 0
|
2011-09-18T09:24:00.000
| 3
| 0
| false
| 7,460,592
| 0
| 0
| 0
| 1
|
I would like to launch an untrusted application programmatically, so I want to remove the program's ability to access files, network, etc. Essentially, I want to restrict it so its only interface to the rest of the computer is stdin and stdout.
Can I do that? Preferably in a cross-platform way but I sort of expect to have to do it differently for each OS. I'm using Python, but I'm willing to write this part in a lower level or more platform integrated language if necessary.
The reason I need to do this is to write a distributed computing infrastructure. It needs to download a program, execute it, piping data to stdin, and returning data that it receives on stdout to the central server. But since the program it downloads is untrusted, I want to restrict it to only using stdin and stdout.
|
Store images temporarily in Google App Engine?
| 7,461,775
| 4
| 2
| 424
| 0
|
python,google-app-engine,hosting
|
GAE has a BlobStore API, which can work pretty much as file storage, but that's probably not what you want. Actually, the right answer depends on what kind of API you're using - it may support file-like objects (so you could pass a urllib response object), or accept URLs, or offer tons of other interesting features
| 0
| 1
| 0
| 0
|
2011-09-18T11:18:00.000
| 2
| 1.2
| true
| 7,461,111
| 0
| 0
| 1
| 2
|
I'm writing an app with Python which will check for updates on a website (let's call it A) every 2 hours; if there are new posts, it will download the images in the posts and post them to another website (call it B), then delete those images.
Site B provides an API for uploading images with a description, which looks like:
upload(image_path, description), where image_path is the path of the image on your computer.
Now I've finished the app, and I'm trying to make it run on Google App Engine (because my computer won't run 7x24), but it seems that GAE won't let you write files on its file system.
How can I solve this problem? Or are there other free Python hosting options that provide a "cron job" feature?
|
Store images temporarily in Google App Engine?
| 7,465,743
| 1
| 2
| 424
| 0
|
python,google-app-engine,hosting
|
You shouldn't need to use temporary storage at all - just download the image with urlfetch into memory, then use another urlfetch to upload it to the destination site.
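A minimal sketch of that in-memory relay, assuming hypothetical URLs and that site B accepts a plain HTTP POST:

    from google.appengine.api import urlfetch

    # Fetch the image into memory (no filesystem involved)...
    image = urlfetch.fetch('http://site-a.example.com/image.jpg')

    # ...and re-post the raw bytes to the destination site.
    urlfetch.fetch('http://site-b.example.com/upload',
                   method=urlfetch.POST,
                   payload=image.content,
                   headers={'Content-Type': 'image/jpeg'})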
| 0
| 1
| 0
| 0
|
2011-09-18T11:18:00.000
| 2
| 0.099668
| false
| 7,461,111
| 0
| 0
| 1
| 2
|
I'm writing an app with Python which will check for updates on a website (let's call it A) every 2 hours; if there are new posts, it will download the images in the posts and post them to another website (call it B), then delete those images.
Site B provides an API for uploading images with a description, which looks like:
upload(image_path, description), where image_path is the path of the image on your computer.
Now I've finished the app, and I'm trying to make it run on Google App Engine (because my computer won't run 7x24), but it seems that GAE won't let you write files on its file system.
How can I solve this problem? Or are there other free Python hosting options that provide a "cron job" feature?
|
Distributing python on Mac, Linux, and Windows using cx_freeze: can I generate all apps from one platform?
| 7,474,129
| 6
| 7
| 993
| 0
|
python,python-3.x,cx-freeze
|
Short answer: no
I've been doing something similar recently (using cx_Freeze with Python 3). If you set up Python inside Wine, you can generate a Windows build, but I had to copy some DLLs in before it worked properly (cx_Freeze calls a Windows API function that's not implemented in Wine). I've not run into any way of packaging applications for Macs without actually having a Mac.
Perhaps someone should set up a community build service so people could build distributables for different platforms for each other. That doesn't get round the problem of testing, though.
| 0
| 1
| 0
| 0
|
2011-09-19T04:12:00.000
| 1
| 1.2
| true
| 7,466,375
| 0
| 0
| 0
| 1
|
I'm setting up a scripted build of a cross-platform python app (Python 3) and I'd like to create all the distributables from linux. Is that possible?
|
CPU Utilization on UNIX
| 7,470,125
| 0
| 1
| 541
| 0
|
python,shell,unix,cpu
|
Well, you can try using the top command with "-b -n 1", grab its output, and then use cut or other tools to extract what you need.
NOTE: you can add the -p option to limit the output to a particular process id
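A variation on the same idea, using ps instead of top because its output is easier to parse from Python (a sketch, assuming a Unix ps that supports -o):

    import subprocess

    def cpu_percent(pid):
        # '-o %cpu=' prints just the value with no header line.
        out = subprocess.check_output(['ps', '-p', str(pid), '-o', '%cpu='])
        return float(out.strip())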
| 0
| 1
| 0
| 1
|
2011-09-19T11:13:00.000
| 3
| 0
| false
| 7,470,045
| 0
| 0
| 0
| 1
|
I'm trying to calculate the percentage of CPU% used for a particular process using Python/Shell, but so far nothing.
I have looked at a lot of questions here, but none could help me.
Any suggestions?
|
argparse Python modules in cli
| 10,015,728
| 2
| 12
| 15,510
| 0
|
python,linux,command-line,argparse
|
If you're on CentOS and don't have an easy RPM to get to Python 2.7, JF's suggestion of pip install argparse is the way to go. Calling out this solution in a new answer. Thanks, JF.
| 0
| 1
| 0
| 1
|
2011-09-19T15:45:00.000
| 5
| 0.07983
| false
| 7,473,609
| 0
| 0
| 0
| 2
|
I am trying to run a python script from the Linux SSH Secure Shell command line environment, and I am trying to import the argparse library, but it gives the error: "ImportError: No module named argparse".
I think that this is because the Python environment that the Linux shell is using does not have the argparse library in it, and I think I can fix it if I can find the directory for the libraries being used by the Python environment and copy the argparse library into it, but I cannot find where that directory is located.
I would appreciate any help on finding this directory (I suppose I could include the argparse library in the same directory as my python script for now, but I would much rather have the argparse library in the place where the other Python libraries are, as it should be).
|
argparse Python modules in cli
| 7,474,038
| 0
| 12
| 15,510
| 0
|
python,linux,command-line,argparse
|
You're probably using an older version of Python.
The argparse module has been added pretty recently, in Python 2.7.
| 0
| 1
| 0
| 1
|
2011-09-19T15:45:00.000
| 5
| 0
| false
| 7,473,609
| 0
| 0
| 0
| 2
|
I am trying to run a python script from the Linux SSH Secure Shell command line environment, and I am trying to import the argparse library, but it gives the error: "ImportError: No module named argparse".
I think that this is because the Python environment that the Linux shell is using does not have the argparse library in it, and I think I can fix it if I can find the directory for the libraries being used by the Python environment and copy the argparse library into it, but I cannot find where that directory is located.
I would appreciate any help on finding this directory (I suppose I could include the argparse library in the same directory as my python script for now, but I would much rather have the argparse library in the place where the other Python libraries are, as it should be).
|
Netbeans 7 not starting up after python plugin installation
| 8,114,362
| 1
| 1
| 2,129
| 0
|
python,macos,netbeans
|
I was having a problem with NetBeans 7 not starting. NetBeans had first errored out with no error message; then it wouldn't start or give me an error. I looked in the .netbeans directory in my user directory and tried to delete the 'lock' file in that directory. When I first tried to delete it, it said it was in use, so in Task Manager I went to the Processes tab, found netbeans, and killed that task. I was then able to delete 'lock', and NetBeans started.
| 0
| 1
| 0
| 0
|
2011-09-19T17:31:00.000
| 6
| 0.033321
| false
| 7,474,887
| 0
| 0
| 1
| 5
|
I went to Tools, Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option, but instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install.
I am using Mac OS X.
I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
|
Netbeans 7 not starting up after python plugin installation
| 19,486,580
| 0
| 1
| 2,129
| 0
|
python,macos,netbeans
|
I have the same problem after installing the Python plugin. To solve it I deleted the file org-openide-awt.jar from C:\Users\MYUSERNAME\.netbeans\7.0\modules
Regards!
Martín.
PS: I'm using NetBeans 7.0.1 and Windows 7 64-bit.
| 0
| 1
| 0
| 0
|
2011-09-19T17:31:00.000
| 6
| 0
| false
| 7,474,887
| 0
| 0
| 1
| 5
|
I went to Tools, Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option, but instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install.
I am using Mac OS X.
I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
|
Netbeans 7 not starting up after python plugin installation
| 7,478,399
| 1
| 1
| 2,129
| 0
|
python,macos,netbeans
|
I had the same issue. Netbeans would die before opening at all. I could not fix it, and had to revert back to 6.9.1.
| 0
| 1
| 0
| 0
|
2011-09-19T17:31:00.000
| 6
| 0.033321
| false
| 7,474,887
| 0
| 0
| 1
| 5
|
I went to Tools, Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option, but instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install.
I am using Mac OS X.
I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
|
Netbeans 7 not starting up after python plugin installation
| 8,088,729
| 1
| 1
| 2,129
| 0
|
python,macos,netbeans
|
I had the same problem, but with Windows 7. I deleted the .netbeans directory located in my home folder. That fixed my problem, hope it fixes yours.
| 0
| 1
| 0
| 0
|
2011-09-19T17:31:00.000
| 6
| 0.033321
| false
| 7,474,887
| 0
| 0
| 1
| 5
|
I went to Tools, Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option, but instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install.
I am using Mac OS X.
I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
|
Netbeans 7 not starting up after python plugin installation
| 7,475,990
| -1
| 1
| 2,129
| 0
|
python,macos,netbeans
|
I know I'm not answering your question directly, but I too was considering installing the Python plugin in Netbeans 7 but saw that it was still in Beta.
I use WingIDE from wingware for Python development. I'm a Python newbie but I'm told by the pros that Wing is the best IDE for Python. The "101" version is free and works very well. The licensed versions include more options such as version control integration and Django features.
| 0
| 1
| 0
| 0
|
2011-09-19T17:31:00.000
| 6
| -0.033321
| false
| 7,474,887
| 0
| 0
| 1
| 5
|
I went to Tools, Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option, but instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install.
I am using Mac OS X.
I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
|
fd duplicate from python to child process
| 7,478,053
| 0
| 0
| 280
| 0
|
python,linux,process,linux-kernel
|
By default, any file descriptors that a process has open when it forks/execs (which happens during a popen()) are inherited by the child process. If this isn't what you want to happen, you will need to either manually close the file descriptors after forking, or set the fds as close-on-exec using fcntl(fd, F_SETFD, FD_CLOEXEC). (This makes the kernel automatically close the file descriptor when it execs the new process.)
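A minimal sketch of the close-on-exec approach from Python:

    import fcntl

    def set_cloexec(fd):
        # Mark the descriptor close-on-exec so children spawned via
        # popen/fork+exec do not inherit it.
        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)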
| 0
| 1
| 0
| 0
|
2011-09-19T22:16:00.000
| 1
| 1.2
| true
| 7,477,954
| 1
| 0
| 0
| 1
|
I think I have a problem with a ttyUSB device that was caused by having two open ttyUSB fd's at the same time from different processes.
It goes like this:
I have a main python process which opens several ttyUSB fd's, reads, writes, closes, and opens processes (with popen) to handle each ttyUSB (of course after the fd was closed).
When I do 'lsof | grep ttyUSB' it looks as if all the fd's that were open in the main process when the child process started are associated with the child process, even though they were already closed by the main process. (btw, the fd's are not associated with the main process)
Is that behavior normal? (tinycore, kernel 2.6.33.3) Do I have a way to prevent it?
Thanks.
|
Python Parallel Processing Amazon EC2 or Alternatives?
| 7,478,918
| 1
| 0
| 384
| 0
|
python,multithreading,amazon-ec2,cloud,parallel-processing
|
Your requirements are too vague for a specific response. It is unlikely you are going to be able to elaborate them sufficiently for anybody to provide an authoritative answer.
Fortunately for you, many Infrastructure as a Service platforms like AWS and Rackspace let you test things out extremely inexpensively (literal pocket change), so give them a try and see what works for your application.
| 0
| 1
| 0
| 1
|
2011-09-20T00:21:00.000
| 1
| 1.2
| true
| 7,478,803
| 0
| 0
| 1
| 1
|
I'm working on running a memory/CPU-intensive project on a cloud service. From my Googling and research it looks like I should use Amazon EC2, as there are guides to using it with MPI - however, reading comparisons on Stack Overflow of EC2 with Rackspace, Joyent, etc., I was wondering if this is really the best cloud option, or is there a better alternative route I should take? Any insight would be appreciated.
Thanks,
|
Make a twisted server take initiative
| 7,488,756
| 0
| 3
| 252
| 0
|
python,twisted
|
If by "immediately" you mean "when the client connects", try calling sendLine in your LineReceiver subclass's connectionMade.
| 0
| 1
| 0
| 0
|
2011-09-20T16:07:00.000
| 3
| 0
| false
| 7,488,266
| 0
| 0
| 0
| 2
|
I have a server in twisted, implementing a LineReceiver protocol.
When I call sendLine in response to a client message, it writes the line to the client immediately, as one would expect.
But say the client asks the server to do a lengthy calculation. I want the server to periodically send a progress message to the client. When the server takes initiative and calls sendLine without the client having asked for anything, it seems to wait for the client to send a message to the server before sending anything.
How do I send a message from the server to the client immediately, without having the client explicitly ask for it?
|
Make a twisted server take initiative
| 7,523,779
| 1
| 3
| 252
| 0
|
python,twisted
|
You can send a line using sendLine whenever you want and it should arrive immediately, but you may have a problem related to your server blocking.
A call to sendLine is deferred, so if you make the call in the middle of a bunch of processing, it may not be actioned for a while; then, when a message is received, the reactor interrupts the processing, receives the message, and gets the queued message sent before going back to processing. You should read some of the other answers here and make sure that your processing isn't blocking up your main thread.
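One common way to keep the reactor free during the lengthy calculation is deferToThread; a sketch (calculate stands in for the real work):

    from twisted.internet import threads
    from twisted.protocols.basic import LineReceiver

    class Worker(LineReceiver):
        def lineReceived(self, line):
            # Run the heavy work in Twisted's thread pool so the reactor
            # stays free to flush queued lines immediately.
            d = threads.deferToThread(self.calculate, line)
            d.addCallback(lambda result: self.sendLine('result: %s' % result))

        def calculate(self, line):
            return line.upper()  # stand-in for the lengthy calculation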
| 0
| 1
| 0
| 0
|
2011-09-20T16:07:00.000
| 3
| 0.066568
| false
| 7,488,266
| 0
| 0
| 0
| 2
|
I have a server in twisted, implementing a LineReceiver protocol.
When I call sendLine in response to a client message, it writes the line to the client immediately, as one would expect.
But say the client asks the server to do a lengthy calculation. I want the server to periodically send a progress message to the client. When the server takes initiative and calls sendLine without the client having asked for anything, it seems to wait for the client to send a message to the server before sending anything.
How do I send a message from the server to the client immediately, without having the client explicitly ask for it?
|
Does Twisted cache DNS lookups? Does it honour TTL?
| 7,488,979
| 2
| 2
| 556
| 0
|
python,dns,twisted
|
I think Twisted simply uses the OS' resolver, so the answer to both questions is "as much as your OS does".
| 0
| 1
| 0
| 0
|
2011-09-20T16:46:00.000
| 1
| 1.2
| true
| 7,488,720
| 0
| 0
| 0
| 1
|
If I use the Twisted Endpoint API to make a series of connections to the same host, will Twisted cache the DNS lookup between requests?
If it does, will it honour the DNS record's TTL?
My implementation is fairly vanilla. I instantiate an SSL4ClientEndpoint with host, port, etc., and through the life of the program I use it to make several connections.
|
Odd message and processing hangs
| 9,764,576
| 0
| 0
| 85
| 0
|
python,amazon-web-services
|
wberry was correct in his comment: I was running into a max-descriptors-per-process issue. This seems highly dependent on the operating system. Reducing the size of the batches each processor handles to below the process's file descriptor limit solved the problem.
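For reference, a sketch of checking (and raising) the per-process descriptor limit with the resource module:

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print('fd limit: soft=%d hard=%d' % (soft, hard))

    # Keep batch sizes below `soft`, or raise the soft limit up to `hard`:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))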
| 0
| 1
| 0
| 0
|
2011-09-20T20:51:00.000
| 1
| 1.2
| true
| 7,491,614
| 0
| 0
| 0
| 1
|
I have a large project that runs on an application server. It does pipelined processing of large batches of data and works fine on one Linux system (the old production environment) and one windows system (my dev environment).
However, we're upgrading our infrastructure and moving to a new Linux system for production, based on the same image used for the existing production system (we use AWS). The Python version (2.7) and libraries should therefore be identical; we're also verifying this ourselves using file hashes.
Our issue is that when we attempt to start processing on the new server, we receive very strange output written to standard out, followed by the server hanging: "Removing descriptor: [some number]". I cannot duplicate this on the dev machine.
Has anyone ever encountered behavior like this in python before? Besides modules in the python standard library we are also using eventlet and beautifulsoup. In the standard library we lean heavily on urllib2, re, cElementTree, and multiprocessing (mostly the pools).
|
Getting an embedded Python runtime to use the current active virtualenv
| 9,071,047
| 2
| 15
| 4,477
| 0
|
python,virtualenv
|
This may not be a direct answer, but it might still be useful in other contexts.
Have you tried running bin/activate_this.py from your Python virtualenv? The comment in this file of my virtualenv reads:
By using execfile(this_file, dict(__file__=this_file)) you will
activate this virtualenv environment.
This can be used when you must use an existing Python interpreter, not
the virtualenv bin/python
You should achieve the desired result if you execute the runtime equivalent of the above code.
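That runtime equivalent, in Python 2 terms, looks like this (the virtualenv path is illustrative):

    # Rewrites sys.path so the running interpreter sees the virtualenv's packages.
    activate = '/path/to/virtualenv/bin/activate_this.py'
    execfile(activate, dict(__file__=activate))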
| 0
| 1
| 0
| 0
|
2011-09-20T23:07:00.000
| 6
| 0.066568
| false
| 7,492,855
| 1
| 0
| 0
| 2
|
I make heavy use of virtualenv to isolate my development environments from the system-wide Python installation. Typical work-flow for using a virtualenv involves running source /path/to/virtualenv/bin/activate to set the environment variables that Python requires to execute an isolated runtime. Making sure my Python executables use the current active virtualenv is as simple as setting the shebang to #!/usr/bin/env python
Lately, though, I've been writing some C code that embeds the Python runtime. What I can't seem to figure out is how to get the embedded runtime to use the current active virtualenv. Anybody got a good example to share?
|
Getting an embedded Python runtime to use the current active virtualenv
| 31,758,278
| 0
| 15
| 4,477
| 0
|
python,virtualenv
|
You can check the environment variable VIRTUAL_ENV to get the current env's location.
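For example:

    import os

    venv = os.environ.get('VIRTUAL_ENV')  # None when no virtualenv is active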
| 0
| 1
| 0
| 0
|
2011-09-20T23:07:00.000
| 6
| 0
| false
| 7,492,855
| 1
| 0
| 0
| 2
|
I make heavy use of virtualenv to isolate my development environments from the system-wide Python installation. Typical work-flow for using a virtualenv involves running source /path/to/virtualenv/bin/activate to set the environment variables that Python requires to execute an isolated runtime. Making sure my Python executables use the current active virtualenv is as simple as setting the shebang to #!/usr/bin/env python
Lately, though, I've been writing some C code that embeds the Python runtime. What I can't seem to figure out is how to get the embedded runtime to use the current active virtualenv. Anybody got a good example to share?
|
App Engine problems using the command line
| 7,507,801
| 2
| 1
| 219
| 0
|
python,django,google-app-engine,command-line,google-cloud-datastore
|
If you're using Django 0.96 templates within a 'normal' App Engine app, then manage.py isn't involved at all.
/path/to/dev_appserver.py --clear_datastore .
is what you want, assuming you're CD'd to the root of your app.
| 0
| 1
| 0
| 0
|
2011-09-21T21:24:00.000
| 1
| 1.2
| true
| 7,506,854
| 0
| 0
| 1
| 1
|
I am developing an App Engine project on Windows using Eclipse and Python 2.5 and Django 0.96. Everything in my app, including Django, is working nicely.
My problems arise when I try to use command line. For example, I'm told that if I want to clear the local datastore I can enter "python manage.py reset" but when I do so the response is "python: can't open file 'manage.py'".
I feel as if I have missed a configuration step. I have checked my system variables and "Path" includes "C:\Python25" (which I had added manually) but nothing Django or App Engine related. My .py extension is associated with C:\Python25\python.exe.
In my quest to solve this, and in trying to understand what manage.py is, I see that I might have had to create a Django project using "django-admin.py startproject [myproject]" but because everything works nicely from Eclipse I'm not sure if this is necessary now. In any case, if I try to enter this from the command line I get "'django-admin.py' is not recognized..."
Please, what am I missing?
|
Is it necessary to explicitly free memory in Google App Engine's Python environment?
| 7,526,870
| 6
| 2
| 617
| 0
|
python,google-app-engine
|
Python has its own garbage collection, so there's no need to release memory manually.
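If you do want to nudge the collector explicitly, the gc module is available (normally unnecessary):

    import gc

    unreachable = gc.collect()  # force a full collection pass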
| 0
| 1
| 0
| 0
|
2011-09-23T09:15:00.000
| 2
| 1.2
| true
| 7,526,746
| 0
| 0
| 1
| 1
|
Is there a way to free memory in google's app engine? Is there a garbage collector in python?
|
zeromq: how to prevent infinite wait?
| 10,846,438
| 10
| 77
| 82,000
| 0
|
python,zeromq
|
The send won't block if you use ZMQ_NOBLOCK, but if you try closing the socket and context, this step will block the program from exiting.
The reason is that the socket waits for a peer so that outgoing messages are guaranteed to get queued. To close the socket immediately and flush the outgoing messages from the buffer, use ZMQ_LINGER and set it to 0.
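A sketch of the LINGER part with pyzmq (the address is illustrative):

    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    sock.setsockopt(zmq.LINGER, 0)  # discard unsent messages on close
    sock.connect('tcp://localhost:5555')
    try:
        sock.send('payload', zmq.NOBLOCK)
    except zmq.ZMQError:
        pass  # no peer ready; the message was not queued
    sock.close()
    ctx.term()  # returns immediately thanks to LINGER=0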
| 0
| 1
| 1
| 0
|
2011-09-24T12:25:00.000
| 4
| 1
| false
| 7,538,988
| 0
| 0
| 0
| 1
|
I just got started with ZMQ. I am designing an app whose workflow is:
one of many clients (who have random PULL addresses) PUSH a request to a server at 5555
the server is forever waiting for client PUSHes. When one comes, a worker process is spawned for that particular request. Yes, worker processes can exist concurrently.
When that process completes its task, it PUSHes the result to the client.
I assume that the PUSH/PULL architecture is suited for this. Please correct me on this.
But how do I handle these scenarios?
the client_receiver.recv() will wait for an infinite time when server fails to respond.
the client may send a request, but it may fail immediately after; hence a worker process will remain stuck at server_sender.send() forever.
So how do I setup something like a timeout in the PUSH/PULL model?
EDIT: Thanks user938949's suggestions, I got a working answer and I am sharing it for posterity.
|
How is rate_limit enforced in Celery?
| 11,125,501
| 16
| 6
| 1,119
| 0
|
python,django,asynchronous,queue,celery
|
Rate limited tasks are never dropped, they are queued internally in the worker so that they execute as soon as they are allowed to run.
The token bucket algorithm does not specify anything about dropping packets (it is an option, but Celery does not do that).
| 0
| 1
| 0
| 0
|
2011-09-24T21:06:00.000
| 1
| 1.2
| true
| 7,541,931
| 0
| 0
| 1
| 1
|
I'm running a Django website where I use Celery to implement preventive caching - that is, I calculate and cache results even before they are requested by the user.
However, one of my Celery tasks could, in some situations, be called a lot (I'd say slightly quicker than it completes on average, actually). I'd like to rate_limit it so that it doesn't consume a lot of resources when it's actually not that useful.
However, I'd like first to understand how Celery's celery.task.base.Task.rate_limit attribute is enforced. Are tasks refused? Are they delayed and executed later?
Thanks in advance!
|
How to check if a Windows version is Genuine or not?
| 7,545,592
| 0
| 2
| 2,226
| 0
|
java,c++,python,c,windows
|
The Java solution is to use Process to run the C++ or VBScript solution as a child process.
| 0
| 1
| 0
| 0
|
2011-09-25T11:28:00.000
| 3
| 0
| false
| 7,545,206
| 0
| 0
| 0
| 1
|
Is it possible to check whether a Windows installation is Genuine or not programmatically?
Let's just say I want to check Windows 7 from C, C++, Java or Python.
|
Getting friendly device names in python
| 7,552,255
| 7
| 11
| 8,806
| 0
|
python,windows,linux,macos
|
Regarding Linux, if all you need is to enumerate devices, you can even skip the pyudev dependency for your project and simply parse the output of the /sbin/udevadm info --export-db command (which does not require root privileges). It will dump all information about present devices and classes, including USB product IDs for USB devices, which should be more than enough to identify your USB-to-serial adapters. Of course, you can also do this with pyudev.
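A sketch of parsing that dump from Python (the SUBSYSTEM filter is an assumption about what you're looking for):

    import subprocess

    # Dump the udev database; records are separated by blank lines.
    dump = subprocess.check_output(['/sbin/udevadm', 'info', '--export-db'])
    records = dump.split('\n\n')
    tty_records = [r for r in records if 'SUBSYSTEM=tty' in r]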
| 0
| 1
| 0
| 0
|
2011-09-26T06:53:00.000
| 6
| 1.2
| true
| 7,551,546
| 0
| 0
| 0
| 2
|
I have a 2-port signal relay connected to my computer via a USB serial interface. Using the pyserial module I can control these relays with ease. However, this is based on the assumption that I know beforehand which COM port (or /dev node) the device is assigned to.
For the project I'm doing that's not enough, since I don't want to assume that the device always gets assigned to, for example, COM7 in Windows. I need to be able to identify the device programmatically across the possible platforms (Win, Linux, OSX (which I imagine would be similar to the Linux approach)), using python. Perhaps by, as the title suggests, enumerating USB devices on the system and somehow getting friendlier names for them. Windows and Linux are the most important platforms to support.
Any help would be greatly appreciated!
EDIT:
Seems like the pyudev-module would be a good fit for Linux-systems. Has anyone had any experience with that?
|
Getting friendly device names in python
| 7,552,386
| 0
| 11
| 8,806
| 0
|
python,windows,linux,macos
|
It would be great if this were possible, but in my experience with commercial equipment using COM ports it is not the case. Most of the time you need to set the COM port manually in the software. This is a mess, especially in Windows (at least XP), which tends to change the COM port numbers in certain cases. Some equipment has an autodiscovery feature in the software that sends a small message to every COM port and waits for the right answer. This of course only works if the instrument implements some kind of identification command. Good luck.
| 0
| 1
| 0
| 0
|
2011-09-26T06:53:00.000
| 6
| 0
| false
| 7,551,546
| 0
| 0
| 0
| 2
|
I have a 2-port signal relay connected to my computer via a USB serial interface. Using the pyserial module I can control these relays with ease. However, this is based on the assumption that I know beforehand which COM port (or /dev node) the device is assigned to.
For the project I'm doing that's not enough, since I don't want to assume that the device always gets assigned to, for example, COM7 in Windows. I need to be able to identify the device programmatically across the possible platforms (Win, Linux, OSX (which I imagine would be similar to the Linux approach)), using python. Perhaps by, as the title suggests, enumerating USB devices on the system and somehow getting friendlier names for them. Windows and Linux are the most important platforms to support.
Any help would be greatly appreciated!
EDIT:
Seems like the pyudev-module would be a good fit for Linux-systems. Has anyone had any experience with that?
|
Shebang Notation: Python Scripts on Windows and Linux?
| 7,575,837
| 1
| 85
| 92,049
| 0
|
python,windows,linux,shebang
|
Install pywin32. One of the nice things is that it sets up the file association of *.py to the python interpreter.
| 0
| 1
| 0
| 0
|
2011-09-27T19:11:00.000
| 6
| 0.033321
| false
| 7,574,453
| 1
| 0
| 0
| 2
|
I have some small utility scripts written in Python that I want to be usable on both Windows and Linux. I want to avoid having to explicitly invoke the Python interpreter. Is there an easy way to point shebang notation to the correct locations on both Windows and Linux? If not, is there another way to allow implicit invocation of the Python interpreter on both Windows and Linux without having to modify the script when transferring between operating systems?
Edit: The shebang support on Windows is provided Cygwin, but I want to use the native Windows Python interpreter on Windows, not the Cygwin one.
Edit # 2: It appears that shebang notation overrides file associations in Cygwin terminals. I guess I could just uninstall Cygwin Python and symlink /usr/bin/python to Windows-native Python.
|
Shebang Notation: Python Scripts on Windows and Linux?
| 7,574,585
| 42
| 85
| 92,049
| 0
|
python,windows,linux,shebang
|
Unless you are using Cygwin, Windows has no shebang support. However, when you install Python, it adds a file association for .py files. If you put just the name of your script on the command line, or double-click it in Windows Explorer, then it will run through Python.
What I do is include a #!/usr/bin/env python shebang in my scripts. This allows for shebang support on Linux. If you run it on a Windows machine with Python installed, then the file association should be there, and it will run as well.
| 0
| 1
| 0
| 0
|
2011-09-27T19:11:00.000
| 6
| 1
| false
| 7,574,453
| 1
| 0
| 0
| 2
|
I have some small utility scripts written in Python that I want to be usable on both Windows and Linux. I want to avoid having to explicitly invoke the Python interpreter. Is there an easy way to point shebang notation to the correct locations on both Windows and Linux? If not, is there another way to allow implicit invocation of the Python interpreter on both Windows and Linux without having to modify the script when transferring between operating systems?
Edit: The shebang support on Windows is provided Cygwin, but I want to use the native Windows Python interpreter on Windows, not the Cygwin one.
Edit # 2: It appears that shebang notation overrides file associations in Cygwin terminals. I guess I could just uninstall Cygwin Python and symlink /usr/bin/python to Windows-native Python.
|
Every Python IDLE run starts a new process
| 29,788,701
| 0
| 0
| 486
| 0
|
python,windows,python-idle
|
I've noticed this on Windows 7, running IDLE v2.7.3; Tk version 8.5; Python 2.7.3
However, it only seems to fail to close the process if you kill a program before it finishes on its own. If possible, let your programs run to their end.
| 0
| 1
| 0
| 0
|
2011-09-27T19:34:00.000
| 3
| 0
| false
| 7,574,704
| 1
| 0
| 0
| 2
|
Windows 7: I'm using Python3.2 with IDLE. Every time I edit and load my program, I get a new "pythonw.exe *32" process (as shown by Windows Task Manager)--even if the program just prints Hello World.
This is a special nuisance if the program is on a static RAM drive, because then I have to kill each of these processes individually before I can eject my drive.
Is this a bug in IDLE? Is there a way I can prevent this from happening? Or at least, is there a way I can kill all these pythonw processes at once, instead of one at a time?
|
Every Python IDLE run starts a new process
| 7,574,883
| 3
| 0
| 486
| 0
|
python,windows,python-idle
|
Upgrade to version 3.2.2. That fixed the bug for me. I saw the same thing in 3.2.1.
| 0
| 1
| 0
| 0
|
2011-09-27T19:34:00.000
| 3
| 1.2
| true
| 7,574,704
| 1
| 0
| 0
| 2
|
Windows 7: I'm using Python3.2 with IDLE. Every time I edit and load my program, I get a new "pythonw.exe *32" process (as shown by Windows Task Manager)--even if the program just prints Hello World.
This is a special nuisance if the program is on a static RAM drive, because then I have to kill each of these processes individually before I can eject my drive.
Is this a bug in IDLE? Is there a way I can prevent this from happening? Or at least, is there a way I can kill all these pythonw processes at once, instead of one at a time?
|
pythonw.exe processes not quitting after running script
| 7,592,072
| 1
| 2
| 2,421
| 0
|
python,process,python-idle
|
Have you tried using python.exe instead of pythonw.exe?
I'm pretty sure this is the intended default behavior for the windowed Python interpreter (pythonw.exe).
If it's a .pyw file, just right-click, choose "Open With...", and use python.exe
| 0
| 1
| 0
| 0
|
2011-09-29T03:02:00.000
| 1
| 1.2
| true
| 7,592,008
| 1
| 0
| 0
| 1
|
Every time I restart the shell or run a script, an instance of pythonw.exe*32 is created. When I close out of IDLE these processes don't go away in the task manager. Any ideas on how to fix this?
Thanks!
|
PyPy on Windows 7 x64?
| 7,609,593
| 16
| 14
| 10,736
| 0
|
python,win64,pypy
|
PyPy is not compatible with 64-bit Windows. The primary reason is that sizeof(void*) != sizeof(long), which is a bit annoying. Contributions are more than welcome :)
| 0
| 1
| 0
| 0
|
2011-09-30T09:33:00.000
| 4
| 1
| false
| 7,608,503
| 1
| 0
| 0
| 3
|
I am trying to use PyPy on a Windows 7 x64 machine but do not find any way to do it.
Apparently there is a win32 binary, but no x64 binary or installation guide.
I am currently using Python 2.7.2 win64 (Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit (AMD64)] on win32).
Installation from sources raised the following error:
[translation:ERROR] WindowsError: [Error 193] %1 is not a valid Win32 application
Does anyone have a guide/hint to use PyPy on a win64?
Or is it just not possible?
|
PyPy on Windows 7 x64?
| 7,608,534
| 6
| 14
| 10,736
| 0
|
python,win64,pypy
|
There's no version available for 64 bit Python on Windows. You appear to have the following options:
Download the source to PyPy and port it to 64 bit.
Switch to 32 bit Python.
Option 2 looks more tractable.
| 0
| 1
| 0
| 0
|
2011-09-30T09:33:00.000
| 4
| 1
| false
| 7,608,503
| 1
| 0
| 0
| 3
|
I am trying to use PyPy on a Windows 7 x64 machine but do not find any way to do it.
Apparently there is a win32 binary, but no x64 binary or installation guide.
I am currently using Python 2.7.2 win64 (Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit (AMD64)] on win32).
Installation from sources raised the following error:
[translation:ERROR] WindowsError: [Error 193] %1 is not a valid Win32 application
Does anyone have a guide/hint to use PyPy on a win64?
Or is it just not possible?
|
PyPy on Windows 7 x64?
| 29,459,581
| 3
| 14
| 10,736
| 0
|
python,win64,pypy
|
Just an update on this issue if anyone reads it nowadays: PyPy seems to have solved its issues with Windows x64. You can download the 32-bit version of PyPy and it will work flawlessly under Windows 7 x64 (I even have a 64-bit Python install alongside it, and PyPy works nicely with it; I just have to specify the full path to pypy to use it for the scripts I need).
| 0
| 1
| 0
| 0
|
2011-09-30T09:33:00.000
| 4
| 0.148885
| false
| 7,608,503
| 1
| 0
| 0
| 3
|
I am trying to use PyPy on a Windows 7 x64 machine but do not find any way to do it.
Apparently there is a win32 binary, but no x64 binary or installation guide.
I am currently using Python 2.7.2 win64 (Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit (AMD64)] on win32).
Installation from sources raised the following error:
[translation:ERROR] WindowsError: [Error 193] %1 is not a valid Win32 application
Does anyone have a guide/hint to use PyPy on a win64?
Or is it just not possible?
|
How can I run a program in msys through Python?
| 7,613,945
| 1
| 4
| 2,618
| 0
|
python,msys
|
Find where in the msys path libgcc_s_dw2-1.dll is.
Find the environmental variable in MSYS that has that path in it.
Add that environmental variable to Windows.
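Put together, a sketch of those steps done directly from Python instead of via the Windows environment settings (all paths are illustrative):

    import os
    import subprocess

    env = os.environ.copy()
    # Prepend the msys bin dir (where libgcc_s_dw2-1.dll lives) so Windows
    # can resolve the DLL when the executable starts.
    env['PATH'] = r'C:\msys\1.0\bin' + os.pathsep + env['PATH']
    subprocess.call([r'C:\path\to\your_program.exe', 'input_file'], env=env)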
| 0
| 1
| 0
| 0
|
2011-09-30T16:47:00.000
| 1
| 1.2
| true
| 7,613,525
| 1
| 0
| 0
| 1
|
I've got a short python script that will eventually edit an input file, run an executable on that input file and read the output from the executable. The problem is, I've compiled the executable through msys, and can only seem to run it from the msys window. I'm wondering if the easiest way to do this is to somehow use os.system in Python to run msys and pipe a command in, or run a script through msys, but I haven't found a way to do this.
Has anyone tried this before? How would you pipe a command into msys? Or is there a smarter way to do this that I haven't thought of?
Thanks in advance!
EDIT: Just realized that this information might help: I'm running Windows, msys 1.0 and Python 2.7
|
Is there any loss of functionality with streaming jobs in hbase/hadoop versus using java?
| 7,707,068
| 1
| 3
| 758
| 0
|
java,python,hadoop,hbase,thrift
|
Yes, you should get data-local code execution with streaming. You do not push the data to where the program is; you push the program to where the data is. Streaming simply takes the local input data and runs it through stdin to your python program. Instead of each map running inside a java task, it spins up an instance of your python program and just pumps the input through that.
If you really want to do fast processing you should learn java, though. Having to pipe everything through stdin and stdout is a lot of overhead.
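For reference, a streaming job is just a program reading stdin and writing stdout; a minimal Python mapper sketch:

    #!/usr/bin/env python
    import sys

    # Hadoop streaming feeds input records on stdin and collects
    # key<TAB>value pairs from stdout.
    for line in sys.stdin:
        key = line.rstrip('\n').split('\t', 1)[0]
        sys.stdout.write('%s\t1\n' % key)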
| 0
| 1
| 0
| 0
|
2011-10-02T00:27:00.000
| 2
| 0.099668
| false
| 7,623,858
| 0
| 0
| 1
| 1
|
Sorry in advance if this is a basic question. I'm reading a book on hbase and learing but most of the examples in the book(and well as online) tend to be using Java(I guess because hbase is native to java). There are a few python examples and I know I can access hbase with python(using thrift or other modules), but I'm wondering about additional functions?
For example, hbase has a 'coprocessors' feature that pushes the data to where you're doing your computing. Does this type of thing work with python or other apps that are using streaming hadoop jobs? It seems with java, it can know what you're doing and manage the data flow accordingly, but how does this work with streaming? If it doesn't work, is there a way to get this type of functionality (via streaming, without switching to another language)?
Maybe another way of asking this is: what can a non-java programmer do to get all the benefits of the features of hadoop when streaming?
Thanks in advance!
|
Python/C Raw Socket Operations using Django, Mod_WSGI, Apache
| 7,630,026
| 1
| 2
| 344
| 0
|
python,django,sockets,mod-wsgi,freebsd
|
Use Celery or some other back end service which runs as root. Having a web application process run as root is a security problem waiting to happen. This is why mod_wsgi blocks you running daemon processes as root. Sure you could hack the code to disable the exclusion, but I am not about to tell you how to do that.
| 0
| 1
| 0
| 0
|
2011-10-02T20:07:00.000
| 1
| 0.197375
| false
| 7,628,889
| 0
| 0
| 1
| 1
|
I'm currently writing a web application using Django, Apache, and mod_wsgi that provides some FreeBSD server management and configuration features, including common firewall operations.
My Python/C library uses raw sockets to interact directly with the firewall and works perfectly fine when running as root, but raw socket operations are only allowed for root.
At this point, the only thing I can think of is to install and use sudo to explicitly allow the www user access to /sbin/ipfw which isn't ideal since I would prefer to use my raw socket library operations rather than a subprocess call.
I suppose another option would be to write my own daemon (communicating over local domain sockets) or use an existing job system (Celery?) that runs as root and handles these requests.
Or perhaps there's some WSGI Daemon mode trickery I'm unaware of? I'm sure this issue has been encountered before. Any advice on the best way to handle this?
|
Can Gstreamer be used server-side to stream audio to multiple clients on demand?
| 7,634,531
| 1
| 2
| 1,045
| 0
|
python,audio,stream,gstreamer
|
You might want to look at Flumotion (www.flumotion.org). It is a python based streaming server using GStreamer, you might be able to get implementation ideas from that in terms of how you do your application. It relies heavily on the python library Twisted for its network handling.
| 0
| 1
| 0
| 0
|
2011-10-03T02:17:00.000
| 1
| 0.197375
| false
| 7,630,548
| 0
| 0
| 1
| 1
|
I'm working on an audio mixing program (DAW) web app, and considering using Python and Python Gstreamer for the backend. I understand that I can contain the audio tracks of a single music project in a gst.Pipeline bin, but playback also appears to be controlled by this Pipeline.
Is it possible to create several "views" into the Pipeline representing the project? So that more than one client can grab an audio stream of this Pipeline at will, with the ability to do time seek?
If there is a better platform/library out there to use, I'd appreciate advice on that too. I'd prefer sticking to Python though, because my team members are already researching Python for other parts of this project.
Thanks very much!
|
Control rs232 windows terminal program from python
| 7,712,917
| 0
| 2
| 1,707
| 0
|
python,windows,python-3.x,serial-port,automated-tests
|
I was also able to solve this using WScript, but pySerial was the preferred solution.
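For anyone looking for the pySerial side of this, a minimal sketch (port, speed and command are illustrative):

    import serial

    port = serial.Serial('COM3', 115200, timeout=2)
    port.write('start_ftp\r\n')     # the proprietary console command
    print(port.read(256))           # read back the prompt/response
    port.close()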
| 0
| 1
| 0
| 1
|
2011-10-03T08:27:00.000
| 3
| 0
| false
| 7,632,642
| 0
| 0
| 0
| 1
|
I am testing a piece of hardware which hosts an ftp server. I connect to the server in order to configure the hardware in question.
My test environment is written in Python 3.
To start the ftp server, I need to launch a special proprietary terminal application on my pc. I must use this software as far as I know and I have no help files for it. I do however know how to use it to launch the ftp server and that's all I need it for.
When I start this app, I go to the menu and open a dialog where I select the com port/speed the hardware is connected to. I then enter the command to launch the ftp server in a console like window within the application. I am then prompted for the admin code for the hardware, which I enter. When I'm finished configuring the device, I issue a command to restart the hardware's software.
In order for me to fully automate my tests, I need to remove the manual starting of this ftp server for each test.
As far as I know, I have two options:
Windows GUI automation
Save the stream of data sent on the com port when using this application.
I've tried to find a GUI automation tool, but pywinauto doesn't support Python 3. Any other options here which I should look at?
Any suggestions on how I can monitor the com port in question and save the traffic on it?
Thanks,
Barry
|
How can I test the validity of a ReferenceProperty in Appengine?
| 7,637,063
| 3
| 2
| 199
| 0
|
python,google-app-engine,google-cloud-datastore
|
This will test key existence without returning an entity:
db.Query(keys_only=True).filter('__key__ =', test_key).count(1) == 1
I'm not certain that it's computationally cheaper than fetching the entity.
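To apply this to a ReferenceProperty without dereferencing it, you can pull the raw key out with get_value_for_datastore first; a sketch:

    from google.appengine.ext import db

    def reference_is_valid(entity, prop_name):
        # Read the stored key without triggering a full entity fetch.
        prop = getattr(type(entity), prop_name)
        key = prop.get_value_for_datastore(entity)
        if key is None:
            return False
        # Keys-only existence check, as above.
        return db.Query(keys_only=True).filter('__key__ =', key).count(1) == 1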
| 0
| 1
| 0
| 0
|
2011-10-03T15:02:00.000
| 1
| 1.2
| true
| 7,636,806
| 0
| 0
| 1
| 1
|
I am currently testing a small application I have written. I have not been sufficiently careful in ensuring the data in my datastore is consistent, and now I have an issue that some records reference objects which no longer exist. More specifically, I have some objects whose ReferenceProperty's have been assigned values; the objects referred to have been deleted but the reference remains.
I would like to add some checks to my code to ensure that referenced objects exist and acting accordingly. (Of course, I also need to clean up my data).
One approach is to just try to get() it; however, this should probably retrieve the entire object - I'd like an approach which just tests the existence of the object, ideally, just resulting in costs of key manipulations on the datastore, rather than full entity reads.
So, my question is the following: is there a simple mechanism to test if a given ReferenceProperty is valid which only involves key access operations (rather than full entity get operations)?
|
easy_install failure on windows 7
| 7,638,140
| 1
| 0
| 541
| 0
|
python,windows-7,easy-install
|
This error:
bad local file header
seems to say that the file header (which usually determines the file type) isn't passing Python's test. It could be the zipping program you used.
Try 7-Zip (free) or a different program when creating your egg. I haven't made them before, but I think there is even a way to do it with plain ol' Python.
| 0
| 1
| 0
| 0
|
2011-10-03T16:51:00.000
| 1
| 0.197375
| false
| 7,638,109
| 1
| 0
| 0
| 1
|
I have created two eggs using bdist; egg_A is defined as a dependency of egg_B.
I checked both eggs using the unzip command and both are OK; however, when I try to install the egg using easy_install I get the following stack trace:
Installed c:\virtualenv\lib\site-packages\pymarketdata-1.0-py2.7.egg
Reading file:C:/python_nest/
Processing dependencies for PyMarketData==1.0
zipimport.ZipImportError: bad local file header in c:\yoan\yoyo\lib\site-packages\PyMarketData-1.0-py2.7.egg
Any idea where it could come from?
|
Execute Python Script as Root (seteuid vs c-wrapper)
| 7,639,481
| 3
| 13
| 4,966
| 0
|
python,c,django,freebsd
|
The correct thing is called privilege separation: clearly identify minimal set of tasks which have to be done on elevated privileges. Write a separate daemon and an as much limited as possible way of communicating the task to do. Run this daemon as another user with elevated privileges. A bit more work, but also more secure.
EDIT: using a setuid-able wrapper will also satisfy the concept of privilege separation, although I recommend having the web server chrooted and mounting the chrooted file system nosuid (which would defeat that).
| 0
| 1
| 0
| 1
|
2011-10-03T18:33:00.000
| 3
| 0.197375
| false
| 7,639,141
| 0
| 0
| 0
| 1
|
I have a quick one-off task in a python script that I'd like to call from Django (the www user), which is going to need root privileges.
At first I thought I would could use Python's os.seteuid() and set the setuid bit on the script, but then I realized that I would have to set the setuid bit on Python itself, which I assume is big no no. From what I can tell, this would also be the case if using sudo, which I really would like to avoid.
At this point, I'm considering just writing a C wrapper the uses seteuid and calls my python script as root, passing the necessary arguments to it.
Is this the correct thing to do or should I be looking at something else?
|
how to copy a file between 2 computers on the network in python
| 7,641,519
| 1
| 0
| 2,350
| 0
|
python
|
NFS mount the filesystem, then both systems can access the same files as if they were local. Otherwise you could use sockets.
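With the NFS route, the copy degenerates to a local file operation; a sketch with illustrative paths:

    import shutil

    # The remote machine's share is mounted locally, so this is just a copy.
    shutil.copy('/var/log/operation.log', '/Volumes/remote_share/operation.log')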
| 0
| 1
| 0
| 0
|
2011-10-03T22:39:00.000
| 2
| 1.2
| true
| 7,641,481
| 0
| 0
| 0
| 1
|
I am trying to move a log file from a computer, where an operation is performed, to another computer, that will get the log file and process it, returning a document with the result of the analysis.
I am using Python for the job, but I am open to other options. (I have to run this copy via the console on OSX, since most of my work is done in shell scripting, so I cannot use any visual solution; anything that can be launched via a script would work too.) Any suggestion is more than welcome, since I do not really have a favorite way to do this - I'm just trying the least problematic. I do not need any encryption since both computers are on my internal network and no communication with the outside is performed.
Hope that someone can point me to the right solution, thanks in advance.
|
Building binary python distributions for multiple OS X versions
| 13,315,980
| 1
| 3
| 1,843
| 0
|
python,macos,build,cross-compiling,distutils
|
I found the solution. Go into /System/Library/Frameworks/Python.framework/Versions/2.7/lib/distutils/sysconfig.py
Go to line 408, which says "raise DistutilsPlatformError", and add a '#' to comment out that line of code. This will "unleash the python".
You are basically telling Python "don't worry, it's not 10.7, I know". There could be some crashes as a result, but I think otherwise. My very complex python application now compiles on Mac OS X 10.8 with no trouble and it seems to do the job; QA still has to test it though.
| 0
| 1
| 0
| 0
|
2011-10-04T21:34:00.000
| 2
| 0.099668
| false
| 7,654,316
| 1
| 0
| 0
| 1
|
I am attempting to build a python application with binary modules on OS X. I want to build versions for Snow Leopard and Leopard from Lion. I have XCode 4 installed with the 10.5 and 10.6 sdks and have been attempting to build using the MACOSX_DEPLOYMENT_TARGET flag set to 10.6. I receive an error from distutils complaining that python was built with a different deployment target.
I tried building a separate python binary with the deployment target set to 10.6 and then used virtualenv to try to build from that, but virtualenv expected a lib directory under the base env directory that was not there.
I am a total newb at developing on Mac and not even sure if what I want to do is possible. Am I going to have to break down and have someone still running Snow Leopard build my distributions?
I really appreciate any assistance.
|
What are the different options for social authentication on Appengine - how do they compare?
| 7,662,946
| 11
| 6
| 552
| 0
|
python,google-app-engine,oauth,openid,facebook-authentication
|
In my research on this question I found that there are essentially three options:
Use Google's authentication mechanisms (including their federated login via OpenID)
Pros:
You can easily check who is logged in via the Users service provided with Appengine
Google handles the security so you can be quite sure it's well tested
Cons:
This can only integrate with third party OpenID providers; it cannot integrate with facebook/twitter at this time
Use the social authentication mechanisms provided by a known framework such as tipfy, or django
Pros:
These can integrate with all of the major social authentication services
They are quite widely used so they are likely to be quite robust and pretty well tested
Cons:
While they are probably well tested, they may not be maintained
They do come as part of a larger framework which you may have to get comfortable with before deploying your app
Roll your own social authentication
Pros:
You can mix up whatever flavours of OpenID and OAuth tickle your fancy
Cons:
You are most likely to introduce security holes
Unless you've a bit of experience working with these technologies, this is likely to be the most time consuming
Further notes:
It's probable that everyone will move to OpenID eventually and then the standard Google authentication should work everywhere
The first option allows you to point a finger at Google if there is a problem with their authentication; the second option imposes more responsibility on you, but still allows you to say that you use a widely used solution if there is a problem; and the final option puts all the responsibility on you
Most of the issues revolve around session management - in case 1, Google does all of the session management and it is pretty invisible to the developer; in case 2, the session management is handled by the framework and in the 3rd case, you've to devise your own.
| 0
| 1
| 0
| 1
|
2011-10-05T10:42:00.000
| 1
| 1.2
| true
| 7,660,059
| 0
| 0
| 1
| 1
|
[This question is intended as a means to both capture my findings and sanity check them - I'll put up my answer tout de suite and see what other answers and comments appear.]
I spent a little time trying to get my head around the different social authentication options for (python) Appengine. I was particularly confused by how the authentication mechanisms provided by Google can interact with other social authentication mechanisms. The picture is complicated by the fact that Google has nice integration with third party OpenID providers, but some of the biggest social networks are not OpenID providers (e.g. facebook, twitter). [Note that facebook can use OpenID as a relying party, but not as a provider.]
The question is then the following: what are the different options for social authentication in Appengine and what are the pros and cons of each?
|
App Engine memcache speed "dict" vs "Class Object"
| 7,661,286
| 1
| 0
| 409
| 0
|
python,google-app-engine
|
You need to measure, but for all intents and purposes the result should be that they're about the same speed.
If either method tends to be faster, I don't really think it's going to impact your overall performance that much.
Your main latency will be the RPC call to memcache, which might be a factor of two slower than the slowest serialization.
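If you do want to measure, a quick sketch comparing pickled sizes (timing with timeit works the same way); the class and fields are illustrative:

    import pickle

    class Row(object):
        def __init__(self, a, b):
            self.a, self.b = a, b

    as_dict = {'a': 1, 'b': 'x'}
    as_obj = Row(1, 'x')

    # Instances also pickle the import path of their class, so they tend
    # to serialize slightly larger than an equivalent dict.
    print('dict: %d bytes' % len(pickle.dumps(as_dict, 2)))
    print('object: %d bytes' % len(pickle.dumps(as_obj, 2)))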
| 0
| 1
| 0
| 0
|
2011-10-05T10:57:00.000
| 3
| 1.2
| true
| 7,660,231
| 0
| 0
| 1
| 1
|
I have a page handler with 95% of my traffic. It fetches a model from the DB using its key and then uses fields in the fetched model to fill a django template.
I want to memcache the fetched model so as to avoid DB reads. Not all fields of the model are used in the template, so I want to cache it with just the required fields so as to improve cache utilization and fit more such models in cache.
So I want to convert the Model into a dictionary or 'class object' with only the fields required in the template.
Memcache uses pickle to serialize values, so for serialization purposes which will be faster: a dictionary or a 'class object'?
|
Purpose of #!/usr/bin/python3 shebang
| 7,720,640
| 13
| 221
| 273,899
| 0
|
python,scripting
|
This line helps find the program executable that will run the script. This shebang notation is fairly standard across most scripting languages (at least as used on grown-up operating systems).
An important aspect of this line is specifying which interpreter will be used. On many development-centered Linux distributions, for example, it is normal to have several versions of python installed at the same time.
Python 2.x and Python 3 are not 100% compatible, so this difference can be very important. So #!/usr/bin/python and #!/usr/bin/python3 are not the same (and neither is quite the same as #!/usr/bin/env python3, as noted elsewhere on this page).
| 0
| 1
| 0
| 1
|
2011-10-06T04:29:00.000
| 7
| 1
| false
| 7,670,303
| 1
| 0
| 0
| 4
|
I have noticed this in a couple of scripting languages, but in this example, I am using python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a python script (obviously it's installed since you are making a reference to it)
What if the user is using a operating system that isn't unix based
The language is installed in a different folder for whatever reason
The user has a different version. Especially when it's not a full version number(Like Python3 vs Python32)
If anything, I could see this breaking the python script because of the listed reasons above.
|
Purpose of #!/usr/bin/python3 shebang
| 7,670,323
| 7
| 221
| 273,899
| 0
|
python,scripting
|
And this line is how it knows.
It is ignored.
It will fail to run, and should be changed to point to the proper location - or env should be used.
It will fail to run, and would probably fail to run under a different version regardless.
| 0
| 1
| 0
| 1
|
2011-10-06T04:29:00.000
| 7
| 1
| false
| 7,670,303
| 1
| 0
| 0
| 4
|
I have noticed this in a couple of scripting languages, but in this example, I am using python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a python script (obviously it's installed since you are making a reference to it)
What if the user is using a operating system that isn't unix based
The language is installed in a different folder for whatever reason
The user has a different version. Especially when it's not a full version number(Like Python3 vs Python32)
If anything, I could see this breaking the python script because of the listed reasons above.
|
Purpose of #!/usr/bin/python3 shebang
| 52,982,676
| 3
| 221
| 273,899
| 0
|
python,scripting
|
Actually, determining what type a file is is very complicated, so the operating system can't just know. It can make lots of guesses based on -
extension
UTI
MIME
But the command line doesn't bother with all that, because it runs on a limited backwards-compatible layer from when that fancy nonsense didn't mean anything. If you double-click it, sure, a modern OS can figure that out - but if you run it from a terminal then no, because the terminal doesn't care about your fancy OS-specific file typing APIs.
Regarding the other points: it's a convenience, and it's similarly possible to run
python3 path/to/your/script
If your python isn't in the path specified, then it won't work, but we tend to install things to make stuff like this work, not the other way around. It doesn't actually matter if you're under *nix; it's up to your shell whether to consider this line, because it's shell behavior. So for example you can run bash under Windows.
You can actually omit this line entirely; it just means the caller will have to specify an interpreter. Also, don't put your interpreters in nonstandard locations and then try to call scripts without providing an interpreter.
| 0
| 1
| 0
| 1
|
2011-10-06T04:29:00.000
| 7
| 0.085505
| false
| 7,670,303
| 1
| 0
| 0
| 4
|
I have noticed this in a couple of scripting languages, but in this example, I am using Python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a Python script (obviously it's installed, since you are making a reference to it)?
What if the user is using an operating system that isn't Unix-based?
The language is installed in a different folder for whatever reason.
The user has a different version, especially when it's not a full version number (like Python3 vs Python32).
If anything, I could see this breaking the Python script because of the reasons listed above.
|
Purpose of #!/usr/bin/python3 shebang
| 7,670,334
| 28
| 221
| 273,899
| 0
|
python,scripting
|
That's called a hash-bang. If you run the script from the shell, it will inspect the first line to figure out what program should be started to interpret the script.
A non-Unix-based OS will use its own rules for figuring out how to run the script. Windows, for example, will use the filename extension, and the # causes the first line to be treated as a comment.
If the path to the Python executable is wrong, then naturally the script will fail. It is easy to create links to the actual executable from whatever location is specified by standard convention.
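As a small illustration, the interpreter itself never acts on the line; a minimal sketch:

#!/usr/bin/python3
# To Python the line above is just a comment; only the shell reads it
# to decide which program should interpret this file.
print("the shebang is invisible to the interpreter")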
| 0
| 1
| 0
| 1
|
2011-10-06T04:29:00.000
| 7
| 1
| false
| 7,670,303
| 1
| 0
| 0
| 4
|
I have noticed this in a couple of scripting languages, but in this example, I am using Python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a Python script (obviously it's installed, since you are making a reference to it)?
What if the user is using an operating system that isn't Unix-based?
The language is installed in a different folder for whatever reason.
The user has a different version, especially when it's not a full version number (like Python3 vs Python32).
If anything, I could see this breaking the Python script because of the reasons listed above.
|
How can I keep Task Manager from killing my pythonw script?
| 7,680,857
| 0
| 0
| 734
| 0
|
python,windows,process,taskmanager
|
Hopefully you aren't giving general "public" users administrative privileges. Don't give the public account permission to kill your script's process; then you just run your script from a different user account than the one the general public can use.
| 0
| 1
| 0
| 0
|
2011-10-06T07:34:00.000
| 1
| 0
| false
| 7,671,375
| 0
| 0
| 0
| 1
|
I want to run a Python script for several days, performing a huge database calculation, on a "public" Windows computer in my workplace.
Since this task is important, I want to prevent it from being closed via the Task Manager.
Is it possible to protect a Python script from being closed by the Task Manager (Windows XP)? If it is, how?
|
Rsyslog + Virtualenv
| 8,363,498
| 2
| 1
| 321
| 0
|
python,centos,virtualenv,rsyslog
|
Have you ever asked a question while researching something, then learned what you needed to do and then wished you hadn't asked the question?
All you need to do is modify your python path and add the path to the site-packages directory of the virtualenv you want to use.
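A minimal sketch of that, placed at the top of the script rsyslog invokes (the virtualenv path below is an assumption; point it at your own environment):

import site

# Make the virtualenv's packages importable without activating the env.
site.addsitedir('/opt/venvs/rsyslog-env/lib/python2.7/site-packages')

# Imports from here on also resolve against the virtualenv.

Alternatively, the virtualenv's own bin/activate_this.py can be executed for the same effect.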
| 0
| 1
| 0
| 1
|
2011-10-07T12:35:00.000
| 1
| 1.2
| true
| 7,687,332
| 0
| 0
| 0
| 1
|
I'm using a shell-execute action in rsyslog to run a Python script on a CentOS machine. How can I ensure that it runs in a specified virtualenv?
|
Python self contained web application and server?
| 7,721,670
| 0
| 6
| 1,744
| 0
|
python,deployment
|
CherryPy is the easiest one to use, Django is feature-rich, and Tornado is more advanced, with an asynchronous web server (in my opinion better than a multithreaded web server).
For what you want, Django is the most suitable, IMO.
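If you go the CherryPy route instead, the whole thing can be one file, since it ships its own threaded HTTP server; a minimal sketch (app and port are arbitrary):

import cherrypy

class App(object):
    @cherrypy.expose
    def index(self):
        return "Hello from a bundled app"

if __name__ == '__main__':
    # No nginx or other frontend needed; CherryPy's built-in server
    # is threaded and usable in production.
    cherrypy.config.update({'server.socket_host': '0.0.0.0',
                            'server.socket_port': 8080})
    cherrypy.quickstart(App())

The user then just unpacks the tarball and runs python blahblah.py.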
| 0
| 1
| 0
| 0
|
2011-10-07T14:10:00.000
| 3
| 0
| false
| 7,688,442
| 0
| 0
| 1
| 1
|
What is a good and easy way to distribute a web application and server bundled together, python way?
So I can say to a user, "Here, take this tar/whatever, unpack it and run blahblah.py", and blahblah.py will run an http/wsgi server and serve my application?
I'm looking for a stable, production-ready, multi-threaded WSGI server with which I can bundle my app, without the need for nginx or other "frontends" or having to deal with any configuration.
|
Can someone explain parallelpython versus hadoop for distributing python process across various servers?
| 7,704,037
| 2
| 3
| 540
| 0
|
python,hadoop,parallel-processing
|
The main difference is that Hadoop is good at processing big data (gigabytes to terabytes of data). It provides a simple logical framework called MapReduce, which is well suited to data aggregation, and a distributed storage system called HDFS.
If your inputs are smaller than 1 gigabyte, you probably don't want to use Hadoop.
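For context, Hadoop Streaming runs ordinary Python scripts as mappers and reducers over stdin/stdout; a minimal word-count mapper sketch:

#!/usr/bin/env python
# mapper.py: read lines on stdin, emit tab-separated key/value pairs.
import sys

for line in sys.stdin:
    for word in line.split():
        print('%s\t%d' % (word, 1))

A matching reducer sums the counts per key; both scripts are passed to the hadoop-streaming jar via its -mapper and -reducer options.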
| 0
| 1
| 0
| 0
|
2011-10-09T07:12:00.000
| 2
| 0.197375
| false
| 7,701,989
| 1
| 0
| 0
| 2
|
I'm new to using multiple CPUs to process jobs and was wondering if people could let me know the pros/cons of parallelpython (or any similar Python module) versus Hadoop streaming?
I have a very large, CPU-intensive process that I would like to spread across several servers.
|
Can someone explain parallelpython versus hadoop for distributing python process across various servers?
| 7,705,052
| 2
| 3
| 540
| 0
|
python,hadoop,parallel-processing
|
Moving data becomes harder and harder as its size grows, so when it comes to parallel computing, data locality becomes very important. Hadoop, as a map/reduce framework, maximizes the locality of the data being processed. It also gives you a way to spread your data efficiently across your cluster (HDFS). So basically, even if you use other parallel modules, as long as your data isn't local to the machines doing the processing, or as long as you have to move your data across the cluster all the time, you won't get the maximum benefit from parallel computing. That's one of the key ideas of Hadoop.
| 0
| 1
| 0
| 0
|
2011-10-09T07:12:00.000
| 2
| 1.2
| true
| 7,701,989
| 1
| 0
| 0
| 2
|
I'm new to using multiple CPUs to process jobs and was wondering if people could let me know the pros/cons of parallelpython (or any similar Python module) versus Hadoop streaming?
I have a very large, CPU-intensive process that I would like to spread across several servers.
|
usr/bin/env: bad interpreter Permission Denied --> how to change the fstab
| 16,814,809
| 1
| 10
| 33,994
| 0
|
python,permissions,cygwin
|
This seems to be a late answer, but it may be useful for others. I got the same kind of error when trying to run a shell script that used Python. Check /usr/bin for the existence of python; if it is not found, install it to resolve the issue. I came to this conclusion because the error says "bad interpreter".
| 0
| 1
| 0
| 0
|
2011-10-10T17:22:00.000
| 8
| 0.024995
| false
| 7,716,357
| 1
| 0
| 0
| 2
|
I'm using cygwin on windows 7 to run a bash script that activates a python script, and I am getting the following error:
myscript.script: /cydrive/c/users/mydrive/folder/myscript.py: usr/bin/env: bad interpreter: Permission Denied.
I'm a total newbie to programming, so I've looked around a bit, and I think this means Python is mounted on a different directory that I don't have access to. However, based on what I found, I have tried the following things:
Change something (from user to exec) in the fstab: however, my fstab file is all commented out and only mentions what the defaults are. I don't know how I can change the defaults. The fstab.d folder is empty.
change the #! usr/bin/env python line in the script to the actual location of Python: did not work, same error
add a PYTHONPATH to the environment variables of windows: same error.
I would really appreciate it if someone could help me out with a suggestion!
|
usr/bin/env: bad interpreter Permission Denied --> how to change the fstab
| 26,632,686
| 0
| 10
| 33,994
| 0
|
python,permissions,cygwin
|
You can run the script explicitly, as python ./example.py, and then fix the interpreter line inside your script.
| 0
| 1
| 0
| 0
|
2011-10-10T17:22:00.000
| 8
| 0
| false
| 7,716,357
| 1
| 0
| 0
| 2
|
I'm using cygwin on windows 7 to run a bash script that activates a python script, and I am getting the following error:
myscript.script: /cydrive/c/users/mydrive/folder/myscript.py: usr/bin/env: bad interpreter: Permission Denied.
I'm a total newbie to programming, so I've looked around a bit, and I think this means Python is mounted on a different directory that I don't have access to. However, based on what I found, I have tried the following things:
Change something (from user to exec) in the fstab: however, my fstab file is all commented out and only mentions what the defaults are. I don't know how I can change the defaults. The fstab.d folder is empty.
change the #! usr/bin/env python line in the script to the actual location of Python: did not work, same error
add a PYTHONPATH to the environment variables of windows: same error.
I would really appreciate it if someone could help me out with a suggestion!
|
Writing a telnet server in Python and embedding IPython as shell
| 7,723,501
| 0
| 1
| 830
| 0
|
python,ipython
|
I think it is not possible to provide IPython's full functionality, such as auto-completion, over telnet. Twisted has a Python shell that works well with telnet.
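For the Twisted route, here is a minimal sketch based on the usual manhole-over-telnet recipe (port and namespace are arbitrary; verify the imports against your Twisted version):

from twisted.internet import reactor, protocol
from twisted.conch.insults import insults
from twisted.conch.manhole import ColoredManhole
from twisted.conch.telnet import TelnetTransport, TelnetBootstrapProtocol

factory = protocol.ServerFactory()
# Each telnet connection gets an interactive Python shell (a manhole).
factory.protocol = lambda: TelnetTransport(
    TelnetBootstrapProtocol, insults.ServerProtocol,
    ColoredManhole, {'greeting': 'hello'})
reactor.listenTCP(6023, factory)
reactor.run()

You get line editing and history, but not IPython's magics or completion.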
| 0
| 1
| 0
| 0
|
2011-10-11T08:43:00.000
| 2
| 0
| false
| 7,723,399
| 0
| 0
| 0
| 1
|
I am trying to write a simple telnet server that will expose an IPython shell to the connected client. Does someone know how to do that?
The question is really about embedding the IPython shell into the telnet server (I can probably use Twisted for the telnet server part).
Thx
|
Running IPython after changing the filename of python.exe
| 7,820,862
| 0
| 0
| 550
| 0
|
python,windows,interpreter,ipython
|
Found a solution:
python27.exe c:\Python27\Scripts\ipython-script.py
| 0
| 1
| 0
| 0
|
2011-10-11T13:42:00.000
| 4
| 1.2
| true
| 7,727,017
| 1
| 0
| 0
| 4
|
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain.
But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename.
My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
|
Running IPython after changing the filename of python.exe
| 7,727,342
| 1
| 0
| 550
| 0
|
python,windows,interpreter,ipython
|
Try to find Python in the Windows registry and change the path to Python. After that, try to reinstall IPython.
| 0
| 1
| 0
| 0
|
2011-10-11T13:42:00.000
| 4
| 0.049958
| false
| 7,727,017
| 1
| 0
| 0
| 4
|
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain.
But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename.
My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
|
Running IPython after changing the filename of python.exe
| 7,727,120
| 0
| 0
| 550
| 0
|
python,windows,interpreter,ipython
|
I do not know if there is a config file where you can change this, but you may have to recompile IPython and change the interpreter variables. But why do you need to rename it to python27.exe when it is already in a Python27 folder?
| 0
| 1
| 0
| 0
|
2011-10-11T13:42:00.000
| 4
| 0
| false
| 7,727,017
| 1
| 0
| 0
| 4
|
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain.
But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename.
My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
|