[Dataset columns: Title (string) | A_Id (int64) | Users Score (int64) | Q_Score (int64) | ViewCount (int64) | Database and SQL (0/1 flag) | Tags (string) | Answer (string) | GUI and Desktop Applications (0/1 flag) | System Administration and DevOps (0/1 flag) | Networking and APIs (0/1 flag) | Other (0/1 flag) | CreationDate (string) | AnswerCount (int64) | Score (float64) | is_accepted (bool) | Q_Id (int64) | Python Basics and Environment (0/1 flag) | Data Science and Machine Learning (0/1 flag) | Web Development (0/1 flag) | Available Count (int64) | Question (string)]
Where should settings files be stored?
| 4,520,533
| 6
| 4
| 366
| 0
|
python,wxpython,application-settings
|
wxPython has your back. You want wx.StandardPaths. There's a good example included with the wxPython demo.
| 0
| 1
| 0
| 0
|
2010-12-23T06:42:00.000
| 3
| 1.2
| true
| 4,516,459
| 0
| 0
| 0
| 2
|
I'm writing an application in python (using wxPython for the gui) and I'm looking for a platform independent way to decide where to store application settings files. On linux systems, where is it customary to store application settings files? How about on Mac, and Windows (all modern versions)?
Ideally I'd like to have a module that provides a platform agnostic interface to locate these files. Does something like this already exist?
|
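The question above asks for the per-platform conventions. Outside wxPython they can be sketched with the stdlib alone; the paths below are the common conventions (APPDATA on Windows, Application Support on macOS, XDG on Linux), not something every setup guarantees:

```python
import os
import sys

def user_config_dir(app_name):
    """Return a conventional per-user settings directory for app_name."""
    if sys.platform == "win32":
        # Windows: %APPDATA%\AppName
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
        return os.path.join(base, app_name)
    if sys.platform == "darwin":
        # macOS: ~/Library/Application Support/AppName
        return os.path.join(os.path.expanduser("~"), "Library",
                            "Application Support", app_name)
    # Linux/other: XDG convention, ~/.config/appname
    base = os.environ.get("XDG_CONFIG_HOME",
                          os.path.join(os.path.expanduser("~"), ".config"))
    return os.path.join(base, app_name.lower())
```

wx.StandardPaths gives you the same answers from inside a wxPython app, so prefer it there; a helper like this is only needed when wx isn't loaded yet.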
Print statements on server give IOError: failed to write data
| 4,517,414
| 1
| 1
| 1,041
| 0
|
python,pylons
|
Don't use print statements; use the logging module instead. We can't help you further without knowing the setup of the server.
| 0
| 1
| 0
| 0
|
2010-12-23T09:26:00.000
| 2
| 1.2
| true
| 4,517,397
| 0
| 0
| 0
| 2
|
I am running Pylons on my local machine with paster, and on a Debian server using WSGI. I want to add some print statements to debug a problem: am not a Pylons or Python expert.
On my local machine this works fine: print statements go to the terminal. On the server, the statements don't print to the log files: instead the log file says "IOError: failed to write data" whenever a print statement is called.
Until I can fix this, I can't debug anything on the server.
Could someone advise how to get printing running on the server? Thanks!
|
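The logging advice above can be sketched minimally; records sent to a handler (rather than printed to stdout) reliably reach the configured destination regardless of what the WSGI server does with stdout. The file name here is just an illustration:

```python
import logging

# configure a module-level logger instead of using print()
log = logging.getLogger("myapp")
log.setLevel(logging.DEBUG)
handler = logging.FileHandler("app.log")   # any path the server user can write
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log.addHandler(handler)

# debug output now goes to app.log instead of the (possibly closed) stdout
log.debug("value of foo: %r", {"bar": 1})
```

In a Pylons app this configuration usually lives in the logging sections of the application's .ini file rather than in code.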
Print statements on server give IOError: failed to write data
| 4,517,657
| 3
| 1
| 1,041
| 0
|
python,pylons
|
It's wrong for a WSGI application to use sys.stdout or sys.stderr. If you want to spit debug to a server error log, use environ['wsgi.errors'].write().
| 0
| 1
| 0
| 0
|
2010-12-23T09:26:00.000
| 2
| 0.291313
| false
| 4,517,397
| 0
| 0
| 0
| 2
|
I am running Pylons on my local machine with paster, and on a Debian server using WSGI. I want to add some print statements to debug a problem: am not a Pylons or Python expert.
On my local machine this works fine: print statements go to the terminal. On the server, the statements don't print to the log files: instead the log file says "IOError: failed to write data" whenever a print statement is called.
Until I can fix this, I can't debug anything on the server.
Could someone advise how to get printing running on the server? Thanks!
|
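The environ['wsgi.errors'] suggestion above looks like this in a plain WSGI app (a minimal sketch):

```python
def app(environ, start_response):
    # per the WSGI spec, wsgi.errors is the server's error stream;
    # writes here end up in the server's error log, not in the response
    environ["wsgi.errors"].write(
        "debug: handling %s\n" % environ.get("PATH_INFO"))
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]
```

Under mod_wsgi that stream is wired to Apache's error log, which is exactly where the asker was looking for the print output.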
How to set up Python interpreter in Emacs on OS X?
| 4,520,888
| 5
| 2
| 506
| 0
|
python,emacs,osx-leopard
|
While in python-mode, hit C-c ! or do an M-x py-shell RET. Should work with defaults.
| 0
| 1
| 0
| 0
|
2010-12-23T17:01:00.000
| 1
| 1.2
| true
| 4,520,851
| 0
| 0
| 0
| 1
|
I have emacs and python working on OSX, but I'd like to get a python interpreter working in a split window so I can view output as I code. Is this possible?
thanks
|
Line endings and reading and writing to text files
| 4,522,035
| 3
| 2
| 573
| 0
|
python,file-io,line-endings
|
It's a non-issue; Python is smart like that and handles line endings across platforms very well.
| 0
| 1
| 0
| 0
|
2010-12-23T19:56:00.000
| 2
| 1.2
| true
| 4,522,026
| 1
| 0
| 0
| 1
|
I am writing a small script that will need to read and write to text files on Windows and Linux and perhaps Mac even. The script will be used by users on all perhaps all of these platforms (Windows for sure) and interchangeably - so a user who wrote to a file X on Windows, may read the file on Linux with the script.
What precautions should I take or how should I implement my code that it is able to handle line endings across various platforms? (reading and writing)
Or this is a non-issue and Python handles everything?
|
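Python's universal-newline handling, mentioned above, can be seen directly (Python 3 shown; on Python 2, open the file for reading with mode "rU" to get the same normalization):

```python
import os
import tempfile

# text mode translates "\n" to the platform's line ending on write...
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("line1\nline2\n")

# ...and normalizes \n, \r\n, and \r back to "\n" on read, so a file
# written on Windows reads cleanly on Linux and vice versa
with open(path) as f:
    lines = f.read().splitlines()
```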
Parsing parts of a Large XML File in App Engine using Blobstore?
| 4,522,924
| 0
| 2
| 630
| 0
|
python,xml,google-app-engine,beautifulsoup,blobstore
|
It really sounds like App Engine is not the right platform for this project.
| 0
| 1
| 0
| 0
|
2010-12-23T21:00:00.000
| 4
| 0
| false
| 4,522,434
| 0
| 0
| 1
| 1
|
I'm working on an google app engine app that will have to deal with some largish ( <100 MB) XML files uploaded from a form that will exceed GAE's limits -- either taking longer than 30 seconds to upload the file, or exceeding the 10 MB request size.
The current solution I'm envisioning is to upload the file to the blobstore, and then bring it into the application (1 MB at a time) for parsing. This could also very well exceed the 30 second limits for a request, so I'm wondering if there's a nice way to handle large XML documents in chunks, as I may end up having to do it via task queues in 30 second bursts.
I'm currently using BeautifulSoup for other parts of the project, having switched from minidom. Is there a way to handle data in chunks that would play nice with GAE?
|
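For parsing a large XML document incrementally rather than loading it whole (which is what minidom and BeautifulSoup do), the stdlib's iterparse works on any file-like object, which would include a reader over blobstore chunks. A minimal sketch with an inline sample document:

```python
import io
import xml.etree.ElementTree as ET

SAMPLE = b"<items><item id='1'/><item id='2'/><item id='3'/></items>"

seen = []
# iterparse streams (event, element) pairs instead of building
# the whole tree up front, keeping memory use bounded
for event, elem in ET.iterparse(io.BytesIO(SAMPLE)):
    if elem.tag == "item":
        seen.append(elem.get("id"))
        elem.clear()          # free the element once processed
```

Whether this fits inside GAE's 30-second request window still depends on document size, so the task-queue chunking the asker describes may be needed regardless.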
Python Eggs on Google App Engine
| 4,530,218
| 0
| 0
| 883
| 0
|
python,google-app-engine,virtualenv,pip
|
If you use easy_install instead of pip you can run it with the --install-dir argument to specify a non-default installation directory.
| 0
| 1
| 0
| 0
|
2010-12-25T11:51:00.000
| 2
| 0
| false
| 4,530,183
| 1
| 0
| 1
| 1
|
Usually I would use virtualenv and pip for deployment of web applications. With Google App Engine this doesn't work, because all import statements are relative to the directory of the application.
The most common approach I saw was to simply copy the packages from site-packages to the directory of the application. This involves manual work and is error-prone.
Another approach was to change install_lib and install_scripts in ~/.pydistutils.cfg, but this doesn't allow me to use pip in my home directory simultaneously.
Do you have any suggestions for this?
|
virtualenvwrapper on Ubuntu 10.10 - Python
| 4,531,826
| 1
| 1
| 1,599
| 0
|
python,django,ubuntu,virtualenv,virtualenvwrapper
|
Well, is virtualenv installed in the same Python as virtualenvwrapper? virtualenvwrapper requires installing virtualenv separately.
| 0
| 1
| 0
| 0
|
2010-12-25T21:49:00.000
| 2
| 1.2
| true
| 4,531,805
| 1
| 0
| 0
| 2
|
I can't get virtualenvwrapper to work on Ubuntu 10.10 desktop.
mkvirtualenv test_env
returns:
ERROR: virtualenvwrapper could not find virtualenv in your path
I followed the install instructions to the letter.
Any ideas?
|
virtualenvwrapper on Ubuntu 10.10 - Python
| 12,169,613
| 3
| 1
| 1,599
| 0
|
python,django,ubuntu,virtualenv,virtualenvwrapper
|
I got that same message when installing virtualenvwrapper using the MacPorts package manager (version py27-virtualenvwrapper @3.2_0). I had virtualenv installed, also via MacPorts. The only way I could get it to work was adding the environment variable:
export VIRTUALENVWRAPPER_VIRTUALENV=virtualenv-2.7
to my .profile file. Not Ubuntu 10.10 at all, but if you've got a working virtualenv on your setup, maybe virtualenvwrapper just needs a pointer.
| 0
| 1
| 0
| 0
|
2010-12-25T21:49:00.000
| 2
| 0.291313
| false
| 4,531,805
| 1
| 0
| 0
| 2
|
I can't get virtualenvwrapper to work on Ubuntu 10.10 desktop.
mkvirtualenv test_env
returns:
ERROR: virtualenvwrapper could not find virtualenv in your path
I followed the install instructions to the letter.
Any ideas?
|
Django Asynchronous Processing
| 4,536,676
| 14
| 8
| 9,924
| 0
|
python,django,asynchronous,celery,cython
|
Celery would be perfect for this.
Since what you're doing is relatively simple (read: you don't need complex rules about how tasks should be routed), you could probably get away with using the Redis backend, which means you don't need to set up and configure RabbitMQ (which, in my experience, is more difficult).
I use Redis with a recent dev build of Celery, and here are the relevant bits of my config:
# Use redis as a queue
BROKER_BACKEND = "kombu.transport.pyredis.Transport"
BROKER_HOST = "localhost"
BROKER_PORT = 6379
BROKER_VHOST = "0"
# Store results in redis
CELERY_RESULT_BACKEND = "redis"
REDIS_HOST = "localhost"
REDIS_PORT = 6379
REDIS_DB = "0"
I'm also using django-celery, which makes the integration with Django painless.
Comment if you need any more specific advice.
| 0
| 1
| 0
| 0
|
2010-12-26T21:45:00.000
| 2
| 1.2
| true
| 4,535,540
| 0
| 0
| 1
| 1
|
I have a bunch of Django requests which executes some mathematical computations ( written in C and executed via a Cython module ) which may take an indeterminate amount ( on the order of 1 second ) of time to execute. Also the requests don't need to access the database and are all independent of each other and Django.
Right now everything is synchronous ( using Gunicorn with sync worker types ) but I'd like to make this asynchronous and nonblocking. In short I'd like to do something:
Receive the AJAX request
Allocate task to an available worker ( without blocking the main Django web application )
Worker executes task in some unknown amount of time
Django returns the result of the computation (a list of strings) as JSON whenever the task completes
I am very new to asynchronous Django, and so my question is what is the best stack for doing this.
Is this sort of process something a task queue is well suited for? Would anyone recommend Tornado + Celery + RabbitMQ, or perhaps something else?
Thanks in advance!
|
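The submit-then-poll flow in the question can be sketched in-process with the stdlib; in production, Celery workers and a broker replace the hypothetical executor and result dict below:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

# in-process stand-ins for a Celery worker pool and result backend
executor = ThreadPoolExecutor(max_workers=4)
tasks = {}

def submit(fn, *args):
    """Queue work and return a task id immediately (the AJAX response)."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = executor.submit(fn, *args)
    return task_id

def poll(task_id):
    """Return the result if done, else None (the client polls this)."""
    fut = tasks[task_id]
    return fut.result() if fut.done() else None
```

The web view returns the task id as JSON right away, and a second view calls poll() until the list of strings is ready; Celery's AsyncResult plays the same role.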
Iterative MapReduce
| 11,489,099
| 1
| 3
| 3,152
| 0
|
python,streaming,hadoop,mapreduce,iteration
|
You needn't write another job. You can put the same job in a loop (a while loop) and just keep changing the job's parameters, so that when the mapper and reducer complete their processing, control moves on to creating a new configuration, and the input file is then automatically the output of the previous phase.
| 0
| 1
| 0
| 0
|
2010-12-27T08:18:00.000
| 4
| 0.049958
| false
| 4,537,422
| 0
| 1
| 0
| 1
|
I've written a simple k-means clustering code for Hadoop (two separate programs - mapper and reducer). The code is working over a small dataset of 2d points on my local box. It's written in Python and I plan to use Streaming API.
I would like suggestions on how best to run this program on Hadoop.
After each run of mapper and reducer, new centres are generated. These centres are input for the next iteration.
From what I can see, each mapreduce iteration will have to be a separate mapreduce job. And it looks like I'll have to write another script (python/bash) to extract the new centres from HDFS after each reduce phase, and feed it back to mapper.
Any other easier, less messy way? And if the cluster happens to use a fair scheduler, will it be very long before this computation completes?
|
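The loop-the-same-job idea above can be sketched locally. In the real pipeline, each call to kmeans_round would be a Hadoop streaming job whose output directory becomes the next job's input; 1-D points are used here for brevity:

```python
# local sketch of the iterative driver: the output centres of one
# round are the input centres of the next
def kmeans_round(points, centres):
    # "map": assign each point to its nearest centre
    buckets = {i: [] for i in range(len(centres))}
    for p in points:
        i = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
        buckets[i].append(p)
    # "reduce": new centre = mean of assigned points
    return [sum(b) / len(b) if b else centres[i]
            for i, b in sorted(buckets.items())]

def drive(points, centres, max_iter=20):
    for _ in range(max_iter):
        new = kmeans_round(points, centres)
        if new == centres:        # converged: stop iterating
            break
        centres = new
    return centres
```

The driver's convergence check is exactly the "extract new centres and feed them back" step the asker describes, just done by a controlling script instead of by hand.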
WSGI server that processes request despite client disconnecting? - Python
| 4,540,187
| 1
| 0
| 672
| 0
|
python,django,wsgi,uwsgi,gevent
|
Almost all WSGI servers do that; I'm not sure what you mean.
gunicorn
paste
cherrypy
twisted.web
apache with mod_wsgi
werkzeug
...
| 0
| 1
| 0
| 0
|
2010-12-27T16:24:00.000
| 1
| 0.197375
| false
| 4,540,089
| 0
| 0
| 1
| 1
|
I need to find a stable WSGI server that won't stop processing a request when the client disconnects.
I'm not sure if uWSGI or gunicorn would fit this criteria.
Forgot to add this:
I am also trying to return a response before the request gets processed.
Any ideas?
|
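Returning a response before the work finishes, as the question asks, can be sketched in plain WSGI by handing the work to a background thread. Note this is a sketch: some servers kill worker threads on shutdown, so a real task queue is more robust.

```python
import threading

def process(environ):
    pass   # stand-in for the long-running work

def app(environ, start_response):
    # respond immediately; processing continues regardless of the client
    threading.Thread(target=process, args=(environ,)).start()
    start_response("202 Accepted", [("Content-Type", "text/plain")])
    return [b"accepted\n"]
```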
Updated Python from disk image, want to make it available in terminal
| 4,542,064
| 0
| 2
| 151
| 0
|
python
|
First, I'm not a Mac user, so I don't know a couple of specifics (default PATH, etc).
Also, a bit of clarity - when you use the installer, it lets you customize the installation to install in a specific location - do you know where that is? If you didn't select a location, it defaults to /usr/local/bin.
From Terminal, try "python3". If that fails, try "/usr/local/bin/python3".
Outside of that, wherever it's found, you'll want that in your path statement if it isn't there already.
It's not recommended that you "replace" the system python with python 3.x, as you'll definitely have problems.
| 0
| 1
| 0
| 0
|
2010-12-27T21:38:00.000
| 4
| 0
| false
| 4,542,014
| 1
| 0
| 0
| 2
|
I have Python 2.6.1 which came on the Mac I have, but I'd recently downloaded and installed the disk image of 3.1.3 and I'd like to have it available in Terminal, how can I do this? For instance when I do $ python -V in Terminal I get 2.6.1.
Thanks for any help.
|
Updated Python from disk image, want to make it available in terminal
| 4,542,091
| 1
| 2
| 151
| 0
|
python
|
The default Python version in Mac OS X needs to stay the default Python version, or things will break. You want to install Python 3 alongside it. This is most likely exactly what happened; you start Python 3 with python3.
| 0
| 1
| 0
| 0
|
2010-12-27T21:38:00.000
| 4
| 0.049958
| false
| 4,542,014
| 1
| 0
| 0
| 2
|
I have Python 2.6.1 which came on the Mac I have, but I'd recently downloaded and installed the disk image of 3.1.3 and I'd like to have it available in Terminal, how can I do this? For instance when I do $ python -V in Terminal I get 2.6.1.
Thanks for any help.
|
IPython in Emacs. Quick code evaluation
| 4,556,760
| 0
| 6
| 3,753
| 0
|
python,shell,emacs,ipython
|
I found a partial answer to Q1:
Python-mode provides C-c C-c, which can send a buffer to an already-opened Python shell (similarly, C-c | can send a region to the shell), and if ipython.el is installed, the default Python shell is set to IPython.
Unfortunately, this only works for Python scripts, and not for IPython scripts. C-c C-c works by copying the buffer with the code snippet to a temporary file with extension .py that is then sent to the shell. Since the file has extension .py, IPython executes it as if it were regular Python code, and therefore the code snippet cannot contain IPython-specific code (such as IPython magic commands).
| 0
| 1
| 0
| 0
|
2010-12-27T23:50:00.000
| 2
| 0
| false
| 4,542,718
| 0
| 0
| 0
| 1
|
Update: The question still lacks a satisfactory answer.
I would like to "send" code snippets to an IPython interpreter in Emacs 23.2 (Linux). Assuming that I have already started an IPython shell in a buffer in Emacs (e.g. using python-mode.el and ipython.el), is there a way of selecting a region in a different buffer and "sending" this region to the already-started IPython shell?
I have tried C-c C-c (send-buffer-to-shell) and C-c | (send-region-to-shell), but this only works as long as the code is written in Python and not in IPython (IPython can run Python code). The reason seems to be that, for both commands, Emacs creates a temporary file with .py extension (as opposed to with .ipy extension), which then is interpreted by IPython as "Python-specific code". This prevents me from using IPython-specific features
such as magic commands.
On a separate note, I have also read that Emacs provides M-| ('shell-command-on-region') to run selected regions in a shell. To do this with an IPython interpreter, I have tried setting shell-file-name to my IPython path. However, when I run M-| after selecting a region, Emacs prompts me the following:
Shell command on region:
and if I then type RET, I get the IPython man page on the *Shell Command Output* buffer, without the region being executed. Is there any IPython-specific command that I can use for M-| ('shell-command-on-region') to get IPython run my code?
Thanks!
|
what is missing that prevents my IDLE from working?
| 4,551,695
| 1
| 0
| 243
| 0
|
python,tkinter
|
Use Synaptic Package Manager (System > Administration > Synaptic Package Manager). Search for "idle".
| 0
| 1
| 0
| 0
|
2010-12-29T04:52:00.000
| 3
| 0.066568
| false
| 4,551,565
| 1
| 0
| 0
| 1
|
Installed Ubuntu 10.10 and Python 2.5.5. IDLE did not start from the terminal so I went into the Python interpreter and did "import _tkinter". The package was not found. After searching a bit, I found that Ubuntu/debian might not include _tkinter so I proceeded to "sudo apt-get install python-tk" as per my searches.
Still the interpreter cannot find _tkinter. What next?
|
TK_LIBRARY and TCL_LIBRARY environment variables on Ubuntu
| 4,553,921
| 1
| 1
| 2,839
| 0
|
python,tkinter
|
No, setting TK_LIBRARY and TCL_LIBRARY should not be necessary. I suspect you are invoking a Python interpreter different from /usr/bin/python, or that /usr/bin/python has been changed to point to a different interpreter than the one that goes with the python-tk package. Ubuntu 10.10 ships with Python 2.6 by default.
Edit: To build Python 2.5 with Tk support, make sure the tk-dev package is installed before running configure. Also check the end of the build output to see which other modules have not been built, and consider installing the relevant header files. Make sure that your installation does not overwrite /usr/bin/python, e.g. by installing into /usr/local (which is the default for configure).
| 0
| 1
| 0
| 0
|
2010-12-29T11:56:00.000
| 1
| 1.2
| true
| 4,553,869
| 1
| 0
| 0
| 1
|
I'm trying to get _tkinter to import into Python. I suspect it may be due to not having values defined for the environment variables TK_LIBRARY and TCL_LIBRARY. This is as it stands after using apt-get for python-tk, tcl, and tk. If I have to set the environment variables manually, what would I set them to?
I am using Ubuntu 10.10 and Python 2.5.5.
|
How to load current buffer into Python interpreter in Emacs?
| 4,555,319
| 9
| 7
| 1,879
| 0
|
python,emacs
|
You may want to hit "C-c C-z" (switch to interpreter) to see the results of any buffers or regions you evaluated.
| 0
| 1
| 0
| 0
|
2010-12-29T15:10:00.000
| 1
| 1.2
| true
| 4,555,224
| 0
| 0
| 0
| 1
|
I'm trying to use emacs to edit and run python programs (emacs23 and python 2.6 on Ubuntu 10.10).
I read a file into Emacs (C-x -C-f)
I start the interpreter (Menu Python - Start interpreter; I haven't found the keyboard shortcut for this yet)
Emacs split the frame in two windows
I place the cursor in the python file (C-x o)
Now I want to run the Python code in the upper window in the Python interpreter in the lower window. Other places have suggested:
C-c C-c, but that does nothing
C-c !, but emacs says that that command is undefined
I have installed ropemacs (sudo apt-get install python-ropemacs) but that didn't change anything.
|
Easiest non-Java way to write HBase MapReduce on CDH3?
| 4,567,653
| 0
| 1
| 1,322
| 0
|
python,hadoop,mapreduce,hbase
|
It's not precisely an answer, but it's the closest I got --
I asked in #hbase on irc.freenode.net yesterday, and one of the Cloudera employees responded.
The "Input Splits" problem I'm having with Pig is specific to Pig 0.7, and Pig 0.8 will be bundled with Cloudera CDH3 Beta 4 (no ETA on that). Therefore, what I want to do (easily write M/R jobs using HBase tables as both sink and source) will be possible in their next release. It also seems that the HBaseStorage class will be generally improved to help with read/write operations from any JVM language, making Jython, JRuby, Scala and Clojure all much more feasible.
So the answer to the question, at this time, is "Wait for CDH3 Beta 4", or if you're impatient, "Download the latest version of Pig and pray that it's compatible with your HBase"
| 0
| 1
| 0
| 0
|
2010-12-29T19:12:00.000
| 1
| 1.2
| true
| 4,557,045
| 0
| 1
| 1
| 1
|
I've been working on this for a long time, and I feel very worn out; I'm hoping for an [obvious?] insight from SO community that might get my pet project back on the move, so I can stop kicking myself. I'm using Cloudera CDH3, HBase .89 and Hadoop .20.
I have a Python/Django app that writes data to a single HBase table using the Thrift interface, and that works great. Now I want to Map/Reduce it into some more HBase tables.
The obvious answer here is either Dumbo or Apache PIG, but with Pig, the HBaseStorage adapter support isn't available for my version yet (Pig is able to load the classes and definitions, but freezes at the "Map" step, complaining about "Input Splits"; Pig mailing lists suggest this is fixed in Pig 0.8, which is incompatible with CDH3 Hadoop, so I'd have to use edge versions of everything [i think]). I can't find any information on how to make Dumbo use HBaseStorage as a data sink.
I don't care if it's Python, Ruby, Scala, Clojure, Jython, JRuby or even PHP, I just really don't want to write Java (for lots of reasons, most of them involving the sinking feeling I get every time I have to convert an Int() to IntWritable() etc).
I've tried literally every last solution and example I can find (for the last 4 weeks) for writing HBase Map/Reduce jobs in alternative languages, but everything seems to be either outdated or incomplete. Please, Stack Overflow, save me from my own devices!
|
Python sched scheduler and reboots
| 4,559,771
| 1
| 0
| 1,117
| 0
|
python,scheduled-tasks
|
The answer to all three questions is no.
sched is different from cron. It takes a generic timer or counter function and a delay function and lets you schedule a function call after a particular time (an event as defined by your generic timer function).
It won't run after you close your program, unless you maintain state by writing to a file or database. This is complicated, and using cron would be better.
sched works on events but does not run in the background, so it is not exactly a daemon; you can, however, daemonize it by running the program in the background using OS facilities.
| 0
| 1
| 0
| 0
|
2010-12-30T03:17:00.000
| 2
| 0.099668
| false
| 4,559,758
| 0
| 0
| 0
| 1
|
I have read about python sched (task scheduler), it works like a cron.
but I have a question :
lets say if I schedule a function to run after every 2 hours and in the mean time my system gets shut down, then I again restart the system...
Does the scheduler automatically start and run the function after 2 hours? Or do I have to start that again after shutting down the system?
Does sched work like a daemon?
|
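A minimal sched example illustrates the answer's point: the event queue lives only inside the running process, so nothing survives a shutdown or reboot unless you persist state yourself.

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)
fired = []

def job(name):
    fired.append(name)

# one-shot events; s.run() blocks until the queue is empty,
# and the queue exists only in this process's memory
s.enter(0.05, 1, job, ("first",))
s.enter(0.10, 1, job, ("second",))
s.run()
```

To fire "every 2 hours", the job itself would re-enter a new event before returning, and a cron entry (which the OS restarts for you) remains the sturdier choice across reboots.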
Deliver to a specific version via Inbound Mail service
| 4,653,923
| 0
| 5
| 129
| 0
|
python,google-app-engine,email
|
There is an easier way to do this than writing code that routes between different versions using URLFetch.
If you have a large body of code that is email-oriented and you need a development version, simply use one of your ten applications as the development application (version). This allows you to do things like have test-specific entities in the development application's Datastore, and you can test as much as you want running live on App Engine.
The only constraints are:
because the application has a different name, email sent from the application either needs to come from your Gmail account or you need a configuration that switches the application name
test email sent to the application will go to a slightly different address (not a big issue, I think)
you have to keep an app.yaml with a different application name
you burn another one of your ten possible apps
Most RCS will allow you to have the same project checked out into different directories. Once you are ready for launch
(all development code is committed and testing done), update the 'production' directory (except for app.yaml) and then deploy.
| 0
| 1
| 0
| 0
|
2010-12-31T02:53:00.000
| 2
| 0
| false
| 4,567,769
| 0
| 0
| 1
| 2
|
I have an app that services inbound mail and I have deployed a new development version to Google App Engine. The default is currently set to the previous version.
Is there a way to specify that inbound mail should be delivered to a particular version?
This is well documented using URLs but I can't find any reference to version support in the inbound mail service...
|
Deliver to a specific version via Inbound Mail service
| 4,568,426
| 5
| 5
| 129
| 0
|
python,google-app-engine,email
|
No, this isn't currently supported. You could write some code for your default version that routes mail to other versions via URLFetch, though.
| 0
| 1
| 0
| 0
|
2010-12-31T02:53:00.000
| 2
| 1.2
| true
| 4,567,769
| 0
| 0
| 1
| 2
|
I have an app that services inbound mail and I have deployed a new development version to Google App Engine. The default is currently set to the previous version.
Is there a way to specify that inbound mail should be delivered to a particular version?
This is well documented using URLs but I can't find any reference to version support in the inbound mail service...
|
Creating a BAT file for python script
| 54,316,566
| -1
| 62
| 296,010
| 0
|
python,batch-file
|
start xxx.py
You can use this for some other file types.
| 0
| 1
| 0
| 0
|
2010-12-31T16:51:00.000
| 14
| -0.014285
| false
| 4,571,244
| 1
| 0
| 0
| 2
|
How can I create a simple BAT file that will run my python script located at C:\somescript.py?
|
Creating a BAT file for python script
| 57,697,370
| 0
| 62
| 296,010
| 0
|
python,batch-file
|
I did this and it works:
I have my project on D: and my batch file on the desktop. If your script is on the same drive as the batch file, just ignore the first line and change the D: drive in the second line.
In the second line, change the folder to the one containing your file.
In the third line, change the file name to your script's name.
D:
cd D:\python_proyects\example_folder\
python example_file.py
| 0
| 1
| 0
| 0
|
2010-12-31T16:51:00.000
| 14
| 0
| false
| 4,571,244
| 1
| 0
| 0
| 2
|
How can I create a simple BAT file that will run my python script located at C:\somescript.py?
|
running python script in bash slower than running code in IDLE
| 4,573,124
| 0
| 0
| 830
| 0
|
python
|
Rafe is likely correct - you can test this out by limiting your imports and seeing if that makes a difference in startup time. I.e., if you are doing
from Tkinter import *
then change that to import only the modules you actually need. Or write a quick null program that just sets up and tears down without using anything in the package - that should run pretty close to the same in both.
| 0
| 1
| 0
| 1
|
2011-01-01T03:30:00.000
| 2
| 0
| false
| 4,573,094
| 0
| 0
| 0
| 2
|
I have written a Python script to draw the Sierpinski gasket using Tkinter, and when run from the Python IDLE the program takes about half the time it takes when run from bash. I timed the script using the time module in Python. Any ideas as to why this is happening will be appreciated. Thanks.
|
running python script in bash slower than running code in IDLE
| 4,573,147
| 2
| 0
| 830
| 0
|
python
|
It's because of the way you're passing it. Based on your comment on the other answer, you're using python -c, and in IDLE you're using the Run command (or something similar). I'm not aware of any performance issues with python -c, but using Run in IDLE to run somescript.py is equivalent to python somescript.py.
You really shouldn't run scripts using python -c; it's more for small snippets.
| 0
| 1
| 0
| 1
|
2011-01-01T03:30:00.000
| 2
| 0.197375
| false
| 4,573,094
| 0
| 0
| 0
| 2
|
I have written a Python script to draw the Sierpinski gasket using Tkinter, and when run from the Python IDLE the program takes about half the time it takes when run from bash. I timed the script using the time module in Python. Any ideas as to why this is happening will be appreciated. Thanks.
|
What is good practice for writing web applications that control daemons (and their config files)
| 4,573,569
| 1
| 5
| 332
| 0
|
php,python,ruby-on-rails,system-administration,database-administration
|
I'm not a Unix security guru, but some basic things to think of:
Make sure your web app runs as a specific user, and make sure that user has privileged rights only to those files which it is supposed to modify.
Do not allow arbitrary inputs to be added to the files, have strict forms where each field is validated to contain only things it should contain, like a-z and 0-9 only, etc.
Use HTTPS to access the site.
I'm sure there is more to come from the real gurus.
| 0
| 1
| 0
| 0
|
2011-01-01T07:34:00.000
| 2
| 0.099668
| false
| 4,573,468
| 0
| 0
| 1
| 1
|
Can someone suggest some basic advice on dealing with web applications that interact with configuration files like httpd.conf, bind zone files, etc.
I understand that it's bad practice, in fact very dangerous to allow arbitrary execution of code without fully validating it and so on. But say you are tasked to write a small app that allows one to add vhosts to an apache configuration.
Do you have your code execute with full privileges? Do you write future values into a database and have a cron job (with full privileges) execute a script that pulls the values from the database and writes them into a template config file, etc.?
Some thoughts & contributions on this issue would be appreciated.
tl;dr - how can you securely write a web app to update/create entries in a config file like apache's httpd.conf, etc.
|
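The strict-form-validation advice above might look like this for the hostname field of a vhost form. The pattern here is an illustration; tighten it to whatever your config format actually allows:

```python
import re

# whitelist: lowercase labels of letters/digits/hyphens, dot-separated,
# at least two labels -- anything else is rejected outright
HOSTNAME_RE = re.compile(
    r"^[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?"
    r"(?:\.[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?)+$"
)

def valid_hostname(value):
    """True only if value is safe to write into a vhost config entry."""
    return bool(HOSTNAME_RE.match(value))
```

Rejecting anything outside the whitelist (rather than trying to escape bad input) is what keeps shell metacharacters and directives out of the generated config file.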
most widely used python web app deployment style
| 4,576,197
| 0
| 0
| 504
| 0
|
python,apache,nginx,wsgi,fastcgi
|
100 req/s is not that hard to achieve these days.
Consider the deployment that your framework recommends. Zope for instance, has a decent webserver built in, so mod_proxy_http is good deployment.
Since WSGI came to fruition it has become the preferred mechanism for many frameworks, and their built-in web servers are now only suitable for development.
Regardless of what you deploy with now, it's important to be able to switch or add parts of the stack as necessary - do you want a reverse proxy for static content in there somewhere? You may not need one if you use nginx, as it can serve static content from memcached quite well.
Summary: use wsgi
| 0
| 1
| 0
| 0
|
2011-01-01T20:49:00.000
| 2
| 1.2
| true
| 4,575,709
| 0
| 0
| 1
| 2
|
I wonder which option is more stable (leaving performance aside) and is more widely used (I assume the widely used one is the most stable):
apache -> mod_wsgi
apache -> mod_fcgid
apache -> mod_proxy_ajp
apache -> mod_proxy_http
for a project that will serve REST services with small json formatted input and output messages and web pages, up to 100 req/s. Please comment on apache if you think nginx etc. is more suitable.
Thanks.
|
most widely used python web app deployment style
| 4,575,875
| 2
| 0
| 504
| 0
|
python,apache,nginx,wsgi,fastcgi
|
apache -> mod-wsgi is currently the "recommended" solution. But it also depends on your needs quite a bit.
There is quite a difference between running one heavy application versus one light application or many light applications.
Personally my preferred setup is still nginx -> apache -> mod_wsgi with multiple apache servers for heavy sites.
| 0
| 1
| 0
| 0
|
2011-01-01T20:49:00.000
| 2
| 0.197375
| false
| 4,575,709
| 0
| 0
| 1
| 2
|
I wonder which option is more stable (leaving performance aside) and is more widely used (I assume the widely used one is the most stable):
apache -> mod_wsgi
apache -> mod_fcgid
apache -> mod_proxy_ajp
apache -> mod_proxy_http
for a project that will serve REST services with small json formatted input and output messages and web pages, up to 100 req/s. Please comment on apache if you think nginx etc. is more suitable.
Thanks.
|
AppEngine social application
| 5,873,214
| 1
| 1
| 261
| 0
|
java,python,google-app-engine
|
If you decide to use Python, take a look at Vikuit. It runs on Google App Engine, it's open source (GNU GPLv3), and it may be a good base for your development.
| 0
| 1
| 0
| 0
|
2011-01-02T14:14:00.000
| 2
| 0.099668
| false
| 4,578,729
| 0
| 0
| 1
| 1
|
I started developing my application in App Engine Java; however, I noticed that Facebook has officially discontinued support for the Java API, and the third-party API was last updated a year ago.
Does anybody use Java + social plugins? How has it been going so far? Should I switch to Python? I'd rather not, since I'm not very good with Python and have already written significant amounts of code in Java.
|
Edit text using Python and curses Textbox widget?
| 8,763,488
| 6
| 11
| 16,216
| 0
|
python,ncurses
|
textpad.Textbox(win, insert_mode=True) provides basic insert support. Backspace needs to be added though.
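A minimal sketch of how that might look; the window geometry and the edit_text helper are illustrative, and mapping the two common backspace codes (DEL/^H) onto curses.KEY_BACKSPACE is the usual way to add the missing backspace support:

```python
import curses
import curses.textpad


def backspace_validator(ch):
    # Map the two common backspace codes (DEL = 127, ^H = 8) onto the
    # key constant that Textbox.do_command already knows how to handle.
    if ch in (127, 8):
        return curses.KEY_BACKSPACE
    return ch


def edit_text(stdscr, initial_text=""):
    # Illustrative helper: pre-fill a window with existing text, then
    # let the user edit it in place.  Ctrl-G finishes editing.
    win = curses.newwin(10, 60, 1, 1)
    for i, line in enumerate(initial_text.splitlines()[:10]):
        win.addstr(i, 0, line[:59])
    box = curses.textpad.Textbox(win, insert_mode=True)
    return box.edit(backspace_validator)

# To try it interactively in a real terminal:
#     result = curses.wrapper(edit_text, "existing text to edit")
```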
| 0
| 1
| 0
| 0
|
2011-01-03T01:05:00.000
| 4
| 1
| false
| 4,581,441
| 1
| 0
| 0
| 1
|
Has anybody got a working example of using the curses.textpad.Textbox widget to edit existing text? This is, of course, in a Linux terminal (e.g. xterm).
|
Possible to use python to script for mac?
| 9,659,784
| 2
| 2
| 144
| 0
|
python,macos
|
Since appscript is deprecated and no longer maintained, you might want to look at simply executing AppleScript directly from your python script with a popen() to /usr/bin/osascript.
| 0
| 1
| 0
| 0
|
2011-01-03T07:23:00.000
| 2
| 0.197375
| false
| 4,582,691
| 0
| 0
| 0
| 1
|
Is it possible to use Python in the way that AppleScript is used to automate/customize the Mac OS?
|
problems in Python Sched
| 4,585,382
| 0
| 1
| 361
| 0
|
python
|
If all your schedulers run inside a single Python process, you won't be able to count the individual scheduled timers from outside that process. Since the schedulers are something you wrote, you can have each one periodically update a status file instead.
If each scheduler is a separate Python process, just count the Python processes in the Windows Task Manager.
| 0
| 1
| 0
| 0
|
2011-01-03T14:25:00.000
| 2
| 0
| false
| 4,585,237
| 1
| 0
| 0
| 1
|
I have created a number of schedulers using Python on Windows, which are running in the background.
Can anyone tell me a command to check how many schedulers are running on Windows, and also how I can remove them?
|
Multiple Python Distributions -- Managing Packages
| 17,237,758
| 0
| 1
| 320
| 0
|
python
|
Virtualenv and virtualenvwrapper make managing packages very nice!
| 0
| 1
| 0
| 0
|
2011-01-03T16:20:00.000
| 2
| 0
| false
| 4,586,156
| 1
| 0
| 0
| 2
|
Ubuntu (10.10) came installed with Python2.6, 2.7, and 3. In addition, I have installed the Enthought Python Distribution. Is there any way to manage Python packages within these distributions intelligently?
For compatibility, I'd imagine switching between these distributions occasionally. If I install PyBlah, I'd like it to be available under all of the distributions. Can I do better than installing PyBlah under each distribution?
|
Multiple Python Distributions -- Managing Packages
| 4,586,247
| 4
| 1
| 320
| 0
|
python
|
Well, you can't install a package across 2.x and 3.x distributions; they're not compatible. So the easiest (and recommended) way is to install it for each version.
If you're sure you want to install it for all your versions, you can install it somewhere like ~/lib/python/ and add that directory to your PYTHONPATH.
| 0
| 1
| 0
| 0
|
2011-01-03T16:20:00.000
| 2
| 0.379949
| false
| 4,586,156
| 1
| 0
| 0
| 2
|
Ubuntu (10.10) came installed with Python2.6, 2.7, and 3. In addition, I have installed the Enthought Python Distribution. Is there any way to manage Python packages within these distributions intelligently?
For compatibility, I'd imagine switching between these distributions occasionally. If I install PyBlah, I'd like it to be available under all of the distributions. Can I do better than installing PyBlah under each distribution?
|
File Locking in Python?
| 4,594,300
| 0
| 0
| 2,736
| 0
|
python,file,locking
|
Google up zc.lockfile or portalocker.py. Both libraries can lock a file in a portable manner (windows and posix systems). I usually use zc.lockfile.
| 0
| 1
| 0
| 0
|
2011-01-04T11:47:00.000
| 2
| 0
| false
| 4,593,235
| 1
| 0
| 0
| 1
|
I want to lock a file whenever I perform a read/write operation on it. No other program or function should be able to access the file until I release it. How do you do this in Python?
|
Embed a Python interpreter in a (Windows) C++ application
| 4,596,102
| 2
| 4
| 1,517
| 0
|
c++,python,winapi,api
|
ActivePython (http://www.activestate.com/activepython/downloads) installs itself as an ActiveScript engine. The ProgID is Python.AXScript.2. So you can use it with COM via the standard Windows IActiveScript interface. Read up on it.
Distribution is another matter. Either you require that customers have it, or you could try and extract the juicy bits from the ActiveState's package, or maybe there's an official way to do unattended setup...
| 1
| 1
| 0
| 0
|
2011-01-04T16:38:00.000
| 4
| 0.099668
| false
| 4,596,013
| 1
| 0
| 0
| 1
|
I am building a Windows application written in C++. I'd like to utilize several Python libraries.
I don't need any fancy Python interop here. My method is like this:
Open a thread to run Python interpreter.
Send commands from C++ to the Python interpreter. The C++ side may need to write some intermediate files for the interop.
This method is dirty, but it will work for a lot of interpreter-like environments, e.g. gnuplot, lua.
My question is: what kinds of APIs are there for this task? Maybe I need some Win32 API?
EDIT: I don't need anything Python-specific. I really want the general method, so that my application could also work with gnuplot, etc.
|
Same question to multiple remote users with different login
| 4,601,330
| 0
| 1
| 132
| 0
|
python,django,google-app-engine
|
Yes, this should be possible. Your solution might look something like this:
A user creates a new group.
You generate some random questions and store them in a list for that group.
More users join that group.
You start showing the questions to the users by selecting the first question in that group's list.
Once all users have correctly answered the question, you remove it from the group's list and show the next question.
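A minimal in-memory sketch of that flow (on App Engine the group state would live in the datastore rather than in a Python object; all names here are illustrative):

```python
import random


class QuestionGroup(object):
    """Sketch of the per-group question queue described above.

    In a real App Engine app this state would be stored in the
    datastore (one entity per group), not held in memory.
    """

    def __init__(self, question_pool, size=5, seed=None):
        rng = random.Random(seed)
        # Every member of the group sees this same ordered list.
        self.questions = rng.sample(question_pool, size)
        self.answered = {}  # question -> set of users who got it right

    def current_question(self):
        return self.questions[0] if self.questions else None

    def record_correct(self, user, members):
        q = self.current_question()
        self.answered.setdefault(q, set()).add(user)
        # Advance only once every member has answered correctly.
        if self.answered[q] >= set(members):
            self.questions.pop(0)
            return True
        return False
```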
| 0
| 1
| 0
| 0
|
2011-01-04T16:46:00.000
| 1
| 1.2
| true
| 4,596,093
| 0
| 0
| 1
| 1
|
I am very new to Google App Engine and Python. I am building a web application using Python and Django which is based on questions and multiple answers. Once the users are logged in to the website, they will be provided with random questions from a datastore.
My requirement is this: if certain users want to form a group so that they can all get the same random questions at the same time, is this possible? Without forming a group, each user gets different random questions on their end.
|
Creating a Scheduled Task in Windows Server 2008
| 4,605,975
| 1
| 1
| 2,379
| 0
|
python,windows,scheduled-tasks
|
You could set python.exe as the program, and pass your script to it as an argument.
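Concretely, the Create Basic Task fields would look something like this (the paths below are examples; point them at your own Python installation and script):

```
Program/script:  C:\Python27\python.exe
Add arguments:   C:\Scripts\mytask.py
Start in:        C:\Scripts
```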
| 0
| 1
| 0
| 0
|
2011-01-05T15:36:00.000
| 1
| 1.2
| true
| 4,605,884
| 1
| 0
| 0
| 1
|
I'm used to using Windows Server 2003 and to run my Python scripts as a scheduled task I simply followed the below steps: Opened Scheduled Tasks > Add Scheduled Task > Next > select Python > Next > Name the task, daily > supplied start time > supplied username and password to run the task.
Server 2008 looks a bit different but after opening Task Scheduler, I first chose Create basic task > gave it a name > chose Daily > start time and date > start program > and this is where I get confused.. I'm now asked to browse for Program/script (which i thought would be my actual python script) and I can Add arguments and Start in (both are optional). How do I tell the Task Scheduler that the program is a Python program? Server 2003 gave me a list for that. If I setup the Task the way I just described and run it, my task simply opens in Notepad and never runs.
Thank you for any help.
|
How can I modify a file in a gzipped tar file?
| 4,610,327
| 5
| 2
| 6,217
| 0
|
python,scripting,automation,tar
|
Don't think of a tar file as a database that you can read/write -- it's not. A tar file is a concatenation of files. To modify a file in the middle, you need to rewrite the rest of the file. (for files of a certain size, you might be able to exploit the block padding)
What you want to do is process the tarball file by file, copying files (with modifications) into a new tarball. The Python tarfile module should make this easy to do. You should be able to retain the attributes by copying them from the old TarInfo object to the new one.
| 0
| 1
| 0
| 1
|
2011-01-05T23:15:00.000
| 2
| 1.2
| true
| 4,610,205
| 0
| 0
| 0
| 1
|
I want to write a (preferably python) script to modify the content of one file in a gzipped tar file. The script must run on FreeBSD 6+.
Basically, I need to:
open the tar file
if the tar file has _MY_FILE_ in it:
if _MY_FILE_ has a line matching /RE/ in it:
insert LINE after the matching line
rewrite the content into the tar file, preserving all metadata except the file size
I'll be repeating this for a lot of files.
Python's tarfile module doesn't seem to be able to open tar files for read/write access when they're compressed, which makes a certain amount of sense. However, I can't find a way to copy the tar file with modifications, either.
Is there an easy way to do this?
|
Script from stdin use case
| 4,613,922
| 3
| 1
| 104
| 0
|
python,scripting,programming-languages
|
If you want to execute code generated by some tool, it can be useful to be able to pipe the generated code into your interpreter/compiler.
Simply support it ;) Checking if stdin is a tty or not is not hard anyway.
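The dispatch logic boils down to something like this (the mode names are made up; Python's own behaviour is the model):

```python
def choose_mode(argv, stdin_is_tty):
    """Decide where an interpreter should get its program from.

    An explicit file argument wins; an interactive terminal gets a
    REPL; a pipe or redirect means the program text itself arrives
    on stdin, e.g.  `generate_code | myinterp`.
    """
    if len(argv) > 1:
        return "run-file"
    if stdin_is_tty:
        return "repl"
    return "run-stdin"

# In the "run-stdin" case the interpreter would simply do:
#     source = sys.stdin.read()
#     run(compile_source(source))
```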
| 0
| 1
| 0
| 1
|
2011-01-06T10:21:00.000
| 1
| 1.2
| true
| 4,613,888
| 1
| 0
| 0
| 1
|
Taking e.g. Python as a good example of a modern scripting language, it has the option of reading a program (as opposed to input data for the program) from stdin. The REPL is the obvious use case where stdin is a terminal, but it's also designed to handle the scenario where it's not a terminal.
What use cases are there for reading the program itself from noninteractive stdin?
(The reason I ask is that I'm working on a scripting language myself, and wondering whether this is an important feature to provide, and if so, what the specifics need to look like.)
|
Has anyone gotten distribute to work correctly with github, specifically private repositories?
| 4,615,208
| 3
| 2
| 816
| 0
|
python,github,distribution,distutils,easy-install
|
If you know that "pip" works, why don't you just use "pip"? "pip" can not only install from a package index, but also from a local source directory. Just use pip install . instead of python setup.py install.
Concerning your impression, it is indeed wrong. "pip" and "distribute" are altogether different projects with different aims. "pip" is a frontend to the distutils/setuptools API, trying to replace the rather weird "easy_install" frontend, whereas "distribute" is an alternative implementation of the backend "setuptools" API (which only includes an "easy_install" implementation for the sake of compatibility). "pip" isn't tied to "distribute" and also works with the old "setuptools" implementation.
I'd therefore recommend to always use "pip" for all package installations, and to never use "easy_install" or "python setup.py install". "pip" just works, whereas the other two are somewhat strange.
| 0
| 1
| 0
| 1
|
2011-01-06T11:46:00.000
| 1
| 1.2
| true
| 4,614,552
| 0
| 0
| 1
| 1
|
I built a small micro framework for our web service / web app and have it hosted it in a private repository on github.
I've added the private github repo in the dependency_links and have verified that it exists in dependency_links.txt
When I execute python setup.py install, I get unknown url type: git+ssh, so I looked deeper into the code and realized that distribute only has support for svn+ url types. I was under the (apparently wrong) impression that distribute used pip under the hood, but looks like it still uses easy_install.
Has anyone found a solution to using distutils / distribute to install private github repos as dependencies?
|
what is the difference between easy_install and apt-get
| 4,615,378
| 8
| 2
| 1,284
| 0
|
python,linux,ubuntu,packages
|
You are confusing two completely separate things.
APT, of which apt-get is one part, is the Ubuntu system-wide package manager. It has packages for absolutely everything - applications, libraries, system utils, whatever. They may not be the latest versions, as packages are usually only updated for each separate Ubuntu release (except for security and bug fixes).
easy_install is a Python-only system for installing Python libraries. It doesn't do anything else. The libraries are installed in the system Python's site-packages directory. There are some downsides to easy_install, one of which is that it's hard to upgrade and uninstall libraries. Use pip instead.
| 0
| 1
| 0
| 1
|
2011-01-06T13:20:00.000
| 1
| 1.2
| true
| 4,615,299
| 0
| 0
| 0
| 1
|
I have just started using Ubuntu as my first Linux distribution, and have a couple of questions.
What is the difference between easy_install and apt-get?
How do I update my packages with packages installed in both these ways?
Are they under PYTHONPATH?
|
Video website on google application engine
| 4,625,277
| 3
| 4
| 2,593
| 0
|
python,google-app-engine,google-cloud-datastore
|
As Nick pointed out, it can be done and it won't be a straight forward implementation.
I would suggest using the Amazon EC2 service for video conversion and Amazon S3 for storing of videos while using App Engine for creating a fast reliable and unbelievably scalable front-end.
| 0
| 1
| 0
| 0
|
2011-01-06T16:06:00.000
| 3
| 0.197375
| false
| 4,616,934
| 0
| 0
| 1
| 1
|
I am going to work on a video website where users/admins will be able to upload videos and play them using an open-source JavaScript player. However, I want to know if it is a good idea to start this kind of project on Google App Engine, considering its limitations on serving and storing the data.
What are the issues which I may have to encounter on Google application engine and if there are any possible solutions for those issues.
Currently, I have doubts on converting the videos while uploading, creating images from the videos uploaded (something like ffmpeg for google app engine) and whether google app engine will allow streaming for large videos considering its request and response constraints.
Please suggest.
Thanks in advance.
|
Migrating to Linux from Windows
| 4,619,230
| 5
| 0
| 484
| 0
|
python,windows,linux
|
chmod is your friend.
However, I question your design. Why do you need write privileges at such a high level of the file system? Every user has a home directory, and there is always a directory for configuration on both Windows and Linux.
What you are doing is bad practice.
| 0
| 1
| 0
| 0
|
2011-01-06T19:51:00.000
| 6
| 0.16514
| false
| 4,619,168
| 0
| 0
| 0
| 5
|
I have a Python GUI that accesses files on Windows at C:\data and C:\MyDIR, both outside my Documents folder.
On a Linux system, I created /data and /MyDIR.
My GUI can't access them. I foresee always using C:\data and C:\MyDIR on both systems.
How do I fix the code or the Linux permissions to get access to both directories and their subdirectories?
|
Migrating to Linux from Windows
| 4,619,239
| 1
| 0
| 484
| 0
|
python,windows,linux
|
i created \data and \MyDIR
First, no you didn't. Paths use / in linux, not \.
Second, do NOT create directories in the root directory unless you know exactly what you're doing. True, you're not going to hurt anything by doing this, but it's extremely bad practice and should be avoided except for specific cases.
Linux is a multi-user OS. If you have configuration files that the user can write to, they should be in the user's home directory somewhere. If you have config files that are read only, they should be installed somewhere such as /etc/.
| 0
| 1
| 0
| 0
|
2011-01-06T19:51:00.000
| 6
| 0.033321
| false
| 4,619,168
| 0
| 0
| 0
| 5
|
I have a Python GUI that accesses files on Windows at C:\data and C:\MyDIR, both outside my Documents folder.
On a Linux system, I created /data and /MyDIR.
My GUI can't access them. I foresee always using C:\data and C:\MyDIR on both systems.
How do I fix the code or the Linux permissions to get access to both directories and their subdirectories?
|
Migrating to Linux from Windows
| 4,619,237
| 1
| 0
| 484
| 0
|
python,windows,linux
|
First of all, maybe you meant to say /data and /MyDIR.
Second, they're direct children of /, the root filesystem, which is reserved to the superuser (root) and people who know what they're doing.
Unfortunately the Windows world doesn't enforce nor encourage good practices, so you were able to create those two directories in your C: root (pretty much the analogue of the / directory). Long story short, it's likely you had to use root (probably masked as sudo) to make those two directories inside /, which means root is their owner, and it (and only it) has the permission to write inside them.
You'd better create similar dirs inside your home directory (cd ~ and you're there) using your regular user (because you have a regular user, don't you?) and then use them.
On the other hand you could use something like fuse and ntfs-3g to access those two dirs in the original ntfs filesystem corresponding to C:
| 0
| 1
| 0
| 0
|
2011-01-06T19:51:00.000
| 6
| 0.033321
| false
| 4,619,168
| 0
| 0
| 0
| 5
|
I have a Python GUI that accesses files on Windows at C:\data and C:\MyDIR, both outside my Documents folder.
On a Linux system, I created /data and /MyDIR.
My GUI can't access them. I foresee always using C:\data and C:\MyDIR on both systems.
How do I fix the code or the Linux permissions to get access to both directories and their subdirectories?
|
Migrating to Linux from Windows
| 4,619,321
| 0
| 0
| 484
| 0
|
python,windows,linux
|
Where did you create those directories on your Linux system? Under $HOME? You can determine your path separator using the string 'sep' from the os module, that is, os.sep, then act according to its value. Something like this comes to mind:
import os
dirs = [os.sep + "data", os.sep + "MyDIR"]
But it all depends on what you want to do. If you can, please explain your needs further.
| 0
| 1
| 0
| 0
|
2011-01-06T19:51:00.000
| 6
| 0
| false
| 4,619,168
| 0
| 0
| 0
| 5
|
I have a Python GUI that accesses files on Windows at C:\data and C:\MyDIR, both outside my Documents folder.
On a Linux system, I created /data and /MyDIR.
My GUI can't access them. I foresee always using C:\data and C:\MyDIR on both systems.
How do I fix the code or the Linux permissions to get access to both directories and their subdirectories?
|
Migrating to Linux from Windows
| 4,619,238
| 1
| 0
| 484
| 0
|
python,windows,linux
|
The Linux filesystem uses / as its root. You can't use \data and \MyDir because \ does not mean anything there. Moreover, the default owner of / is the user named root, and you commonly work as a user other than root on the machine.
So by default, you don't have permission to write to or create anything in /.
Choose another directory in your home directory. For example :
~/data/ & ~/MyDir/
~/ expands to /home/user428862/, where user428862 is your username on the machine.
| 0
| 1
| 0
| 0
|
2011-01-06T19:51:00.000
| 6
| 0.033321
| false
| 4,619,168
| 0
| 0
| 0
| 5
|
I have a Python GUI that accesses files on Windows at C:\data and C:\MyDIR, both outside my Documents folder.
On a Linux system, I created /data and /MyDIR.
My GUI can't access them. I foresee always using C:\data and C:\MyDIR on both systems.
How do I fix the code or the Linux permissions to get access to both directories and their subdirectories?
|
Which editor/IDE is the best for editing Python scripts for Google App Engine?
| 4,622,125
| 1
| 4
| 2,971
| 0
|
python,google-app-engine,ide,editor
|
Eclipse with the official Google App Engine plugin and the PyDev plugin.
| 0
| 1
| 0
| 0
|
2011-01-07T02:52:00.000
| 5
| 0.039979
| false
| 4,622,111
| 0
| 0
| 0
| 1
|
Which editor or IDE do you use when you edit Python scripts? Especially when you are working on Google App Engine.
Do you know of any editor/IDE which supports syntax highlighting and auto-completion for Google App Engine modules?
Thanks in advance for any suggestion.
|
Chaining multiple mapreduce tasks in Hadoop streaming
| 4,980,434
| 1
| 7
| 7,633
| 0
|
python,hadoop,mapreduce,hadoop-plugins
|
If you are already writing your mapper and reducer in Python, I would consider using Dumbo, where such an operation is straightforward. The sequence of your MapReduce jobs, your mapper, reducer, etc. are all in one Python script that can be run from the command line.
| 0
| 1
| 0
| 0
|
2011-01-07T14:18:00.000
| 4
| 0.049958
| false
| 4,626,356
| 0
| 0
| 0
| 1
|
I am in a scenario where I have two MapReduce jobs. I am more comfortable with Python and am planning to use it for writing the MapReduce scripts, with Hadoop streaming for the same. Is there a convenient way to chain both jobs in the following form when Hadoop streaming is used?
Map1 -> Reduce1 -> Map2 -> Reduce2
I've heard of a lot of methods to accomplish this in Java, but I need something for Hadoop streaming.
|
tracing Linux socket calls?
| 4,630,096
| 0
| 1
| 2,163
| 0
|
python,linux,sockets,network-traffic
|
Why would the INADDR_ANY IP address fail? It shouldn't. From my point of view, something else is missing in your picture. Wrap the code block in a try/except and inspect the (errno, string) pair to get a more descriptive error message.
| 0
| 1
| 0
| 0
|
2011-01-07T14:53:00.000
| 3
| 0
| false
| 4,626,726
| 0
| 0
| 0
| 2
|
I've got a Python library I am trying to debug (pyzeroconf). The following code returns '34' as if the data was sent down the socket but I can't see those packets on 2 different wireshark equipped PCs.
bytes_sent = self.socket.sendto(out.packet(), 0, (addr, port))
I am at the point where I need to understand what's going on down the call stack. Is there a way to trace what's happening?
Resolution: the problem was related to the "bind address" the library was figuring out as default. A value of "0.0.0.0" isn't allowed and fails (at least on Linux) silently.
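Independent of strace, a quick way to check whether a sendto really delivers anything is to loop a datagram through localhost with an explicitly bound address; binding to a concrete address such as 127.0.0.1 (instead of the "0.0.0.0" default that bit the library here) makes silent failures visible. The helper below is illustrative:

```python
import socket


def udp_loopback_check(payload=b"ping"):
    """Send a UDP datagram to ourselves and confirm it arrives."""
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))        # OS picks a free port
    recv.settimeout(2.0)
    addr = recv.getsockname()

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = send.sendto(payload, addr)  # same call the library makes
    data, _ = recv.recvfrom(1024)
    send.close()
    recv.close()
    return sent, data
```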
|
tracing Linux socket calls?
| 4,626,823
| 1
| 1
| 2,163
| 0
|
python,linux,sockets,network-traffic
|
I'm quite sure this is not what you expect, but it can help:
strace -f -F python myscript.py
strace dumps the system calls made by an arbitrary program.
| 0
| 1
| 0
| 0
|
2011-01-07T14:53:00.000
| 3
| 0.066568
| false
| 4,626,726
| 0
| 0
| 0
| 2
|
I've got a Python library I am trying to debug (pyzeroconf). The following code returns '34' as if the data was sent down the socket but I can't see those packets on 2 different wireshark equipped PCs.
bytes_sent = self.socket.sendto(out.packet(), 0, (addr, port))
I am at the point where I need to understand what's going on down the call stack. Is there a way to trace what's happening?
Resolution: the problem was related to the "bind address" the library was figuring out as default. A value of "0.0.0.0" isn't allowed and fails (at least on Linux) silently.
|
Async spawing of processes: design question - Celery or Twisted
| 4,635,379
| 5
| 6
| 1,689
| 0
|
python,django,asynchronous,twisted
|
On my system, RabbitMQ running with pretty reasonable defaults is using about 2MB of RAM. Celeryd uses a bit more, but not an excessive amount.
In my opinion, the overhead of RabbitMQ and celery are pretty much negligible compared to the rest of the stack. If you're processing jobs that are going to take several minutes to complete, those jobs are what will overwhelm your 512MB server as soon as your traffic increases, not RabbitMQ. Starting off with RabbitMQ and Celery will at least set you up nicely to scale those jobs out horizontally though, so you're definitely on the right track there.
Sure, you could write your own job control in Twisted, but I don't see it gaining you much. Twisted has pretty good performance, but I wouldn't expect it to outperform RabbitMQ by enough to justify the time and potential for introducing bugs and architectural limitations. Mostly, it just seems like the wrong spot to worry about optimizing. Take the time that you would've spent re-writing RabbitMQ and work on reducing those three minute jobs by 20% or something. Or just spend an extra $20/month and double your capacity.
| 0
| 1
| 0
| 0
|
2011-01-08T17:08:00.000
| 3
| 1.2
| true
| 4,635,033
| 0
| 0
| 1
| 2
|
All: I'm seeking input/guidance/and design ideas. My goal is to find a lean but reliable way to take XML payload from an HTTP POST (no problems with this part), parse it, and spawn a relatively long-lived process asynchronously.
The spawned process is CPU intensive and will last for roughly three minutes. I don't expect much load at first, but there's a definite possibility that I will need to scale this out horizontally across servers as traffic hopefully increases.
I really like the Celery/Django stack for this use: it's very intuitive and has all of the built-in framework to accomplish exactly what I need. I started down that path with zeal, but I soon found my little 512MB RAM cloud server had only 100MB of free memory and I started sensing that I was headed for trouble once I went live with all of my processes running full-tilt. Also, it's got several moving parts: RabbitMQ, MySQL, celeryd, lighttpd and the Django container.
I can absolutely increase the size of my server, but I'm hoping to keep my costs down to a minimum at this early phase of this project.
As an alternative, I'm considering using twisted for the process management, as well as perspective broker for the remote systems, should they be needed. But for me at least, while twisted is brilliant, I feel like I'm signing up for a lot going down that path: writing protocols, callback management, keeping track of job states, etc. The benefits here are pretty obvious - excellent performance, far fewer moving parts, and a smaller memory footprint (note: I need to verify the memory part). I'm heavily skewed toward Python for this - it's much more enjoyable for me than the alternatives :)
I'd greatly appreciate any perspective on this. I'm concerned about starting things off on the wrong track, and redoing this later with production traffic will be painful.
-Matt
|
Async spawing of processes: design question - Celery or Twisted
| 4,635,409
| 0
| 6
| 1,689
| 0
|
python,django,asynchronous,twisted
|
I'll answer this question as though I was the one doing the project and hopefully that might give you some insight.
I'm working on a project that will require the use of a queue, a web server for the public facing web application and several job clients.
The idea is to have the web server continuously running (no need for a very powerful machine here). However, the work is handled by these job clients, which are more powerful machines that can be started and stopped at will. The job queue will also reside on the same machine as the web application. When a job gets inserted into the queue, a process that starts the job clients will kick into action and spin up the first client. Using a load balancer that can start new servers as the load increases, I don't have to worry about managing the number of servers running to process jobs in the queue. If there are no jobs in the queue after a while, all job clients can be terminated.
I will suggest using a setup similar to this. You don't want job execution to affect the performance of your web application.
| 0
| 1
| 0
| 0
|
2011-01-08T17:08:00.000
| 3
| 0
| false
| 4,635,033
| 0
| 0
| 1
| 2
|
All: I'm seeking input/guidance/and design ideas. My goal is to find a lean but reliable way to take XML payload from an HTTP POST (no problems with this part), parse it, and spawn a relatively long-lived process asynchronously.
The spawned process is CPU intensive and will last for roughly three minutes. I don't expect much load at first, but there's a definite possibility that I will need to scale this out horizontally across servers as traffic hopefully increases.
I really like the Celery/Django stack for this use: it's very intuitive and has all of the built-in framework to accomplish exactly what I need. I started down that path with zeal, but I soon found my little 512MB RAM cloud server had only 100MB of free memory and I started sensing that I was headed for trouble once I went live with all of my processes running full-tilt. Also, it's got several moving parts: RabbitMQ, MySQL, celeryd, lighttpd and the Django container.
I can absolutely increase the size of my server, but I'm hoping to keep my costs down to a minimum at this early phase of this project.
As an alternative, I'm considering using twisted for the process management, as well as perspective broker for the remote systems, should they be needed. But for me at least, while twisted is brilliant, I feel like I'm signing up for a lot going down that path: writing protocols, callback management, keeping track of job states, etc. The benefits here are pretty obvious - excellent performance, far fewer moving parts, and a smaller memory footprint (note: I need to verify the memory part). I'm heavily skewed toward Python for this - it's much more enjoyable for me than the alternatives :)
I'd greatly appreciate any perspective on this. I'm concerned about starting things off on the wrong track, and redoing this later with production traffic will be painful.
-Matt
|
Releasing a wxPython App: Give out scripts or compile in Exe, etc?
| 4,643,279
| 1
| 3
| 1,685
| 0
|
python,wxpython
|
I suggest both, script for all platforms and frozen binary for lazy windows users.
To answer your latest question, you don't compile python. Python is an interpreted language, it gets compiled on the fly when run. A python frozen binary is actually the python interpreter with your script hardcoded in it. And frozen binaries are windows-only, AFAIK. Besides, Unix and MacOS (usually) come with python pre-installed.
| 1
| 1
| 0
| 0
|
2011-01-10T02:33:00.000
| 3
| 0.066568
| false
| 4,643,247
| 0
| 0
| 0
| 1
|
I have a wxPython application that is almost done, and I would like to place it in my portfolio. I have to consider that when someone attempts to run my app they may not have Python or wxPython, so if they just click the main script/Python file it's not going to run, right?
How should I distribute my app (how do you distribute your apps?) so that it can be run, and also so that it can run on the 3 major OSes (Unix, Windows, Mac OS X)?
I know of py2exe for releasing under Windows, but what can I use for Unix & Mac OS X to compile the program? What's the easiest way?
|
Is there a python module to parse Linux's sysfs?
| 4,649,590
| 1
| 9
| 6,009
| 0
|
python,linux,internals
|
Not really sure why you need something specific; they are all text files for the most part, so you can just read them directly.
There aren't any Python modules that do that, as far as I know.
| 0
| 1
| 0
| 0
|
2011-01-10T16:09:00.000
| 3
| 0.066568
| false
| 4,648,792
| 0
| 0
| 0
| 1
|
Hey all, Linux has a lot of great features in procfs and sysfs, and tools like vmstat extend that quite a bit, but I have a need to collect data from a variety of these systems and was hoping to leverage a unified Python utility instead of hacking together a bunch of disparate scripts.
In order to do that I first need to identify whether or not Python has the bits and pieces I need to adequately parse/process the different data collection points. So, the essence of my question:
Is there a python module that handles/parses the sysfs objects already?
I've looked for such a beast via Google, usenet, and various forums, but I haven't yet found anything intelligent or functional. So, before I carve one out, I figured I'd check here first.
|
Debugging Python in TextWrangler shows Terminal not Debugger
| 4,910,172
| 1
| 2
| 2,348
| 0
|
python,textwrangler
|
The terminal that TextWrangler opens up when you click on #! | Run in Debugger is the debug environment provided by TextWrangler. It's a command-line utility similar to gdb (from the GNU toolchain), if you've ever used that before. When the terminal opens, if you see this prompt: (Pdb) then that means you're in the debugger. Typing help at the prompt will get you the available list of commands.
| 0
| 1
| 0
| 0
|
2011-01-10T23:59:00.000
| 1
| 1.2
| true
| 4,652,893
| 1
| 0
| 0
| 1
|
When I am debugging python code in TextWrangler using the #! | Run in Debugger option the code is run in a terminal not in the python debugger. How do I configure TextWrangler to use the python debugger?
BTW - Using TextWrangler v3.5 (2880) running on a Mac, python file has .py extension and is seen by TextWrangler as a python file; syntax highlighting is correct.
Thanks,
Jamie
|
Which of two methods of using python within erlang should I use?
| 4,654,839
| 1
| 2
| 196
| 0
|
python,erlang
|
I haven't worked with Erlang<->Python before, but erlport.org seems promising. I would try that first before getting into messiness with REST and whatnot. I.e., I didn't provide an answer but a recommendation :)
| 0
| 1
| 0
| 1
|
2011-01-11T01:34:00.000
| 1
| 0.197375
| false
| 4,653,361
| 0
| 0
| 0
| 1
|
I have a project where I use erlang to aggregate RSS, and I use python to process the RSS feeds.
Method 1:
Use an erlang port, using erlport.org, to call python.
I'm not sure how to design the python code to be asynchronous using erlport.
Method 2:
Use erlang to call a RESTful interface with Tornado that does the processing (asynchronous downloading of URLs, asynchronous processing)
|
Login hook on Google Appengine
| 4,663,607
| 0
| 7
| 345
| 0
|
python,events,google-app-engine,login,hook
|
I'm using Python on GAE (so it may be different for Java) but have seen no documentation about such a hook for a user logging in. If you used one of the session-management frameworks you'd probably get some indication of it, but otherwise I do this kind of housekeeping on my opening page itself, which requires login. (What do you want to do about an already logged-in user returning to your site a few days later? That is, do you really want to record logins, or the start time of a visit/session?)
If I wanted to do this with multiple landing pages, and without using a session framework,
I'd use memcache to do a quick check on every page request and then only write to the datastore when a new visit starts.
| 0
| 1
| 0
| 0
|
2011-01-11T11:06:00.000
| 2
| 0
| false
| 4,656,923
| 0
| 0
| 1
| 1
|
Every time a user logs in to the application, I want to perform a certain task, say, record the time of login. So I wanted to know if a hook is fired on login by default? If yes, how can I make my module respond to it.
Edit - Assume there are multiple entry points in the application to login.
|
A Read-Only Relational Database on Google App Engine?
| 4,663,353
| 2
| 2
| 803
| 1
|
python,google-app-engine,sqlite,relational-database,non-relational-database
|
I don't think you're likely to find anything like that...surely not over blobstore. Because if all your data is stored in a single blob, you'd have to read the entire database into memory for any operation, and you said you can't do that.
Using the datastore as your backend is more plausible, but not much. The big issue with providing a SQLite driver there would be implementing transaction semantics, and since that's the key thing GAE takes away from you for the sake of high availability, it's hard to imagine somebody going to much trouble to write such a thing.
| 0
| 1
| 0
| 0
|
2011-01-11T21:55:00.000
| 3
| 1.2
| true
| 4,663,071
| 0
| 0
| 1
| 2
|
I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I quickly hit the quota imposed by google. A final option is to use django-nonrel + djangoappengine, but I'm afraid that package is still in its infancy.
Ideally, I'd like to create a read-only sqlite database that uses a blobstore as its data source. Is this possible?
|
A Read-Only Relational Database on Google App Engine?
| 4,663,631
| 2
| 2
| 803
| 1
|
python,google-app-engine,sqlite,relational-database,non-relational-database
|
django-nonrel does not magically provide an SQL database - so it's not really a solution to your problem.
Accessing a blobstore blob like a file is possible, but the SQLite module requires a native C extension, which is not enabled on App Engine.
| 0
| 1
| 0
| 0
|
2011-01-11T21:55:00.000
| 3
| 0.132549
| false
| 4,663,071
| 0
| 0
| 1
| 2
|
I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I quickly hit the quota imposed by google. A final option is to use django-nonrel + djangoappengine, but I'm afraid that package is still in its infancy.
Ideally, I'd like to create a read-only sqlite database that uses a blobstore as its data source. Is this possible?
|
Redirect stdout to a file in Python?
| 68,410,134
| 0
| 395
| 655,144
| 0
|
python,stdout
|
I know this question is answered (using python abc.py > output.log 2>&1), but I still have to say:
When writing your program, don't write to stdout. Always use logging to output whatever you want. That will give you a lot of freedom later when you want to redirect, filter, or rotate the output files.
| 0
| 1
| 0
| 1
|
2011-01-13T00:51:00.000
| 13
| 0
| false
| 4,675,728
| 0
| 0
| 0
| 1
|
How do I redirect stdout to an arbitrary file in Python?
When a long-running Python script (e.g., a web application) is started from within an ssh session and backgrounded, and the ssh session is closed, the application will raise IOError and fail the moment it tries to write to stdout. I needed to find a way to make the application and modules output to a file rather than stdout to prevent failure due to IOError. Currently, I employ nohup to redirect output to a file, and that gets the job done, but I was wondering if there was a way to do it without using nohup, out of curiosity.
I have already tried sys.stdout = open('somefile', 'w'), but this does not seem to prevent some external modules from still outputting to terminal (or maybe the sys.stdout = ... line did not fire at all). I know it should work from simpler scripts I've tested on, but I also didn't have time yet to test on a web application yet.
|
Modify system configuration files and use system commands through web interface
| 4,710,807
| 0
| 3
| 474
| 0
|
python,security,apache,root,privileges
|
You can easily create web applications in Python using WSGI-compliant web frameworks such as CherryPy2 and templating engines such as Genshi. You can use the subprocess module to manage external commands...
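A hedged sketch of driving an external command from Python with subprocess (the command shown is harmless and illustrative; in the scenario above the real call would be a whitelisted admin command run through sudo):

```python
import subprocess

def run_command(argv):
    """Run an external command, returning (exit_status, stdout_text).
    Passing an argv list (not a shell string) avoids shell-injection
    issues when arguments come from a web form."""
    proc = subprocess.Popen(
        argv, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
    )
    out, _ = proc.communicate()
    return proc.returncode, out.decode()

# Illustrative only: a real deployment would whitelist the command
# and run it via sudo with a locked-down sudoers entry.
status, output = run_command(["echo", "useradd would run here"])
```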
| 0
| 1
| 0
| 0
|
2011-01-13T13:44:00.000
| 2
| 1.2
| true
| 4,680,693
| 0
| 0
| 1
| 2
|
I received a project recently and I am wondering how to do something in a correct and secure manner.
The situation is the following:
There are classes to manage Linux users, MySQL users and databases, and Apache virtual hosts. They're used to automate the addition of users in a small shared-hosting environment. These classes are then used in command-line scripts to offer a nice interface for the system administrator.
I am now asked to build a simple web interface to offer a GUI to the administrator and then offer some features directly to the users (change their unix password and other daily procedures).
I don't know how to implement the web application. It will run in Apache (with the apache user) but the classes need to access files and commands that are only usable by the root user to do the necessary changes (e.g. useradd and virtual host configuration files). When using the command-line scripts, it is not a problem as they are run under the correct user. Giving permissions to the apache user would probably be dangerous.
What would be the best technique to allow this through the web application ? I would like to use the classes directly if possible (it would be handier than calling the command line scripts like external processes and parsing output) but I can't see how to do this in a secure manner.
I saw existing products doing similar things (webmin, eBox, ...) but I don't know how it works.
PS: The classes I received are simple but really badly programmed and barely commented. They are actually in PHP but I'm planning to port them to python. Then I'd like to use the Django framework to build the web admin interface.
Thanks and sorry if the question is not clear enough.
EDIT: I read a little bit about webmin and saw that it uses its own mini web server (called miniserv.pl). It seems like a good solution. The user running this server should then have permissions to modify the files and use the commands. How could I do something similar with Django? Use the development server? Would it be better to use something like CherryPy?
|
Modify system configuration files and use system commands through web interface
| 4,684,335
| 0
| 3
| 474
| 0
|
python,security,apache,root,privileges
|
You can use sudo to give the apache user root permission for only the commands/scripts you need for your web app.
| 0
| 1
| 0
| 0
|
2011-01-13T13:44:00.000
| 2
| 0
| false
| 4,680,693
| 0
| 0
| 1
| 2
|
I received a project recently and I am wondering how to do something in a correct and secure manner.
The situation is the following:
There are classes to manage Linux users, MySQL users and databases, and Apache virtual hosts. They're used to automate the addition of users in a small shared-hosting environment. These classes are then used in command-line scripts to offer a nice interface for the system administrator.
I am now asked to build a simple web interface to offer a GUI to the administrator and then offer some features directly to the users (change their unix password and other daily procedures).
I don't know how to implement the web application. It will run in Apache (with the apache user) but the classes need to access files and commands that are only usable by the root user to do the necessary changes (e.g. useradd and virtual host configuration files). When using the command-line scripts, it is not a problem as they are run under the correct user. Giving permissions to the apache user would probably be dangerous.
What would be the best technique to allow this through the web application ? I would like to use the classes directly if possible (it would be handier than calling the command line scripts like external processes and parsing output) but I can't see how to do this in a secure manner.
I saw existing products doing similar things (webmin, eBox, ...) but I don't know how it works.
PS: The classes I received are simple but really badly programmed and barely commented. They are actually in PHP but I'm planning to port them to python. Then I'd like to use the Django framework to build the web admin interface.
Thanks and sorry if the question is not clear enough.
EDIT: I read a little bit about webmin and saw that it uses its own mini web server (called miniserv.pl). It seems like a good solution. The user running this server should then have permissions to modify the files and use the commands. How could I do something similar with Django? Use the development server? Would it be better to use something like CherryPy?
|
Call a program written in python in a program written in Ocaml
| 4,684,135
| 2
| 3
| 849
| 0
|
python,ocaml
|
You can execute commands using Sys.command, so you can just do Sys.command "python foo.py", assuming python is in your path and foo.py is in the current directory.
| 0
| 1
| 0
| 0
|
2011-01-13T18:17:00.000
| 6
| 0.066568
| false
| 4,683,639
| 1
| 0
| 0
| 2
|
I wanted to ask whether you can call a program written in Python from a program written in OCaml, and if the answer is yes, how do I do it?
|
Call a program written in python in a program written in Ocaml
| 4,683,673
| -1
| 3
| 849
| 0
|
python,ocaml
|
It depends on your exact requirements, but you can use Python's os.system() to execute a program the same way you would call it from the command line. That should be a good starting point.
| 0
| 1
| 0
| 0
|
2011-01-13T18:17:00.000
| 6
| -0.033321
| false
| 4,683,639
| 1
| 0
| 0
| 2
|
I wanted to ask whether you can call a program written in Python from a program written in OCaml, and if the answer is yes, how do I do it?
|
Establish SSH Connection Between Two Isolated Machines Using a 3rd System
| 4,686,141
| 1
| 1
| 660
| 0
|
python,networking,ssh,portforwarding
|
I'd use ssh to create a remote tunnel (-R) from the server to the local system. If you're insistent on doing this with Python then there's the subprocess module.
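A sketch of launching that reverse tunnel from Python with subprocess (the hostname, port numbers, and user name are illustrative assumptions):

```python
import subprocess

def reverse_tunnel_argv(server, remote_port, local_port, user="tunnel"):
    """Build the argv for an ssh remote tunnel (-R): connections to
    remote_port on the server get forwarded back to local_port here."""
    return [
        "ssh", "-N",  # -N: no remote command, just hold the tunnel open
        "-R", "%d:localhost:%d" % (remote_port, local_port),
        "%s@%s" % (user, server),
    ]

argv = reverse_tunnel_argv("server1.example.com", 2222, 22)
# subprocess.Popen(argv)  # uncomment to actually open the tunnel
```

With the tunnel up, Computer 2 would ssh to port 2222 on Server 1 and land on Computer 1's SSH server.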
| 0
| 1
| 1
| 0
|
2011-01-13T22:33:00.000
| 2
| 0.099668
| false
| 4,686,104
| 0
| 0
| 0
| 1
|
I'd like to do the following with Python:
Computer 1 starts SSH server (probably using twisted or paramiko)
Computer 1 connects to Server 1 (idle connection)
Computer 2 connects to Server 1
Server 1 forwards Computer 2's connection to Computer 1 (connection no longer idle)
Computer 1 forwards Server 1's connection to listening SSH port (on computer 1)
Result being Computer 2 now has a SSH session with Computer 1, almost as if Computer 2 had started a normal SSH session (but with Server 1's IP instead of Computer 1's)
I need this because I can't port forward on Computer 1's network (the router doesn't support it).
|
How to analyze memory usage from a core dump?
| 46,514,677
| 1
| 16
| 11,789
| 0
|
python,linux,memory-leaks,coredump
|
Try running the Linux perf tool on the Python process with the call graph enabled.
If it's a multi-threaded process, give all associated LWPs as arguments.
| 0
| 1
| 0
| 0
|
2011-01-14T12:05:00.000
| 2
| 0.099668
| false
| 4,690,851
| 0
| 0
| 0
| 1
|
I have a core dump under Linux. The process went on a memory-allocation rampage and I need to find at least which library this happens in.
What tool do you suggest to get a broad overview of where the memory is going? I know the problem is hard, and perhaps not fully solvable. Any tool that could at least give some clues would help.
[it's a python process, the suspicion is that the memory allocations are caused by one of the custom modules written in C]
|
Python Twisted - Prospective Broker and Server-Side Deffereds
| 4,695,170
| 3
| 2
| 323
| 0
|
python,pygtk,twisted,amqp,deferred
|
On the server side, how do I defer the response inside one of the servers pb methods?
Easy. Return the Deferred from the remote_ method. Done.
| 0
| 1
| 0
| 0
|
2011-01-14T18:45:00.000
| 1
| 1.2
| true
| 4,694,706
| 0
| 0
| 0
| 1
|
Background:
I have a gtk client that uses twisted and perspective broker to perform remote object execution and server/client communication. This works great for me and was a breeze to start working with.
I have amqp (Message Queue/MQ) services that I also need to communicate from the client.
I have a security model in place around the client and server through twisted, and I don't want the clients to talk to the Message Queue Server directly, nor do I want another dependency on amqp libraries for the clients.
Ideally I would like the client to send a request to the server through perspective broker, the Perspective Broker Server to send an amqp request to another server on behalf of the client, and the client to receive an acknowledgment when the PB server receives a response from the Message Queue Server.
Question:
On the server side, how do I defer the response inside one of the servers pb methods?
More importantly what's the most efficient way to connect an outgoing request back to an incoming request and still preserve the Twisted event driven paradigms?
|
Get the uri of the page in Google App Engine / Django?
| 4,698,277
| 0
| 0
| 98
| 0
|
python,django,google-app-engine,httpwebrequest
|
Should be in self.request.url in your RequestHandler-based class.
| 0
| 1
| 0
| 0
|
2011-01-15T06:02:00.000
| 1
| 1.2
| true
| 4,698,265
| 0
| 0
| 1
| 1
|
I'm using Google App Engine with the standard Django templates found in Webapp.
I wanted to display the permalink to the page the user is on. How do I get the uri?
|
Using RabbitMQ is there a way to look at the queue contents without a dequeue operation?
| 9,286,914
| 46
| 48
| 26,786
| 0
|
python,rabbitmq,esb,amqp
|
Queue browsing is not supported directly, but if you declare a queue with NO auto acknowledgements and do not ACK the messages that you receive, then you can see everything in it. After you have had a look, send a CANCEL on the channel, or disconnect and reconnect to cause all the messages to be requeued. This does increment a number in the message headers, but otherwise leaves the messages untouched.
I built an app where message ordering was not terribly important, and I frequently scanned through the queue in this way. If I found a problem, I would dump the messages into a file, fix them and resubmit.
If you only need to peek at a message or two once in a while you can do that with the RabbitMQ management plugin.
In addition, if you only need a message count, you can get that every time you declare the queue, or on a basic.get command.
| 0
| 1
| 0
| 0
|
2011-01-15T15:13:00.000
| 3
| 1.2
| true
| 4,700,292
| 0
| 0
| 0
| 1
|
As a way to learn RabbitMQ and python I'm working on a project that allows me to distribute h264 encodes between a number of computers. The basics are done, I have a daemon that runs on Linux or Mac that attaches to queue, accepts jobs and encodes them using HandBrakeCLI and acks the message once the encode is complete. I've also built a simple tool to push items into the queue.
Now I want to expand the capabilities of the tool that pushes items into the queue so that I can view what is in the queue. I'm aware of the ability to see how many items are in the queue, but I want to be able to get the actual messages so I can show what movie or TV show is waiting to be encoded yet. The idea is that the queue manager would receive messages from the encoder clients when a job has completed and then refresh the queue list.
I know there is a convoluted way of keeping the queue manager's list in sync with the actual work queue but I'd like this to be "persistent" in that I should be able to close the queue manager and reopen it later to see the queue.
|
Python and/or Perl VS bash
| 13,985,089
| -1
| 8
| 5,752
| 0
|
python,perl,bash,admin
|
I'd say if this is only your machine and you're not supposed to share those administration scripts with anyone else, you'd better keep doing it in Python (which it seems you feel more comfortable with).
But if you have colleagues, or your admin scripts are meant to be used by other people, then keep them in a form that is more popular and more understandable to others: Bash!
Also, if you know Bash, you can simply reuse dozens of existing Bash scripts by customizing or improving them to suit your needs.
| 0
| 1
| 0
| 1
|
2011-01-15T19:45:00.000
| 2
| -0.099668
| false
| 4,701,766
| 0
| 0
| 0
| 1
|
I normally code admin scripts in Python and I know of many that code them in Perl. I was about to invest some time on improving my skills on bash programming. But I wonder if people around think that this is a good idea ?
I know bash is a good skill to have and the market often demands it, but ... if I can get by with Python or Perl, then ... is it really worth the effort?
As answers I am looking for cases where actually bash is way better than Perl or Python to develop admin scripts.
|
Replace symbols in python script on distribution
| 4,706,043
| 3
| 2
| 201
| 0
|
python,distutils
|
Do it the other way around. Add the version number, the author name and other metadata you need in the script to the script itself. Then import or execfile() the script in setup.py, and use the metadata defined in the script as arguments to the setup() function.
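One way to sketch the other direction as well (the file name myscript.py and the regex-based parse are assumptions; you could equally import or execfile() the script, as the answer suggests):

```python
import re

def read_metadata(path):
    """Pull simple __name__ = "value" assignments out of a script
    without importing it (so no top-level code runs)."""
    meta = {}
    pattern = re.compile(r'^__(\w+)__\s*=\s*["\'](.+?)["\']\s*$')
    with open(path) as f:
        for line in f:
            m = pattern.match(line.strip())
            if m:
                meta[m.group(1)] = m.group(2)
    return meta

# In setup.py, something like:
#   meta = read_metadata("myscript.py")
#   setup(name="myscript", version=meta["version"],
#         author=meta["author"], ...)
```

The script itself then answers --version by printing its own __version__, and setup.py reuses the same value, so the metadata lives in exactly one place.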
| 0
| 1
| 0
| 1
|
2011-01-16T13:58:00.000
| 3
| 1.2
| true
| 4,705,723
| 1
| 0
| 0
| 1
|
I have a python script that outputs the program name, version number and the author when called with command line arguments like --help or --version. Currently this information is hardcoded in the python script itself. But I use distutils for building/packaging the application so all this information is already present in the setup.py. Is it possible to let distutils write metadata like version and author name/email to the built python script so I only need to maintain this data in the setup.py file? Or is there another standard mechanism to handle stuff like that?
|
Find the most recent file in a directory without reading all the contents of it
| 4,709,984
| 3
| 4
| 1,636
| 0
|
python,c,unix
|
No portable API exists to do this in Unix. Most filesystems don't index files inside directories by their mtime (or ctime), so even if it did it probably wouldn't be any faster than doing it yourself.
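So the usual approach is to stat each entry yourself; a minimal sketch (the helper name is arbitrary):

```python
import os

def newest_file(directory):
    """Return the path of the most recently modified regular file in
    `directory`, or None if it contains no files. One stat() per entry
    is unavoidable; the kernel keeps no per-directory mtime index."""
    newest, newest_mtime = None, -1
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        mtime = os.stat(path).st_mtime
        if mtime > newest_mtime:
            newest, newest_mtime = path, mtime
    return newest
```

On a huge tree this is still one scan per directory; only an inotify-style monitor can avoid rescanning.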
| 0
| 1
| 0
| 0
|
2011-01-17T04:07:00.000
| 4
| 0.148885
| false
| 4,709,968
| 0
| 0
| 0
| 1
|
I'm trying to find out the latest file in a huge filesystem. One way to do this is to go through all directories - one at a time, read its contents, select the latest file etc.
The obvious drawback is I have to get all the files in a specific directory. I was wondering whether there was a 'magic' call in Python [1] which Unix supports to get just the latest file in a directory.
[1]. My application is in Python, but if a ready-made solution doesn't exist in the stdlib, please provide C (language) alternatives using system calls. I'm willing to write a C extension to make this work.
Thanks
Update: I suppose I should offer an explanation of why an inotify-type solution won't work for me. I was simply looking for a system call, usable from Python/C, which could give me the latest file. Yes, one could have inotify (or a similar overarching setup) monitoring FS changes, but given a random directory, how do I find the latest file? That is the essence of the question.
|
Is there an API of Google App Engine provided to better configure the Bigtable besides Datastore?
| 4,718,951
| 2
| 1
| 134
| 1
|
python,google-app-engine
|
The Datastore is the only interface to the underlying storage on App Engine. You should be able to use any valid UTF-8 string as a kind name, key name, or property name, however.
| 0
| 1
| 0
| 0
|
2011-01-17T10:32:00.000
| 1
| 0.379949
| false
| 4,712,143
| 0
| 0
| 1
| 1
|
According to the Bigtable original article, a column key of a Bigtable is named using "family:qualifier" syntax where column family names must be printable but qualifiers may be arbitrary strings. In the application I am working on, I would like to specify the qualifiers using Chinese words (or phrase). Is it possible to do this in Google App Engine? Is there a Bigtable API other than provided datastore API? It seems Google is tightly protecting its platform for good reasons.
Thanks in advance.
Marvin
|
"IOError [Errno 13] Permission denied" when copying a file on Windows
| 4,715,102
| 0
| 4
| 2,669
| 0
|
python,windows
|
Can you copy files that are open in Windows? I have a vague memory that you can't, and the file will be open while you execute it.
Is it really being copied? It doesn't exist there before copying? Did it copy the whole file?
| 0
| 1
| 0
| 0
|
2011-01-17T13:20:00.000
| 3
| 0
| false
| 4,713,589
| 1
| 0
| 0
| 3
|
I wrote a program that copies a file called a.exe to C:/Windows/; then I packed it to an exe with PyInstaller and renamed the exe file to a.exe. When I run the exe file, it outputs IOError [Errno 13] Permission denied: 'C:/Windows/a.exe', but the file a.exe was copied to the directory C:/Windows. Then I ran it as Administrator, and it happened again...
At first, I copied the file with shutil.copy; then I wrote a function myself (open a.exe, create a.exe under C:/Windows, read a.exe's content and write it to C:/Windows/a.exe, close all), but it doesn't help... Any ideas?
|
"IOError [Errno 13] Permission denied" when copying a file on Windows
| 4,713,887
| 4
| 4
| 2,669
| 0
|
python,windows
|
Check whether a.exe has the read-only attribute set. shutil.copy raises a "Permission denied" error when it is called to overwrite an existing file with the read-only attribute set.
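A sketch of working around that (the helper name is made up; on Windows, clearing stat.S_IWRITE on the destination drops the read-only attribute):

```python
import os
import shutil
import stat

def force_copy(src, dst):
    """Copy src over dst, first clearing a read-only flag on an
    existing destination (shutil.copy raises 'Permission denied'
    when the target exists and is read-only)."""
    if os.path.exists(dst) and not os.access(dst, os.W_OK):
        os.chmod(dst, stat.S_IWRITE)  # on Windows this drops read-only
    shutil.copy(src, dst)
```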
| 0
| 1
| 0
| 0
|
2011-01-17T13:20:00.000
| 3
| 1.2
| true
| 4,713,589
| 1
| 0
| 0
| 3
|
I wrote a program that copies a file called a.exe to C:/Windows/; then I packed it to an exe with PyInstaller and renamed the exe file to a.exe. When I run the exe file, it outputs IOError [Errno 13] Permission denied: 'C:/Windows/a.exe', but the file a.exe was copied to the directory C:/Windows. Then I ran it as Administrator, and it happened again...
At first, I copied the file with shutil.copy; then I wrote a function myself (open a.exe, create a.exe under C:/Windows, read a.exe's content and write it to C:/Windows/a.exe, close all), but it doesn't help... Any ideas?
|
"IOError [Errno 13] Permission denied" when copying a file on Windows
| 4,713,881
| 0
| 4
| 2,669
| 0
|
python,windows
|
Apparently you're trying to execute a file that copies itself to a different place ... I guess that cannot work.
| 0
| 1
| 0
| 0
|
2011-01-17T13:20:00.000
| 3
| 0
| false
| 4,713,589
| 1
| 0
| 0
| 3
|
I wrote a program that copies a file called a.exe to C:/Windows/; then I packed it to an exe with PyInstaller and renamed the exe file to a.exe. When I run the exe file, it outputs IOError [Errno 13] Permission denied: 'C:/Windows/a.exe', but the file a.exe was copied to the directory C:/Windows. Then I ran it as Administrator, and it happened again...
At first, I copied the file with shutil.copy; then I wrote a function myself (open a.exe, create a.exe under C:/Windows, read a.exe's content and write it to C:/Windows/a.exe, close all), but it doesn't help... Any ideas?
|
Serving large generated files using Google App Engine?
| 4,718,754
| 3
| 2
| 793
| 0
|
python,google-app-engine,blob,blobstore
|
I think storing it in the blobstore via a form post is your best currently-available option. We have plans to implement programmatic blobstore writing, but it's not ready quite yet.
| 0
| 1
| 0
| 0
|
2011-01-17T20:31:00.000
| 4
| 1.2
| true
| 4,717,568
| 0
| 0
| 1
| 2
|
Presently I have a GAE app that does some offline processing (backs up a user's data), and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are:
Adding some code to the offline processing code that 'spoofs' it as a form upload to the blobstore, and going through the normal blobstore process to serve the file.
Having the offline processing code store the file somewhere off of GAE, and serving it from there.
Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing in the datastore as db.Text or db.Blob, but there I encounter the 1 MB limit.
Any input would be appreciated,
|
Serving large generated files using Google App Engine?
| 4,717,660
| 0
| 2
| 793
| 0
|
python,google-app-engine,blob,blobstore
|
There is some approach you are overlooking, although I'm not sure whether it is that much better:
Split the data into many 1MB chunks, and have individual requests to transfer the chunks.
This would require cooperation from the outside applications to actually retrieve the data in chunks; you might want to use the HTTP Range header to maintain the illusion of a single file. Then have another object that keeps the IDs of all the individual chunks.
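The splitting step itself is trivial; a sketch, assuming the 1 MB entity limit (the exact chunk size and how you store the ordered key list are up to you):

```python
def split_chunks(data, chunk_size=1000000):
    """Split a byte string into datastore-entity-sized pieces; each
    piece would be stored as its own entity, with a separate object
    keeping the ordered list of chunk keys for reassembly."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

chunks = split_chunks(b"x" * 2500000)  # 2.5 MB -> 3 chunks
```

Serving then maps an HTTP Range request onto the chunk(s) covering that byte range.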
| 0
| 1
| 0
| 0
|
2011-01-17T20:31:00.000
| 4
| 0
| false
| 4,717,568
| 0
| 0
| 1
| 2
|
Presently I have a GAE app that does some offline processing (backs up a user's data), and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are:
Adding some code to the offline processing code that 'spoofs' it as a form upload to the blobstore, and going through the normal blobstore process to serve the file.
Having the offline processing code store the file somewhere off of GAE, and serving it from there.
Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing in the datastore as db.Text or db.Blob, but there I encounter the 1 MB limit.
Any input would be appreciated,
|
How to enable Python support in gVim on Windows?
| 37,509,169
| 0
| 54
| 58,355
| 0
|
python,vim
|
After trying all answers in this thread without success, the following worked for me (Win10, Python 2.7 32bit, gvim 7.4 32bit):
Reinvoke the Python Installer, select Change Python
Select the Option Add Python to Path, which is off by default
After the installer is done, restart your machine
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 0
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
How to enable Python support in gVim on Windows?
| 35,977,819
| 0
| 54
| 58,355
| 0
|
python,vim
|
Download the one called "OLE GUI executable"
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 0
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
How to enable Python support in gVim on Windows?
| 35,925,222
| 0
| 54
| 58,355
| 0
|
python,vim
|
After reading the above, I can confirm that on Win8.1 the order you install them does matter (at least for me it did). I had 32-bit Vim 7.4 installed for a few months, then tried adding Python and couldn't do it. I left Python 2.7.9 installed and uninstalled/reinstalled Vim, and now it works.
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 0
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
How to enable Python support in gVim on Windows?
| 25,298,763
| 0
| 54
| 58,355
| 0
|
python,vim
|
When I typed :version, it revealed that my Vim was not compiled with Python. Perhaps because I did not have Python (32-bit?) at the time.
I did install 32-bit Python as suggested, but reinstalling Vim seemed necessary.
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 0
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
How to enable Python support in gVim on Windows?
| 17,963,884
| 43
| 54
| 58,355
| 0
|
python,vim
|
I had the same issue, but on Windows 7, and a restart didn't fix it.
I already had gVim 7.3 installed. At the time of writing the current Python version was 3.3, so I installed that. But :has ("python") and :has ("python3") still returned 0.
After much trial and error, I determined that:
If gVim is 32-bit, and it usually is even on 64-bit Windows (you can confirm using the :version command), then you need the 32-bit python installation as well
No restart of Windows 7 is required
The version of python needs to match the version that gVim is compiled for as it looks for a specific DLL name. You can work this out from the :version command in gVim, which gives something like:
Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DFEAT_NETBEANS_INTG -DFEAT_XPM_W32 -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjGOLYHTR/ /Ox /GL -DNDEBUG /Zl /MT -DFEAT_OLE -DFEAT_MBYTE_IME -DDYNAMIC_IME -DFEAT_GUI_W32 -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_TCL -DDYNAMIC_TCL -DDYNAMIC_TCL_DLL=\"tcl83.dll\" -DDYNAMIC_TCL_VER=\"8.3\" -DFEAT_PYTHON -DDYNAMIC_PYTHON -DDYNAMIC_PYTHON_DLL=\"python27.dll\" -DFEAT_PYTHON3 -DDYNAMIC_PYTHON3 -DDYNAMIC_PYTHON3_DLL=\"python31.dll\" -DFEAT_PERL -DDYNAMIC_PERL -DDYNAMIC_PERL_DLL=\"perl512.dll\" -DFEAT_RUBY -DDYNAMIC_RUBY -DDYNAMIC_RUBY_VER=191 -DDYNAMIC_RUBY_DLL=\"msvcrt-ruby191.dll\" -DFEAT_BIG /Fd.\ObjGOLYHTR/ /Zi
So the above told me that I don't actually want python 3.3, I need 3.1 (or 2.7). After installing python 3.1, :has ("python") still returns 0, but :has ("python3") now returns 1. That should mean that python based scripts will now work!
I imagine future versions of gVim may be compiled against other versions of python, but using this method should let you work out which version is required.
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 1
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
How to enable Python support in gVim on Windows?
| 12,643,966
| 27
| 54
| 58,355
| 0
|
python,vim
|
I encountered this problem on Windows 7 64-bit. I realized I was using 64-bit Python 2.7.3 and 32-bit vim 7.3-46. I reinstalled both as 32-bit versions and then restarted the computer. Now it works.
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 1
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
How to enable Python support in gVim on Windows?
| 31,216,863
| 1
| 54
| 58,355
| 0
|
python,vim
|
I had a similar problem. I've been enjoying vim's omni-completion feature for some years, using Windows XP, Python 2.7, gVim 7. Recently I moved to a new PC running Windows 8.1. I installed gVim and the plugins I like, then tried out everything. Omni-completion gave an error, saying I needed the version of vim compiled with Python support. At that stage, I had not yet installed Python. The solution was to install Python then re-install vim. Omni-completion now works. Perhaps the order of installation matters.
| 0
| 1
| 0
| 0
|
2011-01-17T21:31:00.000
| 13
| 0.015383
| false
| 4,718,122
| 1
| 0
| 0
| 7
|
I'm trying to get Python support in gVim on Windows. Is there a way to accomplish that?
I'm using:
Windows XP SP3
gVim v. 7.3
Python 2.7.13 (ActivePython through Windows Installer binaries)
|
Python tarfile progress
| 4,718,870
| 0
| 4
| 3,242
| 0
|
python,progress-bar,tar
|
How are you adding files to the tar file? Is it through "add" with recursive=True? You could build the list of files yourself and call "add" one by one, showing the progress as you go. If you're building from a stream/file then it looks like you could wrap that fileobj to track the read status and pass that into addfile.
It does not look like you will need to modify tarfile.py at all.
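A rough sketch of that approach (the function name and archive layout here are made up for illustration): collect the file list first, call add() with recursive=False one file at a time, and report progress from the byte counts.

```python
import os
import tarfile

def make_tar_with_progress(output_path, source_dir):
    # Walk the tree up front so the total size is known before archiving.
    paths = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            paths.append(os.path.join(root, name))
    total = sum(os.path.getsize(p) for p in paths) or 1
    done = 0
    with tarfile.open(output_path, "w:gz") as tar:
        for p in paths:
            tar.add(p, recursive=False)  # one file per call, so we can report in between
            done += os.path.getsize(p)
            print("%.1f%% (%s)" % (100.0 * done / total, p))
```

An ETA could be estimated the same way, from the elapsed time divided by the fraction of bytes done.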
| 0
| 1
| 0
| 1
|
2011-01-17T22:23:00.000
| 5
| 0
| false
| 4,718,588
| 0
| 0
| 0
| 1
|
Is there any library to show progress when adding files to a tar archive in Python, or alternatively would it be possible to extend the functionality of the tarfile module to do this?
In an ideal world I would like to show the overall progress of the tar creation as well as an ETA as to when it will be complete.
Any help on this would be really appreciated.
|
List of References in Google App Engine for Python
| 4,719,811
| 6
| 18
| 2,691
| 0
|
python,google-app-engine
|
You're right, there's no built-in ReferenceListProperty. It'd be possible to write one yourself - custom Property subclasses are generally fairly easy - but getting it right is harder than you'd think when it comes to dereferencing and caching a list of references.
You can use a db.ListProperty(db.Key), however, which allows you to store a list of keys. Then, you can load them individually or all at once using a batch db.get() operation. This requires you to do the resolution step yourself, but it also gives you more control over when you dereference entities.
| 0
| 1
| 0
| 0
|
2011-01-18T01:29:00.000
| 2
| 1
| false
| 4,719,700
| 0
| 0
| 1
| 1
|
In Google App Engine, there is such a thing as a ListProperty that allows you to hold a list (array) of items. You may also specify the type of the item being held, for instance string, integer, or whatever.
Google App Engine also allows you to have a ReferenceProperty. A ReferenceProperty "contains" a reference to another Google App Engine Model entity. If you access a ReferenceProperty, it will automatically retrieve the actual entity that the reference points to. This is convenient, as it beats getting the key, and then getting the entity for said key.
However, I do not see any such thing as a ListReferenceProperty (or ReferenceListProperty). I would like to hold a list of references to other entities, that would automatically be resolved when I attempt to access elements within the list. The closest I can get it seems is to hold a list of db.Key objects. I can use these keys to then manually retrieve their associated entities from the server.
Is there any good solution to this? Basically, I would like the ability to have a collection of (auto-dereferencing) references to other entities. I can almost get there by having a collection of keys to other entities, but I would like it to "know" that these are key items, and that it could dereference them as a service to me.
Thank you
|
Python: detecting install location of python tools
| 4,722,121
| 5
| 3
| 1,415
| 0
|
python
|
I see you are talking about the source bundle of Python, which includes Tools/Scripts, a set of helpful scripts for working with the Python source. Note that they are not part of the Python standard library, and installers are not obliged to bundle them with their distribution; e.g. on Ubuntu, I don't find them in /usr/lib/python2.6 or any other path.
If you want to rely on any of the Tools/Scripts, just carry them along with your script; that would be most portable.
| 0
| 1
| 0
| 1
|
2011-01-18T08:48:00.000
| 1
| 1.2
| true
| 4,722,072
| 1
| 0
| 0
| 1
|
Python's installation comes with some handy tools, located under
$YOUR_PYTHON/Tools/Scripts. Is there a platform-independent way to find out where on a system they are located? I want to use ftpmirror.py as part of a shell script.
|
xmpppy and Facebook Chat Integration
| 5,268,496
| 2
| 3
| 2,814
| 0
|
python,facebook,chat,xmpppy
|
I also started the same project and ran into the same problem. I found the solution too: you have to use your Facebook username (so you must set one if you haven't already), and it has to be written entirely in lowercase. That is the most important part - most probably you, like me, were not writing it in lowercase.
| 0
| 1
| 1
| 1
|
2011-01-19T06:03:00.000
| 2
| 0.197375
| false
| 4,732,230
| 0
| 0
| 0
| 1
|
I'm trying to create a very simple script that uses python's xmpppy to send a message over facebook chat.
import xmpp
FACEBOOK_ID = "username@chat.facebook.com"
PASS = "password"
SERVER = "chat.facebook.com"
jid=xmpp.protocol.JID(FACEBOOK_ID)
C=xmpp.Client(jid.getDomain(),debug=[])
if not C.connect((SERVER,5222)):
    raise IOError('Can not connect to server.')
if not C.auth(jid.getNode(),PASS):
    raise IOError('Can not auth with server.')
C.send(xmpp.protocol.Message("friend@chat.facebook.com","Hello world",))
This code works to send a message via gchat, however when I try with facebook I recieve this error:
An error occurred while looking up _xmpp-client._tcp.chat.facebook.com
When I remove @chat.facebook.com from the FACEBOOK_ID I get this instead:
File "gtalktest.py", line 11, in
if not C.connect((SERVER,5222)):
File "/home/john/xmpppy-0.3.1/xmpp/client.py", line 195, in connect
if not CommonClient.connect(self,server,proxy,secure,use_srv) or secure<>None and not secure: return self.connected
File "/home/john/xmpppy-0.3.1/xmpp/client.py", line 179, in connect
if not self.Process(1): return
File "/home/john/xmpppy-0.3.1/xmpp/dispatcher.py", line 302, in dispatch
handler['func'](session,stanza)
File "/home/john/xmpppy-0.3.1/xmpp/dispatcher.py", line 214, in streamErrorHandler
raise exc((name,text))
xmpp.protocol.HostUnknown: (u'host-unknown', '')
I also notice any time I import xmpp I get the following two messages when running:
/home/john/xmpppy-0.3.1/xmpp/auth.py:24: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
import sha,base64,random,dispatcher
/home/john/xmpppy-0.3.1/xmpp/auth.py:26: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
I'm fairly new to solving these kinds of problems, and advise, or links to resources that could help me move forward in solve these issues would be greatly appreciated. Thanks for reading!
|
Dealing with bugs that only appear right after boot
| 4,733,924
| 3
| 3
| 58
| 0
|
python,windows,debugging,race-condition,boot
|
Run it inside a virtual machine.
| 0
| 1
| 0
| 0
|
2011-01-19T10:00:00.000
| 2
| 0.291313
| false
| 4,733,886
| 1
| 0
| 0
| 2
|
I dealt with a bug today that only appeared when I ran my program right after booting my computer. The cold start exposed a race condition which triggered the bug. I managed to fix it, but it took a long time because I had to reboot my machine several times to figure out what was going on. Can anybody suggest better ways of debugging problems like this in the future? Can I somehow quickly put the computer into a "just booted" state?
Running Python 2.6 on Windows XP.
|
Dealing with bugs that only appear right after boot
| 4,734,016
| 2
| 3
| 58
| 0
|
python,windows,debugging,race-condition,boot
|
Use a virtual machine (e.g. VirtualBox) and save its state (take a snapshot) just before booting finishes. Test freely and restore the just-booted state according to your needs.
| 0
| 1
| 0
| 0
|
2011-01-19T10:00:00.000
| 2
| 1.2
| true
| 4,733,886
| 1
| 0
| 0
| 2
|
I dealt with a bug today that only appeared when I ran my program right after booting my computer. The cold start exposed a race condition which triggered the bug. I managed to fix it, but it took a long time because I had to reboot my machine several times to figure out what was going on. Can anybody suggest better ways of debugging problems like this in the future? Can I somehow quickly put the computer into a "just booted" state?
Running Python 2.6 on Windows XP.
|
Getting Java to talk to Python, C, C++, and Ruby
| 4,737,502
| 1
| 2
| 1,579
| 0
|
java,c++,python,c,ruby
|
You should probably use a standard interprocess mechanism like a pipe or socket.
All of these languages have libraries available for both, and this strategy allows communication between any 2 of your processes (Java/Ruby, Ruby/Python, Java/C, etc)
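For instance, a minimal localhost socket round trip in Python (the function names are illustrative; any of the languages above could sit on either end of the connection):

```python
import socket
import threading

def start_echo_server():
    """Start a one-shot echo server on an OS-chosen port; return the port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0 -> let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _addr = srv.accept()
        data = conn.recv(1024)
        conn.sendall(data)  # echo the payload back unchanged
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def echo_roundtrip(port, payload):
    """What a client in any language would do: connect, send, receive."""
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(payload)
    reply = cli.recv(1024)
    cli.close()
    return reply
```

The Java side would do the same thing with java.net.Socket, and the C side with connect()/send()/recv().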
| 0
| 1
| 0
| 0
|
2011-01-19T14:56:00.000
| 3
| 0.066568
| false
| 4,736,698
| 0
| 0
| 1
| 2
|
I have succeeded in getting ProcessBuilder to run external scripts, but I still have to get Java to communicate with the external scripts. I figure that I should get the input/output streams from the process, and use those to send and receive data. I'm having the most trouble with giving input to the scripts. It seems that I can get output from the scripts by using the script's print function, but I can't seem to get the scripts to register input from the main java program.
This question involves four languages, so it's fine if you post only the answer regarding one language.
|
Getting Java to talk to Python, C, C++, and Ruby
| 4,737,732
| 2
| 2
| 1,579
| 0
|
java,c++,python,c,ruby
|
The method getOutputStream() on the Process class returns a stream you can write to in Java that connects to the stdin stream of the process. You should be able to read this as you would normally read stdin for each language (e.g., cin for C++, scanf for C, STDIN.read for Ruby, don't know Python!)
If this is what you're doing and it isn't working (your question sounds like it might be but it's hard to tell) could you post some code to make it easier to see what you might be doing wrong?
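On the script side this is just ordinary stdin reading. A small Python sketch (using subprocess to stand in for Java's ProcessBuilder; the child code is a made-up example):

```python
import subprocess
import sys

# The child script: reads one line from stdin and replies on stdout.
CHILD = "import sys; line = sys.stdin.readline(); sys.stdout.write('got: ' + line)"

def run_child(message):
    # subprocess here plays the role of Java's ProcessBuilder: proc.stdin is
    # the same stream that getOutputStream() hands you on the Java side.
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        universal_newlines=True,
    )
    out, _err = proc.communicate(message + "\n")
    return out
```

One common gotcha in either direction: if the parent never flushes (or closes) the child's stdin, the child's readline blocks forever waiting for data.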
| 0
| 1
| 0
| 0
|
2011-01-19T14:56:00.000
| 3
| 1.2
| true
| 4,736,698
| 0
| 0
| 1
| 2
|
I have succeeded in getting ProcessBuilder to run external scripts, but I still have to get Java to communicate with the external scripts. I figure that I should get the input/output streams from the process, and use those to send and receive data. I'm having the most trouble with giving input to the scripts. It seems that I can get output from the scripts by using the script's print function, but I can't seem to get the scripts to register input from the main java program.
This question involves four languages, so it's fine if you post only the answer regarding one language.
|
Do you know Python libs to send / receive files using Bittorent?
| 4,750,449
| 2
| 7
| 891
| 0
|
python,bittorrent
|
The original BitTorrent client is written in Python. Have you checked that out?
| 0
| 1
| 0
| 1
|
2011-01-20T17:27:00.000
| 2
| 0.197375
| false
| 4,750,432
| 0
| 0
| 0
| 1
|
I have big files to move to a lot of servers. For now we use rsync, but I would like to experiment with BitTorrent.
I'm studying the code of Deluge, a Python BitTorrent client, but it uses Twisted and is utterly complex. Do you know anything higher level?
EDIT: I just read that Facebook does code deployment using BitTorrent. Maybe they published their lib for that, but I can't find it. Ever hear of it?
|
How do I refer to the current directory in functions from the Python 'os' module?
| 4,758,914
| 3
| 0
| 6,837
| 0
|
python,path,working-directory
|
You'd typically use os.listdir('.') for this purpose. If you need a standard module, the variable os.curdir is available.
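A quick illustration of both spellings:

```python
import os

# Both refer to the current working directory:
files_a = os.listdir(".")
files_b = os.listdir(os.curdir)  # os.curdir is simply the string "."
assert sorted(files_a) == sorted(files_b)
```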
| 0
| 1
| 0
| 0
|
2011-01-21T12:36:00.000
| 3
| 0.197375
| false
| 4,758,855
| 1
| 0
| 0
| 1
|
I want to use the function os.listdir(path) to get a list of files from the directory I'm running the script in, but how do I say the current directory in the "path" argument?
|
Why can't it find my celery config file?
| 8,588,149
| 3
| 23
| 30,388
| 0
|
python,django,celery
|
You can work around this with the environment, or use --config. Note that --config requires:
a path relative to CELERY_CHDIR from /etc/defaults/celeryd
a Python module name, not a filename.
The error message could probably mention both of these facts.
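For illustration, a minimal celeryconfig.py of that era might look like this (the broker URL and task module are placeholders, not values from the question). The key point is that --config and the CELERY_CONFIG_MODULE environment variable take the importable module name celeryconfig, not the filename celeryconfig.py:

```python
# celeryconfig.py -- must be importable, i.e. on sys.path or in the
# directory celeryd is started from (CELERY_CHDIR).
BROKER_URL = "amqp://guest:guest@localhost:5672//"  # placeholder broker
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("myapp.tasks",)  # hypothetical task module
```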
| 0
| 1
| 0
| 0
|
2011-01-21T19:44:00.000
| 4
| 0.148885
| false
| 4,763,072
| 0
| 0
| 0
| 2
|
/home/myuser/mysite-env/lib/python2.6/site-packages/celery/loaders/default.py:53:
NotConfigured: No celeryconfig.py
module found! Please make sure it
exists and is available to Python.
NotConfigured)
I even defined it in my /etc/profile and also in my virtual environment's "activate". But it's not reading it.
|
Why can't it find my celery config file?
| 5,534,637
| 3
| 23
| 30,388
| 0
|
python,django,celery
|
Make sure you have celeryconfig.py in the same location you are running 'celeryd' or otherwise make sure its is available on the Python path.
| 0
| 1
| 0
| 0
|
2011-01-21T19:44:00.000
| 4
| 0.148885
| false
| 4,763,072
| 0
| 0
| 0
| 2
|
/home/myuser/mysite-env/lib/python2.6/site-packages/celery/loaders/default.py:53:
NotConfigured: No celeryconfig.py
module found! Please make sure it
exists and is available to Python.
NotConfigured)
I even defined it in my /etc/profile and also in my virtual environment's "activate". But it's not reading it.
|