| Column | dtype | min | max |
|---|---|---|---|
| Title | string (length) | 15 | 150 |
| A_Id | int64 | 2.98k | 72.4M |
| Users Score | int64 | -17 | 470 |
| Q_Score | int64 | 0 | 5.69k |
| ViewCount | int64 | 18 | 4.06M |
| Database and SQL | int64 | 0 | 1 |
| Tags | string (length) | 6 | 105 |
| Answer | string (length) | 11 | 6.38k |
| GUI and Desktop Applications | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 1 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| CreationDate | string (length) | 23 | 23 |
| AnswerCount | int64 | 1 | 64 |
| Score | float64 | -1 | 1.2 |
| is_accepted | bool (2 classes) |  |  |
| Q_Id | int64 | 1.85k | 44.1M |
| Python Basics and Environment | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Web Development | int64 | 0 | 1 |
| Available Count | int64 | 1 | 17 |
| Question | string (length) | 41 | 29k |
Packaging Python for 32-bit Windows XP from a 64-bit Windows 7 machine using py2exe
| 9,040,859
| 1
| 1
| 968
| 0
|
python,dll,windows-xp,py2exe,windows-7-x64
|
py2exe tells you:
Your executable(s) also depend on these dlls which are not included,
you may or may not need to distribute them.
Make sure you have the license if you distribute any of them, and
make sure you don't distribute files belonging to the operating system.
WS2_32.dll is part of the operating system.
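For reference, excluding OS-owned DLLs is done with py2exe's dll_excludes option. Below is a minimal, hypothetical setup.py sketch (the entry script name myapp.py is a placeholder, not the asker's actual build script):
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(
    console=['myapp.py'],  # placeholder entry script
    options={'py2exe': {
        # Leave OS-owned DLLs out of the bundle so the target machine's
        # own copies are used instead of the Windows 7 versions.
        'dll_excludes': ['WS2_32.dll'],
    }},
)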
| 0
| 1
| 0
| 0
|
2012-01-27T22:16:00.000
| 1
| 1.2
| true
| 9,040,793
| 1
| 0
| 0
| 1
|
I am trying to package a Python application on my 64-bit Windows 7 machine using py2exe.
The final target of this application is 32-bit Windows machines.
I am using 32-bit Python 2.7 on the 64-bit Windows 7 machine. When I package the application, py2exe warns me of several DLLs from the system32 directory that need to be packaged.
The built exe now fails to run on the destination machines, Windows XP (32-bit) and Windows Vista (32-bit), with a message saying C:\myapp\bin\WS2_32.dll is corrupted and I need to check it against the Windows installation.
Checking:
Windows 7 64-bit: system32 directory WS2_32.dll has size 290 KB
Windows XP 32-bit: system directory WS2_32.dll has size 80 KB
My question is, can I build an XP/Vista 32-bit application using py2exe from Windows 7 given these differences in DLL size?
I also tried replacing the C:\myapp\bin\WS2_32.dll with the XP DLL, but this time the application didn't launch.
|
Task Queues or Multi Threading on google app engine
| 9,047,335
| 1
| 1
| 980
| 0
|
python,multithreading,google-app-engine,web2py,task-queue
|
Maybe I'm misunderstanding something, but this sounds like a perfect match for a task queue, and I can't see how multithreading would help: as I understand it, multithreading only means that you can serve many responses simultaneously; it won't help if your responses take longer than the 30-second limit.
With a task queue you can add a task, process until the time limit, then create another task with the remainder of the work if you haven't finished by the time limit.
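As an illustration of that re-enqueue pattern, here is a hedged sketch using the App Engine Task Queue API; match_records and the /worker URL are hypothetical stand-ins for the actual matching job:
from google.appengine.api import taskqueue

def process_chunk(offset):
    # Do as much matching as fits inside the request deadline; the helper
    # (hypothetical) returns the offset where it stopped, or None when done.
    new_offset = match_records(offset)
    if new_offset is not None:
        # Not finished: chain a follow-up task carrying the remainder.
        taskqueue.add(url='/worker', params={'offset': str(new_offset)})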
| 0
| 1
| 0
| 0
|
2012-01-28T17:25:00.000
| 3
| 0.066568
| false
| 9,047,267
| 0
| 0
| 1
| 2
|
I have my server on Google App Engine.
One of my jobs is to match a huge set of records with another.
This takes very long if I have to match 10,000 records with 100.
What's the best way of implementing this?
I'm using the web2py stack and deployed my application on Google App Engine.
|
Task Queues or Multi Threading on google app engine
| 9,047,823
| 1
| 1
| 980
| 0
|
python,multithreading,google-app-engine,web2py,task-queue
|
Multithreading your code is not supported on GAE, so you cannot use it explicitly.
GAE itself can be multithreaded, which means that one frontend instance can handle multiple HTTP requests simultaneously.
In your case, the best way to achieve parallel task execution is the Task Queue.
| 0
| 1
| 0
| 0
|
2012-01-28T17:25:00.000
| 3
| 0.066568
| false
| 9,047,267
| 0
| 0
| 1
| 2
|
I have my server on Google App Engine.
One of my jobs is to match a huge set of records with another.
This takes very long if I have to match 10,000 records with 100.
What's the best way of implementing this?
I'm using the web2py stack and deployed my application on Google App Engine.
|
Communicate with another program using subprocess or rpc / rest in python?
| 9,049,120
| 1
| 0
| 492
| 0
|
python,rest,subprocess,rpc
|
I think I'd prefer to use either the RPC or the REST interface, because the results you obtain from them are usually in a format that is easy to parse, since those interfaces were designed for machine interaction. A command-line interface, on the other hand, is designed for human interaction: its output is easy to parse for the human eye, but not necessarily for another program that receives it.
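To make the parsing difference concrete, here is a hedged Python 2 sketch; the binary name, flag, and URL are placeholders for the actual server:
import json
import subprocess
import urllib2

# CLI route: output is formatted for humans and must be scraped.
raw = subprocess.check_output(['server_binary', '--status'])
status_cli = raw.strip().split()  # fragile: depends on the exact formatting

# REST route: output is structured for machines.
status_rest = json.load(urllib2.urlopen('http://localhost:8080/status'))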
| 0
| 1
| 0
| 0
|
2012-01-28T20:57:00.000
| 1
| 1.2
| true
| 9,048,558
| 1
| 0
| 0
| 1
|
I have a situation where there is a program written in C++. It is a kind of server that you need to start first. Then, from another console, you can call the program passing command-line arguments and it does stuff. It also provides RPC- and REST-based access, so you can write an RPC- or REST-based library to interface with the server.
So my question is: since the program can be managed using mere command-line arguments, isn't it better to use Python's subprocess module and build a library (wrapper) around it? Or is there any problem with this method?
Consider another case. Say I wanted to build a GUI around any Linux utility like grep that allows the user to test regular expressions (like we have on websites). Isn't it easier to communicate with grep using subprocess?
Thanks.
|
How can I install PIL on mac os x 10.7.2 Lion
| 10,423,397
| 0
| 41
| 73,763
| 0
|
python,macos,python-imaging-library
|
I was trying to execute a Python script with administrative privileges on a Mac (running Lion), and looking at this post I found out that all I needed to do was launch Python with administrative privileges by using the "sudo" command in the Terminal.
Like this: "sudo python", and then executing the script.
I know it is pretty basic, but it was exactly what I needed to get my script working...
| 0
| 1
| 0
| 0
|
2012-01-30T20:07:00.000
| 9
| 0
| false
| 9,070,074
| 1
| 0
| 0
| 2
|
I've tried googling and looking through other people's questions. However, I still couldn't find a clear/simple recipe to install PIL (for Python 2.6 or 2.7) on Mac OS X 10.7.2 Lion.
|
How can I install PIL on mac os x 10.7.2 Lion
| 16,826,248
| 3
| 41
| 73,763
| 0
|
python,macos,python-imaging-library
|
You may try this in the terminal:
sudo easy_install pip
sudo pip install pil
| 0
| 1
| 0
| 0
|
2012-01-30T20:07:00.000
| 9
| 0.066568
| false
| 9,070,074
| 1
| 0
| 0
| 2
|
I've tried googling and looking through other people's questions. However, I still couldn't find a clear/simple recipe to install PIL (for Python 2.6 or 2.7) on Mac OS X 10.7.2 Lion.
|
Error when trying to install pylibmc on Mac OSX Lion
| 48,595,862
| 1
| 17
| 8,279
| 0
|
python,osx-lion,pip,easy-install,llvm-gcc
|
Check whether libmemcached is installed. If it's not found, install it with
brew install libmemcached, and the rest will work just fine.
I resolved this issue while installing the Django test suite.
| 0
| 1
| 0
| 0
|
2012-01-30T20:19:00.000
| 4
| 0.049958
| false
| 9,070,218
| 1
| 0
| 0
| 2
|
I've tried pip and easy_install, but I keep getting the following error:
error: command '/usr/bin/llvm-gcc' failed with exit status 1
I'm running OSX Lion and the install runs inside a virtualenv, with Python 2.7.2.
Thanks in advance.
|
Error when trying to install pylibmc on Mac OSX Lion
| 9,074,361
| 20
| 17
| 8,279
| 0
|
python,osx-lion,pip,easy-install,llvm-gcc
|
First a question: is libmemcached installed? If not, install it and retry. It probably is, but just in case....
If pylibmc still doesn't install, the problem is probably that libmemcached is not installed in a directory where gcc can discover it (this was a MacPorts symptom in my case), in which case you can store the location in the environment when running pip from the command line:
LIBMEMCACHED=/opt/local pip install pylibmc
| 0
| 1
| 0
| 0
|
2012-01-30T20:19:00.000
| 4
| 1.2
| true
| 9,070,218
| 1
| 0
| 0
| 2
|
I've tried pip and easy_install, but I keep getting the following error:
error: command '/usr/bin/llvm-gcc' failed with exit status 1
I'm running OSX Lion and the install runs inside a virtualenv, with Python 2.7.2.
Thanks in advance.
|
Python shell window in compiled applet
| 9,071,041
| 1
| 0
| 451
| 0
|
python,shell,applet,compiled
|
Not sure about generating another shell window, but do you need to have an entire shell open? What about getting the information to the user in a different way, such as:
use another Toplevel window and insert the output into a Text or Listbox, rather than simply printing it. This could also make it easier for users to copy the output (if that's something they might find useful).
write out a data/log file.
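A minimal sketch of the Toplevel-plus-Text idea, assuming Python 2 Tkinter (the widget layout is illustrative):
import Tkinter as tk

root = tk.Tk()
root.withdraw()  # assume the applet manages its own main window

log = tk.Toplevel(root)
log.title('Output')
text = tk.Text(log, width=80, height=24)
text.pack(fill='both', expand=True)

def show(line):
    # Append a line where print output used to go, and keep it visible.
    text.insert('end', line + '\n')
    text.see('end')

show('hello from the applet')
root.mainloop()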
| 0
| 1
| 0
| 0
|
2012-01-30T21:10:00.000
| 2
| 0.099668
| false
| 9,070,807
| 0
| 0
| 0
| 1
|
I am trying to make an executable Python program on Mac OS X. I used the Build Applet program and it runs, but I had some data printing to the shell window, and the executable file does not open a window. Is there a way to open a shell window with an executable Python program?
Thanks
|
What is the best way to make a spool in a directory with pyinotify?
| 9,074,009
| 1
| 0
| 837
| 0
|
python,directory,inotify,pyinotify,spooler
|
You don't really want to move them as they're created, but rather as they're closed. Once they're closed (and nobody has any open file handles on them), you can consider them 'complete' and you can move them without any surprises.
You'll probably be good if you look for a 'close_write' event. (Although that doesn't guarantee that the file contains data or new data; you'd have to verify a modify->close_write sequence for that. But 99.99% of the time, close_write will do the job.)
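A hedged sketch of watching for close_write with pyinotify and moving finished files; the spool and destination paths are placeholders:
import os
import shutil
import pyinotify

class Mover(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # The file was closed after writing, so treat it as complete.
        shutil.move(event.pathname, os.path.join('/dest', event.name))

wm = pyinotify.WatchManager()
wm.add_watch('/orig/spool', pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, Mover()).loop()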
| 0
| 1
| 0
| 0
|
2012-01-31T01:01:00.000
| 1
| 0.197375
| false
| 9,073,096
| 1
| 0
| 0
| 1
|
I'm trying to move every file in a directory to another directory as the files are created. I'd also like to be able to stop the daemon (the running pyinotify instance) cleanly while the original files continue to be created in the orig/spool directory.
I want those files to be processed after the daemon starts again. Maybe I can take advantage of the inotify kernel queues?
Thanks in advance
|
Why use Celery instead of RabbitMQ?
| 9,287,371
| 85
| 107
| 33,423
| 0
|
python,message-queue,rabbitmq,celery
|
You are right, you don't need Celery at all. When you are designing a distributed system there are a lot of options and there is no right way to do things that fits all situations.
Many people find that it is more flexible to have pools of message consumers waiting for a message to appear on their queue, doing some work, and sending a message when the work is finished.
Celery is a framework that wraps up a whole lot of things in a package but if you don't really need the whole package, then it is better to set up RabbitMQ and implement just what you need without all the complexity. In addition, RabbitMQ can be used in many more scenarios besides the task queue scenario that Celery implements.
But if you do choose Celery, then think twice about RabbitMQ. Celery's message queueing model is simplistic and it is really a better fit for something like Redis than for RabbitMQ. Rabbit has a rich set of options that Celery basically ignores.
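For contrast, a minimal Celery task looks like the sketch below (broker URL illustrative); with RabbitMQ alone you would write the consumer loop, acknowledgements, and result handling yourself, e.g. with pika:
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    # Celery supplies serialization, routing, retries, and worker management.
    return x + y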
| 0
| 1
| 0
| 0
|
2012-01-31T10:08:00.000
| 3
| 1.2
| true
| 9,077,687
| 0
| 0
| 0
| 3
|
From my understanding, Celery is a distributed task queue, which means the only thing it should do is dispatch tasks/jobs to other servers and get the results back. RabbitMQ is a message queue, and nothing more. However, a worker could just listen to the MQ and execute the task when a message is received. This achieves exactly what Celery offers, so why is Celery needed at all?
|
Why use Celery instead of RabbitMQ?
| 9,077,760
| 34
| 107
| 33,423
| 0
|
python,message-queue,rabbitmq,celery
|
Celery basically provides a nice interface for doing just what you said, and deals with all the configuration for you. Yes, you could do it by hand, but you'd just be rewriting Celery.
| 0
| 1
| 0
| 0
|
2012-01-31T10:08:00.000
| 3
| 1
| false
| 9,077,687
| 0
| 0
| 0
| 3
|
From my understanding, Celery is a distributed task queue, which means the only thing it should do is dispatch tasks/jobs to other servers and get the results back. RabbitMQ is a message queue, and nothing more. However, a worker could just listen to the MQ and execute the task when a message is received. This achieves exactly what Celery offers, so why is Celery needed at all?
|
Why use Celery instead of RabbitMQ?
| 71,103,053
| 0
| 107
| 33,423
| 0
|
python,message-queue,rabbitmq,celery
|
In my opinion, it's easier to integrate Celery with Flower and other monitoring packages than it is to monitor RabbitMQ directly.
It all depends on the use case anyway...
If you don't need the other functionality Celery provides, RabbitMQ would be an easy way out. Weighing your options wouldn't be a bad idea...
| 0
| 1
| 0
| 0
|
2012-01-31T10:08:00.000
| 3
| 0
| false
| 9,077,687
| 0
| 0
| 0
| 3
|
From my understanding, Celery is a distributed task queue, which means the only thing it should do is dispatch tasks/jobs to other servers and get the results back. RabbitMQ is a message queue, and nothing more. However, a worker could just listen to the MQ and execute the task when a message is received. This achieves exactly what Celery offers, so why is Celery needed at all?
|
How does Python come off as a multi-platform programming language?
| 9,087,791
| 1
| 1
| 643
| 0
|
python,user-interface,cross-platform
|
I find that Python is a very good language for GUI programming. As you have stated, you can use the bindings for wxWidgets (wxPython), but there's also a binding for just about every other cross-platform GUI toolkit you can think of (Tk, Qt, GTK, FLTK, etc.). These GUI toolkits should allow you to make a program that will run unmodified on most OSs.
In terms of Python OS compatibility, it will behave virtually the same on all OSs, except for one or two modules such as mmap.
Using py2exe, py2app, or similar tools, you can embed a Python interpreter (along with your program's bytecode and its dependencies) within an executable, making it easy to distribute an application. An end user can then open the program as they are used to. If you want the "security" of a compiled language, Python will not be the best language for you to use (but I prefer readability over safety :).
Another thing to consider with cross-platformness is what OS specific features you plan on using. Most GUI toolkits will not support things such as Microsoft's DWM (though you can use OS features through ctypes).
| 0
| 1
| 0
| 0
|
2012-01-31T21:48:00.000
| 3
| 0.066568
| false
| 9,087,448
| 1
| 0
| 0
| 1
|
I'm talking about deploying Python-made, GUI-based, desktop applications via the .app and .exe formats for OS X and Windows. As far as I've gone into Python, I've only seen it as an application that runs in the Terminal / Command Prompt. I know that it is possible to create a user interface for it using various offerings on the internet (wxPython?). I just want to see how it passes as a way for a developer to create Mac and Windows applications with as little code difference as possible.
|
Deadlock with flock, fork and terminating parent process
| 9,627,883
| 3
| 7
| 2,736
| 0
|
python,locking,fork,fcntl,flock
|
lsof is almost certainly simply not showing flock() locks, so not seeing one tells you nothing about whether there is one.
flock() locks are inherited via fd-sharing (dup() system call, or fork-and-exec that leaves the file open) and anyone with the shared descriptor can unlock the lock, but if the lock is already held, any attempt to lock it again will block. So, yes, it's likely that the parent locked the descriptor, then died, leaving the descriptor locked. The child process then tries to lock as well and blocks because the descriptor is already locked. (The same would happen if a child process locked the file, then died.)
Since fcntl() locks are per-process, a dying process releases all of its locks, so you can proceed, which is what you want here.
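A minimal sketch of that per-process alternative, using fcntl.lockf() around the log write (the lock-file path is illustrative):
import fcntl

lock_file = open('/tmp/myapp.lock', 'w')

def emit(logfile, line):
    fcntl.lockf(lock_file, fcntl.LOCK_EX)  # POSIX record lock, released automatically if the holder dies
    try:
        logfile.write(line + '\n')
        logfile.flush()
    finally:
        fcntl.lockf(lock_file, fcntl.LOCK_UN)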
| 0
| 1
| 0
| 0
|
2012-02-02T03:59:00.000
| 1
| 1.2
| true
| 9,106,997
| 1
| 0
| 0
| 1
|
I have a pretty complicated python program. Internally it has a logging system that uses an exclusive (LOCK_EX) fcntl.flock to manage global locking. Effectively, whenever a log message is dumped, the global file lock is acquired, message is emitted to file (different from lock file) and global file lock is released.
The program also forks itself several times (after log management is set up).
Generally everything works.
If the parent process is killed (and the children stay alive), I occasionally get a deadlock. All programs block on the fcntl.flock() call forever. Trying to acquire the lock externally also blocks forever. I have to kill the child processes to fix the problem.
What is baffling though is that lsof lock_file shows no process as holding the lock! So I cannot figure out why the file is being locked by the kernel but no process is reported as holding it.
Does flock have issues with forking? Is the dead parent somehow holding the lock even though it is no longer in the process table? How do I go about resolving this issue?
|
Test if a Popen process is waiting for input
| 9,627,884
| 1
| 1
| 619
| 0
|
python,subprocess,popen
|
You can always send something to the subprocess, even if it reads nothing. So just send input to it: if the process is working normally, whatever you sent will simply be dropped; if it has fallen back to interactive mode, you will read a response.
| 0
| 1
| 0
| 0
|
2012-02-02T09:59:00.000
| 1
| 1.2
| true
| 9,110,305
| 0
| 0
| 0
| 1
|
I start a process with POpen and under normal circuimstances it should just do a job and write things to stdout which I then capture. In exceptional cases the process will fallback to an interactive mode and wait for user input. How can I detect that case and react appropriately?
|
Killing children of children in python with subprocess
| 9,119,121
| 1
| 1
| 484
| 0
|
python,subprocess,kill
|
Not exactly easy, but if your application runs on Linux, you could walk through the /proc filesystem and build a list of all PIDs whose PPID (parent PID) is the same as your subprocess's PID.
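A hedged sketch of that /proc walk (Linux only; the naive stat parse assumes the process name contains no spaces):
import os

def children_of(ppid):
    pids = []
    for entry in os.listdir('/proc'):
        if not entry.isdigit():
            continue
        try:
            with open('/proc/%s/stat' % entry) as f:
                fields = f.read().split()
        except IOError:
            continue  # the process exited while we were scanning
        if int(fields[3]) == ppid:  # field 4 of /proc/<pid>/stat is the PPID
            pids.append(int(entry))
    return pids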
| 0
| 1
| 0
| 0
|
2012-02-02T18:09:00.000
| 2
| 1.2
| true
| 9,117,566
| 1
| 0
| 0
| 1
|
Does python provide a way to find the children of a child process spawned using subprocess, so that I can kill them properly? If not, what is a good way of ensuring that the children of a child are killed?
|
How to update python 2.6 to python 2.7 in ubuntu
| 9,140,731
| 7
| 6
| 28,481
| 0
|
python,linux
|
You can also install the python2.7 package. Then you can select the Python version with a shebang (#!/usr/bin/env python2.7) or even use update-alternatives --config python (as root) to make it the default interpreter. But that can break a lot of system apps...
update: sometimes there's no alternatives entry for python, so you'll need to create one by hand, with something like update-alternatives --install /usr/bin/python python /usr/bin/python2.7 10
update2: nevertheless, if you just need 2.7 for your project, I'd suggest using virtualenv: virtualenv -p python2.7 myproject
| 0
| 1
| 0
| 0
|
2012-02-04T09:29:00.000
| 2
| 1
| false
| 9,139,826
| 1
| 0
| 0
| 1
|
I installed ubuntu 10.04 and it comes with python2.6. How can I upgrade it to 2.7?
|
Does the Linux distribution matter for noarch packages?
| 9,248,837
| 3
| 1
| 431
| 0
|
python,rpm,httplib2,rhel5
|
Yes, it does matter.
noarch marks a package as usable for every CPU architecture, usually when there are no compiled binaries in it.
But the distribution matters in general. A noarch package from another distro may or may not work. It depends, for example, on package names, the directories where the files are put, and so on.
A package is only safe to use if the distribution and the distribution version match.
| 0
| 1
| 0
| 0
|
2012-02-06T16:41:00.000
| 1
| 1.2
| true
| 9,163,681
| 0
| 0
| 0
| 1
|
I need an httplib2 v0.7 RPM for RHEL 5.7, but can't find one. So,
do you by chance know where I can get one?
I see such RPMs, but for other distros (e.g. Mandrake). Since it is a Python-only lib (noarch), does the distro matter? Can I grab any of them and use it?
Python 2.6
|
Only allow a subset of users to access a google app project
| 9,171,764
| 1
| 0
| 122
| 0
|
python,google-app-engine
|
Wouldn't it be easier to write a user model rather than access it with the Google accounts API? That way you could define user groups and access without having to rely on Google.
The Google Accounts API in the example is really for low-level init debugging.
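A hedged sketch of such a datastore-backed user model, using the old db API; the kind, property names, and roles are illustrative:
from google.appengine.api import users
from google.appengine.ext import db

class AuthorizedUser(db.Model):
    email = db.StringProperty(required=True)
    role = db.StringProperty(choices=['student', 'staff', 'admin', 'guest'])

def current_role():
    # Returns the stored role for the logged-in user, or None if unauthorized.
    user = users.get_current_user()
    if user is None:
        return None
    match = AuthorizedUser.all().filter('email =', user.email()).get()
    return match.role if match else None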
| 0
| 1
| 0
| 0
|
2012-02-07T04:38:00.000
| 1
| 1.2
| true
| 9,171,080
| 0
| 0
| 1
| 1
|
We are trying to develop a project on Google App Engine for a senior project, and it's set up in such a way that only a subset of users at our college should be able to log in to it. Our college uses Google domains for email, so that is currently our login requirement (a college email through Google, that is), but how can we limit it not just to people with that domain, but to a subset of pre-approved users within that domain? Also, if it matters, within the subset of users there are four additional types of users, who will have access to different pages, functions and information.
Right now we are just using the Google login APIs, and we are at a loss as to how to micro-manage the user pool. Would we have to create a datastore entry for each user who should be authorized to access the service and run a check on each page to ensure they have the privileges to be there? Or does Google provide some type of service to make this easier that I've missed? Thanks folks!
|
Broken pipe" when running python with cron
| 9,173,292
| 5
| 1
| 1,106
| 0
|
python
|
If your script runs too long, cron will close its stdout/stderr, which are normally redirected to a log file (through cron). Attempting to print after the timeout will give you a broken pipe.
A solution is to use logging, or to print only to your own log files and never to stdout.
Also, cron has a different environment, specified at the top of crontab or the cron.(daily|hourly|...) files. Make sure it is correct, especially if you rely on PATH or HOME as set at login.
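A minimal sketch of the logging suggestion (the log path is a placeholder):
import logging

logging.basicConfig(
    filename='/var/log/myscript.log',  # a file you own, not cron's stdout
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)
logging.info('started')  # keeps working even after cron closes stdout/stderr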
| 0
| 1
| 0
| 1
|
2012-02-07T06:43:00.000
| 1
| 0.761594
| false
| 9,172,046
| 0
| 0
| 0
| 1
|
I have made an extensive script that runs fine when started from the command line or IDLE. But when I try to run it with cron it keeps giving errors:
IOError: [Errno 32] Broken pipe
|
How to prevent user stopping script by CTRL + Z?
| 9,174,968
| 3
| 4
| 15,690
| 0
|
python,background,signals,command-line-interface
|
Roughly speaking, Ctrl+Z from a Unix/Linux terminal in cooked or canonical mode causes the terminal driver to send a "suspend" signal to the foreground application.
So you have two different overall approaches: change the terminal settings, or ignore the signal.
If you put the terminal into "raw" mode then you disable that signal generation. It's also possible to use terminal settings (import tty and read the info about tcsetattr, but also read the man pages for stty and terminfo(5) for more details).
ZelluX has already described the simplest signal handling approach.
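The simplest form of that signal-handling approach, for reference (Unix only, Python 2 style):
import signal

signal.signal(signal.SIGTSTP, signal.SIG_IGN)  # SIGTSTP is what Ctrl+Z sends

while True:
    try:
        line = raw_input('> ')  # Ctrl+Z is now ignored at this prompt
    except EOFError:
        break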
| 0
| 1
| 0
| 0
|
2012-02-07T10:42:00.000
| 4
| 0.148885
| false
| 9,174,799
| 1
| 0
| 0
| 1
|
I want to prevent the user from going back to the shell prompt by pressing CTRL + Z from my python command line interpreter script. How can I do that?
|
Cloud Computing Passing a Function to Server
| 9,187,618
| 1
| 0
| 124
| 0
|
python,cloud
|
One of the most popular systems for processing large amounts of data in a cluster is Hadoop (http://hadoop.apache.org/)
You can write functions in python using the MapReduce programming pattern (google it), upload your program to the cluster, and it will process your data.
Take a look and read up. It's a huge topic - too much for one question. If you have some specific use cases please edit your question with more info.
| 0
| 1
| 0
| 0
|
2012-02-08T03:42:00.000
| 3
| 0.066568
| false
| 9,187,578
| 0
| 0
| 0
| 3
|
Let's say I had a cloud cluster with Python or C or something and I want to execute my function (as a client) in the cloud. How could I possibly pass the function I wrote locally up to the server?
I've seen this done elsewhere, and I not only don't know how to do it, but also want to see what ideas there are for it.
Thanks,
Anthony Hurst
|
Cloud Computing Passing a Function to Server
| 9,187,637
| 0
| 0
| 124
| 0
|
python,cloud
|
Well, if you wrote it locally, you probably won't be executing anything that requires compilation in real time (I assume you're looking for efficiency and will be exchanging a whole series of computations in the cloud), in which case you're looking to send it something like a Ruby file on the fly?
But that doesn't seem very practical, since you really aren't going to get a newly written function coming from the client side sent over and scaled well across the cluster you are sending it to.
That being said, set something up where you can send function parameters in the form of XML, JSON, etc. Use an HTTP connection, or HTTPS if you need it secure, and build it using Hadoop, MPI, etc.
| 0
| 1
| 0
| 0
|
2012-02-08T03:42:00.000
| 3
| 0
| false
| 9,187,578
| 0
| 0
| 0
| 3
|
Let's say I had a cloud cluster with Python or C or something and I want to execute my function (as a client) in the cloud. How could I possibly pass the function I wrote locally up to the server?
I've seen this done elsewhere, and I not only don't know how to do it, but also want to see what ideas there are for it.
Thanks,
Anthony Hurst
|
Cloud Computing Passing a Function to Server
| 9,337,324
| 0
| 0
| 124
| 0
|
python,cloud
|
Code mobility is a largely unexplored field with more questions than answers. Generally you cannot move arbitrary code around at runtime. There are a few programming languages that historically supported code mobility (e.g. Kali Scheme), but it is not something that is ready for mainstream use.
Concerning functions, I am not quite sure what you are asking. There are functional programming languages that support what I would consider "precursors" to code mobility. E.g. in Erlang you can pass function signatures around, and in Cloud Haskell you can send a Closure (that is, a function with collocated data) within certain limitations.
Other approaches that have reached higher significance are to craft plug-ins, that is, precompiled binaries that are loaded at runtime. There might be further possibilities to pass object code around so that not everything has to be compiled and linked at runtime when it is passed from one platform to another.
Generally, what needs to be clear to you is that what you develop is source code. This either needs to be interpreted at runtime (meaning an interpreter must be loaded on both platforms) or compiled into binaries and then started. Then there is still the question of data and state etc. that would need to be shared before code mobility will work as we envision it.
| 0
| 1
| 0
| 0
|
2012-02-08T03:42:00.000
| 3
| 0
| false
| 9,187,578
| 0
| 0
| 0
| 3
|
Let's say I had a cloud cluster with Python or C or something and I want to execute my function (as a client) in the cloud. How could I possibly pass the function I wrote locally up to the server?
I've seen this done elsewhere, and I not only don't know how to do it, but also want to see what ideas there are for it.
Thanks,
Anthony Hurst
|
Concurrent connections to Tornado WebSocket server
| 10,864,121
| 1
| 2
| 1,369
| 0
|
python,asynchronous,websocket,real-time,tornado
|
First, the first message sent to the server must carry some data that identifies the client.
The handler then saves itself into shared data keyed by the client's id. The simple way is to save this in a dict kept as a property of the websocket application.
When a message needs to be sent to particular clients, pick up their handlers from the shared data and call each handler's send method.
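A hedged sketch of that registry idea; the framing conventions (first frame carries the client's id, later frames look like "recipient_id:text") are assumptions, not part of Tornado:
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = {}  # client_id -> handler, shared across connections

class PrivateHandler(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        if not hasattr(self, 'client_id'):
            self.client_id = message  # assumed: first frame is the client's id
            clients[self.client_id] = self
        else:
            recipient_id, _, text = message.partition(':')
            target = clients.get(recipient_id)
            if target is not None:
                target.write_message(text)  # deliver privately, no broadcast

    def on_close(self):
        clients.pop(getattr(self, 'client_id', None), None)

app = tornado.web.Application([(r'/ws', PrivateHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.instance().start()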
| 0
| 1
| 0
| 0
|
2012-02-08T15:41:00.000
| 1
| 0.197375
| false
| 9,196,538
| 0
| 0
| 0
| 1
|
We're trying to build a server that utilizes "tornado.websocket.WebSocketHandler".
In contrast to what is demonstrated in "demos\websocket\chatdemo.py", we want every client to establish its own private session, not to broadcast each message to all connected subscribers.
How to identify individual "waiters" and deliver every message to the other client that is intended to receive it?
|
Getting Credentials File in the boto.cfg for Python
| 33,725,689
| 5
| 4
| 14,015
| 0
|
python,amazon-web-services,boto
|
For anyone looking for information on the now-current boto3: it does not use a separate configuration file, but rather respects the default one created by the AWS CLI when running aws configure (i.e., it will look at ~/.aws/config).
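For the original boto question, the file just needs a [Credentials] section. A hedged sketch that writes ~/.boto by hand (the key values are placeholders):
import os

config = (
    "[Credentials]\n"
    "aws_access_key_id = YOUR_ACCESS_KEY\n"
    "aws_secret_access_key = YOUR_SECRET_KEY\n"
)
with open(os.path.expanduser('~/.boto'), 'w') as f:
    f.write(config)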
| 0
| 1
| 0
| 0
|
2012-02-08T16:27:00.000
| 3
| 0.321513
| false
| 9,197,385
| 0
| 0
| 1
| 1
|
I'm using AWS for the first time and have just installed boto for python. I'm stuck at the step where it advices to:
"You can place this file either at /etc/boto.cfg for system-wide use or in the home directory of the user executing the commands as ~/.boto."
Honestly, I have no idea what to do. First, I can't find the boto.cfg and second I'm not sure which command to execute for the second option.
Also, when I deploy the application to my server, I'm assuming I need to do the same thing there too...
|
How do you change the Python version used in Wing IDE 101?
| 9,199,816
| 1
| 0
| 7,291
| 0
|
python
|
I'm working right now with Wingware Wing IDE and Python 3.2.2, so it is possible, using exactly the method you mentioned. Your problem must be elsewhere. Try updating Python and Wing to their latest versions.
| 0
| 1
| 0
| 0
|
2012-02-08T18:41:00.000
| 1
| 1.2
| true
| 9,199,456
| 1
| 0
| 0
| 1
|
I upgraded Python from version 2.6.6 to version 3.2, however, my WING IDE still uses version 2.6.6. I tried changing the Python executable under Edit --> Configure Python (and linking to the python.exe under C:/Python32), but that didn't seem to work even after a WING restart... Any help is appreciated, thank you.
This is on Windows XP, by the way.
|
getting python 2.4.5 out of my environment variables
| 12,851,137
| 0
| 1
| 104
| 0
|
python
|
Try running WHERE PYTHON (or WHERE PYTHON.EXE) to figure out where the python executable is.
It may be that Python 2.4.5 came as part of another program.
| 0
| 1
| 0
| 0
|
2012-02-09T06:56:00.000
| 1
| 0
| false
| 9,206,627
| 1
| 0
| 0
| 1
|
Major noob question:
When I run python on the Windows command line, it says I have 2.4.5... however, it's not in my PATH environment variable (or anywhere in my environment variables), and Python27 IS in PATH! Does anyone know how I can get Python27 up and running in the Windows cmd?
|
Python: pass the path as an argument
| 9,233,272
| 0
| 0
| 491
| 0
|
python-3.x
|
Try the following. In a file called exe.py put:
import os
import sys
os.popen(sys.argv[1])  # launch whatever program path was passed on the command line
Usage:
C:\Python32>python exe.py notepad.exe
| 0
| 1
| 0
| 0
|
2012-02-09T12:41:00.000
| 2
| 0
| false
| 9,211,015
| 1
| 0
| 0
| 1
|
I need to write a script in Python that runs a process (notepad.exe). The problem is, I don't know the path of the process, so I need to pass the path of the process as an argument.
How does that work?
|
how to install additional python packages with pythonbrew
| 9,245,490
| 0
| 0
| 399
| 0
|
python,pythonbrew
|
Finally, I just ditched pythonbrew and did a multi-install of Python.
Thereafter I used bash and my profile to switch between my Python environments.
| 0
| 1
| 0
| 0
|
2012-02-09T19:49:00.000
| 1
| 1.2
| true
| 9,217,687
| 0
| 0
| 0
| 1
|
I am using pythonbrew to install 2.7.2 on my CentOS box.
It has worked before, but this time on a separate clean system I am running into an issue.
After installing pythonbrew (which I had to --force, since it complained in make test about distutils), I switched to 2.7.2.
When I run easy_install setuptools, it tries to use the system Python (2.5). Since I am not a superuser, this of course failed.
What am I missing here?
|
Tornado Web Production Environment
| 9,826,573
| 0
| 1
| 867
| 0
|
python,security,debian,tornado
|
You can create a www user, just as when deploying a LAMP environment.
Lock the www user into the website directory.
Supervisor is a good solution for running several Tornado processes.
You can use nginx as the front end of your Tornado server.
| 0
| 1
| 0
| 0
|
2012-02-09T23:03:00.000
| 1
| 1.2
| true
| 9,220,302
| 0
| 0
| 0
| 1
|
I just set up a simple virtual server with Debian netinst. Now I want to use the Tornado web server to host a public website. I am quite new to Linux, so I don't know how to set up a secure server environment.
Which users do I need to create?
Which config changes do I need to make to get a secure system?
How should I run my Tornado server (daemon, init.d, ... I don't know these methods...)?
Are there more things I need to take care of when setting up a server from scratch?
Thanks for help :)
|
Isolated debugging session with PyDev
| 9,226,643
| 0
| 0
| 79
| 0
|
python,eclipse,debugging,pydev
|
I don't think this is possible out of the box... you'd need to architect your production server so that this becomes possible (i.e., when you send a given request it should spawn a separate interpreter just to handle your request for debugging purposes, and shut that interpreter down after the debug session ends). You also have to make sure that the debugger actually runs in a separate interpreter, otherwise it could end up tracing things from other people (in the best case it would only make things slower, and in the worst it could have unexpected consequences because of interactions between the debugger and your code).
| 0
| 1
| 0
| 1
|
2012-02-09T23:20:00.000
| 1
| 0
| false
| 9,220,493
| 0
| 0
| 1
| 1
|
I'm using Eclipse + PyDev to work on python web projects.
Sometimes I need to run debug session on production server rather then locally, due to specific environment.
I was wondering if there is a way to run isolated remote debugging session, so the other users don't experience any issues, and code execution doesn't suspend for them?
Thanks.
|
How to configure PyDev to use 32-bit Python Interpreter In Eclipse, on OSX Lion
| 9,282,173
| 2
| 3
| 1,922
| 0
|
python,eclipse,osx-lion,32bit-64bit,pydev
|
The interpreter used in PyDev is computed from sys.executable...
Now, a doubt: if you start a shell with /Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-32 and do 'print sys.executable', which executable appears?
Now, onto a workaround... you can try replacing the places where sys.executable appears in plugins/org.python.pydev/PySrc/interpreterInfo.py to point to '/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-32'
That's the script where it decides which interpreter to actually use... (still, it's strange that sys.executable would point to a different location...)
| 0
| 1
| 0
| 1
|
2012-02-11T03:11:00.000
| 1
| 1.2
| true
| 9,237,508
| 0
| 0
| 0
| 1
|
I am running OSX Lion and have installed python2.7 from python.org (this distribution can run in both 64bit and 32bit mode). I have also installed the wxPython package. I can run python scripts that import wxPython from the Terminal by explicitly using the 32-bit version. I would like to run the same scripts in Eclipse, but cannot. I configure PyDev to use python.org's interpreter, but it defaults to 64-bit (I check this by printing sys.maxint). I cannot figure out how to make PyDev use the 32-bit interpreter.
I have tried configuring the PyDev python interpreter to point to:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-32
but it ends up using:
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
How can I configure PyDev to use the 32-bit python interpreter in Eclipse on OSX Lion?
I appreciate any input regarding this matter. Thank you.
|
Full text search with google app engine using ndb models
| 9,243,456
| 1
| 4
| 2,635
| 0
|
python,google-app-engine,full-text-search,google-cloud-datastore
|
The best solution is to wait until app engine's full text search is released. They are currently in trusted tester phase, so it's coming soon. If you roll your own solution now, you may end up redoing it in a few months.
| 0
| 1
| 0
| 0
|
2012-02-11T10:45:00.000
| 3
| 0.066568
| false
| 9,239,575
| 0
| 0
| 1
| 1
|
I have made a Google App Engine app using the new db module (ndb) to create my models. Now the problem is I want to deploy search over the fields of these models, and I have found two modules to do that: 1. the one officially shipped with Google App Engine (appengine/google/ext/search) and 2. gae text search (http://code.google.com/p/gae-text-search/). Both of these provide a SearchableModel for the old db module's properties. Is there any way I can do full-text search using ndb and Google App Engine 1.6.2? Also, I want to store those search queries in the datastore; how can I achieve that too? I am using Python 2.7 for my development. Thanks in advance.
|
Run python script with reduced permissions
| 9,242,880
| 0
| 1
| 224
| 0
|
python,unix,sandbox
|
Create a new user account "student". Run your students' scripts as "student". Worst case, the script will destroy this special user account. If this happens, just delete user "student" and start over.
| 0
| 1
| 0
| 1
|
2012-02-11T18:13:00.000
| 2
| 0
| false
| 9,242,713
| 0
| 0
| 0
| 1
|
I have to run some student-made code that was submitted as homework. I'm running this on my own computer (Mac OS), and I'm not entirely certain they won't accidentally "rm -rf /" my machine. Is there an easy way to run their Python scripts with reduced permissions, so that e.g. they can access only a certain directory? Is PyPy the way to go?
|
Is this a safe suid/capability wrapper for (Python) scripts?
| 9,243,141
| 0
| 3
| 2,470
| 0
|
python,c,sudo,suid
|
You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly.
| 0
| 1
| 0
| 1
|
2012-02-11T18:46:00.000
| 1
| 0
| false
| 9,242,989
| 0
| 0
| 0
| 1
|
(Note: I’ve Linux in mind, but the problem may apply on other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because during the kernel's interpreter setup to run the script, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the not-owned trusted file, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if: a) it is clear that the caller has no power over the script file -or- like a couple of other operating systems do b) pass the script as /dev/fd/x, referring to the originally kernel-opened trusted file.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear from established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it doesn’t help the kernel to not fall for that just explained “script got replaced” trap (“man sudo” under caveats says so).
Proposed wrapper
actually, I want a little program, which generates the wrapper
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py has already the suid bit and capabilites set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
compile output into: default: basename script.py .py, or argument -o
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no way to pass on capabilities without suid-root otherwise)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if compiled-in target script is compiled-in with relative path, determine self’s location via /proc/self/exe
combine own path with relative path to the script to find it
check if the target script's owner, group, permissions, caps, suid are still like the original (compiled-in) [this is the only non-necessary safety check I want to include: otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similar to how the kernel does, but use the script-path we know, and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed, it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script or be aware that there is a lot of code you run as side-effect which does care about some of these variables
|
Is Celery as efficient on a local system as python multiprocessing is?
| 9,246,968
| 6
| 26
| 8,882
| 0
|
python,parallel-processing,multiprocessing,celery
|
I have actually never used Celery, but I have used multiprocessing.
Celery seems to have several ways to pass messages (tasks) around, including ways that you should be able to run workers on different machines. So a downside might be that message passing could be slower than with multiprocessing, but on the other hand you could spread the load to other machines.
You are right that multiprocessing can only run on one machine. But on the other hand, communication between the processes can be very fast, for example by using shared memory. Also if you need to process very large amounts of data, you could easily read and write data from and to the local disk, and just pass filenames between the processes.
I don't know how well Celery would deal with task failures. For example, a task might never finish running, or might crash, or you might want to have the ability to kill a task if it does not finish within a certain time limit. I don't know how hard it would be to add support for that if it is not there.
multiprocessing does not come with fault tolerance out of the box, but you can build that yourself without too much trouble.
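For reference, the single-machine multiprocessing route is about this small (crunch stands in for the real CPU-bound function):
from multiprocessing import Pool

def crunch(item):
    return item * item  # placeholder for the real CPU-heavy work

if __name__ == '__main__':
    pool = Pool()  # defaults to one worker process per CPU core
    results = pool.map(crunch, range(1000))
    pool.close()
    pool.join()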
| 0
| 1
| 0
| 0
|
2012-02-12T01:32:00.000
| 2
| 1.2
| true
| 9,245,656
| 1
| 0
| 0
| 1
|
I'm having a bit of trouble deciding whether to use Python multiprocessing, Celery, or pp for my application.
My app is very CPU-heavy but currently uses only one CPU, so I need to spread the work across all available CPUs (which caused me to look at Python's multiprocessing library), but I read that this library doesn't scale to other machines if required. Right now I'm not sure if I'll need more than one server to run my code, but I'm thinking of running Celery locally; then scaling would only require adding new servers instead of refactoring the code (as it would if I used multiprocessing).
My question: is this logic correct? And is there any (performance) downside to using Celery locally (if it turns out a single server with multiple cores can complete my task)? Or is it more advisable to use multiprocessing and grow out of it into something else later?
Thanks!
p.s. this is for a personal learning project but I would maybe one day like to work as a developer in a firm and want to learn how professionals do it.
|
How to make my program add a cron job automatically with GAE?
| 9,257,522
| 3
| 2
| 296
| 0
|
python,google-app-engine,cron
|
You could tell the bot to add the new schedule to your datastore instead.
Then create a single "master" cron job with a one-minute schedule that checks the schedules you have stored in the datastore. The cron job then determines whether, at the current time, the handler for an associated schedule needs to be invoked or not.
If it does, the master cron job invokes the stored job using the Task Queue API.
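A hedged sketch of that master-cron pattern (the model, URL, and handler wiring are illustrative):
import datetime
import webapp2
from google.appengine.api import taskqueue
from google.appengine.ext import ndb

class Schedule(ndb.Model):
    url = ndb.StringProperty()
    next_run = ndb.DateTimeProperty()

class MasterCron(webapp2.RequestHandler):
    def get(self):
        now = datetime.datetime.utcnow()
        for job in Schedule.query(Schedule.next_run <= now):
            taskqueue.add(url=job.url)  # hand the due job to the Task Queue

app = webapp2.WSGIApplication([('/cron/master', MasterCron)])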
| 0
| 1
| 0
| 0
|
2012-02-13T04:52:00.000
| 2
| 1.2
| true
| 9,255,761
| 0
| 0
| 0
| 2
|
I need to make a bot that can automatically add a cron job for itself, but I don't think I can access the cron.yaml file on the GAE server. What can I do about this?
|
How to make my program add a cron job automatically with GAE?
| 9,256,920
| 0
| 2
| 296
| 0
|
python,google-app-engine,cron
|
It's true that a lot of dev environments don't give you access to the cron.yaml files; however, you can run a Python script locally that communicates with your deployed program, edits your local copy of cron.yaml, and pushes up the changes.
| 0
| 1
| 0
| 0
|
2012-02-13T04:52:00.000
| 2
| 0
| false
| 9,255,761
| 0
| 0
| 0
| 2
|
I need to make a bot that can automatically add a cron job for itself, but I don't think I can access the cron.yaml file on the GAE server. What can I do about this?
|
Downloading many large files through Amazon EC2 Hadoop
| 9,377,499
| 0
| 0
| 157
| 0
|
python,hadoop,amazon-ec2
|
It's possible to download data onto Hadoop on EC2. Hadoop has a distributed file system (HDFS) which takes care of placing blocks of data onto the slaves, and also honors the replication factor specified in the configuration.
The slaves in EC2 have different IP addresses.
| 0
| 1
| 1
| 0
|
2012-02-13T11:16:00.000
| 1
| 1.2
| true
| 9,259,531
| 0
| 0
| 0
| 1
|
I am thinking of launching a Hadoop cluster on Amazon EC2 to download a few tens of thousands of files and later do some processing on them, but before putting too much work into it I would like to know whether anyone more experienced with Hadoop than me thinks it is possible. I have some doubts about being able to download files on the Hadoop slaves.
If you think this is possible, can I expect each slave running on Amazon EC2 to have a different IP address?
I would like to use Python to do most of the job (e.g. the urllib2 module for downloading) and as little Java as possible.
|
Running Matlab M-function from Python
| 9,265,849
| 0
| 2
| 1,156
| 0
|
python,matlab,operating-system,subprocess,matlab-deployment
|
Possible issues:
You did not install MCR.
Not running as administrator
Running from network drive
| 0
| 1
| 0
| 0
|
2012-02-13T18:06:00.000
| 2
| 0
| false
| 9,265,551
| 0
| 0
| 0
| 1
|
I want to run my MATLAB function (test.m) from Python. I converted the function to an exe file, test.exe, using MATLAB's mcc -m command, and I can run test.exe from the Windows command prompt.
On the other hand, when I run exe files using os.system and subprocess.call from Python, it works well:
subprocess.call('C:\Program Files\DVD Maker\DVDMaker.exe',shell=True)
(My DVDMaker opens)
But when I run
subprocess.call('C:\...\test.exe',shell=True)
I receive this:
The filename, directory name or volume label syntax is incorrect.
|
Similarities between tcl and Python
| 9,268,696
| 1
| 2
| 3,587
| 0
|
python,tcl
|
Tcl is not really very similar to Python. It has some surface similarities I guess, as it is a mostly procedural language, but its philosophy is rather different. Whereas Python takes the approach that everything is an object, Tcl's approach is sometimes described as "everything is (or can be) a string." There are some interesting things to learn from Tcl deriving from this approach, but it's one of the lesser-used languages, so maybe hold off until you have a tangible reason to use it. In any case, you have two very different languages on your plate already; no need (IMHO) to add a third just yet.
| 0
| 1
| 0
| 1
|
2012-02-13T21:53:00.000
| 2
| 1.2
| true
| 9,268,611
| 1
| 0
| 0
| 2
|
Right now, I'm learning Python and JavaScript, and someone recently suggested that I learn Tcl. Being a relative noob to programming, I have no idea what Tcl is, or whether it is similar to Python. As I love Python, I'm wondering how similar the two are so I can see if I want to start on it.
|
Similarities between tcl and Python
| 9,268,859
| 5
| 2
| 3,587
| 0
|
python,tcl
|
While this question will probably be closed as not constructive in a short time, I'll leave my answer here anyway.
Joe, you appear to be greatly confused about what should drive a programmer to learn another language: one should have a natural desire to learn different languages because only this can widen one's idea of how problems can be solved by programming (programming is about solving problems). Knowing N similar programming languages basically gives you nothing besides the immediate ability to use those languages. It adds nothing to your mental toolbox.
I suggest you at least look at functional languages (everyone's excited about them these days anyway), say, Haskell. Also maybe look at Lisp or a similar thing.
Tcl is also quite interesting in its concepts (almost no syntax, everything is a string, uniformity of commands, etc.). Python is pretty boring in this respect: it certainly enables a programmer to do certain things quickly and efficiently, but it does not contain much to satisfy a prying mind.
So my opinion is that your premises are wrong. I hope I was able to explain why.
| 0
| 1
| 0
| 1
|
2012-02-13T21:53:00.000
| 2
| 0.462117
| false
| 9,268,611
| 1
| 0
| 0
| 2
|
Right now, I'm learning Python and JavaScript, and someone recently suggested that I learn Tcl. Being a relative noob to programming, I have no idea what Tcl is, or whether it is similar to Python. As I love Python, I'm wondering how similar the two are so I can see if I want to start on it.
|
Run a python script in windows
| 9,282,142
| 1
| 0
| 2,860
| 0
|
python,windows
|
Assuming Python is installed, it is usually placed in a folder prefixed with "Python" and the major/minor version. E.g. C:\Python26
| 0
| 1
| 0
| 0
|
2012-02-14T18:19:00.000
| 2
| 0.099668
| false
| 9,282,100
| 1
| 0
| 0
| 1
|
I have always used a Mac to write and run Python scripts. However, I have a bunch of files on a PC that I need to run a script on. I have never really used a PC and don't know how to run the script.
If I go to the command prompt on the PC and type in python, nothing happens. How do I find where the Python executable is in order to get into the Python prompt? Also, once I am in the prompt, is importing modules the same as on a Unix system?
|
Out of home folder .pyc files?
| 9,290,219
| 1
| 3
| 738
| 0
|
python,linux
|
Regarding the script in /usr/bin: if you execute your script as a user that doesn't have permission to write in /usr/bin, then the .pyc files won't be created and, as far as I know, there isn't any other caching mechanism.
This means that your file will be byte-compiled by the interpreter every time, so yes, there will be a performance loss. However, that loss is probably not noticeable. Note that when a source file is updated, the compiled file is updated automatically without the user noticing it (at least most of the time).
What I've seen as common practice in Ubuntu is to use small scripts in /usr/bin, without even the .py extension. Those scripts are byte-compiled very fast, so you don't need to worry about that. They just import a library and call some kind of library.main.Application().run() method, and that's all.
Note that the library is installed in a different path and that all library files are byte-compiled for the different Python versions. If that's not the case in your package, then you have to review your setup.py and your debian files, since that's not the way it should be.
| 0
| 1
| 0
| 1
|
2012-02-15T08:27:00.000
| 2
| 0.099668
| false
| 9,290,018
| 1
| 0
| 0
| 2
|
If I place my project in /usr/bin/,
will my Python interpreter generate bytecode? If so, where does it put it, given that the files do not have write permission in that folder? Does it cache them in a temp file?
If not, is there a performance loss from putting the project there?
I have packaged this up as a .deb file that is installed from my Ubuntu PPA, so the obvious place to install the project is in /usr/bin/,
but if I don't generate bytecode by putting it there, what should I do? Can I give the project write permission if it installs on another person's machine? That would seem to be a security risk.
There are surely lots of Python projects installed in Ubuntu (and obviously other distros); how do they deal with this?
Thanks
|
Out of home folder .pyc files?
| 9,290,322
| 1
| 3
| 738
| 0
|
python,linux
|
.pyc/.pyo files are not generated for scripts that are run directly. Python modules placed where Python modules are normally expected and packaged up have the .pyc/.pyo files generated at either build time or install time, and so aren't the end user's problem.
| 0
| 1
| 0
| 1
|
2012-02-15T08:27:00.000
| 2
| 0.099668
| false
| 9,290,018
| 1
| 0
| 0
| 2
|
If I place my project in /usr/bin/,
will my Python interpreter generate bytecode? If so, where does it put it, given that the files do not have write permission in that folder? Does it cache them in a temp file?
If not, is there a performance loss from putting the project there?
I have packaged this up as a .deb file that is installed from my Ubuntu PPA, so the obvious place to install the project is in /usr/bin/,
but if I don't generate bytecode by putting it there, what should I do? Can I give the project write permission if it installs on another person's machine? That would seem to be a security risk.
There are surely lots of Python projects installed in Ubuntu (and obviously other distros); how do they deal with this?
Thanks
|
How to ensure that send method of the protocol is thread safe
| 9,301,329
| 1
| 0
| 168
| 0
|
python,twisted
|
Just use callFromThread directly as your queue. The reactor is already synchronizing on and monitoring it. Anywhere you want to call foo.sendString() from a non-reactor thread, just do reactor.callFromThread(foo.sendString). Building additional infrastructure to do this (your own custom synchronized queues, for example) is just additional code that might break – as you have already discovered.
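For illustration, the whole thread-side helper reduces to one call (proto stands for a connected protocol instance):
import threading
from twisted.internet import reactor

def worker(proto):
    for i in range(5):
        # Hand the send over to the reactor thread; this is the thread-safe entry point.
        reactor.callFromThread(proto.sendString, 'message %d' % i)

def start_worker(proto):
    threading.Thread(target=worker, args=(proto,)).start()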
| 0
| 1
| 0
| 0
|
2012-02-15T19:24:00.000
| 1
| 1.2
| true
| 9,299,872
| 0
| 0
| 0
| 1
|
I'm working on a TCP client-server application using the IntNReceiver protocol. The server accepts multiple TCP connections from clients. I would like to let other threads use the protocol's sendString method, on both the client and the server. I tried to use a synchronized queue, monitored in a separate thread, and reactor.callFromThread() to call sendString from there. This seems to work, but there is a weird delay of about 20 seconds before sendString actually sends the string. It does not block, and returns immediately. I ran strace and the send() system call is definitely delayed. What is the proper way to do this kind of thing with Twisted?
|
Finding out if the current filesystem supports symbolic links
| 9,319,169
| 4
| 1
| 168
| 0
|
python,windows,linux
|
What you probably should do is to just try to make the link and if it fails, copy.
It'll give you the advantage of automatically supporting all file systems with soft links, without having to do advanced detection or keep an updated list of supported file systems.
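A minimal sketch of that try-then-fall-back approach, using a move as the fallback since that is what the question wants:
import os
import shutil

def link_or_move(src, dst):
    try:
        os.symlink(src, dst)  # succeeds on filesystems that support symlinks
    except (OSError, AttributeError, NotImplementedError):
        shutil.move(src, dst)  # fall back to moving the file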
| 0
| 1
| 0
| 1
|
2012-02-16T21:10:00.000
| 2
| 0.379949
| false
| 9,319,122
| 0
| 0
| 0
| 1
|
I am making a Python script that, in the case of an EXT filesystem, will create symbolic links to some files, and otherwise will move them.
How can I find out the type of the filesystem a directory is on?
|
How to run a script that can only write to STDOUT and read from STDIN?
| 9,323,660
| 3
| 2
| 303
| 0
|
python,perl,scripting,lua,cgi
|
One idea that comes to my mind is to create a chroot'ed env for each of your users and run each user's script in that chroot'ed env.
| 0
| 1
| 0
| 1
|
2012-02-17T02:19:00.000
| 5
| 0.119427
| false
| 9,322,042
| 0
| 0
| 0
| 1
|
I want my users to write code and run it inside a controlled environment, for example Lua or Perl. My site runs on Perl CGIs.
Is there a way to run an isolated Perl/Lua/Python/etc. script without access to the filesystem that returns data via stdout to be saved in a database?
What I need is a secure environment; how do I apply the restrictions? Thanks in advance.
FYI: I want to achieve something like ideone.com or codepad.org.
I've been reading about sandboxes in Lua and inline code, but they don't allow me to limit resources and time, just operations. I think I'll have a virtual machine and run the code in there; any tips?
|
Create and maintain several ssh sessions in Python
| 10,684,894
| 0
| 4
| 3,383
| 0
|
python,ssh,subprocess
|
Try using pexpect module. It allows opening and maintaining ssh sessions, which you can reuse to send in multiple commands. You can send in any commands and expect particular outputs based on which you can perform other logical operations.
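A rough sketch of one reusable session (host, password and prompt pattern are placeholders for your environment):

    import pexpect

    child = pexpect.spawn('ssh user@example.com')
    child.expect('password:')
    child.sendline('secret')
    child.expect(r'\$ ')            # wait for the shell prompt
    for cmd in ('uptime', 'df -h'):
        child.sendline(cmd)
        child.expect(r'\$ ')        # same session, no reconnect
        print(child.before)         # output of the command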
| 0
| 1
| 0
| 0
|
2012-02-17T03:18:00.000
| 4
| 0
| false
| 9,322,401
| 0
| 0
| 0
| 2
|
Once my program has been launched, it opens any number of ssh sessions (user defined) and runs specific commands on the servers indefinitely (while true loop) or until the user quits. For efficiency reasons I want to create each session only once, then be able to run the commands until the user quits.
How can I do this in python? I ran across a cool method in another post which used subprocess to run a command and capture its STDOUT. How can I first initiate the session, then run looped commands?
Any links to process-like stuff in Python would also be appreciated. This is my first python application.
|
Create and maintain several ssh sessions in Python
| 9,323,101
| -1
| 4
| 3,383
| 0
|
python,ssh,subprocess
|
Tried mixing it up with multithreading?
| 0
| 1
| 0
| 0
|
2012-02-17T03:18:00.000
| 4
| -0.049958
| false
| 9,322,401
| 0
| 0
| 0
| 2
|
Once my program has been launched, it opens any number of ssh sessions (user defined) and runs specific commands on the servers indefinitely (while true loop) or until the user quits. For efficiency reasons I want to create each session only once, then be able to run the commands until the user quits.
How can I do this in python? I ran across a cool method in another post which used subprocess to run a command and capture its STDOUT. How can I first initiate the session, then run looped commands?
Any links to process-like stuff in Python would also be appreciated. This is my first python application.
|
Set encoding in Python 3 CGI scripts
| 9,322,497
| 1
| 23
| 7,021
| 0
|
python,unicode,python-3.x,cgi
|
Your best bet is to explicitly encode your Unicode strings into bytes using the encoding you want to use. Relying on the implicit conversion will lead to troubles like this.
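For example (a sketch for Python 3; the string is arbitrary):

    import sys

    s = 'caf\u00e9'                             # a str (Unicode) value
    sys.stdout.buffer.write(s.encode('utf-8'))  # bytes out, regardless of locale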
BTW: If the error is really UnicodeDecodeError, then it isn't happening on output, it's trying to decode a byte stream into Unicode, which would happen somewhere else.
| 0
| 1
| 0
| 1
|
2012-02-17T03:18:00.000
| 7
| 0.028564
| false
| 9,322,410
| 0
| 0
| 0
| 3
|
When writing a Python 3.1 CGI script, I run into horrible UnicodeDecodeErrors. However, when running the script on the command line, everything works.
It seems that open() and print() use the return value of locale.getpreferredencoding() to know what encoding to use by default. When running on the command line, that value is 'UTF-8', as it should be. But when running the script through a browser, the encoding mysteriously gets redefined to 'ANSI_X3.4-1968', which appears to be a just a fancy name for plain ASCII.
I now need to know how to make the cgi script run with 'utf-8' as the default encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux. The system-wide locale is en_GB.utf-8.
|
Set encoding in Python 3 CGI scripts
| 9,337,200
| 4
| 23
| 7,021
| 0
|
python,unicode,python-3.x,cgi
|
You shouldn't read your IO streams as strings for CGI/WSGI; they aren't Unicode strings, they're explicitly byte sequences.
(Consider that Content-Length is measured in bytes and not characters; imagine trying to read a multipart/form-data binary file upload submission crunched into UTF-8-decoded strings, or return a binary file download...)
So instead use sys.stdin.buffer and sys.stdout.buffer to get the raw byte streams for stdio, and read/write binary with them. It is up to the form-reading layer to convert those bytes into Unicode string parameters where appropriate using whichever encoding your web page has.
Unfortunately the standard library CGI and WSGI interfaces don't get this right in Python 3.1: the relevant modules were crudely converted from the Python 2 originals using 2to3 and consequently there are a number of bugs that will end up in UnicodeError.
The first version of Python 3 that is usable for web applications is 3.2. Using 3.0/3.1 is pretty much a waste of time. It took a lamentably long time to get this sorted out and PEP3333 passed.
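A sketch of what that looks like in a bare CGI response (the helper name is illustrative):

    import sys

    def send_page(body_text):
        # Encode once, explicitly, and write bytes to the raw stream.
        body = body_text.encode('utf-8')
        out = sys.stdout.buffer
        out.write(b'Content-Type: text/html; charset=utf-8\r\n')
        out.write(b'Content-Length: ' + str(len(body)).encode('ascii') + b'\r\n')
        out.write(b'\r\n')
        out.write(body)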
| 0
| 1
| 0
| 1
|
2012-02-17T03:18:00.000
| 7
| 0.113791
| false
| 9,322,410
| 0
| 0
| 0
| 3
|
When writing a Python 3.1 CGI script, I run into horrible UnicodeDecodeErrors. However, when running the script on the command line, everything works.
It seems that open() and print() use the return value of locale.getpreferredencoding() to know what encoding to use by default. When running on the command line, that value is 'UTF-8', as it should be. But when running the script through a browser, the encoding mysteriously gets redefined to 'ANSI_X3.4-1968', which appears to be a just a fancy name for plain ASCII.
I now need to know how to make the cgi script run with 'utf-8' as the default encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux. The system-wide locale is en_GB.utf-8.
|
Set encoding in Python 3 CGI scripts
| 44,271,683
| 3
| 23
| 7,021
| 0
|
python,unicode,python-3.x,cgi
|
Summarizing @cercatrova 's answer:
Add PassEnv LANG line to the end of your /etc/apache2/apache2.conf or .htaccess.
Uncomment . /etc/default/locale line in /etc/apache2/envvars.
Make sure line similar to LANG="en_US.UTF-8" is present in /etc/default/locale.
sudo service apache2 restart
| 0
| 1
| 0
| 1
|
2012-02-17T03:18:00.000
| 7
| 0.085505
| false
| 9,322,410
| 0
| 0
| 0
| 3
|
When writing a Python 3.1 CGI script, I run into horrible UnicodeDecodeErrors. However, when running the script on the command line, everything works.
It seems that open() and print() use the return value of locale.getpreferredencoding() to know what encoding to use by default. When running on the command line, that value is 'UTF-8', as it should be. But when running the script through a browser, the encoding mysteriously gets redefined to 'ANSI_X3.4-1968', which appears to be a just a fancy name for plain ASCII.
I now need to know how to make the cgi script run with 'utf-8' as the default encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux. The system-wide locale is en_GB.utf-8.
|
Running interactive python script from emacs
| 9,325,028
| 4
| 8
| 10,228
| 0
|
python,emacs
|
I don't know about canonical, but if I needed to interact with a script I'd do M-x shell RET and run the script from there.
There's also M-x terminal-emulator for more serious terminal emulation, not just shell stuff.
| 0
| 1
| 0
| 1
|
2012-02-17T08:04:00.000
| 6
| 0.132549
| false
| 9,324,802
| 0
| 0
| 0
| 1
|
I am a fairly proficient vim user, but friends of mine told me so much good stuff about emacs that I decided to give it a try -- especially after finding about the aptly-named evil mode...
Anyways, I am currently working on a python script that requires user input (a subclass of cmd.Cmd). In vim, if I wanted to try it, I could simply do :!python % and then could interact with my script, until it quits. In emacs, I tried M-! python script.py, which would indeed run the script in a separate buffer, but then RETURNs seem not to be sent back to the script but are caught by the emacs buffer instead. I also tried to have a look at python-mode's C-c C-c, but this runs the script in some temporary directory, whereas I just want to run it in (pwd).
So, is there any canonical way of doing that?
|
What are the best practices for creating Python Distributions(eggs) on(and for) Multiple Operating Systems
| 9,369,738
| 5
| 7
| 1,167
| 0
|
python,setuptools,egg,software-packaging
|
You can use a single package if:
The package does not use functions/classes that are not available on all your target platforms (see e.g. chapters 36-39 of the Python standard library reference for version 2.7.2 for stuff that you shouldn't use in that case)
You are not using extensions written in C/C++ that need to be compiled for every platform.
It is generally a good idea to stay away from OS-specific functions that are not available on all your target platforms. The standard library is quite well documented in that respect.
| 0
| 1
| 0
| 0
|
2012-02-17T11:40:00.000
| 4
| 1.2
| true
| 9,327,602
| 1
| 0
| 0
| 2
|
Ours is a python shop. We have different python packages developed inhouse and will be deployed onto customers' environments(machines).
This is how our development and release cycle happens.
Once developers complete "testing" of a package, a distribution(egg file) of the package is prepared and pushed to a central archiving place. When we want to deploy our software to Customers, the same distributions(egg files) will be downloaded and installed in their environment.
Assuming the "testing" happens on multiple operating systems(to check the compatibility of the API across platforms), what is the best practice to prepare distributions and be pushed to the central archiving place.
Is it best to have operating system specific eggs on the archiving server(like, samplepkg-1.0.0.win32.egg and samplepkg-1.0.0.linux.egg ? Not sure how they can be prepared in this way using setuptools. ) Or Have a single egg because API remains same across platforms ? Any other practice which is followed by the community ?
|
What are the best practices for creating Python Distributions(eggs) on(and for) Multiple Operating Systems
| 9,421,037
| 1
| 7
| 1,167
| 0
|
python,setuptools,egg,software-packaging
|
Platform-specific eggs are only intended for distributing packages containing C code; otherwise the egg files themselves are platform-independent and you only need to distribute one, platform-independent egg.
If you are using automated installation tools or pkg_resources' runtime APIs for finding libraries and plugins, you can actually just dump all the eggs in a single directory, and the installation tool or runtime API will pick which egg to install or import from.
tl;dr version: the way setuptools builds eggs is the way they should be distributed; if you try to make a cross-platform egg into a platform-dependent one or vice versa, you're likely to experience some pain. ;-)
| 0
| 1
| 0
| 0
|
2012-02-17T11:40:00.000
| 4
| 0.049958
| false
| 9,327,602
| 1
| 0
| 0
| 2
|
Ours is a python shop. We have different python packages developed inhouse and will be deployed onto customers' environments(machines).
This is how our development and release cycle happens.
Once developers complete "testing" of a package, a distribution(egg file) of the package is prepared and pushed to a central archiving place. When we want to deploy our software to Customers, the same distributions(egg files) will be downloaded and installed in their environment.
Assuming the "testing" happens on multiple operating systems(to check the compatibility of the API across platforms), what is the best practice to prepare distributions and be pushed to the central archiving place.
Is it best to have operating system specific eggs on the archiving server(like, samplepkg-1.0.0.win32.egg and samplepkg-1.0.0.linux.egg ? Not sure how they can be prepared in this way using setuptools. ) Or Have a single egg because API remains same across platforms ? Any other practice which is followed by the community ?
|
Run python script without terminating at the end
| 9,344,292
| 6
| 3
| 1,094
| 0
|
python
|
As stated on python's manual:
-i switch
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script.
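For example, python -i myscript.py runs the script and then drops into the interactive >>> prompt with the script's names still defined.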
| 0
| 1
| 0
| 0
|
2012-02-18T18:54:00.000
| 3
| 1.2
| true
| 9,344,144
| 1
| 0
| 0
| 1
|
Is there a command line switch to carry out the script specified without terminating the process at the end?
Windows' cmd.exe for example has the /K switch.
|
How to create a Mac .pkg from python which supports multiple versions of OSX
| 9,356,034
| 0
| 0
| 514
| 0
|
python,macos,package,packagemaker,mpkg
|
You don't need any scripts - you can do that with PackageMaker alone - the PackageMaker GUI allows you to tag packages as installable (enabled) and selected based on conditions such as OS version (in Choices under Requirements)
| 0
| 1
| 0
| 0
|
2012-02-19T15:40:00.000
| 1
| 0
| false
| 9,350,571
| 1
| 0
| 0
| 1
|
I am trying to create a .pkg installer for a python application (specifically Spyderlib). This is not an app but a python package and a command line executable that have to be copied to specific locations.
However, the location depends on the version of OSX. I'm only targeting 10.6 and 10.7 but they come with different versions of python (2.6 and 2.7) so the install path is different.
Using bdist_mpkg I was able to create a Mac mpkg in 10.7 which installs correctly and can be edited with PackageMaker. Now I want to know how I can edit this package so that it detects the version of OSX and sets the install target path correctly.
I understand that I can use shell scripts to do pre and post-installation jobs, however I haven't been able to find examples of how to do this and how a script could but used to set the install target for the files in the mpkg.
Alternatively, it may be that this is possible to do directly from PackageMaker, but i was not able to see anything to this effect (and the documentation seems quite superficial).
So I would like to know how this could be done. It would also be really helpful to see some examples for other software packages.
|
Deploying a Python Script on a Server (CentOS): Where to start?
| 9,357,006
| 1
| 1
| 865
| 1
|
python,django,centos
|
Copy the script to the server.
Test the script manually on the server.
Set cron ("crontab -e") to a value that will run it soon.
Once you've debugged any issues, set cron to the appropriate time.
| 0
| 1
| 0
| 0
|
2012-02-20T06:14:00.000
| 3
| 0.066568
| false
| 9,356,926
| 0
| 0
| 1
| 1
|
I'm new to Python (relatively new to programming in general) and I have created a small python script that scrapes some data off of a site once a week and stores it to a local database (I'm trying to do some statistical analysis on downloaded music). I've tested it on my Mac and would like to put it up onto my server (VPS with WiredTree running CentOS 5), but I have no idea where to start.
I tried Googling for it, but apparently I'm using the wrong terms as "deploying" means to create an executable file. The only thing that seems to make sense is to set it up inside Django, but I think that might be overkill. I don't know...
EDIT: More clarity
|
Debug crashing C Library used in Python project
| 9,362,176
| 1
| 0
| 617
| 0
|
python,c,debugging,libusb
|
Enable core dumps (ulimit -Sc unlimited) and crash the program to produce a core file. Examine the core file with gdb to learn more about the conditions leading up to the crash. Inspect the functions and local variables on the call stack for clues.
Or run the program under gdb to begin with and inspect the live process after it crashes and gdb intercepts the signal (SIGSEGV, SIGBUS, whatever).
Both of these approaches will be easier if you make sure all relevant native code (Python, libUSB, etc) have debugging symbols available.
Isolating the problem in a program which is as small as you can manage to make it, as Tio suggested, will also make this process easier.
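For example (a sketch of the core-dump route): run ulimit -Sc unlimited in the shell, reproduce the crash to get a core file, then open it with gdb python core and use the bt command to print the backtrace of the crash.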
PS: It would be just fine if I could see the prints I put in C in the bash where I run the Python script
You didn't mention anything about adding prints "in C" elsewhere in your question. Did you modify libUSB to add debugging prints? If so, did you rebuild it? What steps did you take to ensure that your new build would be used instead of the previously available libUSB? You may need to adjust your dylib-related environment variables to get the dynamic linker to prefer your version over the system version. If you did something else, explain what. :)
| 0
| 1
| 0
| 0
|
2012-02-20T13:25:00.000
| 2
| 0.099668
| false
| 9,361,891
| 0
| 0
| 0
| 1
|
complete Python noob here (and rusty in C).
I am using a Mac with Lion OS. Trying to use NFCpy, which uses USBpy, which uses libUSB. libUSB is crashing due to a null pointer but I have no idea how to debug that since there are so many parts involved.
Right now I am using xcode to view the code highlighted but I run everything from bash. I can switch to Windows or Linux if this is going to be somehow easier with a different environment.
Any suggestions on how to debug this would be much appreciated ;-)
PS: It would be just fine if I could see the prints I put in C in the bash where I run the Python script
|
AppEngine python send email api is marked as SPAM by Gmail email reader
| 9,374,887
| 1
| 3
| 691
| 0
|
python,google-app-engine,email,gmail,spam-prevention
|
My guess would be that the content of the mail looks "spammy" for Google, but you can do some things that might help you.
Since this is a confirmation mail, I would suggest you add another admin for your app with an email like do-not-reply@domain.com and use that one for the confirmation emails. Add more text to the body and include unsubscribe links as well, so your users have the option of not receiving more email from your app. Maybe you won't like the last part, but you have to give your users that option so this email won't be marked as SPAM.
| 0
| 1
| 0
| 1
|
2012-02-20T19:16:00.000
| 1
| 0.197375
| false
| 9,367,049
| 0
| 0
| 1
| 1
|
We send email using appengine's python send_mail api.
Is there any way to tell why an email that is sent to only one recipient would be marked as SPAM. This seems to only happen when appengine's python send_mail api sends to Gmail.
In our case we are sending email as one of the administrators of our appengine application.
And the email is a confirmation letter for an order that the user just purchased, so it is definitely NOT SPAM.
Can anyone help with this?
It seems odd because it is only GMail users that seem to be reporting this issue and we are sending from appengine (All Google servers) I love Google but sometimes Google is stricter to itself than to others :)
I've added the spf TXT record to DNS such as "v=spf1 include:_spf.google.com ~all"
(I'm hoping that will help)
I've tried to add a List-Unsubscribe header to the email but it seems app engine python send mail does not support this header.
Thanks,
Ralph
|
So I would like to start making my own terminal based game, is this feasible?
| 9,374,829
| 0
| 1
| 5,953
| 0
|
python,terminal,cross-platform
|
Getting terminal size in a cross-platform and reliable way is far from trivial (see termcap, curses and such).
| 0
| 1
| 0
| 0
|
2012-02-21T09:10:00.000
| 3
| 0
| false
| 9,374,781
| 1
| 0
| 0
| 1
|
Some basic requirements and desires:
Windows/Mac/Linux
Run as "full screen" within the terminal window, resizes as needed.
Network multi player (loose requirement, although definitely would like to)
Basic sounds
Would like to write in Python since I'm learning that.
Distributable as a single package, as in no run time dependencies that aren't built in or fairly commonplace.
Am I proposing something impossible?
Is Python up to the task?
Will I have trouble with Windows terminal?
I'm not necessarily hellbent on using Python, however I've been learning it for other purposes, so I'd like to "keep it in the family" if at all possible.
Thanks for any insight.
|
IDLE crash when opening on Mac OS X
| 21,723,895
| 1
| 3
| 7,506
| 0
|
python,macos,crash,python-idle
|
I had the same issue. I run OSX 10.8.5, Python 3.3.3 and IDLE 3.3.3, and reinstalling Python wasn't a solution.
I solved the problem by removing the ~/.idlerc directory. My problem showed up for the first time when I tried to change some Preferences (IDLE->Preferences->General->Startup Preferences->At Startup Open Edit Window), so I suppose that's why resetting my preferences by deleting the ~/.idlerc folder was the solution.
| 0
| 1
| 0
| 0
|
2012-02-21T19:40:00.000
| 5
| 0.039979
| false
| 9,384,021
| 1
| 0
| 0
| 2
|
I recently attempted to install python 3.2 along with IDLE 3 on my macbook pro. I successfully installed python 3.2 (as in, I can run it from the terminal), but when I attempted to install IDLE 3.2 I must have done something wrong because now both IDLE 2.7 and IDLE 3.2 crash immediately upon opening with the message "Python quit unexpectedly", no matter whether I open it through the terminal or through finder. Does anyone know how to fix this? I have installed the correct ActiveTCL package (and reinstalled) and still nothing. I have attempted to reinstall python 3.2 and IDLE 3 but I am not sure whether I did it correctly. Through a good amount of googling I found some people say that it was most likely a path issue but all of the solutions I found were using Windows so I am not sure how to apply that to my mac.
|
IDLE crash when opening on Mac OS X
| 26,536,767
| 0
| 3
| 7,506
| 0
|
python,macos,crash,python-idle
|
I had the same problem where IDLE would crash after I opened it on my Mac.
I ended up updating my computer to OS X Yosemite and the most recent version of Python, but it would still shut down.
The problem started because I tried to change the preferences for certain keys.
Resetting the preferences fixed it!
I renamed the preferences folder with mv ~/.idlerc idlerc2.
:)
| 0
| 1
| 0
| 0
|
2012-02-21T19:40:00.000
| 5
| 0
| false
| 9,384,021
| 1
| 0
| 0
| 2
|
I recently attempted to install python 3.2 along with IDLE 3 on my macbook pro. I successfully installed python 3.2 (as in, I can run it from the terminal), but when I attempted to install IDLE 3.2 I must have done something wrong because now both IDLE 2.7 and IDLE 3.2 crash immediately upon opening with the message "Python quit unexpectedly", no matter whether I open it through the terminal or through finder. Does anyone know how to fix this? I have installed the correct ActiveTCL package (and reinstalled) and still nothing. I have attempted to reinstall python 3.2 and IDLE 3 but I am not sure whether I did it correctly. Through a good amount of googling I found some people say that it was most likely a path issue but all of the solutions I found were using Windows so I am not sure how to apply that to my mac.
|
Gracefull shutdown of bottle python server
| 9,389,919
| 2
| 1
| 758
| 0
|
python,mod-wsgi,bottle
|
In mod_wsgi you can register atexit callbacks and they will be called on normal process shutdown. You don't have too long to do stuff though. If embedded mode, or daemon mode and shutdown caused by Apache restart, you have only 3 seconds as Apache will kill off processes forcibly after that. If daemon mode and trigger is due to touching WSGI script file or you explicitly sent daemon process a signal, you have 5 seconds, which is when mod_wsgi will decide it is taking too long and forcibly kill them.
See the 'atexit' module in Python.
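A minimal sketch (the cleanup body is whatever your app needs):

    import atexit

    def cleanup():
        # close DB connections, join worker threads, flush state, etc.
        pass

    atexit.register(cleanup)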
| 0
| 1
| 0
| 1
|
2012-02-22T04:34:00.000
| 1
| 1.2
| true
| 9,389,138
| 0
| 0
| 0
| 1
|
Is there a way to gracefully shut down the bottle server? It should be able to do a few steps before it eventually stops. This is critical for some cleanup of threads, db state etc., avoiding a corrupt state during the restart.
I am using the mod_wsgi Apache module for running the bottle server.
|
OSX and setting PATH for Apache
| 9,407,550
| 0
| 0
| 366
| 0
|
python,macos,apache
|
Edit the shebang line in the CGI scripts to point to the other executable.
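For example, the first line of each script might become #!/opt/local/bin/python2.7 (the typical MacPorts location; confirm yours with which python2.7).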
| 0
| 1
| 0
| 1
|
2012-02-23T05:03:00.000
| 2
| 0
| false
| 9,407,472
| 0
| 0
| 0
| 1
|
I have Apache running on OSX Lion and MacPorts Python and some packages installed with MacPorts.
There are some Python cgi scripts that I'd like to run. It looks like Apache uses the Python that is installed with Lion. How can I configure Apache so that the cgi scripts are run with the MacPorts Python and sites-packages (PYTHONPATH I guess)?
|
Can not import stackless after stackless python installation
| 9,408,074
| 0
| 1
| 1,014
| 0
|
python,python-stackless
|
It's likely not in your PYTHONPATH in that environment. Check to ensure the module is in your PYTHONPATH and that it is being set in the environment in which python launches. This is typically accomplished by adding the appropriate entry to your .bashrc or environment.plist.
| 0
| 1
| 0
| 0
|
2012-02-23T06:07:00.000
| 2
| 0
| false
| 9,408,025
| 1
| 0
| 0
| 1
|
I use PyCharm and Eclipse+PyDev, and I also installed Stackless Python (2.7.1) for Mac OS X.
When I try to import stackless, the IDEs always report "can't find such package/reference", but when I switch to IDLE or the command-line interpreter, "import stackless" works. I really don't know the reason; please help me. Thanks a lot.
|
What are the parameters for .GetWindow function in python?
| 9,429,657
| 0
| 0
| 815
| 0
|
python,process,window,pywin32
|
The usual way is to call EnumWindows with a callback and then get information about each hwnd - for example name, title or window class. Check that against what you are looking for and save the matched hwnd. After EnumWindows returns, check that you found a valid hwnd and use that for your program.
It's not pleasant - there's not much support for this kind of thing in windows. I've heard that using accessibility features is better but I have no experience using that.
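A sketch of that enumeration with pywin32 (the title fragment is a placeholder):

    import win32gui

    def find_windows_by_title(fragment):
        matches = []
        def callback(hwnd, extra):
            # Collect every top-level window whose title contains the fragment.
            if fragment in win32gui.GetWindowText(hwnd):
                matches.append(hwnd)
            return True   # keep enumerating
        win32gui.EnumWindows(callback, None)
        return matches

    hwnds = find_windows_by_title('Notepad')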
| 1
| 1
| 0
| 0
|
2012-02-23T14:25:00.000
| 1
| 0
| false
| 9,414,906
| 0
| 0
| 0
| 1
|
I'm kind of new to programming, and I want to write a simple program that needs to OCR a particular window. Currently, I'm using (w.GetForegroundWindow()), but that gets me the currently active window, which would always be the Python shell, since that is the one that is active when I run it, even if only for a split second.
After searching around for a bit, I found the .Getwindows function, but not much about it for Python. What does it do, and what are the parameters? Will I be able to target a particular process (= window) with it? If not, what can I use instead?
This is using the pywin32 module on Python 2.7 on Windows.
I'm on Windows, Python 2.7. The GetWindows function comes with the pywin32 module, if I'm not wrong.
|
Google App Engine - prohibitively slow and expensive backup and restore?
| 13,692,611
| 5
| 11
| 1,978
| 0
|
python,google-app-engine,backup,restore
|
Bet you've found a solution by now Yasser, but for anyone else ending up here from Google, here's an updated answer:
The backup option in the Datastore Admin has been upgraded to support both the datastore and Cloud Storage as targets. It also uses MapReduce to do the backup, which makes the query much lighter on the system.
| 0
| 1
| 0
| 0
|
2012-02-24T10:48:00.000
| 4
| 0.244919
| false
| 9,429,436
| 0
| 0
| 1
| 1
|
After working on several GAE apps, some of which are being used for production, I have come to the conclusion that on this platform, backing up your production data is slow enough and expensive enough for us to transition to some other cloud based technology stack.
In one of our production apps, we have around a million entities with an average size per entity of 1KB. So the total size of the data is around a GB which should not be a big deal, right? Here is the output of the bulkloader tool after fetching the entities from the app engine with default options:
[INFO ] 948212 entities (608342497 bytes) transferred in 47722.7 seconds
That is almost 13 hours. So if we wanted to set up an hourly backup system for our production data, that would be way beyond impossible with the current GAE toolset.
The cost is another story. I tried using the datastore admin to copy entities to a different app, which I thought we could use for backup. I first set the budget to $2 per day, which quickly ran out at around 5000 entities; then I increased the budget to $10 per day, which ran out again without being anywhere close to replicating the million entities.
I obviously don't intend to spend $100 every time I need to back up my 1 GB of data, nor do I want to wait for hours (or even days) just so that my data gets backed up. So either I don't know something, or Google App Engine is currently just an impractical way to write scalable, production-quality apps of meaningful size that can be easily backed up and restored.
Is there a fast and cost-effective way to backup your data from a GAE app?
|
Update python in server running centos/whm?
| 9,775,582
| 0
| 1
| 2,583
| 0
|
python,django,centos,cpanel,yum
|
You can install any version of Python from source as long as you don't overwrite cPanel's Python 2.4 installation at /usr/bin. To do this, use the --prefix= option when you configure the Python 2.x or 3.x source for build.
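For example (the prefix path is illustrative): ./configure --prefix=/opt/python26, then make and make install, which leaves the system Python 2.4 in /usr/bin untouched.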
| 0
| 1
| 0
| 0
|
2012-02-24T17:29:00.000
| 4
| 0
| false
| 9,435,259
| 0
| 0
| 1
| 2
|
I have a server which is running CentOS with cpanel/whm. Otherwise, it is pretty much a standard set up.
My problem is that such server is running python 2.4 and I need python 2.6 or later. How do I upgrade without breaking anything?
By the way, I currently have a django application running on that server, which I would also like to move to python 2.6 without breaking it. Is there anything extra that I have to do to do that?
|
Update python in server running centos/whm?
| 10,049,589
| 0
| 1
| 2,583
| 0
|
python,django,centos,cpanel,yum
|
The simplest way to install an alternate version of Python is to download the source and compile it. When you've finished running ./configure and make, you'll want to install using make altinstall, with python 2.6 you'd end up with an interpreter named python26
| 0
| 1
| 0
| 0
|
2012-02-24T17:29:00.000
| 4
| 0
| false
| 9,435,259
| 0
| 0
| 1
| 2
|
I have a server which is running CentOS with cpanel/whm. Otherwise, it is pretty much a standard set up.
My problem is that such server is running python 2.4 and I need python 2.6 or later. How do I upgrade without breaking anything?
By the way, I currently have a django application running on that server, which I would also like to move to python 2.6 without breaking it. Is there anything extra that I have to do to do that?
|
'twistd' is not a recognized internal or external command
| 9,440,925
| 3
| 2
| 1,615
| 0
|
python,twisted
|
You need to set the %PATHEXT% environment variable to include .py, as well as %PATH% including the path to twistd. Your most-recently-installed version of Python should then automatically launch it, assuming the filetype association was set correctly by the installer.
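For example, in a Command Prompt: set PATHEXT=%PATHEXT%;.PY, and make sure the directory containing twistd.py (typically something like C:\Python27\Scripts) is on %PATH%.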
| 0
| 1
| 0
| 0
|
2012-02-25T01:31:00.000
| 2
| 0.291313
| false
| 9,440,303
| 0
| 0
| 0
| 1
|
I'm trying to develop a Twisted Web server but can't seem to run the twistd command. I've tried setting the python path and even included the path to the twistd.py script in my Path but nothing seems to work.
I'm using Twisted 12.0.0 and Python 2.7 on Windows. Any help would be hugely appreciated.
|
Inverted Index System using Python
| 9,452,656
| 4
| 2
| 2,059
| 0
|
python,information-retrieval,inverted-index
|
Worry about optimization after the fact. Write the code, profile it, stress test it, identify the slow parts and rewrite them in Cython or C, or restructure the code to make it more efficient. It might also be faster if you run it on PyPy, which has a JIT compiler that can help with long-running processes and loops.
Remember:
Premature optimization is the root of all evil. (After threads, of course.)
| 0
| 1
| 0
| 1
|
2012-02-26T11:19:00.000
| 2
| 0.379949
| false
| 9,452,631
| 0
| 0
| 0
| 1
|
I am working on building an inverted index using Python.
I am having some doubts regarding the performance it can provide me.
Would Python be almost equally as fast in indexing as Java or C?
Also, I would like to know if any modules/implementations exists (and what are they, some link please?) for the same and how well do they perform compared to the something developed in Java/C?
I read about this guy who optimized his Python to run twice as fast as C by using it with Psyco.
I know for a fact that this claim is misleading, since gcc 3.x compilers are extremely fast. Basically, my point is that I know Python won't be faster than C. But is it somewhat comparable?
And can someone shed some light on its performance compared with Java? I have no clue about that. (In terms of an inverted index implementation, if possible, because it would essentially require disk writes and reads.)
I am not asking this here without googling first. I didn't get a definite answer, hence the question.
Any help is much appreciated!
|
Distributed state
| 9,466,181
| 0
| 1
| 981
| 1
|
python,database,linux,datastore,distributed-system
|
What you describe reminds me of an Apache Cassandra cluster configured so that each machine hosts a copy of the whole dataset and reads and writes succeed when they reach a single node (I never did that, but I think it's possible). Nodes should be able to remain functional when WAN links are down and receive pending updates as soon as they get back on-line. Still, there is no magic - if conflicting updates are issued on different servers or outdated replicas are used to generate new data, consistency problems will arise on any architecture you select.
A second issue is that for every local write, you'll get n-1 remote writes and your servers may spend a lot of time and bandwidth debating who has the latest record.
I strongly suggest you fire up a couple EC2 instances and play with their connectivity to check if everything works the way you expect. This seems to be in the "creative misuse" area and your mileage may vary wildly, if you get any at all.
| 0
| 1
| 0
| 0
|
2012-02-26T20:41:00.000
| 5
| 0
| false
| 9,456,954
| 0
| 0
| 0
| 1
|
I have a handful of servers all connected over WAN links (moderate bandwidth, higher latency) that all need to be able to share info about connected clients. Each client can connect to any of the servers in the 'mesh'. Im looking for some kind of distributed database each server can host and update. It would be important that each server is able to get updated with the current state if its been offline for any length of time.
If I can't find anything, the alternative will be to pick a server to host a MySQL DB all the servers can insert to; but I'd really like to remove this as a single-point-of-failure if possible. (and the downtime associated with promoting a slave to master)
Is there any no-single-master distributed data store you have used before and would recommend?
It would most useful if any solution has Python interfaces.
|
Automatically Resize Command Line Window
| 9,459,072
| -2
| 10
| 19,100
| 0
|
python,windows,resize,cmd
|
Change the console size by right-clicking on console titlebar -> Properties -> Layout -> Window size
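If you need to do it from the script itself, one blunt trick on Windows is shelling out to the mode command (a sketch; the cols/lines values are arbitrary):

    import os

    # Resize the current Windows console to 120 columns by 50 lines.
    os.system('mode con: cols=120 lines=50')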
| 0
| 1
| 0
| 0
|
2012-02-27T01:04:00.000
| 3
| -0.132549
| false
| 9,458,870
| 1
| 0
| 0
| 1
|
I'm writing a program in Python, the data I want it to show must be able to fit on the screen at the same time without needing to scroll around. The default command line size does not allow for this.
Is there any way to automatically resize a command line window in Python without creating a GUI? The program will only be used on windows.
|
Password Protect Static Page AppEngine HowTo?
| 9,461,964
| 2
| 0
| 1,804
| 0
|
python,google-app-engine,.htaccess,openid,.htpasswd
|
AFAIK, GAE does not support such setup (static password after OpenID login).
The only way I see to make this work would be to serve static content via your handler:
Client makes a request for static content
Your handler is registered to handle this URL
Handler checks if the user is authenticated. If not, it requests a password.
When authenticated, handler reads static file and sends it back to user.
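A rough sketch of step 4 with webapp (the directory, cookie name and password page are placeholders, not a prescribed API; real code should also sanitize filename):

    import os
    from google.appengine.api import users
    from google.appengine.ext import webapp

    class ProtectedStatic(webapp.RequestHandler):
        def get(self, filename):
            user = users.get_current_user()
            if user is None or self.request.cookies.get('page_pw_ok') != '1':
                self.redirect('/password')   # hypothetical password-entry page
                return
            path = os.path.join('protected', filename)
            self.response.out.write(open(path, 'rb').read())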
| 0
| 1
| 0
| 0
|
2012-02-27T06:49:00.000
| 2
| 0.197375
| false
| 9,461,085
| 0
| 0
| 1
| 1
|
So I'm working with AppEngine (Python), and what I want to do is provide OpenID login, setting a default provider so the user can log in without problems using that provider. The thing is, I want to prompt the user for a password right after they log in, in order to show static content (HTML pages); if the user doesn't enter the correct password then I want to redirect them to another page. The protection has to be server side, please :) Any ideas?
P.S. I'm seeking a solution similar to ".htaccess/.htpasswd" but for App Engine.
|
Can I turn off the "no_cookies" option in Google App Engine Launcher (version 1.6.2) on Windows?
| 9,477,977
| 0
| 1
| 650
| 0
|
python,windows,google-app-engine,deployment,credentials
|
The launcher will always prompt you for credentials, it uses the no_cookies flag to make sure the given credentials are passed and not the one stored in the system.
What you can do is create a batch file that will deploy the application, you can provide credentials using the --email and --passin flags.
| 0
| 1
| 0
| 0
|
2012-02-28T01:24:00.000
| 1
| 1.2
| true
| 9,475,063
| 0
| 0
| 1
| 1
|
When I deploy an application using the Google App Engine Launcher (version 1.6.2) on Windows, the following command options show in the output window:
Running command: "[u'C:\Python27\pythonw.exe',
'-u', u'C:\Program Files (x86)\Google\google_appengine\appcfg.py',
'--no_cookies', u'--email=xxxx.xxxx@xxxx.xxx', '--passin', 'update',
u'C:\path\to\project']"
I want the launcher to store my application-specific password, and I know that it needs to use a cookie to do that, but for some reason the launcher is defaulting to send the "no_cookies" option.
Is there a way to turn this option off?
|
Python 2.7.2 IDLE shell not working
| 48,256,587
| 0
| 0
| 7,359
| 0
|
python-2.7,python-idle
|
The first thing you need to do is locate your Python installation: right-click its shortcut, choose Properties, and follow the path it shows into your file system until you find something that looks like this:
C:\Users\DELL\AppData\Local\Programs\Python\Python36\Lib\idlelib\idle.pyw
Don't copy this exact line, because it's the path to Python on my computer, but make sure your path looks similar at the end. Copy the line that looks similar to the one above, paste or type it in the command prompt, then press space, add -n, and hit Enter. Don't add the C: at the beginning if it is already written in the command prompt, but make sure you add the \
| 0
| 1
| 0
| 0
|
2012-02-28T06:07:00.000
| 3
| 0
| false
| 9,477,214
| 0
| 0
| 0
| 1
|
For some reason the python 2.7.2 IDLE shell is not opening. I get an error that says:
"IDLE can't bind to a TCP/IP port, which is necessary to communicate with its Python execution server. This might be because no networking is installed on this computer. Run IDLE with the -n command line switch to start without a subprocess and refer to HELP/IDLE Help 'Running without a subprocess' for further details."
It was working fine the day before and I can't think of any changes I have made to the computer (Windows 7) that could have caused it to stop working. I have tried uninstalling and reinstalling it, but it still has the same problem. I have added it to the exceptions on my firewall, but nothing helps.
Any help is appreciated :)
|
Python VIRT Memory Usage
| 9,951,397
| 0
| 2
| 1,159
| 0
|
python,memory
|
What does the application do? What libraries does it use? What else is different between those machines? It's hard to give a general answer.
The VIRT value indicates how much memory the process has requested from the operating system in one way or another. But Linux is lazy in this respect: that memory won't actually be allocated to the process until the process tries to do something with it.
The RES value indicates how much memory is actually resident in RAM and currently in use by the process. This excludes pages that haven't yet been touched by the process or that have been swapped out to disk. Since the RES values are small and identical for both of those processes, there's probably nothing to worry about.
| 0
| 1
| 0
| 1
|
2012-02-28T09:35:00.000
| 2
| 0
| false
| 9,479,492
| 1
| 0
| 0
| 1
|
I have an python application in production (on CentOS 6.2 / Python 2.6.6) that takes up to:
800M VIRT / 15M RES / 2M SHR
The same app run on (Fedora 16 / Python 2.7.2) "only" takes up to:
56M VIRT / 15M RES / 2M SHR
Is it an issue?
What's the explanation of this difference?
I'm wondering if it could go wrong anytime with such an amount of virtual memory?
|
How do I limit a Ubuntu machine to a Python GUI?
| 9,489,644
| 1
| 1
| 224
| 0
|
python,user-interface,ubuntu,tkinter,restriction
|
Don't start a window manager. Only start your program, e.g. from xinitrc. Make the program full-screen
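For example, an ~/.xinitrc containing just exec python /home/kiosk/timeclock.py (path hypothetical) starts nothing but your Tkinter app; with no window manager running, there are no other menus to open, and X exits when the app does.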
| 1
| 1
| 0
| 0
|
2012-02-28T15:27:00.000
| 3
| 0.066568
| false
| 9,484,724
| 0
| 0
| 0
| 1
|
I wrote a python GUI in Tkinter for a time-clock system. The micro machine is wall mounted and the employees only have access to the touchscreen menu I programmed and a barcode swipe. I know how to get the script to start on startup, but how do I prevent them from exiting out or opening other menus? Basically the sole purpose of this console is to run the time-clock GUI.
If it can't be done in Ubuntu, is there another flavor of Linux it can be done in?
|
What should be used to store data in GAE?
| 9,498,169
| 0
| 1
| 109
| 0
|
python,google-app-engine,memcached,google-cloud-datastore
|
I'd suggest you split your entities into a root entity and a couple of linked ones, each holding some of the 150 attributes - this way, when you update one attribute, you only need to save one (or two, if the update reflects on the root entity) smaller entities to the datastore instead of a huge one.
Use memcache to prevent reads, not to store data that's going to the datastore. Memcache can be flushed and the data there can be destroyed before it hits permanent storage.
Going one step further, the grouping of attributes could reflect data that's updated together - say, if you always update street and zipcode together, it makes sense to keep them together in a single structure.
| 0
| 1
| 0
| 0
|
2012-02-28T17:16:00.000
| 3
| 0
| false
| 9,486,574
| 0
| 0
| 1
| 3
|
My web application will have ~150 fields and when value is changed in any field (at least one), I should save changed value.
How should I store such values with GAE? Should I save them directly in datastore? Should I use memcache temporarily and then save all values at once in datastore? Or, some other approach should be followed?
|
What should be used to store data in GAE?
| 9,487,492
| 0
| 1
| 109
| 0
|
python,google-app-engine,memcached,google-cloud-datastore
|
First of all you should find out how you are going to use your data. Which queries are you planning to make? What is the size of your entities?
Datastore is very different from relational databases. That is, it doesn't really matter how many properties are changed, because there is no way to update a property on its own. You can only save an entity as a whole. I don't know about your use case, but there should probably be a better approach for structuring your data than having ~150 properties on a single entity. What is your use case?
Also having too much properties for an entity may lead to index explosion, or to slow down datastore writes.
| 0
| 1
| 0
| 0
|
2012-02-28T17:16:00.000
| 3
| 0
| false
| 9,486,574
| 0
| 0
| 1
| 3
|
My web application will have ~150 fields and when value is changed in any field (at least one), I should save changed value.
How should I store such values with GAE? Should I save them directly in datastore? Should I use memcache temporarily and then save all values at once in datastore? Or, some other approach should be followed?
|
What should be used to store data in GAE?
| 9,486,718
| 3
| 1
| 109
| 0
|
python,google-app-engine,memcached,google-cloud-datastore
|
The datastore is your database. Memcache is to store data that's fetched from the datastore and kept temporarily in memory to avoid too many calls back to the database. You should first design your app around the datastore and then use memcache to improve performance.
Depending on your programming language of choice (java, python, go) there are many tools out there to help you map objects in your app to the datastore and to use memcache effectively.
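A common read-through caching sketch of that pattern (Record is a hypothetical model):

    from google.appengine.api import memcache

    def get_record(key_name):
        rec = memcache.get(key_name)
        if rec is None:
            rec = Record.get_by_key_name(key_name)   # datastore read
            memcache.set(key_name, rec, time=300)    # cache for 5 minutes
        return rec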
| 0
| 1
| 0
| 0
|
2012-02-28T17:16:00.000
| 3
| 0.197375
| false
| 9,486,574
| 0
| 0
| 1
| 3
|
My web application will have ~150 fields and when value is changed in any field (at least one), I should save changed value.
How should I store such values with GAE? Should I save them directly in datastore? Should I use memcache temporarily and then save all values at once in datastore? Or, some other approach should be followed?
|
GAE-ready asynchronous operations in Python?
| 9,491,366
| 1
| 3
| 182
| 0
|
python,http,google-app-engine,asynchronous,web2py
|
If you're brave, you might try the Experimental new DB api, NDB. It has async APIs for working with the datastore + URL fetch. If those are the things you hoped to do async-ly, then you're in luck.
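A minimal sketch of firing several datastore reads in parallel with NDB (keys is a hypothetical list of ndb.Key objects):

    from google.appengine.ext import ndb

    futures = [key.get_async() for key in keys]   # all reads start immediately
    results = [f.get_result() for f in futures]   # block once, at the end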
| 0
| 1
| 0
| 0
|
2012-02-28T23:07:00.000
| 3
| 0.066568
| false
| 9,491,227
| 1
| 0
| 1
| 1
|
I've got a Python app making 3 different api calls one after another in the same block of code. I'd like to execute these calls asynchronously, and then perform an action when they're all complete.
A couple notes:
Other answers regarding async actions point to frameworks like Twisted and Celery, but I'm building a Web2Py app for the GAE, so those daemon-based frameworks aren't an option AFAIK.
I'm using api wrapper libraries for the various apis, so I'm wondering if there's an async solution that can be implemented at the thread level, rather than the http request level?
|
Sharing Python virtualenv environments
| 9,506,329
| 13
| 19
| 22,585
| 0
|
python,virtualenv,virtualenvwrapper
|
Put it in a user-neutral directory, and make it group-readable.
For instance, for libraries, I use /srv/http/share/ for sharing code across web applications.
You could use /usr/local/share/ for normal applications.
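For example (names are illustrative): virtualenv /usr/local/share/myenv followed by chmod -R a+rX /usr/local/share/myenv makes the environment readable and executable by every account on the host.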
| 0
| 1
| 0
| 0
|
2012-02-29T20:40:00.000
| 2
| 1.2
| true
| 9,506,281
| 1
| 0
| 0
| 1
|
I have a Python virtualenv (created with virtualenvwrapper) in one user account. I would like to use it from another user account on the same host.
How can I do this?
How can I set up virtual environments so as to be available to any user on the host? (Primarily Linux / Debian but also Mac OSX.)
Thanks.
|
Python subprocess: streaming in and out
| 9,517,502
| 0
| 2
| 205
| 0
|
python,subprocess
|
It sounds like you have to establish a UDP server and client.
| 0
| 1
| 0
| 0
|
2012-03-01T13:38:00.000
| 2
| 0
| false
| 9,517,223
| 0
| 0
| 0
| 1
|
I have a server written in Python that basically accepts incoming connection from clients and feeds the data received from them into a subprocess (one instance per connection) which then processes the data and returns the result to the server so that it can send it back to the client.
The problem is that the data is streaming in and I need to be able to execute multiple read/write operations with no EOF in sight. So far, I've been unable to come up with a solution that would enable me to do just that without getting my server program blocked on reading. Any suggestions? Thanks.
|
Porting Python to an embedded system
| 24,161,759
| 1
| 24
| 7,622
| 0
|
python,embedded
|
FYI, I just ported CPython 2.7.x to a non-POSIX OS. That was easy.
You need to write pyconfig.h the right way, remove most of the unused modules, and disable unused features.
Then fix the compile and link errors. After fixing some simple problems at runtime, it just works.
If you lack some POSIX header, write one yourself and implement all the POSIX functions that are needed, such as file I/O.
It took 2-3 weeks in my case, although I heavily customized the Python core. Unfortunately I cannot open-source it :(.
After that, I think Python can be ported easily to any platform that has enough RAM.
| 0
| 1
| 0
| 1
|
2012-03-01T15:52:00.000
| 6
| 0.033321
| false
| 9,519,346
| 0
| 0
| 0
| 1
|
I am working with an ARM Cortex M3 to which I need to port Python (without an operating system). What would be my best approach? I just need the core Python and basic I/O.
|
cursor and with_cursor() in GAE
| 9,521,520
| 0
| 0
| 550
| 1
|
python,google-app-engine
|
I am not 100% sure about that, but what I used to do is compare the last cursor with the current cursor; I noticed that they were the same, so I concluded that it was the last cursor.
| 0
| 1
| 0
| 0
|
2012-03-01T17:47:00.000
| 3
| 0
| false
| 9,521,289
| 0
| 0
| 1
| 1
|
I am fetching records from a GAE model using cursor() and with_cursor() logic, as used in paging, but I am not sure how to check that there is no other record in the db pointed to by the cursor. I am fetching these records in chunks over several iterations. When I get my required results in the first iteration, then in the next iteration I want to check that there is no record left in the model, but I do not get any empty/None value for the cursor at this stage. Please let me know how to perform this check with cursors in Google App Engine with Python.
|
does the final EXE file of scapy need another dependent file
| 9,535,512
| 1
| 1
| 623
| 0
|
python,tcp,py2exe,scapy
|
I don't think you're going to be able to accomplish what you want. Even for the simplest scripts Py2Exe will require many dependent files.
As for Scapy it will definitely need winpcap. I would assume it needs most of the others as well. You could probably get away without readline, but at that point why bother?
| 0
| 1
| 0
| 0
|
2012-03-02T14:49:00.000
| 1
| 1.2
| true
| 9,535,222
| 1
| 0
| 0
| 1
|
I want to code a program using Scapy, e.g. to send a custom packet, and after finishing the program I want to convert the Python file into an EXE file using py2exe, in order to use it on the Windows platform without Python. But I noticed that the installation of Scapy for Windows needs a lot of dependencies, such as pywin32, winpcap, pypcap, libdnet and pyreadline. After conversion to an EXE file using py2exe, will the user have to install multiple files to make the program executable? My program is intended to be executed on various computers, and I don't want users to install so many dependencies.
|
"Compiling" python script
| 9,542,902
| -1
| 4
| 4,391
| 0
|
python,macos,text,compiler-construction
|
You could try py2exe (http://www.py2exe.org/) since it compiles your code into an exe file, they should have a hell of a time trying to decompose it.
| 0
| 1
| 0
| 1
|
2012-03-03T02:18:00.000
| 5
| -0.039979
| false
| 9,542,814
| 1
| 0
| 0
| 2
|
I'm trying to send a python script I wrote on my Mac to my friends. Problem is, I don't want to send them the code that they can edit. How can I have my script change from an editable text file, to a program that you click to run?
|
"Compiling" python script
| 9,543,052
| 0
| 4
| 4,391
| 0
|
python,macos,text,compiler-construction
|
If your friends are on Windows you could use py2exe, but if they're on Mac I'm not sure there's an equivalent. Either way, compiling like that breaks cross-platform compatibility, which is the whole point of an interpreted language really...
Python just isn't really set up to hide code like that; it's against its philosophy as far as I can tell.
| 0
| 1
| 0
| 1
|
2012-03-03T02:18:00.000
| 5
| 0
| false
| 9,542,814
| 1
| 0
| 0
| 2
|
I'm trying to send a python script I wrote on my Mac to my friends. Problem is, I don't want to send them the code that they can edit. How can I have my script change from an editable text file, to a program that you click to run?
|
How to determine if my python script is running?
| 9,555,991
| 1
| 1
| 3,425
| 0
|
python,ubuntu-10.04
|
Save your pid to a file; if the file already exists, check that the process that left its PID is still alive. (This is safer than trying to ensure you always remove the file: You can't). The full process goes like this:
Check if the checkpoint file exists. If it does not, write your PID into the file and go ahead with the computation.
If the file exists: Read the PID and check if the process is, in fact, still alive. The best way to do that is with "kill -0" (from python: os.kill), which doesn't bother the running process but fails if it does not exist. If the process is still running, exit. Otherwise, write your PID to the file etc.
There's a small chance of a race condition, but if your process is getting restarted at infrequent intervals, that should be entirely harmless: Your process could always quit in favor of a running process that exits a second later, so what does it matter if the running process manages to quit first?
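A minimal sketch of that check (the pidfile path is illustrative):

    import os
    import sys

    PIDFILE = '/tmp/myscript.pid'

    def already_running():
        try:
            pid = int(open(PIDFILE).read())
            os.kill(pid, 0)        # signal 0: existence check, harmless to the process
            return True
        except (IOError, ValueError, OSError):
            return False           # no file, garbage in it, or no such process

    if already_running():
        sys.exit(0)
    open(PIDFILE, 'w').write(str(os.getpid()))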
| 0
| 1
| 0
| 1
|
2012-03-04T14:39:00.000
| 3
| 0.066568
| false
| 9,555,742
| 0
| 0
| 0
| 2
|
I have my Python script set to run from cron on Ubuntu Server. However, it might take longer to finish than the interval before the next cron event tries to start it. I would like to detect such a case from the script itself and, if an instance is already running, terminate gracefully from within the Python script.
|
How to determine if my python script is running?
| 9,555,781
| 0
| 1
| 3,425
| 0
|
python,ubuntu-10.04
|
There are two obvious solutions:
Some kind of lock file, which it checks. If the lock file exists, then don't start, otherwise create it. (Or more aptly, in true python 'ask for forgiveness, not permission' style, try to make it and catch the error if it exists - stopping a race condition). You need to be careful to ensure this gets cleaned up when the script ends, however - even on errors, otherwise it could block future runs. Traditionally this is a .pid file which contains the process id of the running process.
Use ps to check for the running process. With this solution it is harder to stop the race condition, however.
| 0
| 1
| 0
| 1
|
2012-03-04T14:39:00.000
| 3
| 0
| false
| 9,555,742
| 0
| 0
| 0
| 2
|
I have my Python script set to run from cron on Ubuntu Server. However, it might take longer to finish than the interval before the next cron event tries to start it. I would like to detect such a case from the script itself and, if an instance is already running, terminate gracefully from within the Python script.
|
Setup Cherrypy with Google App Engine
| 15,619,684
| 1
| 1
| 1,132
| 0
|
python,google-app-engine,cgi,wsgi,cherrypy
|
The problem was that CherryPy was not in the root of the working directory; once it was, I uploaded all of it with the App Engine tool. Not 100% sure if it's the correct way to use GAE, but it works.
| 0
| 1
| 0
| 0
|
2012-03-04T15:46:00.000
| 1
| 1.2
| true
| 9,556,280
| 0
| 0
| 1
| 1
|
Can someone please show me how to get cherrypy to work with Google App Engine, I have made applications with cherrypys built in server, but I have no idea how to make an app that works with WSGI and GAE.
I have read the documentation for cherrypy and GAE but can't find anything. And I would prefer cherrypy to the webapp2 which is in the GAE example.
|
What is the optimal way to organize infinitely looped work queue?
| 9,567,697
| 2
| 0
| 341
| 0
|
python,queue,rabbitmq,task-queue,beanstalkd
|
You create your initial batch of jobs and add them to the queue.
You have n-consumers of the queue each running the jobs. Adding consumers to the queue simply round-robins the distribution of jobs to each listening consumer, giving you arbitrary horizontal scalability.
Each job can, upon completion, be responsible for resubmitting itself back to the queue. This means that your job queue won't grow beyond the length that it was when you initialised it.
The master job can, if need be, spawn sub-jobs and add them to the queue.
For different types of jobs it is probably a good idea to use different queues. That way you can balance the load more effectively by having different quantities/horsepower of workers running the jobs from the different queues.
The fact that you are running Python isn't important here, it's the pattern, not the language that you need to nail first.
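A sketch of a self-resubmitting worker using beanstalkd via the beanstalkc client (run() and the 60-second delay are placeholders; the same shape works with RabbitMQ):

    import beanstalkc

    beanstalk = beanstalkc.Connection(host='localhost', port=11300)

    while True:
        job = beanstalk.reserve()            # blocks until a job is available
        run(job.body)                        # your per-job logic (hypothetical)
        beanstalk.put(job.body, delay=60)    # resubmit itself to run again in a minute
        job.delete()                         # remove the completed instance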
| 0
| 1
| 0
| 0
|
2012-03-04T17:18:00.000
| 2
| 1.2
| true
| 9,557,070
| 0
| 0
| 0
| 2
|
I have about 1000-10000 jobs which I need to run on a constant basis, each minute or so. Sometimes a new job comes in or another needs to be cancelled, but it's a rare event. Jobs are tagged and must be distributed among workers, each of which processes only jobs of a specific kind.
For now I want to use cron and load the whole database of jobs into some broker -- RabbitMQ or beanstalkd (haven't decided which one to use though).
But this approach seems ugly to me (using a timer to simulate infinity, loading the whole database, etc.) and has a disadvantage: for example, if some kind of job is processed slower than it is added into the queue, the queue may be overwhelmed and the message broker will eat all RAM and swap and then just halt.
Is there any other possibility? Am I not using the right pattern for the job? (Maybe I don't need a queue or something..?)
p.s. I'm using Python, if this is important.
|
What is the optimal way to organize infinitely looped work queue?
| 9,578,495
| 0
| 0
| 341
| 0
|
python,queue,rabbitmq,task-queue,beanstalkd
|
You can use an asynchronous framework, e.g. Twisted.
I also don't think it's a good idea to run the script from the cron daemon each minute (you mentioned the reasons yourself), so I suggest Twisted. It doesn't give you any benefit with scheduling per se, but you get flexibility in process management and memory sharing.
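A minimal sketch of a long-lived scheduler with Twisted's LoopingCall, replacing the once-a-minute cron entry (pending_jobs and process are hypothetical placeholders for your own job-fetching and job-running code):

```python
from twisted.internet import reactor, task

def run_jobs():
    for job in pending_jobs():  # hypothetical: fetch the jobs currently due
        process(job)            # hypothetical: your per-job work

loop = task.LoopingCall(run_jobs)
loop.start(60.0)  # invoke run_jobs every 60 seconds in one persistent process
reactor.run()
```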
| 0
| 1
| 0
| 0
|
2012-03-04T17:18:00.000
| 2
| 0
| false
| 9,557,070
| 0
| 0
| 0
| 2
|
I have about 1000-10000 jobs which I need to run on a constant basis, each minute or so. Sometimes a new job comes in or another needs to be cancelled, but that's a rare event. Jobs are tagged and must be distributed among workers, each of which processes only jobs of a specific kind.
For now I want to use cron and load the whole database of jobs into some broker -- RabbitMQ or beanstalkd (I haven't decided which one to use yet).
But this approach seems ugly to me (using a timer to simulate infinity, loading the whole database, etc.) and has a disadvantage: for example, if some kind of job is processed more slowly than it is added to the queue, the queue may be overwhelmed and the message broker will eat all the RAM and swap and then just halt.
Are there any other possibilities? Am I not using the right pattern for the job? (Maybe I don't need a queue at all..?)
p.s. I'm using Python, if that's important.
|
Where to get the CherryPy auth_digest module?
| 9,561,238
| 0
| 0
| 219
| 0
|
python,ubuntu,cherrypy
|
I wound up downloading the tar file (which I think may be a minor version or two more recent than what apt-get knows about) and using setup.py to install it. This version includes the digest authentication module.
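Once the module is present, a minimal sketch of enabling digest auth in CherryPy 3.2 looks like this (the user names, realm, and key are illustrative):

```python
import cherrypy
from cherrypy.lib import auth_digest

users = {"alice": "secret"}  # illustrative credentials

conf = {
    "/": {
        "tools.auth_digest.on": True,
        "tools.auth_digest.realm": "localhost",
        "tools.auth_digest.get_ha1": auth_digest.get_ha1_dict_plain(users),
        "tools.auth_digest.key": "a565c27146791cfb",  # any server-side secret
    }
}

class Root(object):
    @cherrypy.expose
    def index(self):
        return "Authenticated!"

cherrypy.quickstart(Root(), "/", conf)
```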
| 0
| 1
| 1
| 0
|
2012-03-04T22:34:00.000
| 1
| 1.2
| true
| 9,559,475
| 0
| 0
| 0
| 1
|
I have a CherryPy web site running on a virtual Ubuntu Linux server. I'm attempting to move the application to a second server with more memory. Both servers appear to have CherryPy 3.2 installed (I just used apt-get to install it on the newer server).
The newer server, however, does not appear to have the CherryPy auth_digest module installed, which is what I'm using for authentication. It is present in the CherryPy egg on the older server.
How can I update my copy of CherryPy to include that module?
|
What is the possibility of using a Python app, deployed online, that has access to a users local disk?
| 9,561,004
| 2
| 0
| 139
| 0
|
python,google-app-engine,web.py
|
You could create a signed Java applet that runs alongside the JavaScript and allows access to local files. You may be able to find an already-developed applet that you can call from JavaScript. You have to be careful with this, though, because once the user trusts the applet it is installed, and any site can call it unless the applet code is restricted to a specific site.
| 0
| 1
| 0
| 0
|
2012-03-05T02:30:00.000
| 2
| 0.197375
| false
| 9,560,950
| 0
| 0
| 1
| 2
|
I currently have a local Python application that scans a user's drive, maps it into a tree, and displays this information with JavaScript. I would really like to try to develop something with a Dropbox-like system to manage drive trees.
I have searched and read that App Engine specifically doesn't allow access to a user's local disk. Is there a way to use web.py or something else to access a user's local drive and create a directory tree from it?
|