Title
stringlengths
15
150
A_Id
int64
2.98k
72.4M
Users Score
int64
-17
470
Q_Score
int64
0
5.69k
ViewCount
int64
18
4.06M
Database and SQL
int64
0
1
Tags
stringlengths
6
105
Answer
stringlengths
11
6.38k
GUI and Desktop Applications
int64
0
1
System Administration and DevOps
int64
1
1
Networking and APIs
int64
0
1
Other
int64
0
1
CreationDate
stringlengths
23
23
AnswerCount
int64
1
64
Score
float64
-1
1.2
is_accepted
bool
2 classes
Q_Id
int64
1.85k
44.1M
Python Basics and Environment
int64
0
1
Data Science and Machine Learning
int64
0
1
Web Development
int64
0
1
Available Count
int64
1
17
Question
stringlengths
41
29k
How do I restart the IDLE Python Shell in Linux?
8,188,836
2
7
19,002
0
python,shell,python-idle
Restart Shell has a keyboard shortcut of Ctrl+F6; you could always try that.
0
1
0
0
2011-11-18T20:38:00.000
3
0.132549
false
8,188,805
0
0
0
2
In IDLE on Windows, on the menu bar, there is a Shell menu. One of the items on the Shell menu is 'Restart Shell'. The Shell menu is not available in IDLE on Linux. The Restart Shell command is useful after you have made a change in a module and want to run the module again in the shell. In IDLE on Linux, I have to close IDLE and open it again for the shell to notice the change in the module. How can I restart the shell without closing and reopening IDLE as a whole?
How do I restart the IDLE Python Shell in Linux?
12,849,592
1
7
19,002
0
python,shell,python-idle
IDLE has two modes of operation - with a subprocess and without one. The 'Restart Shell' option is available only with a subprocess. The default mode is with a subprocess, but it can be changed using the argument '-n' when starting IDLE. Apparently, the menu item that starts IDLE on Linux does so with the '-n' argument. Open IDLE without this flag and your 'Restart Shell' option will be back.
0
1
0
0
2011-11-18T20:38:00.000
3
0.066568
false
8,188,805
0
0
0
2
In IDLE on Windows, on the menu bar, there is a Shell menu. One of the items on the Shell menu is 'Restart Shell'. The Shell menu is not available in IDLE on Linux. The Restart Shell command is useful after you have made a change in a module and want to run the module again in the shell. In IDLE on Linux, I have to close IDLE and open it again for the shell to notice the change in the module. How can I restart the shell without closing and reopening IDLE as a whole?
Tips for interacting with debian based repositories
8,816,534
0
1
200
0
python,deb
debian-installer doesn't generate repository metadata. For that, you want a tool like reprepro or mini-dinstall. They'll also handle the second point you raised.
0
1
0
1
2011-11-19T01:49:00.000
1
0
false
8,191,239
0
0
0
1
I am planning on writing a small program that interacts with a Debian-based repository - namely, doing a partial mirror**. I am planning to write it in Python. What are some tips for working with the repository, including already-constructed 'wheels' (to save inventing yet another one)? Some issues I have identified: As it is going to be a partial mirror, I will need to regenerate the package lists (Release, Contents*, Packages.{bz2,gz}). (Maybe debian-installer can do it for me?) How do I get changes to the package list (I already know that packages do not change, but that the lists only link to the latest file)? ** I already looked into apt-mirror and debmirror. Debmirror is the closest to what I want, however it is lacking some features. If apt can deal with multiple releases and architectures then I will consider apt.
e-commerce in Tornado (non-blocking) VS Flask (WSGI)
8,203,022
2
4
2,594
0
python,e-commerce,wsgi,flask,tornado
Not an answer, but as an alternative, Django with Satchmo is very well suited to that sort of project.
0
1
0
0
2011-11-19T21:06:00.000
3
0.132549
false
8,197,345
0
0
1
1
I am trying to develop an e-commerce platform using Python and NoSQL. As a framework I'm between two: Tornado and Flask. So my question is simple: which one is suited for e-commerce, a WSGI-like application (using Flask) or a non-blocking application (using Tornado)? NB: the e-commerce site will manage products and users (without a chat system) but will include a notification system (like Facebook's: someone - a friend - sold something...), so which is better for such a situation?
Where do I issue commands?
8,199,679
2
0
69
0
python,input,command,prompt,cx-freeze
This goes in the Windows command prompt. You first need to cd to the directory where your setup.py is situated, e.g. C:\folder\setup.py. At the command prompt: cd C:\folder, then python setup.py build. This will call Python, pass it the file setup.py, and ask it to build your app.
0
1
0
0
2011-11-20T05:25:00.000
3
1.2
true
8,199,646
1
0
0
2
When tools such as cx_freeze tell you to use a command such as "python setup.py build", where is this command issued? I have tried both the python command line and the Windows command prompt, but neither one worked with inputting that phrase.
Where do I issue commands?
8,199,676
1
0
69
0
python,input,command,prompt,cx-freeze
You issue such commands from the command prompt. You should make sure that you've changed into the directory containing setup.py before typing the command.
0
1
0
0
2011-11-20T05:25:00.000
3
0.066568
false
8,199,646
1
0
0
2
When tools such as cx_freeze tell you to use a command such as "python setup.py build", where is this command issued? I have tried both the python command line and the Windows command prompt, but neither one worked with inputting that phrase.
Python - Hide Console Mid-Way Through Script
8,205,492
4
0
161
0
python,console,terminal
I don't think there is a quick fix for this. You could start without a window, fire up a subprocess with a console, and terminate that process when you want the console to go away.
0
1
0
0
2011-11-20T22:49:00.000
1
1.2
true
8,205,413
1
0
0
1
Hopefully this has a simple answer. I have a console Python program, compiled to an exe, that constantly writes information to the terminal, but halfway through the script I'd like the terminal to hide and the program to continue on hidden. Is there a quick fix for this or is it more complicated than it seems? Any help is appreciated! Thanks!
ZeroMQ pipeline pattern
8,639,802
1
5
1,777
0
python,zeromq
REQ-REP is when you want a round trip. Sounds like you want a PUB-SUB. Set up a SUB with a bind at a well-known port, then have the clients connect to that port and issue a PUB.
0
1
0
0
2011-11-21T12:50:00.000
2
0.099668
false
8,212,075
0
0
0
1
I'm implementing a messaging system where external programs called agents are able to communicate via ZeroMQ producers. So, every time an event of interest occurs, the agent sends a message to ZeroMQ. I'm interested in implementing this using the pipeline pattern. I found some examples (Ventilator-Worker-Results Manager), but the Ventilator component creates an endpoint for accepting connections from the workers, and then sends all messages in batch. My scenario is quite different. The "agent" connects every time an event needs to be sent - it doesn't wait for connections from the workers - so I'm wondering if this is possible? Also, the important fact is that messages have to be processed in the order they were sent.
Advice: Python Framework Server/Worker Queue management (not Website)
8,220,985
0
0
552
0
python,message-queue,rabbitmq,worker
How about using pyro? It gives you remote object capability and you just need a client script to coordinate the work.
0
1
0
0
2011-11-21T22:22:00.000
1
0
false
8,219,355
0
0
1
1
I am looking for some advice/opinions on which Python framework to use in an implementation of multiple 'Worker' PCs co-ordinated from a central Queue Manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration) with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files located on the worker already (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this, so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability, and in an ideal world, based on pending tasks too. There is a chance that Workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the co-ordinator can be Windows or Linux. I have (in my initial searches) come across the following - and I know that some are cross-dependent: Celery (with RabbitMQ), Twisted, Django. Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two.
Again: I am looking for general advice on which Python framework to investigate further, for developing a Server/Worker 'Queue management' solution, for non-web activities (this is why Django didn't seem the right fit).
Using WSGI on Redhat Linux
8,221,263
0
1
591
0
python,packages,redhat
I found out that wsgiref is included in Python 2.5 and above, so you don't need to do any installs. Just use from wsgiref.simple_server import make_server.
0
1
0
1
2011-11-22T02:08:00.000
3
1.2
true
8,221,114
0
0
0
2
I wanted to install WSGI on a RedHat linux box in order to make a Python server interface, but the only way I could find to do that was to use modwsgi, which is an Apache module. The whole reason I'm using WSGI is that I don't want to use Apache, so this kinda defeats the purpose. Does anyone know of actual WSGI packages for RedHat linux or is this the only way? ----Edit---- I just found out that WSGI is built into Python 2.5 and higher, so I don't need to install anything. I don't know how to mark this question as solved without answering it myself. Any tips will be appreciated.
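The stdlib server the question's edit refers to can be sketched like this (the app body and the use of port 0, which asks the OS for any free port, are illustrative):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """A minimal WSGI application."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from wsgiref\n"]

# serve_forever() would block, so this demo only creates the server.
server = make_server("127.0.0.1", 0, app)
print("listening on port", server.server_port)
# server.serve_forever()  # uncomment to actually serve requests
```

Because WSGI apps are plain callables, the same app function could later be moved under mod_wsgi or any other container unchanged.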
Using WSGI on Redhat Linux
8,221,832
0
1
591
0
python,packages,redhat
WSGI is a protocol. In order to use it you need a WSGI container such as mod_wsgi, Paste Deploy, CherryPy, or wsgiref.
0
1
0
1
2011-11-22T02:08:00.000
3
0
false
8,221,114
0
0
0
2
I wanted to install WSGI on a RedHat linux box in order to make a Python server interface, but the only way I could find to do that was to use modwsgi, which is an Apache module. The whole reason I'm using WSGI is that I don't want to use Apache, so this kinda defeats the purpose. Does anyone know of actual WSGI packages for RedHat linux or is this the only way? ----Edit---- I just found out that WSGI is built into Python 2.5 and higher, so I don't need to install anything. I don't know how to mark this question as solved without answering it myself. Any tips will be appreciated.
easy_install-3.2.exe vs easy_install.exe?
8,225,167
2
1
466
0
python,python-3.x,setuptools,easy-install
You use the easy_install that belongs to the Python installation you want to install into. In most cases, both easy_install and easy_install-3.2 will be the same program and will install into the same Python installation. You can install the same version of Python in different places; then you need to run the easy_install from the right place. You can also install several versions of Python in one place, and then you need to use the right version. Always use Distribute in preference to Setuptools.
0
1
0
0
2011-11-22T06:49:00.000
2
0.197375
false
8,223,029
1
0
0
2
For Python 3.2, what is recommended: easy_install-3.2 or easy_install? Why two executables? What's the difference? One more question: distribute or setuptools? setuptools is obviously not working on Intel Win64/Python 3.2 (or not available as of the writing of this post).
easy_install-3.2.exe vs easy_install.exe?
8,223,191
3
1
466
0
python,python-3.x,setuptools,easy-install
Answering just the first question. Regarding easy_install: since you might have multiple Python versions installed, you might also have different easy_install versions installed (one for each Python version). In that case, easy_install will be the default version and easy_install-X.Y will be the version that installs new packages for Python X.Y.
0
1
0
0
2011-11-22T06:49:00.000
2
0.291313
false
8,223,029
1
0
0
2
For Python 3.2, what is recommended: easy_install-3.2 or easy_install? Why two executables? What's the difference? One more question: distribute or setuptools? setuptools is obviously not working on Intel Win64/Python 3.2 (or not available as of the writing of this post).
Debug PyDev+Eclipse - Code not reloads after code change in breakpoint/suspend mode
8,224,704
5
5
886
0
python,debugging,google-app-engine,pydev
The way the debugger works is not by executing the source line by line. The debugger "compiles" your source to bytecode (the .pyc files) and executes that, not your source. The debugger only keeps track of which piece of the .pyc file corresponds to which line of your .py file and displays that information for your convenience, but the .py file itself is not what the debugger is using to run the program. Therefore, if you change the source / .py file and want the debugger to acknowledge those changes, you need to "recompile" the .pyc files first. HTH!
0
1
0
0
2011-11-22T09:33:00.000
1
1.2
true
8,224,592
0
0
1
1
I often perform the following steps and want to optimize debugging speed: I set some breakpoints. I run the Google App Engine application (Python 2.5.2+). When a breakpoint occurs, I often change code to fix bugs. After a code change I want to test again, but there is a problem: if I changed code in breakpoint/suspend mode, the application does not pick up my code changes, thus requiring a slow reload. Does anybody have an idea of the root cause of the forced reload after suspend, or is it a PyDev bug/limitation?
Is there a better way to monitor log files?(linux/python)
8,227,859
0
5
3,207
0
python,linux,logging
If you do it yourself, you might do something like this: if you detect a file modification, get the size of the file. If it's larger than last time, you can seek to the previous "last" position (i.e. the previous size) and read from there.
0
1
0
1
2011-11-22T13:07:00.000
3
0
false
8,227,308
0
0
0
2
I'm trying to monitor log files that some processes are writing on Linux (to create a joint log file where log entries are grouped together by when they happen). Currently I'm thinking of opening the files being logged, polling with inotify (or a wrapper), and then checking if I can read any more of the file. Is there a better way to do this? Perhaps some library which abstracts the reading of/changes in the watched files?
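The size-tracking idea in the answer can be sketched in pure Python (the temp file and manual polling are demo details; a real version would call this whenever inotify reports a modification):

```python
import os
import tempfile

def read_new_bytes(path, last_size):
    """Return (new_data, new_size): everything appended since last_size."""
    size = os.path.getsize(path)
    if size <= last_size:       # unchanged (or truncated/rotated)
        return b"", size
    with open(path, "rb") as f:
        f.seek(last_size)       # jump past what we already read
        return f.read(), size

# Demo with a temporary "log" file.
path = os.path.join(tempfile.mkdtemp(), "demo.log")
with open(path, "wb") as f:
    f.write(b"line 1\n")
data, pos = read_new_bytes(path, 0)     # first poll: reads everything
with open(path, "ab") as f:
    f.write(b"line 2\n")
new, pos = read_new_bytes(path, pos)    # second poll: only the appended part
print(new.decode(), end="")
```

A shrinking size usually means the file was rotated, in which case you would reset the position to 0.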
Is there a better way to monitor log files?(linux/python)
8,235,200
3
5
3,207
0
python,linux,logging
Why wouldn't a "tail -f" be sufficient? You could use popen and pipes to handle this from Python.
0
1
0
1
2011-11-22T13:07:00.000
3
0.197375
false
8,227,308
0
0
0
2
I'm trying to monitor log files that some processes are writing on Linux (to create a joint log file where log entries are grouped together by when they happen). Currently I'm thinking of opening the files being logged, polling with inotify (or a wrapper), and then checking if I can read any more of the file. Is there a better way to do this? Perhaps some library which abstracts the reading of/changes in the watched files?
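The "tail -f" suggestion might look like this from Python (the temp file is illustrative; this assumes a tail binary is available, as on any Linux box):

```python
import os
import subprocess
import tempfile

# Create a file with some existing content to follow.
path = os.path.join(tempfile.mkdtemp(), "app.log")
with open(path, "w") as f:
    f.write("first line\nsecond line\n")

# tail -n 1 -f: print the last line, then keep following the file.
proc = subprocess.Popen(
    ["tail", "-n", "1", "-f", path],
    stdout=subprocess.PIPE,
    text=True,
)
line = proc.stdout.readline()   # blocks until tail emits a line
print(line, end="")
proc.terminate()
proc.wait()
```

One such pipe per monitored file, multiplexed with select or threads, would give the joint log the question describes.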
relatively new programmer interested in using Celery, is this the right approach
8,230,713
2
2
107
1
python,mysql,celery
No, there wouldn't be any problem with multiple worker computers searching and writing to the same database, since MySQL is designed to handle this. Your approach is good.
0
1
0
0
2011-11-22T16:56:00.000
1
1.2
true
8,230,617
0
0
0
1
Essentially I have a large database of transactions and I am writing a script that will take some personal information and match a person to all of their past transactions. So I feed the script a name and it returns all of the transactions that it has decided belong to that customer. The issue is that I have to do this for almost 30k people and the database has over 6 million transaction records. Running this on one computer would obviously take a long time; I am willing to admit that the code could be optimized, but I do not have time for that, and I instead want to split the work over several computers. Enter Celery: my understanding of Celery is that I will have a boss computer sending names to worker computers, which run the script and put the customer id in a column for each transaction they match. Would there be a problem with multiple worker computers searching and writing to the same database? Also, have I missed anything and/or is this totally the wrong approach? Thanks for the help.
Errors When Installing MySQL-python module for Python 2.7
8,260,644
1
4
912
1
python,mysql,django,mysql-python
Make sure that gcc-4.0 is in your PATH. Alternatively, you can create an alias from gcc to gcc-4.0. Take care about 32-bit and 64-bit versions: Mac OS X is a 64-bit operating system, and you should set the right flags to make sure you're compiling for a 64-bit architecture.
0
1
0
0
2011-11-23T03:28:00.000
2
1.2
true
8,236,963
0
0
0
1
I'm currently trying to build and install the mySQLdb module for Python, but the command python setup.py build gives me the following error running build running build_py copying MySQLdb/release.py -> build/lib.macosx-10.3-intel-2.7/MySQLdb error: could not delete 'build/lib.macosx-10.3-intel-2.7/MySQLdb/release.py': Permission denied I verified that I'm a root user and when trying to execute the script using sudo, I then get a gcc-4.0 error: running build running build_py copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.7/MySQLdb running build_ext building '_mysql' extension gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/usr/local/mysql/include -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.3-fat-2.7/_mysql.o -Os -g -fno-common -fno-strict-aliasing -arch x86_64 unable to execute gcc-4.0: No such file or directory error: command 'gcc-4.0' failed with exit status 1 Which is odd, because I'm using XCode 4 with Python 2.7. I've tried the easy_install and pip methods, both of which dont work and give me a permission denied error on release.py. I've chmodded that file to see if that was the problem but no luck. Thoughts?
Writing to USB device with Python using ioctl
8,245,298
0
2
5,132
0
python,usb
According to the documentation, ioctl() in the fcntl module is unix specific, so it will not work in Windows. There seems to be a Windows variant named DeviceIoControl() that works similarly. IOCTLs are declared by the device driver or operating system, so I very much doubt that there are IOCTL operations that have the same operation id (IOCTL number) and same parameters on different operating systems. For Linux, you can check the header files for specific device drivers or possibly some usb core header file for valid IOCTLs.
0
1
0
1
2011-11-23T15:35:00.000
2
0
false
8,244,887
0
0
0
1
Using Python, I am trying to write to a USB sensor using ioctl. I have loads of examples of reading from devices, either directly or via pyusb, or simple file writes, but anything more complicated disappears off the radar. I need to use a control transfer to write a Feature Report message. The command is ioctl(devicehandle, Operation, Args). The issue I have is determining the correct Operation. The Args, I believe, should be a buffer containing the Feature Report for the device, plus a mutable flag set to true. Any help or advice would be gratefully received. I should add: the reason for using Python is that the code must be device independent.
How to know/change current directory in Python shell?
8,248,427
3
255
539,479
0
python,windows,python-3.x,python-3.2
If you import os, you can use os.getcwd() to get the current working directory and os.chdir() to change to another directory.
0
1
0
0
2011-11-23T20:06:00.000
7
0.085505
false
8,248,397
1
0
0
1
I am using Python 3.2 on Windows 7. When I open the Python shell, how can I know what the current directory is and how can I change it to another directory where my modules are?
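The two calls from the answer, in a minimal session (the temp directory is just a convenient target that exists on every system):

```python
import os
import tempfile

print(os.getcwd())          # show the current working directory
target = tempfile.gettempdir()
os.chdir(target)            # change to another directory
print(os.getcwd())          # now reports the new directory
```

In the question's scenario, target would instead be the folder containing your modules, e.g. os.chdir(r"C:\my_modules").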
Windows XP to Ubuntu Linux point-to-point communication using Python or C/C++
8,277,033
0
1
1,063
0
c++,python,linux,windows-xp,labview
In LabVIEW/Windows you'll need to create a TCP-listen function on a specific port (the server). From the Linux box you'll start a connection as a client.
0
1
0
0
2011-11-24T18:02:00.000
2
0
false
8,261,046
0
0
0
1
I need to know if I can connect 2 PCs over an Ethernet connection (point-to-point connection). I have 2 machines (one of them runs Windows XP and the other runs Ubuntu Linux 10.10) and I need to have a connection between them. Will it be possible to connect them? The suggested language is either Python or C/C++. Any ideas? To be more specific, LabVIEW is running on the Windows machine, and the choice will be either Python or C/C++ on the Linux machine.
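The listener/client split the answer describes can be sketched with Python's stdlib socket module (the message is illustrative, and a throwaway thread stands in here for the LabVIEW TCP-listen function on the Windows side):

```python
import socket
import threading

def serve_once(server_sock, received):
    """Stand-in for the LabVIEW listener: accept one client, read its data."""
    conn, _ = server_sock.accept()
    chunks = []
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:            # client closed the connection
                break
            chunks.append(data)
    received.append(b"".join(chunks))

# Listener side (LabVIEW/Windows in the question).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
received = []
t = threading.Thread(target=serve_once, args=(server, received))
t.start()

# Client side (the Linux box).
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello from linux")

t.join()
server.close()
print(received[0])
```

Over a real crossover cable you would replace 127.0.0.1 with static addresses assigned to each machine, e.g. 192.168.0.1 and 192.168.0.2.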
Python: compile into an Unix commandline app
8,271,703
6
7
6,014
0
python,unix,command-line
The simplest way to do this is to rename your file from filename.py to filename and add this line to the top of the file: #!/usr/bin/env python. You may also need to set the executable bit on the file, which can be done with chmod +x on the command line.
0
1
0
0
2011-11-25T16:09:00.000
5
1.2
true
8,271,629
0
0
0
2
I am not sure if I searched for the wrong terms, but I could not find much on this subject. I am on OS X and I'd like to compile a command-line Python script into a small command-line app that I can put into /usr/local/bin so I can call it from anywhere. Is there a straightforward way to do that? Thanks
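The two steps from the answer (shebang plus executable bit) can be demonstrated end to end (the file name is illustrative, and sys.executable is used in place of /usr/bin/env python so the demo is self-contained):

```python
import os
import stat
import subprocess
import sys
import tempfile

# 1. Write a script whose first line is a shebang.
path = os.path.join(tempfile.mkdtemp(), "hello")   # note: no .py extension
with open(path, "w") as f:
    f.write(f"#!{sys.executable}\n")               # usually: #!/usr/bin/env python
    f.write('print("hello from the command line")\n')

# 2. Set the executable bit (the chmod +x step).
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

# 3. Run it like any other command.
out = subprocess.run([path], capture_output=True, text=True).stdout
print(out, end="")
```

Copying the file into /usr/local/bin (or any directory on $PATH) then makes it callable from anywhere by name.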
Python: compile into an Unix commandline app
8,271,713
4
7
6,014
0
python,unix,command-line
On Unix it usually works in the following way: put #!/usr/bin/env python in the first line of your .py script; add execute permissions to the file (using chmod); execute the script from the command line, e.g. by typing ./my_script.py when in the same directory. What else do you need?
0
1
0
0
2011-11-25T16:09:00.000
5
0.158649
false
8,271,629
0
0
0
2
I am not sure if I searched for the wrong terms, but I could not find much on this subject. I am on OS X and I'd like to compile a command-line Python script into a small command-line app that I can put into /usr/local/bin so I can call it from anywhere. Is there a straightforward way to do that? Thanks
How do i check if a logo is in a BMP file?
8,274,728
1
2
207
0
python,image-processing
Read up your favorite literature on the subject of digital watermarking. It's not trivial if you want it to be robust to various distortions, image compression, transparency, etc. - nothing I can answer here in a few lines.
0
1
0
0
2011-11-25T22:13:00.000
4
0.049958
false
8,274,703
0
0
0
2
Can anybody give me advice on how to process a BMP file so that I can check if a logo is present in that file? E.g. I have a photo. My logo is supplied in another BMP file. I want to check if my logo is part of that photo, i.e. if it is visible. Bonus question: my logo can have transparency in it. Does that change anything?
How do i check if a logo is in a BMP file?
8,274,772
-2
2
207
0
python,image-processing
Bonus answer: if you want the logo with a transparent background (let's say no background), use a .png or .gif file. If you want 20% transparency, you can use CSS image opacity/transparency, or save your logo as a PNG with a transparency filter; you can do this in Adobe Fireworks.
0
1
0
0
2011-11-25T22:13:00.000
4
-0.099668
false
8,274,703
0
0
0
2
Can anybody give me advice on how to process a BMP file so that I can check if a logo is present in that file? E.g. I have a photo. My logo is supplied in another BMP file. I want to check if my logo is part of that photo, i.e. if it is visible. Bonus question: my logo can have transparency in it. Does that change anything?
Google App Engine handling parallel requests
8,282,420
2
0
210
0
python,google-app-engine,webserver,parallel-processing
The Python dev_appserver is single-threaded and only serves one request at a time. The production environment, naturally, has no such restriction.
0
1
0
0
2011-11-26T20:58:00.000
1
1.2
true
8,281,565
0
0
1
1
I am using Google App Engine for the first time. Whenever I make requests using two instances of the application, the responses come sequentially. For example, I opened two pages of the app's main page, made an AJAX request from one, then refreshed the other; the second page doesn't load until the first page gets its response. So the server actually waits and responds to requests sequentially. Is this an issue with the development server only, or am I missing something here?
Can a program written in Python be AppleScripted?
13,686,960
0
5
206
0
python,xcode,applescript
You can also use the py-aemreceive module from py-appscript. I use that to implement AppleScript support in my Tkinter app.
1
1
0
0
2011-11-26T23:13:00.000
2
0
false
8,282,352
1
0
0
1
I want my Python program to be AppleScript-able, just like an Objective C program would be. Is that possible? (Note, this is not about running AppleScript from Python programs, nor about calling Python programs from AppleScript via Unix program invocation. Those are straightforward. I need genuine AppleScriptability of my program's operations.) There is some documentation about how to do this. Python 2.7.2 documentation describes MiniAEFrame, for example, but even a minimal reference to from MiniAEFrame import AEServer, MiniApplication dies with an ImportError and a complaint that a suitable image can't be found / my architecture (x86) not supported. Rut roh! It seems that MiniAEFrame might pertain to the earlier ("Carbon") API set. In other words, obsolete. There's a very nice article about "Using PyObjC for Developing Cocoa Applications with Python" (http://developer.apple.com/cocoa/pyobjc.html). Except it was written in 2005; my recently-updated version of Xcode (4.1) doesn't have any of the options it describes; the Xcode project files it provides blow up in an impressive build failure; and the last PyObjC update appears to have been made 2 years ago. Apple seems to have removed all of the functions that let you build "real" apps in AppleScript or Python, leaving only Objective C. So, what are my options? Is it still possible to build a real, AppleScriptable Mac app using Python? If so, how? (If it matters, what I need AppleScripted is text insertion. I need Dragon Dicate to be able to add text to my app. I'm currently using Tk as its UI framework, but would be happy to use the native Cocoa/Xcode APIs/tools instead, if that would help.)
Running in 32-bit mode permanently
8,578,267
0
0
3,887
0
python,macos
With the default Python install you will get both the 32-bit and 64-bit versions. The 32-bit version can be called from the terminal with python2.7-32 or python-32.
0
1
0
0
2011-11-27T03:12:00.000
2
0
false
8,283,370
1
0
0
1
I'm not sure if this is possible but I want to force my mac to run in 32-bit mode permanently. Currently I've been using this piece of code: arch -i386 /usr/bin/python to change it but once I exit python in the terminal it switches back to 64. The reason I want to do this is so I can install pygame. I've already installed it but only in 64-bit mode. I'm not sure if I also need to install it in 32-bit mode for it to work properly. At the moment it says the module does not exist.
"Invalidate" RabbitMQ queue or sending "DONE signal"
8,296,293
0
1
177
0
python,rabbitmq
No, there is no way to find how many publishers are still publishing to a queue in AMQP. You'll have to roll your own system. A way to do this would be to have a fanout exchange that every worker binds a queue to (let's call it the "control" exchange), and have the publisher send a special message to it when it finishes. Workers could then check their "control" queue to see if the publisher is still there; if it isn't and there are no more messages available, they can safely disconnect and shutdown.
0
1
0
0
2011-11-28T11:22:00.000
1
1.2
true
8,295,078
0
0
0
1
I'm using RabbitMQ with Python/pika to distribute some batch jobs. So I think I have a very common scenario: one process fills a queue with jobs to be done. Multiple workers retrieve jobs, transform data, and put the results in a second queue. Another single process retrieves the results and merges them. This works very well so far. But how do I stop my scripts in a controlled way? Is there some built-in functionality to "invalidate" a queue, so that the workers will be aware that no more jobs will be filled in?
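The "special message" idea in the accepted answer is often called a sentinel (or poison pill). The same pattern, sketched with Python's stdlib queue and threads standing in for RabbitMQ queues and worker processes (the pika/broker wiring is omitted):

```python
import queue
import threading

DONE = object()   # sentinel: "no more jobs will be filled in"

def worker(jobs, results):
    while True:
        job = jobs.get()
        if job is DONE:
            jobs.put(DONE)        # pass the pill on so other workers see it too
            break
        results.put(job * 2)      # stand-in for the real transformation

jobs, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(3)]
for t in workers:
    t.start()

for job in range(5):              # the producer fills the queue...
    jobs.put(job)
jobs.put(DONE)                    # ...then signals that it is finished

for t in workers:
    t.join()
print(sorted(results.queue))
```

With RabbitMQ, re-enqueuing the pill is replaced by the fanout "control" exchange the answer describes, since every bound queue gets its own copy of the shutdown message.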
Coordinating distributed Python processes using queuing or REST web service
8,301,254
3
3
533
0
python,rest,concurrency,message-queue,data-warehouse
Message brokers such as Rabbit contain practical solutions for a number of problems: multiple producers and consumers are supported without risk of duplication of messages; atomicity and unit-of-work logic provide transactional integrity, preventing duplication and loss of messages in the event of failure; horizontal scaling - most mature brokers can be clustered so that a single queue exists on multiple machines; no-rendezvous messaging - it is not necessary for sender and receiver to be running at the same time, so one can be brought down for maintenance without affecting the other; and preservation of FIFO order. Depending on the particular web service platform you are considering, you may find that you need some of these features and must implement them yourself if not using a broker. The web service protocols and formats such as HTTP, SOAP, JSON, etc. do not solve these problems for you. In my previous job the project management passed on using message brokers early on, but later the team ended up implementing quick-and-dirty logic meant to solve some of the same issues as above in our web service architecture. We had less time to provide business value because we were fixing so many concurrency and error-recovery issues. So while a message broker may seem on its face like a heavyweight solution, and may actually be more than you need right now, it does have a lot of benefits that you may need later without yet realizing it.
0
1
0
0
2011-11-28T16:00:00.000
2
1.2
true
8,298,571
0
0
0
1
Server A has a process that exports n database tables as flat files. Server B contains a utility that loads the flat files into a DW appliance database. A process runs on server A that exports and compresses about 50-75 tables. Each time a table is exported and a file produced, a .flag file is also generated. Server B has a bash process that repeatedly checks for each .flag file produced by server A. It does this by connecting to A and checking for the existence of a file. If the flag file exists, Server B will scp the file from Server A, uncompress it, and load it into an analytics database. If the file doesn't yet exist, it will sleep for n seconds and try again. This process is repeated for each table/file that Server B expects to be found on Server A. The process executes serially, processing a single file at a time. Additionally: The process that runs on Server A cannot 'push' the file to Server B. Because of file-size and geographic concerns, Server A cannot load the flat file into the DW Appliance. I find this process to be cumbersome, and it just so happens to be up for a rewrite/revamp. I'm proposing a messaging-based solution. I initially thought this would be a good candidate for RabbitMQ (or the like) where Server A would write a file, compress it and then produce a message for a queue. Server B would subscribe to the queue and would process files named in the message body. I feel that a messaging-based approach would not only save time as it would eliminate the check-wait-repeat cycle for each table, but also permit us to run processes in parallel (as there are no dependencies). I showed my team a proof-of-concept using RabbitMQ and they were all receptive to using messaging. A number of them quickly identified other opportunities where we would benefit from message-based processing. One such area that we would benefit from implementing messaging would be to populate our DW dimensions in real-time rather than through batch.
It then occurred to me that an MQ-based solution might be overkill given the low volume (50-75 tasks). This might be overkill given that our operations team would have to install RabbitMQ (and its dependencies, including Erlang), and it would introduce new administration headaches. I then realized this could be made simpler with a REST-based solution. Server A could produce a file and then make an HTTP call to a simple (web.py) web service on Server B. Server B could then initiate the transfer-and-load process based on the URL that is called. Given the time that it takes to transfer, uncompress, and load each file, I would likely use Python's multiprocessing to create a subprocess that loads each file. I'm thinking that the REST-based solution is ideal given that it's simpler. In my opinion, using an MQ would be more appropriate for higher-volume tasks, but we're only talking (for now) 50-75 operations with potentially more to come. Would a REST-based approach be a good solution given my requirements and volume? Are there other frameworks or OSS products that already do this? I'm looking to add messaging without creating other administration and development headaches.
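A minimal sketch of the notification-endpoint idea using only the standard library instead of web.py. The `/load` route, the `file` query parameter, and the queue hand-off are illustrative assumptions, not the poster's actual design; a real version would pass the queued filenames to a pool of transfer-and-load workers.

```python
import queue
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Files announced by Server A are queued for transfer-and-load workers.
work_queue = queue.Queue()

class NotifyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server A calls e.g. GET /load?file=table_foo.gz after exporting a table.
        parsed = urllib.parse.urlparse(self.path)
        params = urllib.parse.parse_qs(parsed.query)
        filename = params.get("file", [None])[0]
        if parsed.path == "/load" and filename:
            work_queue.put(filename)   # hand off to a worker pool
            self.send_response(202)    # accepted for asynchronous processing
        else:
            self.send_response(400)
        self.end_headers()

    def log_message(self, *args):      # keep the demo quiet
        pass

def serve():
    """Start the notification endpoint on an ephemeral port, in a daemon thread."""
    server = HTTPServer(("127.0.0.1", 0), NotifyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve()
    port = srv.server_address[1]
    urllib.request.urlopen(f"http://127.0.0.1:{port}/load?file=table_foo.gz")
    print(work_queue.get(timeout=5))
```

The point of the sketch is that Server B stays passive: it only reacts to Server A's HTTP calls, eliminating the check-wait-repeat polling loop entirely.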
Eclipse: debug shared library loaded from python
34,515,642
2
3
3,167
0
python,debugging,gdb,shared-libraries,eclipse-cdt
I have been able to debug the C++ shared library loaded by Python in Eclipse successfully. The prerequisites: two Eclipse projects in an Eclipse workspace: one is the C++ project, from which the C++ shared library is generated; the other is the Python project (PyDev), which loads the generated C++ shared library. The steps are: create a "Python Run" debug configuration named PythonDebug with the corresponding Python environment and parameters; create a "C/C++ Attach to Application" debug configuration named CppDebug (the project field is the C++ project; leave the C/C++ Application field empty); set a breakpoint in the Python code at a point after the C++ shared library has already been loaded; start the debug session PythonDebug, and the program will break at the breakpoint created at step 3; start the debug session CppDebug, a menu will pop up, select the python process with the correct pid (there will be 3 pids; the correct one can be found in the PythonDebug session); set a breakpoint in the C++ source code where you want the program to break; continue the PythonDebug session; continue the CppDebug session; the program will break at the C++ breakpoint. I tested the above procedure with the Eclipse Mars version. Hopefully it helps.
0
1
0
1
2011-11-29T07:51:00.000
1
0.379949
false
8,307,425
0
0
0
1
In Linux, I am trying to debug the C++ code of a shared library which is loaded from Python code. The loading is done using the ctypes package. In Eclipse, I set breakpoints both in the Python and in the C++ code, however Eclipse just skips the breakpoints in the C++ code (breakpoints in the Python code work OK). I have tried using attach to application in Eclipse (under Debug Configurations) and choosing the Python process, but it didn't change anything. In the attach to application dialog box I choose the shared library as the Project, and I choose /usr/bin/python2.6 as the C/C++ application. Is that the correct way? I've tried it both before running the python code, and after a breakpoint in the Python code was caught, just before the line calling a function of the shared library. EDIT Meanwhile I am using a workaround of calling the python code and debugging using a gdb command-line session by attaching to the python process. But I would like to hear a solution to doing this from within Eclipse.
How to capture stdout of a Python script executed with pythonw?
8,311,778
3
3
494
0
python
It was obvious: pythonw script.py|more
0
1
0
0
2011-11-29T12:04:00.000
2
1.2
true
8,310,404
0
0
0
1
When a script is executed with pythonw it will not open a console. Is there a way to capture the stdout of such a script while keeping the usage of pythonw? Note, I am looking for a solution that does not require modification of the script (I know that I can use logging). Update: pythonw script.py >somefile seems to work. How can I redirect it to the console?
developer tools used in programming
8,319,139
1
1
1,656
0
python
There are many editors and IDEs that will allow you to run your code directly from within the editor/IDE environment.
0
1
0
0
2011-11-29T23:13:00.000
6
0.033321
false
8,319,090
1
0
0
4
I am a pretty recent developer, and I have a single monitor (big enough), but it is so annoying that every now and then I first use the text editor to write the code and then go to the terminal to execute and debug. I was wondering if there is any developer tool where I can use half the screen for the text editor and the other half for the terminal, so that I don't have to shuffle back and forth between the terminal and the text editor. Something like: when this software is running, the notepad and terminal just freeze in their half of the screen, and when I close this software I can minimize and return to normal mode. If not, it would be a cool thing to have. :D Thanks
developer tools used in programming
8,319,156
2
1
1,656
0
python
If you're developing in Python - have a look at PyCharm. It's a clone of IntelliJ IDEA, tailored for Python development, written in Java, so it works on any platform. If you like it, it also doesn't cost a whole truckload. That's for the easy and money-based way. More complex ways - you can use a text editor that allows running your scripts, uploading to webservers, whatnot. It pretty much depends on the stuff you're developing.
0
1
0
0
2011-11-29T23:13:00.000
6
0.066568
false
8,319,090
1
0
0
4
I am a pretty recent developer, and I have a single monitor (big enough), but it is so annoying that every now and then I first use the text editor to write the code and then go to the terminal to execute and debug. I was wondering if there is any developer tool where I can use half the screen for the text editor and the other half for the terminal, so that I don't have to shuffle back and forth between the terminal and the text editor. Something like: when this software is running, the notepad and terminal just freeze in their half of the screen, and when I close this software I can minimize and return to normal mode. If not, it would be a cool thing to have. :D Thanks
developer tools used in programming
8,319,361
3
1
1,656
0
python
Use ViM in split window mode, edit your script on the left side and start ConqueTerm (use :ConqueTerm bash) on the right side. Now you can code and execute Python code in the same terminal window at the same time using a superior text editor ;-) Disclaimer: Of course this only helps if you are familiar with ViM already.
0
1
0
0
2011-11-29T23:13:00.000
6
0.099668
false
8,319,090
1
0
0
4
I am a pretty recent developer, and I have a single monitor (big enough), but it is so annoying that every now and then I first use the text editor to write the code and then go to the terminal to execute and debug. I was wondering if there is any developer tool where I can use half the screen for the text editor and the other half for the terminal, so that I don't have to shuffle back and forth between the terminal and the text editor. Something like: when this software is running, the notepad and terminal just freeze in their half of the screen, and when I close this software I can minimize and return to normal mode. If not, it would be a cool thing to have. :D Thanks
developer tools used in programming
8,319,143
1
1
1,656
0
python
Perhaps I'm misunderstanding the question, but it seems like the easiest way is to simply manually resize the windows and place them where you want them. Also, a handy tool in general is the Python IDLE, where you can code in the Python window and simply have it run in the shell by pressing F5.
0
1
0
0
2011-11-29T23:13:00.000
6
0.033321
false
8,319,090
1
0
0
4
I am a pretty recent developer, and I have a single monitor (big enough), but it is so annoying that every now and then I first use the text editor to write the code and then go to the terminal to execute and debug. I was wondering if there is any developer tool where I can use half the screen for the text editor and the other half for the terminal, so that I don't have to shuffle back and forth between the terminal and the text editor. Something like: when this software is running, the notepad and terminal just freeze in their half of the screen, and when I close this software I can minimize and return to normal mode. If not, it would be a cool thing to have. :D Thanks
How to make sure a script only runs after another script
8,320,318
0
4
3,971
0
python,bash,cron,crontab
Make it write a file and check if the file is there.
0
1
0
1
2011-11-30T02:06:00.000
4
0
false
8,320,304
0
0
0
3
I have two Python scripts running as cron jobs. ScriptA processes log files and inserts records into a table; ScriptB uses the records to generate a report. I have arranged ScriptA to run one hour before ScriptB, but sometimes ScriptB runs before ScriptA finishes inserting, thus generating an incorrect report. How do I make sure ScriptB runs right after ScriptA finishes? EDIT ScriptA and ScriptB do very different things; say, one is for saving user data and the other is for internal use. And somewhere else there may be some ScriptC depending on ScriptA. So I can't just merge these two jobs.
How to make sure a script only runs after another script
8,320,328
0
4
3,971
0
python,bash,cron,crontab
One approach you could use is having a control flag somewhere, for example in the DB. ScriptB only runs after that flag is set, and right after it finishes it sets the flag back to its default state. Another way to implement the flag approach is via the file system, as @Benjamin suggested.
0
1
0
1
2011-11-30T02:06:00.000
4
0
false
8,320,304
0
0
0
3
I have two Python scripts running as cron jobs. ScriptA processes log files and inserts records into a table; ScriptB uses the records to generate a report. I have arranged ScriptA to run one hour before ScriptB, but sometimes ScriptB runs before ScriptA finishes inserting, thus generating an incorrect report. How do I make sure ScriptB runs right after ScriptA finishes? EDIT ScriptA and ScriptB do very different things; say, one is for saving user data and the other is for internal use. And somewhere else there may be some ScriptC depending on ScriptA. So I can't just merge these two jobs.
How to make sure a script only runs after another script
8,320,343
2
4
3,971
0
python,bash,cron,crontab
One approach, if those two jobs are separate cron jobs, would be to make sure there is enough time in between to reliably cover the run of job 1. Another approach is locking, as others here suggested, but then note that cron will not re-run your job just because it completed unsuccessfully because of a lock. So either job 2 will have to run in sleep cycles until it no longer sees the lock (or, conversely, until it sees a flag marking job 1's completion), or you'll have to get creative. Why not trigger the 2nd script from the 1st script after it's finished and make it a single cron job?
0
1
0
1
2011-11-30T02:06:00.000
4
1.2
true
8,320,304
0
0
0
3
I have two Python scripts running as cron jobs. ScriptA processes log files and inserts records into a table; ScriptB uses the records to generate a report. I have arranged ScriptA to run one hour before ScriptB, but sometimes ScriptB runs before ScriptA finishes inserting, thus generating an incorrect report. How do I make sure ScriptB runs right after ScriptA finishes? EDIT ScriptA and ScriptB do very different things; say, one is for saving user data and the other is for internal use. And somewhere else there may be some ScriptC depending on ScriptA. So I can't just merge these two jobs.
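A minimal sketch of the flag-file coordination suggested in the answers. The flag path, timeout, and polling interval are illustrative assumptions; the essential points are that ScriptA touches the flag only after all inserts are committed, and that ScriptB consumes the flag so a stale run can't reuse it.

```python
import time
from pathlib import Path

FLAG = Path("/tmp/scriptA.done")  # hypothetical flag location

def mark_done():
    """Called at the very end of ScriptA, after all inserts are committed."""
    FLAG.touch()

def wait_for_script_a(timeout=3600.0, poll=30.0):
    """Called at the start of ScriptB: block until ScriptA's flag appears,
    consume it, and return True; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if FLAG.exists():
            FLAG.unlink()   # consume the flag so the next run must wait again
            return True
        time.sleep(poll)
    return False
```

Having ScriptB time out (rather than wait forever) also lets it log a clear error when ScriptA fails entirely, instead of silently generating a report from stale data.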
Architecture: I have to make available object properties that that is updated every couple seconds
8,348,956
0
1
62
0
python,twisted
I think you can either create the object yourself or use memcached; either way it will hold instances of Python objects, like a list or dict or anything else, so the two ways lead to the same destination. I prefer to use an object directly, so you can check for changes in the object's value or validate it if needed.
0
1
0
0
2011-12-01T21:41:00.000
2
0
false
8,348,687
0
0
1
2
My app maintains the state of a bunch of objects with variables. I'm using Twisted to accept socket requests and return the properties of an object. I want to make sure the app can scale for a lot of requests so I'm wondering if I should deliver the object properties directly from the objects, or if I should store those properties in memcached or something similar, and have the requests read from that store. I just wasn't sure if lots of requests reading the same object values would affect the performance of the part of the app that is managing those objects. Am I over thinking it?
Architecture: I have to make available object properties that that is updated every couple seconds
8,348,947
0
1
62
0
python,twisted
I don't think there is any performance penalty from having many read operations on the same object (only one thread is executed at a time, after all).
0
1
0
0
2011-12-01T21:41:00.000
2
0
false
8,348,687
0
0
1
2
My app maintains the state of a bunch of objects with variables. I'm using Twisted to accept socket requests and return the properties of an object. I want to make sure the app can scale for a lot of requests so I'm wondering if I should deliver the object properties directly from the objects, or if I should store those properties in memcached or something similar, and have the requests read from that store. I just wasn't sure if lots of requests reading the same object values would affect the performance of the part of the app that is managing those objects. Am I over thinking it?
Can we execute mutiple commands in the same command prompt launched by python script?
8,365,694
0
3
95
0
python,windows
If you plan to execute this command in a remote machine, then you may consider using Paramiko. I have personally found it very useful and it lets you execute the command as root also.
0
1
0
0
2011-12-02T12:10:00.000
3
0
false
8,356,137
0
0
0
3
I have used os.system(command) in a for loop. Because of this, CMD opens, executes the command, and closes. For the second command, CMD opens again, executes the command, and closes. As a result, CMD pops up again and again, and meanwhile I am not able to do other tasks on the system. I want to do this in a single CMD window so that I can minimize it and continue with other tasks.
Can we execute mutiple commands in the same command prompt launched by python script?
8,359,438
1
3
95
0
python,windows
Another approach is to write all of the command strings to a .bat or .cmd file, and then execute the resulting file with os.system. This is more useful if the number of commands per iteration is "large-ish" and less useful if there are only a few commands per iteration.
0
1
0
0
2011-12-02T12:10:00.000
3
0.066568
false
8,356,137
0
0
0
3
I have used os.system(command) in a for loop. Because of this, CMD opens, executes the command, and closes. For the second command, CMD opens again, executes the command, and closes. As a result, CMD pops up again and again, and meanwhile I am not able to do other tasks on the system. I want to do this in a single CMD window so that I can minimize it and continue with other tasks.
Can we execute mutiple commands in the same command prompt launched by python script?
8,356,168
2
3
95
0
python,windows
You can just concatenate your commands with a separator (; in POSIX shells, & in Windows cmd) and only call os.system once.
0
1
0
0
2011-12-02T12:10:00.000
3
0.132549
false
8,356,137
0
0
0
3
I have used os.system(command) in a for loop. Because of this, CMD opens, executes the command, and closes. For the second command, CMD opens again, executes the command, and closes. As a result, CMD pops up again and again, and meanwhile I am not able to do other tasks on the system. I want to do this in a single CMD window so that I can minimize it and continue with other tasks.
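A sketch of the concatenation idea from the answers: join all the commands and make a single os.system call, so only one console window is opened for the whole batch. The key detail is the separator: cmd.exe chains commands with &, while POSIX shells use ;.

```python
import os

def run_all(commands):
    """Run a list of shell commands in one os.system call, so only a
    single console window appears instead of one per command."""
    sep = " & " if os.name == "nt" else " ; "
    return os.system(sep.join(commands))

if __name__ == "__main__":
    status = run_all(["echo first", "echo second", "echo third"])
    print("exit status:", status)
```

For a large number of commands per iteration, writing them to a .bat file and executing that once (as the other answer suggests) amounts to the same single-window behaviour.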
Eclipse & Python: 100% CPU load due to PythonHome reindexing
10,102,742
1
0
706
0
python,eclipse,cpu,pydev
Disable 'Build Automatically' and 'Refresh Automatically' under Preferences->General->Workspace Disable 'Code Analysis' entirely, or configure it to only run on save under Preferences->PyDev->Editor->Code Analysis
0
1
0
1
2011-12-02T16:17:00.000
2
0.099668
false
8,359,291
0
0
1
1
I am currently switching from Eclipse Java development more and more to Python scripting using PyDev. Almost all the time there is an Eclipse background thread called "reindexing PythonHome..." which loads my CPU to almost 100%. It's unusable for coding anymore :/ Do you have any idea? Thanks a lot for your help! John
Filtering GMail messages in Google App Engine application
8,373,562
0
3
487
0
python,django,google-app-engine,gmail
What do you mean by "fully connected"? It's possible to set up a GMail filter to forward emails to a different address (say, the email address of your App Engine app). And an App Engine app can send emails (say, to a GMail address). The trick is to set up the GMail filter carefully to avoid loops.
0
1
0
1
2011-12-03T11:39:00.000
2
1.2
true
8,367,381
0
0
1
2
I would like to build an application in Google App Engine (Python) that would be fully connected to a single GMail account and then filter e-mails from this account (e.g. filter messages for a certain string and show it on the screen). In the future I am also going to implement the option to send messages. What is the most efficient way to do this (solution provided by Google if possible)?
Filtering GMail messages in Google App Engine application
8,379,161
0
3
487
0
python,django,google-app-engine,gmail
There is no API for Gmail in App Engine. The only thing you can do is forward messages to App Engine. I have used forwarding for building auto-responders. But there is an excellent Gmail API in Google Apps Script with lots of functions. Apps Script uses JavaScript. And of course your Apps Script can communicate with App Engine.
0
1
0
1
2011-12-03T11:39:00.000
2
0
false
8,367,381
0
0
1
2
I would like to build an application in Google App Engine (Python) that would be fully connected to a single GMail account and then filter e-mails from this account (e.g. filter messages for a certain string and show it on the screen). In the future I am also going to implement the option to send messages. What is the most efficient way to do this (solution provided by Google if possible)?
Is it necessary to have the knowledge of using terminal/command-prompt for learning python
8,368,528
0
0
147
0
python,windows,command-prompt
No. If you know how to open the command prompt, navigate to a directory ("cd") and list a directory ("dir" on windows and "ls" on linux), then you can probably jump right into those python tutorials.
0
1
0
0
2011-12-03T14:51:00.000
2
0
false
8,368,463
1
0
0
1
I saw a couple of tutorials and they all make use of the terminal/command prompt, and I just don't know how those work. Is it necessary to know how they work before learning Python, or can you just learn it like you would learn some other language (let's say C)? It'll be great if you could recommend something. NOTE: I am a Windows user.
Start Python Celery task via Redis Pub/Sub
11,473,359
1
11
2,033
0
python,redis,celery
The problem with using pub/sub is that it's not persistent. If you're looking to do something closer to real-time communication, celery might not be your best choice.
0
1
0
0
2011-12-04T23:39:00.000
1
0.197375
false
8,379,513
0
0
0
1
Is there an efficient way to start tasks via Redis Pub/Sub and return the value of the task back to a Pub/Sub channel to start another task based on the result? Does anybody have an idea on how to put this together? Maybe decorators are a good idea to handle and prepare the return value back to a Pub/Sub channel without changing the code of the task too much. Any help is very much appreciated!
Google App Engine and win32 DDE
8,405,511
1
1
367
0
python,google-app-engine,pywin32,dde
The problem was not on the GAE development server: I managed to uninstall the win32 python build 216 library and install a previous version. The problem was indeed with the manifest of the build 216 and not with the GAE development server. Now it works fine with build 214.
0
1
0
0
2011-12-05T10:40:00.000
1
0.197375
false
8,384,052
0
0
1
1
I'm trying to set up a little server through the App Engine Python development server: GOAL: On Windows I've got a DDE application. I need to read data from this application and serve it over the Internet. SITUATION: The development server is working correctly on port 80, enabling me to store data and make it available as JSON over the Internet. PROBLEM: I cannot get the development server to work correctly with the win32 Python library. I enabled the module in the local whitelist, but still when trying to start a DDE connection it says: This must be an MFC application - try loading win32ui first args = ('This must be an MFC application - try loading win32ui first',) message = 'This must be an MFC application - try loading win32ui first' I have got no idea on what to do. Any hint will be very much appreciated.
Why is buildbot *NOT* failing when it should?
8,419,633
4
2
484
0
python,linux,buildbot
When you add step to a factory (i.e. f.addStep(your_step)) you should specify haltOnFailure = True to make whole build fail whenever particular build step returns FAILURE.
0
1
0
0
2011-12-05T14:23:00.000
2
1.2
true
8,386,715
0
0
0
2
I'm trying to fix a very complex buildbot-based build system, which has the annoying habit of showing green bars with 'failed (1)' in them. The problem is that we run several commands using the ShellCommand build step, which is not failing the whole build when it returns non-zero. We also have steps which do show up red on the detail page, but the whole build still shows green. As far as I know, 'flunkOnFailure' is not set on the steps themselves in my master.cfg, and the default is true. (Although that's not entirely clear from the manual pages I have found.) What do I need to do (or undo) to ensure that an entire build fails when a ShellCommand does? This is running in a 100% Linux environment. Many thanks.
Why is buildbot *NOT* failing when it should?
9,149,601
2
2
484
0
python,linux,buildbot
The default for flunkOnFailure is False in BuildStep. Various subclasses override this default, in particular ShellCommand. I would guess that the particular steps that are red, with the final result of the build being green, don't have flunkOnFailure set. On the other hand, it could be that haltOnFailure isn't set, so other steps are running and succeeding, but that the overall result of the build is still failure. The steps that succeed will still be green, even if they follow a failing step. In particular, the body of the waterfall page doesn't indicate whether a particular build succeeded or failed, overall (although the boxes along the top indicate the result of the most recent build. Either the grid or recent-build page will show the results of builds clearly.
0
1
0
0
2011-12-05T14:23:00.000
2
0.197375
false
8,386,715
0
0
0
2
I'm trying to fix a very complex buildbot-based build system, which has the annoying habit of showing green bars with 'failed (1)' in them. The problem is that we run several commands using the ShellCommand build step, which is not failing the whole build when it returns non-zero. We also have steps which do show up red on the detail page, but the whole build still shows green. As far as I know, 'flunkOnFailure' is not set on the steps themselves in my master.cfg, and the default is true. (Although that's not entirely clear from the manual pages I have found.) What do I need to do (or undo) to ensure that an entire build fails when a ShellCommand does? This is running in a 100% Linux environment. Many thanks.
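A master.cfg fragment illustrating the fixes suggested in the answers; the command is a placeholder. Setting both flags explicitly on each step sidesteps the subclass-dependent defaults mentioned in the second answer.

```python
# Fragment of a buildbot master.cfg (placeholder command and factory).
from buildbot.process import factory
from buildbot.steps.shell import ShellCommand

f = factory.BuildFactory()
f.addStep(ShellCommand(
    command=["./run_tests.sh"],   # placeholder: the shell command under test
    haltOnFailure=True,           # stop running later steps when this one fails
    flunkOnFailure=True,          # mark the overall build result as FAILURE
))
```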
ZMQ pub/sub reliable/scalable design
8,394,573
4
4
1,589
0
python,publish-subscribe,zeromq,messagebroker
It seems like most of the complexity stems from trying to make the broker service persist in the event of a failure. Solving this at the application level gives you the highest degree of flexibility, but requires the most effort if you're starting from scratch. Instead of handling this at the application level, you could instead handle this at the network level. Treat your brokers as you would any other simple network service and use an IP failover mechanism (e.g., pacemaker/corosync, UCARP, etc) to fail a virtual ip address over to the secondary service if the primary becomes unavailable. This greatly simplifies your publishers and subscribers, because you don't need a name service. They only need to know about the single virtual ip address. ZMQ will take care of reconnecting to the service as necessary (i.e., when a failover occurs).
0
1
0
0
2011-12-06T01:06:00.000
2
0.379949
false
8,394,076
0
0
0
2
I'm designing a pub/sub architecture using ZMQ. I need maximum reliability and scalability and am kind of lost in the hell of possibilities provided. At the moment, I've got a set of publishers and subscribers, linked by a broker. The broker is a simple forwarder device exposing a frontend for publishers and a backend for subscribers. I need to handle the case when the broker crashes or disconnects, and improve the overall scalability. Okay, so I thought of adding multiple brokers; the publishers would round-robin the broker to send messages to, and the subscribers would just subscribe to all these brokers. Then I needed a way to retrieve the list of possible brokers, so I wrote a name service that provides a list of brokers on demand. Publishers and subscribers ask this service which brokers to connect to. I also wrote a kind of "lazy pirate" (i.e. try/retry one after the other) reliable name service in case the main name service fails. I'm starting to think that I'm designing it wrong, since the codebase is non-stop increasing in size and complexity. I'm lost in the jungle of possibilities provided by ZMQ. Maybe something router/dealer based would be usable here? Any advice greatly appreciated!
ZMQ pub/sub reliable/scalable design
8,400,389
7
4
1,589
0
python,publish-subscribe,zeromq,messagebroker
It's not possible to answer your question directly because it's predicated on so many assumptions, many of which are probably wrong. You're getting lost because you're using the wrong approach. Consider 0MQ as a language, one that you don't know very well yet. If you start by trying to write "maximum reliability and scalability", you're going to end up with Godzilla's vomit. So: use the approach I use in the Guide. Start with a minimal solution to the core message flow and get that working properly. Think very carefully about the right kind of sockets to use. Then make incremental improvements, each time testing fully to make sure you understand what is actually going on. Refactor the code regularly, as you find it growing. Continue until you have a stable minimal version 1. Do not aim for "maximum" anything at the start. Finally, when you've understood the problem better, start again from scratch and again, build up a working model in several steps. Repeat until you have totally dominated the problem and learned the best ways to solve it.
0
1
0
0
2011-12-06T01:06:00.000
2
1.2
true
8,394,076
0
0
0
2
I'm designing a pub/sub architecture using ZMQ. I need maximum reliability and scalability and am kind of lost in the hell of possibilities provided. At the moment, I've got a set of publishers and subscribers, linked by a broker. The broker is a simple forwarder device exposing a frontend for publishers and a backend for subscribers. I need to handle the case when the broker crashes or disconnects, and improve the overall scalability. Okay, so I thought of adding multiple brokers; the publishers would round-robin the broker to send messages to, and the subscribers would just subscribe to all these brokers. Then I needed a way to retrieve the list of possible brokers, so I wrote a name service that provides a list of brokers on demand. Publishers and subscribers ask this service which brokers to connect to. I also wrote a kind of "lazy pirate" (i.e. try/retry one after the other) reliable name service in case the main name service fails. I'm starting to think that I'm designing it wrong, since the codebase is non-stop increasing in size and complexity. I'm lost in the jungle of possibilities provided by ZMQ. Maybe something router/dealer based would be usable here? Any advice greatly appreciated!
How to detect an infinite loop in a monitored process
8,398,787
3
1
4,474
0
python,monitoring,infinite-loop
The only way to detect an infinite loop is to include in the loop itself a test for the conditions that would cause it never to end. For example: if your loop is supposed to make a variable decrease until it reaches zero (var == 0 would be the exit condition), you should include a test for what I would call the "proper working condition". In this example this would be: var < var_of_previous_iteration. Another (less deterministic) way to catch infinite loops could be to include a timer and trigger an exception if the loop lasts longer than a given time limit [this is an ugly hack though, as execution speed could be affected, for example, by the system being busy doing something else]. HTH!
0
1
0
0
2011-12-06T10:33:00.000
3
0.197375
false
8,398,444
1
0
0
2
I'm using Python (winappdbg) to monitor a process (the main feature is to catch the exceptions). But I would like also to detect infinite loops. Do you know a way to do that with Python? With or without winappdbg ...
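The timer idea from the answer can be sketched as a small watchdog that the monitored loop must "feed" every iteration: if no progress is reported within the limit, a flag trips. The class name, API, and time limit are illustrative; in the winappdbg scenario the beat could be driven by any observable progress event from the monitored process.

```python
import threading

class LoopWatchdog:
    """Trip a flag if the monitored loop stops making progress for `limit` seconds."""

    def __init__(self, limit):
        self.limit = limit
        self.timer = None
        self.tripped = threading.Event()   # set when the loop looks stuck

    def beat(self):
        """Call once per loop iteration: cancels and restarts the countdown."""
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.limit, self.tripped.set)
        self.timer.daemon = True
        self.timer.start()

    def stop(self):
        """Call when the loop exits normally, so the watchdog never fires."""
        if self.timer is not None:
            self.timer.cancel()
```

As the answer notes, this is heuristic: a slow but healthy iteration (e.g. on a busy system) can trip the flag just like a genuine infinite loop, so the limit needs generous margin.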
How to detect an infinite loop in a monitored process
8,399,080
1
1
4,474
0
python,monitoring,infinite-loop
Infinite loops usually consume 100% CPU while well-behaving programs don't, so the first thing I'd do is check CPU usage. Unfortunately, this won't let you identify where the infinite loop is in your code. To do that, you could use a profiler to record the number of times the code is being executed. If you find a really huge number of executions in an unexpected region, then it's worth at least investigating it. Edit: As pointed out by mac, monitoring CPU usage won't be useful for CPU-intensive tasks, so it's not something that can be applied in all cases.
0
1
0
0
2011-12-06T10:33:00.000
3
0.066568
false
8,398,444
1
0
0
2
I'm using Python (winappdbg) to monitor a process (the main feature is to catch the exceptions). But I would like also to detect infinite loops. Do you know a way to do that with Python? With or without winappdbg ...
Where does the initial sys.path come from
8,405,986
4
4
649
0
python,import,path
It comes from the python-support package, specifically from the /usr/lib/python2.7/dist-packages/python-support.pth file that is installed. There shouldn't be any modules installed to that directory manually and any package installing modules to that directory should have a dependency on the python-support package, so you shouldn't have to worry about whether it is in sys.path or not.
0
1
0
1
2011-12-06T19:48:00.000
1
1.2
true
8,405,855
0
0
0
1
I'm trying to figure out where does the initial sys.path value come from. One ubuntu system suddenly (by which I mean probably manually by someone doing something weird) lost entries at the end of the array. All other hosts: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7'] That host: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] The /usr/lib/pymodules/python2.7 path is the one I actually care about. But where does it come from on the healthy nodes?
In Python how to call subprocesses under a different user?
8,420,171
2
1
2,331
0
python,subprocess,multiprocessing
You could look in the direction of os.setpgid(pid, pgrp).
0
1
0
0
2011-12-07T17:17:00.000
2
0.197375
false
8,419,558
1
0
0
1
For a Linux system, I am writing a program in Python which spawns child processes. I am using the "multiprocessing" library and I am wondering if there is a method to call sub-processes with a different user than the current one. I'd like to be able to run each subprocess as a different user (like Postfix does, for example). Any ideas or pointers?
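A sketch of one common way to do this with the standard subprocess module rather than multiprocessing. Assumptions: a POSIX system, and the parent has the privileges needed to switch to the target user (normally root, except when "switching" to yourself):

```python
import os
import pwd
import subprocess

def demote_to(username):
    """Return a callable that drops to the given user's uid/gid;
    it runs in the child between fork() and exec()."""
    record = pwd.getpwnam(username)
    def set_ids():
        os.setgid(record.pw_gid)  # drop group first, while still privileged
        os.setuid(record.pw_uid)
    return set_ids

def run_as(username, cmd):
    # preexec_fn is POSIX-only and not safe in multi-threaded programs
    return subprocess.check_output(cmd, preexec_fn=demote_to(username))
```

Switching to a user other than your own requires the parent process to run as root.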
Python - Read in binary file over SSH
8,425,184
0
1
1,242
0
python,ssh,pexpect
If it's a short file you can get the output of an ssh command using subprocess.Popen: ssh root@ip_address_of_the_server 'cat /path/to/your/file' Note: Passwordless setup using keys should be configured in order for it to work.
0
1
0
1
2011-12-08T01:35:00.000
2
0
false
8,425,089
0
0
0
1
With Python, I need to read a file into a script similar to open(file,"rb"). However, the file is on a server that I can access through SSH. Any suggestions on how I can easily do this? I am trying to avoid paramiko and am using pexpect to log into the SSH server, so a method using pexpect would be ideal. Thanks, Eric
How does virtualenv work?
21,462,720
20
62
18,494
0
python,virtualenv
First the user creates a new virtualenv with the command virtualenv myenv. This creates a directory called myenv and copies the system python binary to myenv/bin. It also adds other necessary files and directories to myenv, including a setup script in bin/activate and a lib subdirectory for modules and packages. Then the user sources the activate script with . myenv/bin/activate, which sets the shell’s PATH environment variable to start with myenv/bin. Now when the user runs python from this shell, it will execute the copy of the binary stored in myenv/bin. Even though the binary is identical to the one in /usr/bin/python, the standard python binary is designed to search for packages and modules in directories that are relative to the binary’s path (this functionality is not related to virtualenv). It looks in ../lib/pythonX.Y where X and Y are the major and minor version numbers of the python binary. So now it is looking in myenv/lib/pythonX.Y. The myenv/bin directory also contains a script named pip so that when the user installs new packages using pip from the virtualenv, they will be installed in myenv/lib/pythonX.Y
0
1
0
0
2011-12-08T07:39:00.000
2
1
false
8,427,709
1
0
0
1
I checked the activate script and it looks to me all it does is: set the VIRTUAL_ENV env variable append $VIRTUAL_ENV/bin in front of PATH How does virtualenv provide that magical virtual environment with just these? What am I missing?
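A small sketch that follows from the mechanics described in the answer: because the copied binary computes everything relative to its own prefix, you can detect a virtualenv by comparing prefixes (sys.base_prefix is the Python 3 venv spelling; old virtualenv set sys.real_prefix):

```python
import sys

def in_virtualenv():
    """True when the running interpreter's prefix differs from the base
    installation it was copied/symlinked from."""
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base

print("virtualenv active:", in_virtualenv())
```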
Python Pip vs Ruby Gems
8,507,138
0
5
3,102
0
python,ruby,rubygems,pip
I think you should describe the specific gem/debian problem you're hitting and what you are trying to do with it. I am using pip and debian now and have had no problems so far.
0
1
0
1
2011-12-08T16:04:00.000
1
0
false
8,433,881
0
0
1
1
I mostly do work in Python, but I have been using some of the Ruby stuff for Server Configuration Management (ie Puppet and Chef). I also use Ubuntu/Debian as my primary Linux distro for servers. Why is there a weird Debian/Ruby conflict over Gems, and not a similar showdown between Debian/Python over Pip? Personally, I don't mind installing newer packages then the "system" approves of. I know Debian wants to make a stable system, but when I am running my own application code on the server, I can guarantee you it's not stable to begin with. Anyway, I would be interested to know if Pip is doing something different, or if it's an ego thing or whatever?
Compound custom service server using Twisted
8,445,796
3
1
216
0
python,twisted
Serve all those services needed Yes. Do it in a non-blocking fashion (it should, according to docs, but if someone could elaborate, I'd be grateful) Twisted uses the common reactor model. I/O goes through your choice of poll, select, whatever to determine if data is available. It handles only what is available, and passes the data along to other stages of your app. This is how it is non-blocking. I don't think it provides non-blocking disk I/O, but I'm not sure. That feature is not what most people need when they say non-blocking. Be able to serve about a few hundred clients at once Yes. No. Maybe. What are those clients doing? Is each hitting refresh every second on a browser making 100 requests? Is each one doing a numerical simulation of galaxy collisions? Is each sending the string "hi!" to the server, without expecting a response? Twisted can easily handle 1000+ requests per second. Serve large file downloads in a reasonable way, meaning that it can serve multiple clients, using multiple services, downloading and uploading large files. Sure. For example, the original version of BitTorrent was written in Twisted.
0
1
0
1
2011-12-09T10:14:00.000
1
1.2
true
8,443,994
0
0
0
1
I have an interesting project going on at our workplace. The task that stands before us is this: Build a custom server using Python It has a web server part, serving REST It has an FTP server part, serving files It has an SMTP part, which receives mail only and last but not least, it has a background worker that manages lowlevel file IO based on requests received from the above mentioned services Obviously the go-to place was the Twisted library/framework, which is an excellent networking tool. However, studying the docs further, a few things came up that I'm not sure about. Having a Java background, I would solve the task (at least at the beginning) by spawning a separate thread for each service and going from there. Being in Python however, I cannot do that for any reasonable purpose as Python has the GIL. I'm not sure how Twisted handles this. I would expect that Twisted has a large part (if not the majority) of its code written in C, where the GIL is not an issue, but I couldn't find that explained in the docs to my satisfaction. So the most outstanding question is: Given that Twisted uses the Reactor as its main design pattern, will it be able to: Serve all those services needed Do it in a non-blocking fashion (it should, according to docs, but if someone could elaborate, I'd be grateful) Be able to serve about a few hundred clients at once Serve large file downloads in a reasonable way, meaning that it can serve multiple clients, using multiple services, downloading and uploading large files. Large files being in the order of hundreds of MB, or a few GB. The size is not important, it's the time that the client has to stay connected to the server that matters. Edit: I'm actually inclined to go the way of python multiprocessing, but not sure, whether that's a correct thing to do with Twisted etc.
Append system python installation's sys.path to my personal python installation
8,454,391
1
0
717
0
python,ubuntu,module
Python is going to check whether a $PYTHONPATH environment variable is set. Use that for the path of your other modules: export PYTHONPATH="path:locations"
0
1
0
0
2011-12-10T04:15:00.000
2
0.099668
false
8,454,339
1
0
0
1
I have a system python installation and a personal python installation in my home directory. My personal python comes in ahead in my $PATH and I usually run that. But the system python installation has some modules that I want to use with my personal python installation. So, basically, I want to append the sys.path of the system python installation to the sys.path of the personal python installation. I read up on the docs and source of site module and saw that I could use the sitecustomize or usercustomize modules to do this. But where I am stuck is how do I get the sys.path of the system python to be appended to the personal python's sys.path. The gensitepackages function in the site modules seems to calculate the paths to be added to sys.path but it is using the PREFIXES global variable instead of taking it as an argument, so for all I know, I can't use it. Adding system python's prefixes to PREFIXES is also not an option as by the time the customize module(s) are loaded, the PREFIXES is already used to build the path. Any ideas on how to go about this? Also, I'm not sure if I should ask this on askubuntu/unix&linux. Comments? Edit: Guess I wasn't clear on this part. I want the system python's path to be appended so that when I try to use modules that are not present in my personal python, it will automatically fallback to the system python's modules.
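One way to implement the fallback described above, as a hedged sketch for a sitecustomize.py in the personal install: ask the system interpreter (the path below is a placeholder) for its sys.path and append whatever is missing, so imports fall back to its modules:

```python
import json
import subprocess
import sys

def append_other_interpreters_path(other_python):
    """Append another interpreter's sys.path entries to this one's,
    so imports fall back to its modules when ours are missing."""
    out = subprocess.check_output(
        [other_python, "-c", "import sys, json; print(json.dumps(sys.path))"])
    for entry in json.loads(out):
        if entry and entry not in sys.path:
            sys.path.append(entry)  # append, so personal modules keep priority

# In sitecustomize.py you might call:
# append_other_interpreters_path("/usr/bin/python")
```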
most performatic free database for file system tracking
8,463,655
0
0
1,214
0
python,database,database-performance,pyinotify
You could try Redis. It is most certainly fast. But really, since you're tracking a filesystem, and disks are slow as snails in comparison to even a medium-fast database, performance shouldn't be your primary concern.
0
1
0
1
2011-12-11T01:36:00.000
3
0
false
8,461,306
0
0
0
1
I'm tracking a linux filesystem (which could be any type) with the pyinotify module for python (it's actually the linux kernel doing the job behind the scenes). Many directories/folders/files (as many as the user wants) are being tracked with my application and now I would like to track the md5sum of each file and store them in a database (including every move, rename, new file, etc). I guess that a database should be the best option to store all the md5sums... But what would be the best database for that? Certainly a very performant one. I'm looking for a free one, because the application is gonna be GPL.
How to evenly distribute tasks among nodes with Celery?
8,479,823
2
0
1,269
0
python,django,celery,django-celery
It's actually easy: you start one Celery instance per EC2 instance and set concurrency to the number of cores per EC2 instance. Now the tasks don't interfere and distribute nicely among your instances. (The above assumes that your tasks are CPU bound.)
0
1
0
0
2011-12-12T18:51:00.000
1
0.379949
false
8,479,320
0
0
1
1
I am using Celery with Django to manage a task queue and using one (or more) small (single core) EC2 instances to process the tasks. I have some considerations. My task eats 100% CPU on a single core - it uses whatever CPU is available but only in one core. If 2 tasks are in progress on the same core, each task will be slowed down by half. I would like to start each task ASAP and not let it be queued. Now say I have 4 EC2 instances, I start celery with "-c 5", i.e. 5 concurrent tasks per instance. In this setup, if I have 4 new tasks, I'd like to ensure each of them goes to a different instance, rather than 4 going to the same instance and each task fighting for CPU. Similarly, if I have 8 tasks, I'd like each instance to get 2 tasks at a time, rather than 2 instances processing 4 tasks each. Does celery already behave the way I described? If not then how can I make it behave as such?
python subprocess non-blocking and returning output
8,483,908
0
0
2,406
0
python,subprocess
If you specify stdout=PIPE, then your subprocess will write to the pipe and hang when the pipe buffer is full. The python program shouldn't hang - Popen is asynchronous, which is why Popen.wait() can be called later to wait for the subprocess to exit. Read from Popen.stdout in order to keep the subprocess happy, and print, discard, or process the output as you see fit.
0
1
0
0
2011-12-13T02:33:00.000
2
0
false
8,483,654
0
0
0
1
I know this has been asked a lot of times but I've yet to find a proper way of doing this. If I want to run a local command the docs say I have to use subprocess as it's replacing all other methods such as os.system/popen etc. If I call subprocess.Popen(command, shell=True, stdout=subprocess.PIPE) in my program and the command is for example an openvpn directive which connects my computer to a VPN, the process will hang indefinitely since openvpn returns its output ending with a new line but stays there while connected, and so does my program (frozen). Some say I should remove the stdout=subprocess.PIPE, which indeed works in a non-blocking way, but then everything gets printed to the console instead of me having some sort of control over the output (maybe I don't want to print it). So is there some sort of proper way of doing this, an example maybe of executing commands in a non-blocking way while also having control over the output?
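A sketch of the read-as-you-go pattern both answers point at: keep stdout=PIPE, but consume the pipe line by line instead of waiting at the end, so the child never blocks on a full buffer and you decide what happens to each line:

```python
import subprocess
import sys

def stream_command(cmd, handle_line):
    """Run cmd, invoking handle_line for every line of output as it
    arrives. Returns the exit status once the command finishes."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:       # yields lines as the child produces them
        handle_line(line.rstrip("\n"))
    return proc.wait()
```

For a long-lived child like openvpn you would typically run this loop in a background thread and push the lines onto a queue.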
Sharing util modules between actively developed apps
8,487,730
3
4
250
0
python
You can take advantage of Python paths (the paths searched when looking for a module to import). Thus you can create a separate directory for utils and keep it in a different repository than the projects that use these utils. Then include the path to this repository in PYTHONPATH. This way if you write import mymodule, it will eventually find mymodule in the directory containing the utils. So, basically, it will work similarly to how it works for standard Python modules. This way you will have one repository for the utils (or a separate one for each util, if you wish), and separate repositories for other projects, regardless of the version control system you use.
0
1
0
1
2011-12-13T09:34:00.000
2
1.2
true
8,486,942
0
0
0
1
We have a growing library of apps depending on a set of common util modules. We'd like to: share the same utils codebase between all projects allow utils to be extended (and fixed!) by developers working on any project have this be reasonably simple to use for devs (i.e. not a big disruption to workflow) cross-platform (no diffs for devs on Macs/Win/Linux) We currently do this "manually", with the utils versioned as part of each app. This has its benefits, but is also quite painful to repeatedly fix bugs across a growing number of codebases. On the plus side, it's very simple to deal with in terms of workflow - util module is part of each app, so on that side there is zero overhead. We also considered (fleetingly) using filesystem links or some such (not portable between OS's) I understand the implications about release testing and breakage, etc. These are less of a problem than the mismatched utils are at the moment.
How can I launch ipython from shell, by running 'python ...'?
8,609,983
1
26
25,038
0
python,shell,command-line,python-3.x,ipython
I think you mean something like python C:\Python27\Scripts\ipython-script.py
0
1
0
0
2011-12-13T15:01:00.000
6
0.033321
false
8,491,323
1
0
0
1
I would like to add some commandline options to a python launch code in order to actually invoke an ipython shell. How do I do that?
using python: how create firewall to block/drop network packets
8,500,799
0
0
1,331
0
python,firewall,iptables
Short answer: no. Iptables is a command line tool for controlling netfilter, a linux kernel module for filtering network packets. This is only for Linux. Windows has its own kernel approach, and the same goes for BSD* systems. You might find some tools written in Python for controlling one of them, but not all :) Feel free to start a new project !
0
1
0
0
2011-12-14T04:22:00.000
1
0
false
8,499,462
0
0
0
1
Like the question stated... How to use python to block/drop packets from a blacklisted host (MAC address) ... or more specifically, ARP packets. I know that *inux has utilities like "iptables" that can perform this ... are there any modules or solutions in python that work on windows, or on both *inux and windows? thanks ...
Tornadoweb webapp cannot be managed via upstart
8,516,641
0
1
417
0
python,daemon,tornado,upstart
There are two often used solutions. The first one is to let your application honestly report its pid. If you could force your application to write the actual pid into the pidfile then you could get its pid from there. The second one is a little more complicated. You may add a specific environment variable for the script invocation. This environment variable will stay with all the forks if the forks don't clear the environment, and then you can find all of your processes by parsing /proc/*/environ files. There may be an easier solution for finding processes by their environment, but I'm not sure.
0
1
0
0
2011-12-14T14:22:00.000
2
0
false
8,506,002
0
0
1
2
Few days ago I found out that my webapp written on top of the tornadoweb framework doesn't stop or restart via upstart. Upstart just hangs and doesn't do anything. I investigated the issue and found that upstart receives the wrong PID, so it can only run my webapp daemon once and can't do anything else. Strace shows that my daemon makes 4 (!) clone() calls instead of 2. A week ago everything was fine and the webapp was fully and correctly managed by upstart. OS is Ubuntu 10.04.03 LTS (as it was weeks ago). Do you have any ideas how to fix it? PS: I know about the "expect fork|daemon" directive, it changes nothing ;)
Tornadoweb webapp cannot be managed via upstart
10,837,488
1
1
417
0
python,daemon,tornado,upstart
Sorry for my silence. Investigation of the issue ended with the discovery that the uuid Python library adds 2 forks to my daemon. I got rid of this lib and the tornado daemon now works properly. An alternative answer was supervisord, which can run any console tool as a daemon even if it can't daemonize by itself.
0
1
0
0
2011-12-14T14:22:00.000
2
1.2
true
8,506,002
0
0
1
2
Few days ago I found out that my webapp written on top of the tornadoweb framework doesn't stop or restart via upstart. Upstart just hangs and doesn't do anything. I investigated the issue and found that upstart receives the wrong PID, so it can only run my webapp daemon once and can't do anything else. Strace shows that my daemon makes 4 (!) clone() calls instead of 2. A week ago everything was fine and the webapp was fully and correctly managed by upstart. OS is Ubuntu 10.04.03 LTS (as it was weeks ago). Do you have any ideas how to fix it? PS: I know about the "expect fork|daemon" directive, it changes nothing ;)
Best approach to filter a large dataset depending on the datastore
8,513,090
2
2
183
0
python,google-app-engine
I assume it's so slow because you're doing a query for each user you're looking up. You can avoid the need to do this with good use of key names. For each user in your database, insert entities with their key name set to the unique identifier for the social network. These can be the same entities you're already using, or new 'index' entities created just for this purpose. When sent a list of identifiers, simply do a bulk get operation for all the key names of that entity to identify if they exist - eg, by doing MyKind.get_by_key_name(key_names).
0
1
0
0
2011-12-14T16:51:00.000
1
1.2
true
8,508,346
0
0
1
1
I'm currently implementing a web service providing the social features of a game. One of this game's features is the ability to manage a friends list. The friends the user can add depend on the contacts he has on an external social network of his choice (currently Facebook or Twitter). The current behavior of the system is the following: The client application uses the social network (Facebook or Twitter) API to retrieve the contact list of the player. Each of these contacts is provided with a unique identifier (namely, the social network he's originating from, and his identifier on this social network, for example "Tw12345"). The client sends the list of all those identifiers to the game web service hosted on GAE. The web service checks for each identifier if it has a matching user in its own database. It returns a list of identifiers, filtered to contain only those who also have a match in the game database. It obviously doesn't work well, because most users' contact lists are huge. The server is spending a tremendous amount of time checking the database to filter which contacts have a matching game account. Now, I'm having a hard time figuring out how I can proceed more efficiently. As the identifiers aren't following any given order, I can't use integer operations to select users on the database. Also, I can't rely on Twitter or Facebook to do the filtering on their side, because that's not supported by their API. I thought of a system using some kind of memcached data tree to store a list of "known" identifiers (as the query only needs to know that there's a matching user, not which user exactly is matching), but I'm afraid of the time the cache will take to build up anytime it gets cleared. If any of you have experience with this kind of set-related trouble, I'll be very happy to hear it! Thanks!
Prompting user on package install "pip install "
8,533,282
2
1
1,415
0
python,stdout,pip,distutils
Don't do that. If it is absolutely necessary to have information from the user during the install, ask for an environment variable to be set, and fail if it is not set. Better yet, require a plain text configuration file to run your module - and set it with default values during the install. Don't try to make an interactive session needed during the install, because the idea of pip and easy_install is that they also install the prerequisites of a package - so they may install a lot of packages in a batch. The user will just expect pip install to do its job, and an unexpected interactive prompt will ruin automated installs, prerequisite chains, buildout installs, remote installs, and so on.
0
1
0
0
2011-12-16T09:48:00.000
1
0.379949
false
8,532,369
1
0
0
1
I've created a tar.gz of a package that includes a setup.py file. setup.py uses the setup() function provided in distutils.core. I want to prompt the user when they run "pip install .tar.gz". Unfortunately, it looks like pip redirects all stdout and stderr of the "python setup.py install" command through a special log filter, which reads stdout line by line. This means I can't have a prompt such as "Email: ..." since "Email: " will not get printed until after the user has pressed enter. Also, the log filter indents every line of output, which is not ideal.
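A sketch of the environment-variable approach suggested in the answer. The variable and package names here are made up for illustration; the idea is to fail fast in setup.py instead of prompting:

```python
import os
import sys

def require_env(name):
    """Abort the install with a clear message if a required setting was
    not provided via the environment (prompts break batch installs)."""
    value = os.environ.get(name)
    if not value:
        sys.exit("error: set %s before running pip install, e.g. "
                 "%s=you@example.com pip install mypkg" % (name, name))
    return value

# In setup.py, before calling setup():
# email = require_env("MYPKG_EMAIL")
```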
How to switch execution to a new script in Python?
8,540,114
0
1
118
0
python,exec,execute
Since you're downloading another Python script, you could try using exec to run it. In 2.x there is a helper function execfile for this sort of thing (which wraps the use of the exec keyword); in 3.x you must read the file and pass the resulting string to exec (which is now a function, if I'm reading correctly). You must always be 110% sure you trust the content in cases like this! It seems that this isn't an issue for you, though.
0
1
0
0
2011-12-16T19:14:00.000
2
0
false
8,539,024
1
0
0
1
Is it possible to have one script call another and then exit (don't know if I'm putting this right), leaving the other script running? Specifically I want an update script to download installer.py and then run it. Since installer.py overwrites the update script I can't just do a subprocess.call() since it will fail at the 'delete old install' stage. Is this possible or must I instead leave the updater script alone, replace everything else, put the new one in a temporary directory and then replace it next time the program is run? Is this considered a better approach? Thank you very much and sorry if this is blindingly obvious.
Problems juggling system python and python 27 on Mac Snow Leopard
8,541,671
1
2
335
0
python,macos,psycopg2
The best way to solve this is to use virtualenv. When you create a virtual environment, you have the option (-p) of specifying which Python executable you want. Once you're in the virtualenv, you don't have to worry about it at all, and all your regular commands (including pip) will refer to the proper Python executable, site packages, libraries, etc. E.g. if you want to create a virtualenv for MacPort's Python 2.7, you can do: $ virtualenv -p /opt/local/bin/python2.7 myvirtualenv
0
1
0
0
2011-12-16T19:24:00.000
2
1.2
true
8,539,146
0
0
0
1
I had a lot of problems getting geoDjango installed on mac os x snow leopard. After the dust settled, I realized that the system install of python (2.6) was on the receiving end of psycopg2. Python 2.7 is on the system path. And when I invoke python in the terminal, python 2.7 is the one that is fired up. But if I do an easy_install, or mac port install of psycopg2, it doesn't get installed to Libraries/framework.python/verisons/2.7/lib or bin (I am working from memory here). I do however find a copy installed in the system install of python 2.6. How do I get things like mac port and easy_install to target python 2.7 and ignore the system python?
B2B App authentication on GAE - Google Accounts or custom user base (Django or Web2Py)?
8,566,440
1
1
388
0
python,django,google-app-engine,web2py
Why not both? Create Django users from Google users. You will then be able to adapt your user system to other systems later.
0
1
0
0
2011-12-19T19:28:00.000
4
0.049958
false
8,566,399
0
0
1
1
Which of these would you pick for a B2B app (targeting small/med-small businesses) built on GAE with python: Google Accounts Custom Users with Django Custom Users with Web2Py I'm very tempted to go the Google Accounts route as it's very well integrated into GAE and takes care of everything from user creation to session authentication, and even takes care of forgotten passwords. However, I'm sure there are significant drawbacks to this, including usability, but if you are starting from scratch, which approach would you pick and why?
Python main control
8,569,449
2
1
114
0
python,date,time,controls,daemon
cron can easily handle that timing (although 2 entries will be required), so unless you have extreme low-latency requirements it's best to have it invoke the script on demand.
0
1
0
0
2011-12-20T00:34:00.000
3
0.132549
false
8,569,433
0
0
0
1
I've run into an issue and am looking for guidance from a few veterans. I've written a program in python that I'd like to run only periodically. I'm going to upload it to my sever, and what I'd like for it to do is to run every Monday through Friday, and every 5 minutes between 9:30 and 4. Basically I've written modules to query the market, and evaluate securities that I own. I don't want to tax the servers, so every 5 minutes should be fine. What I want is some advice on how I should arrange the main sequence. Should I run the program from a continuous loop that just checks the time? Or should I run the code, scheduled from a daemon? Thoughts?
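The two crontab entries the answer mentions could look roughly like this (the script path is a placeholder; these cover 9:30–15:55 Mon–Fri, so add a third line for 16:00 exactly if needed):

```
# minute  hour  day-of-month month day-of-week  command
30-55/5   9     *            *     1-5          /usr/bin/python /path/to/check_market.py
*/5       10-15 *            *     1-5          /usr/bin/python /path/to/check_market.py
```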
How to make Eclipse inherit $PYTHONPATH on Mac OS X
8,658,155
0
0
603
0
python,eclipse,pythonpath
You need to configure it inside Eclipse. Now, provided you have a shell that has the PYTHONPATH properly set up, you should be able to start Eclipse from it, and when adding an interpreter (you may want to remove your currently configured interpreter), it should automatically pick up all of the PYTHONPATH you had set up in the shell (some of those may be unchecked in the wizard at that time, so you have to go on and check those too -- just don't add the paths of files you'll be editing inside your project, as those should be added to the PYTHONPATH for your project so that PyDev is able to track changes on those files to properly offer you code completion).
0
1
0
1
2011-12-20T00:43:00.000
1
0
false
8,569,486
0
0
0
1
I'm attempting to get Eclipse + PyDev set up so I don't need to alter the PYTHONPATH from within Eclipse, but rather it will inherit the PYTHONPATH from the .profile document from inside my home directory. Is that possible, or do I need to actually add the PYTHONPATH locations using Eclipse's PYTHONPATH editor? I ask because I am getting different errors when going from Terminal-based python to python in Eclipse, using the same files.
Is there an equivalent of a SQL View in Google App Engine Python?
8,570,245
5
2
106
1
python,sql,google-app-engine
Read-only views (the most common type) are basically queries against one or more tables to present the illusion of new tables. If you took a college-level database course, you probably learned about relational databases, and I'm guessing you're looking for something like relational views. The short answer is No. The GAE datastore is non-relational. It doesn't have tables. It's essentially a very large distributed hash table that uses composite keys to present the (very useful) illusion of Entities, which are easy at first glance to mistake for rows in a relational database. The longer answer depends on what you'd do if you had a view.
0
1
0
0
2011-12-20T02:21:00.000
2
1.2
true
8,570,066
0
0
1
1
I've been learning python by building a webapp on google app engine over the past five or six months. I also just finished taking a databases class this semester where I learned about views, and their benefits. Is there an equivalent with the GAE datastore using python?
Web service call in Python (Twisted + ZSI) not working in chroot jail
8,579,368
0
0
280
0
python,linux,web-services,twisted,chroot
If your application is raising an unexpected exception at some point - eg, because some dependency fails to import, because it is not installed in the chroot - then this can cause connections to be unexpectedly closed. It's hard to say with any precision, since you haven't mentioned what kind of connections you have or what APIs you're using to manage them. Make sure you have logging enabled and look for unexpected tracebacks being written to your log file. If you see any, there's a good chance they are associated with the problem that is causing your application to fail.
0
1
0
0
2011-12-20T16:29:00.000
1
0
false
8,578,664
0
0
0
1
I have a Python script that calls a web service using ZSI with Twisted. On Linux, I'm running this script and it works fine. Now, I want this script to run in a chroot jail which is somewhere in my filesystem. I have added the usr, lib and the etc directories in the jail. When I execute the script from the jail, there is no response from the web service and Twisted reports an error which looks like: [Failure instance: Traceback (failure with no frames): twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion. ] If I chroot to the root of the filesystem (/) and if the new jail uses the already existing usr, lib and etc directories, it works with no errors. I'm suspecting that there is a library that's missing or a library in the bin/usr/etc directories of the first chroot jail that is not correct. Do you have any clue that can help me? Does somebody have a solution to this problem?
python script exits when a shell command(to terminate a different tool) is run from os.system()
8,597,964
1
0
156
0
python,shell,operating-system
Is the script, by any chance, launched by that same tool? If yes, you need to run os.setsid() to stop being dependent on it.
0
1
0
0
2011-12-21T23:56:00.000
2
1.2
true
8,597,906
0
0
0
1
I am running a shell command from within my python script using os.system("some shell command") This command essentially terminates a tool. I need to check that this tool is terminated in my script. But as soon as the tool is terminated the script is terminated too!
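The answer suggests calling os.setsid() in the script itself; the same mechanism can be shown from the launching side with subprocess, where start_new_session=True makes the child call os.setsid() between fork and exec so the two processes no longer share a session:

```python
import os
import subprocess
import sys

def spawn_detached(cmd):
    """Start cmd in its own session, so session-wide signals (e.g. from
    terminating a parent tool) no longer couple the two processes."""
    return subprocess.Popen(cmd, start_new_session=True)
```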
How to keep global variables persistent over multiple google appengine instances?
8,617,102
0
2
2,158
0
python,google-app-engine,variables,global
Interesting question. Some bad news first: I don't think there's a better way of storing the data; no, you won't be able to stop new instances from spawning and no, you cannot make separate instances always have the same data. What you could do is have the instances periodically sync themselves with a master record in the datastore; by choosing the frequency of this intelligently and downloading/uploading the information in one lump you could limit the number of reads/writes to a level that works for you. This is firmly in kludge territory though. Despite finding the quota for just about everything else, I can't find the limits for free read/write, so it is possible that they're ludicrously small, but the fact that you're hitting them with a mere 10 smartphones raises a red flag to me. Are you certain that the smartphones are being polled (or calling in) at a sensible frequency? It sounds like you might be hammering them unnecessarily.
0
1
0
0
2011-12-23T13:10:00.000
4
0
false
8,616,487
0
0
1
3
Our situation is as follows: We are working on a school project where the intention is that multiple teams walk around in a city with smartphones and play a city game while walking. As such, we can have 10 active smartphones walking around in the city, all posting their location, and requesting data from the Google App Engine. Someone is behind a web browser, watching all these teams walk around, and sending them messages etc. We are using the datastore the Google App Engine provides to store all the data these teams send and request, to store the messages and retrieve them etc. However we soon found out we were at our max limit of reads and writes, so we searched for a solution to be able to retrieve periodic updates (which cost the most reads and writes) without using any of the limited resources Google provides. And obviously, because it's a school project we don't want to pay for more reads and writes. Storing this information in global variables seemed an easy and quick solution, which it was... but when we started to truly test we noticed some of our data was missing and then reappearing. Which turned out to be because there were so many requests being done to the cloud that a new instance was made, and instances don't keep these global variables persistent. So our question is: Can we somehow make sure these global variables are always the same on every running instance of Google App Engine. OR Can we limit the amount of instances ever running to '1', no matter how many requests are done. OR Is there perhaps another way to store this data in a better way, without using the datastore and without using globals.
How to keep global variables persistent over multiple google appengine instances?
8,617,199
0
2
2,158
0
python,google-app-engine,variables,global
Consider the Jabber (XMPP) protocol for communication between peers. Its free limits are quite generous.
0
1
0
0
2011-12-23T13:10:00.000
4
0
false
8,616,487
0
0
1
3
Our situation is as follows: we are working on a school project where the intention is that multiple teams walk around a city with smartphones and play a city game while walking. As such, we can have 10 active smartphones walking around the city, all posting their location and requesting data from Google App Engine. Someone is behind a web browser, watching all these teams walk around, sending them messages, etc. We are using the datastore that Google App Engine provides to store all the data these teams send and request, to store and retrieve the messages, etc. However, we soon found out we were at our maximum limit of reads and writes, so we searched for a solution to retrieve periodic updates (which cost the most reads and writes) without using any of the limited resources Google provides. And obviously, because it's a school project, we don't want to pay for more reads and writes. Storing this information in global variables seemed an easy and quick solution, which it was... but when we started to truly test, we noticed some of our data was missing and then reappearing. This turned out to be because there were so many requests being made to the cloud that a new instance was spawned, and instances don't keep these global variables persistent. So our question is: Can we somehow make sure these global variables are always the same on every running instance of Google App Engine? OR Can we limit the number of instances ever running to '1', no matter how many requests are made? OR Is there perhaps another way to store this data in a better way, without using the datastore and without using globals?
How to keep global variables persistent over multiple google appengine instances?
8,624,587
3
2
2,158
0
python,google-app-engine,variables,global
You should be using memcache. If you use the ndb (new database) library, you can automatically cache the results of queries. Obviously this won't improve your writes much, but it should significantly improve the number of reads you can do. You need to back it with the datastore, as data can be ejected from memcache at any time. If you're willing to take the (small) chance of losing updates, you could just use memcache. You could do something like store just a message ID in the datastore and have the controller periodically verify that every message ID has a corresponding entry in memcache. If one is missing, the controller would need to re-enter it.
0
1
0
0
2011-12-23T13:10:00.000
4
1.2
true
8,616,487
0
0
1
3
Our situation is as follows: We are working on a schoolproject where the intention is that multiple teams walk around in a city with smarthphones and play a city game while walking. As such, we can have 10 active smarthpones walking around in the city, all posting their location, and requesting data from the google appengine. Someone is behind a webbrowser,watching all these teams walk around, and sending them messages etc. We are using the datastore the google appengine provides to store all the data these teams send and request, to store the messages and retrieve them etc. However we soon found out we where at our max limit of reads and writes, so we searched for a solution to be able to retrieve periodic updates(which cost the most reads and writes) without using any of the limited resources google provides. And obviously, because it's a schoolproject we don't want to pay for more reads and writes. Storing this information in global variables seemed an easy and quick solution, which it was... but when we started to truly test we noticed some of our data was missing and then reappearing. Which turned out to be because there where so many requests being done to the cloud that a new instance was made, and instances don't keep these global variables persistent. So our question is: Can we somehow make sure these global variables are always the same on every running instance of google appengine. OR Can we limit the amount of instances ever running, no matter how many requests are done to '1'. OR Is there perhaps another way to store this data in a better way, without using the datastore and without using globals.
"Error: Python not found" when trying to access the android repository
8,617,691
2
1
593
0
android,python,git,repository
It seems to me that you should add Python to your PATH environment variable.
0
1
0
1
2011-12-23T14:31:00.000
1
0.379949
false
8,617,208
0
0
0
1
It gives me the error on line 23 of the repo file: exec: python: not found. The thing is, I have Python installed in C:\Python27 (the default). I'm using Git Bash when typing in these commands. I've tried to move the Python folder into the Git directory to run the repo file, and it still says the same thing. I've tried to run the Python interpreter and then run the repo file, but it says the same thing. Does anybody have any suggestions? I just want to download the Android source code through git and repo.
Python 2.7.2 and Google App Engine SDK 1.6.1 on Win 7 Home Premium not working
13,126,746
2
9
2,601
0
python,windows,google-app-engine
I was facing the same issue; the browse button was disabled. I ran the dev_appserver.py helloworld command at a command prompt and then opened localhost:8080 in my browser, and the hello world program ran successfully.
0
1
0
0
2011-12-23T23:25:00.000
8
0.049958
false
8,621,527
0
0
1
6
I have installed Python 2.7.2 (Win7 32-bit) and Google App Engine SDK 1.6.1 for Win7 on a 64-bit system running Win7 Home Premium. Default folder locations for both Python and GAE. When I try to run the helloworld project as described in the Google Python Getting Started doc, the Launcher's "browse" button never becomes active. The GAE SDK is supposed to do fine with Python 2.7. Is there a complete listing anywhere of environment variables needed for this setup to work? So far, all posts I have seen are from users who have gotten well past this absolutely basic step.
Python 2.7.2 and Google App Engine SDK 1.6.1 on Win 7 Home Premium not working
8,622,105
0
9
2,601
0
python,windows,google-app-engine
Do you see anything in the GAE SDK logs? Which browser are you using? What is your default browser? The default security settings in IE require you to enable intranet access. I recently had to rebuild my Win7 dev box. Chrome was my default browser. When I installed GAE SDK v1.6.1 I had a similar problem to what you describe. I checked logs and fiddled with the browser configuration to resolve it. My recollection was that once I made IE 9 my default browser again, I saw the intranet security error. After enabling access to intranet sites like localhost:8080, things started working OK, but start-up was sometimes slow. Then I made Chrome my default browser again and start-up became a bit faster and more reliable.
0
1
0
0
2011-12-23T23:25:00.000
8
0
false
8,621,527
0
0
1
6
I have installed Python 2.7.2 (Win7 32-bit) and Google App Engine SDK 1.6.1 for Win7 on a 64-bit system running Win7 Home Premium. Default folder locations for both Python and GAE. When I try to run the helloworld project as described in the Google Python Getting Started doc, the Launcher's "browse" button never becomes active. The GAE SDK is supposed to do fine with Python 2.7. Is there a complete listing anywhere of environment variables needed for this setup to work? So far, all posts I have seen are from users who have gotten well past this absolutely basic step.
Python 2.7.2 and Google App Engine SDK 1.6.1 on Win 7 Home Premium not working
11,284,009
0
9
2,601
0
python,windows,google-app-engine
I am quite sure that is because you changed the encoding of app.yaml from ANSI to another type (such as UTF-8). Change it back to ANSI, then you can run the project in Google App Engine Launcher. BTW, the helloworld tutorial from Google has no problem.
0
1
0
0
2011-12-23T23:25:00.000
8
0
false
8,621,527
0
0
1
6
I have installed Python 2.7.2 (Win7 32-bit) and Google App Engine SDK 1.6.1 for Win7 on a 64-bit system running Win7 Home Premium. Default folder locations for both Python and GAE. When I try to run the helloworld project as described in the Google Python Getting Started doc, the Launcher's "browse" button never becomes active. The GAE SDK is supposed to do fine with Python 2.7. Is there a complete listing anywhere of environment variables needed for this setup to work? So far, all posts I have seen are from users who have gotten well past this absolutely basic step.
Python 2.7.2 and Google App Engine SDK 1.6.1 on Win 7 Home Premium not working
17,632,351
0
9
2,601
0
python,windows,google-app-engine
I had a similar issue; it turned out my problem was not due to environment variables. Debugging GAE: First off let me say that if you are having problems with GAE, I would strongly recommend launching using the CLI, google_appengine/dev_appserver.py. There is a large stack trace of the reason GAE is failing (instead of simply a red link in the GAE Launcher GUI) that will point you in the right direction. Hidden bad characters: When copying the text from google's "hello world" tutorial, there was an invisible hidden character at the start of my YAML file (I found it using kdiff, a diff tool). After deleting this character, my app launched (and showed up as not red in the GAE Launcher GUI). Environment Variables: As to your original question, the only relevant environment variable I have set is my PATH variable, where I have appended the folder of my python executable (in my case C:\Python27) so that I can run python files without specifying the full path to Python. Let me repeat, however, that I do not believe this is the cause of your problem, but you can more directly confirm this using the CLI.
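One common culprit for the "invisible hidden character" described above is a UTF-8 byte-order mark at the start of app.yaml. A quick, generic way to check for and strip it (this is a sketch, not part of the SDK; the BOM is only one possible hidden character):

```python
import codecs

def has_bom(data):
    """True if the bytes start with the invisible UTF-8 byte-order mark."""
    return data.startswith(codecs.BOM_UTF8)

def strip_bom(data):
    """Return the bytes with any leading UTF-8 BOM removed."""
    return data[len(codecs.BOM_UTF8):] if has_bom(data) else data
```

Reading app.yaml in binary mode (`open("app.yaml", "rb")`), passing the bytes through `strip_bom`, and writing them back would clean the file without touching its visible content.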
0
1
0
0
2011-12-23T23:25:00.000
8
0
false
8,621,527
0
0
1
6
I have installed Python 2.7.2 (Win7 32-bit) and Google App Engine SDK 1.6.1 for Win7 on a 64-bit system running Win7 Home Premium. Default folder locations for both Python and GAE. When I try to run the helloworld project as described in the Google Python Getting Started doc, the Launcher's "browse" button never becomes active. The GAE SDK is supposed to do fine with Python 2.7. Is there a complete listing anywhere of environment variables needed for this setup to work? So far, all posts I have seen are from users who have gotten well past this absolutely basic step.
Python 2.7.2 and Google App Engine SDK 1.6.1 on Win 7 Home Premium not working
31,930,984
2
9
2,601
0
python,windows,google-app-engine
I compared the helloworld example to the guestbook demo and found that the application element was key. I added the line at the top of the app.yaml file "application: helloworld" and the helloworld example started working in the Google App Engine (GAE). Note that the 'application' element is supposed to be optional as defined in the app.yaml reference. It looks like it is optional if you use the command line, and it is not optional if you use GAE.
0
1
0
0
2011-12-23T23:25:00.000
8
0.049958
false
8,621,527
0
0
1
6
I have installed Python 2.7.2 (Win7 32-bit) and Google App Engine SDK 1.6.1 for Win7 on a 64-bit system running Win7 Home Premium. Default folder locations for both Python and GAE. When I try to run the helloworld project as described in the Google Python Getting Started doc, the Launcher's "browse" button never becomes active. The GAE SDK is supposed to do fine with Python 2.7. Is there a complete listing anywhere of environment variables needed for this setup to work? So far, all posts I have seen are from users who have gotten well past this absolutely basic step.
Python 2.7.2 and Google App Engine SDK 1.6.1 on Win 7 Home Premium not working
31,988,393
0
9
2,601
0
python,windows,google-app-engine
I made two changes together: 1. added the line "application: helloworld" at the top of the app.yaml file; 2. changed the last line in app.yaml from "script: helloworld.app" to "script: helloworld.py". My GAE started working. However, to isolate the issue I undid both changes; it turns out that the 2nd change (changing helloworld.app to helloworld.py) did the magic.
0
1
0
0
2011-12-23T23:25:00.000
8
0
false
8,621,527
0
0
1
6
I have installed Python 2.7.2 (Win7 32-bit) and Google App Engine SDK 1.6.1 for Win7 on a 64-bit system running Win7 Home Premium. Default folder locations for both Python and GAE. When I try to run the helloworld project as described in the Google Python Getting Started doc, the Launcher's "browse" button never becomes active. The GAE SDK is supposed to do fine with Python 2.7. Is there a complete listing anywhere of environment variables needed for this setup to work? So far, all posts I have seen are from users who have gotten well past this absolutely basic step.
Removing cocos2d-python from Mac
10,575,373
1
0
559
0
python,cocos2d-python
You could fix it. The problem comes from the fact that cocos2d is built on top of Pyglet, and the stable release of Pyglet does not yet support the Mac OS X 64-bit architecture. You have to use the 1.2 release of Pyglet or later, which as of now has not been released. A workaround is to remove any existing Pyglet install: pip uninstall pyglet Then install the latest Pyglet from the Mercurial repository: pip install hg+https://pyglet.googlecode.com/hg/
1
1
0
0
2011-12-24T18:25:00.000
1
1.2
true
8,626,180
0
0
0
1
I installed cocos2d today on OS X Lion, but whenever I try to import cocos in the Python interpreter, I get a bunch of import errors. File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/__init__.py", line 105, in <module> import_all() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/__init__.py", line 89, in import_all import actions File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/actions/__init__.py", line 37, in <module> from basegrid_actions import * File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/actions/basegrid_actions.py", line 62, in <module> from pyglet.gl import * File "build/bdist.macosx-10.6-intel/egg/pyglet/gl/__init__.py", line 510, in <module> File "build/bdist.macosx-10.6-intel/egg/pyglet/window/__init__.py", line 1669, in <module> File "build/bdist.macosx-10.6-intel/egg/pyglet/window/carbon/__init__.py", line 69, in <module> File "build/bdist.macosx-10.6-intel/egg/pyglet/lib.py", line 90, in load_library File "build/bdist.macosx-10.6-intel/egg/pyglet/lib.py", line 226, in load_framework File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypes/__init__.py", line 431, in LoadLibrary return self._dlltype(name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypes/__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) OSError: dlopen(/System/Library/Frameworks/QuickTime.framework/QuickTime, 6): no suitable image found. Did find: /System/Library/Frameworks/QuickTime.framework/QuickTime: mach-o, but wrong architecture /System/Library/Frameworks/QuickTime.framework/QuickTime: mach-o, but wrong architecture Since I can't fix it, I'd like to remove cocos2d entirely. The problem is that I can't seem to find a guide anywhere that details how to remove it from the Python installation.
Any help regarding either of these problems is greatly appreciated.
rvmsudo analog for python/virtualenv
8,662,790
1
0
191
0
python,rvm,virtualenv
One workaround is to use "sudo -E". This will preserve the calling user's environment across the sudo. Note that if an adversary controls your environment, this is an immediate root exploit (via LD_LIBRARY_PATH and similar).
0
1
0
0
2011-12-25T15:49:00.000
1
0.197375
false
8,630,305
1
0
1
1
As anyone familiar with what virtualenv does for Python will know, there is an analog for Ruby: rvm. What's interesting about the Ruby tooling is that there is an "rvmsudo" command that projects the current rvm environment onto the root/sudo user before executing the requested command. virtualenv does not offer an obvious implementation of the same command. Is there something I'm missing?
How to pause a python script running in terminal
8,630,607
1
3
9,002
0
python,terminal,pausing-execution
As others have commented, unless you are running your script in a virtual machine that can be suspended, you would need to modify your script to track its state.
0
1
0
0
2011-12-25T17:08:00.000
6
0.033321
false
8,630,573
0
0
0
4
I have a web-crawling Python script that has been running in a terminal for several hours, continuously populating my database. It has several nested for loops. For some reason I need to restart my computer and continue the script from exactly the place where I left off. Is it possible to preserve the pointer state and resume the previously running script in the terminal? I am looking for a solution that will work without altering the Python script. Modifying the code is a lower priority, as that would mean relaunching the program and reinvesting time. Update: Thanks for the VM suggestion. I'll take that. For the sake of completeness, what generic modifications should be made to a script to make it pausable and resumable? Update 2: Porting to a VM works fine. I have also modified the script to make it failsafe against network failures. Code written below.
How to pause a python script running in terminal
8,630,611
0
3
9,002
0
python,terminal,pausing-execution
If this problem is important enough to warrant this kind of financial investment, you could run the script on a virtual machine. When you need to shut down, suspend the virtual machine, and then shut down the computer. When you want to start again, start the computer, and then wake up your virtual machine.
0
1
0
0
2011-12-25T17:08:00.000
6
0
false
8,630,573
0
0
0
4
I have a web-crawling Python script that has been running in a terminal for several hours, continuously populating my database. It has several nested for loops. For some reason I need to restart my computer and continue the script from exactly the place where I left off. Is it possible to preserve the pointer state and resume the previously running script in the terminal? I am looking for a solution that will work without altering the Python script. Modifying the code is a lower priority, as that would mean relaunching the program and reinvesting time. Update: Thanks for the VM suggestion. I'll take that. For the sake of completeness, what generic modifications should be made to a script to make it pausable and resumable? Update 2: Porting to a VM works fine. I have also modified the script to make it failsafe against network failures. Code written below.
How to pause a python script running in terminal
8,630,649
4
3
9,002
0
python,terminal,pausing-execution
You might try suspending your computer or running in a virtual machine which you can subsequently suspend. But as your script is working with network connections, chances are it won't work from the point you left off once you bring the system back up. Suspending a computer and restoring it, or saving a virtual machine and restoring it, would mean you need to re-establish the network connection. This is true for any elements which are external to your system, and the network is one of them. And there is a high chance that if you are using a dynamic network, the next time you boot you will get a new IP, and the network state that you were working with previously will be void. If you are planning to modify the script, there are a few things to keep in mind. Add serializing and deserializing capabilities: Python has the pickle module (and the faster cPickle) for this. Add restart points: the best way to do this is to save the state at regular intervals and, when restarting your script, restart from the last saved state after re-establishing all the transient elements like the network. This would not be an easy task, so consider investing a considerable amount of time :-) Note: On second thought, there is one alternative to changing your script. You can try using cloud virtualization solutions like Amazon EC2.
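The "serialize state at regular intervals" advice can be sketched with pickle. This is a minimal, hypothetical illustration (the function and state names are not from the original crawler): the script would call save_checkpoint periodically inside its loops and load_checkpoint once at startup.

```python
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    """Serialize the crawler's state (loop counters, pending URLs, ...)."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path, default):
    """Restore the last saved state, or fall back to a fresh one."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return default
```

On restart, the script reads the checkpoint, re-establishes transient resources such as network connections, and resumes its loops from the restored counters rather than from the beginning.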
0
1
0
0
2011-12-25T17:08:00.000
6
1.2
true
8,630,573
0
0
0
4
I have a web-crawling Python script that has been running in a terminal for several hours, continuously populating my database. It has several nested for loops. For some reason I need to restart my computer and continue the script from exactly the place where I left off. Is it possible to preserve the pointer state and resume the previously running script in the terminal? I am looking for a solution that will work without altering the Python script. Modifying the code is a lower priority, as that would mean relaunching the program and reinvesting time. Update: Thanks for the VM suggestion. I'll take that. For the sake of completeness, what generic modifications should be made to a script to make it pausable and resumable? Update 2: Porting to a VM works fine. I have also modified the script to make it failsafe against network failures. Code written below.
How to pause a python script running in terminal
8,631,087
1
3
9,002
0
python,terminal,pausing-execution
Since you're populating a database with your data, I suggest using it to track the progress of the script (store the latest URL parsed, keep a list of pending URLs, etc.). If the script is terminated abruptly, you don't have to worry about saving its state, because database transactions come to the rescue and only the data you've committed will be saved. When the script is restarted, only the data for the URLs that you completely processed will be stored, and it can resume by just picking up the next URL according to the database.
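This database-as-checkpoint idea can be sketched with SQLite (the table and column names here are hypothetical; the point is that only committed progress survives an abrupt termination):

```python
import sqlite3

# Hypothetical schema: one row per URL, with a "done" flag that is only
# set inside a committed transaction.
conn = sqlite3.connect(":memory:")  # a real crawler would use a file path
conn.execute("CREATE TABLE urls (url TEXT PRIMARY KEY, done INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO urls (url) VALUES (?)",
                 [("http://a",), ("http://b",), ("http://c",)])
conn.commit()

def next_pending():
    """Pick up where the last run left off: first URL not yet marked done."""
    row = conn.execute(
        "SELECT url FROM urls WHERE done = 0 ORDER BY url LIMIT 1").fetchone()
    return row[0] if row else None

def mark_done(url):
    conn.execute("UPDATE urls SET done = 1 WHERE url = ?", (url,))
    conn.commit()  # committed progress survives a crash or restart
```

A restarted crawler simply loops on next_pending() until it returns None; anything processed but not yet committed is redone, so no URL is silently skipped.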
0
1
0
0
2011-12-25T17:08:00.000
6
0.033321
false
8,630,573
0
0
0
4
I have a web-crawling Python script that has been running in a terminal for several hours, continuously populating my database. It has several nested for loops. For some reason I need to restart my computer and continue the script from exactly the place where I left off. Is it possible to preserve the pointer state and resume the previously running script in the terminal? I am looking for a solution that will work without altering the Python script. Modifying the code is a lower priority, as that would mean relaunching the program and reinvesting time. Update: Thanks for the VM suggestion. I'll take that. For the sake of completeness, what generic modifications should be made to a script to make it pausable and resumable? Update 2: Porting to a VM works fine. I have also modified the script to make it failsafe against network failures. Code written below.
Hadoop - Saving Log Data and Developing GUI
8,633,193
0
0
321
0
java,python,hadoop
I think you can use Hive. I am also new to Hadoop, but I read somewhere that Hive is for Hadoop analytics. Not sure whether it has a GUI or not, but it certainly has SQL-like capability to query unstructured data.
0
1
0
0
2011-12-26T05:17:00.000
2
0
false
8,633,112
0
1
0
2
I am doing research for my new project. Following are the details of my project, research, and questions: Project: Save the logs (e.g., the format is TimeStamp, LOG Entry, Location, Remarks, etc.) from different sources. Here "different sources" means getting the LOG data from different systems worldwide (just an overview). (After saving the LOG entries in Hadoop as specified in 1) Generate reports of the LOGs saved in Hadoop on demand, like drill down, drill up, etc. NOTE: Every minute there will be approximately 50 to 60 MB of LOG entries from the systems (I checked it). Research and Questions: For saving log entries in Hadoop from different sources, we used Apache Flume. We are creating our own MR programs and servlets. Are there any good options other than Flume? Is there any Hadoop data analysis (open source) tool to generate reports, etc.? I am doing my research; if anyone adds some comments, it will be helpful.
Hadoop - Saving Log Data and Developing GUI
8,633,276
1
0
321
0
java,python,hadoop
Have you looked at Datameer? It provides a GUI to import all these types of files and create reports as well as dashboards.
0
1
0
0
2011-12-26T05:17:00.000
2
0.099668
false
8,633,112
0
1
0
2
I am doing research for my new project. Following are the details of my project, research, and questions: Project: Save the logs (e.g., the format is TimeStamp, LOG Entry, Location, Remarks, etc.) from different sources. Here "different sources" means getting the LOG data from different systems worldwide (just an overview). (After saving the LOG entries in Hadoop as specified in 1) Generate reports of the LOGs saved in Hadoop on demand, like drill down, drill up, etc. NOTE: Every minute there will be approximately 50 to 60 MB of LOG entries from the systems (I checked it). Research and Questions: For saving log entries in Hadoop from different sources, we used Apache Flume. We are creating our own MR programs and servlets. Are there any good options other than Flume? Is there any Hadoop data analysis (open source) tool to generate reports, etc.? I am doing my research; if anyone adds some comments, it will be helpful.
Python, communicating variables between a script and .exe
8,634,993
1
0
292
0
python,variables,input,executable
The easiest way to send variables to a sub-program is using command-line arguments or environment variables. If you want bidirectional communication, you can use pipes to transmit the info that you are currently sending over the text files (even on Windows). The Python subprocess module (http://docs.python.org/library/subprocess.html) is very good at that sort of thing.
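A minimal sketch of the pipe approach, using the modern subprocess.run API (Python 3.7+; the era-appropriate Popen with stdin/stdout pipes works the same way). An inline Python child stands in for the frozen test.exe here; for the real binary you would pass its path as the command instead.

```python
import subprocess
import sys

# The child reads one variable on stdin and sends a result back on stdout.
child_code = "data = input()\nprint(data.upper())"

proc = subprocess.run(
    [sys.executable, "-c", child_code],  # real case: e.g. ["test.exe"]
    input="hello\n",                     # variables go in on the child's stdin
    capture_output=True, text=True)

result = proc.stdout.strip()             # and come back on its stdout
```

Because each spawned child has its own private pipes, parallel instances cannot pick up data intended for one another, which avoids the shared-.txt-file problem when multithreading.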
0
1
0
0
2011-12-26T10:39:00.000
1
1.2
true
8,634,943
1
0
0
1
I have a Python script, main.py, which executes a cx_Freeze-frozen Python script, test.exe, that I have created. main.py needs to send variables to test.exe. If the frozen script is able to send variables back, that would be great too. Up to this point, I have been saving out .txt files from main.py and accepting them on the test.exe side. But now that I am introducing multithreading, I am concerned the instances of test.exe will collect information from the .txt files intended for other instances of test.exe. I was wondering if this is possible. How do I tell main.py to send variables to test.exe, accept them ... and, if possible, send them back to main.py? Thanks