Dataset schema (each record below repeats these 22 fields in this order):

Title: stringlengths 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: stringlengths 6 to 105
Answer: stringlengths 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: stringlengths 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: stringlengths 41 to 29k
PHP Exec() and Python script scalability
12,151,910
0
4
266
0
php,python,exec
With CGI, the server starts a fresh copy of the PHP interpreter every time it gets a request. PHP in turn starts a Python process, which is killed after exec(). There is a huge overhead in starting two processes and doing all the imports on every request. With FastCGI or WSGI, the server keeps a couple of processes warmed up (the minimum and maximum number of running processes is configurable), so at the price of some memory you no longer start a new PHP process on every request. However, you still have to start and stop a Python process on every exec() call. If you can run the Python app without exec(), e.g. by porting the PHP part to Python, it would boost performance a lot (see the sketch after this record). But as you mentioned, this is a small project, so the only important criterion is whether your current server can sustain the current load.
0
1
0
1
2012-08-27T22:18:00.000
1
1.2
true
12,150,405
0
0
0
1
I have a PHP application that executes Python scripts via exec() and CGI. I have a number of pages that do this, and while I know WSGI is the better way to go long-term, I'm wondering whether this arrangement is acceptable for a small/medium amount of traffic. I ask because a few posts mentioned that Apache has to spawn a new process for each instance of the Python interpreter, which increases overhead, but I don't know how significant that is for a smaller project. Thank you.
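A minimal sketch of the WSGI approach suggested in the answer above; the handler body is hypothetical, and the point is only that the interpreter and its imports stay resident between requests:

    # hello_wsgi.py -- imports run once per process, not once per request
    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ported PHP logic would run here\n"]

    if __name__ == "__main__":
        # quick local test with the stdlib server
        from wsgiref.simple_server import make_server
        make_server("", 8000, application).serve_forever()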
where can i find libpython2.6.dylib on osx
23,327,147
0
1
2,408
0
python,macos,python-2.6,dylib
On a Mac, it is stored under /usr/lib
0
1
0
0
2012-08-28T15:49:00.000
1
0
false
12,162,976
1
0
0
1
I'm looking for libpython2.6.dylib in my frameworks folder, but for all my installs I can only find libpython2.7.dylib. I'm looking in /System/Library/Frameworks/Python.framework/Versions/2.x/lib. I also notice that libpython2.7.dylib is actually just an alias for /System/Library/Frameworks/Python.framework/Versions/2.7/Python; does this mean I can just make my own aliases of the other 'Python' binaries that are in all the install directories?
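As a quick diagnostic, Python itself can report which libpython libraries the loader can see; this stdlib sketch is not taken from the answer above:

    import ctypes.util

    # prints a library path if one is found, or None if that version is absent
    print(ctypes.util.find_library("python2.6"))
    print(ctypes.util.find_library("python2.7"))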
Using BPF on a PCAP file
13,545,284
1
1
993
0
python,pcap,scapy
Open the pcap file with pcap_open_offline(), compile the filter "arp" with pcap_compile(), set the filter on the pcap_t * to the resulting filter with pcap_setfilter(), and then read the packets from that pcap_t *.
0
1
0
0
2012-08-29T13:05:00.000
2
0.099668
false
12,178,659
0
0
0
1
I have written code that sniffs packets on the network. It then filters them by MAC address and stores them as a .pcap file. Now I want to add a function to the code that can read the .pcap file, or the object holding the sniffed packets, and filter it again to get the ARP packets. I tried using the PCAP library's bpf function but it doesn't help. Is there any other way this might work?
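Since the question is tagged scapy, here is a minimal sketch of the same idea in Python. Scapy hands the BPF filter for offline captures to tcpdump, so tcpdump must be installed; the capture file name is hypothetical:

    from scapy.all import sniff

    # apply the BPF filter "arp" while reading the capture back in
    arp_packets = sniff(offline="capture.pcap", filter="arp")
    arp_packets.summary()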
How to prevent Hudson from entering shutdown mode automatically or when idle?
12,382,944
2
1
864
0
continuous-integration,hudson,shutdown,python-idle
Solution: disable the thinBackup plugin ... I figured this out by taking a look at the Hudson logs at http://localhost:8080/log/all: thinBackup was running every time the Hudson instance went into shutdown mode. The fact that shutdown mode was occurring at periods of inactivity is also consistent with the behavior of thinBackup. I then disabled the plug-in and Hudson no longer enters shutdown mode. What's odd is that thinBackup had been installed for some time before this problem started occurring. I am seeking a solution from thinBackup to re-enable the plugin without the negative effects and will update here if I get an answer.
0
1
0
0
2012-08-29T16:53:00.000
2
1.2
true
12,182,882
0
0
1
1
After several months of successful and unadulterated continuous integration, my Hudson instance, running on Mac OSX 10.7.4 Lion, decides it wants to enter shutdown mode after every 20-30 minutes of inactivity. For those of you familiar with shutdown mode, the instance of course doesn't shutdown, but has the undesirable effect (in this case) of stopping new jobs from starting. I know I haven't changed any settings, so it makes me think the problem was slowly growing and keeps triggering shutdown mode. I know there is plenty of storage space on the machine with 400+ GB to go so I'm wondering what else would trigger shutdown mode without actually using the Hudson web portal to manually do it. As mentioned before, the problem also seems to be tied to inactivity. I tried creating a quick fix, which is a build job that does nothing every 5 minutes. It appeared to work at first, but after long periods of inactivity I will find it back in shutdown mode. Any ideas what might be going on?
Using Numpy and SciPy on Apache Pig
12,618,627
0
1
524
0
python,numpy,scipy,apache-pig
You can stream through a (C)Python script that imports scipy. For instance, I use this to cluster data inside bags, via import scipy.cluster.hierarchy (see the sketch after this record).
0
1
0
0
2012-08-29T17:55:00.000
1
0
false
12,183,759
0
1
0
1
I want to write UDFs in Apache Pig. I'll be using Python UDFs. My issue is that I have tons of data to analyse and need packages like NumPy and SciPy. But since they don't have Jython support, I can't use them with Pig. Is there a substitute?
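A minimal sketch of the streaming approach described in the answer above; the script name, field layout, and processing step are hypothetical. On the Pig side it would be wired up with something like DEFINE cluster `cluster.py` SHIP('cluster.py'); B = STREAM A THROUGH cluster;

    #!/usr/bin/env python
    # cluster.py -- reads tab-separated rows on stdin, writes results to stdout
    import sys
    import numpy
    import scipy.cluster.hierarchy as hier

    for line in sys.stdin:
        values = numpy.array([float(v) for v in line.rstrip("\n").split("\t")])
        # hypothetical processing step; any scipy call works here,
        # e.g. hier.linkage(...) over the rows collected from one bag
        sys.stdout.write("\t".join(str(v) for v in values) + "\n")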
How can I stream data, on my Mac, from a bluetooth source using R?
12,187,989
0
0
377
0
python,macos,r,bluetooth
There is a strong probability that the Bluetooth device enumerates as a serial port, in which case you can use the pyserial module to communicate with it pretty easily. But if the device does not enumerate serially, you will have a very large headache trying to do this. See whether any COM ports are available; if there are, it is almost definitely enumerating as a serial connection.
0
1
0
0
2012-08-29T23:16:00.000
1
1.2
true
12,187,795
0
1
0
1
I have a device that is connected to my Mac via bluetooth. I would like to use R (or maybe Python, but R is preferred) to read the data real-time and process it. Does anyone know how I can do the data streaming using R on a Mac? Cheers
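A minimal pyserial sketch of the serial-port route described in the answer above; the device path is hypothetical and should be replaced with whatever comports() lists:

    import serial                        # pyserial
    from serial.tools import list_ports

    print(list(list_ports.comports()))   # look for a /dev/tty.* Bluetooth entry

    # hypothetical device name; substitute the one shown by comports()
    port = serial.Serial("/dev/tty.MyDevice-SPP", baudrate=9600, timeout=1)
    while True:
        line = port.readline()
        if line:
            print(line)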
how to implement an on_revoked event in celery
18,464,160
0
0
355
0
python,celery
Use AbortableTask as a template and create a RevokableTask class to your specification.
0
1
0
0
2012-08-30T16:00:00.000
1
0
false
12,200,972
0
0
1
1
I have a task that retries often, and I would like a way for it to clean up if it is revoked while it is in the retry state. It seems like there are a few options for doing this, and I'm wondering which would be the most acceptable/cleanest. Here's what I've thought of so far: a custom Camera that picks up revoked tasks and calls on_revoked; a custom Event Consumer that knows to process on_revoked on tasks that get revoked; using AbortableTasks and using abort instead of revoke (I'd really like to avoid this). Are there any other options that I am missing?
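A rough sketch of the AbortableTask pattern the answer refers to, following the shape of celery's documented example from that era; the work and cleanup steps are hypothetical stubs:

    from celery.contrib.abortable import AbortableTask

    class MonitorTask(AbortableTask):
        def run(self, **kwargs):
            # poll the abort flag between units of work
            while not self.is_aborted(**kwargs):
                self.do_work()
            self.cleanup()

        def do_work(self):
            pass    # hypothetical unit of monitoring work

        def cleanup(self):
            pass    # hypothetical cleanup when aborted/revoked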
java- how to code a process to intercept the output streams of program running on remote machine/know when remote program has halted/completed
12,206,913
1
0
142
0
java,python,ruby,remote-debugging
If you're only looking to determine when it has completed (and not looking to really capture all the output, as in your other question) you can simply check for the existence of the process id and, when you fail to find the process id, phone home. You really don't need the logs for that.
0
1
0
1
2012-08-30T23:10:00.000
2
0.099668
false
12,206,879
0
0
1
1
I want to run a Java program on a remote machine, intercept its logs, and also know when the program has completed execution and whether it exited successfully or was halted by an error. Is there any ready-made Java library available for this purpose? I would also like to be able to use this program for obtaining logs/execution completion for remote programs in different languages, like Java/Ruby/Python, etc.
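The existence check from the answer above, sketched in Python (any of the question's languages would do); this relies on the POSIX convention that signal 0 probes a pid without touching the process:

    import os

    def pid_alive(pid):
        """Return True if a process with this id currently exists (POSIX)."""
        try:
            os.kill(pid, 0)
            return True
        except OSError:
            return False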
How to install Django 1.4 with Python 3.2.3 in Debian?
12,208,688
3
2
728
0
python,django,debian
Django does not support Python 3. You will need to install a version of Python 2.x.
0
1
0
0
2012-08-31T03:54:00.000
2
0.291313
false
12,208,680
1
0
1
1
I installed Python 3.2.3 in Debian under /usr/local/bin/python3 and I installed Django 1.4 in the same directory. But when I try to import django from the Python 3 shell interpreter, I get a syntax error! What am I doing wrong?
python subprocess.popen use preexec_fn with arguments
37,494,427
0
0
2,368
0
python,python-2.7
If your target function is simple enough, you may want to try an anonymous function (a lambda). Note that preexec_fn must be a callable that takes no arguments, so the lambda has to bind your two arguments itself rather than declare them as parameters: subprocess.Popen(cmd, preexec_fn=lambda: f(x, y)). Alternatively, from functools import partial and pass preexec_fn=partial(f, x, y); both give you a zero-argument callable that calls f with the arguments already filled in. A fuller runnable sketch follows this record.
0
1
0
0
2012-08-31T04:18:00.000
2
0
false
12,208,839
1
0
0
1
I have read the Python documentation about subprocess, but the preexec_fn argument of subprocess.Popen can only point to a function that takes no arguments. Now I want to call a function with two arguments in the way preexec_fn does; I've tried to use global variables, but it doesn't work. How can I do that?
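A runnable sketch of the functools.partial route from the answer above. The two-argument function here (setting resource limits in the child) is just an illustration; the essential point is that partial() yields the zero-argument callable preexec_fn requires (POSIX only):

    import functools
    import resource
    import subprocess

    def limit(cpu_seconds, mem_bytes):
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    # partial(limit, 10, 512 MiB) takes no arguments when called, as required
    subprocess.Popen(["echo", "ok"],
                     preexec_fn=functools.partial(limit, 10, 512 * 1024 * 1024))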
how to switch between processes in pdb
12,219,648
0
2
869
0
python,pdb
It seems like it automatically gets switched at some point (probably I/O). If you want to force it though, you should call time.sleep().
0
1
0
0
2012-08-31T16:26:00.000
2
0
false
12,219,231
1
0
0
2
I am debugging a Python application that makes use of os.fork() at some point. After evaluating the function, pdb remains in the parent process (as I can see from the value returned by the function). How do I switch between the child and parent process in pdb?
how to switch between processes in pdb
12,220,323
0
2
869
0
python,pdb
There is no way to do that with pdb. Your best bet will be to start your pdb session (using pdb.set_trace()) inside the child process after the fork.
0
1
0
0
2012-08-31T16:26:00.000
2
0
false
12,219,231
1
0
0
2
I am debugging a Python application that makes use of os.fork() at some point. After evaluating the function, pdb remains in the parent process (as I can see from the value returned by the function). How do I switch between the child and parent process in pdb?
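A minimal sketch of the set_trace-in-the-child approach from the answer above; the parent simply waits so the child's debugger session owns the terminal:

    import os
    import pdb

    pid = os.fork()
    if pid == 0:
        # child: drop into the debugger right after the fork
        pdb.set_trace()
        os._exit(0)
    else:
        # parent: wait; the pdb session belongs to the child
        os.waitpid(pid, 0)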
nose.run() seems to hold test files open after the first run
12,226,647
0
0
193
0
python,unit-testing,nose
Solved* it with some outside help. I wouldn't consider this the proper solution, but by searching through sys.modules for all of my test_ modules (which point to *.pyc files) and deleting them with del, nose finally recognizes changes again. I'll have to delete them before each nose.run() call. These must be in-memory versions of the .pyc files, as simply deleting the files in the shell wasn't doing it. Good enough for now. Edit: *Apparently I didn't entirely solve it. It does seem to work for a bit, and then all of a sudden it won't anymore, and I have to restart my shell. Now I'm even more confused.
0
1
0
0
2012-09-01T05:42:00.000
1
0
false
12,225,244
0
0
0
1
I'm having the same problem on Windows and Linux. I launch any of various Python 2.6 shells and run nose.run() to run my test suite. It works fine. However, the second time I run it, and every time thereafter, I get exactly the same output no matter how I change code or test files. My guess is that it's holding onto file references somehow, but even after deleting the *.pyc files, I can never get the output of nose.run() to change until I restart the shell or open another one, whereupon the problem starts again on the second run. I've tried both del nose and reload(nose) to no avail.
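A sketch of the sys.modules workaround described in the answer above; that the test modules are named test_*.py is an assumption:

    import sys
    import nose

    def run_fresh():
        # drop cached test modules so nose re-imports them from disk
        for name in list(sys.modules):
            if name.startswith("test_"):
                del sys.modules[name]
        nose.run()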
How Can I mount shared drive in Python with no root?
12,229,835
3
0
813
0
python,linux,share,root,drive
You can't mount without root privileges (except in some circumstances, see below). If you have no privileges on that machine, you have to ask the administrator. What an administrator can do is insert certain mount points into /etc/fstab and mark them with the user option. An administrator could also install sudo for you and allow you to execute sudo mount. Python has no way (and shouldn't have a way) to circumvent these basic security features.
0
1
0
0
2012-09-01T17:11:00.000
2
0.291313
false
12,229,752
0
0
0
1
I am trying to mount a shared drive by using os.system() in Python. The problem is that the installed Linux version has no sudo command, and installing a sudo package has failed. When using the command su, I get an error that it must be used with suid. I can't chmod +s because I have no root. Any ideas? Mods? Or is a buffer overflow the only solution here? =) Thank you in advance.
Django with Tornado
12,256,534
3
1
523
0
python,django,tornado
I haven't seen big projects that use Tornado in front of Django. Technically, though, you can do monkey.patch_all() with gevent, and then Tornado will make sense. It's a really bad solution, but if what you need is an async, unstable Django that waits for you at the corner with a chainsaw to cut your legs off instead of merely shooting you in the foot, then it's yours.
0
1
0
0
2012-09-01T19:25:00.000
2
1.2
true
12,230,701
0
0
1
2
I see a lot of people using django with tornado using WSGIContainer. Does this make any sense? Tornado is supposed to be asynchronous, but Django is synchronous. Aren't you just shooting yourself in the foot by doing that?
Django with Tornado
12,254,758
0
1
523
0
python,django,tornado
Django comes with a debug server, so I guess that when using Tornado with Django, Tornado plays the part that Apache + mod_wsgi usually would.
0
1
0
0
2012-09-01T19:25:00.000
2
0
false
12,230,701
0
0
1
2
I see a lot of people using django with tornado using WSGIContainer. Does this make any sense? Tornado is supposed to be asynchronous, but Django is synchronous. Aren't you just shooting yourself in the foot by doing that?
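For reference, the WSGIContainer arrangement the question asks about looks roughly like this (Django 1.4 / Tornado-era APIs; the settings module name is hypothetical):

    import os
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

    import tornado.httpserver
    import tornado.ioloop
    import tornado.wsgi
    from django.core.wsgi import get_wsgi_application

    # every request is handled synchronously inside the container,
    # which is exactly the caveat discussed above
    container = tornado.wsgi.WSGIContainer(get_wsgi_application())
    tornado.httpserver.HTTPServer(container).listen(8888)
    tornado.ioloop.IOLoop.instance().start()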
Python Parent Child Relationships in Google App Engine Datastore
12,240,201
2
0
299
0
python,google-app-engine,inheritance,data-modeling
What you're talking about is inheritance hierarchies, but App Engine keys provide for object hierarchies. An example of the former is "a banana is a fruit", while an example of the latter is "a car has a steering wheel". Parent properties are the wrong thing to use here; you want to use PolyModel.
0
1
0
0
2012-09-02T03:15:00.000
1
1.2
true
12,233,151
0
0
1
1
I am trying to model a parent hierarchy relationship in Google App Engine using Python. For example, I would like to model fruit. The root would be fruit, whose children would be vine-based and tree-based. Children of tree-based would be, for example, apple, pear, banana, etc. Then as children of apple, I would like to add macintosh, golden delicious, granny smith, etc. I am trying to figure out the easiest way to model this such that I can put into another entity of type basket an entity of type fruit, or of type granny smith. Any help would be greatly appreciated! Thanks, Jon
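A minimal PolyModel sketch of the hierarchy from the question, using the class suggested in the accepted answer:

    from google.appengine.ext import db
    from google.appengine.ext.db import polymodel

    class Fruit(polymodel.PolyModel):
        name = db.StringProperty()

    class TreeFruit(Fruit):
        pass

    class Apple(TreeFruit):
        pass

    class GrannySmith(Apple):
        pass

    # Fruit.all() matches every subclass, Apple.all() only apples, so a
    # "basket" entity can simply hold Fruit keys of any concrete type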
Will ndb work with entities that were created without using ndb on GAE?
12,237,692
5
1
140
0
python,google-app-engine,app-engine-ndb
ndb is simply a wrapper API. The core datastore is based on protocol buffers, and doesn't care what you use to access it. In other words, yes AFAIK it should work just fine.
0
1
0
0
2012-09-02T16:23:00.000
1
1.2
true
12,237,658
1
0
0
1
Switching to the ndb library on python GAE. Can I use ndb with entities that were created previously using the low-level api? Or do I have to copy all the old entities and re-save them in order to use ndb? Thanks!
How to write a automated tool for debugging a child process through gdb
12,267,951
0
3
395
0
python,c,debugging,gdb,interactive
If you're running the Python process interactively from a terminal, just run gdb normally (e.g. using os.system()) as a child of the Python process. It will inherit the terminal and take over stdio just as it would if it were run from the shell. (And if you aren't running it interactively, you'll need to explain what exactly you mean by "give me a gdb prompt".) Also, if you know you're going to be debugging the process in the future, it's probably best to spawn the process under gdb in the first place rather than go through the work of getting it to stop at the right spot and attaching to it.
0
1
0
0
2012-09-03T04:24:00.000
2
0
false
12,241,960
1
0
0
1
I have a script in Python which spawns a new process that I want to debug in gdb. Generally I follow the usual procedure to debug this child process: put a sleep in the process until some condition is true, attach gdb to this process's pid in a different session, set some breakpoints, and make the condition true so that it continues after the sleep. I want to do this in an automated way, say by having the Python script itself spawn a new gdb process and give me a gdb prompt. I know a little about curses, so maybe I can do something with that. But the main problem is how to spawn an interactive process (gdb here) in Python and hand the gdb prompt to the user; I don't have much of an idea. Any help is appreciated.
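A minimal sketch of the attach step from the answer above; gdb inherits the script's terminal, so the user lands directly on an interactive (gdb) prompt:

    import os

    def attach_gdb(pid):
        # blocks until the user quits gdb; stdio is inherited from us
        os.system("gdb -p %d" % pid)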
django-celery in multiple server production environment
12,246,221
6
5
1,600
0
python,django,rabbitmq,celery,django-celery
It really depends on the size of the project. Ideally you have RabbitMQ, celery workers, and web workers running on different machines. You need only one RabbitMQ and possibly multiple queue workers (bigger queues need more workers, of course). You don't need one celery worker per web worker: the web workers publish tasks to the broker and the workers pick them up from there; in fact, a web worker doesn't care about the number of workers connected to the broker, as it only communicates with the broker. Of course, if you are starting a project it makes sense to keep everything on the same hardware, keep the budget low, and wait for the traffic and the money to flow :) You want to have the same code on every running instance of your app, no matter whether they are celery workers, web servers, or whatever.
0
1
0
0
2012-09-03T10:23:00.000
1
1
false
12,245,999
0
0
1
1
I am trying to deploy a Django project using django-celery, but I have these unsolved questions: Should I run one celeryd for each web server? Should I run just one RabbitMQ server on another machine (not running celeryd there), accessible to all my web servers, or must RabbitMQ also run on each of the web servers? How can I use periodic tasks if the code is the same on all web servers? Thanks for your answers.
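The one-broker point from the answer above boils down to a single shared setting; the host and credentials here are hypothetical:

    # settings.py -- identical on every web server and every worker machine
    BROKER_URL = "amqp://user:password@rabbit.example.com:5672//"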
What does this console message mean in Google App Engine
21,434,751
1
4
177
0
python,google-app-engine,app-engine-ndb
This seems to happen if you have async operations in progress before you enter the ndb.toplevel function. My guess is that this warns you that these async operations will not be waited for at the end of the request. This could be an issue if you expected them to be included in your "toplevel" function and they are tasklets waiting for an operation to complete before executing some more.
0
1
0
0
2012-09-05T17:45:00.000
1
0.197375
false
12,286,987
0
0
1
1
I'm using Google App Engine NDB with a lot of async operations and yields. The console shows me this message: tasklets.py:119] all_pending: clear set([Future 106470190 created by dispatch(webapp2.py:570) for tasklet post(sync.py:387); pending]) Is this a warning of some sort? Should it be ignored? It doesn't cause any unusual behavior. (sync.py is one of my files, but the other stuff aren't mine)
Is there a way to use the python subprocess module's Popen.communicate() without waiting for the process to exit?
12,306,605
1
0
112
0
python,io,subprocess
Check this out: http://pypi.python.org/pypi/async_subprocess/0.2.1 "Provides an asynchronous version of Popen.communicate"
0
1
0
0
2012-09-06T17:07:00.000
1
1.2
true
12,305,045
0
0
0
1
I use Windows XP and Python 2.5. I am trying to make a way for my Python programs to secretly communicate with each other. I think the STARTUPINFO class could hide the window, but I can't figure out how to communicate with them. In my Python subprocess I tried using raw_input, and in the parent I tried writing to Popen.stdin and flushing it, but it didn't seem to work. I would use Popen.communicate, but that waits for the process to exit. Is there a way to do this? Thanks! EDIT: I am only going to make my program available for Windows users, so I don't mind if a module is Windows-only and can't run on other operating systems.
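Not the approach from the accepted answer, but a common stdlib alternative: read the child's output on a background thread so writes to stdin never have to wait for the process to exit. Python 2, matching the question; the echo child is purely for demonstration:

    import subprocess
    import sys
    import threading

    child_code = ("import sys\n"
                  "for line in iter(sys.stdin.readline, ''):\n"
                  "    sys.stdout.write('echo: ' + line)\n"
                  "    sys.stdout.flush()\n")

    proc = subprocess.Popen([sys.executable, "-u", "-c", child_code],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def drain():
        for line in iter(proc.stdout.readline, ""):
            print(line.rstrip())

    threading.Thread(target=drain).start()
    proc.stdin.write("hello\n")   # unlike communicate(), no wait for exit
    proc.stdin.flush()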
Celery: long dedicated monolithic task vs short multiple tasks
12,355,563
10
6
1,293
0
python,rabbitmq,celery,distributed-computing,django-celery
There is no one-size-fits-all answer to this. Dividing a big task A into many small parts (A¹, A², A³, …) will increase potential concurrency. So if you have 1 worker instance with 10 worker threads/processes, A can now run in parallel using the 10 threads instead of sequentially on one thread. The number of parts is called the task's granularity (fine- or coarse-grained). If the task is too finely grained, the overhead of messaging will drag performance down: each part must have enough computation/IO to offset the overhead of sending the task message to the broker, possibly writing it to disk if there are no workers to take it, the worker receiving the message, and so on (do note that messaging overhead can be tweaked; e.g. you can have a transient queue (not persisting messages to disk) and send less important tasks there). A busy cluster may make all of this moot: maximum parallelism may already have been achieved if you have a busy cluster (e.g. 3 worker instances with 10 threads/processes each, all running tasks). Then you may not get much benefit by dividing the task, though tasks doing I/O have a greater chance of improvement than CPU-bound tasks (split by I/O operations). Long-running tasks are fine: the worker is not allergic to long-running tasks, be that 10 minutes or an hour. But it's not ideal either, because any long-running task will block that slot from finishing any waiting tasks. To mitigate this, people use routing, so that you have a dedicated queue, with dedicated workers, for tasks that must run ASAP.
0
1
0
0
2012-09-06T19:38:00.000
1
1.2
true
12,307,173
0
0
0
1
In my solution I use distributed tasks to monitor hardware instances for a period of time (say, 10 minutes). I have to do some work when I start a monitoring session, when I finish it, and (potentially) during it. Is it safe to have a single task run for the whole session (10 minutes) and perform all of this, or should I split these actions into their own tasks? The advantages of a single task, as I see it, are that it would be easier to manage and to enforce timing constraints. But is it a good idea to run a large pool of (mostly) sleeping workers? For example, if I know that at most I will have 200 sessions open, should I have a pool of 500 workers to ensure there are always available "session" seats?
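A sketch of the routing idea from the last paragraph of the answer above; the task names and queue name are hypothetical:

    # celeryconfig.py -- send session work to its own queue and workers
    CELERY_ROUTES = {
        "monitoring.start_session": {"queue": "sessions"},
        "monitoring.stop_session": {"queue": "sessions"},
    }
    # then run dedicated workers with: celeryd -Q sessions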
What is the default installation directory with an MSI install of Python 2.7?
12,318,294
5
2
3,491
0
python,windows,installation-path
The default installation folder is C:\Python27.
0
1
0
0
2012-09-07T12:37:00.000
1
1.2
true
12,318,264
1
0
0
1
When using the Windows MSI installer from python.org/download (the current version is 2.7.3), Python by default installs into some folder, but it is nowhere documented what this default is. Can someone check that on Windows? Also, does this MSI installer recognize whether Windows is 32- or 64-bit and install the appropriate version, or does it assume it should always be the 32-bit version of Python? In particular, I am interested in what happens on Windows 7 when this installer is used.
Putting password for fabric rsync_project() function
12,335,299
2
1
1,763
0
python,rsync,fabric
If rsync using ssh as a remote shell transport is an option and you can setup public key authentication for the users, that would provide you a secure way of doing the rsync without requiring passwords to be entered.
0
1
0
1
2012-09-08T22:53:00.000
2
1.2
true
12,335,114
0
0
0
1
I'd like to somehow pass the user's password to the rsync_project() function (a wrapper for the regular rsync command) from the Fabric library. I've found rsync's option --password-file=FILE, which requires the password to be stored in FILE. This could work somehow, but I am looking for a better solution, as I (temporarily) have passwords stored as plain text in a database. Please give me any suggestions on how I should work with this.
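A sketch of the key-based route from the accepted answer, using Fabric's own settings; the host, key path, and directories are hypothetical:

    from fabric.api import env
    from fabric.contrib.project import rsync_project

    env.hosts = ["deploy@example.com"]
    env.key_filename = "~/.ssh/id_rsa"   # public-key auth, so no password prompt

    def deploy():
        rsync_project(remote_dir="/var/www/app", local_dir="./app/")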
Eclipse Pydev: Run selected lines of code
12,774,197
19
19
11,174
0
python,eclipse,pydev
Press CTRL+ALT+ENTER to send the selected lines to the interactive console.
0
1
0
1
2012-09-08T23:53:00.000
2
1
false
12,335,424
0
0
0
1
Is there a command in Eclipse Pydev which allows me to only run a few selected (highlighted) lines of code within a larger script? If not, is it possible to run multiple lines of code in the PyDev console at once?
Google Appengine not signing emails with DKIM code
12,340,517
1
0
441
0
python,google-app-engine,dkim
How long ago did you create your DNS TXT record? Since DKIM is a DNS controlled service, and DNS often takes up to days to propagate across the Internet, you may need to wait for that to happen before Google will recognize it as valid.
0
1
0
0
2012-09-09T15:39:00.000
1
0.197375
false
12,340,456
0
0
1
1
I am confused about why emails sent by my App Engine app are not being signed with DKIM. I enabled DKIM signing on the Google Apps dashboard. I confirmed that my domain is "Authenticating email". I have set up the DNS TXT record using the values indicated in the Apps domain, and have confirmed, using a third-party validation tool, that the DNS is correct. Also, I assume that having a green-light indicator for authenticating email in my Google Apps domain means this record has been validated by Google Apps. The email send is triggered by a user's click while browsing my application via my custom URL. The custom URL matches the domain of the sender's return address, and the sender's return address is an owner of the account. As far as I know, these are the requirements for emails to be signed automatically. Yet, alas, they are not being signed. Any help or ideas will be greatly appreciated. Thanks
How can one load an AppEngine cloud storage backup to a local development server?
37,780,181
0
3
1,341
0
python,google-app-engine
For those using Windows, change the open line to: raw = open('path_to_datastore_export_file', 'rb') The file must be opened in binary mode!
0
1
0
0
2012-09-09T15:41:00.000
3
0
false
12,340,468
0
0
1
1
I'm experimenting with the Google cloud storage backup feature for an application. After downloading the backup files using gsutil, how can they be loaded into a local development server? Is there a parser available for these formats (eg, protocol buffers)?
google app engine email attachment: download it to file system
12,349,722
1
0
218
0
python,google-app-engine,download,email-attachments
You cannot write a file to your web application directory in App Engine. Possible choices for you are: save the content in the Datastore, use the Blobstore, or use the Google Storage facility. Alternatively, you might want to post the content to an external server that can store the data, either your own or a third party like Amazon S3.
0
1
0
0
2012-09-10T07:10:00.000
1
0.197375
false
12,346,831
0
0
1
1
I am able to receive an email in App Engine. I see that the data in an attachment is a base64-encoded payload. Can I download the attachment as-is to the file system, without processing it and without storing it in the Blobstore?
how do i pass multiple files as arguments in command line for python using regular expressions?
12,348,372
1
0
1,629
0
python,windows-7,command-line,tags,command-line-arguments
You cannot with the standard Windows cmd shell. You can use something like bash from Cygwin, or maybe PowerShell. If you want to open *.py files from an application, like vim does, but in Python, then you can use the glob module.
0
1
0
0
2012-09-10T08:56:00.000
2
1.2
true
12,348,291
1
0
0
1
I'm not able to run ptags.py *.py or python *.py; I'm getting an error saying "Cannot open file named *.py". But I am able to open all the Python files in vim using the command vim *.py. This is Python 2.7 in the Windows 7 command prompt.
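A minimal sketch of the glob route mentioned in the answer above, expanding the wildcard inside the script since cmd.exe will not:

    import glob
    import sys

    filenames = []
    for pattern in sys.argv[1:]:
        filenames.extend(glob.glob(pattern))   # expands *.py and friends

    for name in filenames:
        print(name)    # or open(name) and process each file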
Python AppEngine coding for Android app?
12,349,313
0
0
130
0
java,android,python,google-app-engine
The communication with your server can be totally independent of the languages used on the server and client end. Typically web applications use principles such as REST to communicate. This is why your browser runs using HTML and JavaScript and your server can be using anything, including python. It really depends on what you need your server to do for your app.
0
1
0
0
2012-09-10T09:44:00.000
1
1.2
true
12,349,086
0
0
1
1
I'm a newbie Android Developer, and my app requires that it interacts with a server. I came across Google AppEngine, and find it to be a good choice for this app. If I code my Android app in Java, and do the server coding for Google AppEngine in Python, will my Android App be able to communicate with the server? I mean will this Java (client) + Python (server) combination work well?
VIM: How to access the redo-register
12,357,914
1
3
194
0
python,vim,autocomplete,refactoring,rename
The . register contains the last inserted text. See :help quote_.. The help doesn't specifically mention any caveats of when this register is populated, however it does mention that it doesn't work when editing the command line. This shouldn't be an issue for you.
0
1
0
0
2012-09-10T17:07:00.000
3
0.066568
false
12,356,348
0
0
0
1
As a secondary task to a Python auto-completion (https://github.com/davidhalter/jedi), I'm writing a VIM plugin with the ability to do renaming (refactoring). The most comfortable way to do renaming is to use cw and autocommand InsertLeave :call do_renaming_func(). To do this I need to access the redo-register (see help redo-register) or something similar, which would record the written text. If possible, I like to do this without macros, because I don't want to mess up anything.
sqlite3 command line tools don't work in Ubuntu
12,360,397
2
1
1,252
1
python,linux,sqlite,ubuntu
Are you putting the DB file name in the command? For example: $ sqlite3 test.db
0
1
0
0
2012-09-10T22:23:00.000
1
1.2
true
12,360,279
0
0
0
1
Trying to set up some basic data I/O scripts in python that read and write from a local sqlite db. I'd like to use the command line to verify that my scripts work as expected, but they don't pick up on any of the databases or tables I'm creating. My first script writes some data from a dict into the table, and the second script reads it and prints it. Write:

    # first part of script creates a dict called 'totals'
    import sqlite3 as lite

    con = lite.connect('test.db')
    with con:
        cur = con.cursor()
        cur.execute("DROP TABLE IF EXISTS testtbl")
        cur.execute("CREATE TABLE testtbl(Date TEXT PRIMARY KEY, Count INT, AverageServerTime REAL, TotalServerTime REAL, AverageClientTime REAL, TotalClientTime REAL)")
        cur.execute('INSERT INTO testtbl VALUES("2012-09-08", %s, %s, %s, %s, %s)' % (float(totals['count()']), float(totals['serverTime/count()']), float(totals['serverTime']), float(totals['totalLoadTime/count()']), float(totals['totalLoadTime'])))

Read:

    import sqlite3 as lite

    con = lite.connect('test.db')
    with con:
        cur = con.cursor()
        cur.execute("SELECT * FROM testtbl")
        rows = cur.fetchall()
        for row in rows:
            print row

These scripts are separate and both work fine. However, if I navigate to the directory in the command line and activate sqlite3, nothing further works. I've tried the '.databases', '.tables', and '.schema' commands and can't get it to respond to this particular db. I can create dbs within the command line and view them, but not the ones created by my script. How do I link these up? Running stock Ubuntu 12.04, Python 2.7.3, SQLite 3.7.9. I also installed libsqlite3-dev but that hasn't helped.
Write/Read with High Replication Datastore + NDB
12,378,411
2
2
863
0
google-app-engine,python-2.7,google-cloud-datastore
Pretty sure you are running into the HRD feature where queries are "eventually consistent". NDB's caching has nothing to do with this behavior.
0
1
0
0
2012-09-11T10:41:00.000
2
1.2
true
12,367,904
0
0
1
1
So I have been reading a lot of documentation on HRD and NDB lately, yet I still have some doubts about how NDB caches things. Example case: imagine a case where a user writes data and the app needs to fetch it immediately after the write, e.g. a user creates a "Group" (similar to a Facebook/LinkedIn group) and is redirected to the group immediately after creating it. (For now, I'm creating a group without assigning it an ancestor.) Result: when testing this sort of functionality locally (having enabled high replication), the immediate fetch of the newly created group fails. A NoneType is returned. Question: having gone through the High Replication docs and Google I/O videos, I understand that there is a higher write latency; however, shouldn't NDB caching take care of this? I.e. a write is cached and then asynchronously actually written to disk, therefore an immediate read would be reading from the cache and thus there should be no problem. Do I need to enforce some other settings?
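A small sketch of the distinction behind the accepted answer: on HRD, a get by key is strongly consistent, while a non-ancestor query is only eventually consistent. The model is hypothetical:

    from google.appengine.ext import ndb

    class Group(ndb.Model):
        name = ndb.StringProperty()

    key = Group(name="my-group").put()
    group = key.get()    # get by key: strongly consistent, always found

    # a query right after the write may still return None on HRD:
    maybe = Group.query(Group.name == "my-group").get()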
Tornado secure cookie expiration (aka secure session cookie)
12,385,159
11
7
4,626
0
python,cookies,tornado
It seems to me that you are really on the right track. You try lower and lower values, and the cookie has a lower and lower expiration time. Pass expires_days=None to make it a session cookie (which expires when the browser is closed).
0
1
0
0
2012-09-12T08:04:00.000
1
1.2
true
12,383,697
0
0
1
1
How can I set in Tornado a secure cookie that expires when the browser is closed? With set_cookie I can do this without passing extra arguments (I just set the cookie), but how do I do it when I have to use set_secure_cookie? I tried almost everything: passing nothing (expiration is set to its default value, one month); passing an integer value (the value is interpreted as days, i.e. 1 means 1 day); passing a float value (it works; for example, 0.1 means almost an hour and a half).
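The accepted answer in one line of handler code; the cookie name and value are hypothetical:

    import tornado.web

    class LoginHandler(tornado.web.RequestHandler):
        def post(self):
            # expires_days=None makes a session cookie that dies with the browser
            self.set_secure_cookie("user", self.get_argument("name"),
                                   expires_days=None)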
Serving many on-the-fly generated images with Django
12,390,045
0
4
676
0
python,django,apache,comet,wsgi
If one user is all it takes to bring your web server down, then the problem is not Apache or mod_wsgi. First you should optimize your tiling routines and check that you really only deliver the data a user actually sees. After that, a faster CPU, more RAM, an SSD, and aggressive caching will give you more performance. Finally, you may get some extra points for using another web server, but don't expect too much from that.
0
1
0
0
2012-09-12T11:59:00.000
3
0
false
12,387,707
0
0
1
2
Similar to a tiling server for spatial image data, I want to view many on-the-fly generated images in my Django-based web application (merge images, color change, etc.). Since one client can easily request many (>100) images in a short time, it is easy to bring the web server (Apache + mod_wsgi) down. Hence, I am looking for alternative ways. Since we already use Celery, it might be a good idea to do this image processing asynchronously and push the generated data to the client. To get started with that, I switched the WSGI server to gevent, with Apache used as a proxy. However, I haven't managed to get the push thing working yet, and I am not quite sure whether this is the right direction anyway. Based on that, I have three questions: Do you think this (Celery, gevent, Socket.IO) is a sensible way to allow many clients to use the application without bringing the web server down? Do you see alternatives? If I hand the image processing over to Celery and let it push the image data to the browser when it is done, the connection won't go through Apache, will it? If some kind of pushing to the client is used, would it be better to use one connection, or one per image (closed when done)? Background: The Django application I am working on allows a user to display very large images. This is done by tiling the large images beforehand and showing only the currently relevant tiles in a grid to the user. From what I understand, this is the standard way to serve data in the field of mapping and spatial image data (e.g. OpenStreetMap). But unlike mapping data, we also have many slices in Z that a user can scroll through (biological images). All this works fine when the tiles are served statically. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, …. This works, but is a heavy load for the web server, as one image takes about 0.1 s to generate. Currently we use Apache with mod_wsgi (WSGIRestrictedEmbedded On) and it is easy to bring the server down. Just browsing through the image stack will leave the web server hanging. I already tried adjusting MaxClients, etc. and turned KeepAlive off. I also tried different thread/process combinations for mod_wsgi. However, nothing helped enough to allow usage by more than one user. Therefore, I thought a Comet/WebSocket approach could help here.
Serving many on-the-fly generated images with Django
12,390,401
1
4
676
0
python,django,apache,comet,wsgi
All this works fine when the tiles are served statically. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, …. This works, but is a heavy load for the web server, as one image takes about 0.1 s to generate. You need a load balancer, with image requests being sent to a front-end server (e.g. NginX) that will multiplex (and cache!) as many requests as needed, provided you supply enough backend servers to do the heavy lifting. This looks like a classic case for Amazon distributed computing: you could store the tiles in S3 storage (or maybe NFS over EBS). All the image manipulation servers get their data from a single image repository. At the beginning, you can have both the web application and one instance of the image manipulation server on the same machine. But your processes are basically three: web serving that calculates image URLs (you'll need some way to encode the manipulation as parameters in the URLs; otherwise you'll have to use cookies and session storage, which is ickier); an image server that receives the "image formula" and produces the JPEG tile; a file server that allows access to the large images or single original tiles. I have worked on several such architectures, wherein our image layers were stored in a single image file (e.g. five zoom levels, each with fifteen channels from FIR to UV, for a total of 75 "images" up to 100K pixels on a side, and the client could request 'zoom level 2, red channel plus double the difference between the UV-1 channel and green, tiles from X=157, Y=195 to X=167, Y=205').
0
1
0
0
2012-09-12T11:59:00.000
3
0.066568
false
12,387,707
0
0
1
2
Similar to a tiling server for spatial image data, I want to view many on-the-fly generated images in my Django-based web application (merge images, color change, etc.). Since one client can easily request many (>100) images in a short time, it is easy to bring the web server (Apache + mod_wsgi) down. Hence, I am looking for alternative ways. Since we already use Celery, it might be a good idea to do this image processing asynchronously and push the generated data to the client. To get started with that, I switched the WSGI server to gevent, with Apache used as a proxy. However, I haven't managed to get the push thing working yet, and I am not quite sure whether this is the right direction anyway. Based on that, I have three questions: Do you think this (Celery, gevent, Socket.IO) is a sensible way to allow many clients to use the application without bringing the web server down? Do you see alternatives? If I hand the image processing over to Celery and let it push the image data to the browser when it is done, the connection won't go through Apache, will it? If some kind of pushing to the client is used, would it be better to use one connection, or one per image (closed when done)? Background: The Django application I am working on allows a user to display very large images. This is done by tiling the large images beforehand and showing only the currently relevant tiles in a grid to the user. From what I understand, this is the standard way to serve data in the field of mapping and spatial image data (e.g. OpenStreetMap). But unlike mapping data, we also have many slices in Z that a user can scroll through (biological images). All this works fine when the tiles are served statically. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, …. This works, but is a heavy load for the web server, as one image takes about 0.1 s to generate. Currently we use Apache with mod_wsgi (WSGIRestrictedEmbedded On) and it is easy to bring the server down. Just browsing through the image stack will leave the web server hanging. I already tried adjusting MaxClients, etc. and turned KeepAlive off. I also tried different thread/process combinations for mod_wsgi. However, nothing helped enough to allow usage by more than one user. Therefore, I thought a Comet/WebSocket approach could help here.
Python 3.2.3, easy_install, Mac OS X
18,519,381
0
2
5,950
0
python,macos,python-3.x,easy-install
For what it's worth, on my install of Python 3 (using Homebrew), calling the correct binary was all that was required: easy_install3 was already on the system path, as was easy_install-3.3.
0
1
0
0
2012-09-12T16:34:00.000
2
0
false
12,392,699
0
0
0
1
I am a Windows 7 user, so pardon my ignorance. I have been trying to help my friend get easy_install working on her Mac OS X laptop. We managed to get everything working for 2.7 with these commands in the terminal: python distribute_setup.py (which installs "distribute"), then easy_install. We tried the same thing for Python 3.2.3: python3.2 distribute_setup.py, then easy_install. But the package gets installed for Python 2.7 instead of 3.2.3. From what I know, this is because easy_install only works with 2.7. On my Windows 7 machine I managed to do all this by going into the command prompt, in the python32 directory, and running python distribute_setup.py, then going into the python32/script directory and running easy_install.exe directly: easy_install. This installs the package to Python 3.2.3 with no problems. Question: what should we be doing on Mac OS X? Is there a Mac equivalent of running "easy_install.exe"?
Errno 32 Broken pipe, Errno 107 Transport endpoint is not connected python socket
12,423,190
0
0
2,488
0
python,apache,tcp,broken-pipe,sigpipe
I had to set the Apache settings to the following: KeepAlive On, MaxKeepAliveRequests 0, KeepAliveTimeout 5. I will investigate the problem further and see whether this is the proper solution.
0
1
0
0
2012-09-13T13:56:00.000
1
1.2
true
12,407,939
0
0
0
1
My TCP server is written in Qt 4.7 and works well with a TCP client also written in Qt 4.7. I am trying to connect to and communicate with the server from a client written in Python 2.7.3. I start the server process via an Apache HTTP request with subprocess.call(path_to_server). I am using mod_wsgi 3.3 and Django 1.4. The connection is established without a problem. I receive an [Errno 32] Broken pipe exception on socket.send() randomly (I can spam the same message 10 times and it will be sent 0-10 times). The same happens with socket.shutdown() & socket.close(): I can spam the disconnect command and it will randomly disconnect; otherwise I receive an [Errno 107] Transport endpoint is not connected exception. netstat -nap says the connection is established. When I run the same client script in a python2.7 shell, everything works fine. What am I missing here? EDIT: Everything works on Windows 7 running the same Apache, mod_wsgi, Python, Django configuration; the TCP server is also Windows-compatible. The error happens on CentOS 6.2 32-bit.
Changing the default python in OSX Mountain Lion
12,421,681
2
4
4,810
0
python,macos,macports,osx-mountain-lion
The sudo port select command only switches what /usr/local/bin/python points to, and does not touch the /usr/bin/python path at all. The /usr/bin/python executable is still the default Apple install. Your $PATH variable may still look in /usr/local/bin before /usr/bin though when you type in python at your Terminal prompt.
0
1
0
0
2012-09-14T09:20:00.000
2
1.2
true
12,421,574
0
0
0
1
I installed Python through MacPorts and then changed the path to that one, /opt/local/bin/python, using the command sudo port select python python27. But now I want to revert to the Mac one at /usr/bin/python. How can I go about doing this? EDIT: I uninstalled the MacPorts Python, restarted the terminal, and everything went back to normal. Strange. But I still don't know why/how.
development environment for python projects
12,425,441
0
0
210
0
python,ide,text-editor
Komodo is a good commercial IDE, and Eric is a free Python IDE that is itself written in Python.
0
1
0
1
2012-09-14T13:17:00.000
4
0
false
12,425,407
1
0
0
1
I am running Python on Linux and am currently using vim for my single-file programs and gedit for multi-file programs. I have seen development environments like Eclipse and was basically wondering whether there's a similar thing on Ubuntu designed for Python.
Usage of pypy compiler
12,432,095
2
8
7,679
0
python,python-3.x,python-2.7,pypy
PyPy is a compliant alternative implementation of the Python language. This means that there are few (intentional) differences. One of the few is that PyPy does not use reference counting. This means, for instance, that you have to close your files manually; they will not be closed automatically when your file variable goes out of scope, as in CPython.
0
1
0
0
2012-09-14T20:55:00.000
2
0.197375
false
12,431,847
1
0
0
1
Is there a difference in Python programming between using plain Python and using the PyPy compiler? I want to try PyPy so that my program's execution time becomes faster. Does all syntax that works in Python work in PyPy too? If there is no difference, can you tell me how to install PyPy on Debian Linux, with some usage examples? Google does not contain much info on PyPy other than its description.
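A small sketch of the file-closing difference mentioned in the answer above; the file name is hypothetical:

    # create a file so the example is self-contained
    with open("data.txt", "w") as f:
        f.write("example\n")

    # CPython's reference counting closes a file as soon as the last reference
    # goes away; PyPy's GC gives no such guarantee, so close explicitly or use
    # a "with" block, which guarantees the close on both interpreters
    with open("data.txt") as f:
        contents = f.read()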
How many bytes are received with dataReceived?
12,433,363
3
2
296
0
python,twisted
In Python 2:
str → a sequence of bytes, which is sometimes used as ASCII text
bytes → an alias for str (available in Python 2.6 and later)
unicode → a sequence of unicode code units (UCS-2 or UCS-4, depending on compile-time options; UCS-2 by default)

In Python 3:
str → a sequence of unicode code units (UCS-4)
bytes → a sequence of bytes
unicode → no such thing any more; you mean str

Think of the type passed to dataReceived as bytes. It is bytes in Python 2.x, and it will be bytes when Twisted has been ported to Python 3.x. Therefore, the length in bytes of the received segment is simply len(data).
0
1
0
0
2012-09-14T20:58:00.000
2
1.2
true
12,431,871
1
0
0
2
I am using Twisted to receive data from a socket. My protocol class inherits from Protocol. As there is no byte type in Python 2.*, the type of the received data is str. Of course, len(data) gives me the length of the string, but how can I know the number of bytes received? Is there no sizeof or something equivalent that would tell me the number of bytes? Or should I assume that, whatever the platform, the number of bytes will be 2 * len(data)? Thanks in advance.
How many bytes are received with dataReceived?
12,431,966
4
2
296
0
python,twisted
The length of the string is the length in bytes.
0
1
0
0
2012-09-14T20:58:00.000
2
0.379949
false
12,431,871
1
0
0
2
I am using Twisted to receive data from a socket. My protocol class inherits from Protocol. As there is no byte type in Python 2.*, the type of the received data is str. Of course, len(data) gives me the length of the string, but how can I know the number of bytes received? Is there no sizeof or something equivalent that would tell me the number of bytes? Or should I assume that, whatever the platform, the number of bytes will be 2 * len(data)? Thanks in advance.
Design recommendations for a multi-user web based count down timer
12,462,388
0
1
414
0
java,python,web-applications,timer,clock
"Single master clock that is always running (no users need be logged in for the clock to continue running)": that's not hard. "The variance between what any given viewer sees and the actual time on the master clock can not be greater than 1 second": that's pretty much impossible; you can take network delays and such into account, but you can't guarantee this. "Any changes made to the master clock/countdown timer/countup timer need to be seen by all viewers near instantly": you could do that with sockets, or you could just keep polling the server. Do a web search for "javascript ntp"; there are a handful of libraries that will do most of what you want (and, I'd argue, enough of what you want). Most work like this: try to calculate the offset of the local clock from the master clock; continually poll the master clock for the time, trying to figure out the average delay; show the time based on fancy math over the local vs. master clock. Years ago I worked on some Flash-based chat rooms where a SWF established a socket connection to a Twisted Python server. That worked well enough for our needs, but we didn't care about latency.
0
1
0
0
2012-09-17T14:44:00.000
2
0
false
12,461,724
0
0
1
1
I help with streaming video production on a weekly basis. We stream live video to a number of satellite locations in the Dallas area. In order to ensure that all of the receiving locations are on the same schedule as the broadcasting location we use a desktop clock/timer application and the remote locations VNC into that desktop to see the clock. I would like to replace the current timer application with a web based one so that we can get rid of the inherently fragile VNC solution. Here are my requirements: Single master clock that is always running (no users need be logged in for clock to continue running) The variance between what any given viewer sees and the actual time on the master clock can not be greater than 1 second. Any changes made to the master clock/countdown timer/countup timer need to be seen by all viewers near instantly. Here is my question: I know enough java and python to be dangerous. But I've never written a web app that requires real time syncing between the server and the client like this. I'm looking for some recommendations on how to architect a web application that meets the above requirements. Any suggestions on languages, libraries, articles, or blogs that can point me in the right direction would be appreciated. One caveat though: I would prefer to avoid using Java EE or .Net if possible.
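A sketch of the offset-estimation scheme described in the answer above, in Python for illustration even though the viewers would do this in JavaScript; fetch_master_time is a hypothetical stand-in for an HTTP call to the master clock:

    import time

    def estimate_offset(fetch_master_time):
        t0 = time.time()
        master = fetch_master_time()     # round trip to the master clock
        t1 = time.time()
        one_way = (t1 - t0) / 2.0        # assume symmetric latency
        return (master + one_way) - t1   # add this to time.time() locally

    # poll repeatedly and average the offsets to smooth out network jitter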
How do I notify Python/Tornado that the client has closed the tab/browser?
12,549,017
0
4
1,509
0
python,connection,chat,tornado
I'd been having this issue for 5-6 days and finally found out what the problem is. Well, not exactly, actually, but it's solved! I searched the internet but found nothing. I said in the post above that I remember it working when I tried the same script a couple of months ago, but I never mentioned using nginx back then. I've been struggling with Apache + mod_proxy, and I don't know what the issue with Apache is, but when I tried nginx this time it just worked! If you have the same issue (on_connection_close not getting fired), "TRY" nginx. Thanks for your help too @Nikolay.
0
1
0
0
2012-09-17T21:15:00.000
2
1.2
true
12,467,204
0
0
0
1
I've been searching for quite a while for a solution to this, but no dice. Edit: I didn't point out that I'm trying to make a chat server. People log in, their id gets appended to a users and a listeners list, and they start chatting. But when one of them closes the tab or browser, the user never gets deleted from either list, so he/she stays logged in. Edit 2: I thought that the numbering above was a little confusing, so I posted the relevant part of the script at the bottom. So far I've tried the on_connection_close() function (which never gets fired, I don't know why) and the on_finish() function (which gets fired every time finish() is called), so that doesn't fit the bill either. Now I've come up with a bit of a solution which involves the on_finish() function: whenever the UpdateHandler class' post() function gets called, self.done = 0 is set. Just before the finish() function fires, I set self.done = 1. Now the on_finish() function gets called, and I print self.done on the console and it's 1. In the same on_finish() function I do an IF self.done = 1 statement; as expected it returns TRUE, and Tornado's io_loop.add_timeout is called with the parameters time.time()+3 (so that it sleeps for 3 seconds, to allow for the user navigating to another page within the website or leaving the website completely) and the callback that will eventually be called. After the 3 seconds I want to check whether self.done still equals 1, or whether, if the user is still on the website, it is 0 again. By the way, every 30 seconds the server finishes the connection and then sends the user a notification to initiate a new connection, so that the connection never times out on its own. When the client closes the browser and the 30-second timeout expires, the server tries to send a notification; if the client were still on my website, it would initiate a new connection, thus calling the post() function in the UpdateHandler class I mentioned above and setting the variable self.done back to 0. (That's why I gave io_loop.add_timeout a margin of 3 seconds.) Now that that's taken care of, I wanted to go ahead and see how it works. I started the server, opened up a browser, navigated to the right URL, and watched how the server responded (by placing a few print statements in the script). When the user stays connected, I can see that after the post() call (at which time self.done = 0) it sleeps for 3 seconds, and then the callback function gets called, but this one prints self.done = 1, which is strange. I know this is not the most efficient way, but it's the only solution I could come up with, and it didn't even work as expected. Conclusion: I hope someone has a good alternative, or maybe can point out a flaw in my theory that breaks the whole thing. I really would like to know how to let Tornado know that the client closed the browser without waiting for the 30-second timeout to finish. Maybe by pinging the open connection or something. I looked into TORNADIO for a little bit but didn't like it that much. I want to do this in pure Tornado, if it's possible of course. I'll submit the code ASAP; I've been trying for half an hour, looking at 'How to Format' etc., but when I try to submit my edit it gives an error: "Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon."
install OSQA using xampp on windows 7
13,022,630
1
0
201
0
python,xampp,osqa
If you're flexible about XAMPP, try the Bitnami native installer: http://bitnami.org/stack/osqa It took me about 10 minutes for the installer to run, and then I had it running on a Win7 localhost.
0
1
0
0
2012-09-18T07:38:00.000
1
1.2
true
12,472,432
0
0
0
1
Can anyone tell me how I can install OSQA on Windows 7 with a XAMPP localhost? I don't know whether XAMPP supports Python. Thanks in advance.
Game Engine 3D to Python3.x?
31,010,631
1
2
1,505
0
python,python-3.x,game-engine
If you want a game engine in Python, I would recommend these: Kivy (multiplatform); PyGame (multiplatform); Blender (a graphical game engine scriptable in Python, multiplatform, also used for modeling); PyOpenGL (multiplatform OpenGL bindings, with which you can build a 3D engine yourself). These are some game engines I know. You also might want to try Unity3D.
0
1
0
1
2012-09-19T02:46:00.000
2
0.099668
false
12,487,889
0
0
0
2
What is the best 3D game engine for Python 3.x that is easy to install on Linux (Debian 7 "wheezy")?
Game Engine 3D to Python3.x?
12,488,430
2
2
1,505
0
python,python-3.x,game-engine
Not sure if it is the "best", but, not working in the field, I am aware of few options other than Blender 3D's game engine. Blender moved to Python 3 scripting at version 2.5, so any version newer than that will use Python 3 for BGE (Blender Game Engine) scripts. Pygame is also available for Python 3.x, and it does feature a somewhat low-level interface to OpenGL, so you could do 3D with it. Neither should have any major problems running on Debian, but maybe you will have to configure some sort of PPA to get packages installed for Python 3. Also, make sure that your Debian's python3 is 3.2; this distribution is known to have surprisingly obsolete packages even when one is running the most recent release.
0
1
0
1
2012-09-19T02:46:00.000
2
0.197375
false
12,487,889
0
0
0
2
What is the best 3D game engine for Python 3.x that is easy to install on Linux (Debian 7 "wheezy")?
How to support both argparse and optparse?
12,491,541
3
4
394
0
python,argparse,optparse
I would stick with optparse as long as it provides the functionality you currently need (and expect to need in future). optparse works perfectly fine, it just won't be developed further. It's still available in Python 3, so even if one day you decide to move to Python 3, it will continue to work.
0
1
0
0
2012-09-19T08:57:00.000
2
1.2
true
12,491,429
1
0
0
1
I have a small application that runs on fairly recent Linux distributions with Python 2.7+ but also on CentOS and Scientific Linux boxes that have not yet made the switch to Python 2.7. optparse is deprecated with Python 2.7 and frankly I don't want to support optparse anyway, which is why I developed the application with argparse in mind. However, argparse does not exist on these older distributions. Moreover, the sysadmins are rather suspicious of installing a backport of argparse. Now, what should I do? Stick with optparse? Write yet another wrapper around both libraries (see the sketch below)? Convince sysadmins and users (who in most cases are just able to start the application) to install an argparse backport?
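If the wrapper route is taken, one possible hedged middle ground is to try argparse and fall back to optparse at runtime. A minimal sketch; the option names are purely illustrative:

    try:
        import argparse

        def parse_args(argv):
            p = argparse.ArgumentParser()
            p.add_argument('-v', '--verbose', action='store_true')
            return p.parse_args(argv)

    except ImportError:
        import optparse

        def parse_args(argv):
            p = optparse.OptionParser()
            p.add_option('-v', '--verbose', action='store_true')
            opts, _args = p.parse_args(argv)
            return opts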
Multiple lines user input in command-line Python application
12,494,019
0
6
6,926
0
python,string,input,copy-paste
Use: input = raw_input("Enter text"). This reads the input into the input variable as one string, so if you paste a whole text, all of it will be in the input variable. EDIT: Apparently, this works only with the Python Shell on Windows.
0
1
0
0
2012-09-19T11:37:00.000
3
0
false
12,493,934
1
0
0
1
Is there any easy way to handle multiple lines of user input in a command-line Python application? I was looking for an answer without any result, because I don't want to: read data from a file (I know, it's the easiest way); create any GUI (let's stay with just a command line, OK?); load the text line by line (it should be pasted at once, not typed and not pasted line by line); work with each of the lines separately (I'd like to have the whole text as a string). What I would like to achieve is to allow the user to paste a whole text (containing multiple lines) and capture the input as one string in an entirely command-line tool. Is that possible in Python? It would be great if the solution worked both in Linux and Windows environments (I've heard that e.g. some solutions may cause problems due to the way cmd.exe works).
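A portable sketch of the paste-everything-at-once idea is to read until end-of-input rather than line by line; the EOF key differs per platform, which may matter for the cmd.exe concern raised above:

    import sys

    sys.stdout.write('Paste your text, then press Ctrl-D (Linux) '
                     'or Ctrl-Z followed by Enter (Windows):\n')
    text = sys.stdin.read()   # the whole paste, newlines included, as one string
    sys.stdout.write('Got %d characters\n' % len(text))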
easy_install2.5 fails to install python-ldap on Ubuntu 12.04
12,494,482
1
0
1,640
0
python,ubuntu
It looks like you need the python 2.5 header files. You might be able to find them in synaptic under python2.5-dev or something similar.
0
1
0
0
2012-09-19T12:12:00.000
1
1.2
true
12,494,465
0
0
0
1
easy_install-2.5 python-ldap Searching for python-ldap Reading http://pypi.python.org/simple/python-ldap/ Reading http://www.python-ldap.org/ Best match: python-ldap 2.4.10 Downloading http://pypi.python.org/packages/source/p/python-ldap/python-ldap-2.4.10.tar.gz#md5=a15827ca13c90e9101e5e9405c1d83be Processing python-ldap-2.4.10.tar.gz Running python-ldap-2.4.10/setup.py -q bdist_egg --dist-dir /tmp/easy_install-dplmGE/python-ldap-2.4.10/egg-dist-tmp-ZlXBub defines: HAVE_SASL HAVE_TLS HAVE_LIBLDAP_R extra_compile_args: extra_objects: include_dirs: /opt/openldap-RE24/include /usr/include/sasl /usr/include library_dirs: /opt/openldap-RE24/lib /usr/lib libs: ldap_r file Lib/ldap.py (for module ldap) not found file Lib/ldap/controls.py (for module ldap.controls) not found file Lib/ldap/extop.py (for module ldap.extop) not found file Lib/ldap/schema.py (for module ldap.schema) not found warning: no files found matching 'Makefile' warning: no files found matching 'Modules/LICENSE' file Lib/ldap.py (for module ldap) not found file Lib/ldap/controls.py (for module ldap.controls) not found file Lib/ldap/extop.py (for module ldap.extop) not found file Lib/ldap/schema.py (for module ldap.schema) not found file Lib/ldap.py (for module ldap) not found file Lib/ldap/controls.py (for module ldap.controls) not found file Lib/ldap/extop.py (for module ldap.extop) not found file Lib/ldap/schema.py (for module ldap.schema) not found In file included from Modules/LDAPObject.c:4:0: Modules/common.h:10:20: fatal error: Python.h: No such file or directory compilation terminated. error: Setup script exited with error: command 'gcc' failed with exit status 1 aptitude search python2.5 i python2.5 - An interactive high-level object-oriented language (version 2.5) v python2.5-celementtree - v python2.5-cjkcodecs - v python2.5-ctypes - v python2.5-dialog - v python2.5-elementtree - v python2.5-iplib - i A python2.5-minimal - A minimal subset of the Python language (version 2.5) v python2.5-plistlib - v python2.5-profiler - v python2.5-reverend - v python2.5-wsgiref
Python Error The _posixsubprocess module is not being used
12,508,287
3
7
12,868
0
python,multithreading
Check whether you can import _posixsubprocess manually; subprocess tries to import this module in its code, and if that import raises an exception this warning is produced.
0
1
0
0
2012-09-20T07:53:00.000
5
0.119427
false
12,508,243
1
0
0
3
Hi, I'm running a subprocess with threads through a Python wrapper and I get the following warning when I use the subprocess module: "The _posixsubprocess module is not being used, Child process reliability may suffer if your program uses threads." What does this mean? How can I get rid of it?
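As a quick probe of the diagnosis suggested in the first answer, one can attempt the hidden import directly and see which exception it actually raises:

    try:
        import _posixsubprocess  # the C accelerator that subprocess prefers
    except ImportError as exc:
        print('falling back to pure-Python child handling: %s' % exc)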
Python Error The _posixsubprocess module is not being used
51,902,409
3
7
12,868
0
python,multithreading
Unsetting PYTHONHOME fixed this issue for me.
0
1
0
0
2012-09-20T07:53:00.000
5
0.119427
false
12,508,243
1
0
0
3
Hi, I'm running a subprocess with threads through a Python wrapper and I get the following warning when I use the subprocess module: "The _posixsubprocess module is not being used, Child process reliability may suffer if your program uses threads." What does this mean? How can I get rid of it?
Python Error The _posixsubprocess module is not being used
62,709,088
0
7
12,868
0
python,multithreading
This can happen if you have more than one version of Python in use; you need to specify the correct version of Python for each program. For example, I need Python 3.7 for Miniconda, but mendeleydesktop has trouble with this version (also a problem with _posixsubprocess and its location), so instead of running the program in a Python 3 environment I use Python 2.7, and that solves the problem. Hope it helps. Cheers, Flor
0
1
0
0
2012-09-20T07:53:00.000
5
0
false
12,508,243
1
0
0
3
Hi, I'm running a subprocess with threads through a Python wrapper and I get the following warning when I use the subprocess module: "The _posixsubprocess module is not being used, Child process reliability may suffer if your program uses threads." What does this mean? How can I get rid of it?
Python app which reads and writes into its current working directory as a .app/exe
12,520,441
0
2
1,641
0
python,file,io,exe,.app
Use the __file__ variable. This will give you the filename of your module. Using the functions in os.path you can determine the full path of the parent directory of your module. The os.path module is covered in the standard Python documentation, so you should be able to find it. Then you can combine the module path with your filename to open it, using os.path.join.
0
1
0
0
2012-09-20T19:29:00.000
2
0
false
12,519,610
1
0
0
1
I have a Python script which reads a text file in its current working directory called "data.txt", then converts the data inside of it into a JSON format for another, separate program to handle. The problem I'm having is that I'm not sure how to read the .txt file (and write a new one) that is in the same directory as the .app when the Python script is all bundled up. The current method I'm using doesn't work, apparently because of something to do with the fact that it's run from the terminal instead of executed as a .app. Any help is appreciated!
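A sketch of the __file__ recipe from the answer; note that inside a frozen .app/.exe, __file__ may point into the bundle, so sys.executable is sometimes the better anchor (the frozen check and file names here are assumptions, not confirmed for the asker's bundler):

    import os
    import sys

    if getattr(sys, 'frozen', False):                  # set by py2exe/py2app
        base_dir = os.path.dirname(sys.executable)
    else:
        base_dir = os.path.dirname(os.path.abspath(__file__))

    data_path = os.path.join(base_dir, 'data.txt')
    with open(data_path) as f:
        data = f.read()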
How to fetch time-based entity from datastore in app engine
12,522,188
2
1
391
0
python,google-app-engine,app-engine-ndb
You could add another field for the date. A ComputedProperty would probably make sense for that. Or you could fetch from the start of the day, in batches, and stop fetching once you reach the end of your day. I'd imagine you could come up with a sensible default based on how many appointments you'd typically have in one day to keep this reasonably efficient.
0
1
0
0
2012-09-20T23:03:00.000
3
1.2
true
12,522,160
0
0
1
2
I have an application on app engine, and this application has an entity called Appointment. An Appointment has a start_time and a end_time. I want to fetch appointments based on the time, so I want to get all appointments for a given day, for example. Since app engine doesn't support inequality query based on two fields, what can I do?
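A minimal ndb sketch of the ComputedProperty idea from the answer above, assuming the model stores naive UTC datetimes (the time-zone caveat raised in the other answer to this question still applies); the field names are stand-ins:

    from google.appengine.ext import ndb

    class Appointment(ndb.Model):
        start_time = ndb.DateTimeProperty()
        end_time = ndb.DateTimeProperty()
        # a single equality-filterable field derived from start_time,
        # sidestepping the two-property inequality restriction
        day = ndb.ComputedProperty(
            lambda self: self.start_time.strftime('%Y-%m-%d'))

    todays = Appointment.query(Appointment.day == '2012-09-21').fetch()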
How to fetch time-based entity from datastore in app engine
12,535,660
1
1
391
0
python,google-app-engine,app-engine-ndb
The biggest problem is that a "date" means a different start and end "time" depending on the time zone of the user. And you cannot force all of your users to stick to one time zone all of their lives, not to mention DST changes twice a year. So you cannot simply create a new property in your entity to store a "date" object as was suggested. (This is why GAE does not have a "date" type property.) I built a scheduling app. When a user specifies the desired range for events (it can be a day, a week or a month), I retrieve all events that have an end time larger than the start time of the requested range, and then I loop through them until I find the last event which has a start time smaller than the end time of the range; a sketch follows below. You can specify how many entities you want to fetch in one request depending on the requested range (more for a month than for a day). For example, if a given calendar is likely to have 5-10 events per day, it's enough to fetch the first 10 entities (you can always fetch more if the condition is not met). For a month you can set the batch size to 100, for example. This is a micro-optimization, however.
0
1
0
0
2012-09-20T23:03:00.000
3
0.066568
false
12,522,160
0
0
1
2
I have an application on app engine, and this application has an entity called Appointment. An Appointment has a start_time and a end_time. I want to fetch appointments based on the time, so I want to get all appointments for a given day, for example. Since app engine doesn't support inequality query based on two fields, what can I do?
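The batched range scan described in that answer might look roughly like this in ndb; names and the batch size are illustrative, and the early stop is a heuristic:

    from google.appengine.ext import ndb

    def events_in_window(range_start, range_end):
        # one inequality filter, on end_time only, ordered by the same field
        q = Appointment.query(Appointment.end_time > range_start)
        q = q.order(Appointment.end_time)
        hits = []
        for appt in q.iter(batch_size=100):
            if appt.start_time >= range_end:
                break   # heuristic stop; safe only if event durations are bounded
            hits.append(appt)
        return hits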
Import Error: No module named pytz after using easy_install
14,970,445
3
6
21,691
0
python,python-2.7
For what it's worth, the answer to the fundamental problem here is that the pytz installation process didn't actually extract the ".egg" file (at least, this is what I noticed with a very similar issue.) You may consider going into the site-packages folder and extracting it yourself.
0
1
0
0
2012-09-22T03:28:00.000
3
0.197375
false
12,540,435
1
0
0
2
Today is my first day at Python and I have been going through problems. One that I was working on was: "Write a short program which extracts the current date and time from the operating system and prints it on screen in the following format: day, month, year, current time in GMT. Demonstrate that it works." I was going to use pytz, so I used easy_install pytz. This installed it in my site-packages (pytz-2012d-py2.7.egg). Is this the correct directory for me to be able to import the module? In my Python shell I use from pytz import timezone and I get "ImportError: No module named pytz". Any ideas? Thanks in advance
Import Error: No module named pytz after using easy_install
27,397,683
3
6
21,691
0
python,python-2.7
It matters whether you are using Python v2 or Python v3 - each has a separate easy_install package! In Debian there are: python-pip, python3-pip, and correspondingly easy_install and easy_install3. If you use the wrong version of easy_install you will be updating the wrong libraries.
0
1
0
0
2012-09-22T03:28:00.000
3
0.197375
false
12,540,435
1
0
0
2
Today is my first day at Python and I have been going through problems. One that I was working on was: "Write a short program which extracts the current date and time from the operating system and prints it on screen in the following format: day, month, year, current time in GMT. Demonstrate that it works." I was going to use pytz, so I used easy_install pytz. This installed it in my site-packages (pytz-2012d-py2.7.egg). Is this the correct directory for me to be able to import the module? In my Python shell I use from pytz import timezone and I get "ImportError: No module named pytz". Any ideas? Thanks in advance
hadoop streaming with python modules
12,556,901
0
0
1,087
0
python,hadoop,hadoop-streaming
After reviewing sent_tokenize's source code, it looks like the nltk.sent_tokenize AND the nltk.tokenize.sent_tokenize methods/functions rely on a pickle file (one used to do punkt tokenization) to operate. Since this is Hadoop-streaming, you'd have to figure out where/how to place that pickle file into the zip'd code module that is added into the hadoop job's jar. Bottom line? I recommend using the RegexpTokenizer class to do sentence and word level tokenization.
0
1
0
0
2012-09-23T13:51:00.000
1
1.2
true
12,552,890
0
0
0
1
I've seen a technique (on stackoverflow) for executing a hadoop streaming job using zip files to store referenced Python modules. I'm having some errors during the mapping phase of my job's execution. I'm fairly certain it's related to the zip'd module loading. To debug the script, I have run my data set through sys.stdin/sys.stdout using command line pipes into my mapper and reducer, something like this: head inputdatafile.txt | ./mapper.py | sort -k1,1 | ./reducer.py and the results look great. When I run this through hadoop though, I start hitting some problems, i.e. the mapper and reducer fail and the entire hadoop job fails completely. My zip'd module file contains *.pyc files - is that going to impact this thing? Also, where can I find the errors generated during the map/reduce process when using hadoop streaming? I've used the -file command line argument to tell hadoop where the zip'd module is located and where my mapper and reducer scripts are located. I'm not doing any crazy configuration options to increase the number of mappers and reducers used in the job. Any help would be greatly appreciated! Thanks!
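Following the answer's RegexpTokenizer suggestion, a self-contained tokenizer that avoids punkt's pickle file entirely might look like this; the patterns are deliberately crude stand-ins, not production-grade sentence splitting:

    from nltk.tokenize import RegexpTokenizer

    sent_tok = RegexpTokenizer(r'[.!?]\s+', gaps=True)   # split ON the pattern
    word_tok = RegexpTokenizer(r'\w+')                   # keep word matches

    for line in ['First sentence. Second one! Done?']:
        for sent in sent_tok.tokenize(line):
            print(word_tok.tokenize(sent))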
Celery Task Grouping/Aggregation
12,705,857
4
13
3,185
0
python,asynchronous,task,celery,aggregation
An easy way to accomplish this is to write all the actions a task should take to persistent storage (e.g. a database) and let a periodic job do the actual processing in one batch (with a single connection). Note: make sure you have some locking in place to prevent the queue from being processed twice! There is a nice example of how to do something similar at the kombu level (http://ask.github.com/celery/tutorials/clickcounter.html). Personally I like the way sentry does something like this to batch increments at the db level (sentry.buffers module).
0
1
0
0
2012-09-23T21:14:00.000
2
0.379949
false
12,556,309
0
0
1
1
I'm planning to use Celery to handle sending push notifications and emails triggered by events from my primary server. These tasks require opening a connection to an external server (GCM, APS, email server, etc). They can be processed one at a time, or handled in bulk with a single connection for much better performance. Often there will be several instances of these tasks triggered separately in a short period of time. For example, in the space of a minute, there might be several dozen push notifications that need to go out to different users with different messages. What's the best way of handling this in Celery? It seems like the naïve way is to simply have a different task for each message, but that requires opening a connection for each instance. I was hoping there would be some sort of task aggregator allowing me to process e.g. 'all outstanding push notification tasks'. Does such a thing exist? Is there a better way to go about it, for example like appending to an active task group? Am I missing something? Robert
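A rough shape of the store-then-flush pattern from the answer, sketched under explicit assumptions: PendingPush is a hypothetical Django model, open_gcm_connection is a placeholder for whatever bulk sender is used, and the task decorator spelling varies across Celery versions:

    from celery.task import task

    @task
    def queue_push(user_id, message):
        # cheap: just record the work, no external connection here
        PendingPush.objects.create(user_id=user_id, message=message)

    @task
    def flush_pushes():
        # run periodically via celerybeat; needs locking against overlap
        batch = list(PendingPush.objects.all()[:500])
        conn = open_gcm_connection()          # one connection per batch
        for item in batch:
            conn.send(item.user_id, item.message)
        PendingPush.objects.filter(pk__in=[i.pk for i in batch]).delete()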
Options for Image Caching
12,585,067
1
0
717
0
python,google-app-engine,caching,distributed-caching,image-caching
Blobstore is fine. Just make sure you set the HTTP cache headers in your url handler. This allows your files to be either cached by the browser (in which case you pay nothing) or App Engine's Edge Cache, where you'll pay for bandwidth but not blobstore accesses. Be very careful with edge caching though. If you set an overly long expiry, users will never see an updated version. Often the solution to this is to change the url when you change the version.
0
1
0
0
2012-09-24T22:40:00.000
3
0.066568
false
12,573,915
0
0
1
1
I am running a website on google app engine written in python with jinja2. I have gotten memcached to work for most of my content from the database and I am fuzzy on how I can increase the efficiency of images served from the blobstore. I don't think it will be much different on GAE than any other framework but I wanted to mention it just in case. Anyway are there any recommended methods for caching images or preventing them from eating up my read and write quotas?
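A sketch of the header-setting the answer recommends, using the stock blobstore download handler; the route/key scheme is illustrative and the exact header interplay with send_blob may vary by SDK version:

    from google.appengine.ext.webapp import blobstore_handlers

    class ServeImage(blobstore_handlers.BlobstoreDownloadHandler):
        def get(self, blob_key):
            # let browsers and the edge cache keep the image for a day
            self.response.headers['Cache-Control'] = 'public, max-age=86400'
            self.send_blob(blob_key)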
why is monkeyrunner not working when run from a remote machine?
12,593,620
0
2
1,078
0
android,python,centos,monkeyrunner,rpyc
Using this function to run monkeyrunner doesn't work, although running ls or pwd works fine: conn.modules.subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True) The chunk of code below solved my problem: import rpyc; import subprocess, os; conn = rpyc.classic.connect("192.XXX.XXX.XXX", XXXXX); conn.execute("print 'Hello'"); conn.modules.os.popen("monkeyrunner -v ALL MYSCRIPT.py") Hope this helps those who are experiencing the same problem as I was.
0
1
0
1
2012-09-25T07:20:00.000
2
0
false
12,578,021
0
0
0
1
I need to run a monkeyrunner script on a remote machine. I'm using Python to automate it and RPyC so that I can connect to other machines; everything is running on CentOS. Written below is the command that I used: import rpyc import subprocess conn = rpyc.classic.connect("192.XXX.XXX.XXX",XXXXX) conn.execute ("print 'Hello'") subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True) and this is the result: can't open specified script file Usage : monkeyrunner [option] script_file -s MonkeyServer IP Address -p MonkeyServer TCP Port -v MonkeyServer Logging level And then I realized that if you use the command below, it runs the command on your own machine. (Example: if the command inside the Popen is "ls", the result it gives you is the list of files and directories in the current directory of the LOCALHOST.) Hence, the command is wrong: subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True) and so I replaced the code with this: conn.modules.subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True) And it gives me this error message: ======= Remote traceback ======= Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py", line 300, in _dispatch_request res = self._HANDLERS[handler](self, *args) File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py", line 532, in _handle_call return self._local_objects[oid](*args, **dict(kwargs)) File "/usr/lib/python2.4/subprocess.py", line 542, in init errread, errwrite) File "/usr/lib/python2.4/subprocess.py", line 975, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory ======= Local exception ======== Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/netref.py", line 196, in call return syncreq(_self, consts.HANDLE_CALL, args, kwargs) File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/netref.py", line 71, in syncreq return conn.sync_request(handler, oid, *args) File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py", line 438, in sync_request raise obj OSError: [Errno 2] No such file or directory I am thinking that it cannot run the file because I don't have administrator access (since I didn't supply the username and password of the remote machine)? Help!
How to put a PC into sleep mode using python?
12,581,615
0
2
1,160
0
python
If you have "Wake On Lan" enabled you could potentially run a python script on a different PC and trigger the wake up after your specific period of time. The scripts would probably need to talk to each other, unless you just do it all at times set in advance.
0
1
0
0
2012-09-25T10:51:00.000
2
0
false
12,581,463
1
0
0
1
While the copy is in progress, can we put a PC into sleep mode for a specific period of time, then wake it up and continue the copy using a Python script? Can you please share the code? This is actually possible using a shell script.
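The Wake-on-LAN trigger from the answer can be a few lines of Python run from the second PC; the MAC address here is a placeholder, and the target machine's BIOS/NIC must have WOL enabled:

    import binascii
    import socket

    def wake(mac='00:11:22:33:44:55'):
        # magic packet: 6 x 0xFF followed by the MAC repeated 16 times
        magic = '\xff' * 6 + binascii.unhexlify(mac.replace(':', '')) * 16
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic, ('255.255.255.255', 9))   # common WOL port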
Scripting Hashbang: How to get the python 2 interpreter?
12,606,395
0
3
739
0
python,linux,shell,unix,python-2.7
If you must run on Python 2, you had best also call the interpreter as python2. I think most UNIX releases have symlinks from /usr/bin/python and /usr/bin/python2 to the appropriate binary.
0
1
0
0
2012-09-26T16:31:00.000
3
0
false
12,606,333
0
0
0
1
I'm writing scripts that have to run on a number of different UNIX-like releases. These are written in Python 2.x. Unfortunately, some newer releases have taken to calling this flavor of binary "python2" instead of "python." Thus, "#!/usr/bin/env python" doesn't work to look for the proper installed Python interpreter. Either I get the version 3 interpreter (bad) or no interpreter at all (worse!). Is there a clever way to write a Python script such that it will load with the python2 interpreter if installed, else the python interpreter if it's installed? I'd have to use other mechanisms to detect when "python" is a python3, but as I'm inside a Python-like environment at that point, I can live with that. I imagine I can write a wrapper launcher, call it "findpython2," and use that as the #! interpreter for the script, but that means I have to install findpython2 in the search path, which is decidedly sub-optimal (these scripts are often called by absolute reference, so they're not in the path.)
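One self-contained trick, as a sketch: keep the plain env shebang but re-exec under python2 when the script finds itself running on Python 3 (this assumes a python2 binary exists somewhere on PATH):

    #!/usr/bin/env python
    import os
    import sys

    if sys.version_info[0] >= 3:
        # replace this process with the same script run under python2
        os.execvp('python2', ['python2'] + sys.argv)

    # ... rest of the Python 2 script follows ...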
How to automate the sending of files over the network using python?
12,613,729
0
0
1,825
0
python,shell,terminal,centos
In my experience, using sftp/scp for the first time will prompt the user to accept the host's public key, such as: The authenticity of host 'xxxx' can't be established. RSA key fingerprint is xxxx. Are you sure you want to continue connecting (yes/no)? Once you input yes, the public key will be saved in ~/.ssh/known_hosts, and next time you will not get such a prompt/alert. To avoid this prompt/alert in a batch script, you can turn strict host key checking off using scp -Bqpo StrictHostKeyChecking=no, but then you are vulnerable to a man-in-the-middle attack. You can also choose to connect to the target server manually and save the host's public key before deploying your batch script.
0
1
1
0
2012-09-27T03:14:00.000
3
0
false
12,613,552
0
0
0
1
Here's what I need to do: I need to copy files over the network. The files to be copied are on one machine and I need to send them to the remote machines. It should be automated and it should be done using Python. I am quite familiar with Python's os.popen and subprocess.Popen. I could use these to copy the files, BUT the problem is that once I have run a one-liner command like scp xxx@localhost:file1.txt yyy@192.168.104.XXX:file2.txt it will definitely ask for something like: Are you sure you want to connect (yes/no)? Password: And if I'm not mistaken, once I have sent this command (assuming that I code this in Python) conn.modules.os.popen("scp xxx@localhost:file1.txt yyy@192.168.104.XXX:file2.txt") followed by this command conn.modules.os.popen("yes") the output (and I'm quite sure it would give me errors) would be different compared to the output when I type it manually into the terminal. Do you know how to code this in Python? Or could you tell me something (a command, etc.) that would solve my problem? Note: I am using RPyC to connect to other remote machines and all machines are running CentOS
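If installing host keys in advance is not an option, pexpect can drive the interactive prompts; a sketch with the asker's placeholder hosts kept as-is and my_password as a hypothetical variable:

    import pexpect

    child = pexpect.spawn('scp file1.txt yyy@192.168.104.XXX:file2.txt')
    i = child.expect(['Are you sure you want to continue connecting',
                      'assword:'])
    if i == 0:                      # first-time host key prompt
        child.sendline('yes')
        child.expect('assword:')
    child.sendline(my_password)     # hypothetical variable holding the password
    child.expect(pexpect.EOF)       # wait for the copy to finish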
Python libraries after removing MacPorts and installing homebrew
12,623,496
3
1
215
0
python,macports,homebrew
I wouldn't use the MacPorts packages in Homebrew. I'd reinstall them all. A lot of Python packages are compiled, or at least have compiled elements. You're asking for a lot of potential trouble mixing them up.
0
1
0
0
2012-09-27T13:59:00.000
1
1.2
true
12,623,155
1
0
0
1
I recently removed MacPorts and all its packages and installed Python, Gphoto and some other bits using Homebrew. However, Python is crashing when looking for libraries, as it is looking for them in a MacPorts path. My PATH is correct and the Python config shows the right path (/usr/local/Cellar etc.). Can someone tell me how to make Python use the libraries installed via Homebrew, I suppose by changing the path effectively?
python terminate/kill subprocess group
12,680,641
3
3
1,294
0
python,subprocess,popen,kill,kill-process
By changing the process group id at the beginning of script2.py's execution, the subsequent processes belong to script2's group. So calling killpg() from script1.py with script2's pid (which equals the group id, since script2 is the group leader) does the job.
0
1
0
0
2012-09-27T16:28:00.000
2
1.2
true
12,625,951
0
0
0
1
I have a few Python scripts which open each other in cascade via subprocess.Popen() (I call script1.py, which does a popen of script2.py, which does a popen of script3.py, etc.). Is there any way to terminate/kill all subprocesses of script1.py given script1.py's PID? os.killpg() doesn't work. Thanks for your help.
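A sketch of the accepted pattern from the parent's side: start script2.py as a new session/group leader, then signal its whole process group later:

    import os
    import signal
    import subprocess

    proc = subprocess.Popen(['python', 'script2.py'],
                            preexec_fn=os.setsid)   # child gets its own group
    # ... later, kill script2.py and everything it spawned:
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)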
Which one is the faster way to get the list of directories subprocess.Popen or os.listdir
12,629,604
3
0
692
0
python,python-2.7
os.listdir is very likely compiled C that calls the same base libc system methods that ls does. In contrast, subprocess.Popen forks a whole new process, which is an expensive system operation and requires new file handles to deal with tty operations.
0
1
0
0
2012-09-27T19:55:00.000
1
0.53705
false
12,629,091
0
0
0
1
I want to retrieve the list of files in a directory. What would be the fastest way to do so: using subprocess.Popen or using os.listdir? The directory contains 10000s of files, and this has to be done recursively to retrieve the list from the directory and its subdirectories. I know we can use os.walk to retrieve the contents of directories, but os.walk just doesn't work for what I am supposed to do. Thanks
Optimizing the size of embedded Python interpreter
12,632,227
3
17
8,769
0
python,embedded
There may be ways you can cram it down a little more just by configuring, but not much more. Also, the actual interactive-mode code is pretty trivial, so I doubt you're going to save much there. I'm sure there are more substantial features you're not using that you could hack out of the interpreter to get the size down. For example, you can probably throw out a big chunk of the parser and compiler and just deal with nothing but bytecode. The problem is that the only way to do that is to hack the interpreter source. (And it's not the most beautiful code in the world, so you're going to have to dedicate a good amount of time to learning your way around.) And you'll have to know what features you can actually hack out. The only other real alternative would be to write a smaller interpreter for a Python-like language—e.g., by picking up the tinypy project. But from your comments, it doesn't sound as if "Python-like" is sufficient for you unless it's very close. Well, I suppose there's one more alternative: Hack up a different, nicer Python implementation than CPython. The problem is that Jython and IronPython aren't native code (although maybe you can use a JVM->native compiler, or possibly cram enough of Jython into a J2ME JVM?), and PyPy really isn't ready for prime time on embedded systems. (Can you wait a couple years?) So, you're probably stuck with CPython.
0
1
0
1
2012-09-27T23:32:00.000
3
0.197375
false
12,631,577
0
0
0
1
I spent the last 3 hours trying to find out if it is possible to disable or build Python without the interactive mode, or how I can make the Python executable smaller for Linux. As you can guess it's for an embedded device, and after the cross compilation Python is approximately 1MB big, and that is too much for me. Now the questions: Are there possibilities to shrink the Python executable? Maybe disabling the interactive mode (starting Python programs on the command line). I looked at the configure options and tried some of them, but it doesn't produce any change in my executable. I compile it with optimized options from gcc and it's already stripped.
Python CGI in IIS: issue with urandom function
21,917,122
0
5
928
0
python,cgi,iis-7.5
I've solved the _urandom() error by changing the IIS 7.5 settings to Impersonate User = yes. I'm not a Windows admin so I cannot elaborate. Afterwards, import cgi inside the Python script worked just fine.
0
1
0
1
2012-09-28T12:21:00.000
1
0
false
12,639,930
0
0
0
1
I’m having a very strange issue with running a python CGI script in IIS. The script is running in a custom application pool which uses a user account from the domain for identity. Impersonation is disabled for the site and Kerberos is used for authentication. When the account is member of the “Domain Admins” group, everything works like a charm When the account is not member of “Domain Admins”, I get an error on the very first line in the script: “import cgi”. It seems like that import eventually leads to a random number being generated and it’s the call to _urandom() which fails with a “WindowsError: [Error 5] Access is denied”. If I run the same script from the command prompt, when logged in with the same user as the one from the application pool, everything works as a charm. When searching the web I have found out that the _urandom on windows is backed by the CryptGenRandom function in the operating system. Somehow it seems like my python CGI script does not have access to that function when running from the IIS, while it has access to that function when run from a command prompt. To complicate things further, when logging in as the account running the application pool and then invoking the CGI-script from the web browser it works. It turns out I have to be logged in with the same user as the application pool for it to work. As I previously stated, impersonation is disabled, but somehow it seems like the identity is somehow passed along to the security functions in windows. If I modify the random.py file that calls the _urandom() function to just return a fixed number, everything works fine, but then I have probably broken a lot of the security functions in python. So have anyone experienced anything like this? Any ideas of what is going on?
Storing text files > 1MB in GAE/P
12,648,928
1
2
223
0
python,google-app-engine,google-docs-api,blobstore,google-cloud-storage
If you're already using the Files API to read and write the files, I'd recommend you use Google Cloud Storage rather than the Blobstore. GCS offers a richer RESTful API (makes it easier to do things like access control), does a number of things to accelerate serving static data, etc.
0
1
0
0
2012-09-28T12:52:00.000
3
0.066568
false
12,640,409
0
0
1
2
I have a Google App Engine app where I need to store text files that are larger than 1 MB (the maximum entity size). I'm currently storing them in the Blobstore and I make use of the Files API for reading and writing them. Current operations include uploading them from a user, reading them to process and update, and presenting them to a user. Eventually, I would like to allow a user to edit them (likely as a Google doc). Are there advantages to storing such text files in Google Cloud Storage, as a Google Doc, or in some other location instead of using the Blobstore?
Storing text files > 1MB in GAE/P
12,641,339
0
2
223
0
python,google-app-engine,google-docs-api,blobstore,google-cloud-storage
Sharing data is easier in Google Docs (now Google Drive) and Google Cloud Storage. Using Google Drive you can also use the power of Google Apps Script.
0
1
0
0
2012-09-28T12:52:00.000
3
0
false
12,640,409
0
0
1
2
I have a Google App Engine app where I need to store text files that are larger than 1 MB (the maximum entity size). I'm currently storing them in the Blobstore and I make use of the Files API for reading and writing them. Current operations include uploading them from a user, reading them to process and update, and presenting them to a user. Eventually, I would like to allow a user to edit them (likely as a Google doc). Are there advantages to storing such text files in Google Cloud Storage, as a Google Doc, or in some other location instead of using the Blobstore?
Vertx SockJS server vs sockjs-tornado
13,562,205
3
2
1,306
0
python,tornado,vert.x,sockjs
Vertx has built-in clustering support. I haven't tried it with many nodes, but it seemed to work well with a few. Internally it uses Hazelcast to organise the nodes. Vertx also runs on a JVM, which already has many monitoring/admin tools that might be useful. So Vertx seems to me like the "batteries included" solution.
0
1
1
0
2012-09-29T11:32:00.000
2
0.291313
false
12,652,336
0
0
0
1
I've been inspecting two similar solutions for supporting web sockets via SockJS using an independent Python server. I need to write a complex, scalable web-socket-based web application, and I'm afraid it will be hard to scale Tornado, while it seems Vertx is better at horizontal scaling of web sockets. I also understand that Redis can be used in conjunction with Tornado for scaling a pub/sub system horizontally, and HAProxy for scaling the SockJS requests. Between Vertx and Tornado, what is the preferred solution for writing a scalable system which supports SockJS?
Create empty file using python
31,773,158
61
311
452,830
0
python
Of course there IS a way to create files without opening. It's as easy as calling os.mknod("newfile.txt"). The only drawback is that this call requires root privileges on OSX.
0
1
0
0
2012-09-29T17:22:00.000
2
1
false
12,654,772
0
0
0
1
I'd like to create a file with path x using python. I've been using os.system(y) where y = 'touch %s' % (x). I've looked for a non-directory version of os.mkdir, but I haven't been able to find anything. Is there a tool like this to create a file without opening it, or using system or popen/subprocess?
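For portability beyond the os.mknod suggestion (which needs privileges on OS X and does not exist on Windows), a touch-like sketch that avoids shelling out:

    import os

    def touch(path):
        with open(path, 'a'):          # creates the file if it is missing
            os.utime(path, None)       # bump mtime, like the touch command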
/usr/bin/env python opens up python, but ./blabla.py does not execute
12,663,789
0
0
2,334
0
python,centos
Add #!/usr/bin/env python at the head of your script file. It tells your system to search for the Python interpreter and execute your script with it.
0
1
0
1
2012-09-30T18:20:00.000
3
0
false
12,663,774
0
0
0
1
python blabla.py will execute. But ./blabla.py gives me an error of "no such file or directory" on CentOS6.3. /usr/bin/env python does open up python properly. I am new to linux and really would like to get this working. Could someone help? Thanks in advance! Note: thanks to all the fast replies! I did have the #!/usr/bin/env python line at the beginning. which python gives /usr/bin/python as an output. And the chmod +x was done as well. The exact error was "no such file or directory" for ./blabla.py, but python blabla.py runs fine.
cx_Freeze. How install service and execute script after install
36,247,376
0
9
690
0
python,cx-freeze
I think if you don't have a certificate that's impossible.
0
1
0
0
2012-10-01T08:39:00.000
1
0
false
12,669,938
1
0
0
1
I wrote some scripts for a customer. In order not to install Python and its dependent packages, I packed everything into 3 exe files using cx_Freeze. The first is a Windows service, which does most of the work. The second is a settings wizard. The third is a client for working with the Windows service. Now I face the task: after installing the package (made using bdist_msi), I need to register the service in the system and run the wizard. How do I do it?
failed to set __main__.__loader__ in Python
15,816,458
0
12
5,239
0
python
I also had this problem. Like mottyg1 said, it happens when the Python script is run from a directory containing non-English characters. I can't change the directory name though, and my Python script needed to be in the directory in order to perform manipulations on the filenames. So my workaround was simply to move the script to a different folder and then pass in the directory containing the files to be changed. So to be clear, the problem occurs only when the directory containing the Python file has non-English characters; Python can still handle such characters in its functions, at least as far as I've been able to tell.
0
1
0
0
2012-10-02T18:34:00.000
2
0
false
12,696,151
1
0
0
1
When running any Python script (by double clicking a .py file on Windows 7) I'm getting a Python: failed to set __main__.__loader__ error message. What to do? More details: The scripts work on other machines. The only version of Python installed on the machine on which the scripts don't work is 3.2. I get the same error when trying to run the script from the Windows shell (cmd). Here's an example for the content of a file named "hey.py" that I failed to run on my machine: print('hey')
How to debug Celery/Django tasks running locally in Eclipse
19,998,790
1
29
23,443
0
python,eclipse,celery
I create a management command to test the task; I find it easier than running it from the shell.
0
1
0
0
2012-10-02T20:57:00.000
5
0.039979
false
12,698,212
0
0
1
1
I need to debug a Celery task from the Eclipse debugger. I'm using Eclipse, PyDev and Django. First, I open my project in Eclipse and put a breakpoint at the beginning of the task function. Then I start the Celery workers from Eclipse by right-clicking manage.py in the PyDev Package Explorer, choosing "Debug As->Python Run" and specifying "celeryd -l info" as the argument. This starts MainThread, Mediator and three more threads visible from the Eclipse debugger. After that I return to the PyDev view and start the main application by right-clicking the project and choosing Run As/PyDev:Django. My issue is that once the task is submitted via mytask.delay() it doesn't stop at the breakpoint. I put some traces within the task's code so I can see that it was executed in one of the worker threads. So, how do I make the Eclipse debugger stop at a breakpoint placed within the task when it is executed in the Celery worker's thread?
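A sketch of the management-command approach from the last answer, with myapp and mytask standing in for the asker's own names; calling the task function directly keeps execution in the process the debugger is attached to:

    # myapp/management/commands/runtask.py  (hypothetical layout)
    from django.core.management.base import BaseCommand
    from myapp.tasks import mytask

    class Command(BaseCommand):
        help = 'Run mytask synchronously so PyDev breakpoints fire'

        def handle(self, *args, **options):
            mytask(*args)   # direct call instead of mytask.delay()

Alternatively, setting CELERY_ALWAYS_EAGER = True in the Django settings makes .delay() run tasks inline in the calling process, which also lets breakpoints fire.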
suspend embedded python script execution
12,862,310
1
1
164
0
python,embed
Well, the only way I could come up with is to run the Python engine on a separate thread. The main thread is then blocked while the Python thread is running. When I need to suspend, I block the Python thread and let the main thread run. When necessary, in the OnIdle of the main thread, I block it and let Python continue. It seems to be working fine.
0
1
0
0
2012-10-03T15:14:00.000
1
1.2
true
12,711,511
0
0
0
1
I have an app that embeds Python scripting. I'm adding calls to Python from C, and my problem is that I need to suspend the script execution, let the app run, and restore the execution from where it was suspended. The idea is that Python would call, say, a "WaitForData" function, and at that point the script must suspend (pause) and the call bail out so the app event loop can continue. When the necessary data is present, I would like to restore the execution of the script, as if the Python call had just returned at that point. I'm running single-threaded Python. Any ideas how I can do this, or something similar, where the app event loop gets to run before the Python call exits?
Telnet Server ubuntu - password stream
12,788,388
1
4
690
0
python,telnet,ubuntu-12.04
If you really want to use the system's su program, you will need to create a terminal pair, see man 7 pty; in Python that's the pty.openpty call, which returns a pair of file descriptors, one for you and one for su. Then you have to fork, and in the child process change stdin/out/err to the slave fd and exec su. In the parent process you send data to and receive data from the master fd. The Linux kernel connects the two together. Alternatively you could perhaps emulate su instead?
0
1
0
0
2012-10-03T18:17:00.000
1
1.2
true
12,714,434
0
0
0
1
I am trying to create a Telnet Server using Python on Ubuntu 12.04. In order to be able to execute commands as a different user, I need to use the su command, which then prompts for the password. Now, I know that the prompt is sent to the STDERR stream, but I have no idea which stream I am supposed to send the password to. If I try to send it via STDIN, I get the error: su: must be run from a terminal. How do I proceed?
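A bare-bones sketch of the pty route the answer outlines, using pty.fork for brevity; the username, password and command are placeholders, and real code would need timeouts and error handling:

    import os
    import pty

    pid, fd = pty.fork()
    if pid == 0:                                   # child: becomes su
        os.execvp('su', ['su', '-', 'someuser', '-c', 'id'])
    else:                                          # parent: drives the tty
        os.read(fd, 1024)                          # swallow the "Password:" prompt
        os.write(fd, 'secret\n')                   # send the password + newline
        print(os.read(fd, 4096))                   # read the command output
        os.waitpid(pid, 0)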
Boto reverse the stream
12,716,129
3
4
636
1
python,stream,twisted,boto
boto is a Python library with a blocking API. This means you'll have to use threads to use it while maintaining the concurrent operation that Twisted provides you with (just as you would have to use threads to have any concurrency when using boto without Twisted; i.e., Twisted does not help make boto non-blocking or concurrent). Instead, you could use txAWS, a Twisted-oriented library for interacting with AWS. txaws.s3.client provides methods for interacting with S3. If you're familiar with boto or AWS, some of these should already look familiar, for example create_bucket or put_object. txAWS would be better if it provided a streaming API so you could upload to S3 as the file is being uploaded to you. I think that this is currently in development (based on the new HTTP client in Twisted, twisted.web.client.Agent) but perhaps not yet available in a release.
0
1
1
0
2012-10-03T18:54:00.000
2
1.2
true
12,714,965
0
0
1
1
I have a server which files get uploaded to, I want to be able to forward these on to s3 using boto, I have to do some processing on the data basically as it gets uploaded to s3. The problem I have is the way they get uploaded I need to provide a writable stream that incoming data gets written to and to upload to boto I need a readable stream. So it's like I have two ends that don't connect. Is there a way to upload to s3 with a writable stream? If so it would be easy and I could pass upload stream to s3 and it the execution would chain along. If there isn't I have two loose ends which I need something in between with a sort of buffer, that can read from the upload to keep that moving, and expose a read method that I can give to boto so that can read. But doing this I'm sure I'd need to thread the s3 upload part which I'd rather avoid as I'm using twisted. I have a feeling I'm way over complicating things but I can't come up with a simple solution. This has to be a common-ish problem, I'm just not sure how to put it into words very well to search it
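If blocking boto is kept, the usual Twisted escape hatch is a worker thread; a sketch with a placeholder bucket name and file paths:

    import boto
    from twisted.internet.threads import deferToThread
    from twisted.python import log

    def blocking_upload(path, key_name):
        # blocking boto calls; this function runs in a thread pool thread
        bucket = boto.connect_s3().get_bucket('my-bucket')   # placeholder
        key = bucket.new_key(key_name)
        key.set_contents_from_filename(path)

    d = deferToThread(blocking_upload, '/tmp/upload.bin', 'incoming/upload.bin')
    d.addErrback(log.err)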
Python jug (or other) for embarrassingly parallel jobs in cluster environment with heterogenous tasks
12,785,406
3
3
522
0
python,parallel-processing,cluster-computing
Author of jug here. Jug does handle the dependencies very well. If you change any of the inputs or intermediate steps, running jug status will tell you the state of the computation. There is currently no way to specify that some tasks (what jug calls jobs) should have multiple processes allocated to them. In the past, whenever I had tasks which were to be run in multiple threads, I was forced to take a worst-case-scenario approach and allocate all processes to the jug execute process. This means, of course, that single-threaded tasks will take up all the processes. Since the bulk of the computation was in the multi-threaded tasks, it was acceptable.
0
1
0
0
2012-10-05T16:58:00.000
1
1.2
true
12,750,787
0
0
0
1
I have the usual large set of dependent jobs and want to run them effectively in a PBS cluster environment. I have been using Ruffus and am pretty happy with it, but I also want to experiment a bit with other approaches. One that looks interesting in python is jug. However, it appears that jug assumes that the jobs are homogeneous in their requirements. I have some jobs that require 8GB RAM while others require only 100MB; some can consume all processors and some are single-threaded. I'm aiming for being able to quickly assemble a pipeline, run it and have it "update" based on dependencies, and log reasonably so that I can see what jobs still need to be run. Is anyone using jug or other similar system with these types of requirements?
how to load from Python a dll file written in tcl
12,754,599
2
1
221
0
python,dll,tcl
The easiest way would be to use the Tkinter package and its built-in Tcl interpreter inside your Python process. If it is a Tcl extension DLL, it makes no real sense to call it from Python without a fair amount of setup first.
0
1
0
0
2012-10-05T20:54:00.000
2
1.2
true
12,753,896
1
0
0
2
I have a script written in Tcl that loads a DLL file. I want to load this file in Python. I am not sure if the DLL file is written in Tcl, but because the Tcl file imports it I think it is. I have tried to use WinDLL("path_to_dll") but I get an error saying there is no module with the provided name.
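A sketch of the Tkinter route from the first answer; the DLL path and command names are placeholders, and this only works if the DLL really is a Tcl extension:

    from Tkinter import Tcl        # 'from tkinter import Tcl' on Python 3

    tcl = Tcl()                    # a Tcl interpreter without a Tk window
    tcl.eval('load {C:/path/to/extension.dll}')   # Tcl's own loader, not ctypes
    tcl.eval('source {C:/path/to/script.tcl}')
    print(tcl.eval('some_command_from_the_dll'))  # hypothetical command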
how to load from Python a dll file written in tcl
12,757,979
1
1
221
0
python,dll,tcl
Python and Tcl have substantially different C APIs; there's no reason at all to expect that a binary library written for one would work with the other. It might be possible to write a library that would use the API of whichever library it is loaded into — from the Tcl perspective, this would involve linking against the Tcl stub library so that there's no nasty hard binding to the API — but I don't know how to do that on the Python side of things and it's definitely highly tricky stuff to attempt to do. More practical would be to have one library that contains a language-independent implementation and then two more that bind that API to a particular language (the binding layer could even be automatically generated with a tool like SWIG, though that doesn't address language impedance issues). Of course, if you're just wanting to write the library from one language and consume it from another, you can do that. A library is just bytes on disk, after all. It does usually tend to be easier to let specialist tools (compilers, linkers) look after writing libraries though; the data format of a library isn't exactly the simplest thing ever!
0
1
0
0
2012-10-05T20:54:00.000
2
0.099668
false
12,753,896
1
0
0
2
I have a script written in Tcl that loads a DLL file. I want to load this file in Python. I am not sure if the DLL file is written in Tcl, but because the Tcl file imports it I think it is. I have tried to use WinDLL("path_to_dll") but I get an error saying there is no module with the provided name.
No module named opengl.opengl
12,791,804
0
0
2,646
0
python,macos,import,pyopengl
Thanks guys! I figured it out. It was in fact a separate module which I needed to copy over to the "site-packages" location, and then it worked fine. So in summary: no issues with the path, just that the appropriate module was not there.
0
1
0
0
2012-10-06T01:00:00.000
2
0
false
12,755,804
1
0
0
1
I am trying to run a Python script on my Mac. I am getting the error: ImportError: No module named opengl.opengl. I googled a bit and found that I was missing pyopengl. I installed pip. I go to the directory pip-1.0 and then say sudo pip install pyopengl, and it installs correctly, I believe, because I got this at the end: Successfully installed pyopengl Cleaning up... I rerun the script but I am still getting the same error. Can someone tell me what I might be missing? Thanks!
How to make an executable for a python project
12,763,086
1
1
172
0
linux,jar,python-2.7
If you only want it to run on a Linux machine, using Python eggs is the simplest way. Since an egg is basically a zip file with metadata files included, python snake.egg will try to execute a __main__.py inside the archive (this works on Python 2.6 and later). Python eggs are meant to be packages.
0
1
0
1
2012-10-06T19:16:00.000
2
0.099668
false
12,763,015
0
0
0
1
Sorry if my title is not correct. Below is an explanation of what I'm looking for. I've coded a small GUI game (let's say a snake game) in Python, and I want it to be run on a Linux machine. I can run this program just by running the command "python snake.py" in the terminal. However, I want to combine all my .py files into one file, and when I click on this file, it just runs my game. I don't want to go to the shell and type "python snake.py". I mean something like a manifest .jar in Java. Could anyone help me please? If my explanation is not good enough, please let me know and I'll give some more explanation.
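A sketch of the egg/zip idea: since Python 2.6, python archive executes a top-level __main__.py inside any zip file, so one runnable file can be built like this (the filenames are illustrative):

    import zipfile

    z = zipfile.ZipFile('snake.pyz', 'w')
    z.write('board.py')                   # any supporting modules
    z.write('snake.py', '__main__.py')    # entry point python looks for
    z.close()
    # run it with: python snake.pyz

On Linux, prepending a #!/usr/bin/env python line to the archive file and marking it executable makes it directly runnable, since the zip format tolerates prefix data.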
Python Shell - Syntax Error
12,766,135
2
0
923
0
python,python-idle,python-2.3
I'm guessing that you're running python hello.py within the Python REPL. This won't work; python hello.py is a command that starts Python, so you'll need to run it in your system shell.
0
1
0
0
2012-10-07T04:28:00.000
2
0.197375
false
12,766,124
1
0
0
2
I am very, very new to Python. I am trying to get a Python program to run by writing 'python hello.py', but every time I do that I get a Syntax Error. Why is that? However, when I open the file and click on Run Module, it works. Why won't it work if I type 'python hello.py'? Is there any way I can navigate to the directory where the file is stored and then run it? I tried placing my file directly in the Python23 folder :P didn't work anyway. Please help me. I have Python 2.3.5
Python Shell - Syntax Error
12,766,384
0
0
923
0
python,python-idle,python-2.3
What is the error? Put the path to python.exe in your system's PATH; then you can run a Python file from anywhere.
0
1
0
0
2012-10-07T04:28:00.000
2
1.2
true
12,766,124
1
0
0
2
I am very, very new to Python. I am trying to get a Python program to run by writing 'python hello.py', but every time I do that I get a Syntax Error. Why is that? However, when I open the file and click on Run Module, it works. Why won't it work if I type 'python hello.py'? Is there any way I can navigate to the directory where the file is stored and then run it? I tried placing my file directly in the Python23 folder :P didn't work anyway. Please help me. I have Python 2.3.5
Bitnami - /opt/bitnami/python/bin/.python2.7.bin: error while loading shared libraries: libreadline.so.5
12,898,508
7
4
1,566
0
python,django,centos,bitnami
Can you execute the following and see if it solves your issue? . /opt/bitnami/scripts/setenv.sh (notice the space between the dot and the path to the script) Also what are you executing that gives you that error?
0
1
0
1
2012-10-10T08:20:00.000
1
1
false
12,814,973
0
0
1
1
I am getting the issue below when firing up Django or an IPython notebook: /opt/bitnami/python/bin/.python2.7.bin: error while loading shared libraries: libreadline.so.5 However, libreadline.so.5 exists on my system, as shown below after locating it: root@linux:/opt/bitnami/scripts# locate libreadline.so.5 /opt/bitnami/common/lib/libreadline.so.5 /opt/bitnami/common/lib/libreadline.so.5.2 I have also exported the path in the environment variable (where the libreadline.so.5 is located), but it still doesn't seem to resolve my issue (see below): export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$HOME/opt/bitnami/common/lib Also, there is a script provided by Bitnami which is located at /opt/bitnami/scripts/setenv.sh. But even after executing it I am still stuck. Can anyone help me with this?
I have installed PIL with Homebrew, but when I try to import it, it says no module exists
12,818,253
1
1
344
0
python,macos,python-imaging-library,homebrew
I think you would see this issue if you were using the python binary that was not installed by homebrew along with a package that you did install via homebrew. Could you verify that the python binary you are using is not the one that was included in OS X by default?
0
1
0
0
2012-10-10T09:50:00.000
1
1.2
true
12,816,482
1
0
0
1
I have installed PIL with Homebrew. It then said that there was a symlink error, and that it can be easily fixed by doing it again with sudo. I did so. Now, when I go into Python and import Image, it doesn't work! I've tried import PIL, import image, import pil, and none of them work. When I try to install it again, it says Error: pil-1.1.7 already installed. Please help!
Distcp with Hadoop streaming job
12,843,376
1
1
906
0
python,hadoop,hadoop-streaming
You can use S3InputFormat (https://github.com/ATLANTBH/emr-s3-io) to read data directly into your mappers. But beware: in case of job failure you will redownload all the data. So I suggest downloading all the data before processing it. If you don't need to process the whole data set at once, you can start your mapreduce job after distcp starts. But you should write your own extension of FileInputFormat which records somewhere (in the input directory, I suppose) which files were processed, and on each invocation filters out the processed files (in the getSplits() method) and processes only the unprocessed files.
0
1
0
0
2012-10-11T08:30:00.000
1
0.197375
false
12,835,329
0
0
0
1
I'll broadly divide the work to be done into two parts: I have huge data (amounting to approx. 1 TB, divided into hundreds of files), which I'm copying from S3 to HDFS via distcp. This data will be acted upon by a hadoop streaming job (a simple mapper and reducer, written in Python). Now, I'll have to wait till all the data is copied to HDFS, and only after that can I start my actual job. Here's the question: considering that distcp is itself a map-reduce job, is there a way I can "streamline" these two jobs, namely, can the second job start working on the data that has already been copied (e.g. distcp has already copied a few files, on which the second job can technically already start)? I hope I've made myself clear.
Exclude files from causing GAE server restart
12,847,050
0
0
94
0
python,google-app-engine,go
If you change the files that comprise your application, the application will need to restart in order to serve the new files. If this is a real sticking point for you, I would suggest hosting the files elsewhere, like a CDN. Your application and the static resources that it employs do not need to be all in the same place.
0
1
0
0
2012-10-11T17:40:00.000
1
1.2
true
12,845,400
0
0
1
1
Is there a way to avoid a GAE server restart when files within the root of my application change? I use the Go runtime (the GAE dev server is Python based). The intention is not to reload the server when some of my files (html, css, js files, which are under the /static folder) change. This is to avoid startup time during development. Is there any way to exclude them from the file watch? Thanks.
How to configure python32 interpreter under cygwin for eclipse in window ?
12,863,615
0
0
544
0
python-3.x,cygwin
Probably in /usr/bin, or in Windows terms as {Cygwin install directory}\bin. For example: D:\Cygwin\bin\ Ensure that you run python3.2.exe within the Cygwin emulation layer.
0
1
0
0
2012-10-12T00:14:00.000
1
0
false
12,850,504
1
0
0
1
I've seen that the new Cygwin for Windows contains Python 3.2 files. Where is the python3.2.exe file under Cygwin on Windows? I need it to configure the interpreter for Eclipse on Windows, but I don't see it.
Checking a directory continuously for new files
12,857,136
1
0
384
0
python,multithreading,directory
"Busy" polling is a waste of resources. There are a number of different, operating-system dependant APIs that allow this kind of monitoring. Some of your choices (for Linux that is) include: python-inotifyx - simple Python binding to the Linux inotify python-pyinotify - simple Linux inotify Python bindings python-gamin - Python binding for the gamin client library
0
1
0
0
2012-10-12T10:26:00.000
1
1.2
true
12,857,063
1
0
0
1
I need to write a program which continuously checks a directory for new files and does some processing whenever a new file is added. Obviously I will probably need to use some threads, but is there any convenient way, or are there pointers anyone can suggest, for achieving this in Python (for both Linux and Windows)?
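A minimal pyinotify sketch of the event-driven approach the answer recommends (Linux-only; the watched path is a placeholder, and Windows would need a different API such as win32file.ReadDirectoryChangesW):

    import pyinotify

    class Handler(pyinotify.ProcessEvent):
        def process_IN_CLOSE_WRITE(self, event):
            # fires once a newly written file is closed
            print('process %s' % event.pathname)

    wm = pyinotify.WatchManager()
    wm.add_watch('/path/to/dir', pyinotify.IN_CLOSE_WRITE)
    pyinotify.Notifier(wm, Handler()).loop()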
How to run a Python project?
12,873,584
3
4
41,336
0
python,python-idle
Indeed, the command to run a Python file should be run in the command prompt. Python should be in your PATH variable for it to work flexibly. When the Python folder is added to PATH you can call python everywhere in the command prompt, otherwise only in your Python install folder. The following is from the Python website (note that its example sets PYTHONPATH, the module search path; to make python.exe itself runnable you extend PATH in the same way): Windows has a built-in dialog for changing environment variables (the following guide applies to the XP classic view): Right-click the icon for your machine (usually located on your Desktop and called "My Computer") and choose Properties there. Then, open the Advanced tab and click the Environment Variables button. In short, your path is: My Computer ‣ Properties ‣ Advanced ‣ Environment Variables. In this dialog, you can add or modify User and System variables. To change System variables, you need non-restricted access to your machine (i.e. Administrator rights). Another way of adding variables to your environment is using the set command in a command prompt: set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib If you do it via My Computer, then look for the line named Path in Environment Variables. Give that the value of your Python installation folder.
0
1
0
0
2012-10-13T13:26:00.000
3
0.197375
false
12,873,542
1
0
0
2
I have installed the Enthought Python distribution on my computer, but I don't have any idea how to use it. I have PyLab and IDLE but I want to run .py files by typing the following command: python fileName.py I don't know where to write this command: IDLE, PyLab or Python.exe or Windows command prompt. When I do this in IDLE it says: SyntaxError: invalid syntax Please help me to figure this out.
How to run a Python project?
12,873,556
3
4
41,336
0
python,python-idle
Open a command prompt: Press ⊞ Win and R at the same time, then type in cmd and press ↵ Enter Navigate to the folder where you have the ".py" file (use cd .. to go one folder back or cd folderName to enter folderName) Then type in python filename.py
0
1
0
0
2012-10-13T13:26:00.000
3
1.2
true
12,873,542
1
0
0
2
I have installed the Enthought Python distribution on my computer, but I don't have any idea how to use it. I have PyLab and IDLE but I want to run .py files by typing the following command: python fileName.py I don't know where to write this command: IDLE, PyLab or Python.exe or Windows command prompt. When I do this in IDLE it says: SyntaxError: invalid syntax Please help me to figure this out.
can I use FUSE with Cython bindings
12,875,850
0
0
173
0
python,c,cython,fuse
You should use it as a Python FUSE system; then you can write your Cython stuff in its own module and just import that module in your Python code. Usually file-system-related operations are IO bound and not CPU bound, so I'm not sure how much of a speedup you would get with Cython, but I suggest trying it out and comparing the results.
1
1
0
0
2012-10-13T18:18:00.000
1
0
false
12,875,660
1
0
0
1
I know FUSE has bindings for C, C++, Python etc., which effectively means that I can develop FUSE filesystems using those languages. I wish to use Cython, as it offers much faster speeds compared to pure Python, and that matters in a filesystem. Is it possible to produce a FUSE filesystem by coding in Cython? As far as I understand, the Python documentation is all that I require to write Cython code for FUSE. But (if it is indeed possible) should I be using Cython as a Python FUSE system or a C system?
How can a python web app open a program on the client machine?
12,877,893
1
1
351
0
python,linux,web
Note this is not a standard thing to do. Imagine if websites out there had the ability to open Notepad or Minesweeper at will when you visit or click something. The way it is done is: you need a service running on the client machine which exposes certain APIs and trusts requests coming from your web app. This needs to be running on the client machines all the time, and from your web app you can send a request to this service to launch the application that you desire.
0
1
0
0
2012-10-13T23:18:00.000
3
0.066568
false
12,877,870
0
0
1
1
I'm going to be using python to build a web-based asset management system to manage the production of short cg film. The app will be intranet-based running on a centos machine on the local network. I'm hoping you'll be able to browse through all the assets and shots and then open any of them in the appropriate program on the client machine (also running centos). I'm guessing that there will have to be some sort of set up on the client-side to allow the app to run commands, which is fine because I have access to all of the clients that will be using it (although I don't have root access). Is this sort of thing possible?
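A toy version of the client-side service described in the answer above, sketched with the Python 2 standard library; the whitelist contents are placeholders, and a real deployment would need authentication beyond binding to localhost:

    import subprocess
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

    ALLOWED = {'/open/editor': ['gedit']}       # path -> command whitelist

    class Launcher(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path in ALLOWED:
                subprocess.Popen(ALLOWED[self.path])   # launch locally
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(('127.0.0.1', 8765), Launcher).serve_forever()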