Dataset columns (type and observed range/length):
Title: string, length 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, length 6 to 105
Answer: string, length 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, length 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, length 41 to 29k
Is there a way to capture the MSDOS command prompt string?
10,109,380
1
0
676
0
python,windows
Not sure if it helps but if you enter "SET" from the command prompt you'll see a list of environment variables, including the current PROMPT (however it won't appear in the list if it's the default prompt).
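A minimal Python sketch of the same check, assuming the script runs inside the same cmd.exe session (the variable is simply absent when the default prompt is in use):

    import os

    # PROMPT only appears in the environment when the user (or a script) has
    # customised it; the cmd.exe default is equivalent to $P$G.
    prompt = os.environ.get('PROMPT')
    if prompt is None:
        print('No PROMPT variable set; cmd.exe is using its default ($P$G).')
    else:
        print('Current prompt template: ' + prompt)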
0
1
0
0
2012-04-11T15:40:00.000
3
0.066568
false
10,109,305
0
0
0
1
In Windows XP, when you open cmd.exe you get a console window with a command prompt looking like "C:\User and Settings\Staffer\My Documents>_", where the underscore after the '>' is the cursor. This is the default for Windows XP. A user might change it using PROMPT=something or set PROMPT=something. In the console window, at the command prompt, entering the internal command "prompt" with no arguments does not return what the current prompt string is. Is there a command, or preferably a Python library, that can retrieve what the command prompt is? I didn't want to write a Python module if there was a built-in way of retrieving that string. The use case: when I use the Python subprocess module to run a Python program and then return to the same console's command prompt while the subprocess is running, I get the cursor on a blank line. I can press Enter and the command prompt will redisplay, but it looks as if the console hasn't returned from the subprocess yet, which misleads my users. One solution for the GUI part of my app is to run pythonw runapp.py. However, I'm left wondering if there's a way to get a clean command prompt when calling subprocess, whether through existing DOS commands, a Python library, or proper use of subprocess.Popen() and communicate()?
What is the performance overhead of Popen in python
10,116,621
2
3
601
0
python,native,popen
Forking a separate process to do something is almost always much more expensive than calling a function that does the same thing. But if that Python function is very inefficient, and the OS forks new processes quickly (i.e., is a UNIX variant,) you could imagine a rare case where this is not true -- but it will definitely be rare.
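A rough way to measure the difference for your own case, sketched under the assumption of a Unix-like system where /bin/true exists:

    import subprocess
    import timeit

    def external():
        # fork/exec a separate no-op process
        subprocess.call(['/bin/true'])

    def in_process():
        # the equivalent amount of "work" done as a plain function call
        pass

    print('subprocess: %.4f s / 100 calls' % timeit.timeit(external, number=100))
    print('function  : %.4f s / 100 calls' % timeit.timeit(in_process, number=100))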
0
1
0
1
2012-04-12T02:25:00.000
1
1.2
true
10,116,602
0
0
0
1
compared to invoking a python library function that does the same thing. I've some legacy code that uses Popen to invoke a executable with some parameters. Now there a python library that supports that same function. I was wondering what the performance implications are.
How to use suds in a memory efficient way?
23,891,675
0
2
599
0
python,memory-management,suds,gevent
try client.clone() or client(..., cache=DocumentCache())
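A sketch of that suggestion (the WSDL URL is a placeholder); the idea is to pay the parsing cost once and hand each greenlet a lightweight clone:

    from suds.client import Client
    from suds.cache import DocumentCache

    # Parse the 30 MB WSDL once.
    base = Client('http://example.com/service?wsdl', cache=DocumentCache())

    # Each greenlet gets its own clone, which reuses the already-parsed
    # service definition instead of re-reading the WSDL.
    clients = [base.clone() for _ in range(100)]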
0
1
0
0
2012-04-12T11:28:00.000
1
0
false
10,122,625
0
0
0
1
I have a large wsdl file, that takes 30MB to initialize with suds. I use gevent to spawn 100 greenlets that I use as workers for external service. How can I use single instance on suds Client but still get 100 parallel connections? It is a huge waste of memory to initialize all those suds Clients. What I really need is 100 transports and one single suds Client instance to translate xml messages in and out. Any help?
Debugger in python for Google App Engine and Django
10,139,804
1
2
337
0
django,google-app-engine,python-2.7
After a week of racking my brain, I finally figured out the problem. The gaesessions code was the culprit. We put DEFAULT_LIFETIME = datetime.timedelta(hours=1) and originally it was DEFAULT_LIFETIME = datetime.timedelta(days=7). Not sure why running it through any debugger such as wing or pycharm would prevent the browser from getting a session. The interesting thing is the code change with hours=1 works fine on linux with wing debugger. Very Strange!
0
1
0
0
2012-04-12T12:53:00.000
2
0.099668
false
10,123,958
0
0
1
1
I am having a problem that has baffled me for over a week. I have a project that is written in python with Django on Google App Engine. The project has a login page and when I run the application in Google App Engine or from the command line using dev_server.py c:\project, it works fine. When I try to run the application through a debugger like Wing or Pycharm, I cannot get past the login page. After trying to login, it takes me back to the login screen again. When I look at the logs, it shows a 302 (redirect) in the debugger but normally it shows a 200 (OK). Could someone explain why this would be happening? Thanks -Dimitry
How to set the default libraries when doing unit tests under Python 2.7
21,678,252
0
1
111
0
unit-testing,google-app-engine,python-2.7
The app.yaml configuration is not applied when doing unit tests with a webtest app and NoseGAE. use_library does not work either. The right solution in this case is to provide the proper Python path to the preferred lib version, e.g. PYTHONPATH=../google_appengine/lib/django-1.5 when running nosetests.
0
1
0
1
2012-04-12T14:38:00.000
2
0
false
10,125,860
0
0
1
1
I'm in the process of migrating my Google AppEngine solution from Python 2.5 to 2.7. The application migration was relatively easy, but I'm struggling with the unittests. In the 2.5 version I was using the use_library function to set the django version to 1.2, but this isn't supported anymore on 2.7. Now I set the default version in the app.yaml. When I'm now running my unittests the default django version becomes 0.96 and I can't manage to set the 1.2 as the default version. Who knows how I can set the default libraries for the unittest, so the match the settings in the app.yaml?
List of virtualenvs
10,132,440
3
9
11,296
0
macos,osx-lion,python-2.7,virtualenv
Creating a virtualenv actually creates a new folder with that name. You have to find that folder.
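One way to hunt for them from Python, on the assumption that every virtualenv contains a bin/activate script:

    import os

    # Walk the home directory and report every folder that looks like a
    # virtualenv (i.e. has a bin/ subdirectory containing an activate script).
    home = os.path.expanduser('~')
    for dirpath, dirnames, filenames in os.walk(home):
        if os.path.basename(dirpath) == 'bin' and 'activate' in filenames:
            print(os.path.dirname(dirpath))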
0
1
0
0
2012-04-12T21:45:00.000
2
0.291313
false
10,132,344
1
0
0
1
Silly question...I created a virtualenv months ago and can't remember what it's called. Where can I find it? OSX 10.7 Python 2.7.1 Virtualenv 1.6.4 Thanks!
Callback when VLC done on command line
12,893,390
0
0
414
0
python,command-line,vlc
To close VLC after any actions append vlc://quit to your command line
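A sketch of how that fits with launching VLC from Python: with vlc://quit appended, waiting on the process tells you when playback has finished (the URL is a placeholder and vlc is assumed to be on PATH):

    import subprocess

    # VLC plays the item, then hits the special vlc://quit entry and exits,
    # so wait() returns once everything before it has been played.
    proc = subprocess.Popen(['vlc', 'http://example.com/song.mp3', 'vlc://quit'])
    proc.wait()
    print('VLC has finished')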
0
1
0
0
2012-04-12T22:12:00.000
2
0
false
10,132,632
0
0
0
1
I need to make VLC download then play songs. I'm planning on using the os.popen to issue commands to the VLC command line (I'm having some problems getting the python binding working...). My question is, is there any callback that I can get when VLC is done downloading so that I can know to start streaming?
ZeroMQ selective pub/sub pattern?
14,329,435
1
4
956
0
python,zeromq
What I see as the only possibility is to use the DEALER-ROUTER combination: DEALER at the frontend, ROUTER at the backend. Every frontend server should contain a DEALER socket for every backend server (for broadcast) and one additional DEALER socket connected to all the backend servers at once for the round-robin case. Now let me explain why. You can't really use PUB-SUB in such a critical case, because that pattern can very easily drop messages silently; it does not queue. So in fact a message posted to PUB can arrive at any subset of SUBs, since they are (dis)connecting in the background. For this reason you need to simulate broadcast by looping over the DEALER sockets assigned to all the backend servers. This will queue messages if a backend is not connected, but beware of the HWM. The only complete solution is to use a heartbeat to know when a backend is dead and destroy the socket assigned to it. A ROUTER socket on the backend is the logical choice since you can asynchronously accept any number of requests, and because it's a ROUTER socket it is super easy to send the response back to the frontend that requested the task. By having a single ROUTER on each backend server you can arrange it so that the backends are not even aware a broadcast is happening; they see everything as a direct request to them. Broadcasting is purely a frontend thing. The only issue with this solution might be that if your backend servers are not fast enough, all the frontend servers may fill them up until they reach the HWM and start dropping messages. You can prevent this by having more threads/processes handling the messages from the ROUTER socket. zmq_proxy() is a useful function for this stuff. Hope this helps ;-)
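A minimal pyzmq sketch of the frontend side of that layout (endpoints are placeholders): one DEALER per backend for broadcast, plus a single DEALER connected to every backend for the round-robin case:

    import zmq

    backends = ['tcp://backend1:5555', 'tcp://backend2:5555']
    ctx = zmq.Context()

    # One DEALER per backend: looping over these simulates a reliable broadcast.
    per_backend = []
    for addr in backends:
        s = ctx.socket(zmq.DEALER)
        s.connect(addr)
        per_backend.append(s)

    # One DEALER connected to all backends: ZeroMQ round-robins each send.
    balanced = ctx.socket(zmq.DEALER)
    for addr in backends:
        balanced.connect(addr)

    balanced.send(b'task for exactly one backend')
    for s in per_backend:
        s.send(b'task for every backend')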
0
1
0
0
2012-04-13T15:51:00.000
2
0.099668
false
10,144,158
0
0
1
1
I'm trying to design ZeroMQ architecture for N front-end servers and M back-end workers, where front-end servers would send task to back-end ones. Front-end servers do have information about back-end ones, but back-end ones do not know about front-end. I have two types of tasks, one type should use round robin and go to just one back-end server, while other type should be broadcasted to all back-end servers. I don't want to have a central broker, as it would be single point of failure. For the first type of tasks request/response pattern seems to be the right one, while for the second it would be publisher/subscriber pattern. But how about pattern combining the two? Is there any patter that would allow me to select at send time if I want to sent message to all or just one random back-end servers? The solution I've come up with is just use publisher/subscriber and prepend messages with back-end server ID and some magic value if it's addressed to all. However, this would create lot unnecessary traffic. Is there cleaner and more efficient way to do it?
Getting a piece of information from development GAE server to local filesystem
10,152,181
3
0
66
0
python,google-app-engine
How about writing the XML data to the blobstore and then writing a handler that uses send_blob to download it to your local file system? You can use the Files API to write to the blobstore from your application.
0
1
1
0
2012-04-14T08:01:00.000
1
1.2
true
10,152,055
0
0
1
1
I have an application I am developing on top of GAE, using Python APIs. I am using the local development server right now. The application involves parsing large block of XML data received from outside service. So the question is - is there an easy way to get this XML data exported out of the GAE application - e.g., in regular app I would just write it to a temp file, but in GAE app I can not do that. So what could I do instead? I can not easily run all the code that produces the service call outside of GAE since it uses some GAE functions to create the call, but it would be much easier if I could take the XML result out and develop/test the parser part outside and then put it back to GAE app. I tried to log it using logging and then extract it from the console, but when XML is getting big it doesn't work well. I know there's bulk data import/export APIs but seems to be an overkill for extracting just this one piece of information to write it to data store and then export the whole store. So how to do it in the best way?
Nginx: Speeding up Image Upload?
10,165,928
1
1
673
0
python,image,nginx
Yes, set the proxy_max_temp_file_size to zero, or some other reasonably small value. Another option (which might be a better choice) is to set the proxy_temp_path to faster storage so that nginx can do a slightly better job of insulating the application from buggy or malicious hosts.
0
1
0
1
2012-04-14T23:05:00.000
1
0.197375
false
10,158,096
0
0
1
1
My python application sits behind an Nginx instance. When I upload an image, which is one of the purpose of my app, I notice that nginx first saves the image in filesystem (used 'watch ls -l /tmp') and then hands it over to the app. Can I configure Nginx to work in-memory with image POST? My intent is to avoid touching the slow filesystem (the server runs on an embedded device).
Python thread waiting for copying the file
10,159,496
0
0
115
0
python,c
There are a lot of important details to your scenario that aren't mentioned, but working on the assumption that you can't build a locking mechanism into the C program and then use it from the Python program (for example, because you're using an existing application on your system), you could look into os.stat and check the last modified time, st_mtime. That is of course reliant on you knowing that a recent st_mtime means the file won't be opened and written again by the C program. If the file handle is kept open in the C program at all times and written to occasionally, then there are not a lot of easy options for knowing when it is and isn't being written to.
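A sketch of the mtime approach, assuming a quiet period is enough to conclude the C program is done with the file for now (paths and the threshold are placeholders):

    import os
    import shutil
    import time

    SRC = '/var/log/data.txt'      # file the C thread appends to
    DST = '/tmp/data.copy'
    QUIET_SECONDS = 5              # treat the file as idle after this long

    def copy_when_idle():
        while True:
            idle = time.time() - os.stat(SRC).st_mtime
            if idle >= QUIET_SECONDS:
                shutil.copy(SRC, DST)
                return
            time.sleep(1)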
0
1
0
0
2012-04-15T04:11:00.000
1
0
false
10,159,430
1
0
0
1
I have a c program which is running in thread and appending some data in a file. I want to run a python thread which will copy the same file(which c thread is writing) after some time interval. Is there any safe way to do this? I am doing this in linux OS.
Move file to another directory once it is done transferring
10,176,476
1
6
1,292
0
python,bash,ubuntu,file-monitoring
One technique I use works with FTP. You issue a command to the FTP server to transfer the file to an auxiliary directory. Once the command completes, you send a second command to the server, this time telling it to rename the file from the aux directory to the final destination directory. If you're using inotify or polling the directory, the filename won't appear until the rename has completed; thus, you're guaranteed that the file is complete. I'm not familiar with rsync, so I don't know if it has a similar rename capability.
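The same trick expressed in Python for the rsync-style workflow the question describes: stage the slow copy elsewhere, then move it into the watched directory in one step, which is an atomic rename when both directories sit on the same filesystem (paths are placeholders):

    import os
    import shutil

    STAGING = '/srv/staging'       # same filesystem as ENCODE_DIR
    ENCODE_DIR = '/srv/encode'     # the directory inotify is watching

    def deliver(src):
        name = os.path.basename(src)
        staged = os.path.join(STAGING, name)
        # Slow, interruptible copy into the staging area...
        shutil.copy2(src, staged)
        # ...then an instant rename, so the watcher only ever sees complete files.
        os.rename(staged, os.path.join(ENCODE_DIR, name))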
0
1
0
1
2012-04-15T16:35:00.000
3
1.2
true
10,163,877
0
0
0
1
I have a video encoding script that I would like to run as soon as a file is moved into a specific directory. If I use something like inotify, how do I ensure that the file isn't encoded until it is done moving? I've considered doing something like: Copy (rsync) file into a temporary directory. Once finished, move (simple 'mv') into the encode directory. Have my script monitor the encode directory. However, how do I get step #2 to work properly and only run once #1 is complete? I am using Ubuntu Server 11.10 and I'd like to use bash, but I could be persuaded to use Python if that'd simplify issues. I am not "downloading" files into this directory, per se; rather I will be using rsync the vast majority of the time. Additionally, this Ubuntu Server is running on a VM. I have my main file storage mounted via NFS from a FreeBSD server.
Full proto too large to save, cleared variables
10,192,419
9
9
1,244
0
python,django,google-app-engine
Are you using appstats? It looks like this can happen when appstats is recording state about your app, especially if you're storing lots of data on the stack. It isn't harmful, but you won't be able to see everything when inspecting calls in appstats.
0
1
0
0
2012-04-16T06:26:00.000
1
1
false
10,169,574
0
0
1
1
I got this error while rendering google app engine code. Do any body have knowledge about this error?
Synchronize Memcache and Datastore on Google App Engine
10,188,925
0
3
1,816
0
python,google-app-engine,memcached,google-cloud-datastore
I think you could create tasks which will persist the data. This has the advantage that, unlike memcache, the tasks are persisted, so no chats would be lost. When a new chat comes in, create a task to save the chat data, and do the persist in the task handler. You could either configure the task queue to run at 1 task per second (or slightly slower) and save each bit of chat data held in a task, or persist the incoming chats in a temporary table (in different entity groups) and periodically have tasks pull all unsaved chats from the temporary table, persist them to the chat entity, then remove them from the temporary table.
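A sketch of the first variant using the App Engine task queue API (the handler URL is a placeholder); the queue rate configured in queue.yaml is what throttles the datastore writes:

    from google.appengine.api import taskqueue

    def on_new_chat_message(chat_key, text):
        # Each incoming message becomes a task; the push queue delivers it to
        # /tasks/persist_chat at the configured rate, where it gets written
        # to the datastore.
        taskqueue.add(url='/tasks/persist_chat',
                      params={'chat': chat_key, 'text': text})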
0
1
0
0
2012-04-17T03:20:00.000
5
0
false
10,184,591
0
0
1
2
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store. Is there any way to schedule jobs like this on Google App. Engine? Or am I going about this in entirely the wrong way? I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
Synchronize Memcache and Datastore on Google App Engine
10,191,468
0
3
1,816
0
python,google-app-engine,memcached,google-cloud-datastore
I think you would be fine using the chat session as the entity group and saving the chat messages under it. The once-per-second limit is not the reality; you can update/save at a higher rate. I'm doing it all the time and I don't have any problem with it. Memcache is volatile and is the wrong choice for what you want to do. If you start encountering issues with the write rate, you can start setting up tasks to save the data.
0
1
0
0
2012-04-17T03:20:00.000
5
0
false
10,184,591
0
0
1
2
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store. Is there any way to schedule jobs like this on Google App. Engine? Or am I going about this in entirely the wrong way? I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
Transferring files between Windows Servers using shutil copy/move
10,197,024
1
3
1,457
0
python,windows
shutil and os cover a lot of automation tasks but are confined to the local machine; for automating work across machines, have a look at Fabric.
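For what it's worth, the shutil approach from the question also works directly against UNC paths, so copying to another Windows server doesn't strictly need an extra library (server and share names are placeholders):

    import shutil

    # Copy from the local disk straight to a share on the other Windows server.
    shutil.copy(r'C:\exports\report.csv', r'\\OTHERSERVER\share\report.csv')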
0
1
1
0
2012-04-17T18:22:00.000
2
0.099668
false
10,196,803
0
0
0
1
I am using shutil.copy() to transfer files from one server to another server on a network, both Windows. I have used shutil and os modules for lot of automation tasks but confined to local machine. Are there better approaches to transfer file (I mean in terms of performance) from one server to another?
Restricting the number of task requests per App Engine instance
10,200,635
7
1
228
0
python,google-app-engine,memory-management,python-2.7,task-queue
There's not currently any way to advise the App Engine infrastructure about this. You could have your tasks return a non-200 status code if they shouldn't run now, in which case they'll be automatically retried (possibly on another instance), but that could lead to a lot of churn. Backends are probably your best option. If you set up dynamic backends, they'll only be spun up as required for task queue traffic. You can send tasks to a backend by specifying the URL of the backend as the 'target' argument. You can gain even more control over task execution by using pull queues. Then, you can spin up backends as you choose (or use push queue tasks, for that matter), and have the instances pull tasks off the pull queue in whatever manner suits.
0
1
0
0
2012-04-17T22:12:00.000
1
1
false
10,199,963
0
0
1
1
I have a Google App Engine app that periodically processes bursts of memory-intensive long-running tasks. I'm using the taskqueue API on the Python2.7 run-time in thread-safe mode, so each of my instances is handling multiple tasks concurrently. As a result, I frequently get these errors: Exceeded soft private memory limit with 137.496 MB after servicing 8 requests total After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application. As far as I can tell, each instance is taking on 8 tasks each and eventually hitting the soft memory limit. The tasks start off using very low amounts of memory but eventually grow to about 15-20MB. Is there any way to restrict tell App Engine to assign no more than 5 requests to an instance? Or tell App Engine that the task is expected to use 20MB of memory over 10 minutes and to adjust accordingly? I'd prefer not to use the backend APIs since I want the number of instances handling tasks to automatically scale, but if that's the only way, I'd like to know how to structure that.
Google app engine database access on localhost
10,212,501
1
1
577
0
python,google-app-engine
Since you are just getting started I assume you don't care much about what is in your local datastore. Therefore, when starting your app, pass the --clear_datastore to dev_appserver.py. What is happening? As daemonfire300 said, you are having conflicting application IDs here. The app you are trying to run has the ID "sample-app". Your datastore holds data for "template-builder". The easiest way to deal with it is clearing the datastore (as described above). If you indeed want to keep both data, pass --default_partition=dev~sample-app to dev_appserver.py (or the other way around, depending on which app ID you want to use).
0
1
0
0
2012-04-18T07:05:00.000
1
1.2
true
10,204,521
0
0
1
1
I am new to Python and Google App Engine. I have installed an existing Python application on localhost, and it is running fine for the static pages. But, when I try to open a page which is fetching data, it is showing an error message: BadRequestError: app "dev~sample-app" cannot access app "dev~template-builder"'s data template-builder is the name of my online application. I think there is some problem with accessing the Google App Engine data on localhost. What should I do to get this to work?
How to get number of visitors of a page on GAE?
10,221,347
1
2
1,919
0
python,google-app-engine,analytics,visitors
There is no way to tell when someone stops viewing a page unless you use Javascript to inform the server when that happens. Forums etc typically assume that someone has stopped viewing a page after n minutes of inactivity, and base their figures on that. For minimal resource use, I would suggest using memcache exclusively here. If the value gets evicted, the count will be incorrect, but the consequences of that are minimal, and other solutions will use a lot more resources.
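A rough sketch of such a memcache-only counter, assuming you can identify viewers somehow (session id or user id); it is not atomic, which is part of the "consequences are minimal" trade-off mentioned above:

    import time
    from google.appengine.api import memcache

    WINDOW = 300  # five minutes

    def touch_and_count(article_id, viewer_id):
        key = 'viewers:%s' % article_id
        seen = memcache.get(key) or {}
        now = time.time()
        seen[viewer_id] = now
        # Keep only viewers seen inside the window, then store the dict back.
        seen = dict((v, t) for v, t in seen.items() if now - t < WINDOW)
        memcache.set(key, seen, time=WINDOW)
        return len(seen)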
0
1
0
0
2012-04-18T21:02:00.000
3
0.066568
false
10,217,948
0
0
1
1
I need to get the number of unique visitors(say, for the last 5 minutes) that are currently looking at an article, so I can display that number, and sort the articles by most popular. ex. Similar to how most forums display 'There are n people viewing this thread' How can I achieve this on Google App Engine? I am using Python 2.7. Please try to explain in a simple way because I recently started learning programming and I am working on my first project. I don't have lots of experience. Thank you!
djutils autodiscover importing command without top level parent, queue_consumer doesn't like it
10,219,986
0
0
35
0
python,django,queue
Looks like duplicate imports in my project were causing the problems.
0
1
0
0
2012-04-19T00:13:00.000
1
1.2
true
10,219,934
0
0
1
1
When djutils goes through autodiscover in the init, it imports my task "app.commands.task1" and states this as part of the output of the log. But when I run webserver and try to queue the command, the queue_consumer log indicates that it cannot find the key "project.app.commands.queuecmd_task1" QueueException: project.app.commands.queuecmd_task1 not found in CommandRegistry I assume that the fact that the string it is trying to find has "project" prepended is the reason it cannot find the task. Why would that be happening?
EventListener mechanism in twisted
10,223,666
1
1
389
0
python,design-patterns,twisted
Twisted has a class for this, twisted.words.xish.utility.EventDispatcher; pydoc it to learn the usage, it is simple. However, I think what makes Twisted strong is its Deferred. You can look at a Deferred object as a closure over related events (something succeeded, something failed), with callbacks and errbacks as registered observer functions. Deferreds have advanced features, for example they can be nested and chained. So in my opinion, you can use the default EventDispatcher in Twisted, or invent something simple of your own. But if you introduce some complicated mechanism into Twisted, it is doomed to lead to confusion and mess.
0
1
0
0
2012-04-19T06:53:00.000
1
1.2
true
10,223,003
0
0
0
1
I just wanted to hear ideas on correct usage of EventListener/EventSubscription provider in twisted. Most of the examples and the twisted source handle events through the specific methods with a pretty hard coupling. Dispatching target methods of those events are "hardcoded" in a specific Protocol class and then it is a duty of inheriting class to override these to receive the "event". This is very nice and transparent to use while we know of all of potential subscribers when creating the Protocol. However in larger projects there is a need (perhaps I am in a wrong mindset) for a more dynamic event subscription and subscription removal: think of hundereds of object with a lifespan of a minute all interested in the same event. What would be correct way to acheive this according to the "way of twisted". I currently have created an event subscription / dispatching mechanism, however there is a lingering thought that the lack of this pattern in twisted library might suggest that there is a better way.
python import as equivalent in Java?
10,224,419
0
1
982
0
java,python
Import all the files in Eclipse. If you manage to get the code to compile using the refactoring functions of the IDE, it will save you all the trouble. There is no functionality for adding synonyms to imports in Java, but even if there were, how would that have helped you? You would still need to change all your files.
0
1
0
0
2012-04-19T08:32:00.000
2
0
false
10,224,384
1
0
0
1
I have a bunch of idl files that automatically create four packages, with a lot of java files into it. I need to insert those java files in a com.bla. package architecture. Thing is in my generated files I have imports UCO.State for example, that do not fit with my new package architecture. So question is : Is there a java equivalent to 'import com.bla as bla' ? The only other option I see is to import the UCO package and rename all UCO.State and other directly by State. But that would mean refactoring hundreds of files o_O. Any idea ? Thanks !
How can I get a user's IP from a Blobstore upload handler?
10,227,981
2
1
204
0
google-app-engine,python-2.7,blobstore
Try checking for user IP at the point where you generate upload url via create_upload_url(). The upload handler is actually called by Blobstore upload logic after the upload is done, hence the strange IP.
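A sketch of that idea: capture request.remote_addr in the handler that serves the upload form and pass it along to the success handler (the URLs and the query parameter are hypothetical choices, not an established convention):

    import webapp2
    from google.appengine.ext import blobstore

    class UploadFormHandler(webapp2.RequestHandler):
        def get(self):
            # The real client address is visible here, before Blobstore's own
            # infrastructure makes the POST to the upload handler.
            ip = self.request.remote_addr
            upload_url = blobstore.create_upload_url('/upload_done?ip=' + ip)
            self.response.write(
                '<form action="%s" method="POST" enctype="multipart/form-data">'
                '<input type="file" name="file"><input type="submit"></form>'
                % upload_url)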
0
1
1
0
2012-04-19T11:28:00.000
1
1.2
true
10,227,042
0
0
0
1
I'm attempting to get the user's IP from inside of my upload handler, but it seems that the only IP supplied is 0.1.0.30. Is there any way around this or any way to get the user's actual IP from inside of the upload handler?
Python logging - logrotate options
10,238,553
2
2
2,637
0
python,logging,multiprocessing,logrotate
As WatchedFileHandler was provided specifically for use with external rotation tools like logrotate, I would suggest that it be used (Option 1). Why do you think you need Option 2? In a multi-process environment where each process writes to its own logs, there shouldn't be any problems. However, processes shouldn't ever share log files.
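A minimal Option 1 setup (the log path is a placeholder); WatchedFileHandler notices when logrotate's create option has replaced the file and reopens it:

    import logging
    from logging.handlers import WatchedFileHandler

    handler = WatchedFileHandler('/var/log/myservice.log')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

    logger = logging.getLogger('myservice')
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.info('service started')   # keeps working across rotations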
0
1
0
0
2012-04-19T19:19:00.000
2
0.197375
false
10,235,220
0
0
0
2
I am trying to use logrotate to rotate logs for a multi-process python service. Which of the following combination is commonly used (right and safe)? WatchedFileHandler + logrotate with create option OR FileHandler + logrotate with copytruncate option Option-1 seem to be used in openstack nova and glance projects. I have not seen option-2 being used. Will option-2 work as expected?. Are there any drawbacks with these approaches when used for multi-process apps?
Python logging - logrotate options
10,235,643
0
2
2,637
0
python,logging,multiprocessing,logrotate
I would suggest using Python's own log rotation to get the best integration. The only drawback is that you have an additional place to configure the details.
0
1
0
0
2012-04-19T19:19:00.000
2
0
false
10,235,220
0
0
0
2
I am trying to use logrotate to rotate logs for a multi-process python service. Which of the following combination is commonly used (right and safe)? WatchedFileHandler + logrotate with create option OR FileHandler + logrotate with copytruncate option Option-1 seem to be used in openstack nova and glance projects. I have not seen option-2 being used. Will option-2 work as expected?. Are there any drawbacks with these approaches when used for multi-process apps?
How do I run a python interpreter in Emacs?
15,099,767
0
32
37,056
0
python,emacs,interpreter
In emacs 24.2 there is python-switch-to-python
0
1
0
0
2012-04-20T06:27:00.000
8
0
false
10,241,279
1
0
0
2
I just downloaded GNU emacs23.4, and I already have python3.2 installed in Windows7. I have been using Python IDLE to edit python files. The problem is that I can edit python files with Emacs but I do not know how to run python interpreter in Emacs. When i click on "switch to interpreter", then it says "Searching for program: no such file or directory, python" Someone says i need to make some change on .emacs file, but i do not know where to look for. And I am very unexperienced and just started to learn programming. I am not familiar with commonly used terminologies. I have been searching for solutions but most of the articles i find on the Internet only confuse me. so the questions are: how do i run python interpreter in Emacs? are there different kind of python interpreter? if so, why do they have different interpreters for one language?
How do I run a python interpreter in Emacs?
20,375,113
14
32
37,056
0
python,emacs,interpreter
C-c C-z can do this. It is the key-binding for the command python-switch-to-python
0
1
0
0
2012-04-20T06:27:00.000
8
1
false
10,241,279
1
0
0
2
I just downloaded GNU emacs23.4, and I already have python3.2 installed in Windows7. I have been using Python IDLE to edit python files. The problem is that I can edit python files with Emacs but I do not know how to run python interpreter in Emacs. When i click on "switch to interpreter", then it says "Searching for program: no such file or directory, python" Someone says i need to make some change on .emacs file, but i do not know where to look for. And I am very unexperienced and just started to learn programming. I am not familiar with commonly used terminologies. I have been searching for solutions but most of the articles i find on the Internet only confuse me. so the questions are: how do i run python interpreter in Emacs? are there different kind of python interpreter? if so, why do they have different interpreters for one language?
Simplest way to write cross-platform application with Python plugin extensibility?
10,245,373
0
0
1,296
0
c++,python,objective-c,plugins,cross-platform
The simplest way to share the cross-platform Python component of your application would probably be to implement it as a command-line program, and then invoke it using the relevant system calls in each of the front-ends. It's not the most robust way, but it could be sufficient. If you want plugins to just be a file containing Python code, I would recommend that they at least conform to a convention, e.g. by extending a class, and then have your code load them into the Python runtime using "import plugin_name". This would be better than having the plugins exist as separate programs because you would be able to access the output as Python types, rather than needing to parse text from standard input.
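A bare-bones sketch of that convention-based loading, assuming plugins live in an importable ./plugins package and each plugin module defines a run() function returning a dict (all names here are assumptions, not an established API):

    import importlib
    import os

    PLUGIN_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'plugins')

    def load_plugins():
        plugins = {}
        for name in sorted(os.listdir(PLUGIN_DIR)):
            if name.endswith('.py') and not name.startswith('_'):
                module = importlib.import_module('plugins.' + name[:-3])
                plugins[name[:-3]] = module.run   # convention: run() -> dict
        return plugins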
1
1
0
0
2012-04-20T10:44:00.000
2
0
false
10,244,713
0
0
0
1
I am writing an application which should be able to run on Linux, Mac OS X, Windows and BSD (not necessarily as a single executable, so it doesn't have to be Java) and be extensible using simple plugins. The way I want my plugins to be implemented is as a simple Python program which must implement a certain function and simply return a dictionary to the main program. Plugin installation should just be a matter of copying the plugin script file into the ./plugins directory relative to the main executable. The main program should be a stand-alone executable with shared code amongst all of the above platforms, but with platform specific front-ends (so the Linux and BSD versions would just be CLI tools, the Windows version have C++ and MFC front-end, and the Mac OS X version would have a Objecive-C and Cocoa front-end). So I guess it's really two questions: What's the simplest way to share common controller code between multiple front ends from: a. Objective-C on a Mac? b. C++ on Windows? c. C/Python from Linux/BSD? What's the simplest way to implement plugins from my common controller to execute custom plugins?
Tornado/Async Webserver theory, how to handle longer running operations to utilize the async server
10,246,669
2
2
652
0
python,asynchronous,tornado
Yes, as I understand your question, that is a normal use-case for Tornado. If all requests to your Tornado server would make requests to myapi.com, and myapi.com is blocking, then yes, myapi.com would still be the bottleneck. However, if only some requests have to be handled by myapi.com, then Tornado would still be a win, as it can keep handling such requests while waiting for responses for the requests to myapi.com. But regardless, if myapi.com can't handle the load, then putting a Tornado server in front of it won't magically fix that. The difference is that your Tornado server will still be able to respond to requests even when myapi.com is busy.
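A sketch of the /retrieve handler in that flow; while the fetch to the internal API is in flight, the IOLoop keeps servicing other requests (the API URL comes from the question and is purely illustrative):

    from tornado import gen, web
    from tornado.httpclient import AsyncHTTPClient

    class RetrieveHandler(web.RequestHandler):
        @gen.coroutine
        def get(self):
            client = AsyncHTTPClient()
            # Non-blocking: control returns to the IOLoop until the response arrives.
            response = yield client.fetch('http://myapi.com/retrieve_posts_for_user_id/42')
            self.write(response.body)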
0
1
0
0
2012-04-20T12:47:00.000
1
1.2
true
10,246,539
0
0
0
1
I have just begun to look at tornado and asynchronous web servers. In many examples for tornado, longer requests are handled by something like: make a call to tornado webserver tornado makes async web call to an api let tornado keep taking requests while callback waits to be called handle response in callback. server to user. So for hypothetical purposes say users are making a request to tornado server at /retrive. /retrieve will make a request to an internal api myapi.com/retrieve_posts_for_user_id/ or w/e. the api request could take a second to run while getting requests, then when it finally returns tornado servers up the response. First of all is this flow the 'normal' way to use tornado? Many of the code examples online would suggest so. Secondly, (this is where my mind is starting to get boggled) assuming that the above flow is the standard flow, should myapi.com be asyncronous? If its not async and the requests can take seconds apiece wouldn't it create the same bottleneck a blocking server would? Perhaps an example of a normal use case for tornado or any async would help to shed some light on this issue? Thank you.
Python: possible to send text to a java app's stdin that is already running?
10,251,723
1
0
159
0
java,python,windows,stdin
Have the Java app read from a named pipe. A named pipe allows multiple clients to write to it, and it is language-agnostic.
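A sketch of the Python side on Windows, assuming the Java application has already created a pipe called javain (the name is hypothetical); an existing named pipe can be opened like a file through the \\.\pipe\ namespace:

    # Push a line of text into the named pipe the Java app is reading as its input.
    with open(r'\\.\pipe\javain', 'w') as pipe:
        pipe.write('hello from python\n')
        pipe.flush()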
0
1
0
0
2012-04-20T16:55:00.000
1
1.2
true
10,250,353
0
0
1
1
I need to send text to a java app's stdin that is started independently from Python. I have been using pywin32 sendkeys up to this point, but there are some inconsistencies with the output that are making me look for other solutions. I am aware of subprocess, but it looks like that can only be used to interact with a child process that was started by Python, not one that is started independently. Socket is not an option for me because Windows does not allow multiple connections to the same port.
Compiling PyPy on cygwin
10,252,762
1
2
1,101
0
python,gcc,cygwin,pypy
Windows needs the ".exe" extension to know that it's executable. You'll need to modify the build to look for Windows and use the .exe extension.
0
1
0
0
2012-04-20T19:22:00.000
3
0.066568
false
10,252,304
1
0
0
1
I'm trying to compile PyPy on cygwin, and the compilation stops when python tries to open the file "externmod", which was just compiled with gcc. The problem with gcc on cygwin is that it automatically appends a '.exe' to whatever you're compiling, so even though gcc is called as gcc -shared -Wl,--enable-auto-image-base -pthread -o /tmp/usession-release-1.8/shared_cache/externmod, the output file ends up being externmod.exe regardless. So python tries to open /tmp/usession-release-1.8/shared_cache/externmod and can't find it--thus the compilation stops. Anyone know how to solve this, short of recompiling gcc? I don't want to do that.
Behavior of Spoof MAC Address communication
10,255,376
0
3
1,699
0
python,networking,arp,scapy,spoof
The strange thing I see here is how the ARP Reply bounced back to VM (1) although it used a spoofed MAC address. Try checking the ARP table on VM (2) and see which MAC entry it holds for VM (1); you'll probably find the legitimate MAC address cached from some communication that took place before you spoofed the MAC address.
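For reference, a Scapy sketch of the kind of spoofed request the question describes; note the spoofed address has to appear both in the Ethernet header and in the ARP payload (all addresses and the interface name are placeholders):

    from scapy.all import ARP, Ether, sendp

    spoofed_mac = 'de:ad:be:ef:00:01'
    pkt = (Ether(src=spoofed_mac, dst='ff:ff:ff:ff:ff:ff') /
           ARP(op='who-has', hwsrc=spoofed_mac,
               psrc='192.168.1.10', pdst='192.168.1.20'))
    sendp(pkt, iface='eth0')   # send at layer 2 on the chosen interface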
0
1
0
0
2012-04-21T01:07:00.000
1
0
false
10,255,319
0
0
0
1
I am programming with Python and his partner Scapy. I am facing a situation that i dont know if it is a normal behavior from ARP Protocol or some another problem. I have this scenario: 1- A vm machine (1) sending an "ARP Request" to another vm machine (2) with Spoofed Source MAC Address field (generated with Scapy). 2 - The vm machine (2) receives that "ARP Request" with the Source MAC Address field Spoofed and RESPONDS that with an "ARP Reply". The strange part is that the vm machine (1) receives that. Notes: I have confirmed with Wireshark that the first packet (ARP Request) gets on the vm machine (2) with the Source MAC Address Field REALLY spoofed. And the promiscous mode on networks interfaces are disabled, so, the vm machines only receive packets that are REALLY destined to their interfaces. So, my questions: a) Is it the normal behavior from ARP Protocol? b) Because vm machine (1) has another MAC Address configured on your interface (the real one), how the response packet sent from vm machine (2) with another MAC Address on the Destination field (that is spoofed, so, not even exists on the network) arrives to vm machine (1) and is effectively processed by vm machine (1) like a valid "ARP Reply"??
Changing window size of Geany's execute (F5) option
10,625,293
3
3
1,116
0
python,ide,geany
To solve that problem I added additional parameters to terminal command that geany runs. In Geany go to preferences (Edit->Preferences). Open Tools tab. There is an input field named Terminal where you can specify terminal program to use. I changed that to "gnome-terminal --maximize" to open terminal maximized. For Gnome-Terminal you can find more window options running "gnome-terminal --help-window-options" from terminal.
0
1
0
0
2012-04-21T13:02:00.000
1
1.2
true
10,259,148
0
0
0
1
I'm using Geany 0.18 for python developing and am in general really satisfied, but there is one little thing, that's still bugging me: I usually use the F5 (Build-->Execute) option to test my scripts, the appearing window is rather small, and if my script prints lines which are too long they are hard to read. I would like to change the default-window size of the little one popping up if I hit F5, but I haven't found anything to accomplish this. Is this possible at all ? Thanks Mischa
How to set up celery workers on separate machines?
43,633,216
2
58
26,305
0
python,celery
The way I deployed it is like this: clone your Django project onto a Heroku instance (this will run the frontend), add RabbitMQ as an add-on and configure it, then clone your Django project onto another Heroku instance (call it something like "worker") where you will run the Celery tasks.
0
1
0
0
2012-04-21T16:36:00.000
2
0.197375
false
10,260,925
0
0
1
2
I am new to celery.I know how to install and run one server but I need to distribute the task to multiple machines. My project uses celery to assign user requests passing to a web framework to different machines and then returns the result. I read the documentation but there it doesn't mention how to set up multiple machines. What am I missing?
How to set up celery workers on separate machines?
10,261,277
60
58
26,305
0
python,celery
My understanding is that your app will push requests into a queueing system (e.g. rabbitMQ) and then you can start any number of workers on different machines (with access to the same code as the app which submitted the task). They will pick out tasks from the message queue and then get to work on them. Once they're done, they will update the tombstone database. The upshot of this is that you don't have to do anything special to start multiple workers. Just start them on separate identical (same source tree) machines. The server which has the message queue need not be the same as the one with the workers and needn't be the same as the machines which submit jobs. You just need to put the location of the message queue in your celeryconfig.py and all the workers on all the machines can pick up jobs from the queue to perform tasks.
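A sketch of the only piece that actually has to be shared for that to work: a celeryconfig.py pointing every machine at the same broker (host name, credentials, and the worker command line are placeholders):

    # celeryconfig.py -- identical on the submitting web app and on every worker box.
    BROKER_URL = 'amqp://user:password@rabbit.example.com:5672//'
    CELERY_RESULT_BACKEND = 'amqp'

    # On each worker machine (same source tree), start workers with e.g.:
    #   celery worker --loglevel=info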
0
1
0
0
2012-04-21T16:36:00.000
2
1.2
true
10,260,925
0
0
1
2
I am new to celery.I know how to install and run one server but I need to distribute the task to multiple machines. My project uses celery to assign user requests passing to a web framework to different machines and then returns the result. I read the documentation but there it doesn't mention how to set up multiple machines. What am I missing?
Choosing Scripting lang
10,262,395
7
2
492
0
python,scripting,programming-languages,lua
Lua strings are encoding-agnostic. So, yes, you can write unicode strings in Lua scripts. If you need pattern matching, then the standard Lua string library does not support unicode classes. But plain substring search works.
0
1
0
1
2012-04-21T19:10:00.000
5
1
false
10,262,114
0
0
0
1
I need to script my app (not a game) and I have a problem, choosing a script lang for this. Lua looks fine (actually, it is ideal for my task), but it has problems with unicode strings, which will be used. Also, I thought about Python, but I don't like It's syntax, and it's Dll is too big for me ( about 2.5 Mib). Python and other such langs have too much functions, battaries and modules which i do not need (e.g. I/O functions) - script just need to implement logic, all other will do my app. So, I'd like to know is there a scripting lang, which satisfies this conditions: unicode strings I can import C++ functions and then call them from script Can be embedded to app (no dll's) without any problems Reinventing the wheel is not a good idea, so I don't want to develop my own lang. Or there is a way to write unicode strings in Lua's source? Like in C++ L"Unicode string"
TCP reverse proxy to copies of Twisted business logic processes
10,267,334
0
0
382
0
python,twisted
You might want to consider running an established load-balancing reverse proxy such as HAProxy or nginx, since it will be the weak point of your system. If you have a bug in your reverse proxy/load balancer, everything goes down. You could also handle SSL at the proxy so your twisted application servers don't need handle it.
0
1
0
0
2012-04-22T05:32:00.000
1
0
false
10,265,506
0
0
0
1
I'm looking for a way implement a reverseProxy to copies of (Twisted) server processes. I'm think of a setup where the business logic is run in copies to allow for easy maintenance and upgrade, and stores shared data in a database and perhaps memcached. I saw the reverseProxy class in twisted.web, but I don't think this is what I'm looking for for non-HTTP. First off, is this a good design in general and/or is there a more "twisted" way to do it?
Trying to Run a Python script using EasyEclipse for Python, get error "The selection cannot be launched, and there are no recent launches."
10,273,751
0
1
2,868
0
python,eclipse,pydev
I got it. When I was creating a file, I needed to specify the .py extension. After doing this, the file ran.
0
1
0
0
2012-04-22T15:28:00.000
1
1.2
true
10,269,245
0
0
0
1
This is my first time using Eclipse, so I think it must be some newbie configuration error. However, I can not sort it out, so I hope one of you might be able to help me. This is what I did: Installed Python 2.7 Installed EasyEclipse for Python 1.3.1 Went to Window>Preferences>Pydev>Interpreter-Python and selected C:\python27\python.exe Created a new project (when I had to select project type, there was only python 2.3, 2.4, and 2.5, so I selected 2.5, even though I am running 2.7) Created a file in that project with a simple helloworld. Clicked the "Run" play button. When I do this, I get the error "The selection cannot be launched, and there are no recent launches." As I said, I get the feeling this is just some configuration issue. In particular, could it be something about my "run configuration"? I see this as a dialog box, but I really don't know what I am supposed to be doing in it. Your help is greatly appreciated. Thank you.
Why is the listdir() function part of os module and not os.path?
10,274,338
12
2
370
0
python,listdir
I personally find the division between os and os.path to be a little inconsistent. According to the documentation, os.path should just be an alias to a module that works with paths for a particular platform (i.e., on OS X, Linux, and BSD you'll get posixpath; on Windows or ancient Macs you'll get something else).

    >>> import os
    >>> help(os)
    Help on module os:
    NAME
        os - OS routines for Mac, NT, or Posix depending on what system we're on.
    ...
    >>> help(os.path)
    Help on module posixpath:
    NAME
        posixpath - Common operations on Posix pathnames.

The listdir function doesn't operate on the path itself; instead it operates on the directory identified by the path. Most of the functions in os.path operate on the actual path and not on the filesystem. This means that many functions in os.path are string manipulation functions, while most functions in os are I/O functions / syscalls. Examples: os.path.join, os.path.dirname, and os.path.splitext are just string manipulation functions; os.listdir, os.getcwd, os.remove, and os.stat are all syscalls and actually touch the filesystem. Counterexamples: os.path has exists, getmtime, islink, and others which are basically wrappers around os.stat and do touch the filesystem. I consider them miscategorized, but others may disagree. Fun fact of the day: you won't find the modules in the top level of the library documentation, but you can actually import the version of os.path for any platform without having to run on that platform. This is documented in the documentation for os.path: you can also import and use the individual modules if you want to manipulate a path that is always in one of the different formats. They all have the same interface: posixpath for UNIX-style paths, ntpath for Windows paths, macpath for old-style MacOS paths, os2emxpath for OS/2 EMX paths. You can't do the same thing with os; it wouldn't make any sense.
0
1
0
0
2012-04-23T02:45:00.000
1
1.2
true
10,274,231
1
0
0
1
os.path module seems to be the default module for all path related functions. Yet, the listdir() function is part of the os module and not os.path module, even though it accepts a path as its input. Why has this design decision been made ?
Cross-platform USB development for Mac/Windows - possible with Ruby/Python?
10,289,020
0
1
528
0
python,ruby,usb,native
Well, it is definitely possible; I don't know what the equivalent is in RubyGems, but pyUSB is an easy-to-use module you can leverage for this, and there are numerous HTTP libraries for Python. As for making it self-contained, it is possible but not ideal. py2exe is a program that basically takes a copy of the Python interpreter, all dependencies used in your program, and your script, and glues them together into an executable bundle. By default py2exe will not pack everything into a single exe, but there are instructions on the wiki.
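A rough polling sketch with pyUSB (the vendor/product IDs and the notification URL are placeholders; on Windows, pyUSB still needs a libusb backend installed, which is the caveat the question raises):

    import time
    import urllib2       # any HTTP library would do
    import usb.core      # pyUSB

    VENDOR, PRODUCT = 0x1234, 0x5678

    while True:
        # find() returns a device object when the device is plugged in, else None.
        if usb.core.find(idVendor=VENDOR, idProduct=PRODUCT) is not None:
            urllib2.urlopen('http://example.com/notify')   # fire the HTTP call
            break
        time.sleep(1)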
0
1
0
1
2012-04-23T20:34:00.000
1
0
false
10,287,853
0
0
0
1
I'd like to develop an app that runs natively (self-contained executable) for both Mac and Windows that will detect/poll for a USB device being inserted and send an HTTP call as a result. I'm mainly a Ruby programmer, so ideally I could do this with a combination of Macruby/IronRuby and shared libraries, but it's looking like libusb requires a special driver to be installed on Windows (which I can't expect the clients to do). Are there libraries/gems that would facilitate this? Is it possible to do what I'm describing using Python/Ruby? It's not as important to be shared code as it is that the codebase is Python/Ruby/single language. libusb would be ideal if it didn't require an install of a special driver on Windows.
Updating app engine project code
10,296,008
0
0
270
0
python,google-app-engine
Try calling python appcfg.py update myapp/
0
1
0
0
2012-04-24T10:16:00.000
1
1.2
true
10,295,954
0
0
1
1
I am new to Google App Engine and Python. I have created an application in Python with the help of Google App Engine. I am using the command 'appcfg.py update myapp/' from the command prompt to update the live code. This command was working perfectly, but suddenly it stopped working. Now every time I run this command it just opens up the appcfg.py file. Please help me understand what is happening with the command.
What is BDB for in Python?
10,302,538
9
8
2,244
0
python,debugging
The bdb module implements the basic debugger facilities for pdb.Pdb, which is the concrete debugger class that is used to debug Python scripts from a terminal. Unless you're planning to write your own debugger user interface, you shouldn't need to use anything from bdb.
0
1
0
0
2012-04-24T16:45:00.000
1
1.2
true
10,302,298
1
0
0
1
I was looking through the Python standards libs and saw BDB the debugger framework. What is it for and can I get additional value from it? At the moment I use Eclipse/PyDev with the internal debugger which also supports conditionals breakpoints. Can I get something new from BDB?
GAE - Deployment Error: "AttributeError: can't set attribute"
12,338,986
1
11
4,377
0
python,google-app-engine,deployment
Add the --oauth2 flag to appcfg.py update for an easier fix
0
1
0
0
2012-04-25T11:55:00.000
6
0.033321
false
10,315,069
0
0
1
4
When I try to deploy my app I get the following error: Starting update of app: flyingbat123, version: 0-1 Getting current resource limits. Password for avigmati: Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in run_file(__file__, globals()) File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file execfile(script_path, globals_) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in main(sys.argv) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main result = AppCfgApp(argv).Run() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run self.action(self) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__ return method() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update self.UpdateVersion(rpcserver, self.basepath, appyaml) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion self.options.max_size) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload resource_limits = GetResourceLimits(self.rpcserver, self.config) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits resource_limits.update(GetRemoteResourceLimits(rpcserver, config)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits version=config.version) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate auth_token = self._GetAuthToken(credentials[0], credentials[1]) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken e.headers, response_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__ self.reason = args["Error"] AttributeError: can't set attribute 2012-04-25 19:30:15 (Process exited with code 1) The following is my app.yaml: application: flyingbat123 version: 0-1 runtime: python api_version: 1 threadsafe: no It seems like an authentication error, but I'm entering a valid email and password. What am I doing wrong?
GAE - Deployment Error: "AttributeError: can't set attribute"
12,912,373
0
11
4,377
0
python,google-app-engine,deployment
This also happens if your default_error value overlaps with your static_dirs in app.yaml.
0
1
0
0
2012-04-25T11:55:00.000
6
0
false
10,315,069
0
0
1
4
When I try to deploy my app I get the following error: Starting update of app: flyingbat123, version: 0-1 Getting current resource limits. Password for avigmati: Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in run_file(__file__, globals()) File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file execfile(script_path, globals_) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in main(sys.argv) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main result = AppCfgApp(argv).Run() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run self.action(self) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__ return method() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update self.UpdateVersion(rpcserver, self.basepath, appyaml) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion self.options.max_size) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload resource_limits = GetResourceLimits(self.rpcserver, self.config) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits resource_limits.update(GetRemoteResourceLimits(rpcserver, config)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits version=config.version) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate auth_token = self._GetAuthToken(credentials[0], credentials[1]) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken e.headers, response_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__ self.reason = args["Error"] AttributeError: can't set attribute 2012-04-25 19:30:15 (Process exited with code 1) The following is my app.yaml: application: flyingbat123 version: 0-1 runtime: python api_version: 1 threadsafe: no It seems like an authentication error, but I'm entering a valid email and password. What am I doing wrong?
GAE - Deployment Error: "AttributeError: can't set attribute"
10,871,690
1
11
4,377
0
python,google-app-engine,deployment
I had the same problem and after inserting logger.warn(body), I get this: WARNING appengine_rpc.py:231 Error=BadAuthentication Info=InvalidSecondFactor The standard error message could have been more helpful, but this makes me wonder if I should not use an application specific password?
0
1
0
0
2012-04-25T11:55:00.000
6
0.033321
false
10,315,069
0
0
1
4
When I try to deploy my app I get the following error: Starting update of app: flyingbat123, version: 0-1 Getting current resource limits. Password for avigmati: Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in run_file(__file__, globals()) File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file execfile(script_path, globals_) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in main(sys.argv) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main result = AppCfgApp(argv).Run() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run self.action(self) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__ return method() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update self.UpdateVersion(rpcserver, self.basepath, appyaml) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion self.options.max_size) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload resource_limits = GetResourceLimits(self.rpcserver, self.config) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits resource_limits.update(GetRemoteResourceLimits(rpcserver, config)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits version=config.version) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate auth_token = self._GetAuthToken(credentials[0], credentials[1]) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken e.headers, response_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__ self.reason = args["Error"] AttributeError: can't set attribute 2012-04-25 19:30:15 (Process exited with code 1) The following is my app.yaml: application: flyingbat123 version: 0-1 runtime: python api_version: 1 threadsafe: no It seems like an authentication error, but I'm entering a valid email and password. What am I doing wrong?
GAE - Deployment Error: "AttributeError: can't set attribute"
12,750,238
2
11
4,377
0
python,google-app-engine,deployment
I know this doesn't answer the OP's question, but it may help others who experience problems using --oauth2 mentioned by others in this question. I have 2-step verification enabled, and I had been using the application-specific password, but found it tedious to look up and paste the long string every day or so. I found that using --oauth2 returns This application does not exist (app_id=u'my-app-id') but by adding the --no_cookies option appcfg.py --oauth2 --no_cookies update my-app-folder\ I can now authenticate each time by just clicking [Allow access] in the browser window that is opened. I'm using Python SDK 1.7.2 on Windows 7. NOTE: I found this solution elsewhere, but I can't remember where, so I can't properly attribute it. Sorry.
0
1
0
0
2012-04-25T11:55:00.000
6
0.066568
false
10,315,069
0
0
1
4
When I try to deploy my app I get the following error: Starting update of app: flyingbat123, version: 0-1 Getting current resource limits. Password for avigmati: Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in run_file(__file__, globals()) File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file execfile(script_path, globals_) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in main(sys.argv) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main result = AppCfgApp(argv).Run() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run self.action(self) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__ return method() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update self.UpdateVersion(rpcserver, self.basepath, appyaml) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion self.options.max_size) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload resource_limits = GetResourceLimits(self.rpcserver, self.config) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits resource_limits.update(GetRemoteResourceLimits(rpcserver, config)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits version=config.version) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate auth_token = self._GetAuthToken(credentials[0], credentials[1]) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken e.headers, response_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__ self.reason = args["Error"] AttributeError: can't set attribute 2012-04-25 19:30:15 (Process exited with code 1) The following is my app.yaml: application: flyingbat123 version: 0-1 runtime: python api_version: 1 threadsafe: no It seems like an authentication error, but I'm entering a valid email and password. What am I doing wrong?
Eclipse / PyDev overrides @sys, cannot find Python 64bits interpreter
10,343,117
0
2
321
0
python,linux,eclipse,pydev
I don't really think there's anything that can be done on the PyDev side... it seems @sys is resolved based on the kind of process you're running (not your system), so, if you use a 64 bit vm (I think) it should work... Other than that, you may have to provide the actual path instead of using @sys...
0
1
0
1
2012-04-25T12:05:00.000
1
1.2
true
10,315,232
0
0
1
1
I'm working in a multiuser environment with the following setup: Linux 64bits environment (users can login in to different servers). Eclipse (IBM Eclipse RSA-RTE) 32bits. So Java VM, Eclipse and PyDev is 32bits. Python 3 interpreter is only available for 64bits at this moment. In the preferences for PyDev, I want to set the path to the Python interpreter like this: /app/python/@sys/3.2.2/bin/python In Eclipse/PyDev, @sys points to i386_linux26 even if the system actually is amd64_linux26. So if I do not explicitly write amd64_linux26 instead of @sys, PyDev will not be able to find the Python 3 interpreter which is only available for 64bits. The link works as expected outside Eclipse/PyDev, e.g. in the terminal. Any ideas how to force Eclipse/PyDev to use the real value of @sys? Thanks in advance!
How to obtain pre-built *debug* version of Python library (e.g. Python27_d.dll) for Windows
10,323,635
4
12
12,543
0
python,gdb,debug-symbols,activepython,mingw-w64
The best way to create a debug version of Python under Windows is to use the Debug build in the Visual Studio projects that come with the Python source, using the compiler version needed for the specific Python release, i.e. VS 2008. There may be other ways, but this is certainly the best way. If you really need a 64-bit debug build also, the best way is to buy a copy of VS 2008 (i.e. not use the Express version). It may be possible to create an AMD64 debug build using the SDK 64-bit compiler, but again, using the officially-supported procedures is the best way.
0
1
0
1
2012-04-25T12:31:00.000
3
0.26052
false
10,315,662
0
0
0
1
Firstly, I should state that my current development environment is MSYS + mingw-w64 + ActivePython under Windows 7 and that on a normal day I am primarily a Linux developer. I am having no joy obtaining, or compiling, a version of the Python library with debug symbols. I need both 32bit and 64bit debug versions of the Python27.dll file, ideally. I want to be able to embed Python and implement Python extensions in C++, and be able to call upon a seamless debugging facility using the gdb-7.4 I have built for mingw-w64, and WingIDE for the pure Python side of things. Building Python 2.7.3 from source with my mingw-w64 toolchain is proving too problematic -- and before anyone flames me for trying: I acknowledge that this environment is unsupported, but I thought I might be able to get this working with a few judicious patches (hacks) and: make OPT='-g -DMS_WIN32 -DWIN32 -DNDEBUG -D_WINDOWS -DUSE_DL_EXPORT' I was wrong... I gave up at posixmodule.c since the impact of my changes became uncertain; ymmv. I have tried building with Visual C++ 2010 Express but being primarily a Linux developer the culture-shock is too much for me to bear today; the Python project does not even import successfully. Apparently, I need Visual C++ 2008, yet I am already convinced I don't want to go down this road if at all possible... It's really surprising to me that there is not a zip-file providing the requisite .dlls somewhere on the Internet. ActiveState should really provide these as an optional download with each release of ActivePython that they make -- perhaps that's where the paid support comes in ;-). What is the best way to obtain the Python debug library files given my environment?
Large Variations in Identical Python Process Run Times
10,323,602
3
2
152
0
python,multithreading,terminal,cpu,cpu-usage
One reason for this might be the use of hyper-threading. HT logical CPUs appear to the operating system as separate CPUs, but really are not. So if two threads run on the same core in separate logical (HT) CPUs, performance would be smaller than if they ran on separate cores. The easiest solution might be to disable hyper-threading. If that is not an option, use processor affinity to pin each Python process to its separate CPU.
0
1
0
0
2012-04-25T14:15:00.000
1
1.2
true
10,317,570
1
0
0
1
I just bought a new, machine to run python scripts for large scale modeling. It has two CPUs with 4 cores each (Xeon, 2.8GhZ). Each core has hyper-threading enabled for 4 logical cpu cores. Now for the problem: When I run identical python processes in 8 separate terminals, top command shows that each process is taking 100% of the cpu. However, the process in terminal 1 is running about 4 times slower than the process in terminal 8. This seems odd to me... I wonder if it has something to do with how the processes are scheduled on the various (logical?) cores? Does anyone have an idea of how I could all of the to run in about the same speed? EDIT (in response to larsmans): Good point. The script is a ginat loop that runs about 10,000 times. Each loop reads in a text file (500 lines) and runs some basic calculations on the quantities read in. While the loop runs, it uses about 0.2% of Memory. There is no writing to disk during the loop. I could understand that the read access could be a limiting factor but I am perplexed about the fact that it would be the first process that would be the slowest if that was the case. I would have expected that it would get slower as I start more processes... I timed the processes a couple of times using the time command in the terminal. EDIT2: I just found out that sometimes a single core is designated to handle all reading and writing - so multiple processes (even if they run on separate cores) will use one single core for all the I/O... This would however only affect one of the cores, not cause all to have various processing speeds...
Gracefull shutdown, close db connections, opened files, stop work on SIGTERM, in multiprocessing
10,322,481
5
3
403
1
python,database,multiprocessing,signals
Store all the open files/connections/etc. in a global structure, and close them all and exit in your SIGTERM handler. (A short sketch follows this entry.)
0
1
0
0
2012-04-25T19:27:00.000
1
1.2
true
10,322,422
0
0
0
1
I have a daemon process which spawns child processes using multiprocessing to do some work; each child process opens its own connection handle to the DB (postgres in my case). Jobs are passed to the processes via a Queue, and if the queue is empty the processes sleep for some time and then recheck the queue. How can I implement a "graceful shutdown" on SIGTERM? Each subprocess should terminate as fast as possible, while properly closing/terminating the current cursor/transaction, the db connection, and any opened files.
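A minimal sketch of the approach suggested in the answer above: keep the open resources in a module-level list and set a flag from a SIGTERM handler so the worker loop can finish its current job and then close everything. handle_job and the queue/connection objects are illustrative stand-ins, not part of the original question.

    import signal

    open_resources = []            # db connections, cursors, open files to clean up
    shutdown_requested = [False]

    def _on_sigterm(signum, frame):
        # just flag it; the loop below finishes its current job first
        shutdown_requested[0] = True

    def worker(job_queue, db_connection, handle_job):
        open_resources.append(db_connection)
        signal.signal(signal.SIGTERM, _on_sigterm)
        while not shutdown_requested[0]:
            try:
                job = job_queue.get(timeout=5)   # wake up regularly to re-check the flag
            except Exception:                    # typically Queue.Empty
                continue
            handle_job(job, db_connection)
        # graceful exit: close connections/files before the process dies
        for resource in open_resources:
            resource.close()

Setting only a flag in the handler and doing the actual cleanup in the loop keeps the current transaction intact instead of aborting it mid-write.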
How to create a hard link from within a Python script on a Mac?
10,327,842
1
0
2,229
0
python,macos,unix,filesystems,osx-lion
os.link claims to work on all Unix platforms. Are there any OS X specific issues with it? (A short usage sketch follows this entry.)
0
1
0
1
2012-04-26T05:40:00.000
2
0.099668
false
10,327,804
0
0
0
1
I'm assuming with a call to a UNIX shell, but I was wondering if there are other options from within Python.
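A minimal sketch of the os.link call mentioned in the answer above; the paths are illustrative. os.link is the standard-library way to create a hard link from Python, with no shell call needed.

    import os

    src = '/Users/me/Documents/original.txt'   # illustrative paths
    dst = '/Users/me/Documents/hardlink.txt'

    # after this, both names refer to the same inode on the same filesystem
    os.link(src, dst)
    print(os.stat(src).st_ino == os.stat(dst).st_ino)   # True if the link was created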
Steps to access Django application hosted in VM from Windows 7 client
10,331,810
0
0
422
1
django,wxpython,sql-server-2008-r2,vmware,python-2.7
Maybe this could help you a bit, although my set-up is slightly different. I am running an ASP.NET web app developed on Windows7 via VMware fusion on OS X. I access the web app from outside the VM (browser of Mac or other computers/phones within the network). Here are the needed settings: Network adapter set to (Bridged), so that the VM has its own IP address Configure the VM to have a static IP At this point, the VM is acting as its own machine, so you can access it as if it were another server sitting on the network.
0
1
0
0
2012-04-26T10:21:00.000
2
1.2
true
10,331,518
0
0
1
1
We have developed an application using DJango 1.3.1, Python 2.7.2 using Database as SQL server 2008. All these are hosted in Win 2008 R2 operating system on VM. The clients has windows 7 as o/s. We developed application keeping in view with out VM, all of sudden client has come back saying they can only host the application on VM. Now the challnege is to access application from client to server which is on VM. If anyone has done this kind of applications, request them share step to access the applicaiton on VM. As I am good at standalone systems, not having knowledge on VM accessbility. We have done all project and waiting to someone to respond ASAP. Thanks in advance for your guidence. Regards, Shiva.
Why two smtpd.py are installed?
10,335,348
1
0
181
0
python
The one in /usr/bin is in your PATH and can be executed by calling its filename in a shell. The second one is in library directory referenced by PYTHONPATH or sys.path and can be used as a module in python scripts. They are probably hard or symlinks if they have the same content.
0
1
0
1
2012-04-26T14:12:00.000
1
0.197375
false
10,335,259
0
0
0
1
After installing python on Linux, smtpd.py will be installed under /usr/bin directory. Why does this module exist here? How about the other one under directory /usr/lib/python2.x? What's the difference?
Python invoking shell command with params loop
10,337,484
2
0
488
0
python,shell,loops,subprocess
Your subprocess.call probably blocked on whatever your command was. I doubt it's your Python script, but rather whatever the shell command might be (taking too long). You can tell if your command is completing or not by checking the return code: print subprocess.call(["command","param"]) It should print 0 if it was successful, or raise an exception if the command has problems. But if you never see consecutive prints, then it's never returning from the call. (A short sketch follows this entry.)
0
1
0
0
2012-04-26T16:24:00.000
2
1.2
true
10,337,451
1
0
0
2
The following little script is supposed to run a shell command with a parameter every 10 minutes. It ran correctly once (30 minutes ago) but isn't playing ball now (it should have run the process another 2 times since). Have I made an error? while(True): subprocess.call(["command","param"]) time.sleep(600)
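A hedged sketch of the return-code check suggested in the answer above, folded into the original loop; "command" and "param" are the question's own placeholders.

    import subprocess
    import time

    while True:
        # call() blocks until the command exits and returns its exit status;
        # if this line never returns, the command itself is hanging
        returncode = subprocess.call(["command", "param"])
        if returncode != 0:
            raise RuntimeError("command failed with exit code %d" % returncode)
        time.sleep(600)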
Python invoking shell command with params loop
10,337,506
1
0
488
0
python,shell,loops,subprocess
Try subprocess.Popen if you don't need to wait for the command to complete. From the docs, subprocess.call: Run the command described by args. Wait for command to complete, then return the returncode attribute. (A short sketch follows this entry.)
0
1
0
0
2012-04-26T16:24:00.000
2
0.099668
false
10,337,451
1
0
0
2
The following little script is supposed to run a shell command with a parameter every 10 minutes. It ran correctly once (30 minutes ago) but isn't playing ball now (it should have run the process another 2 times since). Have I made an error? while(True): subprocess.call(["command","param"]) time.sleep(600)
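A hedged sketch of the Popen variant suggested in the answer above, for the case where the script should not block on the command; again, "command" and "param" are the question's placeholders.

    import subprocess
    import time

    while True:
        # Popen returns immediately instead of waiting for the command to finish
        proc = subprocess.Popen(["command", "param"])
        time.sleep(600)
        if proc.poll() is None:       # previous run still going after 10 minutes?
            proc.wait()               # or terminate()/skip a cycle, as appropriate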
Where can I find the source for "python mode" when editing emacs configuration for mac os x?
10,340,315
5
1
108
0
python,macos,emacs
In general I would do M-x describe-function RET python-mode--by default bound to C-h f-- and the first line in the info window is: python-mode is an interactive compiled Lisp function in ``python.el'. And that python.el is clickable, for me, and takes me to the file that it was defined in, at which point M-x pwd works.
0
1
0
0
2012-04-26T19:33:00.000
1
1.2
true
10,340,141
1
0
0
1
I'm looking to play with python mode for emacs on mac os x, but I can't seem to find the source files for the mode. What are the standard locations, where a default installation of emacs might have put its modes when installed on Mac OS X? (I'm using GNU Emacs 24.0.95.1 (i386-apple-darwin11.3.0, NS apple-appkit-1138.32))
install python + mechanize + lxml on windows
10,475,706
0
3
8,933
0
python,mechanize,lxml
I'm not happy with any of the solutions offered. Hopefully someone will come along with a better one but for now it seems safe to say that there is no simple solution for installing python + mechanize + lxml + arbitrary other libraries on windows.
0
1
0
0
2012-04-27T10:10:00.000
6
0
false
10,348,770
1
0
0
1
What is the easiest way to install python 2 plus lxml plus mechanize on windows? I'm looking for a solution that is easy to follow and also makes it easy to install other libraries (eggs?) in the future. Edit I want to be able to install libraries which require a compiler. Ruby for windows has a dev kit which allows you to easily install gems that require a compiler. I'm looking for a similar setup for Python.
Proper Unix (.profile, .bash_profile) changes for Python usage
10,368,402
2
5
479
0
python,profile,.bash-profile
python works out of the box on OS X (as does ruby, for that matter). The only changes I would recommend for a beginner are: 1) Python likes to be reassured that the terminal can handle UTF-8 before it will print Unicode strings. Add export LANG=en_US.UTF-8 to .profile. (It may be that the .UTF-8 part is already present by default on Lion - I haven't checked since Snow Leopard.) Of course, this is something that will help you in debugging, but you shouldn't rely on it being set this way on other machines. 2) Install pip by doing easy_install pip (add sudo if necessary). After that, install Python packages using pip install; this way, you can easily remove them using pip uninstall.
0
1
0
1
2012-04-28T23:08:00.000
2
1.2
true
10,368,361
0
0
0
1
I new to Python and to programming in general. I'm a novice, and do not work in programming, just trying to teach myself how to program as a hobby. Prior to Python, I worked with Ruby for a bit and I learned that one of the biggest challenges was actually properly setting up my computer. Background: I'm on a Macbook with OSX 10.7. With Ruby, you have to (or rather, you should), edit your ./profile and add PATH info. When you install and use RVM, there are additional items you need to add to your bash_profile. Do you have to make similar changes with Python? What are the best practices as I'm installing/getting started to ensure I can install modules and packages correctly?
Cross compiling a python script on windows into linux executable
10,456,295
0
5
7,825
0
python,windows,ubuntu,cross-compiling
I have no experience deploying applications on Linux - but can't you add dependencies when you package the software for apt-get? I install packages that bring in other libraries all the time. Seems like you could do this for wx.
0
1
0
0
2012-04-29T12:44:00.000
3
0
false
10,372,216
1
0
0
1
I've created a program using Python on Windows. How do you turn it into a Linux executable? To be specific, Linux Ubuntu 9.10.
Python: how to copy files in /bin/ folder
10,386,159
1
0
1,732
0
python,copy,system-administration
You need to run the program with escalated privileges. Under Ubuntu, this is normally done with the sudo command, which will prompt the user for their password.
0
1
0
0
2012-04-30T15:20:00.000
2
0.099668
false
10,386,132
1
0
0
1
I'd like to place a special file in the /usr/bin folder of Ubuntu. Basically I'm trying to write a setup file in python which would do the job. But administrative privileges are needed to fulfill the job, how to provide my setup with these privileges (provided that I have the password and can use it in my program)?
Remote API is extremely slow
10,398,726
2
2
415
0
python,google-app-engine
Don't forget that the remoteapi executes your code locally and only calls appengine servers for datastore/blobstore/etc. operations. So in essence, you're running code that's hitting a database living over the network. It's definitely slower.
0
1
0
1
2012-05-01T04:24:00.000
2
0.197375
false
10,393,531
0
0
1
1
I use the remote API for some utility tasks, and I've noticed that it is orders of magnitude slower than code running on Appengine. A simple get_by_id(list) took a couple of minutes using the remote API, and a couple of seconds running on Appengine. The logs show that the remote API fetched separately taking a couple of seconds each; whereas on Appengine the whole list of objects is retrieved in about the same time. Is there any way to improve this situation?
get_application_id() behaviour with aliased app id
10,423,664
3
1
124
0
python,google-app-engine
No - get_application_id returns the ID of the app that is actually serving your request. You can examine the hostname to see if the request was directed to oldappid.appspot.com.
0
1
0
0
2012-05-02T23:44:00.000
1
1.2
true
10,423,245
0
0
1
1
I was forced to alias my app name after migrating to the High Replication Datastore. I use the google.appengine.api.app_identity.get_application_id() method throughout my app, but now it returns the new app id instead of the original one even when visiting via the old app id url. Is there a way to output the original app id? thanks
Pointers on using celery with sorl-thumbnails with remote storages?
11,048,085
4
11
1,459
0
python,django,amazon-s3,celery,sorl-thumbnail
As I understand it, Sorl works correctly with the S3 storage but it's very slow. I believe that you know what image sizes you need. You should launch the celery task after the image has been uploaded. In the task you call sorl.thumbnail.default.backend.get_thumbnail(file, geometry_string, **options); Sorl will generate a thumbnail and upload it to S3. The next time you request an image from a template it's already cached and served directly from Amazon's servers. As for a clean way to handle a placeholder thumbnail image while the image is being processed: for this you will need to override the Sorl backend. Add a new argument to the get_thumbnail function, e.g. generate=False. When you call this function from celery, pass generate=True. And in the function change its logic, so that if the thumbnail is not present and generate is True you work just like the standard backend, but if generate is False you return your placeholder image with text like "We are processing your image now, come back later" and do not call backend._create_thumbnail. You can launch a task in this case too, if you think that the thumbnail could have been accidentally deleted. I hope this helps. (A rough sketch of the delayed call follows this entry.)
0
1
0
0
2012-05-03T02:48:00.000
3
0.26052
false
10,424,456
0
0
1
1
I'm surprised I don't see anything but "use celery" when searching for how to use celery tasks with sorl-thumbnails and S3. The problem: using remote storages causes massive delays when generating thumbnails (think 100s+ for a page with many thumbnails) while the thumbnail engine downloads originals from remote storage, crunches them, then uploads back to s3. Where is a good place to set up the celery task within sorl, and what should I call? Any of your experiences / ideas would be greatly appreciated. I will start digging around Sorl internals to find a more useful place to delay this task, but there are a few more things I'm curious about if this has been solved before. What image is returned immediately? Sorl must be told somehow that the image returned is not the real thumbnail. The cache must be invalidated when celery finishes the task. Handle multiple thumbnail generation requests cleanly (only need the first one for a given cache key) For now, I've temporarily solved this by using an nginx reverse proxy cache that can serve hits while the backend spends time generating expensive pages (resizing huge PNGs on a huge product grid) but it's a very manual process.
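A rough sketch of where the delayed call from the answer above could live, assuming Celery's shared_task decorator and sorl's get_thumbnail helper; the task name and the idea of passing the storage name (a plain string, so the arguments stay serializable) are assumptions for illustration, not the official sorl integration.

    from celery import shared_task
    from sorl.thumbnail import get_thumbnail

    @shared_task
    def build_thumbnail(storage_name, geometry, **options):
        # storage_name is the file's name in storage (a plain string); sorl
        # caches the generated thumbnail, so later template lookups are served
        # from the cache rather than hitting S3 again
        return get_thumbnail(storage_name, geometry, **options).url

It could then be queued from, say, a post_save handler with build_thumbnail.delay(instance.image.name, '150x150', crop='center') (again an assumed call site, not something from the original question).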
How to make python3.2 interpreter the default interpreter in debian
28,640,488
5
18
24,208
0
python,linux,debian
btw, if you are using bash or running from the shell, and you normally include at the top of the file the following line: #!/usr/bin/python then you can change the line to instead be: #!/usr/bin/python3 That is another way to have pythonX run instead of the default (where X is 2 or 3).
0
1
0
0
2012-05-03T15:12:00.000
5
0.197375
false
10,434,260
1
0
0
2
I got both python2 and python3 installed in my debian machine. But when i try to invoke the python interpreter by just typing 'python' in bash, python2 pops up and not python3. Since I am working with the latter at the moment, It would be easier to invoke python3 by just typing python. Please guide me through this.
How to make python3.2 interpreter the default interpreter in debian
10,468,921
9
18
24,208
0
python,linux,debian
Well, you can simply create a virtualenv with the python3.x using this command: virtualenv -p <path-to-python3.x> <virtualenvname>
0
1
0
0
2012-05-03T15:12:00.000
5
1
false
10,434,260
1
0
0
2
I got both python2 and python3 installed in my debian machine. But when i try to invoke the python interpreter by just typing 'python' in bash, python2 pops up and not python3. Since I am working with the latter at the moment, It would be easier to invoke python3 by just typing python. Please guide me through this.
Python installation mess on Mac OS X, cannot run python
10,435,770
1
4
1,985
0
python,macos,bash,shell,installation
Something got messed up in your $PATH. Have a look in ~/.profile, ~/.bashrc, ~/.bash_profile, etc., and look for a line starting with export that doesn't end cleanly.
0
1
0
0
2012-05-03T16:40:00.000
3
0.066568
false
10,435,715
0
0
0
1
I stupidly downloaded python 3.2.2 and since then writing 'python' in the terminal yields 'command not found'. Also, when starting the terminal I get this: Last login: Wed May 2 23:17:28 on ttys001 -bash: export: `folder]:/Library/Frameworks/Python.framework/Versions/2.7/bin:/opt/local/bin:/opt/local/sbin:/usr/local/git/bin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Applications/android-sdk-mac_86/tools:/Applications/android-sdk-mac_86/platform-tools:/usr/local/git/bin:/usr/X11/bin:/usr/local/ant/bin': not a valid identifier Why the Android SDK folder is there is beyond me. It's all jazzed up. Any ideas how can I remove the offending file, folder or fix his problem? I've checked the System Profiler and python 2.6.1 and 2.7.2.5 shows up.
Distributed server model
10,440,880
1
1
221
1
python,distributed-computing
The general way to handle this is to have the threads report their status back to the server daemon. If you haven't seen a status update within the last 5N seconds, then you kill the thread and start another. You can keep track of the current active threads that you've spun up in a list, then just loop through them occasionally to determine state. You of course should also fix the errors in your program that are causing threads to exit prematurely. Premature exits and killing a thread could also leave your program in an unexpected, non-atomic state. You should probably also have the server daemon run a cleanup process that makes sure any items in your queue, or whatever you're using to determine the workload, get reset after a certain period of inactivity. (A short sketch follows this entry.)
0
1
0
0
2012-05-03T22:39:00.000
1
1.2
true
10,440,277
0
0
1
1
Lets say I have 100 servers each running a daemon - lets call it server - that server is responsible for spawning a thread for each user of this particular service (lets say 1000 threads per server). Every N seconds each thread does something and gets information for that particular user (this request/response model cannot be changed). The problem I a have is sometimes a thread hangs and stops doing something. I need some way to know that users data is stale, and needs to be refreshed. The only idea I have is every 5N seconds have the thread update a MySQL record associated with that user (a last_scanned column in the users table), and another process that checks that table every 15N seconds, if the last_scanned column is not current, restart the thread.
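A minimal sketch of the heartbeat idea from the answer above, using an in-process dict instead of MySQL; do_work and restart_worker are illustrative stand-ins for the real per-user request and the respawn logic.

    import threading
    import time

    N = 60                      # the per-user polling interval from the question
    last_seen = {}              # user_id -> timestamp of the last completed cycle
    lock = threading.Lock()

    def user_worker(user_id, do_work):
        while True:
            do_work(user_id)                    # the request/response step
            with lock:
                last_seen[user_id] = time.time()
            time.sleep(N)

    def watchdog(restart_worker):
        while True:
            cutoff = time.time() - 5 * N
            with lock:
                stale = [u for u, t in last_seen.items() if t < cutoff]
            for user_id in stale:               # data is stale: restart that worker
                restart_worker(user_id)
            time.sleep(15 * N)

Swapping the dict for a last_scanned column in MySQL, as the question proposes, only changes where last_seen is stored; the check itself stays the same.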
python Input delegation for subprocesses
11,967,614
0
1
116
0
shell,python-3.x,subprocess,python-idle
Use process.stdin.write. Remember to set stdin = subprocess.PIPE when you call subprocess.Popen. (A short sketch follows this entry.)
0
1
0
0
2012-05-04T03:35:00.000
1
0
false
10,442,359
0
0
0
1
I am currently displaying the output of a subprocess on the python shell (in my case IDLE on Windows) by using a pipe and displaying each line. I want to do this with a subprocess that has user input, so that the prompt will appear on the python console, the user can enter the result, and the result can be sent to the subprocess. Is there a way to do this?
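A minimal sketch of the answer above (written for Python 3, matching the question's tags): the child is started with both pipes, its prompt is read, the user is asked in the parent's console, and the reply is written back. The child command name is illustrative.

    import subprocess

    proc = subprocess.Popen(
        ["python", "child_script.py"],   # illustrative child command
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        universal_newlines=True,         # work with str rather than bytes
    )

    prompt = proc.stdout.readline()      # e.g. the child's input prompt
    reply = input(prompt)                # ask the user in the parent's console
    proc.stdin.write(reply + "\n")       # forward the answer to the child
    proc.stdin.flush()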
Relative Python Path to Script
10,446,554
1
2
259
0
python,relative-path
An "Exec format error" typically occurs when no interpreter is specified in the script. Try adding #!/bin/sh at the beginning of the script and then execute the Python script again. (A short sketch follows this entry.)
0
1
0
0
2012-05-04T09:45:00.000
1
1.2
true
10,446,440
0
0
0
1
Python project looks like this: setup.py README Application scripts hello.py shell_scripts date.sh From hello.py I'm executing the command subprocess.call(['../shell_scripts/date.sh']) and receiving the error OSError: [Errno 8] Exec format error. Note: date.sh is a perfectly valid shell script and is executable. I've also tried os.path.realpath to no avail. I assume this is due to an invalid path?
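A hedged sketch combining the shebang fix from the answer above with a path built relative to hello.py rather than the current working directory, assuming the scripts/ and shell_scripts/ folders are siblings as in the question's layout.

    import os
    import subprocess

    # resolve the script relative to hello.py itself, not the working directory
    here = os.path.dirname(os.path.abspath(__file__))
    script = os.path.join(here, os.pardir, "shell_scripts", "date.sh")

    # date.sh also needs an interpreter line (e.g. #!/bin/sh) for call() to exec it
    returncode = subprocess.call([script])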
Python: interacting with STDIN/OUT of a running process in *nix
10,455,014
0
4
2,715
0
python,unix,stdout,stdin
You cannot redirect stdin or stdout for a running process. You can, however, add code to your caller -- foo.py -- that will read from foo.py's stdin and send it to bar.py's stdout, and vice-versa. In this model, foo.py would connect bar.py's stdin and stdout to pipes and would be responsible for shuttling data between those pipes and the real stdin/stdout.
0
1
0
0
2012-05-04T17:55:00.000
2
0
false
10,453,799
0
0
0
1
Is there any way of attaching a console's STDIN/STDOUT to an already running process? Use Case: I have a python script which runs another python script on the command line using popen. Let's say foo.py runs popen to run python bar.py. Then bar.py blocks on input. I can get the PID of python bar.py. Is there any way to attach a new console to the running python instance in order to be able to work interactively with it? This is specifically useful because I want to run pdb inside of bar.py.
How do Homebrew, PIP, easy_install etc. work so that I can clean up
10,506,147
2
19
9,232
0
python,macports,pip,homebrew,easy-install
The advantage of using a Python installed via a package manager like Homebrew or MacPorts would be that this provides a simple way of removing the Python installation and reinstalling it. Also, you can install a more recent version than the one Mac OS X provides.
0
1
0
0
2012-05-04T20:48:00.000
2
0.197375
false
10,455,947
0
0
0
1
I have a problem that comes from me following tutorials without really understanding what I'm doing. The root of the problem I think is the fact that I don't understand how the OS X filesystem works. The problem is bigger than Python but it was when I started learning about Python that I realized how little I really understand. So in the beginning I started following tutorials which led me to use the easy_install command a lot and when a lot of tutorials recommended PIP I never got it running. So I have run a lot of commands and installed a lot of different packages. As I have understood Lion comes with a python install. I have been using this a lot and from this I have installed various packages with easy_install. Is there any way to go back to default installation and begin from the very beginning? Is this something I want to do? If so why? Is there any advantage of using a Python version I have installed with Homebrew? How can I see from where Python is run when I run the Python command? When I do install something with either easy_install, homebrew, macports etc where do things actually end up?
Python PyInstaller Ubuntu Troubles
10,468,962
1
2
731
0
python,linux,ubuntu,tkinter,pyinstaller
The following is reposted from my comment on the question, so that this question may be marked as answered (assuming OP is satisfied with this answer). It was originally posted as a comment because it does not answer the question directly. The reason there aren't many tutorials on how to do this on Linux is because there is not much point to do this on Linux, as the actual Python files can be turned into a package with a set of dependencies and everything. Perhaps you should try that instead; the PyInstaller approach is only worth it if you have a valid reason not to use packages (and such reasons do exist).
0
1
0
0
2012-05-06T06:31:00.000
1
1.2
true
10,468,669
1
0
0
1
I have been searching for tutorials on how to use pyinstaller and cant find that one that i can follow. I have been researching this for hours on end and cant find anything that helps me. I am using Linux and was wondering if anyone can help me out form the very begging, because there is not one part i understand about this. I also have three files that make up one program, and am also using Tkinter so i dont know if that makes it more difficult.
is it possible to run a task scheduler in bottle web framework
11,097,542
0
2
663
0
python,task,scheduler,bottle
I would suggest threading; it allows the webserver to be unaffected by the scheduled tasks, which will either be in a queue or written into the code itself. (A short sketch follows this entry.)
0
1
0
0
2012-05-07T05:55:00.000
1
1.2
true
10,477,310
0
0
1
1
Does anyone have any examples on how to integrate a task scheduler in Bottle. Something like APScheduler or sched?
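A minimal sketch of the threading suggestion from the answer above: a daemon thread runs a fixed-interval loop next to the Bottle app. The task passed in is a placeholder, and APScheduler or sched could replace the hand-rolled loop.

    import threading
    import time
    import bottle

    def scheduler(task, interval=60):
        while True:
            task()                     # placeholder: whatever the app needs doing
            time.sleep(interval)

    @bottle.route('/')
    def index():
        return 'hello'

    if __name__ == '__main__':
        worker = threading.Thread(target=scheduler, args=(lambda: None,))
        worker.daemon = True           # let the process exit when bottle stops
        worker.start()
        bottle.run(host='localhost', port=8080)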
Installing data files into %APPDATA% with distutils on Windows 7 X64
13,338,067
1
6
594
0
python,distutils
You may use something like the common solution on *nix. Install the config files to %PROGRAMFILES%, and copy them to %APPDATA% when the program detects a particular user is running the program for the first time (which can be detected by checking that the config files are missing). (A short sketch follows this entry.)
0
1
0
0
2012-05-07T14:50:00.000
1
0.197375
false
10,484,184
1
0
0
1
My setup routine using distutils that works perfectly fine on Windows XP does not work for Windows 7. Here are the specifics: My package has a lot of config files which I install into %APPDATA%. On Windows I run setup.py with the bdist_wininst option to create an installer. On Win7 the installer is then executed as Administrator so that the module can be installed into %PROGRAMFILES%\Python etc. The installation does not report any errors but as you might have guessed the config files will not have been installed into %APPDATA% nor anywhere else (I searched for them). If I open a cmd as Administrator and install my package with the install option directly (setup.py install), everything works perfectly fine however. So, what am I missing here? Is this a limitation in the graphical installer or am I doing something wrong?
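A minimal sketch of the first-run copy suggested in the answer above; the app name and the default_config folder are illustrative, and %APPDATA% is read from the environment at run time rather than at install time.

    import os
    import shutil

    def ensure_user_config(app_name='MyApp'):          # app_name is illustrative
        user_dir = os.path.join(os.environ['APPDATA'], app_name)
        defaults = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                'default_config')      # files installed with the package
        if not os.path.isdir(user_dir):                # first run for this user
            shutil.copytree(defaults, user_dir)
        return user_dir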
Scripting on Mac to automate creation of project folder structure and Mercurial repository/subrepository cloning
10,492,057
2
0
640
0
python,macos,bash,mercurial,automator
The steps in your manual workflow can become the steps in a shell script quite literally. One difference is that you might use sed to modify files rather than open them in an editor. You'd use positional parameters or getopts to pass in your parameters. See man bash for information on those. Then come back and ask specific focused questions. -- Dennis Williamson (A rough Python equivalent is sketched after this entry.)
0
1
0
0
2012-05-07T22:30:00.000
1
1.2
true
10,490,018
0
0
0
1
I frequently (more than once a week) create new 'projects' on my development machine (Mac). I'm trying to streamline the workflow and automate what I do manually now: Create a project folder structure (I have a template directory structure which I copy) Clone (Mercurial) a boilerplate source code baseline into the newly created folder structure Clone another (Mercurial) repository as a sub-repository of the above baseline repository Modify .hgsub config file (Mercurial) to set up sub-repository Modify hgrc config file (Mercurial) to set up default push folder Do an initial commit (Mercurial) Create a series of aliases in my bash_profile What's the best (or easiest) way to script the above workflow? I'd like to pass a couple of parameters such as project name, and sub-repository name, etc. Is this something that I can easily to in a shell script? Automator? Python script? Thanks. Prembo.
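Since the question leaves the choice of shell script or Python open, here is a rough Python sketch of the same workflow; every path, URL, and repository location is a placeholder, and the .hgsub/hgrc edits are only indicated by a comment.

    import shutil
    import subprocess
    import sys

    TEMPLATE = '/path/to/template_dir'        # placeholder locations
    BOILERPLATE = '/path/to/boilerplate_repo'

    def new_project(name, subrepo_url):
        shutil.copytree(TEMPLATE, name)                       # 1. folder structure
        subprocess.check_call(['hg', 'clone', BOILERPLATE,    # 2. baseline clone
                               name + '/src'])
        subprocess.check_call(['hg', 'clone', subrepo_url,    # 3. sub-repository
                               name + '/src/lib'])
        # 4-5. edit .hgsub and hgrc here with plain file writes, then:
        subprocess.check_call(['hg', 'commit', '-A', '-m',    # 6. initial commit
                               'initial import'], cwd=name + '/src')

    if __name__ == '__main__':
        new_project(sys.argv[1], sys.argv[2])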
Opening and communicating with a subprocess
10,495,460
0
0
130
0
python,process
Have a look at how communicate is implemented. There are essentially two ways to do it: either use select() and be notified when you can read/write, or delegate the reads and writes, both of which can block, to separate threads. (A short sketch of the thread approach follows this entry.)
0
1
0
0
2012-05-08T08:46:00.000
3
0
false
10,495,393
1
0
0
2
I have a subprocess that I use. I must be able to asynchronously read and write to/from this process to it's respective stdout and stdin. How can I do this? I have looked at subprocess, but the communicate method waits for process termination (which is not what I want) and the subprocess.stdout.read method can block. The subprocess is not a Python script but can be edited if absolutely necessary. In total I will have around 15 of these subprocesses.
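A minimal sketch of the thread-delegation approach from the answer above: one reader thread per child absorbs the blocking readline() calls and feeds a queue that the main loop can poll without blocking.

    import subprocess
    import threading
    try:
        import queue            # Python 3
    except ImportError:
        import Queue as queue   # Python 2

    def start_child(cmd):
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE)
        out_q = queue.Queue()

        def pump():
            # the reader thread absorbs the blocking readline() calls
            for line in iter(proc.stdout.readline, b''):
                out_q.put(line)

        t = threading.Thread(target=pump)
        t.daemon = True
        t.start()
        return proc, out_q      # poll out_q.get_nowait() from the main loop

With around 15 children, one reader thread each is still cheap; proc.stdin.write stays available for sending input whenever needed.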
Opening and communicating with a subprocess
10,497,549
0
0
130
0
python,process
Have you considered using some queue or NOSQL DB for inter process communication? I would suggest you to use Redis, and read and write to different keys with your processes.
0
1
0
0
2012-05-08T08:46:00.000
3
0
false
10,495,393
1
0
0
2
I have a subprocess that I use. I must be able to asynchronously read and write to/from this process to it's respective stdout and stdin. How can I do this? I have looked at subprocess, but the communicate method waits for process termination (which is not what I want) and the subprocess.stdout.read method can block. The subprocess is not a Python script but can be edited if absolutely necessary. In total I will have around 15 of these subprocesses.
Execute java methodes via a Python or Perl client
10,519,519
0
3
114
0
java,python,perl,client
What you are talking about is Web Services. A corollary to this is XML and SOAP. In Java, Python, C#, C++... any language, you can create a Web Service that conforms to a standard pattern. Using NetBeans (Oracle's Java IDE) it is easy to create Java web services. Otherwise, use google to search for "web services tutorial [your programming language]".
0
1
0
1
2012-05-09T15:42:00.000
1
0
false
10,519,454
0
0
1
1
I have a java application as server (installed on Tomcat/Apache) and another java application as client. The client's task is to get some arguments and pass them to the server and call an adequate method on the server to be execute. I want to have the client in other languages like Perl, Python or TCL. So, I‌ need to know how to establish the communication and what is the communication structure. I'm not seeking for some codes but rather to know more about how to execute some java codes via other languages. I try to google it, but I mostly found the specific question/answer and not a tutorial or something like that. I wonder if I should search for a specific expression ? Do you know any tutorial or site whom explains such structures considering all aspects ? Many thanks Bye.
convert text post into xml file using python in google app engine
10,561,318
2
0
265
0
python,google-app-engine,app-inventor
The solution I found was to do nothing with Shival Wolf's code on App Engine, and to replace the 'postfile' block in the App Inventor code with a 'posttext' block with the text you want to send attached to it. Also change the filename variable to the name you want the file called, including the file type (i.e. .xml, .csv, .txt etc.). This appears to work for me.
0
1
1
0
2012-05-09T18:47:00.000
1
0.379949
false
10,522,243
0
0
1
1
newbie here in need of help. Using App Inventor amd App Engine. Learning python as i go along. Still early days. Need to post text data from AI to app engine, save in blob store as file (.xml), to be emailed as an attachment. Am able to send pictures using Shival Wolfs wolfwebmail2, and am sure with a bit of playing with the code i can change it to save the text post as a file in blob store to do the same operation. As stated newbie learning fast. Many thanks in advance for any pointers.
Can't Install Plone on VirtualBox Shared Folder because Python Fails to Install
10,527,263
3
2
419
0
python,debian,plone,virtualbox,zope
I've done some experimenting with VirtualBox recently. It's great, but I'm pretty sure that the shared folders are going to be limited to what's supported by the host operating system. Windows doesn't have anything like hard or symbolic links. I suspect that you're trying to do this so that you can edit instance files out of the shared directory with host tools. You might be able to pull this off by installing to non-shared files, then copying the critical parts (like the src directory if you're doing this for development purposes) to a host directory, and then (and only then) establishing that existing host directory as a shared directory. If you try it, let us know how it works!
0
1
0
1
2012-05-09T21:47:00.000
2
0.291313
false
10,524,564
1
0
0
2
I'm running Windows 7 64bit as my Host OS and Debian AMD64 as my Guest OS. On my Windows machine a folder called www is mounted on Debian under /home/me/www. I have no problem installing Plone on Debian (the guest OS) with the unified installer. However, when I try to change the default install path from /home/me/Plone to /home/me/www/plone, the installation always fails because Python fails to install. In the install.log it says ln: failed to create hard link 'python' => 'python2.6': Operation not permitted It looks like it might have something to do with access permissions, but I have tried to run the install script either using sudo or as a normal user, none of it helps. The script installs fine elsewhere, just not in the shared folder in Virtualbox. Any suggestions? More Information: I don't have a root account on Debian (testing, System Python version is 2.7) and always use sudo.
Can't Install Plone on VirtualBox Shared Folder because Python Fails to Install
10,543,865
0
2
419
0
python,debian,plone,virtualbox,zope
How about using Debian's mount --bind to mount specific Host folders to portions of the installation tree?
0
1
0
1
2012-05-09T21:47:00.000
2
0
false
10,524,564
1
0
0
2
I'm running Windows 7 64bit as my Host OS and Debian AMD64 as my Guest OS. On my Windows machine a folder called www is mounted on Debian under /home/me/www. I have no problem installing Plone on Debian (the guest OS) with the unified installer. However, when I try to change the default install path from /home/me/Plone to /home/me/www/plone, the installation always fails because Python fails to install. In the install.log it says ln: failed to create hard link 'python' => 'python2.6': Operation not permitted It looks like it might have something to do with access permissions, but I have tried to run the install script either using sudo or as a normal user, none of it helps. The script installs fine elsewhere, just not in the shared folder in Virtualbox. Any suggestions? More Information: I don't have a root account on Debian (testing, System Python version is 2.7) and always use sudo.
Problems with shutil.copytree
10,540,159
1
2
6,653
0
python,shutil,copytree
The error code is telling you that you don't have the permission to either read the source or write to the destination. Did the permission settings of your files and folders change?
0
1
0
0
2012-05-10T18:02:00.000
3
0.066568
false
10,539,724
0
0
0
1
I want to copy folder from my local server on my computer, using function shutil.copytree, i using macOS, but today i have problem,python always show me the same message,"[error 1] operation not permitted",but yesterday mine script work without problems with same folders... Can someone tell me whats is the problem, what could have happened?
Unable to write to file programmatically on linux server
10,550,999
1
1
1,569
0
python,linux,cgi,subprocess
Either the environment is different (maybe it's trying to write to the wrong dir) or, more likely, the cgi isn't running as the same user that you are logging in as. For example, it's fairly common for cgi scripts to be executed as "nobody" or "www" etc. You could try getting the cgi to write a file into /tmp. That should at least confirm the user the cgi is running as. (A short sketch follows this entry.)
0
1
0
1
2012-05-11T11:56:00.000
3
0.066568
false
10,550,858
0
0
0
1
I have a python CGI script which takes form input x andy (integers) and passes it to a C++ executable using subprocess in which the program writes the sum of the two values to a text file. The code works fine on my local machine. However, after much testing ,I found that whenever I run this program on my server (in /var/www) and attempt to write the file some kind of error occurs because I get the "Internal Server Error" page. The server is not mine and so I do not actually have sudo privileges. But from putty, I can manually run the executable, and it indeed writes the file. My guess is I have to somehow run the executable from the python script with some amount of permission, but I'm not sure. I would appreciate any suggestions! EDIT: @gnibbler: Interesting, I was able to write the file to the /tmp directory with no problem. So I think your suggestions are correct and the server simply won't allow me to write when calling the script from the browser. Also, I cannot access the directory with the error logs, so I will have try to get permission for that.
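A minimal sketch of the /tmp check suggested in the answer above; the file name is illustrative. Whatever account the web server uses for CGI will show up in the file.

    import getpass
    import os

    # /tmp is world-writable, so this works regardless of which account runs the CGI
    with open('/tmp/cgi_identity.txt', 'w') as marker:
        marker.write('uid=%d user=%s\n' % (os.getuid(), getpass.getuser()))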
I am getting incompatitble debugger version in eclipse while using pydev
10,644,024
0
0
113
0
python,eclipse,pydev
Maybe you have an old version of the debugger in your PYTHONPATH? Please check your interpreter configuration and see if you didn't add it there by accident.
0
1
0
0
2012-05-12T15:36:00.000
1
0
false
10,565,079
0
0
1
1
I get an incompatible debugger version in Eclipse while using PyDev. My default port of the PyDev debugger is 5678.
How to find the error on web2py and GAE SDK when I got Internal error - Ticket issued: unknow
10,566,246
1
2
554
0
python,google-app-engine,web2py
You have to read the logs on GAE dashboard to figure out the Python exception it is throwing
0
1
0
0
2012-05-12T17:25:00.000
1
1.2
true
10,565,929
0
0
1
1
I'm working with Web2py and Google App Engine SDK. I have an action that works using the WSGI version, but fails when running on SDK. Inside this action, there are no imports specific from GAE libraries... but I can't figure out what is wrong cause I only got the message: Internal error Ticket issued: unknown And there is no ticket showing the error. How can I debug web2py when working with GAE and specifically in this case?
What is best monitoring tool for Tornado as Async container?
10,576,242
3
3
1,541
0
python,monitoring,tornado
The question is a bit obscure, but the assumption here is that you are asking what web application performance monitoring tools exist. In this case you are asking for one that will work with the Tornado ASYNC API vs the WSGI container that sits on top of the Tornado web server. You mention 'RPM Lite', which I interpret as being the New Relic web application performance service. For that, as you found, only WSGI applications running on Tornado are currently supported and not the ASYNC API of Tornado. Some investigation of support for ASYNC Python web frameworks has been done, but the Tornado API wasn't used as a test case for that, so it is not known when/if the ASYNC API may be supported. As for alternatives, it depends on what you want to get out of this, and that is where you need to expand on the question. If you are more after tracking web traffic then you can always use Google Analytics or tools that extract details from web server logs. If you are specifically after tools which can instrument the actual web application and tell you what is going on inside of it, including the time taken in the database, web externals, etc., like New Relic does, then there aren't currently any other options that I know of for ASYNC systems and in particular the Tornado ASYNC API.
0
1
0
0
2012-05-13T12:18:00.000
1
1.2
true
10,571,655
0
0
0
1
Tornado web can be used with a WSGI or ASYNC container. There are numerous solutions for the WSGI container. The most appealing solution by now is RPM Lite, but it requires Tornado to run in wsgi mode, which I do not want. I need a solution that will fully monitor a Tornado Async application. EDIT: Thanks @Graham for reading between the lines; I've been expecting people that understand the topic would have an answer.
Where are the logs from BackgroundThreads on App Engine?
10,573,307
3
2
354
0
python,google-app-engine,backend,background-thread
There is a combobox in the top left corner of the versions/backends of your application switch to the backend there and you will see the backend logs.
0
1
0
0
2012-05-13T16:13:00.000
1
0.53705
false
10,573,217
0
0
1
1
I'm writing an app that writes log entries from a BackgroundThread object on a backend instance. My problem is that I don't know how to access the logs. The docs say, "A background thread's os.environ and logging entries are independent of those of the spawning thread," and indeed, the log entries don't show up with the backend instance's entries on the admin console. But the admin console doesn't offer an option for showing the background threads. appcfg request_logs doesn't seem to be the answer either. Does anybody know?
Localhost is not refreshing/reseting
10,575,238
2
0
437
1
python,google-app-engine
Those warnings shouldn't prevent you from seeing new 'content,' they simply mean that you are missing some libraries necessary to run local versions of CloudSQL (MySQL) and the Images API. First to do is try to clear your browser cache. What changes did you make to your Hello World app?
0
1
0
0
2012-05-13T20:57:00.000
3
0.132549
false
10,575,184
0
0
1
3
I am absolute beginner using google app engine with python 2.7. I was successful with creating helloworld app, but then any changes I do to the original app doesn't show in localhost:8080. Is there a way to reset/refresh the localhost. I tried to create new projects/directories with different content but my localhost constantly shows the old "Hello world!" I get the following in the log window: WARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\users\tomek\appdata\local\temp\dev_appserver.datastore WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging Please help...
Localhost is not refreshing/reseting
10,593,822
0
0
437
1
python,google-app-engine
Press CTRL-F5 in your browser, while on the page. Forces a cache refresh.
0
1
0
0
2012-05-13T20:57:00.000
3
0
false
10,575,184
0
0
1
3
I am absolute beginner using google app engine with python 2.7. I was successful with creating helloworld app, but then any changes I do to the original app doesn't show in localhost:8080. Is there a way to reset/refresh the localhost. I tried to create new projects/directories with different content but my localhost constantly shows the old "Hello world!" I get the following in the log window: WARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\users\tomek\appdata\local\temp\dev_appserver.datastore WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging Please help...
Localhost is not refreshing/reseting
41,388,817
0
0
437
1
python,google-app-engine
You can try opening up the DOM reader (Mac: alt+command+i, Windows: shift+control+i) the reload the page. It's weird, but it works for me.
0
1
0
0
2012-05-13T20:57:00.000
3
0
false
10,575,184
0
0
1
3
I am absolute beginner using google app engine with python 2.7. I was successful with creating helloworld app, but then any changes I do to the original app doesn't show in localhost:8080. Is there a way to reset/refresh the localhost. I tried to create new projects/directories with different content but my localhost constantly shows the old "Hello world!" I get the following in the log window: WARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\users\tomek\appdata\local\temp\dev_appserver.datastore WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging Please help...
How to use mysql in gevent based programs in python?
12,335,813
1
4
1,197
1
python,mysql,gevent
Postgres may be better suited due to its asynchronous capabilities
0
1
0
0
2012-05-14T09:41:00.000
2
1.2
true
10,580,835
0
0
1
2
I have found that ultramysql meets my requirement. But it has no document, and no windows binary package. I have a program heavy on internet downloads and mysql inserts. So I use gevent to solve the multi-download-tasks problem. After I downloaded the web pages, and parsed the web pages, I get to insert the data into mysql. Is monkey.patch_all() make mysql operations async? Can anyone show me a correct way to go.
How to use mysql in gevent based programs in python?
13,006,283
1
4
1,197
1
python,mysql,gevent
I think one solution is to use pymysql. Since pymysql uses Python sockets, it should work with gevent after the monkey patch. (A short sketch follows this entry.)
0
1
0
0
2012-05-14T09:41:00.000
2
0.099668
false
10,580,835
0
0
1
2
I have found that ultramysql meets my requirement. But it has no document, and no windows binary package. I have a program heavy on internet downloads and mysql inserts. So I use gevent to solve the multi-download-tasks problem. After I downloaded the web pages, and parsed the web pages, I get to insert the data into mysql. Is monkey.patch_all() make mysql operations async? Can anyone show me a correct way to go.
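A hedged sketch of the pymysql suggestion from the answer above: monkey-patch before anything opens a socket, then let each greenlet use pymysql's pure-Python client. The credentials, table, and sample row are placeholders, and install_as_MySQLdb() is only needed if existing MySQLdb-style code should pick up pymysql.

    from gevent import monkey
    monkey.patch_all()               # patch sockets before anything creates one

    import gevent
    import pymysql
    pymysql.install_as_MySQLdb()     # optional: MySQLdb-style imports now get pymysql

    def insert_page(url, body):
        conn = pymysql.connect(host='localhost', user='user',
                               password='secret', db='pages')   # placeholder credentials
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO pages (url, body) VALUES (%s, %s)",
                            (url, body))
            conn.commit()
        finally:
            conn.close()

    jobs = [gevent.spawn(insert_page, u, b)
            for u, b in [('http://example.com', '...')]]   # sample work items
    gevent.joinall(jobs)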
Python Versions on Mac
10,589,656
8
12
35,177
0
python,macos,version,homebrew
Lion uses Python 2.7 by default; 2.5 and 2.6 are also available. /Library/Frameworks/Python.framework does not exist on a stock install of Lion. My guess is that you've ended up with this by installing some application. The default Python install is primarily installed in /System/Library/Frameworks/Python.framework, although some components are located elsewhere. Yes - you can brew install python@2 to get a Python 2.7 separate from the system version, or brew install python to get Python 3.7. Both will install to /usr/local, like any other Homebrew recipe.
0
1
0
0
2012-05-14T19:09:00.000
2
1.2
true
10,589,590
1
0
0
1
I'm working on Mac OS 10.7 (Lion) and I have some questions: What is the pre-installed version of Python on Lion? I've been working on this computer for some time now, and I've installed lots of software in order to do college work, many times without knowing what I was really doing. The thing is: I now have, in /Library/Frameworks/Python.framework/Versions/, a folder called "7.0". I'm pretty sure there is no Python version 7. Is this folder native, or a third-party program installation? Can I delete it? (It's using 1 GB on disk.) Where is the original Python that comes with Mac OS located? I've chosen Homebrew as my package manager; is there an easy way to manage Python versions with it?
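As a small aside to the record above: when several Pythons are installed (Apple's under /System/Library, Homebrew's under /usr/local), a quick way to see which one a script is actually running under is to print the interpreter details. This is a generic check, not something taken from the original answer.

# --- illustrative sketch: identify the running interpreter ---
import sys
import platform

print("version   :", platform.python_version())
print("executable:", sys.executable)  # e.g. a /usr/local path for Homebrew installs
print("prefix    :", sys.prefix)
# --- end of sketch ---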
How do I convert a Python program to a runnable .exe Windows program?
11,889,192
1
44
185,806
0
python,exe
For this you have two choices: (1) downgrade to Python 2.6, which is generally undesirable because it is backtracking and may break a small portion of your scripts; or (2) use some form of exe converter. I recommend PyInstaller, as it seems to give the best results (a hedged example follows this record).
0
1
0
0
2012-05-15T01:01:00.000
9
0.022219
false
10,592,913
1
0
0
1
I am looking for a way to convert a Python program to a .exe file WITHOUT using py2exe. py2exe says it requires Python 2.6, which is outdated. Is this possible, so that I can distribute my Python program without the end user having to install Python?
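To illustrate the PyInstaller suggestion above: it is usually run from the command line (for example, pyinstaller --onefile yourscript.py), but it also exposes a programmatic entry point. The script name and options below are placeholders; check PyInstaller's documentation for the exact flags your version supports.

# --- illustrative sketch: one-file build via PyInstaller's Python entry point ---
import PyInstaller.__main__

PyInstaller.__main__.run([
    "yourscript.py",  # placeholder: the program you want to distribute
    "--onefile",      # bundle everything into a single executable
    "--noconsole",    # hide the console window (useful for GUI programs)
])
# --- end of sketch ---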
Bundling Python program with a native binary
10,595,836
2
2
126
0
python
Using PyInstaller, for example, you could add the executable as a resource file; it extracts the resources to a temp folder at run time, and from there you can access your resource files however you want (a rough sketch follows this record).
0
1
0
0
2012-05-15T05:18:00.000
1
0.379949
false
10,594,517
1
0
0
1
Simple question: I have a Python program which launches an executable. The launcher uses the optparse module to set up the run and launches the binary through the shell. Is it possible to bundle the launcher and the binary into a single package? The platform of interest is Linux.
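A rough sketch of the resource-file approach from the answer above, assuming PyInstaller's one-file mode: bundled files are unpacked to a temporary directory exposed as sys._MEIPASS at run time. The binary name "mytool" and its arguments are hypothetical.

# --- illustrative sketch: locate and run a binary bundled by PyInstaller ---
import os
import subprocess
import sys


def resource_path(name):
    # When frozen in one-file mode, PyInstaller unpacks data files to a
    # temp dir and records it in sys._MEIPASS; otherwise use the script dir.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, name)


binary = resource_path("mytool")  # hypothetical bundled executable
subprocess.check_call([binary, "--input", "data.txt"])  # hypothetical arguments
# --- end of sketch ---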
install python and make in cygwin
10,607,864
6
20
37,080
0
python,windows,makefile,cygwin,installation
@spacediver is right on. Run cygwin's setup.exe again and when you get to the packages screen make sure you select make and python (and any other libs/apps you may need - perhaps gcc or g++).
0
1
0
0
2012-05-15T08:55:00.000
6
1
false
10,597,284
1
0
0
5
I have installed the Cygwin Terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library. Does someone know how to install these components in Cygwin?
install python and make in cygwin
10,597,334
12
20
37,080
0
python,windows,makefile,cygwin,installation
Look into Cygwin's native package manager, under the Devel category. You should find make and python there.
0
1
0
0
2012-05-15T08:55:00.000
6
1
false
10,597,284
1
0
0
5
I have installed the Cygwin Terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library. Does someone know how to install these components in Cygwin?
install python and make in cygwin
19,168,003
7
20
37,080
0
python,windows,makefile,cygwin,installation
After running into this problem myself, I was overlooking all of the relevant answers saying to check setup.exe again. This was the solution for me; there are a few specific things to check: (1) Check /bin for "make.exe"; if it's not there, you have not installed it correctly. (2) Run setup.exe again. Don't be afraid, as new package installs are appended to your installation and do not overwrite it. (3) In setup.exe, make sure you run the install from the Internet and NOT from your local folder. This was where I was running into problems. (4) Search for "make" and make sure you select to Install it; do not leave it as "Default".
0
1
0
0
2012-05-15T08:55:00.000
6
1
false
10,597,284
1
0
0
5
I have installed the Cygwin Terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library. Does someone know how to install these components in Cygwin?
install python and make in cygwin
58,692,435
0
20
37,080
0
python,windows,makefile,cygwin,installation
In my case, this happened because Python was not installed correctly. The python.exe referenced in the shell could not find the file because it belonged to a different system (not the Cygwin one). Please check that the Cygwin Python is installed properly.
0
1
0
0
2012-05-15T08:55:00.000
6
0
false
10,597,284
1
0
0
5
I have installed the Cygwin Terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library. Does someone know how to install these components in Cygwin?
install python and make in cygwin
43,129,128
5
20
37,080
0
python,windows,makefile,cygwin,installation
Here is a command-line way to install Python in Cygwin (three separate commands): wget rawgit.com/transcode-open/apt-cyg/master/apt-cyg ; install apt-cyg /bin ; apt-cyg install python
0
1
0
0
2012-05-15T08:55:00.000
6
0.16514
false
10,597,284
1
0
0
5
I have installed the Cygwin Terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library. Does someone know how to install these components in Cygwin?